\section*{Abstract} {\bf In the DeeMe experiment, approximately $70\ \mathrm{GHz/mm^{2}}$ of prompt charged particles will hit the multi-wire proportional chambers (MWPCs) between signal read-out periods. To avoid the space charge effect, we developed fast HV-switching MWPCs in order to control the gas gain dynamically. Four such MWPCs have been manufactured. The readout-amplifier circuit has been slightly modified to further improve the efficiency of the detector, and we have also started to investigate alternative gas mixtures. In this article, the development of the detectors and the results of performance tests are presented. } \vspace{-20pt} \section{Introduction} \label{sec:intro} \vspace{-20pt} \begin{center} \begin{figure}[b] \begin{tabular}{ccc} \begin{minipage}[t]{0.28\hsize} \centering \includegraphics[width=4.7cm,keepaspectratio,clip]{jparcmlf.pdf} \caption{Photograph of the accelerators and Materials and Life Science Experimental Facility (MLF) at J-PARC.} \label{fig:jparcmlf} \end{minipage} & \begin{minipage}[t]{0.36\hsize} \centering \includegraphics[width=5.8cm,keepaspectratio,clip]{hlinesimu.pdf} \caption{Simulated secondary beamline, H Line, in J-PARC MLF.} \label{fig:hlinesimu} \end{minipage} & \begin{minipage}[t]{0.28\hsize} \centering \includegraphics[width=4cm,keepaspectratio,clip]{mwpcphoto.pdf} \caption{Photograph of one of the four multi-wire proportional chambers (MWPCs).} \label{fig:mwpcphoto} \end{minipage} \end{tabular} \end{figure} \end{center} The DeeMe experiment is planned to search for muon-to-electron ($\mu$-$e$) conversion in the nuclear field at the J-PARC Materials and Life Science Experimental Facility (MLF) H Line (see Figs. \ref{fig:jparcmlf} and \ref{fig:hlinesimu}).
Our goal is to reach a single event sensitivity of $< 1\times10^{-13}$ for a graphite target or $< 2\times10^{-14}$ for a silicon carbide target while operating the Rapid Cycling Synchrotron (RCS) at a power of $1\ \mathrm{MW}$ for $2\times10^{7}\ \mathrm{sec/year}$. This will improve the sensitivity by one to two orders of magnitude over those achieved so far. \\ \vspace{-20pt} \clearpage \begin{center} \begin{figure}[t] \begin{tabular}{cc} \begin{minipage}[t]{0.4\hsize} \centering \includegraphics[width=6cm,keepaspectratio,clip]{mwpcstructure.pdf} \caption{Structure of the MWPC.} \label{fig:mwpcstructure} \end{minipage} & \begin{minipage}[t]{0.55\hsize} \centering \includegraphics[width=8.2cm,keepaspectratio,clip]{hvswitchingcircuit.pdf} \caption{The circuit of the high-voltage switching module and its simulation result.} \label{fig:hvswitchingcircuit} \end{minipage} \end{tabular} \end{figure} \end{center} \vspace{-40pt} \mbox{}\\ \indent $\mu$-$e$ conversion is the coherent neutrinoless conversion of a muon into an electron in the nuclear field, $\mu$N$\rightarrow e$N. It is one of the charged lepton flavor violating (cLFV) processes, which are forbidden in the Standard Model (SM). Some theoretical models beyond the SM, however, predict observable branching fractions, so an observation of any cLFV process would establish the existence of new physics. The energy of the signal electron is approximately $105\ \mathrm{MeV}$, corresponding to the muon mass converted into energy. \\ \indent In the DeeMe experiment, a combined production and stopping target will be used to produce muonic atoms; electrons and other charged particles from the target are transported to a spectrometer consisting of a magnet and four multi-wire proportional chambers (MWPCs), which measures their momenta. One of the four MWPCs is shown in Fig. \ref{fig:mwpcphoto}.
\vspace{-5pt} \section{Development of the Detectors} In the experiment, approximately $70\ \mathrm{GHz/mm^{2}}$ or $2\times10^{8}\ \mathrm{charged\ particles/pulse}$ of prompt burst will hit the MWPCs between signal read-out periods. To avoid the space charge effect, we developed fast high-voltage-switching MWPCs in order to control the gas gain dynamically. \subsection{Devices} Figure \ref{fig:mwpcstructure} shows the structure of the MWPC. The MWPC is used with cathode readout. It has anode wires and potential wires stretched alternately. A DC high voltage is applied to the anode wires. To the potential wires, $0\ \mathrm{V}$ is applied in a time window of a few microseconds in which we search for a signal of the $\mu$-$e$ conversion, or a voltage as high as the voltage on the anode wires is applied to reduce the gas gain to the order of 1 \cite{1}.\\ \indent We put a high-voltage switching module between the HV supply and the potential wires. Figure \ref{fig:hvswitchingcircuit} shows the circuit of the module and its simulation result. It has two MOSFETs for outputting high voltage or $0\ \mathrm{V}$. \subsection{Current Status} With a gas mixture of argon (35\%) and ethane (65\%) and $1630\ \mathrm{V}$ applied to the MWPC, an obtained waveform is shown in Fig. \ref{fig:wf}. In the period around $-1(+9)\ \mu\mathrm{s}$ the voltage on the potential wires is decreasing (increasing), resulting in negative (positive) saturation on the cathode strip readout. In between, the voltage on the potential wires is $0\ \mathrm{V}$, and the detector works with a gas gain of approximately $4.5\times10^{4}$, but an oscillation is present. The shape of this noise is, however, reproducible, so a signal can be extracted by subtracting the noise waveform. \\ \indent With this configuration, we conducted experiments at the J-PARC MLF D2 Area in March (three days) and June (five days), 2017 (see Fig. \ref{fig:efficibefore}).
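The noise-subtraction step mentioned above can be sketched numerically. The following is a minimal illustration with invented waveform parameters (sampling, oscillation period, pulse height are hypothetical, not DeeMe DAQ data): a reproducible oscillation template is estimated from signal-free events and subtracted from a signal waveform.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1000)

# Hypothetical common-mode oscillation on the cathode-strip readout.
noise_template = 5.0 * np.sin(2 * np.pi * t / 80.0)

# The oscillation is assumed reproducible, so its template is estimated
# by averaging many signal-free (pedestal) events.
pedestal_events = noise_template + rng.normal(0.0, 0.3, (200, 1000))
template = pedestal_events.mean(axis=0)

# A signal event: the same oscillation plus a narrow negative pulse.
signal = noise_template.copy()
signal[400:410] -= 30.0
signal += rng.normal(0.0, 0.3, 1000)

# Subtracting the template leaves the pulse on a flat baseline,
# where a simple threshold suffices to locate the hit.
residual = signal - template
hit_samples = np.where(residual < -10.0)[0]
print(hit_samples.min(), hit_samples.max())
```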
The purpose was to measure the momenta of electrons from muon Decay in Orbit $\mu^{-} \rightarrow e^{-} \nu_{\mu} \overline{\nu_{e}}$ (DIO), one of the main backgrounds in the DeeMe experiment, with momenta of about $50\ \mathrm{MeV}/c$. The hit efficiencies of the x-axis (horizontal direction) readout of the MWPCs were analyzed to be $\simeq 90\%$ (WC0 and WC1 with $0.75\ \mathrm{mm}$ wire spacing and $1630\ \mathrm{V}$ applied) and $\simeq 60\%$ (WC2 and WC3 with $0.7\ \mathrm{mm}$ wire spacing and $1600\ \mathrm{V}$ applied). Figure \ref{fig:efficibefore} (right) illustrates the efficiency of the second MWPC as a function of time. It fluctuates in time as the shape of the output waveform oscillates. \\ \indent To avoid loss of efficiency from negative saturation, we tried two things: (1) changing the filling gas and applying a lower voltage, and (2) increasing the dynamic range of the readout amplifiers. \begin{center} \begin{figure}[t] \begin{tabular}{cc} \begin{minipage}[t]{0.4\hsize} \centering \includegraphics[width=6.3cm,keepaspectratio,clip]{wf.pdf} \caption{Output waveform of a cathode-strip signal after amplification.} \label{fig:wf} \end{minipage} & \begin{minipage}[t]{0.55\hsize} \centering \includegraphics[width=8.2cm,keepaspectratio,clip]{efficibefore.pdf} \caption{Setup (left) and hit efficiency of the second MWPC from the upstream (right) in the experiment at the J-PARC MLF D2 Area.} \label{fig:efficibefore} \end{minipage} \end{tabular} \end{figure} \end{center} \vspace{-20pt} \subsubsection{Gas Mixture Study} Simulations indicate that the voltage can be lowered to $1510\ \mathrm{V}$ if the gas mixture is changed to argon (80\%) isobutane (20\%) for the MWPC with a wire spacing of $0.75\ \mathrm{mm}$. \\ \indent The stability of the MWPC depends on the tolerance to discharge between the two kinds of wires. To check this, two wires were placed in a small chamber and the voltage at which discharge occurs was measured \cite{2}; it was found to be $\simeq 1950\ \mathrm{V}$.
This means we have a margin of $\simeq 400\ \mathrm{V}$ against discharge when the applied voltage is chosen to be $1510\ \mathrm{V}$. \subsubsection{Amplifier Improvement} A Radeka-type two-stage amplifier is adopted at present; each stage consists of a common-base input and two emitter followers \cite{3}. By changing the resistor values of the second stage, we increased the negative range of the amplifier from $\simeq 120\ \mathrm{mV}$ to $\simeq 280\ \mathrm{mV}$. \clearpage \section{Results of the Latest Beam Test} We performed a beam test in February 2018 at the Institute for Integrated Radiation and Nuclear Science, Kyoto University. For single electrons, the hit efficiency was measured to be about 98\% from $300\ \mathrm{ns}$ after the MWPC starts operating (see Fig. \ref{fig:efficiafter}). However, we observed random spikes in the waveform when beams with an intensity equivalent to the prompt burst hit the MWPC, as shown in Fig. \ref{fig:promptburst}. Electrons emitted from the cathode planes by ions might be a cause of these pulses. We plan to mix freon with the filling gas to absorb these electrons between the cathode and anode planes.
\begin{center} \begin{figure}[t] \begin{tabular}{cc} \begin{minipage}[t]{0.55\hsize} \centering \includegraphics[width=9cm,keepaspectratio,clip]{efficiafter.pdf} \caption{Setup (left) and hit efficiency of the MWPC (right) obtained from the test at Institute for Integrated Radiation and Nuclear Science, Kyoto University.} \label{fig:efficiafter} \end{minipage} & \begin{minipage}[t]{0.35\hsize} \centering \includegraphics[width=5.4cm,keepaspectratio,clip]{promptburst.pdf} \caption{Waveform obtained when electron beams with the intensity equivalent to the prompt burst hit the MWPC.} \label{fig:promptburst} \end{minipage} \end{tabular} \end{figure} \end{center} \vspace{-50pt} \mbox{}\\ \section{Conclusion} The DeeMe experiment aims to search for $\mu$-$e$ conversion with a single event sensitivity of $1\times10^{-13}$ for a graphite target down to $2\times10^{-14}$ for a silicon carbide target. The signal is an electron with a monochromatic energy of $105\ \mathrm{MeV}$, and we will search for it using a magnetic spectrometer, which consists of a magnet and four MWPCs. \\ \indent By optimizing the gas mixture filling the MWPCs and increasing the negative range of the readout amplifiers, the hit efficiency has been improved for single electrons. To absorb the random-spike signals observed when the prompt burst hits the MWPC, the gas mixture needs further optimization; we plan to mix freon with the filling gas.
\section{Introduction} Many colloidal dispersions, such as natural clays, and macromolecular systems consist of oblate or disk-shaped mesogens whose intrinsic ability to form liquid crystalline order gives rise to unique rheological and optical properties. Despite their abundance in nature, the statistical mechanics of fluids containing anisometric particles in general (and oblate ones in particular) has received far less attention than that of their spherical counterparts. The possibility of a first-order disorder-order transition from an isotropic to a discotic nematic fluid of platelets was first established theoretically by Onsager \cite{Onsager} in the late 1940s. Although originally devised for rod-like particles in solution, his theory also makes qualitative predictions for plate-like particles based on the central idea that orientation-dependent steep repulsive interactions alone are responsible for stabilising nematic order. The intrinsic difficulty with platelets, as pointed out by Onsager in his original paper, is that the contribution of third and higher body correlations can no longer be neglected, as it can be for thin rod-like species. Consequently, the original second-virial treatment is expected to give qualitative results at best \cite{Forsyth77}. In a pioneering simulation study, Eppenga and Frenkel \cite{Eppengafrenkel} provided numerical evidence for an isotropic-nematic transition in systems of infinitely thin circular disks and found the transition densities to be much smaller and the first-order nature of the transition to be much weaker than predicted by the Onsager theory. Owing to the simplicity of the model, the discrepancy can be attributed entirely to the neglect of third and higher body virial terms in the theory. At high densities, the nematic phase of disk-shaped particles becomes unstable with respect to columnar order, characterised by a planar (2D) hexagonal arrangement of columns each with a liquid internal structure.
Similar to the formation of the nematic phase, the stability of the columnar phase can be explained solely from entropic grounds \cite{frenkellc, Veerman}. Although the system loses configurational entropy because of the partial crystallisation associated with columnar order, this loss is more than offset by a simultaneous increase in translational entropy, i.e. the average available space for each particle increases. Attempts to improve Onsager's second virial theory have met with variable success (see \olcite{harnauplates} for a recent overview). These approaches usually involve integral equation or geometric density functional methods whose applicability is often restricted to isotropic fluids \cite{costahansen,cheung}, models with parallel or restricted orientations \cite{harnaurowan} or particles with vanishing thickness \cite{esztermann,harnaucosta}. A recent generalisation of the fundamental measure approach towards arbitrarily shaped hard convex bodies provides a potentially promising avenue to address more realistic models for liquid crystal ordering \cite{goosmecke}. The influence of higher-body correlations can only be assessed numerically via computer simulation \cite{mastersvirial}. For cut spheres, the virial coefficients have been quantified up to the 8th order both in the fluid isotropic \cite{youvlasov} and nematic state \cite{duncanmasters}. Despite the large number of virial terms, the convergence of the virial expansion of the free energy was found to be insufficient to provide an accurate description of dense nematic and columnar states. Alternatively, Scaled Particle Theory (SPT) can be used to incorporate higher virial terms in an indirect manner. Whilst SPT produces reasonable results for infinitely thin disks \cite{savith}, its extension to finite aspect ratios leads to poor predictions for the isotropic-nematic transition densities \cite{unp}. 
A simpler strategy to account for higher-body particle correlations in the isotropic and nematic fluid state is provided by the so-called Parsons-Lee decoupling approximation \cite{parsons,Lee87,Lee89}. The basic assumption of this approach is that the pair correlation function $g(r)$ of a fluid of hard anisometric bodies, which depends rather intractably on the centre-of-mass distance vector $\Delta {\bf r} $ and orientational unit vectors $ {\bf \hat{u}} $ and $ {\bf \hat{u}} ^{\prime}$, can be mapped onto that of a hard sphere fluid with the same packing fraction via: \begin{equation} g ( \Delta {\bf r} / \sigma_{0} ; {\bf \hat{u}} , {\bf \hat{u}} ^{\prime} ) = g_{\text{HS}} ( \Delta r / \sigma ( \Delta \hat{ {\bf r} } ; {\bf \hat{u}} , {\bf \hat{u}} ^{\prime}) ) \label{map} \end{equation} with $\sigma_{0}$ some reference distance (e.g. particle diameter) and $\sigma ( \Delta \hat{ {\bf r} } ; {\bf \hat{u}} , {\bf \hat{u}} ^{\prime}) $ the distance of closest approach of a pair of hard anisometric bodies at a given set of orientation unit vectors. In the case of hard spheres the distance of closest approach is simply the hard sphere diameter $\sigma_{0}$. \eq{map} provides a natural route of decoupling the translational and orientational degrees of freedom. Starting from the generalised virial equation it is possible to derive an expression for the excess free energy which is similar to the one from Onsager with the particle density replaced by a {\em rescaled density} involving the hard sphere excess free energy. Whilst the decoupling approximation is known to work well for short hard spherocylinders \cite{mcgrother}, its merits for plate-like cylinders have not been investigated so far; we intend to do so in the present paper. As for the columnar state, the high degree of positional and orientational order can be exploited to devise simple free-volume approaches inspired by the Lennard-Jones-Devonshire (LJD) cell model \cite{lennardjones,wood,salsburg}.
This was first done by Taylor and Hentschke \cite{taylor,hentschke} for the high-density liquid crystal states of parallel cylinders which do not exhibit an isotropic phase. The approach was further developed and modified in \olcite{wensinkcol}, showing that a quantitatively reliable equation of state for the columnar phase can be obtained by accounting for the orientational entropy of the particles, neglected in the original version. In this paper, we will combine the Onsager-Parsons approach for the isotropic and nematic fluid state with the modified LJD cell theory for the columnar phase to trace the complete phase diagram for freely rotating hard cylinders as a function of thickness-to-diameter ratio. The theoretical predictions will be tested against simulation results for hard cut-spheres. In view of the inherent difficulty of capturing multi-particle correlations in dense plate fluids, the overall performance of the present theory must be deemed satisfactory. Although quantitative agreement with simulation data is generally lacking, the theory does manage to reproduce the generic features of the phase diagram and provides a simple theoretical underpinning for the relative stability of nematic and columnar order as a function of the plate aspect ratio. This paper is organised as follows. Sections II and III are devoted to a detailed exposition of the Onsager-Parsons and modified LJD theories, respectively. The phase diagram emerging from the present theory will be presented and discussed in Section IV. In Section V, algebraic forms of the nematic and columnar free energy are given which allow us to obtain universal scaling results for the nematic-columnar transition. Finally, some concluding remarks are formulated in Section VI. \section{Onsager-Parsons theory for the isotropic-nematic transition} Let us consider a system of hard cylinders with length $L$ and diameter $D$ in a macroscopic volume $V$.
For the disk-like cylinders we consider here, the {\em aspect ratio} $L/D$ is much smaller than unity. The particle concentration is expressed in dimensionless form via $c=ND^3/V$. Following \olcite{wensinkbidikte}, the Helmholtz free energy within the Onsager-Parsons-Lee approach takes the following form: \begin{equation} \frac{\beta F}{N} = \ln \tilde{{\mathcal V }} c - 1 + \langle \ln 4 \pi f( {\bf \hat{u}} ) \rangle + \frac{cG_{P}(\phi)}{2} \frac { \langle \langle V_{\text{excl}}(\gamma) \rangle \rangle }{D^3} \label{free} \end{equation} with $\beta^{-1}=k_{B}T$ the thermal energy ($k_{B}$ represents Boltzmann's constant and $T$ temperature) and $\tilde{{\mathcal V}}= {\mathcal V}/D^{3}$ the dimensionless thermal volume of a platelet including contributions from the rotational momenta. The brackets $\langle (\cdot ) \rangle = \int d {\bf \hat{u}} f( {\bf \hat{u}} ) ( \cdot )$, $\langle \langle (\cdot ) \rangle \rangle = \iint d {\bf \hat{u}} d {\bf \hat{u}}^{\prime} f( {\bf \hat{u}} ) f( {\bf \hat{u}}^{\prime} ) ( \cdot )$ denote single and double orientational averages involving some unknown distribution $f( {\bf \hat{u}} )$ of the orientation unit vector $ {\bf \hat{u}} $ of the plate normal which is normalised according to $\int d {\bf \hat{u}} f( {\bf \hat{u}} ) = 1$. Several entropic contributions can be distinguished in \eq{free}. The first two are {\em exact} and denote the ideal translational and orientational entropy, respectively. The last term represents the excess translational or packing entropy which accounts for the particle-particle interactions on the approximate level of pair-interactions. The key quantity here is the excluded volume $V_{\text{excl}}$ between two plate-like cylinders at fixed inter-particle angle $\gamma$ with $\sin \gamma = | {\bf \hat{u}} \times {\bf \hat{u}}^{\prime} |$.
This quantity has been calculated in closed form by Onsager \cite{Onsager} and reads: \begin{eqnarray} \frac{ V_{\text{excl}}( \gamma )}{D^3} &=& \frac{\pi}{2} |\sin \gamma| + \frac{L}{D} \left ( \frac{\pi}{2} + 2 E (\sin \gamma) + \frac{\pi}{2} \cos \gamma \right ) \nonumber \\ && + 2 \left ( \frac{L}{D} \right )^{2} | \sin \gamma | \label{vexcl} \end{eqnarray} with $E(x)$ the complete elliptic integral of the second kind. Although the structure of \eq{free} is similar to the classic Onsager second-virial free energy, the effect of higher order virial terms is incorporated via the scaling factor \begin{equation} G_{P}=\frac{1-\frac{3}{4}\phi}{(1-\phi)^2} \end{equation} which depends on the total plate volume fraction $\phi = c (\pi/4)L / D$. The rescaled density stems from the Parsons-Lee method \cite{parsons,Lee87,Lee89,wensinkbidikte} which involves a mapping of the plate pair distribution function onto that of a hard sphere system via the virial equation. The free energy can ultimately be linked to the Carnahan-Starling expression for hard spheres, which provides a simple strategy to account for the effect of higher-body particle interactions, albeit in an implicit and approximate manner. As required, $G_{P}$ approaches unity in the limit $\phi \rightarrow 0$ in which case the original second-virial theory is recovered. Let us now specify the orientational averaging. By definition, all orientations are equally probable in the isotropic (I) phase and $f_{I} = 1/4\pi$.
The orientational entropy then vanishes: \begin{equation} \langle \ln 4 \pi f_{I} \rangle _{I}\equiv 0 \end{equation} If we use the random isotropic averages $\langle \langle \sin \gamma \rangle \rangle_{I} = \pi/4$, $\langle \langle E(\sin \gamma) \rangle \rangle_{I} = \pi^2 /8 $ and $\langle \langle \cos \gamma \rangle \rangle_{I} = 1/2$ the excluded volume entropy reduces to: \begin{equation} \frac{ \langle \langle V_{\text{excl}}(\gamma) \rangle \rangle _{I} }{D^3} = \frac{\pi^2}{8} + \left( \frac{3\pi}{4} + \frac{\pi^2}{4} \right ) \frac{L}{D} + \frac{\pi}{2} \left ( \frac{L}{D} \right )^{2} \end{equation} With this, the free energy of the isotropic phase is fully specified. In the nematic (N) phase, the particles on average point along a common nematic director $ {\bf \hat{n}} $ and the orientation distribution $f( {\bf \hat{u}} \cdot {\bf \hat{n}} )$ is no longer a trivial constant. For a uniaxial nematic phase, $f( {\bf \hat{u}} )=f(\theta )$ involving the polar angle $ 0 \leq \theta \leq \pi$ between the plate normal and the director, with $f$ being a peaked function around $\theta = 0$ and $\theta = \pi$. The equilibrium form follows from the minimum condition of the free energy: \begin{equation} \frac{\delta}{\delta f } \left ( \frac{\beta F}{N} - \lambda \int d {\bf \hat{u}} f ( {\bf \hat{u}} ) \right ) = 0 \label{statio} \end{equation} where the Lagrange multiplier $\lambda$ ensures the normalisation of $f$. 
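The random isotropic averages quoted above are readily checked by Monte Carlo integration over pairs of random orientations. The sketch below is a numerical aside, not part of the derivation; the sample size is chosen only for illustration, and SciPy's `ellipe(m)` takes the parameter $m=\sin^2\gamma$:

```python
import numpy as np
from scipy.special import ellipe  # complete elliptic integral E(m), m = k^2

rng = np.random.default_rng(1)

def random_unit_vectors(n):
    """Uniformly distributed unit vectors (plate normals)."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Monte Carlo estimate of the isotropic double averages over pairs of
# random orientations; gamma is defined through sin(gamma) = |u x u'|.
n = 400_000
u, up = random_unit_vectors(n), random_unit_vectors(n)
cos_g = np.abs(np.einsum('ij,ij->i', u, up))
sin_g = np.sqrt(np.clip(1.0 - cos_g**2, 0.0, 1.0))

print(sin_g.mean())              # ~ pi/4
print(ellipe(sin_g**2).mean())   # ~ pi^2/8
print(cos_g.mean())              # ~ 1/2
```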
Applying the condition to \eq{free} leads to a self-consistency equation for $f$: \begin{equation} f( {\bf \hat{u}} ) = \frac{\exp \left [ - cG_{P}(\phi) \int d {\bf \hat{u}}^{\prime} D^{-3} V_{\text{excl}}(| {\bf \hat{u}} \times {\bf \hat{u}}^{\prime} | ) f( {\bf \hat{u}}^{\prime} ) \right ] }{\int d {\bf \hat{u}} \exp \left [ - cG_{P}(\phi) \int d {\bf \hat{u}}^{\prime} D^{-3} V_{\text{excl}}(| {\bf \hat{u}} \times {\bf \hat{u}}^{\prime} | ) f( {\bf \hat{u}}^{\prime} ) \right ] } \label{sc} \end{equation} which needs to be solved numerically for a given particle concentration \cite{herzfeldgrid}. Note that the isotropic distribution $f=1/4\pi$ is a trivial solution of \eq{sc}, irrespective of $c$. At higher densities, nematic solutions of \eq{sc} will appear which give rise to a lower free energy than the isotropic one. The nematic order parameter $S$, defined as: \begin{equation} S = \int d {\bf \hat{u}} {\mathcal P}_{2} ( \cos \theta) f( {\bf \hat{u}} ) \label{s2} \end{equation} [where ${\mathcal P}_{2}(x) = (3x^{2}-1)/2$] is used to distinguish the isotropic state ($S=0$) from the nematic ($0 < S \leq 1$). Once the equilibrium orientational distribution function is known, the pressure $P = -(\partial F/ \partial V)_{N,T}$ and chemical potential $\mu = (\partial F/ \partial N)_{V,T} $ can be specified to establish phase equilibria between isotropic and nematic states. \section{LJD cell theory for the columnar phase} To describe the thermodynamic properties of a columnar phase we use an extended cell theory as proposed in \olcite{wensinkcol}. In this approach, the structure of a columnar phase is envisioned in terms of columns ordered along a perfect lattice in two lateral dimensions with a strictly one-dimensional fluid behaviour of the constituents in the remaining direction along the columns.
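In practice, \eq{sc} is solved by damped fixed-point iteration of the ODF on a polar-angle grid. The sketch below is a minimal illustration of such a solver with our own discretisation choices (grid sizes, damping, seed distribution are all illustrative); the excluded volume \eq{vexcl} and the Parsons factor $G_{P}$ are coded directly, and SciPy's `ellipe(m)` takes the parameter $m=\sin^2\gamma$:

```python
import numpy as np
from scipy.special import ellipe  # complete elliptic integral E(m), m = sin^2(gamma)

def v_excl(sin_g, LD):
    """Excluded volume of two plate-like cylinders in units of D^3, Eq. (vexcl)."""
    s = np.abs(sin_g)
    c = np.sqrt(np.clip(1.0 - s**2, 0.0, 1.0))
    return np.pi/2*s + LD*(np.pi/2 + 2*ellipe(s**2) + np.pi/2*c) + 2*LD**2*s

def solve_odf(conc, LD=0.1, ntheta=60, nphi=60, maxit=2000, tol=1e-10):
    """Damped fixed-point (Picard) iteration for the uniaxial ODF of Eq. (sc)."""
    theta = (np.arange(ntheta) + 0.5) * np.pi / ntheta
    w = 2.0*np.pi * np.sin(theta) * (np.pi / ntheta)     # dOmega quadrature weights
    ct, st = np.cos(theta), np.sin(theta)
    phi = (np.arange(nphi) + 0.5) * 2.0*np.pi / nphi
    # kernel K[i,j]: azimuthal average of V_excl/D^3 at fixed polar angles
    cos_g = (ct[:, None, None]*ct[None, :, None]
             + st[:, None, None]*st[None, :, None]*np.cos(phi)[None, None, :])
    K = v_excl(np.sqrt(np.clip(1.0 - cos_g**2, 0.0, 1.0)), LD).mean(axis=2)
    phi_pack = conc * np.pi/4 * LD                       # packing fraction
    gp = (1 - 0.75*phi_pack) / (1 - phi_pack)**2         # Parsons-Lee factor
    # up-down symmetric nematic seed, peaked at theta = 0 and pi
    f = np.exp(-5.0*theta**2) + np.exp(-5.0*(np.pi - theta)**2)
    f /= np.sum(w * f)
    for _ in range(maxit):
        fnew = np.exp(-conc * gp * (K @ (w * f)))
        fnew /= np.sum(w * fnew)
        if np.max(np.abs(fnew - f)) < tol:
            f = fnew
            break
        f = 0.5*f + 0.5*fnew                             # damped update
    S = np.sum(w * f * 0.5*(3.0*ct**2 - 1.0))            # order parameter, Eq. (s2)
    return f, S

_, S_iso = solve_odf(1.0)   # dilute: decays to the isotropic solution
_, S_nem = solve_odf(6.0)   # dense: strongly ordered nematic solution
print(S_iso, S_nem)
```

At low concentration the iteration collapses onto the trivial isotropic solution $f=1/4\pi$, while above the transition the nematic branch with lower free energy is picked up by the peaked seed.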
As for the latter, the configurational integral of a system of {\em parallel} platelets with thickness $L$ and diameter $D$ with their centre-of-mass moving along the plate normal on a line of length $\ell$ is formally given by \cite{tonks}: \begin{equation} Q_{\text{fluid}} (N , \ell , T ) = \frac{1} {\Lambda ^{N } N ! } \left [ \ell - N L \right ] ^{N} \end{equation} with $\Lambda$ the thermal de Broglie wavelength. The columns are assumed to be strictly linear and rigid so that fluctuations associated with bending of the columns can be ignored. Next, we allow the platelets to rotate slightly about their centre-of-mass. At high packing fractions, the rotational freedom of each platelet is assumed to be asymptotically small and the configurational integral above may be approximated as follows: \begin{equation} Q_{\text{fluid}} (N , \ell , T ) \approx \frac{ Q _{\text{or}} } { {\mathcal V}_{1} ^{N} N ! } \left [ \ell - N \langle L_{\text{eff}} \rangle \right ] ^{N} \label{qrot} \end{equation} where ${\mathcal V_{1}}$ represents the total 1D thermal volume including contributions arising from the 3D rotational momenta of the platelet. Furthermore, $Q_{\text{or}} = \exp [ -N \langle \ln 4 \pi f \rangle ] $ is an orientational partition integral depending on the orientational probability distribution $f$. In the {\em mean-field} description implied by \eq{qrot} there is no coupling between the orientational degrees of freedom of the platelets. The rotational freedom of the platelets is expressed in an {\em effective entropic thickness}, defined as \begin{equation} \langle L_{\text{eff}} \rangle = L \left \{ 1 + \frac{1}{2}\frac{D}{L} \int d(\cos \theta) |\theta| f(\theta) + \cdots \right \} \label{leffe} \end{equation} up to leading order in the polar angle $\theta$ which measures the deviation of the plate normal from the direction of the column.
A prefactor of `$1/2$' in \eq{leffe} has been included to correct in part for the azimuthal rotational freedom and captures the effect that the excluded length between two platelets at fixed polar angles becomes minimal when the azimuthal orientations are the same. The free energy of the 1D fluid then follows from $\beta F = -\ln Q$: \begin{equation} \frac{\beta F_{\text{fluid}}}{N} = \ln \tilde{{\mathcal V}}_{1} \rho - 1 + \langle \ln 4 \pi f \rangle - \ln \left [ 1 - \rho \langle \tilde{L}_{\text{eff}} \rangle \right ] \label{free1d} \end{equation} in terms of $\tilde{{\mathcal V}}_{1} = {\mathcal V}_{1}/L$, the reduced {\em linear density} $\rho = NL/\ell$ and effective thickness $\tilde{L}_{\text{eff}} = L_{\text{eff}}/L$. Similar to the nematic case in the previous Section, the equilibrium form $f(\theta)$ is found by a formal minimisation of the free energy under the normalisation constraint. The corresponding stationarity condition is given by \eq{statio}. Since the free energy \eq{free1d} depends only on one-particle orientational averages, the equilibrium ODF can be obtained in closed form and turns out to be of a simple exponential form: \begin{equation} f (\theta) = \frac{\xi^{2}}{4 \pi} \exp [- \xi |\theta |] \label{expo} \end{equation} with \begin{equation} \xi = \left ( \frac{3}{2} \frac{D}{L} \right ) \frac{\rho}{1 - \rho } \end{equation} The orientational averages are now easily carried out and the leading order expressions for the orientational entropy and entropic thickness are given by: \begin{eqnarray} \langle \ln 4 \pi f \rangle &=& 2 \ln \xi - 2 \nonumber \\ \langle \tilde{L}_{\text{eff}} \rangle &=& 1 + \left (\frac{D}{L} \right ) \frac{1}{\xi} \end{eqnarray} As a measure for the orientational order along the column, we can define a nematic order parameter $S$ [{\em cf.} \eq{s2}] related to $\xi$ via: \begin{eqnarray} S \equiv \left \langle {\cal P}_{2}(\cos \theta) \right \rangle \sim 1 - \frac{3}{2} \left \langle \theta^{2} \right \rangle \sim
1 - \frac{9}{\xi^{2}} \label{xis} \end{eqnarray} Let us now turn to the free energy associated with the positional order along the lateral directions of the columnar liquid crystal. A formal way to proceed is to map the system onto an ensemble of $N$ disks ordered into a 2D lattice. Near the close packing density, the configurational integral of the system is provided in good approximation by the LJD cell theory \cite{lennardjones,wood,salsburg,kirkwoodfreevol}. Within the framework of the cell model, particles are considered to be localised in `cells' centred on the sites of a fully occupied lattice (of some prescribed symmetry). Each particle experiences a potential energy $u_{\text{cell}}^{\text{nn}}({\bf r})$ generated by its nearest neighbours. In the simplest version, the theory presupposes each cell to contain one particle moving {\em independently} from its neighbours. The $N$-particle canonical partition function can then be factorised as follows: \begin{eqnarray} Q_{\text{LJD}}(N) &=& \frac{1} { \Lambda ^{2N} } \int d{\bf r}^{N} \exp[-\beta U({\bf r}^{N})] \nonumber \\ & \approx & \left( \frac{1} { \Lambda ^{2}} \int d^{2}{\bf r}\exp \left[-\frac{\beta}{2} u_{\text{cell}}^{\text{nn}}({\bf r}) \right] \right)^{N} \end{eqnarray} For hard interactions, the second phase space integral is simply the cell {\em free area} available to each particle. If we assume the nearest neighbours to form a perfect hexagonal cage, the free area is given by $A_{\text{free}}=\sqrt{3}(\Delta_{C}-D)^{2}/2$ with $\Delta_{C}$ the nearest neighbour distance. The configurational integral then becomes \begin{equation} Q_{\text{LJD}}(N) \approx \left ( \frac{ A_{\text{free}} }{\Lambda ^2} \right)^{N} = \left ( \frac{\frac{1}{2}\sqrt{3}\Delta_{C}^{2}} {\Lambda ^2} \right )^{N} \left( 1-\bar{\Delta}_{C}^{-1} \right)^{2N} \end{equation} where the (lateral) spacing $\bar{\Delta}_{C}=\Delta_{C}/D$ is a measure for the translational freedom each particle experiences within the cage. 
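The leading-order averages quoted for the exponential ODF \eq{expo} can be verified by direct numerical integration in the small-angle measure. This is a numerical aside; the value of $\xi$ is arbitrary and chosen only to represent a strongly aligned column:

```python
import numpy as np
from scipy.integrate import quad

xi = 40.0   # illustrative alignment parameter, xi >> 1

def avg(g):
    """Average of g(theta) over f = (xi^2/4pi) exp(-xi|theta|), using the
    small-angle measure dOmega ~ 2 * 2pi * theta dtheta (both polar caps)."""
    val, _ = quad(lambda th: g(th) * th * np.exp(-xi * th), 0.0, np.inf)
    return xi**2 * val

norm = avg(lambda th: 1.0)                 # normalisation, should equal 1
mean_theta = avg(lambda th: th)            # <|theta|> = 2/xi
entropy = 2*np.log(xi) - xi*mean_theta     # <ln 4 pi f> = 2 ln xi - 2
S = avg(lambda th: 1.0 - 1.5*th**2)        # P2 to leading order: 1 - 9/xi^2
print(norm, entropy, S)
```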
The free energy associated with the LJD cell theory is given by: \begin{equation} \frac{ \beta F _{\text{LJD}} }{N} = \ln \frac{\Lambda^2}{D^2} + \ln \frac{2}{\sqrt{3}} + 2 \ln \left ( \frac{ \bar{\Delta}_{C}^{-1} }{1- \bar{\Delta}_{C}^{-1}} \right ) \label{fljd} \end{equation} The LJD equation of state associated with \eq{fljd} provides a very accurate description of a 2D solid at densities near close-packing \cite{aldersimul}. If we now apply the condition of single-occupancy (i.e. one array of platelets per column) we can use $\bar{\Delta}_{C}$ to relate the plate volume fraction $\phi = Nv_{0} / V$ (with $v_{0} = (\pi/4) L D^2 $ the particle volume) to the reduced linear density $ \rho $ via: \begin{equation} \phi^{\ast} \bar{\Delta}_{C}^{2} = \rho \label{tienrhophi} \end{equation} in terms of the reduced packing fraction $\phi^{\ast} =\phi /\phi_{\text{cp}}$ with $\phi_{\text{cp}}=\pi/2\sqrt{3}\approx 0.907$ the value at close packing. The total free energy of the columnar state is now obtained by adding the fluid and LJD contributions: \begin{eqnarray} \frac{\beta F_{\text{col}}}{N} & = & \ln \tilde{{\mathcal V}}c - 1 + 2 \ln \left \{ \frac{3}{2} \frac{D}{L} \left ( \frac{\phi^{\ast} \bar{\Delta}_{C}^{2} }{1 - \phi^{\ast} \bar{\Delta}_{C}^{2} } \right ) \right \} - 2 \nonumber \\ && - \ln \left ( \frac{ 1 - \phi^{\ast} \bar{\Delta}_{C}^{2}}{3} \right ) - 2 \ln ( 1 - \bar{\Delta}_{C}^{-1} ) \label{freecell} \end{eqnarray} where the ideal contribution is identical to that of \eq{free}. The final step is to minimise the total free energy with respect to $\bar{\Delta}_{C}$. 
The stationarity condition $\partial F / \partial \bar{\Delta}_C = 0$ yields a third-order polynomial whose physical solution reads: \begin{equation} \bar{\Delta}_C = \frac{-3^{1/3}4 \phi^{\ast} + 2^{1/3} K^{2/3}}{6^{2/3} \phi^{\ast} K ^{1/3}} \end{equation} with \begin{equation} K = 27 (\phi ^{\ast}) ^2 + [3 (\phi^{\ast})^3 (32 + 243 \phi^{\ast})]^{1/2} \end{equation} With this, the free energy for the columnar state is fully specified. Unlike the nematic free energy, the columnar free energy is entirely algebraic and does not involve any implicit minimisation condition to be solved ({\em cf.} \eq{statio}). The pressure and chemical potential can be found in the usual way by taking the appropriate derivative of \eq{freecell}. In Sec. \ref{asymp} we will show that the nematic free energy can also be recast in closed algebraic form using a simple variational form for the ODF, similar to \eq{expo}. \begin{figure*} \begin{center} \begin{picture}(20,0) \put(1,-5){(a)} \put(9,-5){(b)} \end{picture} \includegraphics[clip=,width = 0.9\columnwidth ]{fig1a} \includegraphics[clip=,width = 0.9\columnwidth ]{fig1b} \caption{\label{mono} (a) Phase diagram for monodisperse colloidal platelets of variable aspect ratio $L/D$ in terms of the plate packing fraction $\phi =(\pi/4) LD^{2}N/V$. Thin continuous lines serve to guide the simulation data. (b) Dimensionless concentration of the coexisting isotropic and nematic phases as a function of the plate aspect ratio. Inset: nematic order parameter $S$ of the coexisting nematic phase plotted versus aspect ratio. } \end{center} \end{figure*} \section{Phase diagram} \fig{mono} presents an overview of the phase behaviour of hard cylindrical platelets based on the theoretical approach described above, along with various simulation data for hard cut spheres available in the literature.
From \fig{mono}a it is evident that the packing fractions associated with the isotropic-nematic coexistence increase for larger aspect ratio whereas the nematic-columnar transition remains virtually unaffected by the shape of the platelet. This observation is in line with the tentative phase diagram constructed by Veerman and Frenkel \cite{Veerman}. The trends can be understood qualitatively by noting that the onset of nematic order occurs if the fraction of {\em excluded volume} $\sim ND^3/V $ exceeds a certain universal value of about $4$ (as reflected in \fig{mono}b) whereas columnar order only becomes stable beyond a critical packing fraction, typically $\phi \simeq 0.4$. Whence: \begin{eqnarray} \phi_{\text{IN}} & \simeq & \pi L/D, \hspace{1cm} L/D \ll 1 \nonumber \\ \phi_{\text{NC}} & \simeq & 0.4 \end{eqnarray} which implies the presence of a {\em triple} aspect ratio, fixed by the intersection of both nematic binodals. Although at this particular value an isotropic-nematic-columnar triphasic coexistence occurs above a certain packing fraction, the system volume occupied by the nematic phase is always infinitesimally small and the situation thus differs from a regular tri-phasic coexistence occurring in e.g. binary mixtures at a given thermodynamic state point. Equating both expressions we estimate the triple aspect ratio to be $L/D = 0.4/\pi \approx 0.127$, which is very close to the value $0.125$ obtained from extrapolating the simulation binodals from \olcites{zhang2,beekschilling}. Beyond the triple aspect ratio, the platelets are no longer sufficiently anisometric to guarantee a stable nematic phase and direct transitions from the isotropic fluid to the columnar solid occur. We should note that our theory does not take into account the theoretically disputed cubatic phase as an intermediate state between the isotropic and columnar phases.
The issue of the stability of cubatic order with respect to columnar order is discussed in a recent simulation study by Duncan {\em et al.} \cite{duncanmasters}. The transition densities from Veerman \cite{Veerman} and Bates \cite{batesthindisks} are systematically larger than those reported by Zhang \cite{zhang2} and van der Beek \cite{beekschilling} and therefore give rise to a slightly higher estimate of the triple value ($L/D \simeq 0.14$). The theoretical value $L/D =0.175$ deviates considerably from the ones predicted by simulations, mainly because the predicted packing fractions of the coexisting nematic and columnar phases are too large. The equations of state presented in \fig{druk} demonstrate that the main source of error must be the chemical potential of either the nematic or the columnar branch, rather than the pressure. For $L/D=0.05$, the predicted pressures in the nematic and columnar states are fairly close to the simulation results with discrepancies less than a few percent in both branches. For larger aspect ratios ($L/D > 0.1$) the agreement between theory and simulation is quite satisfactory, despite an increased shape difference between the cylinder and the cut sphere. For $L/D = 0.2$, the occurrence of cubatic order has been reported in simulation \cite{Veerman} which is not taken into account in the present model. The isotropic binodal point in \fig{mono}a at this value is taken to be the mean value between the onset of cubatic order and the transition to the columnar state, with the error bar indicating the boundaries of cubatic order. For the thickest species $L/D=0.3$, the coexisting high density phase was found to be a solid rather than a columnar phase. Similarly, for smaller aspect ratios a continuous columnar-solid transition line could be located beyond the nematic-columnar transition, moving toward higher packing fractions upon decreasing $L/D$.
In our simple cell-fluid model there is, however, no distinction between the columnar and solid states due to the absence of a freezing transition within the strictly 1D line fluid representing the structure along the column direction. \begin{figure*} \begin{center} \begin{picture}(20,0) \put(1,-5){(a)} \put(9,-5){(b)} \end{picture} \includegraphics[clip=,width= 0.9 \columnwidth ]{fig2a} \includegraphics[clip=,width= 0.9 \columnwidth ]{fig2b} \caption{\label{druk} Equation of state for colloidal platelets for two different inverse plate aspect ratios $D/L$, plotted in terms of the reduced pressure $PD^{3}/k_{\rm B}T$ versus dimensionless concentration $ND^{3}/V$. (a) Isotropic-nematic density region. (b) Nematic-columnar region.} \end{center} \end{figure*} The predictive power of the Onsager-Parsons theory for platelets is perhaps better highlighted in \fig{mono}b, where the isotropic-nematic binodals are plotted in terms of the reduced number concentration $c=ND^{3}/V$. The agreement is reasonable for large aspect ratio but rather poor for thin platelets. In the limit of infinitely thin disks ($L/D \rightarrow 0$) the coexistence concentrations are identical to those obtained from Onsager's second-virial theory {\em viz.} $c_{I} = 3.29 (16/\pi^{2})$ and $c_{N}=4.191(16/\pi^{2})$ \cite{Lekkerkerker84}. This is easily understood from the fact that the packing fraction of infinitely thin disks at a given finite number concentration is zero. Consequently, $G_{P}(\phi)$ reduces to unity and the Parsons decoupling approximation involving the hard sphere excess free energy becomes ineffective. A similar reduction to the $B_{2}$ level takes place for infinitely thin rods ($L/D \rightarrow \infty$). However, contrary to rods, the effect of the {\em third virial coefficient} $B_{3}$ is finite for disks with vanishing thickness. 
In general, for isotropic systems of hard cylinders we have: \begin{eqnarray} B_{3} / B_{2}^{2} &=& 0, \hspace{1cm} ( L/D \rightarrow \infty) \nonumber \\ B_{3} / B_{2}^{2} &=& 0.444, \hspace{1cm} ( D/L \rightarrow \infty) \end{eqnarray} where the latter value is taken from \olcite{Eppengafrenkel}. Likewise, higher order virial contributions will also be non-zero. Simulation studies of the virial terms up to $B_{7}$ \cite{Eppengafrenkel,youvlasov} reveal that higher order virial terms involve alternating positive and negative contributions of comparable magnitude, indicating just how complicated virial expansions are for dense fluids of platelets. The virial terms generated by the Onsager-Parsons free energy can be obtained from the virial expression for the excess free energy $\beta F^{\text{ex}}/N = \sum_{n \geq 2} B_{n} \rho^{n-1} / (n-1)$. Applying this to \eq{free} gives: \begin{equation} \frac{B_{n}}{ B_{2}^{n-1}} = \frac{(n+2)(n-1)}{4(n-2)!} \left ( \frac{\pi}{8} \frac{L}{D} \frac{ \langle \langle V_{\text{excl}}(\gamma) \rangle \rangle }{D^{3}} \right )^{n-2}\hspace{0.1cm} (n \geq 2) \end{equation} It is clear that all contributions beyond $B_{2}$ are zero for $L/D=0$ thus leading back to the original Onsager result. For $L/D=0.1$ the reduced third, fourth and fifth virial coefficients in the isotropic phase are 0.170, 0.010 and 3.67 $\cdot 10^{-4}$. Comparing these with the numerically exact values 0.508, 0.111 and -0.10 for cut spheres \cite{youvlasov} shows that higher-order correlations are systematically under-weighted by the Parsons method. \section{Asymptotic results for the N-C transition} \label{asymp} A simple rationale for the apparent independence of the NC transition with respect to particle shape can be obtained by comparing the free energy of the two states and exploiting the fact that the nematic order at densities close to the transition is very strong. 
In that case the average excluded volume \eq{vexcl} between the particles in the nematic phase can be approximated by retaining the leading order contribution for small inter-rod angles $\gamma$: \begin{equation} \langle \langle \tilde{V}_{\text{excl}}( \gamma ) \rangle \rangle \sim 2 \pi \frac{L}{D} + \left ( \frac{\pi}{2} + \frac{2L^2}{D^2} \right ) \langle \langle \gamma \rangle \rangle \end{equation} The orientational averages indicated by the brackets can be estimated using a Gaussian Ansatz for the ODF \cite{OdijkLekkerkerker}: \begin{equation} \label{gaussint} \left \langle (\cdot) \right \rangle \sim \int_{-\pi/2}^{\pi/2} d \theta | \sin \theta | \int_{0}^{2\pi} d \varphi f_{G}(\theta) (\cdot) \end{equation} in terms of the following one-parameter Gaussian variational function: \begin{equation} f_{G}(\theta) = {\mathcal N} \exp \left [ - \frac{ \alpha }{2} \theta ^{2} \right ], \hspace{1cm} \left ( -\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2} \right ) \end{equation} where the normalisation factor ${\mathcal N}$ follows from $\langle 1 \rangle =1$. The variational parameter $\alpha$ is required to be much larger than unity so that $f_{G}$ is sharply peaked around $\theta=0$. In that case, the integration over the polar angle $\theta$ in \eq{gaussint} can be safely extended to $\pm \infty$ and $\sin \theta \approx \theta$. In the asymptotic limit, the normalization constant is given by ${\mathcal N}=\alpha/4\pi$ and the double orientational average over the angle $\gamma$ in the nematic phase is found to be \cite{OdijkLekkerkerker}: \begin{equation} \left \langle \left \langle \gamma \right \rangle \right \rangle \sim \left ( \frac{\pi}{\alpha} \right )^{1/2} \end{equation} to leading order in $\alpha $. Similarly, the orientational entropy can be approximated by: \begin{equation} \left \langle \ln 4 \pi f_{G} \right \rangle \sim \ln \alpha - 1 \end{equation} The nematic order parameter [{\em cf.} \eq{xis}] follows from $ S \sim 1-3/\alpha $. 
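These asymptotic expressions are easy to check numerically. The sketch below is a minimal plain-Python quadrature (the value $\alpha=400$ and the number of quadrature points are illustrative assumptions); it evaluates the normalisation and the orientational entropy with the full $|\sin\theta|$ measure and compares them with the asymptotic results $\alpha/4\pi$ and $\ln\alpha-1$:

```python
import math

def trap(g, a, b, n=20000):
    # composite trapezoidal rule for a smooth integrand g on [a, b]
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + k * h) for k in range(1, n)))

alpha = 400.0  # sharply peaked ODF: alpha >> 1
w = lambda t: math.sin(t) * math.exp(-0.5 * alpha * t * t)

# normalisation from <1> = 4*pi*N * int_0^{pi/2} sin(t) exp(-alpha t^2/2) dt = 1
N = 1.0 / (4.0 * math.pi * trap(w, 0.0, math.pi / 2))

# orientational entropy <ln 4*pi*f_G> with the same weighted measure
entropy = 4.0 * math.pi * N * trap(
    lambda t: w(t) * (math.log(4.0 * math.pi * N) - 0.5 * alpha * t * t),
    0.0, math.pi / 2)

print(N / (alpha / (4.0 * math.pi)))      # ratio tends to 1 for large alpha
print(entropy / (math.log(alpha) - 1.0))  # ratio tends to 1 for large alpha
```

Both ratios approach unity with $O(1/\alpha)$ corrections, confirming that extending the polar integration to $\pm\infty$ and linearising $\sin\theta$ is harmless in the strongly ordered regime.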
These algebraic results allow the minimisation of the free energy with respect to $\alpha$ to be carried out analytically and lead to a closed expression for the nematic free energy. Using the asymptotic expressions above, and introducing the volume fraction as a density variable, the following algebraic form for the Onsager-Parsons nematic free energy is obtained: \begin{eqnarray} \frac{\beta F_{\text{nem}}}{N} & \sim & (\ln \phi - 1) +\left \{ 2 \ln \left ( \frac{D}{L} \frac{\pi^{1/2}}{2} \phi G_{P}(\phi) \right ) -1 \right \} \nonumber \\ && + 2 + 4 \phi G_{P}(\phi) \end{eqnarray} which is a sum of the ideal, orientational and excess parts, respectively. Similarly, one may derive for the columnar free energy: \begin{eqnarray} \frac{\beta F_{\text{col}}}{N} & \sim & (\ln \phi - 1 ) +\left \{ 2 \ln \left ( \frac{3}{2} \frac{D}{L} \frac{\rho}{1 - \rho} \right ) - 2 \right \} \nonumber \\ && - \ln \frac{(1-\rho)}{3} - 2 \ln (1-\bar{\Delta}_{C}^{-1}) \end{eqnarray} with $\rho = \phi^{\ast} \bar{\Delta}_C^{2}$, which combines the ideal, orientational, 1D fluid and cell contributions, respectively. The only explicit shape dependency is the contribution $2 \ln D/L$ in the orientational part which is identical in both expressions and therefore does {\em not} affect the NC coexistence properties. Solving the coexistence conditions gives the universal coexistence values $\phi_{N} = 0.500 $ and $\phi_{C} = 0.621 $ and pressure $\beta P D^{3} = 11.37 (D/L)$. Furthermore, the normalised lateral columnar spacing is $\bar{\Delta}_{C} = 1.075$, and the equilibrium variational parameters pertaining to the nematic order in the nematic and columnar phases are given by $\alpha = 1.232 (D/L)^2$ and $\alpha = 1.276 (D/L)^2$, with $\xi = 5.830 (D/L)$, respectively. \section{Conclusions} We have combined the Onsager-Parsons theory with a simple LJD cell model to address the phase behaviour of hard cylindrical platelets with variable aspect ratio.
The theoretical framework provides a simple, yet qualitative underpinning for the competitive stability of the isotropic, nematic and columnar states, observed in Monte Carlo computer simulations. Upon increasing the aspect ratio, the window of stability for the nematic phase decreases systematically up to a critical value, identified as a triple equilibrium. Beyond this value the anisometry of the plates is too small to warrant a stable nematic phase and direct transitions from an isotropic fluid to a columnar solid occur. It would be intriguing to verify whether the stability of cubatic order, suggested by the simulations, can be captured with a similar free volume concept. Rather than forming a close-packed assembly of columns, the cubatic phase must be envisioned in terms of interacting finite-sized stacks with random orientations. This will be explored in a future study. \acknowledgments We are grateful to George Jackson and Jeroen van Duijneveldt for fruitful discussions. HHW acknowledges the Ramsay Memorial Fellowship Trust for financial support.
\section{Introduction and description of the main results} The object of study in the present paper is a class of entire solutions of the system \beq\label{Euler-Lagrange equation} \Delta u-W_u(u)=0,\quad u:\BR^n\ri\BR^m, \eeq $n,m\in\mathbb{N}^+$, where $W:\BR^m\ri\BR$ is a phase transition potential that is nonnegative and vanishes only on a finite set $\{W=0\}=:A=\{a_1,...,a_N\}$ for some distinct points $a_1,...,a_N\in\BR^m$ that represent the phases of a substance which can exist in $N\geq 2$ different equally preferred phases. The system \eqref{Euler-Lagrange equation} is the Euler-Lagrange equation corresponding to the Allen-Cahn free energy functional \beq\label{AC functional} J_D(v)=\int_D \left( \f12|\na v|^2+W(v) \right)\,dx. \eeq We restrict ourselves to maps $u\in W_{loc}^{1,2}(\BR^n,\BR^m)\cap L^\infty(\BR^n,\BR^m)$ which minimize $J$ subject to their Dirichlet data, \beq\label{def of minimizer} J_D(u+v)\geq J_D(u),\quad \forall v\in W_0^{1,2}(D,\BR^m)\cap L^\infty(D,\BR^m) \eeq for any open, bounded Lipschitz set $D\subset \BR^n$. We call such maps \emph{entire minimizers} of $J$, and note that they clearly satisfy \eqref{Euler-Lagrange equation} under appropriate regularity hypotheses on $W$. In relation to the phase transition interpretation, an interesting and difficult problem is the existence of multi-phase solutions, that is, solutions with the following geometrical properties: \emph{There is an $\hat{N}\in \mathbb{N}$ with $2\leq \hat{N}\leq N$, $\hat{a}_1,...,\hat{a}_{\hat{N}}\in A$, a small $\g>0$ and open sets $\Om_1,...,\Om_{\hat{N}}$ such that} \beq\label{partition by various phases} \BR^n=I\cup\left( \bigcup\limits_{j=1}^{\hat{N}}\Om_j \right) \eeq \emph{with $I$ being a set of thickness $O(1)$ and} \beq\label{in each phase} |u(x)-\hat{a}_j|\leq \g,\quad\forall x\in \Om_j. \eeq The set $I$ plays the role of a diffuse interface that separates the coexisting phases.
Understanding the geometrical structure of this diffuse interface is a major point in the study of such solutions \cite{F}. In the scalar case $m=1$, for $W\in C^2(\BR^m,\BR)$, $a_i$ nondegenerate (i.e. $\f{\pa^2 W}{\pa u^2}>0$ at the minima), and for $N=2$ which is the natural choice, there is a rich literature and many important results. We list some of these works and organize them in two groups: papers that address various general aspects (see \cite{ft,ks,gurtin,sternberg,cc,cc2,bcn,modica1,modica2,farina}); and papers that are motivated by a celebrated conjecture of De Giorgi (see \cite{degiorgi,mm,gg,ac,s,wei,ct,dkw0}), where a relationship of $I$ with minimal surfaces, and in particular with hyperplanes in low dimensions, is established. The reader could also consult the expository papers \cite{fv,savin,s0,dkw}. In the vector case $m\geq 2$ and $N\geq 3$, the structure of $I$ is not expected to be planar in any dimension $n\geq 2$ but rather linked to the minimal cones\footnote{The complete classification of the minimal cones in $\BR^n$ is known only for $n=3$ \cite{taylor}. We refer to Section 7 in the expository paper of David \cite{david} and references therein.} in $\BR^n$. Such solutions are called \emph{junctions}, and they have been shown to exist for $n=m=2$ and $n=m=3$ with $\hat{N}=N=3$ and $\hat{N}=N=4$ respectively, but so far only under symmetry hypotheses. Specifically, the existence is established for: \begin{itemize} \item $n=2$ with respect to the symmetries of the equilateral triangle \cite{bgs}, \item $n=3$ with respect to the symmetries of the tetrahedron \cite{gs}, \end{itemize} and with \eqref{def of minimizer} verified only in their respective equivariance classes. We refer to \cite{book-afs} where the symmetric case is covered in detail for general reflection point groups and also for lattices. For subquadratic potentials that behave like $|u-a|^\al$ near $a\in A$, $\al\in(0,2)$, the interface is less diffuse, as we explain below.
More precisely, we consider potentials that satisfy \begin{enumerate} \itemsep0.6em \item[H1.] $W\in C(\BR^m, [0,\infty))\cap C^2_{loc}(\BR^m\backslash A)$ with $\{W=0\}=A=\{a_1,\dots,a_N\}$, $N\geq 2$. \item[H2.] Set \beq\label{def of r0} r_0:=\f12\min\limits_{1\leq i\neq j\leq N}|a_i-a_j|. \eeq For any $a_i\in A$, $W(u)$ can be written as \beq\label{formula for W near ai} W(u)=|u-a_i|^\al g_i(u) \;\;\text{ for }u\in B_{r_0}(a_i) \eeq for some function $g_i(u)\in C^2(B_{r_0}(a_i), [c_i,\infty))$ where $c_i$ is a positive constant. \end{enumerate} For such subquadratic potentials, \eqref{in each phase} is replaced by \beq\label{in each phase free bdy} u(x)=\hat{a}_j,\quad\forall x\in \Om_j. \eeq Indeed, in this case the entire minimizer possesses a free boundary and the phases $\hat{a}_j$ are attained \cite{agz}, while for $\al=2$ the solution converges exponentially at infinity to the phases. The subquadratic assumption can be thought of as a reduction which simplifies without changing the essential features of this type of solution. In a suitable limit $\al\ri0$, \eqref{AC functional} becomes the Alt-Caffarelli functional. We now define the appropriate analog of the set $I$ in \eqref{partition by various phases} for the vector analog of the subquadratic potentials. \begin{definition}\label{diffuse interface} Let $0<\g_0<\f12\min\limits_{i\neq j}|a_i-a_j|$ be fixed. Set \beq\label{def of delta} \d(x)=\dist(u(x),\{W=0\}) \eeq where $\dist$ stands for the Euclidean distance. Let $0<\g< \g_0$ and assume $\g_0<\sup\limits_{\BR^n} \d(x)$. We define the set \beq\label{def of diffuse interface} I_\g=\{x\in\BR^n:\d(x)\geq \g\}.
\eeq \end{definition} For entire minimizers satisfying $\|u\|_{L^\infty(\BR^n,\BR^m)}<\infty$, $\|\na u\|_{L^\infty(\BR^n,\BR^m)}<\infty$, and $0<\al\leq 2$, the following estimate is known (see \cite[Lemma 5.5]{book-afs}) \beq\label{est of I_g} c_1(\g)r^{n-1}\leq \cl(I_\g\cap B_r(x_0))\leq c_2(\g)r^{n-1},\quad r\geq r(x_0), \eeq where $x_0\in\BR^n$ is arbitrarily chosen, $r(x_0)$ is a positive constant depending on $x_0$, and the constants $c_i(\g)>0 \,(i=1,2)$ are independent of $x_0,\,r$. Clearly $I_{\g_1}\subset I_{\g_2}$ if $\g_1>\g_2$, and we define \beq\label{I_0} \lim\limits_{\g\ri 0} I_\g=I_0=\{x\in\BR^n: \d(x)>0\}=: \text{Diffuse Interface}. \eeq The constants $c_1(\g),\, c_2(\g)$ in \eqref{est of I_g} degenerate as $\g\ri 0$ (see \cite{book-afs}), and so no useful information can be obtained for $I_0$ out of \eqref{est of I_g}. Our main result in the present paper concerns certain global facts on $I_0$ that can be summarized in the following theorem. \begin{theorem}\label{main} Let $\al\in(0,2)$, let $u:\BR^n\ri\BR^m$ be a bounded entire minimizer with $I_{\g_0}\neq \varnothing$ (i.e. $u\not\equiv a_i$), and assume H1, H2 above. Then there exists a radius $r_0>0$ and positive constants $c,\,c_1,\,c_2$, which depend only on $u$ and not on $r$, such that for $r\geq r_0$ the following estimates hold: \begin{align} \label{measure of I_0} &\cl^n(I_0\cap B_r(0^n))\leq cr^{n-1},\\ \label{free bdy lower bdd} &\ch^{n-1}(\pa^* I_0\cap B_r(0^n))\geq c_1r^{n-1}, \end{align} where $\pa^*$ denotes the De Giorgi reduced boundary. If furthermore $\al=1$ we have the upper bound estimate \beq\label{est free bdy} \ch^{n-1}(\pa^* I_0\cap B_r(0^n))\leq c_2r^{n-1}.
\eeq \end{theorem} The analogs of the ``minimal cone" solutions in $\BR^2$ and $\BR^3$ mentioned above, as well as the cylindrical triple junction cone in $\BR^3$, have been shown to exist also for $0\leq \al<2$ and to possess a free boundary (see \cite[Theorem 1 and Proposition 4]{agz})\footnote{For $\al=0$, $J(u)=\int\f12|\na u|^2+\chi_{\{u\in S_A\}}$, where $S_A$ denotes the interior of the convex hull of $A=\{a_1,...,a_N\}$. It is the vectorial analog of the Alt-Caffarelli functional. }. Combining Theorem \ref{main} above with that result, we obtain the following \begin{corol}\label{corol with symmetry} Under H1 and H2, $0\leq \al <2$, for the equivariant triangle junction in $\BR^2$ and the equivariant quadruple junction in $\BR^3$ the estimates \eqref{measure of I_0}, \eqref{free bdy lower bdd} and \eqref{est free bdy} hold for any $r\geq r_0>0$. \end{corol} \begin{Remark} We note that the assumption on the existence of minimizers as above is not restrictive, as such nonconstant entire minimizers $U:\BR^1\ri\BR^m$ have been shown to exist in \cite{ms} under the hypothesis of continuity on $W$ together with the mild condition \beq\label{mild condition} \sqrt{W(u)}\geq f(|u|),\text{ for some nonnegative }f:(0,+\infty)\ri\BR\text{ s.t. }\int_0^{+\infty}f(r)\,dr=+\infty, \eeq which even allows the potential to decay to zero at infinity. A more convenient sufficient condition for the existence of such nonconstant entire minimizers (called \emph{connections}) is (see \cite{fgn}) \beq\label{condition at infty} \liminf\limits_{|u|\ri\infty} W(u)>0. \eeq \end{Remark} In \cite{cc}, for the scalar two phase problem, the entire range of potentials $F_0=\chi_{\{|u|<1\}}$, $F(u)=(1-u^2)^{\al/2}$ ($0<\al\leq 1$) was already introduced. As $\al\ri 0$, the minimizers get increasingly localized. In particular, for $\al=0$ the connections are affine functions.
For these reasons we expect the construction of entire minimizers mentioned in Corollary \ref{corol with symmetry} above, under no symmetry hypotheses, to be more accessible for singular potentials. From the point of view of regularity, our problem can be regarded as a generalization of the more simplified model which studies the minimization problem of the functional \beq\label{simplified model} \int_{\Om}\left( \f12|\na u|^2+|u|^\al\right)\,dx,\quad u:\BR^n\ri\BR^m, \; \al\in(0,2). \eeq In the scalar case, i.e. $m=1$, one recovers the two phase free boundary problem \beqo \D u= \al(u^+)^{\al-1}-\al(u^-)^{\al-1}, \eeqo which has been extensively studied under various conditions and settings; see for example \cite{fs} for the case $\al\in(1,2)$ and \cite{lp} for the 2D problem with $\al\in (0,1)$. One can also refer to \cite{lqt,ls} for the optimal regularity theory for a functional with a more general form of the potential $W$. When $\al=1$, the problem becomes the so-called two-phase obstacle-type problem. The regularity of the solution as well as the free boundary regularity has been summarised in detail in \cite{book-psu}. For the vector-valued case, i.e. $m\geq 2$, the minimization problem \eqref{simplified model} was investigated in \cite{asuw} for $\al=1$ and \cite{fsw} for $\al\in(1,2)$, where the authors studied the regularity of the minimizers and the asymptotic behavior of the minimizer near ``regular" points of the free boundary. Also we would like to mention the works \cite{csy,mtv} studying the vector-valued Bernoulli free boundary problem, where the potential function $W(u)$ takes the form $Q^2(x)\chi_{\{|u|>0\}}$. Such a problem is quite close to the $\al=0$ case of the functional \eqref{simplified model}.
All these works mainly focus on the behavior and regularity of the free boundary. The biggest difference between our functional \eqref{AC functional} and the simplified one \eqref{simplified model} is that the potential $W(u)$ possesses more than one global minimum. Furthermore, our emphasis is not so much on the local behavior of the free boundary, but on its asymptotic behavior on the large domain $B_R(0^n)$ as $R\ri \infty$. We note that usually the existence of the free boundary is forced by the Dirichlet boundary condition given on $\pa\Om$. However, in this paper we do not assume any boundary condition and the existence of the free boundary results from the assumption that $u$ is an entire bounded non-constant minimizer. To prove the main Theorem \ref{main}, we first show in Section \ref{section 2} that the Euler-Lagrange equation is satisfied by the minimizer $u$ via the regularity of the solution. Then using the regularity property and a non-degeneracy lemma, we prove the first part of the theorem (the upper bound estimate for the $\cl^n$ measure of $I_0$) by introducing a method of dividing $B_R$ into identical smaller sub-cubes and classifying all the cubes according to how much measure of the ``contact set" they contain. This is done in Theorem \ref{first part main theorem} from Section \ref{section 3}. We also demonstrate in Theorem \ref{two phase existence} the coexistence of at least two phases for the minimizer at each large scale. In Section \ref{growth est} we derive a Weiss-type formula and then give a growth estimate for the minimizer near the free boundary. These results from Section \ref{growth est} are used in Section \ref{section 5}, where we estimate the $\mathcal{H}^{n-1}$ measure of the free boundary when $\al=1$ and prove the second part of Theorem \ref{main}.
Note that currently our method of estimating the $\ch^{n-1}$ measure of the free boundary from above cannot be generalized for arbitrary $\al\in (0,2)$, and we hope to solve this problem in the future. \section{Regularity of the minimizer}\label{section 2} We first prove the optimal regularity of the entire minimizer $u$ for $\al\in(0,1)$. \begin{proposition}\label{reg 0<al<=1} Suppose $0<\al<1$ and let $u$ be an entire minimizer of the energy functional $J$ satisfying $|u(x)|\leq M$. Assume H1, H2. Then we have \beqo u\in C_{loc}^{1,\beta}(\BR^n,\,\BR^m), \eeqo where $\beta$ is a constant defined by $\beta=\f{\al}{2-\al}$. In particular, this $C^{1,\b}$ regularity is sharp. \end{proposition} \begin{rmk} We would like to note that in the following proof, unless specifically stated, all the constants denoted by $C$ only depend on the upper bound $M$, the potential function $W$ and the dimensions $m,\, n$. \end{rmk} \begin{proof} For any ball $B_R(x)\subset \BR^n$, let $v$ be the harmonic function (which means each $v_i$ is harmonic) in $B_R(x)$ such that $u=v$ on $\pa B_R$. Since $v$ is harmonic, $\na v$ satisfies a Campanato type growth condition \beq\label{campanato} \int_{B_\rho}|\na v-(\na v)_\rho|^2\,dx\leq C(\f{\r}{R})^{n+2}\int_{B_R}|\na v-(\na v)_R|^2\,dx \eeq For $\r<R$, we deduce \beq\label{decay of nabla u} \begin{split} \int_{B_\r}|\na u-(\na u)_\r|^2\,dx&\leq C\int_{B_\r}|\na v-(\na v)_\r|^2\,dx+C\int_{B_R}|\na u-\na v|^2\,dx\\ &\leq C(\f{\r}{R})^{n+2}\int_{B_R}|\na v-(\na v)_R|^2\,dx+C\int_{B_R}|\na u-\na v|^2\,dx \end{split} \eeq For the second term, we use the minimization property to obtain \beq\label{mini} \int_{B_R}|\na u-\na v|^2\,dx= \int_{B_R}\left(|\na u|^2-|\na v|^2\right)\,dx \leq\int_{B_R}2\left(W(v)-W(u)\right)\,dx.
\eeq By hypotheses H1, H2 and $\|u\|_{L^\infty}\leq M$, we can easily verify that $W$ can be written as \beq\label{formula for W} W(u)=\left(\prod\limits_{i=1}^N |u-a_i|^\al \right)g(u), \eeq where $g$ is a function such that \beq\label{condition on g} g\in C^2(\overline{B}_M), \quad g(u)\geq C \text{ for some constant }C>0. \eeq When $\al\in(0,1)$, since both $v$ and $u$ are uniformly bounded, we compute \beq\label{diff w alpha} \begin{split} & W(v)-W(u)\\ =&\left(\prod\limits_{i=1}^N|v-a_i|^\al\right) g(v)-\left(\prod\limits_{i=1}^N|u-a_i|^\al\right) g(u)\\ \leq&\left(\prod\limits_{i=1}^N(|u-a_i|^\al+|u-v|^\al)\right)\left(g(u)+C|u-v|\right)-\left(\prod\limits_{i=1}^N|u-a_i|^\al\right) g(u)\\ \leq & C|u-v|^\al, \end{split} \eeq where $C$ is a constant only depending on $W, M, n, m$. By the H\"{o}lder, Poincar\'{e} and Young inequalities, we have \begin{align} \non&\int_{B_R}|u-v|^\al\,dx \\ \non\leq & C(\int_{B_R} |u-v|^2\,dx)^{\f{\al}{2}} R^{n(1-\f{\al}{2})}\\ \label{thirdline}\leq & C(\int_{B_R}|\na(u-v)|^2\,dx)^{\f{\al}{2}}R^{n(1-\f{\al}{2})+\al}\\ \non\leq& \delta \int_{B_R}|\na(u-v)|^2\,dx+C(\delta)R^{n+\f{2\al}{2-\al}} \end{align} where $\delta$ is a suitably chosen small number. Combining \eqref{mini}, \eqref{diff w alpha} and \eqref{thirdline}, we obtain \beq\label{decay of nabla u-v} \int_{B_R}|\na u-\na v|^2\,dx\leq CR^{n+2\beta},\quad \beta:=\f{\al}{2-\al} \eeq Therefore from \eqref{campanato}, \eqref{decay of nabla u} and \eqref{decay of nabla u-v} we get the following Campanato type decay estimate for the minimizer $u$ \beq\label{ucampanato} \int_{B_\rho}|\na u-(\na u)_\r|^2\,dx\leq c(\f{\r}{R})^{n+2}\int_{B_R}|\na u-(\na u)_R|^2\,dx+CR^{n+2\beta} \eeq By a standard iteration argument we conclude that there exists a constant $C$ such that \beq\label{decayrate} \int_{B_\rho}|\na u-(\na u)_\r|^2\,dx\leq C\r^{n+2\beta}, \eeq which by the Morrey-Campanato theory implies $u\in C^{1,\beta}$.
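The exponent $\beta=\f{\al}{2-\al}$ can also be anticipated from a one-dimensional model computation (a heuristic consistency check, not part of the proof): for the scalar model potential $W(u)=u^{\al}$, $u\geq 0$, the equation $u''=\al u^{\al-1}$ admits the solution \beqo u(x)=c\,x_{+}^{\f{2}{2-\al}},\qquad c=\left(\f{(2-\al)^{2}}{2}\right)^{\f{1}{2-\al}}, \eeqo where $x_{+}=\max\{x,0\}$. Its gradient is H\"{o}lder continuous at the free boundary point $x=0$ with exponent exactly $\f{2}{2-\al}-1=\f{\al}{2-\al}=\beta$, and no better.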
To show that this $C^{1,\beta}$ estimate is sharp, we need some results that will be proved later. First, by Theorem \ref{first part main theorem} and its proof, we know there exists $a_i\in A$ such that $\cl^n(\{u(x)=a_i\})>0$ and $\{u(x)=a_i\}$ contains interior points. Let $x_1\in\{u(x)=a_i\}$ be an interior point, and define \beqo r_1:=\sup \{r>0: B_r(x_1)\subset \{u(x)=a_i\}\}. \eeqo When $u\not\equiv a_i$, we know that $r_1\in(0,\infty)$ and there is a point $x_2\in \pa B_{r_1}(x_1)$ such that $u(x_2)=a_i$ and $x_2\in \overline{\{|u-a_i|>0\}}$. For $r$ sufficiently small, by Lemma \ref{nondeg} we have \beq\label{nondeg optimal reg} \sup\limits_{B_r(x_2)}|u-a_i|\geq cr^{1+\beta}. \eeq Also we claim that $\na u(x_2)=0$. On the one hand, by $u(x)\equiv a_i$ for any $x\in B_r(x_2)\cap B_{r_1}(x_1)$, we can easily check that $\pa_\nu u(x_2)=0$ where $\nu$ denotes the normal vector on $\pa B_{r_1}(x_1)$. On the other hand, take any unit vector $\mu$ which points to one of the tangential directions at $x_2$ on $\pa B_{r_1}(x_1)$. For any $|t|\leq r_1$, \beqo x_2+t\mu+(\sqrt{r_1^2-t^2}-r_1)\nu\in\pa B_{r_1}(x_1),\quad u(x_2+t\mu+(\sqrt{r_1^2-t^2}-r_1)\nu)=a_i. \eeqo Thus for any $i\in\{1,2,...,m\}$ we have \beqo 0=\f{d}{dt}\bigg|_{t=0}u_i(x_2+t\mu+(\sqrt{r_1^2-t^2}-r_1)\nu)=\na u_i(x_2)\cdot \mu. \eeqo Since $\mu$ is an arbitrary tangential direction, the claim $\na u(x_2)=0$ is proved. Finally, the vanishing of $\na u(x_2)$ together with \eqref{nondeg optimal reg} implies the sharpness of the $C^{1,\beta}$ regularity. \end{proof} \begin{rmk}\label{c1,alpha remark} For $1\leq \al<2$, by exactly the same argument we can also prove $u\in C^{1,\g}_{loc}$ for any $\g\in(0,1)$. Just notice that we need to replace \eqref{diff w alpha} with $W(u)-W(v)\leq C|u-v|$. When $\al=1$, in contrast to the scalar problem (see \cite{sha}), it is still open whether $u\in C^{1,1}_{loc}$.
\end{rmk} Now with the regularity result we can identify the Euler-Lagrange equation for the entire minimizer $u$ in the following lemma. \begin{lemma}\label{EL} Let $u$ be an entire minimizer of the functional $J$ satisfying $|u(x)|\leq M$. Assume H1, H2. Then we have \begin{enumerate} \itemsep0.5em \item If $1<\al<2$, then $u$ is a strong solution of \beq\label{eleq for 1<al<2} \D u=W_u(u),\qquad \forall x\in\BR^n. \eeq \item If $\al=1$, then $u$ is a strong solution of \beq\label{eleq for al=1} \D u=W_u(u)\chi_{\{d(u,A)>0\}}. \eeq \item If $0<\al<1$, then in the open set $\{x: d(u(x),A)>0\}$, $u$ solves the equation \eqref{eleq for 1<al<2}. \end{enumerate} Here $W_u(u)$ denotes the derivative of $W$ with respect to $u$, $\chi$ is the characteristic function and $d(x,A):=\dist(x,A)$. \end{lemma} \begin{proof} Take $D\subset\BR^n$ to be an arbitrary bounded Lipschitz domain, and for $\phi\in C_0^\infty(D,\BR^m)$, we compute the first variation of the energy $J_D(u)$: \beq\label{first variation} \begin{split} 0&\leq \int_{D} \left(\f12|\na (u+t\phi)|^2-\f12|\na u|^2+W(u+t\phi)-W(u)\right)\,dx\\ & = t\int_{D}\na u\cdot \na \phi\,dx+\f{t^2}{2}\int_{D}|\na\phi|^2\,dx+\int_{D}\left(W(u+t\phi)-W(u)\right)\,dx. \end{split} \eeq When $1<\al<2$, $W$ can be written as \eqref{formula for W} with the condition \eqref{condition on g}. One can directly compute \beq\label{formula W_u(u)} W_u(u)=\left( \sum\limits_{i=1}^N\al(u-a_i)|u-a_i|^{\al-2}\prod\limits_{k\neq i}|u-a_k|^\al \right)g(u)+\left(\prod\limits_{i=1}^N |u-a_i|^\al \right)D_ug(u). \eeq Since we already proved $u\in C^{1,\g}(D)$, it is obvious that $W_u(u)\in C(D)$. Therefore when we divide \eqref{first variation} by $t$ and take $t\ri 0+$ or $0-$, it follows that $u$ solves \eqref{eleq for 1<al<2} in $D$. For $\al=1$, dividing \eqref{first variation} by $t$ and letting $t\ri 0$, we can prove that \beqo \left|\int_{D}\na u\cdot \na \phi\,dx\right|\leq C\int_{D}|\phi|\,dx, \eeqo which implies $\D u\in L^\infty(D,\BR^m)$.
Moreover, on any sub-domain $K\subset D\cap \{d(u,A)>0\}$, the equation \eqref{eleq for 1<al<2} holds. Combining this with the fact that $\D u=0$ a.e. on $\{x:u(x)\in A\}$ (which is due to the fact that weak derivatives of Sobolev functions vanish a.e. on level sets \cite{book-eg}) and with the continuity of $u$, we conclude that $u$ is a strong solution of \eqref{eleq for al=1} in $D$. For $0<\al<1$, one can easily check that $u$ solves \eqref{eleq for 1<al<2} in the open set $\{x: d(u,A)>0\}$. However, since $W_u(u)$ blows up as $d(u,A)\ri 0$, the local integrability of $W_u(u)$ is not a priori known, so $u$ may not solve \eqref{eleq for al=1} on the whole space in the distributional sense. \end{proof} \begin{rmk} In the case $0<\al<1$, even though we cannot say $u$ is a distributional solution to \eqref{eleq for al=1}, we can deduce another equation for $u$ from the first domain variation. For any $\varphi\in C_0^1(D,\BR^n)$, it holds that \begin{align*} 0&=\f{d}{dt}\bigg|_{t=0}J_{D}(u(x+t\varphi(x)))\\ &=\int_D \left((\na u\na\va)\cdot \na u-(\di\va)(\f12|\na u|^2+W(u))\right)\,dx. \end{align*} This formulation has also been utilized in \cite{lp,Weiss3}. We present this form of the equation for completeness; it will not be used in the rest of the paper. \end{rmk} With the Euler-Lagrange equation \eqref{eleq for 1<al<2} and the formula \eqref{formula W_u(u)} for $W_u(u)$, we can easily improve the regularity of $u$ when $1<\al<2$. \begin{proposition}\label{reg al>1} When $1<\al<2$, the entire minimizer $u\in C^{2,\al-1}_{loc}(\BR^n,\BR^m)$. \end{proposition} \begin{proof} According to Lemma \ref{EL}, $u$ satisfies the equation \eqref{eleq for 1<al<2} when $1<\al<2$. Using the formula \eqref{formula W_u(u)} for $W_u(u)$ and the rough estimate $u\in C^{1,\gamma}_{loc}$, we see that $W_u(u)\in C^{\al-1}_{loc}(\BR^n,\BR^m)$. Then the $C^{2,\al-1}_{loc}$ regularity immediately follows from the classical Schauder estimate.
\end{proof} \section{Estimate of $\cl^n(I_0)$ and existence of the free boundary}\label{section 3} In this section we will prove the estimate \eqref{measure of I_0} for any nontrivial entire minimizer $u$ of the functional $J$. Furthermore, we also show that at every sufficiently large scale, the minimizer $u$ must contain at least two different phases. Take $W$ satisfying the hypotheses H1 and H2 and assume $u$ is an entire minimizer for the functional $J$. Before stating our new results, we first recall two estimates from \cite{book-afs} and \cite{agz} without proof, which will play an important role in our arguments. The reader may refer to \cite{book-afs} and \cite{agz} for detailed proofs. These estimates for the scalar case $m=1$ are obtained in \cite{cc}. \begin{proposition}\label{two estimates} When $0<\al<2$, for any entire minimizer $u$ satisfying $\|u\|_{L^\infty(\BR^n)}<\infty$, the following two estimates hold: \begin{enumerate} \itemsep0.5em \item \underline{The basic estimate} (see \cite[Lemma 2.2]{agz}) For any $x_0\in\BR^n$, there exists an $r_0$ such that for $r>r_0$, \beq\label{basic est} J_{B_r(x_0)}(u)\leq Cr^{n-1}, \eeq where the constant $C=C(M)$ is independent of $u$. We note that $r_0$ can be taken to be $0$ when $\al\in[1,2)$. \item \underline{The density estimate} (see \cite[Theorem 5.2]{book-afs}) Take $a\in A$ to be a global minimum point of $W$. If for some $r_0,\,\lam,\,\mu_0>0$, \beqo \mathcal{L}^n(B_{r_0}(x)\cap \{|u-a|>\lam\})\geq \mu_0, \eeqo then there exists a constant $C(\mu_0,\lam)>0$ such that \beq\label{density est} \mathcal{L}^n(B_{r}(x)\cap \{|u-a|>\lam\})\geq C(\mu_0,\lam) r^n, \quad \forall r\geq r_0. \eeq \end{enumerate} \end{proposition} Another important component of our arguments is the following non-degeneracy lemma. \begin{lemma}\label{nondeg} Assume $0<\al<2$. We take the point $a_1\in A$ and an entire minimizer $u$ for the functional $J$ such that $\|u\|_{L^\infty(\BR^n)}<\infty$.
There exists a suitably small number $\theta(W)$ and a constant $c=c(n,W)$, such that if $x_0\in\overline{\{0<|u-a_1|<\theta\}}$ and $B_r(x_0)\subset\overline{\{|u-a_1|<\theta\}}$, then \beq\label{nondegeneracy} \sup\limits_{B_r(x_0)}|u-a_1|\geq c(n,W)r^\f{2}{2-\al}, \eeq where the constant $c(n,W)$ only depends on the dimension $n$ and the potential function $W$. Moreover, $c(n,W)=O(\al)$ for $\al\ll 1$. \end{proof} \begin{proof} Without loss of generality, suppose $a_1=0^m$. First we require that \beqo \theta<\f12\min\limits_{i\neq j}|a_i-a_j|=:r_0. \eeqo By H2, when $|u|<r_0$, $W(u)$ can be written as $W(u)=|u|^\al\cdot g(u)$ for some $g(u)\in C^2(B_{r_0}(0^m))$ satisfying \beqo g(u)\geq C_g \eeqo for some constant $C_g>0$. Assume $|u(x_0)|>0$ (if $|u(x_0)|=0$, then we simply take a sequence of points $\{x_i\}$ converging to $x_0$ and satisfying $|u(x_i)|>0$). Taking $h(x)=|u|^{2-\al}-c|x-x_0|^2$ for some constant $c$ which will be determined later, by direct calculation we have that if $|u(x)|>0$, then \beq\label{laplaceh} \D h=(2-\al)(-\al)\f{\left|\na|u|\right|^2}{|u|^\al}+(2-\al)\f{|\na u|^2}{|u|^\al}+(2-\al)u\cdot D_u(g)+\al(2-\al)g(u)-2nc. \eeq If we take \beq\label{condition on theta, c} \theta<\min\{\f{\al C_g}{4\|D_ug(u)\|_{L^\infty(B_{r_0}(0^m))}}, \f12\min\limits_{i\neq j}|a_i-a_j| \},\quad c<\f{\al(2-\al)C_g}{8n}, \eeq then \eqref{laplaceh} implies that \beqo \Delta h\geq -(2-\al)\al \f{\left|\na|u|\right|^2}{|u|^\al}+(2-\al)\f{\left|\na u\right|^2}{|u|^\al}. \eeqo When $\al\leq 1$, it follows that $\Delta h\geq 0$ in $\{|u(x)|>0\}\cap B_r(x_0)$. Since $h(x_0)>0$ and $h(x)<0$ on $\pa\{|u(x)|>0\}\cap B_r(x_0)$, we must have \beqo \max\limits_{x\in\pa B_r(x_0)}h(x)>0, \eeqo which implies the lemma.
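We remark in passing that the exponent $\f{2}{2-\al}$ in \eqref{nondegeneracy} is exactly the one dictated by the scaling of the problem; this observation is not needed in the remainder of the proof. Indeed, for the model scalar potential $W(u)=|u|^\al$ in one dimension, one checks directly that \beqo u(x)=\left(\f{(2-\al)^2}{2}\right)^{\f{1}{2-\al}}\left(x_+\right)^{\f{2}{2-\al}} \eeqo solves $u''=W'(u)=\al u^{\al-1}$ in $\{u>0\}$ together with the first integral $\f12(u')^2=W(u)$, and it vanishes at the free boundary point $x=0$ precisely to the order $\f{2}{2-\al}$.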
For $\al\in(1,2)$, combining \eqref{laplaceh} and \eqref{condition on theta, c} we deduce that in $\{|u|>0\}\cap B_{r}(x_0)$ \begin{align*} &\D h+(2-\al)\al\f{\left|\na|u|\right|^2}{|u|^\al}-(2-\al)\f{\left|\na u\right|^2}{|u|^\al}\geq \f{\al(2-\al)C_g}{4}\\ \Rightarrow & \D h+\f{\al}{2-\al}\f{\na h\cdot \na(|u|^{2-\al}+c|x-x_0|^2)+4c^2|x-x_0|^2}{|u|^{2-\al}}\geq (2-\al)\f{\left|\na u\right|^2}{|u|^\al}+\f{\al(2-\al)C_g}{4}\\ \Rightarrow & \D h+\left( \f{\al\na(h+2c|x-x_0|^2)}{(2-\al)|u|^{2-\al}} \right)\cdot\na h+\f{4c\al(-h+|u|^{2-\al})}{(2-\al)|u|^{2-\al}}\geq \f{\al(2-\al)C_g}{4}\\ \Rightarrow & \D h+\left( \f{\al\na(h+2c|x-x_0|^2)}{(2-\al)|u|^{2-\al}} \right)\cdot\na h-\f{4c\al}{(2-\al)|u|^{2-\al}}\cdot h\geq 0. \end{align*} Here to derive the last inequality we further require that $c$ satisfies \beq\label{condition on c} c\leq \f{(2-\al)^2C_g}{16}. \eeq Then the maximum principle argument can be applied again to get \beqo \max\limits_{x\in\pa B_r(x_0)} |u(x)|> cr^\f{2}{2-\al}. \eeqo This completes the proof. \end{proof} \vspace{5mm} Now we are ready to prove the first part \eqref{measure of I_0} of the Theorem \ref{main}. For the sake of convenience, we rewrite the statement in the following theorem. \begin{theorem}[First part of Theorem \ref{main}]\label{first part main theorem} Let $x_0\in\BR^n$, $u: \BR^n\ri \BR^m$ be a bounded nonconstant entire minimizer of the energy $J$. Then there are positive constants $R_0$ and $c$ such that \beq\label{I_0 est} \cl^n(B_R(x_0)\cap I_0)\leq cR^{n-1},\quad R>R_0, \eeq where $I_0$ is defined in \eqref{I_0}, which is the region where $W(u)>0$. The constant $c$ only depends on the dimension $n$, the potential function $W$ and $\|u\|_{L^\infty(\BR^n)}$. \end{theorem} \begin{proof} Without loss of generality, suppose $x_0=0^n$ and write $B_R=B_R(0^n)$. 
According to the basic estimate \eqref{basic est} in Proposition \ref{two estimates}, we know that there exist positive constants $C_0, \,r_0$ such that for any $R>r_0$ \begin{equation} \label{energy density 0}\int_{B_R} \f12|\na u|^2+W(u)\,dx\leq C_0R^{n-1}. \end{equation} For convenience, we replace the balls $B_R$ by cubes centered at $0^n$. Define \beqo \tilde{S}_R:=\{x\in \BR^n: x_i\in(-R,R),\text{ for }i=1,2,...,n\}. \eeqo Let $L$ be a constant whose value will be specified later. For any positive integer $k$, we can divide the cube $\tilde{S}_{kL}$ into $(2k)^n$ identical cubes with the side length $L$. We number all these sub-cubes by $S_1,S_2,...,S_{K}$, where $K:=(2k)^n$, and we take $\theta$ to be the constant $\theta(W)$ in Lemma \ref{nondeg}. Then we define \beq\label{def sigma i,j} \sigma_i^j:=\cl^n(\{|u-a_j|<\f{\th}{2}\}\cap S_i), \quad \text{for }i=1,...,K,\; j=1,..., N. \eeq Take $\e:=\e(\theta)$ to be a small constant to be specified later, depending only on $\theta$ and $\|u\|_{L^\infty}$. We also introduce the notion of adjacent sub-cubes: $S_{i_1}$ and $S_{i_2}$ are called adjacent if and only if \beqo \overline{S_{i_1}}\cap \overline{S_{i_2}}\neq \varnothing,\quad 1\leq i_1\neq i_2\leq K. \eeqo We divide $\{S_i\}_1^K$ into the following five non-overlapping classes: \begin{itemize} \itemsep0.5em \item[1] Boundary sub-cubes of $\tilde{S}_{kL}$: \beqo T_1:=\{S_i:\text{ the number of adjacent cubes of }S_i \text{ is less than }3^n-1\}. \eeqo \item[2] Sub-cubes that contain two phases: \beqo T_2:=\{S_i: \exists j_0 \text{ s.t. } \s_i^{j_0}=\max\limits_{1\leq j\leq N}\s_i^j\leq(1-2\e)L^n, \;\max\limits_{j\neq j_0} \s_i^j\geq\f{\e}{N-1} L^n\}\backslash T_1. \eeqo \item[3] Sub-cubes that contain regions where $u$ stays away from every $a_j$: \beqo T_3:=\{S_i: \exists j_0 \text{ s.t. } \s_i^{j_0}=\max\limits_{1\leq j\leq N}\s_i^j\leq(1-2\e)L^n, \;\max\limits_{j\neq j_0} \s_i^j<\f{\e}{N-1} L^n\}\backslash T_1.
\eeqo \item[4] ``Interior" sub-cubes of the contact set $\{x:u(x)\in A\}$: \beqo T_4:=\{S_i: \exists j_0 \text{ s.t. } \s_i^{j_0}>(1-2\e)L^n \text{ and }\s_p^{j_0}> (1-2\e)L^n, \forall S_p\text{ adjacent to }S_{i}\}\backslash T_1. \eeqo \item[5] Sub-cubes close to the boundary of the contact set: \beqo T_5:=\{S_i: \exists j_0 \text{ s.t. } \s_i^{j_0}> (1-2\e)L^n \text{ and }\s_p^{j_0}\leq (1-2\e)L^n,\text{ for some } S_p\text{ adjacent to }S_{i}\}\backslash T_1. \eeqo \end{itemize} Now we estimate the number of cubes in each class. First note that $S_i\in T_1$ means $S_i$ is one of the boundary cubes of $\tilde{S}_{kL}$, therefore \beq\label{number of T_1} |T_1|\leq c_0(n)k^{n-1} \quad \text{ for some dimensional constant }c_0(n). \eeq For a cube $S_i\in T_2$, assume $\s_i^{j_0}=\max\limits_{1\leq j\leq N}\s_i^j\leq (1-2\e)L^n$ and $\s_i^{j_1}\geq \f{\e}{N-1}L^n$ for some $j_1\neq j_0$. By the definition of $\s_i^j$ and $\theta<\f12|a_{j_0}-a_{j_1}|$, we can infer that for any $r\in[\f{\th}{2},\,|a_{j_0}-a_{j_1}|-\f{\th}{2}]$, it holds that \beqo \cl^n(\{|u-a_{j_0}|<r\}\cap S_i)\geq \f{\e}{N-1} L^n,\quad \cl^n(S_i\backslash \{|u-a_{j_0}|<r\})\geq \f{\e}{N-1} L^n. \eeqo Applying the co-area formula and the relative isoperimetric inequality (see for example \cite{Thomas}), we have \begin{equation}\label{co-area0} \begin{split} &\int_{S_i}|\na u|^2\,dx\\ \geq & \f{1}{L^n} \left(\int_{S_i} |\na(u-a_{j_0})|\,dx\right)^2\\ \geq &\f{1}{L^n} \left( \int_{\th/2}^{|a_{j_0}-a_{j_1}|-\th/2} \ch^{n-1}(\{|u-a_{j_0}|=r\}\cap S_i)\,dr \right)^2\\ \geq & \f{1}{L^n} \left( \int_{\th/2}^{|a_{j_0}-a_{j_1}|-\th/2} C \left(\min\{\cl^n(\{|u-a_{j_0}|<r\}\cap S_i),\cl^n(S_i\backslash \{|u-a_{j_0}|<r\}) \}\right)^{\f{n-1}{n}} \,dr \right)^2\\ \geq &c_1(L,\th,\e)>0 \end{split} \end{equation} From the basic estimate \eqref{basic est} we get \beq\label{number of T_2} |T_2|\leq \f{C(kL)^{n-1}}{c_1}= c_2(L,\th,\e)k^{n-1},\quad k\geq k_0, \eeq where $k_0$ is a constant. 
For a cube $S_i\in T_3$, \beqo \cl^n\left(\left\{|u-a_j|>\f{\theta}{2},\;\forall 1\leq j\leq N\right\}\cap S_i\right)>\e L^n. \eeqo By hypotheses H1 and H2 on $W$ and the assumption $\|u\|_{L^\infty}<\infty$, there is a constant $c_3$ which depends on $\|u\|_{L^\infty},\theta$ such that \beqo W(u)\geq c_3,\quad \text{when }|u-a_j|>\f{\theta}{2},\; \forall 1\leq j\leq N. \eeqo Thus by \eqref{energy density 0} the number of sub-cubes in $T_3$ is bounded by \beq\label{number of T_3} |T_3|\leq c_4(L,\th,\e,\|u\|_{L^\infty})k^{n-1},\quad k\geq k_0. \eeq From now on we focus on the analysis of cubes in $T_4$ and $T_5$. Take $S_i$ in $T_4$ or $T_5$; then there is a $j_0$ such that $\s_i^{j_0}>(1-2\e)L^n$. In this case, we claim that when $\e$ is suitably chosen, we can ensure that \beqo \max\limits_{x\in S_i}|u(x)-a_{j_0}|<\theta. \eeqo Indeed, if there exists $x_0\in S_i$ such that $|u(x_0)-a_{j_0}|\geq \theta$, then there exists a constant $c_5(\theta,\|\na u\|_{L^\infty})$ such that \beqo \cl^n(\{|u-a_{j_0}|>\theta/2\}\cap S_i)\geq c_5. \eeqo We note that the uniform boundedness of $|\na u|$ follows from the $C^{1,\beta}$ regularity (Proposition \ref{reg 0<al<=1} and Proposition \ref{reg al>1}) and the assumption that $|u|$ is uniformly bounded. Moreover, $c_5$ does not depend on $j_0$. The claim then follows if we simply take \beq\label{value of epsilon} \e<\f{c_5}{2L^n}. \eeq \begin{lemma}\label{value of L} When $L$ is suitably chosen depending on $\theta$, in any cube $S_i\in T_4\cup T_5$, it holds that \beq\label{measureofa_j} \cl^n(\{u(x)=a_{j_0}\}\cap S_i)\geq \om_n\left(\f{L}{4}\right)^n, \eeq where $\om_n$ is the volume of the $n$-dimensional unit ball. \end{lemma} \begin{proof} Denote the center of $S_i$ by $z_i$ and suppose, for contradiction, that \beqo \cl^n(\{u(x)=a_{j_0}\}\cap S_i)< \om_n\left(\f{L}{4}\right)^n. \eeqo Then there must be a point $x_1\in B_{\f{L}{4}}(z_i)$ such that $x_1\in \overline{\{0<|u-a_{j_0}|<\theta\}}$.
Moreover, we have \beqo B_{\f{L}{4}}(x_1)\subset S_i\subset \overline{\{|u-a_{j_0}|<\theta\}}. \eeqo Therefore we are in a position to apply Lemma \ref{nondeg} to deduce that \beqo \sup\limits_{B_{\f{L}{4}}(x_1)}|u-a_{j_0}|\geq c(n,W)\left(\f{L}{4}\right)^{\f{2}{2-\al}}, \eeqo which contradicts $\max\limits_{x\in S_i}|u-a_{j_0}|<\theta$ if we choose the constant $L$ at the beginning satisfying $c(n,W)\left(\f{L}{4}\right)^{\f{2}{2-\al}}>2\theta$. This completes the proof of Lemma \ref{value of L}. \end{proof} If the cube $S_i\in T_4$, then by definition we have \beqo |u(x)-a_{j_0}|<\theta, \quad \forall x\in S_i\cup(\bigcup_{S_p\text{ adjacent to }S_i} S_p). \eeqo By the same argument as in the proof of the lemma above, we obtain that \beqo u(x)\equiv a_{j_0},\quad x\in S_i. \eeqo If $S_i\in T_5$, then there must be at least one adjacent cube of $S_i$, denoted by $S_{p}$, such that \beq\label{est in Sp} \cl^n(\{|u-a_{j_0}|>\f{\theta}{2}\}\cap S_{p})>\e L^n. \eeq We set \beqo Q_{S_i}:=S_i\cup(\bigcup_{S_p\text{ adjacent to }S_i} S_p). \eeqo Then by \eqref{measureofa_j}, \eqref{est in Sp} and the co-area formula, we can compute similarly as in \eqref{co-area0} to get \beqo \int_{Q_{S_i}}|\na u|^2\,dx\geq c_6(L, \th, \e). \eeqo Since each point can belong to at most $3^n$ different $Q_{S_i}$, utilizing \eqref{basic est} we conclude \beqo C(n)(kL)^{n-1}\geq \sum\limits_{S_i\in T_5}\int_{Q_{S_i}}|\na u|^2\,dx\geq c_6|T_5|, \eeqo which implies \beq\label{number of T_5} |T_5|\leq c_7(n, L,\th,\e)k^{n-1}. \eeq Finally, combining \eqref{number of T_1}, \eqref{number of T_2}, \eqref{number of T_3} and \eqref{number of T_5} we get \beq \cl^n(\tilde{S}_{kL}\cap I_0)\leq (|T_1|+|T_2|+|T_3|+|T_5|)L^n\leq c_8(n, L,\th,\e)(kL)^{n-1}. \eeq Since $B_{kL}\subset \tilde{S}_{kL}$, we obtain \eqref{I_0 est} after taking $k$ to be the smallest integer larger than $\f{R}{L}$ for sufficiently large $R$.
Also, if we carefully check the definitions of all the constants in the proof, we see that $c_8$ depends only on the dimension $n$, the potential $W$ and the uniform bound of $|u|$, but not on the specific solution $u$. This completes the proof of Theorem \ref{first part main theorem}. \end{proof} Theorem \ref{first part main theorem} implies that a bounded entire minimizer $u(x)$ must satisfy $W(u)=0$ in ``most of the space". Next we further show that at sufficiently large scales, $u$ must possess at least two different phases, each of which occupies a set of measure of order $R^n$. \begin{lemma}\label{lemma ci(th)} Let $x_0\in\BR^n$, $u:\BR^n\ri\BR^m$ be a bounded entire minimizer of $J$. Assume that $u\not\equiv a_i$ for any $i\in \{1,2,...,N\}$. We take an arbitrary constant $\theta<r_0:=\f12 \min\limits_{1\leq i\neq j\leq N}|a_i-a_j|$; then there exist positive constants $R_0, c(u,\theta)$ such that for any $R\geq R_0$, there are $a_i,a_j\in A$, which depend on $R$, satisfying \beq\label{ci(th)} \cl^n(B_R(x_0)\cap \{|u-a_k|<\theta\})\geq cR^n,\quad k=i,j. \eeq \end{lemma} \begin{proof} Since $u$ is nonconstant, by the $C^{1,\beta}$ regularity of $u$ there are some $R_1>0, \,0<\lam<r_0,\,\mu_0>0$ such that \beqo \cl^n(B_{R_1}(x_0)\cap \{|u-a_1|>\lam\})\geq \mu_0. \eeqo Then by the density estimate \eqref{density est} in Proposition \ref{two estimates}, there exists $\mu_1$ such that \beq\label{u-a1>lam} \cl^n(B_{R}(x_0)\cap \{|u-a_1|>\lam\})\geq \mu_1R^n,\quad \forall R\geq R_1. \eeq Take $\theta<r_0$ to be an arbitrary constant. By our hypotheses on $W$, there is a positive constant $C=C(\lam,\theta, \|u\|_{L^\infty})$ such that \beqo W(u)>C, \;\text{ when } |u-a_1|>\lam,\,|u-a_j|\geq \theta \text{ for any }j\neq 1.
\eeqo Applying the basic estimate \eqref{basic est} in Proposition \ref{two estimates}, for sufficiently large $R$, \beq\label{u-a_2>theta} \cl^n(B_R(x_0)\cap \{|u-a_1|>\lam,\,|u-a_j|\geq \theta \text{ for any }j\neq 1\})\leq C_2R^{n-1}, \eeq for some constant $C_2$. Combining \eqref{u-a1>lam} and \eqref{u-a_2>theta}, we obtain that \beqo \begin{split} &\cl^n(B_{R}(x_0)\cap (\bigcup\limits_{j\neq 1}\{|u-a_j|<\theta\}))\\ \geq & \cl^n(B_R(x_0)\cap \{|u-a_1|>\lam, |u-a_j|<\theta \text{ for some }j\neq 1\})\geq c_1(u,\theta)R^n, \quad \forall R>\tilde{R}_1, \end{split} \eeqo for some constants $\tilde{R}_1$ and $c_1$. The same argument also works for the set $B_R(x_0)\cap(\bigcup\limits_{j\neq k}\{|u-a_j|<\theta\})$ for any $k\in \{1,2,...,N\}$, i.e. there exist $\tilde{R}_k, \;c_k>0$ such that \beqo \cl^n(B_{R}(x_0)\cap (\bigcup\limits_{j\neq k}\{|u-a_j|<\theta\}))\geq c_k(u,\theta) R^n,\quad \forall R\geq \tilde{R}_k. \eeqo Finally, we take $R_0=\max\limits_{k} \tilde{R}_k$ and $c=\f{1}{N-1}\min\limits_k c_k$, and the conclusion of the lemma easily follows. \end{proof} In the following theorem, we show that in any ball $B_R(x_0)$ with radius $R$ large enough, the sets $\{u=a_i\}$ and $\{u=a_j\}$ ($a_i,\,a_j$ from Lemma \ref{lemma ci(th)}) must contain a set of measure of the order $R^n$. \begin{theorem}\label{two phase existence} Let $x_0\in \BR^n$, $u: \BR^n\ri\BR^m$ be a bounded entire minimizer of the energy $J$, and $u\not\equiv a_j$ for any $j\in\{1,2,...,N\}$. Then there are positive constants $R_0$ and $c$ (both depending on $u$) such that for any $R\geq R_0$, there are $a_i,a_j$ depending on $R$ such that \beq \min\{\mathcal{L}^n(B_R(x_0)\cap \{u=a_i\}),\mathcal{L}^n(B_R(x_0)\cap \{u=a_j\})\}\geq cR^n. \eeq \end{theorem} \begin{proof} Without loss of generality, suppose $x_0=0^n$ and write $B_R=B_R(0^n)$.
According to Proposition \ref{two estimates} and Lemma \ref{lemma ci(th)}, we know that for any sufficiently large $R$, there are $a_i,a_j\in A$ such that \eqref{ci(th)} holds. The proof relies on the same technique as the proof of Theorem \ref{first part main theorem}, so we will only present the main ingredients and omit some technical details. Take $L$ to be the same constant as in Theorem \ref{first part main theorem} and $k\in\mathbb{N}$. We consider the domain \beqo \tilde{S}_{kL}:=\{x\in \BR^n: x_i\in (-kL,kL)\}, \eeqo and then divide $\tilde{S}_{kL}$ into $K=(2k)^n$ identical sub-cubes $S_1,...,S_K$, each of which has side length $L$. We also recall the definition of $\s_i^j$ in \eqref{def sigma i,j}. By Lemma \ref{lemma ci(th)}, there are two phases $a_i, a_j$ (for simplicity we assume they are $a_1,a_2$) such that \beq\label{a_1,a_2 in S_kL} \cl^n(\tilde{S}_{kL}\cap \{|u-a_j|<\f{\th}{2}\})\geq c(kL)^n,\quad j=1,2. \eeq Take $\e:=\e(u,\theta)$ to be a small constant such that \begin{itemize} \itemsep0.5em \item[a.] \eqref{value of epsilon} holds. As a result, if $\s_i^j>(1-2\e)L^n$, then $|u(x)-a_j|<\theta$ for any $x\in S_i$. \item[b.] $\e\leq \f{c}{2^{n+3}}$, where $c$ is the constant in \eqref{a_1,a_2 in S_kL}. \end{itemize} Then we divide $\{S_i\}_1^K$ into the following two classes: \begin{itemize} \itemsep0.5em \item[1] $U_1:=\{S_i: \exists j_0 \text{ s.t. } \s_i^{j_0}=\max\limits_{1\leq j\leq N}\s_i^j\leq(1-2\e)L^n\}.$ \item[2] $U_2:=\{S_i: \exists j_0 \text{ s.t. } \s_i^{j_0}=\max\limits_{1\leq j\leq N}\s_i^j>(1-2\e)L^n\}.$ \end{itemize} From the proof of Theorem \ref{first part main theorem}, we have \beqo |U_1|\leq c_0(L,\theta,\e)k^{n-1}. \eeqo Let $K_1$ denote the number of sub-cubes $S_i$ satisfying $\s_i^1>(1-2\e)L^n$.
We obtain from \eqref{a_1,a_2 in S_kL} \beq \begin{split} c(kL)^n&\leq \sum\limits_{1\leq i\leq (2k)^n}\s_i^1\\ &\leq |U_1|L^n+K_1 L^n+ \left((2k)^n-|U_1|-K_1 \right)(2\e L^n)\\ &\leq c_0k^{n-1}L^n +K_1L^n+\f{c}{4} (kL)^n\quad (\text{Property b of }\e), \end{split} \eeq which immediately implies that $K_1\geq \f{c}{2}k^n$ whenever $k$ is large enough. Together with Lemma \ref{value of L} we have \beq\label{estimate two phase} \cl^n (\tilde{S}_{kL}\cap \{u=a_1\})\geq \f{c}{2}k^n\om_n (\f{L}{4})^n\geq c_1(kL)^n, \eeq for some constant $c_1=c_1(W,u)$. For $\{u=a_2\}$ the estimate \eqref{estimate two phase} still holds. One can easily check that \eqref{estimate two phase} implies the statement of Theorem \ref{two phase existence}. \end{proof} \section{Weiss' monotonicity formula and a growth estimate in the case $\al=1$}\label{growth est} Thanks to Theorem \ref{two phase existence}, we know that for a uniformly bounded entire minimizer $u$, the free boundary $\pa\{|u-a_i|>0\}$ $(i=1,2)$ must exist. In this section we will derive a growth rate estimate for $|u-a_i|$ away from the free boundary in the case $\underline{\al=1}$. From now on we fix $\al=1$ and assume $a_1=0^m$. By hypothesis H2, $W(u)$ has the form $W(u)=g(u)|u|$ for some $g\in C^2(B_\theta)$, where $\th$ is the constant in Lemma \ref{nondeg}. Since $u$ is a local minimizer, it satisfies the Euler-Lagrange equation near the free boundary, \beq\label{ELeq} \D u= g(u)\f{u}{|u|}+|u| D_u g(u). \eeq Also there exists a positive constant $C>0$ such that $g(u)>C$ when $|u|\leq \theta$. We use the notation \beq\label{notations} \Om(u):=\{|u(x)|>0\},\quad \G(u):=\pa^*\Om(u). \eeq Here $\pa^*$ denotes De Giorgi's reduced boundary. An easy observation is that for any point $x\in\G(u)$, we must have $|u(x)|=|\na u(x)|=0$.
The proof is straightforward: if at some point $x_0\in \G(u)$ we had $|\na u(x_0)|>0$, then by continuity of $\na u$ we would have $|\na u_i|\geq c$ in a small neighborhood $B_{r}(x_0)$, for some $1\leq i\leq m$ and $c>0$. The inverse function theorem implies that in $B_r(x_0)$, $\{u_i=0\}$ is an $(n-1)$-dimensional hypersurface, which further gives $x_0\not\in\pa^e\Om(u)$, where $\pa^e$ denotes the measure theoretic boundary. Finally we arrive at a contradiction thanks to the well-known result $\pa^* E\subset \pa^e E$ for any set $E$ of locally finite perimeter. For the definitions of the reduced boundary and the measure theoretic boundary, as well as their relationship, we refer to \cite[Chapter 5.7\&5.8]{book-eg} for details. We first establish an almost monotonicity formula in the region where $|u|<\th$. The proof closely follows the classical arguments of Weiss (see \cite{Weiss1,Weiss2}). \begin{lemma}\label{weiss lemma} Let $u$ be a solution of \eqref{ELeq} in $B_r(x_0)$ such that $|u|<\th$ in $B_r(x_0)$, and set \beq\label{def weiss} W(u,x_0,r)=\f{1}{r^{n+2}}\int_{B_r(x_0)}\left(\f12|\na u|^2+g(u)|u|\right)\,dx-\f{1}{r^{n+3}}\int_{\pa B_r(x_0)}|u|^2 \,d\mathcal{H}^{n-1}. \eeq Then $W(u,x_0,r)$ satisfies \beq\label{Weiss} \f{d}{dr} W(u,x_0,r) =r\int_{\pa B_1}|\f{du_r}{dr}|^2\,d\mathcal{H}^{n-1}+2r\int_{B_1}D_ug\cdot u_r|u_r|\,dx, \eeq where \beqo u_r(x):=\f{u(x_0+rx)}{r^2}. \eeqo \end{lemma} \begin{proof} First we write $W(u,x_0,r)$ as \beqo W(u,x_0,r)=\int_{B_1}\left(\f12|\na u_r|^2+g(r^2 u_r)|u_r|\right)\,dx-\int_{\pa B_1}|u_r|^2\,d\mathcal{H}^{n-1}.
\eeqo Then by direct calculation we have \begin{align*} &\f{d}{dr}W(u,x_0,r)\\ =&\int_{B_1}\left( \na u_r\cdot\f{d}{dr}(\na u_r)+D_u g(r^2 u_r)\cdot \f{d}{dr}(r^2 u_r)|u_r|+ g(r^2 u_r)|u_r|^{-1}u_r\cdot \f{d}{dr}u_r \right)\,dx\\ & -2\int_{\pa B_1}u_r\cdot\f{d}{dr}u_r\,d\mathcal{H}^{n-1}\\ =&\int_{B_1}\left(-\Delta u_r\cdot \f{d}{dr}u_r+D_u g(r^2 u_r)\cdot \f{d}{dr}(r^2 u_r)|u_r|+ g(r^2 u_r)|u_r|^{-1}u_r\cdot \f{d}{dr}u_r \right)\,dx\\ & -2\int_{\pa B_1}u_r\cdot\f{d}{dr}u_r\,d\mathcal{H}^{n-1}+\int_{\pa B_1}(x\cdot \na u_r)\cdot \f{d}{dr}u_r\,d\mathcal{H}^{n-1}\\ =&\int_{B_1}\left(-\Delta u_r\cdot \f{d}{dr}u_r+D_u g(r^2 u_r)\cdot \f{d}{dr}(r^2 u_r)|u_r|+ g(r^2 u_r)|u_r|^{-1}u_r\cdot \f{d}{dr}u_r \right)\,dx\\ &+\int_{\pa B_1}r|\f{d}{dr}u_r|^2\,d\mathcal{H}^{n-1}. \end{align*} Here we have used integration by parts in the second step and the formula $\f{d}{dr}u_r=\f{1}{r}(x\cdot \na u_r-2u_r)$ in the last step. Since $u$ satisfies the equation \eqref{ELeq}, direct computation implies that \beq\label{ELu_r} \Delta u_r=\left( g(r^2 u_r)\f{u_r}{|u_r|}+D_u g(r^2 u_r)r^{2}|u_r| \right). \eeq Substituting \eqref{ELu_r} into the above identity, we obtain \begin{align*} &\f{d}{dr}W(u,x_0,r)-\int_{\pa B_1}r|\f{d}{dr}u_r|^2\,d\mathcal{H}^{n-1}\\ =& \int_{B_1}\bigg\{ -\left( g(r^2 u_r) \f{u_r}{|u_r|}+D_u g(r^2 u_r)r^{2}|u_r| \right)\f{d}{dr}u_r\\ &\qquad +D_u g(r^2 u_r)\cdot \f{d}{dr}(r^2 u_r)|u_r|+ g(r^2 u_r)\f{u_r}{|u_r|}\cdot \f{d}{dr}u_r\bigg\}\,dx\\ =&2 r \int_{B_1}D_u g(r^2 u_r)\cdot u_r|u_r|\,dx. \end{align*} Hence we have proved \eqref{Weiss}. \end{proof} \begin{rmk} For other $\al\in[0,2)$, the analogous result still holds for \beqo W(u,x_0,r)=\f{1}{r^{n+2\ka-2}}\int_{B_r(x_0)}\f12|\na u|^2+W(u)\,dx-\f{\ka}{2r^{n+2\ka-1}}\int_{\pa B_r(x_0)}|u|^2 \,d\mathcal{H}^{n-1}, \eeqo where $\kappa:=\f{2}{2-\al}$ and $W(u)=g(u)|u|^\al$.
The derivative of $W(u,x_0,r)$ is given by \beqo \f{d}{dr} W(u,x_0,r) =r\int_{\pa B_1}|\f{du_r}{dr}|^2\,d\mathcal{H}^{n-1}+\ka r^{\ka-1}\int_{B_1}D_ug\cdot u_r|u_r|^\al\,dx. \eeqo For our purpose we only need the statement for $\al=1$. The proof for general $\al\in [0,2)$ is identical and we omit it here. \end{rmk} \begin{proposition}\label{prop growth} Let $\al=1$, let $u$ be a bounded entire minimizer and let $\G(u)$ be as defined in \eqref{notations}. There exist constants $r_0$ and $C$, which depend only on $\|u\|_{L^\infty}$ and the potential function $W$, such that \beq\label{growth estimate} |u(x)|\leq C \dist(x,\G(u))^2,\quad |\na u(x)|\leq C\dist(x,\G(u)), \eeq whenever $\dist(x,\G(u))\leq r_0$. \end{proposition} \begin{proof} This proposition and its proof are almost identical to \cite[Theorem 2]{asuw} (except now for a more general potential function). We present the whole argument here for completeness. The statement of the proposition is equivalent to \beqo \sup\limits_{x\in B_r(x_0)}|u(x)|\leq Cr^2,\quad \sup\limits_{x\in B_r(x_0)}|\na u(x)|\leq Cr, \eeqo whenever $x_0\in\G(u)$, $r\leq r_0$. By \eqref{ELeq} and the standard theory of elliptic regularity, it suffices to show \beq\label{growth est int form} \f{1}{r^n}\int_{B_r(x_0)}|u|\,dx\leq Cr^2,\quad \forall x_0\in \G(u),\; r\leq r_0, \eeq where $C$ and $r_0$ only depend on $\|u\|_{L^\infty}$ and the potential function $W$. Note that since $|u|$ is uniformly bounded, we have $u\in C^{1,\g}$, which further implies that $|\na u|$ is uniformly bounded. As a result, there is a constant $r_0$ such that $\dist(x,\G(u))\leq 2r_0$ implies $|u(x)|\leq \theta$, where $\th$ is the constant in Lemma \ref{nondeg}. Also, $W(u(x))$ has the form $g(u(x))|u(x)|$ for some smooth function $g(u)\geq C>0$ when $\dist(x,\G(u))\leq 2r_0$. Since $|\na u|$ is bounded and $r_0$ is a constant, we have that $W(u,x_0,r_0)$ is uniformly bounded by some constant $C_1$ independent of $u$ and $x_0$.
Here $W(u,x_0,r_0)$ is the quantity defined in \eqref{def weiss}. Using Lemma \ref{weiss lemma}, we compute for $r<r_0$ \beq\label{compute integral growth} \begin{split} \f{1}{r^{n+2}}\int_{B_r(x_0)}g(u)|u|\,dx& =W(u,x_0,r)-\f{1}{r^{n+2}}\int_{B_r(x_0)}\f12|\na u|^2\,dx\\ &\qquad +\f{1}{r^{n+3}}\int_{\pa B_r(x_0)}|u|^2\,d\ch^{n-1}\\ &=W(u,x_0,r)-\f{1}{r^{n+2}}\int_{B_r(x_0)}\f12|\na (u-p(x-x_0))|^2\,dx\\ &\qquad +\f{1}{r^{n+3}}\int_{\pa B_r(x_0)}|u-p(x-x_0)|^2\,d\ch^{n-1}\\ &\leq W(u,x_0,r_0)+\int_{r}^{r_0}2s\int_{B_1} |D_ug||\f{u(x_0+sx)}{s^2}|^2\,dx\,ds\\ &\qquad +\f{1}{r^{n+3}}\int_{\pa B_r(x_0)}|u-p(x-x_0)|^2\,d\ch^{n-1}, \end{split} \eeq for every $p(x)\in\ch$, where $\ch$ is defined by \beqo \begin{split} \ch:=&\{p(x): \; p(x)=(p_1(x),...,p_m(x)), \text{ each }p_i(x) \text{ is a }\\ &\quad \text{homogeneous harmonic polynomial of second order}\}. \end{split} \eeqo We would like to point out that the homogeneity and harmonicity of $p(x)$ are used in the second equality of \eqref{compute integral growth}. We already know that the first term in the last step of \eqref{compute integral growth} is bounded by a constant $C_1$ independent of $u$ and $x_0$. For the second term, since $|u(x_0+x)|\leq C|x|^{\f53}$ when $|x|\leq r_0$ by the $C^{1,\f23}$ regularity (cf. Remark \ref{c1,alpha remark} and the observation below \eqref{notations}), we have \beqo \int_{r}^{r_0}2s\int_{B_1} |D_ug||\f{u(x_0+sx)}{s^2}|^2\,dx\,ds\leq C\int_r^{r_0}s^{-3}\int_{B_1} |sx|^{\f{10}{3}}\,dx\,ds\leq C_2 \eeqo for some constant $C_2$. Because $g(u)\geq C>0$ in $B_r(x_0)$, in order to prove \eqref{growth est int form}, it suffices to show that there is a constant $C_3$, independent of $u$ and $x_0$, such that for any $x_0\in\G(u)$ and $r\leq r_0$, \beq\label{last term bound} \min\limits_{p\in\ch} \f{1}{r^{n+3}}\int_{\pa B_r(x_0)}|u-p(x-x_0)|^2\,d\ch^{n-1}\leq C_3. \eeq Let $p_{x_0,r}$ be the minimizer of the integral $\int_{\pa B_r(x_0)}|u-p(x-x_0)|^2\,d\ch^{n-1}$ among $p\in\ch$.
Then $p_{x_0,r}$ satisfies \beq\label{ortho} \int_{\pa B_r(x_0)}(u(x)-p_{x_0,r}(x-x_0))\cdot q(x-x_0)\,d\ch^{n-1}=0\quad \forall q\in\ch. \eeq Suppose by contradiction that \eqref{last term bound} is not true; then there is a sequence of (uniformly bounded) entire minimizers $\{u_k\}$, a sequence of points $x_k\in \G(u_k)$ as well as a sequence of radii $r_k\ri 0$ such that \beqo M_k:=\f{1}{r_k^{n+3}}\int_{\pa B_{r_k}(x_k)}|u_k-p_{x_k,r_k}(x-x_k)|^2\,d\ch^{n-1}\ri\infty. \eeqo Define \beqo v_k(x):=\f{u_k(x_k+r_kx)}{r_k^2},\qquad w_k:=\f{v_k-p_{x_k,r_k}}{\sqrt{M_k}}. \eeqo Then we immediately get \beqo \int_{\pa B_1(0^n)}|w_k|^2\,d\ch^{n-1}=1, \eeqo and we have \beq\label{compute w_k} \begin{split} &\int_{B_1(0^n)} \f12|\na w_k|^2\,dx-\int_{\pa B_1(0^n)}|w_k|^2\,d\ch^{n-1}\\ =&M_k^{-1}\left( \int_{B_1(0^n)} \f12|\na (v_k-p_{x_k,r_k})|^2\,dx-\int_{\pa B_1(0^n)}|v_k-p_{x_k,r_k}|^2\,d\ch^{n-1} \right)\\ =&M_k^{-1}\left( \int_{B_1(0^n)} \f12|\na v_k|^2\,dx-\int_{\pa B_1(0^n)}|v_k|^2\,d\ch^{n-1} \right)\\ \leq &M_k^{-1} W(u_k,x_k,r_k)\\ \leq & M_k^{-1}\left( W(u_k,x_k,r_0)+\int_{r_k}^{r_0} 2s \int_{B_1} |D_u g||\f{u_k(x_k+sx)}{s^2}|^2 \,dx\,ds\right)\\ \ri & 0\quad \text{as }k\ri\infty. \end{split} \eeq So $w_k$ is uniformly bounded in $W^{1,2}(B_1)$. Also we note that by \eqref{eleq for al=1} each $w_k$ satisfies the equation \beqo \D w_k=\f{1}{\sqrt{M_k}} \left( \f{v_k}{|v_k|}g(u_k)+|u_k|D_ug(u_k) \right)\chi_{\{|u_k|>0\}}, \eeqo which implies \beqo |\D w_k|\leq \f{C}{\sqrt{M_k}}\ri 0,\quad \text{as }k\ri\infty. \eeqo By elliptic estimates, $w_k$ is uniformly bounded in $C_{loc}^{1,\g}(B_1)$ for any $\g< 1$. Therefore we can extract a subsequence, still denoted by $w_k$, that converges to $w_0$ with the following properties: \begin{enumerate} \itemsep0.5em \item $w_k\ri w_0$ weakly in $H^1(B_1)$, strongly in $L^2(\pa B_1)$, $\int_{\pa B_1} |w_0|^2\,d\ch^{n-1}=1$.
\item $w_k\ri w_0$ in $C^{1,\g}_{loc}(B_1)$ for any $\g<1$; \item $\D w_0=0$; \item $|w_0(0^n)|=|\na w_0(0^n)|=0$; \item $\int_{\pa B_1} w_0\cdot q\,d\ch^{n-1}=0$ for any $q\in\ch$. This property follows from \eqref{ortho}. \end{enumerate} By \cite[Lemma 4.1]{Weiss3}, we know that for any $w_0$ satisfying (3) and (4), \beqo \int_{B_1}|\na w_0|^2\,dx\geq 2\int_{\pa B_1}|w_0|^2\,d\ch^{n-1}. \eeqo On the other hand, from \eqref{compute w_k} we know \beqo \int_{B_1}|\na w_0|^2\,dx\leq 2\int_{\pa B_1}|w_0|^2\,d\ch^{n-1}. \eeqo Therefore we have $\int_{B_1}|\na w_0|^2\,dx= 2\int_{\pa B_1}|w_0|^2\,d\ch^{n-1}$, which implies (again by \cite[Lemma 4.1]{Weiss3}) that each component of $w_0$ is a homogeneous harmonic polynomial of second order, i.e., $w_0\in \ch$. This contradicts properties (1) and (5). The proof is complete. \end{proof} \section{$(n-1)$-Hausdorff measure of the free boundary for $\al=1$} \label{section 5} In this section, we continue working with the potential function $W(u)$ satisfying H1 and H2 with $\underline{\al=1}$. Assume $u$ is a bounded entire minimizer of the energy $J$. We would like to study the $(n-1)$-Hausdorff measure of $\pa^* I_0$ and prove the second part of Theorem \ref{main}, i.e., the inequalities \eqref{free bdy lower bdd} and \eqref{est free bdy}. First we focus on the local estimate of $\pa^* \{u=a_i\}$, and we use the same notations and assumptions as in Section \ref{growth est}. Take $a_1=0^m$ and $W(u)=g(u)|u|$ for some $g\in C^2(B_\theta)$, with $\th$ as in Lemma \ref{nondeg}. The minimizer $u$ satisfies the Euler-Lagrange equation \eqref{ELeq}. Thanks to the growth estimate \eqref{growth est} and the non-degeneracy Lemma \ref{nondeg}, we have for every $x_0\in\G(u)$ (recall that $\G(u)$ is defined in \eqref{notations}) and small $r$, \begin{align} \label{control of u}c_1r^2& \leq \sup\limits_{x\in B_r(x_0)}|u(x)|\leq c_2r^2,\\ \label{control of nablau} c_1r &\leq \sup\limits_{x\in B_r(x_0)}|\na u(x)|\leq c_2r.
\end{align} \begin{thm}[Local estimate of $\G$]\label{local estimate} There are constants $r_0$ and $C_0$ such that \beq\label{bdy_meas:local} \mathcal{H}^{n-1}(\G(u)\cap B_{r_0}(z))\leq C_0\quad \text{for every }\;z\in \G(u). \eeq \end{thm} \begin{proof} Take the constant $r_0$ such that $|u(x)|\leq \th$ for any $x$ satisfying $\mathrm{dist}(x,\G(u))\leq 2r_0$. We will fix a ball $B_{2r_0}(z)$ for some $z\in \G(u)$ in the rest of the proof. We define \beqo v_i:=\pa_{x_i} u,\;\; i=1,2,...,n,\qquad \S_\e(u):=\{x\in B_{r_0}(z)\cap\{|u|>0\}: |\na u|<\e\}. \eeqo By differentiating the Euler-Lagrange equation \eqref{ELeq}, formally we have \begin{equation}\label{equation_for_vi} \begin{split} \D v_i=&|u|^{-1}g(u)v_i+ |u|^{-1}(D_ug\cdot v_i)u-|u|^{-3}g(u)(v_i\cdot u)u\\ &+|u|^{-1}(v_i\cdot u)D_ug+|u|(D^2_u g\cdot v_i). \end{split} \end{equation} Take the function $\psi_\e(x):\BR^+\ri [0,1]$ defined by \beqo \psi_\e(x)=\begin{cases} 1, & x\geq \e,\\ \f{x}{\e}, & x\in [0,\e). \end{cases} \eeqo We also choose a smooth cut-off function $\phi \in C_c^{\infty }(B_{2r_0}(z),\BR)$ such that \beqo \phi\equiv 1 \text{ in }B_{r_0}(z),\quad |\na \phi|\leq \f{C}{r_0}. \eeqo Let \beqo \ta:=B_{2r_0}(z)\cap \{|u|>0\}. \eeqo The key step of the proof is to estimate the following integral \beq\label{finite-perimeter-integral} I:= \int_{\ta}\na v_i\cdot \na\left[ \psi_\e(|v_i|)\f{v_i}{|v_i|} \phi \right]\,dx, \eeq from which estimate \eqref{meas est:Sigma_e} below follows. \textbf{Claim. }There exists a constant $C(g,r_0)$, which is independent of $\e,\,z$, such that \beq\label{claim est I} I\leq C(g,r_0). \eeq \begin{proof}[Proof of the Claim] Define \beqo \eta:=\psi_\e(|v_i|)\f{v_i}{|v_i|}\phi. \eeqo We first show that $\eta\in W_0^{1,2}(B_{2r_0}(z),\BR^m)$.
Indeed, by direct computation we have \begin{align*} \pa_j \eta&= \psi_\e'(|v_i|)\pa_j|v_i|\f{v_i}{|v_i|}\phi+\psi_\e(|v_i|)\f{\pa_j v_i}{|v_i|}\phi\\ &\quad -\psi_\e(|v_i|)v_i\f{\pa_j|v_i|}{|v_i|^2}\phi+\psi_\e(|v_i|)\f{v_i}{|v_i|}\pa_j\phi. \end{align*} By the definitions of $\psi_\e$, $\phi$ and the $W^{2,2}$ estimate of $u$, the right-hand side is $L^2$-integrable. Combining with the fact that $\phi\in C_0^{\infty}(B_{2r_0}(z))$, we get $\eta\in W_0^{1,2}(B_{2r_0}(z),\BR^m)$. Since $\D v_i$ is very singular as $|u|\ri 0$, we cannot directly integrate by parts to move all the derivatives onto $v_i$ in the domain $\ta$. Instead, we will switch $\pa_i$ and $\nabla$. For any $f\in C_0^\infty(B_{2r_0}(z),\BR^m)$, by integration by parts we have \beqo \int_{B_{2r_0}(z)}\na v_i\cdot\na f\,dx=\int_{B_{2r_0}(z)} \Delta u\cdot \pa_if\,dx. \eeqo This can be generalized to the vector-valued function $\eta$ in $W_0^{1,2}(B_{2r_0}(z),\BR^m)$, so we get \beq\label{ibp} \int_{\ta}\na v_i\cdot\na \eta\,dx=\int_{B_{2r_0}(z)} \na v_i\cdot\na \eta\,dx=\int_{B_{2r_0}(z)} \D u\cdot \pa_i\eta\,dx=\int_{\ta} \D u\cdot \pa_i\eta\,dx. \eeq Above we have exploited the fact that $D^2 u$ and $\na \eta$ vanish almost everywhere on $\{|u|=0\}$. So it suffices to prove \beq\label{claim after ibp} \int_{\ta} \D u\cdot \pa_i\eta \,dx\leq C(g,r_0). \eeq We define the set $\ta_\delta:=B_{2r_0}(z)\cap \{|u|>\d\}$; clearly $\ta_\d\subset \ta$ for any $\d>0$ and $\ta=\bigcup_{\d>0} \ta_\d$.
Then we have \begin{align} \nonumber &\int_{\ta} \D u\cdot \pa_i\eta\,dx\\ \nonumber=&\lim\limits_{\d\ri 0} \int_{\ta_\d} \D u\cdot\pa_i \eta\,dx\\ \label{two parts}=&\lim\limits_{\d\ri 0} \int_{\ta_\d} -\D v_i\cdot \eta\,dx+\lim\limits_{\d\ri 0} \int_{\pa \ta_\d} \Delta u\cdot \eta \g_i d\s. \end{align} For the first term in \eqref{two parts}, we further compute \begin{align} \nonumber &-\int_{\ta_\d} \D v_i\cdot [\psi_\e(|v_i|)\f{v_i}{|v_i|}\phi]\,dx\\ \label{estimate:integral}=&-\int_{\ta_\d} \psi_{\e}(|v_i|)|v_i|^{-1}\phi \bigg( |u|^{-1}g(u)|v_i|^2-|u|^{-3}g(u)(v_i\cdot u)^2\\ \nonumber &\qquad\qquad\qquad \qquad +2|u|^{-1}(D_ug\cdot v_i)(u\cdot v_i)+|u|(v_i\cdot D^2_ug\cdot v_i) \bigg)\,dx. \end{align} By the Cauchy--Schwarz inequality, \beqo |u|^{-1}g(u)|v_i|^2-|u|^{-3}g(u)(v_i\cdot u)^2\geq 0. \eeqo Substituting this into \eqref{estimate:integral} gives \beq \begin{split} \label{First part:bounded by C}&\int_{\ta_\d}-\D v_i\cdot \eta\,dx \\ \leq &\bigg| \int_{\ta_\d} \psi_{\e}(|v_i|)|v_i|^{-1}\phi \bigg( 2|u|^{-1}(D_ug\cdot v_i)(u\cdot v_i)+|u|(v_i\cdot D^2_ug\cdot v_i) \bigg)\,dx\bigg|\\ \leq &C(g, r_0). \end{split} \eeq The integral is bounded by a constant $C(g,r_0)$ (independent of the choice of $z$, $\d$, $\e$) because $|v_i|$, $D_ug$, $D_u^2g$, and $u$ are all uniformly bounded in $B_{2r_0}(z)$.
For the second part in \eqref{two parts}, we apply \eqref{ELeq} to obtain \beq \begin{split}\label{second part} &\int_{\pa \ta_\d} \D u\cdot \eta\gamma_i d\s\\ =&\int_{\pa\{|u|>\d\}\cap B_{2r_0}(z)}\D u\cdot \eta\g_i d\s\\ =&\int_{\pa\{|u|>\d\}\cap B_{2r_0}(z)}\left(g(u)\f{u}{|u|}+|u|D_ug(u)\right)\cdot\left( \psi_\e(|v_i|)\f{v_i}{|v_i|}\phi \right) \g_i d\s\\ =&\int_{\pa\{|u|>\d\}\cap B_{2r_0}(z)} g(u)\pa_i|u|\f{\psi_\e(|v_i|)}{|v_i|} \phi \g_i d\s+ \int_{\pa\{|u|>\d\}\cap B_{2r_0}(z)}|u|\pa_i g(u)\f{\psi_\e(|v_i|)}{|v_i|} \phi \g_i d\s\\ =&:\mathrm{I}+\mathrm{II}. \end{split} \eeq We notice that on $\pa\{|u|>\delta\}$, if $\left|\nabla |u|\right|\neq 0$, then the outward normal vector can be written as $\g=\f{-\na |u|}{\left| \na |u|\right|}$, so we obtain that $\mathrm{I}\leq 0$. For the term $\mathrm{II}$, we perform integration by parts again to get \beq \begin{split}\label{estimate:bdy int} \lim\limits_{\d\ri 0}\mathrm{II}\leq &\lim\limits_{\d\ri 0}\d \left|\int_{\pa\{|u|> \delta\}\cap B_{2r_0}(z)} \pa_ig(u)\f{\psi_\e(|v_i|)}{|v_i|}\phi\g_i\,d\sigma\right|\\ \leq &\lim\limits_{\d\ri 0}\delta \int_{\{|u|>\delta\}\cap B_{2r_0}(z)}\left|\pa_i\left(\pa_ig(u)\f{\psi_\e(|v_i|)}{|v_i|}\phi\right)\right|\,dx=0. \end{split} \eeq We note that in the last step of \eqref{estimate:bdy int}, the limit is zero since it is the product of $\d$ (which goes to zero) and a bounded integral (the bound depends on $\e$, but not on $\d$). Combining \eqref{estimate:integral}, \eqref{First part:bounded by C}, \eqref{second part} and \eqref{estimate:bdy int} concludes the proof of the Claim.
\end{proof} On the other hand, we compute \begin{align*} I&=\int_{\ta}\big(\na v_i\cdot \na\psi_\e(|v_i|)\f{v_i}{|v_i|}\phi\big)+\big( \na v_i\cdot \na(\f{v_i}{|v_i|})\psi_\e(|v_i|)\phi \big)\\ &\qquad\qquad\qquad +\big( \na v_i\cdot \f{v_i}{|v_i|}\na\phi \psi_\e(|v_i|) \big)\,dx\\ &=\f{1}{\e}\int_{\ta\cap\{0<|v_i|<\e\}} |\na |v_i||^2\phi\,dx\\ &\qquad + \int_{\ta\cap \{|v_i|>0\}} \left( |v_i|^{-1}|\na v_i|^2- |v_i|^{-1}|\na|v_i||^2\right)\psi_\e(|v_i|)\phi \,dx\\ &\qquad +\int_{\ta}\left( \na|v_i|\na\phi\psi_\e(|v_i|) \right)\,dx. \end{align*} Note that we have \beqo \begin{split} &\qquad\qquad\qquad |v_i|^{-1}|\na v_i|^2- |v_i|^{-1}|\na|v_i||^2\geq 0,\\ &\int_{\ta}\left( \na|v_i|\na\phi\psi_\e(|v_i|) \right)\,dx\leq \left(\int_{\ta}|\na |v_i||^2\right)^{\f12}\left(\int_{\ta} |\na \phi\psi_\e(|v_i|)|^2\right)^{\f12}\leq C(r_0). \end{split} \eeqo Combining with \eqref{claim est I}, we conclude that \beq\label{meas est:Sigma_e} \int_{\S_\e}|\na|v_i||^2\,dx\leq C(g,r_0)\e, \quad \forall \e\ll 1. \eeq We need the following lemma. \begin{lemma}\label{vi energy} There are constants $\e_0$ and $C$ such that for every $\e\leq \e_0$, \beqo \sum\limits_{i=1}^n\int_{B_\e(z)\cap\Om(u)}|\na|v_i||^2\,dx\geq C\cl^n(B_{\e}(z)),\quad \forall z\in\G(u). \eeqo \end{lemma} \begin{proof} Recall that $\Om(u)=\{|u|>0\}$. If the statement is false, there exist a sequence of uniformly bounded entire minimizers $\{u_j\}_{j=1}^\infty$, $\{z_j\in\G(u_j)\}$, $\{\e_j\}$ as well as $\{C_j\}$ such that \begin{align} &\nonumber\qquad\qquad \lim\limits_{j\ri\infty}\e_j=0,\quad \lim\limits_{j\ri\infty} C_j=0,\\ &\label{contradict1}\sum\limits_{i=1}^n\int_{B_{\e_j}(z_j)\cap\Om(u_j)}|\na|\pa_iu_j||^2\,dx< C_j\cl^n(B_{\e_j}(z_j)). \end{align} Define $f_j(x): B_1(0^n)\ri \BR^m$ as \beqo f_j(x):=\f{u_j(z_j+\e_jx)}{\e_j^2}.
\eeqo Then by Proposition \ref{prop growth}, \eqref{control of u} and \eqref{contradict1}, we have \begin{enumerate} \itemsep0.5em \item $|f_j(0^n)|=|\na f_j(0^n)|=0$ for every $j$; \item $\|f_j\|_{C^{1,\g}(B_1(0^n))}$ is uniformly bounded for every $\g<1$; \item $\sum\limits_{i=1}^n \int_{B_1}|\na |\pa_i f_j||^2\,dx\leq C_j\om_n$; \item $\sup_{B_{\f12}(0^n)} |f_j(x)|\geq C>0$ for some constant $C$. \end{enumerate} Using all these properties, we can get the following convergence results up to some subsequence: \begin{align} &\nonumber \qquad\qquad\quad f_j\ri f \text{ in }C^{1}(B_{1}(0^n)),\\ &\label{convergence1}|f(0^n)|=|\na f(0^n)|=0,\quad \sum\limits_{i=1}^n \int_{B_1}|\na |\pa_i f||^2\,dx=0. \\ & \label{convergence2}\quad \qquad\qquad \sup_{B_{\f12}(0^n)} |f(x)|\geq C>0. \end{align} Note that \eqref{convergence1} implies that $f\equiv 0$ in $B_1(0^n)$, which yields a contradiction with \eqref{convergence2}. The proof of Lemma \ref{vi energy} is complete. \end{proof} According to the Besicovitch covering lemma, we can find a covering of $\G(u)\cap B_{r_0}(z)$ by a finite family of balls $\{B_j\}_{j\in J}$, such that each ball is of radius $\e$ and centered on $\G(u)\cap B_{r_0}(z)$, and no more than $N_n$ balls from this family overlap, where $N_n$ is independent of $\e$ and of the set $\G(u)\cap B_{r_0}(z)$. By the estimate \eqref{control of nablau}, for each ball $B_j$ we have $B_j\cap\Om(u)\subset \S_{C\e}$ for some constant $C$. Consequently, we obtain \begin{align*} \sum\limits_{j\in J}\cl^n(B_j)&\leq C \sum\limits_{j\in J}\sum\limits_{i=1}^n\int_{B_j\cap\{|u|>0\}}|\na(|v_i|)|^2\,dx\quad \text{(by Lemma \ref{vi energy})}\\ &\leq C\sum\limits_{i=1}^n\int_{\S_{C\e}}|\na(|v_i|)|^2\,dx\leq C(g,r_0)\e \quad \text{(by \eqref{meas est:Sigma_e}).} \end{align*} This implies \beqo \sum\limits_{j\in J}\e^{n-1}\leq C(g,r_0), \eeqo for some constant $C(g,r_0)$ independent of the choice of $z$. Finally, letting $\e\ri 0$, we get \beqo \mathcal{H}^{n-1}(\G(u)\cap B_{r_0}(z))\leq C(g,r_0).
\eeqo The proof of Theorem \ref{local estimate} is complete. \end{proof} \begin{rmk}\label{local_rmk} Using Theorem \ref{local estimate}, we can prove that for any $R$, there exists $C(R)$ such that \beqo \ch^{n-1}(B_R(x)\cap \G(u))\leq C(R). \eeqo To prove this, one simply covers the set $B_R(x)\cap \G(u)$ by identical small balls $\{B_{r_0}(z_i)\}$ such that $z_i\in \G(u)$ for every $i$. We omit the details. \end{rmk} Now we use the local estimate (Remark \ref{local_rmk}) and Theorem \ref{first part main theorem} to prove the global estimate of the $\mathcal{H}^{n-1}$ measure of the free boundary $\pa^* I_0$. \begin{thm}[Second part of Theorem \ref{main}]\label{global estimate} Let $\al\in (0,2)$, $x_0\in \BR^n$. There are constants $c_1,r_0$ such that for any $r>r_0$, \beqo \mathcal{H}^{n-1}(\pa^* I_0 \cap B_r(x_0))\geq c_1r^{n-1}. \eeqo When $\al=1$, there are constants $c_2, r_0$ such that for $r\geq r_0$, \beqo \mathcal{H}^{n-1}(\pa^* I_0 \cap B_r(x_0))\leq c_2r^{n-1}. \eeqo \end{thm} \begin{rmk} Unlike the local estimate, here all the constants depend on $u$. \end{rmk} \begin{proof} Without loss of generality we take $x_0=0^n$. According to Theorem \ref{two phase existence}, for sufficiently large $r$, there are two phases $a_1,a_2\in A$, which depend on $r$, such that \beqo \cl^n(B_r\cap \{u=a_j\})\geq cr^n, \quad j=1,2. \eeqo Using the relative isoperimetric inequality, we obtain that \beqo \begin{split} &\ch^{n-1}(\pa^*\{|u-a_1|>0\}\cap B_r)\\ \geq &C\left(\min\{\cl^n(B_r\cap \{u=a_1\}), \cl^n(B_r\backslash \{u=a_1\} )\} \right)^{\f{n-1}{n}}\geq c_1r^{n-1}, \end{split} \eeqo which gives the lower bound. Note that this estimate is valid for any $0<\al<2$. For the upper bound, we fix $\al=1$ and examine more closely the proof of Theorem \ref{first part main theorem}. Again we consider the domain $\tilde{S}_{kL}$ and classify all the sub-cubes $\{S_i\}_1^{(2k)^n}$ into five classes $T_1$--$T_5$.
If $S_i\in T_4$, then $u(x)\equiv a_{j_0}$ for all $x\in \overline{S_i}$, which implies $\ch^{n-1}(S_i\cap \pa^* I_0)=0$. Moreover, for any $x_0\in\pa S_i$, by the definition of $T_4$ it holds that \beqo \max\limits_{x\in B_L(x_0)} |u(x)-a_{j_0}|\leq \theta. \eeqo By the proof of Lemma \ref{value of L}, we obtain that $B_{L/4}(x_0)\subset \{u=a_{j_0}\}$ and consequently $x_0\not\in \pa^* I_0$. As a result, we have \beqo \ch^{n-1}(\overline{S_i}\cap \pa^* I_0)=0. \eeqo Using estimates in Theorem \ref{first part main theorem} and Remark \ref{local_rmk}, we have for large enough $k$ \beq\label{upper bd free bdy} \begin{split} &\ch^{n-1}(\pa^*I_0\cap \tilde{S}_{kL})\\ \leq & \sum\limits_{S_i\in T_1\cup T_2\cup T_3\cup T_5} \ch^{n-1}(\pa^*I_0\cap \overline{S_i}) \\ \leq &(|T_1|+|T_2|+|T_3|+|T_5|) C(L)\\ \leq & c_2(W,u) k^{n-1}. \end{split} \eeq The upper bound follows immediately from \eqref{upper bd free bdy} and the proof is complete. \end{proof}
\section{Introduction} Sampling from a Nash equilibrium is a well-known method for proving existence of a simple approximate Nash equilibrium. By the sampling method, the (possibly complicated) mixed strategy $x_i$ of player $i$ is replaced by $k$ i.i.d. samples of pure strategies from the distribution $x_i$. Each of these $k$ samples is then played with probability $1/k$, and together they form a simple $k$\emph{-uniform strategy} $s_i$. Equivalently, $k$-uniform strategies are mixed strategies that assign to each pure strategy a rational probability with denominator $k$. The main advantage of the $k$-uniform strategy $s_i$ over the original strategy $x_i$ is that there are at most $m^k$ such strategies (actually $\binom{m+k-1}{k}$), where $m$ is the number of actions of player $i$. Therefore, in the case where we do not know the original strategy $x_i$ (and thus we cannot produce the strategy $s_i$ from $x_i$), we can \emph{search} for the strategy $s_i$ over a relatively small set of size $m^k$. The sampling method has a very important consequence for the computation of approximate Nash equilibria. If we prove existence of a $k$-uniform approximate Nash equilibrium $(s_i)_{i=1}^n$ for small $k$, then we need only search exhaustively for an approximate Nash equilibrium over all the possible $n$-tuples of $k$-uniform strategies. Although this method seems naive, it provides the best upper bound known today for computing an approximate Nash equilibrium. Althofer \cite{A} was the first to introduce the sampling method, when he studied two-player zero-sum games and showed existence of $k$-uniform approximately optimal strategies with $k=O(\log m)$. Althofer \cite{A} also showed that the order of $\log m$ is optimal (for two-player games). Lipton, Markakis, and Mehta \cite{LMM} generalized this result to all two-player games; i.e., they proved existence of a $k$-uniform approximate Nash equilibrium for $k=O(\log m)$.
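The sampling step itself is elementary. The following is a minimal illustrative sketch (the function name, the dictionary representation of a mixed strategy, and the fixed random seed are our own choices, not taken from the papers cited above):

```python
import random
from collections import Counter

def k_uniform_sample(x, k, rng=random.Random(0)):
    """Replace a mixed strategy x (a dict mapping actions to probabilities)
    by the empirical distribution of k i.i.d. samples drawn from x.
    The result assigns each action a rational probability c/k with c an integer."""
    actions = list(x)
    draws = rng.choices(actions, weights=[x[a] for a in actions], k=k)
    counts = Counter(draws)
    return {a: counts[a] / k for a in actions}

# Example: a mixed strategy over three actions, sampled with k = 8.
s = k_uniform_sample({0: 0.5, 1: 0.3, 2: 0.2}, k=8)
assert abs(sum(s.values()) - 1.0) < 1e-12          # still a probability vector
assert all((p * 8).is_integer() for p in s.values())  # every probability is c/8
```

By construction, the support of the sampled strategy is contained in the support of the original one, which is the property the existence proofs below exploit.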
For $n$-player games, Lipton, Markakis, and Mehta \cite{LMM} proved existence of a $k$-uniform approximate Nash equilibrium for $k=O(n^2 \log m)$. H\'emon, Rougemont, and Santha \cite{HRS} improved this to $k=O(n \log m)$. In the present paper, we prove existence of a $k$-uniform approximate Nash equilibrium for $k=O(\log n + \log m)$ (see Theorem \ref{theo:main}). The results in \cite{LMM} and \cite{HRS} yield a $poly(N^{\log N})$ algorithm for computing an approximate Nash equilibrium (see \cite{N}), where $N=nm^n$ is the input size. Our result yields a $poly(N^{\log \log N})$ algorithm for games where the number of actions of each player is polynomial in $n$ (the number of players). To our knowledge, the best previously known upper bound for this class of games is the $poly(N^{\log N})$ of \cite{LMM}. Our second result establishes an inverse connection between the entropy of Nash equilibria in the game and the time it takes the sampling-method algorithm to find an approximate Nash equilibrium (see Theorem \ref{theo:ent}). In particular, this result generalizes the result of Daskalakis and Papadimitriou \cite{DP} on the existence of a polynomial-time algorithm for an approximate Nash equilibrium in \emph{small-probability games}, which are a sub-class of the games where the entropy of a Nash equilibrium is very high. Daskalakis and Papadimitriou \cite{DP} proved this result for two-player games. A corollary of our result (see Corollary \ref{cor:small}) is that an appropriate generalization of that statement holds for any number of players $n$. \section{The results} We consider $n$-player games with $m$ actions for each player.\footnote{All the results in the paper hold also for the case where each player has a different number of actions (i.e., player $i$ has $m_i$ actions). For simplicity, we assume throughout that all players have the same number of actions $m$.} The \emph{size of the game} is denoted by $N:=nm^n$. We use the following standard notation.
The set of players is $[n]=\{1,2,...,n\}$. The set of actions of each player is $A_i=[m]=\{1,2,...,m\}$. The set of strategy profiles is $A=[m]^n$. The payoff function of player $i$ is $u_i:A\rightarrow [0,1]$. The payoff function profile is denoted by $u=(u_i)_{i\in [n]}$. The set of probability distributions over a set $B$ is denoted by $\Delta(B)$. The set of mixed actions of player $i$ is $\Delta(A_i)$. The payoff function can be multilinearly extended to $u_i:\Delta(A)\rightarrow [0,1]$. A mixed action profile $x=(x_i)_{i\in [n]}$, where $x_i \in \Delta(A_i)$, is an $\varepsilon$-\emph{equilibrium} if no player can gain more than $\varepsilon$ by a unilateral deviation; i.e., $u_i(x)\geq u_i(a_i,x_{-i})-\varepsilon$, for every player $i$ and every action $a_i\in [m]$, where $x_{-i}$ denotes the action profile of all players other than $i$. A $0$-equilibrium is called an \emph{exact} or \emph{Nash} equilibrium. A mixed strategy $x_i\in \Delta(A_i)$ is called $k$\emph{-uniform} if $x_i(a_i)=c_i/k$, where $c_i$ is a nonnegative integer, for every action $a_i\in A_i$. Equivalently, a $k$-uniform strategy is a uniform distribution over a multi-set of $k$ pure actions. A strategy profile $x=(x_i)_{i\in [n]}$ will be called $k$\emph{-uniform} if every $x_i$ is $k$-uniform. We use the notation $f(x)=poly(g(x))$ if there exists a constant $c$ such that $f(x)\leq g(x)^{c}$ for large enough $x$. \subsection{General games} Our Main Theorem states the following: \begin{theorem}\label{theo:main} Every $n$-player game with $m$ actions for each player admits a $k$-uniform $\varepsilon$-equilibrium for every \begin{equation*} k\geq \frac{8(\ln m + \ln n - \ln \varepsilon + \ln 8)}{\varepsilon^2}. \end{equation*} \end{theorem} \begin{corollary}\label{cor:main} Let $m=poly(n)$, and let $N=nm^n$ be the input size of an $n$-player $m$-action normal-form game. For every constant $\varepsilon>0$, there exists an algorithm for computing an $\varepsilon$-equilibrium in $poly(N^{\log \log N})$ steps.
\end{corollary} \begin{proof}[Proof of Corollary \ref{cor:main}] The number of all the possible $k$-uniform profiles is at most $m^{nk}$. Note that \begin{equation*} m^{nk}=poly(m^{n \log n})=poly((m^n)^{\log \log (m^n)})=poly(N^{\log \log N}). \end{equation*} Therefore the exhaustive search algorithm that searches for an $\varepsilon$-equilibrium over all possible $k$-uniform profiles finds such an $\varepsilon$-equilibrium after at most $poly(N^{\log \log N})$ iterations. \end{proof} \begin{proof}[Proof of Theorem \ref{theo:main}] The proof uses the sampling method. Let \linebreak $k \geq \frac{8(\ln m + \ln n - \ln \varepsilon + \ln 8)}{\varepsilon^2}$, and let $x=(x_i)_{i\in [n]}$ be an exact equilibrium of the game $u=(u_i)_{i\in [n]}$. For every player $i$, we sample $k$ i.i.d. pure strategies $(b^i_j)_{j \in [k]}$ according to the distribution $x_i$ ($b^i_j \in A_i$). Denote by $s_i$ the uniform distribution over the pure actions $(b^i_j)_{j \in [k]}$. It is enough to show that with positive probability the profile $(s_i)_{i\in [n]}$ forms an $\varepsilon$-equilibrium. For every player $i$ and strategy $j\in A_i=[m]$, we define a set of forbidden values of $s$: \begin{equation*} E_{i,j}=\{\mathbf{s}\in \bigtimes_{l\in[n]}\Delta(A_l):|u_i(j,x_{-i})-u_i(j,\mathbf{s_{-i}})|\geq \frac{\varepsilon}{2}\}. \end{equation*} Note that almost every realization of $s$ is absolutely continuous with respect to $x$, written $s\ll x$; i.e., the event $\{\mathrm{support}(s)\subset \mathrm{support}(x)\}$ has probability 1.
Therefore, it is sufficient to verify that $\mathbb{P}(s\notin\cup_{i,j} E_{i,j})>0$, since every strategy profile $\mathbf s\ll x$, $\mathbf{s}\notin \cup_{i,j} E_{i,j}$ is an $\varepsilon$-equilibrium, by \begin{multline*} u_i(a_i,\mathbf{s_{-i}}) \leq u_i(a_i,x_{-i})+\frac{\varepsilon}{2}\leq \sum_{b\in A_i} \mathbf{s_i}(b) u_i(b,x_{-i}) +\frac{\varepsilon}{2}\\ \leq \sum_{b\in A_i} \mathbf{s_i}(b) u_i(b,\mathbf{s_{-i}}) +\varepsilon = u_i(\mathbf s)+\varepsilon, \end{multline*} where the second inequality holds because all the strategies in the support of $\mathbf{s_i}$ are in the support of $x_i$, which contains only best replies to $x_{-i}$. To show that $\mathbb{P} (s\in\cup_{i,j} E_{i,j})<1$, it is sufficient to show that $\mathbb{P}(s\in E_{i,j})\leq \frac{1}{mn}$ because we have $mn$ such events $\{s\in E_{i,j}\}$. Up to this point, the arguments of the proof are similar to \cite{LMM} and \cite{HRS}. The estimation of the probability $\mathbb{P}(s\in E_{i,j})$, however, uses more delicate arguments. Let us estimate $\mathbb{P}(s\in E_{1,1})$. We begin by rewriting the payoff of player 1. For every $l\in[k]$, we can write \begin{equation*} u_1(1,s_{-1})=\frac{1}{k^{n-1}}\underset{j_2,...,j_n \in [k]}{\sum} u_1(1,b^2_{j_2+l},b^3_{j_3+l},...,b^n_{j_n+l}), \end{equation*} where the indices $j_i+l$ are taken modulo $k$. If we take the average over all possible $l$, we have \begin{equation}\label{eq:pay} u_1(1,s_{-1})=\frac{1}{k^{n-1}}\underset{j_2,...,j_n \in [k]}{\sum} \frac{1}{k} \underset{l\in[k]}{\sum} u_1(1,b^2_{j_2+l},b^3_{j_3+l},...,b^n_{j_n+l}).
\end{equation} For every initial profile of indexes $j_*=(j_2,j_3,...,j_n)\in [k]^{n-1}$ and every $l\in [k]$, we denote $b^{-1}_{j_*+l}:=(b^2_{j_2+l},b^3_{j_3+l},...,b^n_{j_n+l})\in A_{-1}$, and we define the random variable \begin{equation}\label{eq:d} d(j_*):= \begin{cases} 0 & \text{if } \left\lvert \frac{1}{k}\underset{l\in[k]}{\sum}u_1(1,b^{-1}_{j_*+l})-u_1(1,x_{-1}) \right\rvert \leq \dfrac{\varepsilon}{4} \\ 1 & \text{otherwise.} \end{cases} \end{equation} By the definition of $d(j_*)$, we have \begin{equation}\label{eq:din} d(j_*)+\frac{\varepsilon}{4} \geq \left\lvert \frac{1}{k}\underset{l\in[k]}{\sum}u_1(1,b^{-1}_{j_*+l})-u_1(1,x_{-1}) \right\rvert. \end{equation} Note also that for any fixed $j_*$ the random action profiles $b^{-1}_{j_*+1},\ldots,b^{-1}_{j_*+k}$ are independent. Therefore by Hoeffding's inequality (see \cite{H}) we have \begin{equation}\label{eq:hof} \mathbb{E}(d(j_*))\leq 2 e^{-\frac{k\varepsilon^2}{8}}. \end{equation} Using representation (\ref{eq:pay}) of the payoffs and inequalities (\ref{eq:din}) and (\ref{eq:hof}), we get \begin{equation}\label{eq:fin} \begin{aligned} \mathbb{P} (s\in E_{1,1}) &= \mathbb{P} \left( \left\lvert \frac{1}{k^{n-1}} \underset{j_* \in [k]^{n-1}}{\sum} \frac{1}{k} \underset{l\in[k]}{\sum} u_1(1,b^{-1}_{j_*+l})-u_1(1,x_{-1}) \right\rvert \geq \frac{\varepsilon}{2} \right) \\ &\leq \mathbb{P} \left( \frac{1}{k^{n-1}} \underset{j_* \in [k]^{n-1}}{\sum} \left\lvert \frac{1}{k} \underset{l\in[k]}{\sum} u_1(1,b^{-1}_{j_*+l})-u_1(1,x_{-1}) \right\rvert \geq \frac{\varepsilon}{2} \right) \\ &\leq \mathbb{P} \left( \frac{1}{k^{n-1}} \underset{j_* \in [k]^{n-1}}{\sum} d(j_*) \geq \frac{\varepsilon}{4} \right) \leq \frac{8 e^{-\frac{k\varepsilon^2}{8}}}{\varepsilon} \end{aligned} \end{equation} where the last inequality follows from Markov's inequality. Putting $k\geq \frac{8(\ln m + \ln n - \ln \varepsilon + \ln 8)}{\varepsilon^2}$ in inequality (\ref{eq:fin}), we get $\mathbb{P} (E_{1,1}) \leq \frac{1}{mn}$. 
\end{proof} \subsection{Games with a high-entropy equilibrium} In the sequel it will be convenient to consider the set of $k$-uniform strategies as the set of \emph{ordered} $k$-tuples of pure actions. To avoid ambiguity we will call those strategies $k$\emph{-uniform ordered strategies}.\footnote{Many $k$-uniform ordered strategies correspond to the same mixed strategy of the player in the game.} Now the number of $k$-uniform ordered profiles is exactly $m^{nk}$. The algorithm of Corollary \ref{cor:main} suggests that we should search over all the possible $k$-uniform profiles (or $k$-uniform ordered profiles), one by one, until we find an approximate equilibrium. Consider now the case where a large fraction of the $k$-uniform ordered profiles, say a fraction of $1/r$, form an approximate equilibrium. In such a case we can pick $k$-uniform ordered profiles \emph{at random}, and then we will find an approximate equilibrium after $r$ samples in expectation. Define the \emph{$k$-uniform random sampling algorithm} ($k$-URS) to be the algorithm described above; i.e., it samples uniformly at random $n$-tuples of $k$-uniform ordered strategies and checks whether this profile forms an $\varepsilon$-equilibrium.\footnote{Checking whether a strategy profile forms an approximate equilibrium can always be done in $poly(N)$ time. Actually, it can even be done by using only $poly(n,m)$ samples from the mixed profile. Using the samples, the answer will be correct with a probability that is exponentially (in $n$ and $m$) close to 1 (see, e.g., \cite{B}, proof of Theorem 2).} An interesting question arises: For which games does the $k$-URS algorithm find an approximate equilibrium quickly? Daskalakis and Papadimitriou \cite{DP} focused on two-player games with $m$ actions, and they showed that the $k$-URS algorithm finds an approximate equilibrium after $poly(m)$ samples for \emph{small-probability games}.
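The $k$-URS procedure just described can be sketched in a few lines. The following is an illustrative Python implementation under our own data conventions (payoff tables as dictionaries from pure profiles to payoffs in $[0,1]$; the names are ours); it is not code from \cite{DP} or from the present paper:

```python
import itertools
import random

def expected_payoff(u_i, profile):
    """Expected payoff for a payoff table u_i (dict: pure profile -> payoff)
    under a mixed profile (tuple of dicts mapping actions to probabilities)."""
    total = 0.0
    for a in itertools.product(*(list(p) for p in profile)):
        w = 1.0
        for p, ai in zip(profile, a):
            w *= p[ai]
        total += w * u_i[a]
    return total

def is_eps_equilibrium(payoffs, actions, profile, eps):
    """True if no player gains more than eps by a unilateral pure deviation."""
    for i, u_i in enumerate(payoffs):
        base = expected_payoff(u_i, profile)
        for ai in actions[i]:
            deviation = profile[:i] + ({ai: 1.0},) + profile[i + 1:]
            if expected_payoff(u_i, deviation) > base + eps:
                return False
    return True

def k_urs(payoffs, actions, k, eps, rng=random.Random(0), max_samples=10000):
    """k-uniform random sampling: draw an ordered k-tuple of pure actions
    uniformly at random for each player and test the induced profile."""
    for t in range(1, max_samples + 1):
        profile = []
        for acts in actions:
            draws = [rng.choice(acts) for _ in range(k)]
            profile.append({a: draws.count(a) / k for a in acts})
        if is_eps_equilibrium(payoffs, actions, tuple(profile), eps):
            return tuple(profile), t
    return None, max_samples

# Matching pennies: player 0 wants to match, player 1 wants to mismatch.
u0 = {(a, b): 1.0 if a == b else 0.0 for a in (0, 1) for b in (0, 1)}
u1 = {ab: 1.0 - v for ab, v in u0.items()}
payoffs, actions = [u0, u1], [(0, 1), (0, 1)]
assert is_eps_equilibrium(payoffs, actions, ({0: 0.5, 1: 0.5}, {0: 0.5, 1: 0.5}), 0.0)
profile, samples = k_urs(payoffs, actions, k=4, eps=0.25)
assert profile is not None
```

Matching pennies has a high-entropy (uniform) equilibrium, so near-uniform $k$-tuples, which are drawn frequently, already pass the $\varepsilon$-equilibrium test; this is exactly the phenomenon Theorem \ref{theo:ent} quantifies.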
A \emph{small-probability game} is a game that admits a Nash equilibrium where each pure action is played with probability at most $c/m$ for some constant $c$. Here we generalize the result of Daskalakis and Papadimitriou to $n$-player games. Instead of focusing on the specific class of small-probability games, we establish a general connection between the entropy of equilibria in the game and the expected number of samples of the $k$-URS algorithm until an approximate Nash equilibrium is found. \begin{theorem}\label{theo:ent} Let $u$ be an $n$-player game with $m$ actions for each player, with a Nash equilibrium $x=(x_i)$. Let $k\geq \max\{\frac{16}{\varepsilon^2}(\ln n + \ln m -\ln \varepsilon +2), e^{16/\varepsilon^2}\}=O(\log m + \log n)$; then the $k$-uniform random sampling algorithm finds an $\varepsilon$-equilibrium after at most $4\cdot 2^{k(n\log_2 m -H(x))}$ samples in expectation, where $H(x)$ is the Shannon entropy of the Nash equilibrium $x$. \end{theorem} The following corollary of this theorem is straightforward. \begin{corollary}\label{cor:ent} Families of games where $n\log_2 m - \max\limits_{x\in \mathrm{NE}}H(x)$ is bounded admit a $poly(m,n)$ probabilistic algorithm for computing an approximate Nash equilibrium. \end{corollary} The corollary follows from the fact that $k=O(\log m + \log n)$, and therefore $4\cdot 2^{kO(1)} = poly(n,m)$. A special case where $n\log_2 m-H(x)$ is constant is that of small-probability games with a \emph{constant} number of players $n$. \begin{corollary}\label{cor:small} Let $c\geq 1$, and let $u$ be an $n$-player $m$-action game with a Nash equilibrium $x=(x_i)_{i\in [n]}$, where $x_i(a_i) \leq \frac{c}{m}$ for all players $i$ and all actions $a_i\in A_i$. Let $k=O(\log m)$, as defined in Theorem \ref{theo:ent}. Then the expected number of samples of the $k$-URS algorithm is at most $4\cdot 2^{k n \log_2 c}=poly(m)$.
\end{corollary} The corollary follows from the fact that the entropy of the Nash equilibrium $x$ is $H(x)=\sum_{i\in [n]} H(x_i) \geq n(\log_2 m-\log_2 c)$. The following example demonstrates that even in the case of two-player games, the class of games that admit a PTAS according to Corollary \ref{cor:ent} is slightly wider than the class of small-probability games. \begin{example} Consider a two-player $m$-action game where the equilibrium is $x=(x_1,x_2)$, where $x_1$ is the uniform distribution over all actions $x_1=(\frac{1}{m},\frac{1}{m},...,\frac{1}{m})$, and $x_2=(\frac{1}{\sqrt{m}},\frac{1}{m+\sqrt{m}},\frac{1}{m+\sqrt{m}},...,\frac{1}{m+\sqrt{m}})$. This game is not a small-probability game, but it does satisfy $n\log_2 m-H(x)=o(1)$: \begin{eqnarray*} 2\log_2 m - H(x) &\leq & \log_2 m - \frac{m-1}{m+\sqrt{m}}\log_2 (m+\sqrt{m}) \\ &\leq & \frac{1}{\sqrt{m}+1} \log m = o(1). \end{eqnarray*} \end{example} In the proof of Theorem \ref{theo:ent} we use the following lemma from information theory. \begin{lemma}\label{lem:inf} Let $y$ be a random variable that assumes values in a finite set $M$. Let $S\subset M$ be such that $\mathbb{P} (y\in S)\geq 1-\frac{1}{\log_2 |M|}$; then $|S|\geq \frac{1}{4} 2^{H(y)}$. \end{lemma} \begin{proof} \begin{equation*} \begin{aligned} H(y) &=\mathbb{P} (y\in S) H(y|y\in S) + \mathbb{P} (y\notin S) H(y|y\notin S) + H(\mathbbm{1}_{\{y\in S\}}) \\ &\leq \log_2 |S| + \mathbb{P} (y\notin S) \log_2 |M| +1 \leq \log_2 |S| + 2. \end{aligned} \end{equation*} Rearranging gives $\log_2 |S|\geq H(y)-2$; i.e., $|S|\geq \frac{1}{4}2^{H(y)}$. \end{proof} \begin{proof}[Proof of Theorem \ref{theo:ent}] Note that $k\geq\max\{\frac{16}{\varepsilon^2}(\ln n + \ln m -\ln \varepsilon +2), e^{16/\varepsilon^2}\}$ guarantees that \begin{equation*} \frac{8 e^{-\frac{k\varepsilon^2}{8}}}{\varepsilon} \leq \frac{1}{mn} \frac{1}{nk \log_2 m}.
\end{equation*} By considering inequality (\ref{eq:fin}) in the proof of Theorem \ref{theo:main}, we can see that the above choice of $k$ implies that $\mathbb{P} (E_{1,1}) \leq \frac{1}{mn} \frac{1}{nk \log_2 m}$, which implies that $\mathbb{P}(s\in\cup_{i,j} E_{i,j}) \leq \frac{1}{nk \log_2 m}$. This means that if we sample $k$-uniform ordered strategy profiles according to the Nash equilibrium $x$, then the resulting $k$-uniform ordered strategies form an $\varepsilon$-equilibrium with a probability of at least $1-\frac{1}{nk \log_2 m}=1-\frac{1}{\log_2(m^{nk})}$. Next, using Lemma \ref{lem:inf}, we provide a lower bound on the number of $k$-uniform profiles that form an $\varepsilon$-equilibrium. The random $k$-uniform profiles are elements of a set of size $m^{nk}$. The entropy of the random $k$-uniform profile is $kH(x)$. The probability that the random profile will form an $\varepsilon$-equilibrium is at least $1-\frac{1}{\log_2(m^{nk})}$. Therefore, by Lemma \ref{lem:inf}, we get that there are at least $\frac{1}{4}2^{kH(x)}$ different $k$-uniform profiles that are $\varepsilon$-equilibria. To conclude, the fraction of the $k$-uniform profiles that form an $\varepsilon$-equilibrium (among all the $k$-uniform profiles) is at least: \begin{equation*} \frac{\frac{1}{4}2^{kH(x)}}{m^{nk}}=\frac{1}{4}2^{k(H(x)-n\log_2 m)}. \end{equation*} Therefore, the expected time for finding an $\varepsilon$-equilibrium is at most $4\cdot 2^{k(n\log_2 m -H(x))}$. \end{proof} \section{Discussion} Having established an upper bound of $O(\log m + \log n)$, it is natural to ask whether it is tight. Althofer \cite{A} provided a lower bound of the order $\log m$ that matches our upper bound in the case where the number of players is not much larger than the number of pure strategies; i.e., $n=poly(m)$. In general, the tightness of our upper bound remains an open question. 
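For a concrete sense of the scale of the $O(\log m+\log n)$ bound, the sample size from Theorem \ref{theo:main} can be evaluated numerically. The following is a quick sketch (the helper name `k_bound` is our own):

```python
import math

def k_bound(m, n, eps):
    """Sufficient sample size from the main theorem:
    k >= 8 (ln m + ln n - ln eps + ln 8) / eps^2."""
    return math.ceil(
        8 * (math.log(m) + math.log(n) - math.log(eps) + math.log(8)) / eps ** 2
    )

# The bound grows only logarithmically in m and n, and quadratically in 1/eps:
assert k_bound(10**6, 10**6, 0.5) < 2 * k_bound(10**3, 10**3, 0.5)
assert k_bound(10, 10, 0.25) > k_bound(10, 10, 0.5)
# The exhaustive search space then has m**(n * k_bound(m, n, eps)) ordered profiles.
```

Cubing the number of actions and players thus increases the sufficient $k$ by less than a factor of two, which is what drives the $poly(N^{\log\log N})$ running time of Corollary \ref{cor:main}.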
A similar question regarding the existence of \emph{pure} approximate equilibria in \emph{Lipschitz} games with many players arose in a related work by Azrieli and Shmaya \cite{AS}. Let us call games with $n$ players, $m$ actions for each player, and payoffs in $[0,1]$, \emph{normalized $n$-player $m$-action games}. To pinpoint the limits of our understanding of the problem, consider the following questions. \begin{question}\label{q untight} Is there a function $k\colon (0,1)\to\mathbb{N}$ ($k$ depends on $\varepsilon$ only, and not on the number of players $n$), such that every normalized $n$-player two-action game admits an $\varepsilon$-equilibrium in which every player employs a mixed strategy whose coefficients are rational numbers with a denominator at most $k(\varepsilon)$? \end{question} \begin{question}\label{q tight} Is there an $\varepsilon>0$ and a constant $C>0$, such that for every $n,m\in \mathbb N$ there exists a normalized $n$-player $m$-action game that does not admit any $\varepsilon$-equilibrium in which every player employs a mixed strategy whose coefficients are rational numbers with a denominator at most $C(\log n+\log m)$? \end{question} Note that a positive answer to Question~\ref{q tight} means that our upper bound \emph{is} tight, whereas a positive answer to Question~\ref{q untight} implies that our upper bound is \emph{not} tight. A positive answer to Question~\ref{q untight} means that one can find a $k$-uniform approximate equilibrium of the game for a \emph{constant} $k$ (depending only on $\varepsilon$), which in particular implies that there exists a $\mathrm{poly}(N)$ algorithm for computing an approximate Nash equilibrium in two-action games.
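The $k$-uniform sampling argument used in the proofs above lends itself to a direct numerical illustration: draw $k$ pure actions per player i.i.d.\ from a known mixed equilibrium, take the empirical mixture (a $k$-uniform strategy), and measure each player's incentive to deviate. The game below (matching pennies with payoffs normalized to $[0,1]$), the value of $k$, and the random seed are illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Matching pennies with payoffs in [0, 1]; its unique Nash equilibrium
# is x1 = x2 = (1/2, 1/2).  (Illustrative toy instance.)
A = np.array([[1.0, 0.0], [0.0, 1.0]])   # row player's payoffs
B = 1.0 - A                              # column player's payoffs
x1 = np.array([0.5, 0.5])
x2 = np.array([0.5, 0.5])

def k_uniform(x, k, rng):
    """Empirical mixture of k i.i.d. draws from x: a k-uniform strategy."""
    draws = rng.choice(len(x), size=k, p=x)
    return np.bincount(draws, minlength=len(x)) / k

def regrets(y1, y2):
    """Gain each player obtains from a best pure deviation against (y1, y2)."""
    r1 = (A @ y2).max() - y1 @ A @ y2
    r2 = (y1 @ B).max() - y1 @ B @ y2
    return r1, r2

k = 2000                                  # in the theorem, k ~ (log n + log m)/eps^2
y1, y2 = k_uniform(x1, k, rng), k_uniform(x2, k, rng)
r1, r2 = regrets(y1, y2)                  # both shrink like 1/sqrt(k) w.h.p.
```

The empirical mixtures are $k$-uniform by construction, and the measured regrets exhibit the $1/\sqrt{k}$ concentration that drives the probabilistic argument in the proof.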
\section{\hspace{-2.5ex}. Introduction} \label{sec:introduction} \setcounter{equation}{0} In the semiclassical theory of gravity, the gravitational field is treated classically, but the matter fields are quantum. The key equation of the theory is the semiclassical Einstein equation, a generalization of the Einstein equation where the expectation value of the stress-energy tensor of quantum matter fields is the source of curvature. One expects that semiclassical gravity could be derived from a fundamental quantum theory of gravity as a certain approximation, but, in the absence of such a fundamental theory, the scope and limits of the semiclassical theory are not very well understood. It seems clear, nevertheless, that it should not be valid unless gravitational fluctuations are negligibly small. This condition may break down when the matter stress-energy has appreciable quantum fluctuations, since one would expect that fluctuations in the stress-energy of matter would induce gravitational fluctuations \cite{ford82}. A number of examples have recently been studied, both in cosmological and flat spacetimes, where, for some states of the matter fields, the stress-energy tensor has significant fluctuations \cite{stress-en_fluctu}. To account for such fluctuations, it is necessary to extend the semiclassical theory of gravity. To address this problem, or analogous problems in quantum mechanics or quantum field theory, different approaches have been adopted in the literature. The present paper attempts to unify, at least conceptually, two of these approaches in a formal derivation of an effective theory for the gravitational field in the semiclassical regime. The common feature of these two approaches is the idea of viewing the metric field as the system of interest and the matter fields as being part of its environment. This idea was first proposed by Hu \cite{hu89} in the context of semiclassical cosmology. 
Both approaches make use of the influence functional formalism, introduced by Feynman and Vernon \cite{feynman-vernon} to deal with a system-environment interaction in a full quantum theory. In this formalism, the integration of the environmental variables in a path integral yields the influence functional, from which one can define an effective action for the dynamics of the system \cite{feynman-hibbs,calzettahu,humatacz,husinha, caldeira,hu-paz-zhang,hu-matacz94,greiner}. The first of these two approaches has been extensively used in the literature, not only in the framework of semiclassical cosmology \cite{calzettahu,humatacz,husinha,cv96,lomb-mazz,ccv97,campos-hu}, but also in the context of analogous semiclassical regimes for systems of quantum mechanics \cite{caldeira,hu-matacz94,hu-paz-zhang2} and of quantum field theory \cite{greiner,matacz,morikawa,shaisultanov,gleiser}. It makes use of the closed time path (CTP) functional technique, due to Schwinger and Keldysh \cite{schwinger}. This is a path integral technique designed to obtain expectation values of field operators in a direct way \cite{ctp}. In the semiclassical regime, a tree level approximation is performed in the path integrals involving the system variables. In this approximation, the equation of motion for the expectation value of the system field operator is the semiclassical equation, which can be directly derived from the effective action of Feynman and Vernon \cite{calzettahu,greiner,cv96,ccv97,campos-hu,shaisultanov}. When computing this effective action perturbatively up to quadratic order in its variables, one usually finds some imaginary terms which do not contribute to the semiclassical equation. The key point of this approach is the formal identification of the contribution of such terms to the influence functional with the characteristic functional of a Gaussian stochastic source. 
Assuming that in the semiclassical regime this stochastic source interacts with the system variables, and, thus, these become stochastic variables, equations of the Langevin type are derived for these variables. However, since this approach relies on a purely formal identification, doubts can be raised about the physical meaning of the derived equations. The second approach is based on the description of the transition from quantum to classical behavior in the framework of the consistent histories formulation of a quantum theory. The consistent histories formulation, proposed by Griffiths \cite{griffiths}, and developed by Omn\`es \cite{omnes} and by Gell-Mann and Hartle \cite{gell-mann-hartle,hartle}, was designed to deal with quantum closed ({\it i.e.}, isolated) systems. It is thus believed to be an appropriate approach to quantum cosmology, where the quantum system is the whole universe. The main goal of this formulation is the study of the conditions under which a set of quantum mechanical variables becomes decoherent, which means that these variables can be described in a probabilistic way \cite{gell-mann-hartle,hartle,halliwell93,histories,paz-zurek}. When the closed system consists of a distinguished subsystem (the ``system'', which is also often called an ``open system'') interacting with its environment, Gell-Mann and Hartle proposed a mechanism for decoherence and classicalization of suitably coarse-grained system variables \cite{gell-mann-hartle,hartle}. This approach allows one to evaluate the probability distribution functional associated with such decoherent variables and, under some approximations, to derive effective quasiclassical equations of motion of the Langevin type for such variables \cite{gell-mann-hartle,hartle,halliwell93,dowker,halliwell}. In Sec.~\ref{sec:classicalization} we show that these two approaches can in fact be related. 
In this way, we see that, on the one hand, the second approach sheds light on the physical meaning of the first one. On the other hand, the first approach provides the second one with a tool for computing effective Langevin-type equations. A large portion of this section consists of reformulating the mechanism for decoherence and classicalization of Gell-Mann and Hartle in the language of the CTP functional formalism. In Sec.~\ref{sec:Einstein-Langevin}, we use the results of this analysis to formally derive effective equations of motion for the gravitational field in a semiclassical regime. This derivation relies heavily on the results of the previous section. We find that, in the semiclassical regime, gravity might be described by a background metric, a solution of the semiclassical Einstein equation, plus some stochastic metric perturbations. The equation for these perturbations, the semiclassical Einstein-Langevin equation, is seen to incorporate the effect of the lowest order matter stress-energy fluctuations on the gravitational field. In this paper we use the $(+++)$ sign conventions and the abstract index notation of Ref.~\cite{wald84}, and we work in units in which $c=\hbar =1$. \newpage \section{\hspace{-2.5ex}. Effective equations of motion from environment-in\-duced classicalization} \label{sec:classicalization} \setcounter{equation}{0} \setcounter{footnote}{0} \def\arabic{footnote}{\arabic{footnote}} \subsection{\hspace{-2.5ex}. The CTP functional formalism for a system-environment interaction} \label{subsec:CTP} We start this section by sketching the CTP functional formalism \cite{schwinger} applied to a system-environment interaction and its relation with the influence functional formalism of Feynman and Vernon \cite{feynman-vernon}. 
For more detailed reviews of the CTP functional formalism, see Refs.~\cite{ctp,campos-hu}, and for the influence functional formalism of Feynman and Vernon, see Refs.~\cite{feynman-hibbs,calzettahu,humatacz,husinha, caldeira,hu-paz-zhang,hu-matacz94,greiner}. For simplicity, we shall work in this section with a model of quantum mechanics, but all the formalism can also be formally applied to field theory. It is instructive to maintain in this section the explicit dependence on $\hbar$. Let us consider a model of quantum mechanics which describes the interaction of two subsystems: one, called the ``system'', with coordinates $q$, and the other, called the ``environment'', with coordinates $Q$.\footnote{Even if, in order to simplify the notation, we do not write indices in these coordinates, $q$ and $Q$ have to be understood as representing an arbitrary number of degrees of freedom (which, in particular, can be an infinite number of degrees of freedom).} We write the action for this model as $S[q,Q]=S_s[q]+S_{se}[q,Q]$.\footnote{We shall assume that the action $S[q,Q]$ is the one that appears in the path integral formulas for the model, which, in general, need not coincide with the classical action for the model \cite{abers,weinberg}.} Let $\hat{q}(t)$ and $\hat{Q}(t)$ be the Heisenberg picture coordinate operators, which are assumed to be self-adjoint, {\it i.e.}, $\hat{q}^{\dag}\!=\!\hat{q}$ and $\hat{Q}^{\dag}\!=\!\hat{Q}$, and let $\hat{q}^{\rm \scriptscriptstyle S}$ and $\hat{Q}^{\rm \scriptscriptstyle S}$ be the corresponding Schr\"{o}dinger picture operators. Suppose that we are only interested in describing the physical properties of system observables from some initial time $t_i$ until some final time $t_f>t_i$. Working in the Schr\"{o}dinger picture, the state of the full system ({\it i.e.}, system plus environment) at the initial time $t\!=\!t_i$ will be described by a density operator $\hat{\rho}^{\rm \scriptscriptstyle S}(t_i)$. 
Let $\left\{ |q,Q\rangle^{\rm \scriptscriptstyle S} \right\}$ be the basis of eigenstates of the operators $\hat{q}^{\rm \scriptscriptstyle S}$ and $\hat{Q}^{\rm \scriptscriptstyle S}$. The matrix elements of the initial density operator in this basis will be written as $\rho(q,Q;q^{\prime},Q^{\prime};t_i)\equiv \mbox{}^{\rm \scriptscriptstyle S} \langle q,Q|\: \hat{\rho}^{\rm \scriptscriptstyle S}(t_i) \:|q^{\prime},Q^{\prime}\rangle^{\rm \scriptscriptstyle S}$. For simplicity, we shall assume that the initial density operator can be factorized as $\hat{\rho}^{\rm \scriptscriptstyle S}(t_i) \!=\! \hat{\rho}_s^{\rm \scriptscriptstyle S}(t_i)\otimes \hat{\rho}_e^{\rm \scriptscriptstyle S}(t_i)$, in such a way that its matrix elements in coordinate representation can be written as $\rho(q,Q;q^{\prime},Q^{\prime};t_i)\!=\!\rho_s(q,q^{\prime};t_i)\, \rho_e(Q,Q^{\prime};t_i)$. However, the formalism can be generalized to the most general case of a non-factorizable initial density operator \cite{hakim,grabert,gell-mann-hartle}. We are interested in computing expectation values of operators related to the system variables only, for times $t$ between $t_i$ and $t_f$. The dynamics of the system in this sense can be completely characterized by the knowledge of the whole family of Green functions of the system. Working in the Heisenberg picture, these Green functions can be defined as expectation values of products of $\hat{q}(t)$ operators. These Green functions can be derived from a CTP generating functional in which only the system variables are coupled to external sources $j_+(t)$ and $j_-(t)$ \cite{calzettahu,greiner,cv96,campos-hu,morikawa,shaisultanov}. This CTP generating functional can be written as the following path integral\footnote{A way of generalizing the formalism to a non-factorizable initial density operator consists in the following \cite{hakim,gell-mann-hartle}. 
One writes the initial density matrix in coordinate representation as $\rho(q,Q;q^{\prime},Q^{\prime};t_i)=\rho_s(q,q^{\prime};t_i)\, \rho_{se}(q,Q;q^{\prime},Q^{\prime};t_i)$, where $\rho_s$ is chosen in such a way that $\int\! dq\, \rho_s(q,q;t_i)=1$. Then, the CTP generating functional can be written as (\ref{generating functional}), with \[ e^{{i \over \hbar}\,S_{\rm eff}[q_+,q_-]}\equiv \int\! {\cal D}[Q_+]\;{\cal D}[Q_-]\; \rho_{se} (q_{+_{\scriptstyle i}},Q_{+_{\scriptstyle i}}; q_{-_{\scriptstyle i}},Q_{-_{\scriptstyle i}};t_i ) \: \delta(Q_{+_{\scriptstyle f}}\!-Q_{-_{\scriptstyle f}}) \; e^{{i \over \hbar}\, \left(\,S[q_+,Q_+]-S[q_-,Q_-]\, \right)}. \] } \begin{equation} Z[j_+,j_-] = \int\! {\cal D}[q_+]\;{\cal D}[q_-]\; \rho_s (q_{+_{\scriptstyle i}},q_{-_{\scriptstyle i}};t_i ) \: \delta(q_{+_{\scriptstyle f}}\!-q_{-_{\scriptstyle f}}) \; e^{{i \over \hbar}\, \left(S_{\rm eff}[q_+,q_-]+ \hbar \!\int\! dt\, j_+ q_+ -\hbar \!\int\! dt\, j_- q_- \right)}, \label{generating functional} \end{equation} with \begin{equation} S_{\rm eff}[q_+,q_-]\equiv S_s[q_+]-S_s[q_-]+S_{\rm IF}[q_+,q_-], \label{effective action} \end{equation} where $S_{\rm IF}$ is the influence action of Feynman and Vernon, which is defined in terms of the influence functional ${\cal F}_{\rm IF}$ as \begin{eqnarray} \hspace*{-4.5ex} {\cal F}_{\rm IF}[q_+,q_-]\!&\!\!\equiv \!\!&\! e^{{i \over \hbar}\, S_{\rm IF}[q_+,q_-]} \nonumber \\ \!&\!\!\equiv \!\!& \! \int\! {\cal D}[Q_+]\;{\cal D}[Q_-]\; \rho_e (Q_{+_{\scriptstyle i}},Q_{-_{\scriptstyle i}};t_i ) \: \delta(Q_{+_{\scriptstyle f}}\!-Q_{-_{\scriptstyle f}}) \; e^{{i \over \hbar}\, \left(\,S_{se}[q_+,Q_+]-S_{se}[q_-,Q_-]\, \right)}. \label{influence functional} \end{eqnarray} We shall call $S_{\rm eff}[q_+,q_-]$ the effective action of Feynman and Vernon. In these expressions we use the notation $q_{+_{\scriptstyle i}} \!\!\equiv\! q_+(t_i)$, $q_{+_{\scriptstyle f}} \!\!\equiv\! q_+(t_f)$, $Q_{+_{\scriptstyle i}} \!\!\equiv\! 
Q_+(t_i)$, $Q_{+_{\scriptstyle f}} \!\!\equiv\! Q_+(t_f)$, and similarly for $q_-$ and $Q_-$. All the integrals in $t$, including those that would define the actions $S_s[q]$ and $S_{se}[q,Q]$ in terms of the corresponding Lagrangians, have to be understood as integrals between $t_i$ and $t_f$. The CTP generating functional has the properties \begin{equation} Z[j,j]=1, \hspace{7 ex} Z[j_-,j_+]=Z^{\displaystyle \ast}[j_+,j_-], \hspace{7 ex} \bigl|\hspace{0.2ex} Z[j_+,j_-]\hspace{0.2ex}\bigr|\leq 1. \label{generating funct properties} \end{equation} From this generating functional, we can derive the following Green functions for the system: \begin{equation} \left\langle\, \tilde{\rm T}[\hat{q}(t_1^{\prime}) \cdots \hat{q}(t_s^{\prime})] \,{\rm T}[\hat{q}(t_1) \cdots \hat{q}(t_r)]\, \right\rangle \hspace{-0.2ex}=\hspace{-0.2ex} \left. {\delta Z[j_+,j_-] \over i\delta j_+(t_1) \cdots i\delta j_+(t_r) (-i)\delta j_-(t_1^{\prime}) \cdots (-i)\delta j_-(t_s^{\prime}) }\right|_{j_\pm=0} \!, \label{green functions} \end{equation} where $t_1,\dots ,t_r, t_1^{\prime},\dots ,t_s^{\prime}$ are all between $t_i$ and $t_f$, ${\rm T}$ and $\tilde{\rm T}$ mean, respectively, time and anti-time ordering. The expectation value is taken in the Heisenberg picture state corresponding to the Schr\"{o}dinger picture state described by $\hat{\rho}^{\rm \scriptscriptstyle S}(t_i)$ at the initial time $t\!=\!t_i$. The influence functional (\ref{influence functional}) can actually be interpreted as a CTP generating functional for quantum variables $Q$ coupled to classical time-dependent sources $q(t)$ through the action $S_{se}[q,Q]$ \cite{su}. Let us consider the quantum theory for the variables $Q$ in presence of classical sources $q(t)$ corresponding to this action, and assume that the initial Schr\"{o}dinger picture state for the quantum variables $Q$ is described by the density operator $\hat{\rho}_e^{\rm \scriptscriptstyle S}(t_i)$. 
For this theory, let $\hat{\cal U}[q](t,t^{\prime })$ be the unitary time-evolution operator, which can be formally written as $\hat{\cal U}[q](t,t^{\prime })\!=\! {\rm T} \exp \! \left[-{i\over \hbar} \int_{t^{\prime }}^{t} dt^{\prime\prime } \hat{H}^{\rm \scriptscriptstyle S}[q](t^{\prime\prime})\right]$, for $t\!>\!t^{\prime }$, where $\hat{H}^{\rm \scriptscriptstyle S}[q](t)$ is the Hamiltonian operator in the Schr\"{o}dinger picture. This Hamiltonian operator depends on $t$ as a function of $q(t)$ and their derivatives $\dot{q}(t)$, and this gives a functional dependence on $q$ in the operator $\hat{\cal U}$. It is easy to see that \cite{gell-mann-hartle,hartle,humatacz,hu-matacz94,greiner,hakim} \begin{equation} {\cal F}_{\rm IF}[q_+,q_-]= {\rm Tr} \Bigl[ \hat{\rho}_e ^{\rm \scriptscriptstyle S}(t_i) \: \hat{\cal U}^{\dag}[q_-](t_{f},t_{i})\: \hat{\cal U}[q_+](t_{f},t_{i})\Bigr]= \Bigl\langle \hat{\cal U}^{\dag}[q_-](t_{f},t_{i})\: \hat{\cal U}[q_+](t_{f},t_{i}) \Bigr\rangle _{\!\hat{\rho}_e^{\rm S}(t_i)}, \label{influence funct representation} \end{equation} where we use $\langle \hspace{1.5ex} \rangle_{\!\hat{\rho}_e^{\rm S}(t_i)}$ to denote an expectation value in the state described by $\hat{\rho}_e^{\rm \scriptscriptstyle S}(t_i)$. From this expression, it follows that the influence functional satisfies \begin{equation} {\cal F}_{\rm IF}[q,q]=1, \hspace{7 ex} {\cal F}_{\rm IF}[q_-,q_+]= {\cal F}_{\rm IF}^{\displaystyle\,\ast}[q_+,q_-], \hspace{7 ex} \bigl|\hspace{0.2ex} {\cal F}_{\rm IF}[q_+,q_-]\hspace{0.2ex}\bigr|\leq 1, \label{influence funct properties} \end{equation} or, equivalently, in terms of the influence action, \begin{equation} S_{\rm IF}[q,q]=0, \hspace{7 ex} S_{\rm IF}[q_-,q_+]= -S_{\rm IF}^{\displaystyle\,\ast}[q_+,q_-], \hspace{7 ex} {\rm Im}\, S_{\rm IF}[q_+,q_-] \geq 0, \label{influence action properties} \end{equation} and similar properties follow for $S_{\rm eff}[q_+,q_-]$. 
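As a consistency check, the first of the properties (\ref{influence funct properties}) follows at once from the representation (\ref{influence funct representation}): setting $q_+\!=\!q_-\!=\!q$ and using the unitarity of the time-evolution operator,
\begin{displaymath}
{\cal F}_{\rm IF}[q,q]= {\rm Tr} \Bigl[ \hat{\rho}_e^{\rm \scriptscriptstyle S}(t_i) \: \hat{\cal U}^{\dag}[q](t_{f},t_{i})\: \hat{\cal U}[q](t_{f},t_{i})\Bigr]= {\rm Tr}\, \hat{\rho}_e^{\rm \scriptscriptstyle S}(t_i)=1.
\end{displaymath}
The second property follows by taking the complex conjugate of (\ref{influence funct representation}), and the bound $\bigl|{\cal F}_{\rm IF}[q_+,q_-]\bigr|\leq 1$ holds because the expectation value of the unitary operator $\hat{\cal U}^{\dag}[q_-]\,\hat{\cal U}[q_+]$ in any normalized state has modulus at most one.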
A decoherence functional for the system, where the environment variables have been completely integrated out, can now be introduced by writing the CTP generating functional as its functional Fourier transform in the external sources: \begin{equation} Z[j_+,j_-] \equiv \int\! {\cal D}[q_+]\;{\cal D}[q_-]\; {\cal D}[q_+,q_-] \; e^{i \int\! dt\, (j_+ q_+ - j_- q_- )}, \label{decoherence functional} \end{equation} that is, from (\ref{generating functional}) we have that \begin{equation} {\cal D}[q_+,q_-]= \rho_s (q_{+_{\scriptstyle i}},q_{-_{\scriptstyle i}};t_i ) \: \delta(q_{+_{\scriptstyle f}}\!-q_{-_{\scriptstyle f}}) \; e^{{i \over \hbar}\, S_{\rm eff}[q_+,q_-]}. \label{decoherence functional 2} \end{equation} In the consistent histories approach to quantum mechanics, ${\cal D}[q_+,q_-]$ is known as the decoherence functional for fine-grained histories of the system \cite{gell-mann-hartle,hartle,halliwell93,histories,dowker,halliwell}. The environment of a system has to be understood as characterized by all the quantum degrees of freedom which can affect the dynamics of the system, but which are ``not accessible'' in the observations of that system. This environment includes in general an ``external'' environment (variables representing other particles, or, in the context of field theory, other fields) and an ``internal'' environment (some degrees of freedom which, from the fundamental quantum theory point of view, would be associated with the same physical object as the ``system'' variables, but which are not directly probed in our observations of the system) \cite{zurek,omnes}. For instance, a problem which has been studied using the influence functional method is that of quantum Brownian motion \cite{feynman-vernon,feynman-hibbs,caldeira, hu-paz-zhang,hu-matacz94,hu-paz-zhang2,gell-mann-hartle, hartle,halliwell93,dowker,halliwell,hakim,grabert,brun}. 
In this problem, one is interested in the dynamics of a macroscopic particle interacting with a medium composed of a large number of other particles. In this example, one considers that the only ``observable'' system degree of freedom is the center of mass position of the macroscopic particle, whereas the remaining microscopic degrees of freedom of the macroscopic particle are considered as environmental variables. Such ``internal'' environment degrees of freedom, and also those of the particles of the medium (the ``external'' environment), are usually modeled as an infinite set of harmonic oscillators. In the context of field theory, one would typically consider as ``inaccessible'' to the observations the modes of the field of interest with characteristic momenta higher than some cut-off momentum \cite{lombardo,greiner,matacz}. In the case of the gravitational field, this has been considered by Whelan \cite{whelan} in a toy model designed to investigate the decoherence mechanism for gravity. It is convenient at this stage to distinguish between these two kinds of environmental variables, so let $Q$ represent the coordinates of the ``external'' environment (the coordinates of ``other particles'') and $q_{\mbox{}_{\rm U}}$ the ``unobservable system'' coordinates (the coordinates of the ``internal'' environment). As before, $q$ will represent the ``true'' system coordinates. One could now simply replace $Q$ by $(Q,q_{\mbox{}_{\rm U}})$ in the previous expressions. However, for convenience, we shall do the integrations over the environmental variables in two steps. 
The action of the full system will now be written as $S[q,q_{\mbox{}_{\rm U}},Q]$, and, as before, we shall assume a totally factorizable initial density operator $\hat{\rho}^{\rm \scriptscriptstyle S}(t_i)= \hat{\rho}_s^{\rm \scriptscriptstyle S}(t_i)\otimes \hat{\rho}_{\mbox{}_{\rm U}}^{\rm \scriptscriptstyle S}(t_i) \otimes \hat{\rho}_e^{\rm \scriptscriptstyle S}(t_i)$, which leads to an initial density matrix in coordinate representation of the form $\rho(q,q_{\mbox{}_{\rm U}},Q; q^{\prime},q_{\mbox{}_{\rm U}}^{\prime},Q^{\prime};t_i)= \rho_s(q,q^{\prime};t_i)\, \rho_{\mbox{}_{\rm U}}(q_{\mbox{}_{\rm U}}, q_{\mbox{}_{\rm U}}^{\prime};t_i)\, \rho_e(Q,Q^{\prime};t_i)$ (notice that we are now using the subindex $e$ for the ``external'' environment). Such a factorization is based on the assumption that the interactions between the three subsystems can be neglected for times $t \leq t_{i}$. Unfortunately, in most situations, this assumption does not seem to be very physically reasonable, especially for the ``true'' system-``internal'' environment interactions. One would need to consider the generalization of the formalism to a non-factorizable initial density operator mentioned above and the analysis would be more complicated. We start by defining \begin{eqnarray} && \hspace{-10ex} e^{{i \over \hbar}\,\left(\, S_{s}^{\rm eff}[q_+]-S_{s}^{\rm eff}[q_-] +S_{se}^{\rm eff}[q_+,Q_+;q_-,Q_-] \,\right)} \nonumber \\ && \hspace{-5ex} \equiv \int\! {\cal D}[q_{\mbox{}_{\rm U}+}]\;{\cal D}[q_{\mbox{}_{\rm U}-}]\; \rho_{\mbox{}_{\rm U}} (q_{{\mbox{}_{\rm U}+}_{\scriptstyle i}}, q_{{\mbox{}_{\rm U}-}_{\scriptstyle i}};t_i ) \: \delta(q_{{\mbox{}_{\rm U}+}_{\!\scriptstyle f}}\!- q_{{\mbox{}_{\rm U}-}_{\!\scriptstyle f}}) \; e^{{i \over \hbar}\, \left(\,S[q_+,q_{\mbox{}_{\rm U}+},Q_+] -S[q_-,q_{\mbox{}_{\rm U}-},Q_-]\, \right)}, \label{s-e effective actions} \end{eqnarray} where the effective action for the system $S_{s}^{\rm eff}[q]$ is chosen to be real and local. 
Notice that the effective action $S_{se}^{\rm eff}[q_+,Q_+;q_-,Q_-]$ has properties analogous to those of $S_{\rm IF}$ in (\ref{influence action properties}). We introduce now an effective influence functional and an effective influence action as \begin{equation} {\cal F}^{\rm eff}_{\rm IF}[q_+,q_-] \equiv e^{{i \over \hbar}\, S^{\rm eff}_{\rm IF}[q_+,q_-]} \equiv \hspace{-0.1ex} \int\! {\cal D}[Q_+]\;{\cal D}[Q_-]\; \rho_e (Q_{+_{\scriptstyle i}},Q_{-_{\scriptstyle i}};t_i ) \: \delta(Q_{+_{\scriptstyle f}}\!-Q_{-_{\scriptstyle f}}) \; e^{{i \over \hbar}\, S_{se}^{\rm eff}[q_+,Q_+;q_-,Q_-]}. \label{effective influence functional} \end{equation} With these definitions, the effective action of Feynman and Vernon, $S_{\rm eff}[q_+,q_-]$, which appears in expression (\ref{generating functional}), can be written as \begin{equation} S_{\rm eff}[q_+,q_-]\equiv S_s^{\rm eff}[q_+]-S_s^{\rm eff}[q_-] +S^{\rm eff}_{\rm IF}[q_+,q_-]. \label{effective action 2} \end{equation} Note that, since $S_{\rm eff}[q_+,q_-]$ satisfies the same properties as $S_{\rm IF}$ in (\ref{influence action properties}), it follows from the last expression that $S^{\rm eff}_{\rm IF}$ also has these properties. \subsection{\hspace{-2.5ex}. The ``naive'' semiclassical approximation} \label{subsec:naive semiclassical} The usual ``naive'' semiclassical approximation for the system variables consists in performing a ``tree level'' approximation in the path integrals involving the $q$ variables in expression (\ref{generating functional}) \cite{calzettahu,greiner,cv96,ccv97,campos-hu,shaisultanov}. Therefore, the CTP generating functional is approximated by \begin{equation} Z[j_+,j_-] \simeq e^{{i \over \hbar}\, \left(S_{\rm eff}\bigl[\bar{q}_+^{\scriptscriptstyle (0)} \hspace{-0.2ex}[j]\, , \, \bar{q}_-^{\scriptscriptstyle (0)}\hspace{-0.2ex}[j]\bigr]+ \hbar \int\! dt\, j_{\mbox{}_+} \bar{q}_+^{\scriptscriptstyle (0)} \hspace{-0.2ex}[j]- \hbar \int\! 
dt\, j_{\mbox{}_-} \bar{q}_-^{\scriptscriptstyle (0)} \hspace{-0.2ex}[j] \right)}, \label{semiclass approx} \end{equation} where $\bar{q}_{\pm}^{\scriptscriptstyle (0)}\hspace{-0.2ex}[j] \!\equiv\! \bar{q}_{\pm}^{\scriptscriptstyle (0)}\hspace{-0.2ex}[j_+,j_-]$ are solutions of the classical equations of motion for the action $S_{\rm eff}[q_+,q_-]+ \hbar \int\! dt\, j_+ q_+ -\hbar \int\! dt\, j_- q_-$, that is, \begin{equation} {\delta S_{\rm eff}[\bar{q}_+^{\scriptscriptstyle (0)} , \bar{q}_-^{\scriptscriptstyle (0)}] \over \delta q_{\pm}(t)} = \mp \, \hbar j_{\pm}(t), \label{semiclass eqs with j's} \end{equation} which satisfy the boundary condition $\bar{q}_+^{\scriptscriptstyle (0)}(t_f) \!=\!\bar{q}_-^{\scriptscriptstyle (0)}(t_f)$. Whenever this approximation is valid, we can see from (\ref{semiclass approx}), (\ref{semiclass eqs with j's}) and (\ref{green functions}) that $\langle \hat{q}(t) \rangle \simeq q^{\scriptscriptstyle (0)}(t)$, with $q^{\scriptscriptstyle (0)} \equiv \bar{q}_{+}^{\scriptscriptstyle (0)}\hspace{-0.2ex} [j_+\!=\!j_-\!=\!0]= \bar{q}_{-}^{\scriptscriptstyle (0)}\hspace{-0.2ex} [j_+\!=\!j_-\!=\!0]$, that is, $q^{\scriptscriptstyle (0)}(t)$ is a solution of the two equivalent equations: \begin{equation} \left. {\delta S_{\rm eff}[q_+,q_-] \over \delta q_+(t)} \right|_{q_+=q_-=q^{\scriptscriptstyle (0)}} =0, \hspace{10ex} \left. {\delta S_{\rm eff}[q_+,q_-] \over \delta q_-(t)} \right|_{q_+=q_-=q^{\scriptscriptstyle (0)}} =0. \label{semiclassical eq} \end{equation} One can see that these two equations are actually the same equation, and that this equation is real. This is the semiclassical equation for the system variables. 
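The fact that the two equations in (\ref{semiclassical eq}) coincide and are real can be checked explicitly from the properties of $S_{\rm eff}$. Since $S_{\rm eff}[q,q]=0$ for all $q$, differentiating along the diagonal gives
\begin{displaymath}
\left. {\delta S_{\rm eff}[q_+,q_-] \over \delta q_+(t)} \right|_{q_+=q_-=q} = \, - \left. {\delta S_{\rm eff}[q_+,q_-] \over \delta q_-(t)} \right|_{q_+=q_-=q},
\end{displaymath}
so the two equations in (\ref{semiclassical eq}) are equivalent. On the other hand, functional differentiation of the identity $S_{\rm eff}[q_-,q_+]= -S_{\rm eff}^{\displaystyle\,\ast}[q_+,q_-]$ yields, on the diagonal,
\begin{displaymath}
\left. {\delta S_{\rm eff}[q_+,q_-] \over \delta q_-(t)} \right|_{q_+=q_-=q} = \, - \left( \left. {\delta S_{\rm eff}[q_+,q_-] \over \delta q_+(t)} \right|_{q_+=q_-=q} \right)^{\!\displaystyle \ast}.
\end{displaymath}
Combining the two relations shows that $\delta S_{\rm eff}/\delta q_+$ evaluated on the diagonal equals its own complex conjugate, {\it i.e.}, the semiclassical equation is real.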
In a naive way, one would think that, when the above semiclassical approximation is valid, the system would behave as a classical system described by the coordinate functions $q^{\scriptscriptstyle (0)}(t)$, {\it i.e.}, that one could substitute the description of the system in terms of the operators $\hat{q}(t)$ by a classical description in terms of the functions $q^{\scriptscriptstyle (0)}(t)$. However, one can see from (\ref{semiclass approx}), (\ref{semiclass eqs with j's}) and (\ref{green functions}) that, in general, \begin{equation} \left\langle\, \tilde{\rm T}[\hat{q}(t_1^{\prime}) \cdots \hat{q}(t_s^{\prime})] \,{\rm T}[\hat{q}(t_1) \cdots \hat{q}(t_r)]\, \right\rangle \simeq \hspace{-2ex}/ \hspace{1ex} q^{\scriptscriptstyle (0)}(t_1) \cdots q^{\scriptscriptstyle (0)}(t_r)\, q^{\scriptscriptstyle (0)}(t_1^{\prime}) \cdots q^{\scriptscriptstyle (0)}(t_s^{\prime}). \end{equation} Thus, in general, whenever the above approximations are valid, we can only interpret the solutions of the semiclassical equation as representing the expectation value of the operators $\hat{q}(t)$. \subsection{\hspace{-2.5ex}. Further coarse-graining and decoherence} \label{subsec:decoherence} Decoherence takes place in a set of quantum-mechanical variables when the quantum interference effects are (in general, approximately) suppressed in the description of the properties of a physical system which are associated with those variables. When this happens, such decoherent variables can be described in an effective probabilistic way. In the Heisenberg picture, we will say that a set of variables decoheres when the description in terms of the operators corresponding to these variables can be replaced by an effective description in terms of a set of classical random variables, in the sense that the quantum Green functions for such operators become approximately equal to the moments of the classical random variables. 
For the Green functions (\ref{green functions}), it is easy to see that this would hold in an exact way if the CTP generating functional (\ref{generating functional}) depended on the sources $j_{\pm}$ only as a functional $\Phi_q[j_+\!-\!j_-]$ of the difference $j_+ - j_-$, or, equivalently, if the decoherence functional (\ref{decoherence functional}) could be written as ${\cal D}[q_+,q_-]= {\cal P}_q[q_+] \, \delta[q_+-q_-]$. However, in practice, one finds that this condition is usually too strong to be satisfied, even in an approximate way \cite{gell-mann-hartle,hartle,halliwell93,histories,dowker,% halliwell,whelan}. One needs to introduce further coarse-graining in the system degrees of freedom in order to achieve decoherence. Let us then introduce coarse-grained system operators, which correspond to imprecisely specified values of the system coordinates. In the Heisenberg picture, such operators can be defined as \begin{equation} \hat{q}_c(t) \equiv \sum_{\bar{q}} \bar{q} \:\hat{P}_{\bar{q}}(t), \end{equation} where $\hat{P}_{\bar{q}}(t)$ is a set of projection operators, labeled by some variables $\bar{q}$ (these are often discrete variables), of the form \begin{equation} \hat{P}_{\bar{q}}(t) = \int\! dq \, dq_{\mbox{}_{\rm U}} \hspace{0.1ex} dQ \: \gamma(q-\bar{q}) \: |q,q_{\mbox{}_{\rm U}} ,Q,t \rangle \langle q,q_{\mbox{}_{\rm U}} ,Q,t|. \end{equation} Here $\left\{ |q,q_{\mbox{}_{\rm U}} ,Q,t \rangle \right\}$ is the basis of eigenstates of the operators $\hat{q}(t)$, $\hat{ q}_{\mbox{}_{\rm U}} (t)$ and $\hat{Q}(t)$, and $\gamma$ is a real function. We shall assume coarse-grainings of characteristic sizes $\sigma$, that is, such that the function $\gamma(q-\bar{q})$ vanishes or has negligible values for $q$ outside a cell $I_{\bar{q}}$ of sizes $\sigma$ centered around $\bar{q}$. This means that \begin{equation} \int\! dq \;\gamma(q-\bar{q}) \: f(q) \simeq \int_{I_{\bar{q}}} \! 
dq \;\gamma(q-\bar{q}) \: f(q), \label{c-g characteristic sizes} \end{equation} for any function $f(q)$. In addition, the function $\gamma$ must be chosen in such a way that the set of projection operators is (at least, approximately) exhaustive and mutually exclusive, which means that \begin{equation} \sum_{\bar{q}} \hat{P}_{\bar{q}}(t)= \hat{I}, \hspace{10ex} \hat{P}_{\bar{q}}(t) \hat{P}_{\bar{q}^\prime}(t)= \delta_{\bar{q} \bar{q}^\prime}\, \hat{P}_{\bar{q}}(t), \label{proj properties} \end{equation} where $\hat{I}$ is the identity operator. For specific examples of operators satisfying the above properties in an exact or in an approximate way, see Refs.~\cite{dowker,halliwell}. Next, we can introduce a family of decoherence functions for coarse-grained histories of the system \cite{gell-mann-hartle,hartle,halliwell93,histories,paz-zurek, dowker,halliwell}. In order to do so, let us consider a set $\{t_1, \dots , t_N \}$ of $N$ instants of time, such that $t_k < t_{k+1}$, $k = 0, \dots , N$, with $t_0 \equiv t_i$ and $t_{N+1} \equiv t_f$. Introducing two sets of values of $\bar{q}$ associated with this set of instants, $\{ \bar{q}_+ \} \equiv \{ \bar{q}_{+_1}, \dots , \bar{q}_{+_N}\}$ and $\{ \bar{q}_- \} \equiv \{ \bar{q}_{-_1}, \dots , \bar{q}_{-_N}\}$, the decoherence function for this pair of ``coarse-grained histories'' of the system is defined as \begin{equation} {\cal D}_c( \{ \bar{q}_+ \},\{ \bar{q}_-\} )_{(t_1, \dots , t_N)} \equiv {\rm Tr} \! \left[ \hat{P}_{\bar{q}_{+ _N}}(t_N) \cdots \hat{P}_{\bar{q}_{+ _1}}(t_1) \, \hat{\rho} \, \hat{P}_{\bar{q}_{- _1}}(t_1) \cdots \hat{P}_{\bar{q}_{- _N}}(t_N) \right], \label{c-g decoh funct} \end{equation} where $\hat{\rho}$ is the density operator describing the state of the entire system (system plus environment) in the Heisenberg picture (${\cal D}_c$ is often called decoherence ``functional'' in the literature, but, for each set $\{t_1, \dots , t_N \}$, this is actually a function of $2N$ variables). 
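The definition (\ref{c-g decoh funct}) and its normalization and hermiticity properties can be verified numerically in a finite-dimensional toy model. The sketch below takes a closed two-level system (standing in for the full system plus environment, with exact projectors onto the basis states playing the role of the $\hat{P}_{\bar{q}}(t)$); the Hamiltonian, initial state, and instants $t_1<t_2$ are arbitrary illustrative choices.

```python
import numpy as np

# Toy closed two-level system; hbar = 1.  The projectors onto the basis
# states |0>, |1> form an exhaustive, mutually exclusive set.
H = np.array([[1.0, 0.5], [0.5, -1.0]])        # Hermitian Hamiltonian
w, V = np.linalg.eigh(H)

def U(t):
    """Time-evolution operator exp(-i H t), built from the spectrum of H."""
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

def P(q, t):
    """Heisenberg-picture projector U^dag(t) |q><q| U(t)."""
    ket = np.zeros(2); ket[q] = 1.0
    return U(t).conj().T @ np.outer(ket, ket) @ U(t)

rho = np.array([[0.7, 0.2], [0.2, 0.3]])        # initial density matrix
t1, t2 = 0.3, 1.1                               # two instants, t1 < t2

def D_c(qp, qm):
    """Decoherence function Tr[P_{q+2}(t2) P_{q+1}(t1) rho P_{q-1}(t1) P_{q-2}(t2)]."""
    return np.trace(P(qp[1], t2) @ P(qp[0], t1) @ rho
                    @ P(qm[0], t1) @ P(qm[1], t2))

# Normalization: summing over all pairs of two-time histories gives Tr(rho) = 1.
total = sum(D_c((a1, a2), (b1, b2))
            for a1 in (0, 1) for a2 in (0, 1)
            for b1 in (0, 1) for b2 in (0, 1))
```

Hermiticity, ${\cal D}_c(\{\bar{q}_-\},\{\bar{q}_+\})={\cal D}_c^{\ast}(\{\bar{q}_+\},\{\bar{q}_-\})$, and the positivity of the diagonal elements can be checked in the same way.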
These decoherence functions can be written in a path integral form as \begin{equation} {\cal D}_c( \{ \bar{q}_+ \},\{ \bar{q}_-\} )_{(t_1, \dots , t_N)} = \! \int\! {\cal D}[q_+]\,{\cal D}[q_-]\, \prod_{k=1}^N \gamma (q_+(t_k) \!-\! \bar{q}_{+_k})\, \gamma (q_-(t_k) \!-\! \bar{q}_{-_k})\: {\cal D}[q_+,q_-], \label{c-g decoh funct 2} \end{equation} where ${\cal D}[q_+,q_-]$ is the decoherence functional for fine-grained histories of the system (\ref{decoherence functional}). From the definition (\ref{c-g decoh funct}) and the properties (\ref{proj properties}), one can show that these decoherence functions have the properties \begin{equation} \sum_{\{ \bar{q}_+ \}} \sum_{\{ \bar{q}_- \}} {\cal D}_c( \{ \bar{q}_+ \},\{ \bar{q}_-\} ) = 1, \hspace{10ex} {\cal D}_c( \{ \bar{q}_- \},\{ \bar{q}_+\} ) = {\cal D}_c^{\displaystyle \ast}( \{ \bar{q}_+ \},\{ \bar{q}_-\} ), \label{decoh funct properties} \end{equation} and that the diagonal elements of the decoherence functions (the values of those functions in the limit $\bar{q}_{-_k}\!\rightarrow \! \bar{q}_{+_k}$) are positive. For $N \!>\! 1$, we can also see that, if we divide the set $\{t_1, \dots , t_N \}$ into a subset of $M \!<\! N$ instants, $\{t_1^{\prime}, \dots , t_M^{\prime} \} \!\subset\! \{t_1, \dots , t_N \}$, with $t_1^{\prime} < \cdots <t_M^{\prime}$, and the subset of the remaining $L \!\equiv\! N\!-\!M$ instants, denoted as $\{t_1^{\prime \prime}, \dots , t_L^{\prime \prime} \}$ [{\it i.e.}, $\{t_1, \dots , t_N \}\!=\!
\{t_1^{\prime}, \dots , t_M^{\prime} \} \cup \{t_1^{\prime \prime}, \dots , t_L^{\prime \prime} \}$], then \begin{equation} {\cal D}_c( \{ \bar{q}_+ \}_{\!\mbox{}_{M}}, \{ \bar{q}_-\}_{\!\mbox{}_{M}} )_{(t_1^{\prime}, \dots , t_M^{\prime})} = \sum_{ \{ \bar{q}_+ \}_{\!\mbox{}_{L}} } \sum_{\{ \bar{q}_- \}_{\!\mbox{}_{L}} } {\cal D}_c( \{ \bar{q}_+ \}_{\!\mbox{}_{N}}, \{ \bar{q}_-\}_{\!\mbox{}_{N}} )_{(t_1, \dots , t_N)}, \label{decoh funct prop 4} \end{equation} with $\{ \bar{q}_{\pm} \}_{\!\mbox{}_{M}} \!\equiv\! \{ \bar{q}_{\pm}(t_1^{\prime}), \dots, \bar{q}_{\pm}(t_M^{\prime}) \}$, $\{ \bar{q}_{\pm} \}_{\!\mbox{}_{L}} \!\equiv\! \{ \bar{q}_{\pm}(t_1^{\prime\prime}), \dots, \bar{q}_{\pm}(t_L^{\prime\prime}) \}$, where we use the notation $\bar{q}_{\pm}(t_k) \!\equiv\! \bar{q}_{\pm_k}$, for $k \!=\! 1, \dots, N$, and $\{ \bar{q}_{\pm} \}_{\!\mbox{}_{N}} \!\equiv\! \{ \bar{q}_{{\pm}_1}, \dots, \bar{q}_{{\pm}_N} \}$. To make contact with the CTP formalism, let us introduce now, in analogy with (\ref{decoherence functional}), a family of generating functions for the coarse-grained system degrees of freedom as the following Fourier series: \begin{equation} Z_c( \{ j_+ \},\{ j_- \} )_{(t_1, \dots , t_N)} \equiv \sum_{ \{ \bar{q}_+ \}} \sum_{ \{ \bar{q}_- \}} {\cal D}_c( \{ \bar{q}_+ \},\{ \bar{q}_-\} )_{(t_1, \dots , t_N)}\; e^{i \sum_{k=1}^{N} (j_{+_k} \bar{q}_{+_k} - j_{-_k} \bar{q}_ {-_k})} , \label{c-g generating funct} \end{equation} where $\{ j_{\pm} \} \equiv \{ j_{\pm_1}, \dots , j_{\pm_N}\}$. Note that the properties (\ref{decoh funct properties}) for the decoherence functions are equivalent to \begin{equation} Z_c( \{0 \},\{ 0 \} )=1, \hspace{10ex} Z_c( \{ j_- \},\{ j_+ \} ) =Z_c^{\,\displaystyle \ast}( \{ j_+ \},\{ j_- \} ). 
\label{c-g generating funct properties} \end{equation} From the generating function (\ref{c-g generating funct}), we can compute the Green functions \[ G_{c \; m_1 \cdots \, m_s}^{n_1 \cdots \, n_r} (t_1^{\prime}, \dots, t_r^{\prime}; t_1^{\prime \prime}, \dots, t_s^{\prime \prime}) \equiv \left\langle\, \tilde{\rm T}[\hat{q}_c^{m_1}(t_1^{\prime \prime}) \cdots \hat{q}_c^{m_s}(t_s^{\prime \prime})] \, {\rm T}[\hat{q}_c^{n_1}(t_1^{\prime}) \cdots \hat{q}_c^{n_r}(t_r^{\prime})]\, \right\rangle, \] with $n_1, \dots, n_r, m_1, \dots, m_s \!\in \! {\rm I\hspace{-0.4 ex}N}$, $\{t_1^{\prime}, \dots , t_r^{\prime} \} \!\subseteq \! \{t_1, \dots , t_N \}$ and $\{t_1^{\prime \prime}, \dots , t_s^{\prime \prime} \} \!\subseteq \! \{t_1, \dots , t_N \}$ (thus, $r,s \leq N$): \begin{equation} G_{c \; m_1 \cdots \, m_s}^{n_1 \cdots \, n_r} \!\hspace{0.1ex} (t_1^{\prime}, \dots, t_r^{\prime}; t_1^{\prime \prime}, \dots, t_s^{\prime \prime})= \! \left. { \left(-i \partial \right) ^{n_1+ \cdots +n_r+ m_1+ \cdots +m_s} Z_c( \{ j_+ \},\{ j_- \} )_{(t_1, \dots , t_N)} \over \left[ \partial j_+(t_1^{\prime}) \right]^{\hspace{-0.1 ex} n_1} \!\! \cdots \! \left[ \partial j_+(t_r^{\prime}) \right]^{\hspace{-0.1 ex} n_r} \! \left[-\partial j_-(t_1^{\prime \prime}) \right]^{\hspace{-0.1 ex} m_1} \!\! \cdots \!\left[- \partial j_-(t_s^{\prime \prime}) \right]^{\hspace{-0.1 ex} m_s} \!} \right|_{\{j_\pm \}=\{0\} } \!, \label{c-g green funct} \end{equation} where $j_\pm (t_k) \!\equiv\! j_{\pm_k}$, for $k \!=\! 1, \dots, N$. The property (\ref{decoh funct prop 4}) can also be written in terms of the corresponding generating functions as \begin{equation} Z_c(\{ j_+ \}_{\!\mbox{}_{M}}, \{ j_-\}_{\!\mbox{}_{M}} )_{(t_1^{\prime}, \dots , t_M^{\prime})} =\left. Z_c( \{ j_+ \}_{\!\mbox{}_{N}}, \{ j_- \}_{\!\mbox{}_{N}} )_{(t_1, \dots , t_N)} \right|_{\{j_\pm \}_{\!\mbox{}_{L}}=\{0\} }, \label{c-g generating funct prop 4} \end{equation} with the notation $\{ j_{\pm} \}_{\!\mbox{}_{M}} \!\equiv\! 
\{ j_{\pm}(t_1^{\prime}), \dots, j_{\pm}(t_M^{\prime}) \}$, and similarly for $\{ j_{\pm} \}_{\!\mbox{}_{L}}$ and $\{ j_{\pm} \}_{\!\mbox{}_{N}}$. Notice that this last property is consistent with (\ref{c-g green funct}), in the sense that, for instance, $G_c^{n_1 n_2}(t_1^{\prime},t_2^{\prime})$ can be equally computed either from $Z_c(\{ j_+ \}_{\mbox{}_{2}}, \{ j_-\}_{\mbox{}_{2}} )_{(t_1^{\prime},t_2^{\prime})}$, or from $Z_c( \{ j_+ \}_{\!\mbox{}_{N}}, \{ j_- \}_{\!\mbox{}_{N}} )_{(t_1, \dots , t_N)}$, with $N>2$. Having introduced the coarse-grained description of the system in terms of the operators $\hat{q}_c(t)$, we can now sketch the decoherence mechanism for them. For the Green functions (\ref{c-g green funct}), one can show that the decoherence condition described above holds in an exact way if the generating function (\ref{c-g generating funct}) depends on the sources $j_{\pm_k}$ only as a function of the differences $j_{+_k}\!-\!j_{-_k}$, {\it i.e.}, as $\Phi_{\bar{q}}( \{ j_+\!-\!j_- \} )_{(t_1, \dots , t_N)}$. Then, introducing the Fourier series corresponding to $\Phi_{\bar{q}}$, we can write \begin{equation} Z_c( \{ j_+ \},\{ j_- \})_{(t_1, \dots , t_N)}= \Phi_{\bar{q}}( \{ j_+ \hspace{-0.2ex}-\hspace{-0.2ex} j_- \} ) _{(t_1, \dots , t_N)} \equiv \sum_{\{ \bar{q}\} } {\cal P}_{\bar{q}}( \{ \bar{q} \} )_{(t_1, \dots , t_N)} \; e^{i \sum_{k=1}^{N} \bar{q}_k ( j_{+_k} - j_{-_k}) }. \label{decoherence condition} \end{equation} Note from the last expression that, if we interpret the function ${\cal P}_{\bar{q}}$ as the probability distribution for a set of random variables $\bar{q}_k$, $k \!=\! 1, \dots, N$, associated to the instants $t_k$, then $\Phi_{\bar{q}}$ is the corresponding characteristic function. Therefore, from (\ref{c-g green funct}), we get \begin{eqnarray} &&\hspace{-4ex} G_{c \; m_1 \cdots \, m_s}^{n_1 \cdots \, n_r} (t_1^{\prime}, \dots, t_r^{\prime}; t_1^{\prime \prime}, \dots, t_s^{\prime \prime})= \left. 
{ \left(-i \partial \right) ^{n_1+ \cdots +n_r+ m_1+ \cdots +m_s} \Phi_{\bar{q}}( \{ j \} )_{(t_1, \dots , t_N)} \over \left[ \partial j(t_1^{\prime}) \right]^{n_1} \cdots \left[ \partial j(t_r^{\prime}) \right]^{n_r} \left[\partial j(t_1^{\prime \prime}) \right]^{m_1} \cdots \left[\partial j(t_s^{\prime \prime}) \right]^{m_s} } \right|_{\{j \}=\{0\} } \nonumber \\ &&\hspace{-4ex} =\!\hspace{-0.1ex} \sum_{\{ \bar{q}\} } \hspace{-0.1ex} {\cal P}_{\bar{q}}(\hspace{-0.1ex} \{ \bar{q} \} \hspace{-0.1ex} ) _{\hspace{-0.1ex}(t_1, \dots , t_N)} \, \bar{q}^{n_1 \!}(t_1^{\prime}) \hspace{-0.2ex} \cdots \hspace{-0.2ex} \bar{q}^{n_r \!}(t_r^{\prime})\, \bar{q}^{m_1 \!}(t_1^{\prime \prime}) \hspace{-0.2ex} \cdots \hspace{-0.2ex} \bar{q}^{m_s \!}(t_s^{\prime \prime}) \hspace{-0.2ex} \equiv \hspace{-0.2ex} \left\langle \bar{q}^{n_1 \!}(t_1^{\prime}) \hspace{-0.2ex} \cdots \hspace{-0.2ex} \bar{q}^{n_r \!}(t_r^{\prime})\, \bar{q}^{m_1 \!}(t_1^{\prime \prime}) \hspace{-0.2ex} \cdots \hspace{-0.2ex} \bar{q}^{m_s \!}(t_s^{\prime \prime}) \right\rangle_{\hspace{-0.2ex} c} \hspace{-0.2ex}, \nonumber \\ \mbox{} \label{correlation functions} \end{eqnarray} where $\langle \hspace{1.5ex} \rangle_c$ means statistical average of the random variables, and we use the notation $\bar{q}(t_k) \!\equiv\! \bar{q}_k$, $j(t_k) \!\equiv\! j_k$, for $k \!=\! 1, \dots, N$. Note that, if (\ref{decoherence condition}) is satisfied, then the property (\ref{c-g generating funct prop 4}) reduces to \begin{equation} \Phi_{\bar{q}}( \{ j \}_{\!\mbox{}_{M}} ) _{(t_1^{\prime}, \dots , t_M^{\prime})} = \left. \Phi_{\bar{q}}( \{ j \}_{\!\mbox{}_{N}} )_{(t_1, \dots , t_N)} \right|_{\{ j \}_{\!\mbox{}_{L}}=\{0\} }, \label{prop 4} \end{equation} or, equivalently, \begin{equation} {\cal P}_{\bar{q}}( \{ \bar{q} \}_{\!\mbox{}_{M}} ) _{(t_1^{\prime}, \dots , t_M^{\prime})} = \sum_{ \{ \bar{q} \}_{\!\mbox{}_{L}}} {\cal P}_{\bar{q}}( \{ \bar{q} \}_{\!\mbox{}_{N}} ) _{(t_1, \dots , t_N)}. 
\label{prop 4 bis} \end{equation} This last property is a necessary condition for the probabilistic interpretation (\ref{correlation functions}) to be consistent. The conditions for decoherence (\ref{decoherence condition}) can be written in terms of the corresponding decoherence functions as \begin{equation} {\cal D}_c( \{ \bar{q}_+ \},\{ \bar{q}_-\} )_{(t_1, \dots , t_N)} ={\cal P}_{\bar{q}}( \{ \bar{q}_+ \} )_{(t_1, \dots , t_N)} \prod_{k=1}^N \delta_{\bar{q}_{+_k} \bar{q}_{-_k}}. \label{decoherence condition 2} \end{equation} These are actually the conditions for decoherence of coarse-grained system variables as stated in the consistent histories formulation of quantum mechanics \cite{gell-mann-hartle,hartle,halliwell93,histories,paz-zurek,dowker,halliwell}. Notice, from (\ref{proj properties}), that (\ref{decoherence condition 2}) is always satisfied for a single instant of time ({\it i.e.}, when $N \!=\! 1$) \cite{dowker}. We can now check that the interpretation of ${\cal P}_{\bar{q}}$ as a probability function is actually correct. From the second of the properties (\ref{decoh funct properties}), we have that ${\cal P}_{\bar{q}}^{\displaystyle \ast}( \{ \bar{q} \} ) ={\cal P}_{\bar{q}}( \{ \bar{q} \} )$, {\it i.e.}, ${\cal P}_{\bar{q}}$ is real. Since the diagonal elements of the decoherence functions are positive, ${\cal P}_{\bar{q}}( \{ \bar{q} \} )$ is also positive. These two properties of ${\cal P}_{\bar{q}}( \{ \bar{q} \} )_{(t_1, \dots , t_N)}$, together with (\ref{prop 4 bis}), are enough to guarantee that it can be properly interpreted as the probability distribution for a set of random variables associated to the instants $t_1, \dots , t_N$. From the first of the relations (\ref{decoh funct properties}), which yields $\sum_{\{ \bar{q}\} } {\cal P}_{\bar{q}}( \{ \bar{q} \} )=1$, it follows that this probability distribution is normalized.
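For a discrete toy example (a hypothetical two-time distribution; all numbers below are illustrative only), the relation between ${\cal P}_{\bar{q}}$, the characteristic function $\Phi_{\bar{q}}$ and the correlation functions (\ref{correlation functions}) can be checked directly:

```python
import numpy as np

# Hypothetical two-time distribution P(qbar_1, qbar_2) over three values,
# playing the role of P_qbar in the decoherence condition.
qvals = np.array([-1.0, 0.0, 1.0])
P = np.array([[0.10, 0.05, 0.05],
              [0.05, 0.30, 0.10],
              [0.05, 0.10, 0.20]])
assert np.isclose(P.sum(), 1.0)                 # normalized

def Phi(j1, j2):
    """Characteristic function: sum over qbar of P * exp(i(j1 q1 + j2 q2))."""
    phase = np.exp(1j * (j1 * qvals[:, None] + j2 * qvals[None, :]))
    return (P * phase).sum()

# Statistical averages computed directly ...
mean_q1 = (qvals[:, None] * P).sum()
corr_12 = (qvals[:, None] * qvals[None, :] * P).sum()

# ... must match derivatives of Phi at j = 0 (central differences):
#   <q1> = -i dPhi/dj1,   <q1 q2> = -d^2 Phi / dj1 dj2.
h = 1e-3
d1 = (-1j * (Phi(h, 0) - Phi(-h, 0)) / (2 * h)).real
d12 = (-(Phi(h, h) - Phi(h, -h) - Phi(-h, h) + Phi(-h, -h)) / (4 * h * h)).real
assert np.isclose(d1, mean_q1, atol=1e-5)
assert np.isclose(d12, corr_12, atol=1e-4)

# Marginalization over qbar_2 gives a normalized one-time distribution.
assert np.isclose(P.sum(axis=1).sum(), 1.0)
```

Here $\Phi$ depends only on the combination $j_+ \!-\! j_-$ (denoted simply $j$), which is precisely the decoherence condition under which the probabilistic interpretation holds.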
In practice, the conditions for decoherence described above will usually be satisfied only in an approximate way. Approximate decoherence is typically achieved through a mechanism which was proposed by Gell-Mann and Hartle \cite{gell-mann-hartle,hartle}. To see how this works, note that, assuming coarse-grainings of characteristic sizes $\sigma$ [see (\ref{c-g characteristic sizes})] and using (\ref{decoherence functional 2}), we can write the decoherence function (\ref{c-g decoh funct 2}) as \begin{eqnarray} &&\hspace{-3.9ex} {\cal D}_c( \{ \bar{q}_+ \},\{ \bar{q}_-\} )_{(t_1, \dots , t_N)} \! \simeq \! \int_{ \! \mbox{}_{ \scriptstyle \{ I_{\bar{q}_{+}} \}, \{ I_{\bar{q}_{-}} \} }} \hspace{-3ex} {\cal D}[q_+^{(0)}]\,{\cal D}[q_-^{(0)}] \prod_{k=1}^N {\cal D}[q_+^{(k)}]\,{\cal D}[q_-^{(k)}]\, \rho_s \bigl(q^{(0)}_{+_{\scriptstyle i}},q^{(0)}_{-_{\scriptstyle i}} ;t_i \bigr) \, \delta \bigl(q^{(N)}_{+_{\scriptstyle f}}\!-q^{(N)}_{-_{\scriptstyle f}} \bigr) \nonumber \\ &&\hspace{-4ex} \times \delta \Bigl(q^{\hspace{-0.1ex}(k-1)}_+\hspace{-0.1ex}(t_k) \!-\! q^{\hspace{-0.1ex}(k)}_+\hspace{-0.1ex}(t_k)\Bigr) \hspace{0.2ex} \delta \Bigl(q^{\hspace{-0.1ex}(k-1)}_-\hspace{-0.1ex}(t_k) \!-\! q^{\hspace{-0.1ex}(k)}_-\hspace{-0.1ex}(t_k)\Bigr) \hspace{0.2ex} \gamma (q^{\hspace{-0.1ex}(k)}_+\hspace{-0.1ex}(t_k) \!-\! \bar{q}_{+_k}) \hspace{0.2ex} \gamma (q^{\hspace{-0.1ex}(k)}_-\hspace{-0.1ex}(t_k) \!-\! \bar{q}_{-_k}) \prod_{k=0}^N \! e^{{i \over \hbar}\, S_{\rm eff}[q_+^{\hspace{-0.1ex}(k)} \hspace{-0.4ex},q_-^{\hspace{-0.1ex}(k)}]}, \nonumber \\ \mbox{} \label{c-g decoh funct 3} \end{eqnarray} where each path integration $\int {\cal D}[q_{\pm}^{(k)}]$, for $k=0, \dots, N$, is over paths $q_{\pm}^{(k)}(t)$ with $t \in [t_k,t_{k+1}]$, with $t_0 \equiv t_i$ and $t_{N+1} \equiv t_f$, and we have used a notation to indicate that these paths are restricted to pass through the cells $I_{\bar{q}_{\pm _k}}$ at the instants $t_k$, for $k=1, \dots, N$.
From (\ref{effective action 2}), the modulus of each factor $\exp \bigl( {{i \over \hbar}\hspace{0.2ex} S_{\rm eff}[q_+^{\hspace{-0.1ex}(k)} ,q_-^{\hspace{-0.1ex}(k)}]} \bigr)$ in the last expression is $\exp \bigl( {-{1 \over \hbar} \hspace{0.2ex} {\rm Im}\, S_{\rm IF}^{\rm eff} [q_+^{\hspace{-0.1ex}(k)} ,q_-^{\hspace{-0.1ex}(k)}]} \bigr)$. Then, if for every $k=0, \dots, N$, ${\rm Im}\, S_{\rm IF}^{\rm eff} [q_+^{\hspace{-0.1ex}(k)} ,q_-^{\hspace{-0.1ex}(k)}]$, which is always positive or zero, is much larger than $\hbar$ whenever the differences $|q_+^{\hspace{-0.1ex}(k)}\!-q_-^{\hspace{-0.1ex}(k)}|$ are larger than some ``cut-off'' sizes $d^{(k)}$, the integrand in (\ref{c-g decoh funct 3}) will be non-negligible only for $|q_+^{\hspace{-0.1ex}(k)}\!-q_-^{\hspace{-0.1ex}(k)}| \leq d^{(k)}$. If the characteristic sizes $\sigma$ of the coarse-graining satisfy $\sigma \!\gg \!d^{(k)}$, then the ``off-diagonal'' elements of ${\cal D}_c( \{ \bar{q}_+ \},\{ \bar{q}_-\} )_{(t_1, \dots , t_N)}$ are negligible and one has approximate decoherence \cite{gell-mann-hartle,hartle}. We should stress that $S_{\rm IF}^{\rm eff}[q_+,q_-]$ is the result of integrating out both the ``external'' environment degrees of freedom and also the system degrees of freedom which are ``not accessible'' to the observations (the ``internal'' environment). In general, these two integrations play an important role in the achievement of this sufficient condition for approximate decoherence. A characterization of the degree of approximate decoherence has been given in Ref.~\cite{dowker} (see also Refs.~\cite{histories,halliwell93}). Typically, $d^{(k)}$ can be estimated in terms of $\Delta t_k \!\equiv \! t_{k+1} \!-\! t_k$. When this is the case, one usually finds that the Gell-Mann and Hartle mechanism for approximate decoherence works provided all the time intervals satisfy $\Delta t_k \!\geq\!
\Delta t_c$, $k=0, \dots, N$, where $\Delta t_c$ is sufficiently large compared with some characteristic decoherence time scale $t_{\scriptscriptstyle \! D}$ ($t_{\scriptscriptstyle \! D}$ can be written in terms of $\sigma$ and some parameters characterizing the environment and the system-environment couplings) \cite{gell-mann-hartle,hartle,paz-zurek}. For $\Delta t_c$ one should take the smallest value compatible with a specified degree of approximate decoherence. In this sense, we can think of a coarse-graining as characterized both by the sizes $\sigma$ and by the time scale $\Delta t_c$. \subsection{\hspace{-2.5ex}. Effective equations of motion for the system} \label{subsec:effective eqs} Assuming that the mechanism for approximate decoherence described in the previous subsection works, an approximate effective description of the coarse-grained system variables in terms of a set of random variables [in the sense of Eq.~(\ref{correlation functions})] is available, at least for instants of time satisfying $\Delta t_k \!\geq\! \Delta t_c$, for $k=0, \dots, N$. The corresponding probability distribution ${\cal P}_{\bar{q}}( \{ \bar{q} \} )_{(t_1, \dots , t_N)}$ is given by the diagonal elements of the decoherence function (\ref{c-g decoh funct}). We shall next estimate this probability distribution. This follows essentially the derivation of Gell-Mann and Hartle in Refs.~\cite{gell-mann-hartle,hartle}. For alternative derivations for more specific models, see Refs.~\cite{halliwell93,dowker,halliwell}. Introducing the new variables $q_{\scriptscriptstyle \Delta}\!\equiv\! q_+\!-q_-$ and $q_{\scriptscriptstyle \Sigma}\!\equiv\! {1 \over 2}\, (q_+\!+\!q_-)$, and similarly for $\bar{q}_{\pm_k}$, and assuming that $\sigma \gg d^{(k)}$, note first, from (\ref{c-g decoh funct 3}), that the restrictions on the integration over $q_{\scriptscriptstyle \Delta}$ coming from the coarse-graining can be neglected in the diagonal elements of this decoherence function.
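The hierarchy $\sigma \gg d^{(k)}$ invoked here can be made concrete with the quadratic form ${\rm Im}\, S_{\rm IF}^{\rm eff} = \hbar\,(q_+\!-\!q_-)^2/d^2$, typical of Gaussian influence functionals; the numerical values below are purely illustrative:

```python
import numpy as np

# Suppression factor |exp(i S_eff/hbar)| = exp(-Im S_IF^eff / hbar) for a
# constant path separation Delta = q_+ - q_-, with the illustrative choice
# Im S_IF^eff = hbar * (Delta/d)^2.  The cut-off d and the coarse-graining
# size sigma are hypothetical, chosen so that sigma >> d.
d = 1.0e-3          # "cut-off" size d^(k)
sigma = 1.0         # coarse-graining size

def suppression(delta):
    return np.exp(-(delta / d) ** 2)

# Separations up to the cut-off contribute with O(1) weight ...
assert suppression(d) > 0.3
# ... while separations of order sigma are utterly negligible, so the
# off-diagonal elements of the decoherence function are suppressed.
assert suppression(sigma) < 1e-300
```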
Therefore, using (\ref{c-g decoh funct 2}) and (\ref{decoherence functional 2}), and writing $S_{\rm eff}[q_+,q_-]\equiv S_{\rm eff} [q_{\scriptscriptstyle \Delta},q_{\scriptscriptstyle \Sigma}]$, we get \begin{equation} {\cal P}_{\bar{q}}( \{ \bar{q}_{\scriptscriptstyle \Sigma} \} ) _{(t_1, \dots , t_N)} \simeq \int\! {\cal D}[q_{\scriptscriptstyle \Sigma}]\, \prod_{k=1}^N \gamma^2 (q_{\scriptscriptstyle \Sigma}(t_k) \!-\! \bar{q}_{{\scriptscriptstyle \Sigma}_k}) \: {\cal P}_{\rm f}[q_{\scriptscriptstyle \Sigma}], \label{probability 2} \end{equation} where \begin{equation} {\cal P}_{\rm f}[q_{\scriptscriptstyle \Sigma}] \equiv \int_{_{_{\scriptstyle q_{\mbox{}_{\hspace{-0.1ex}\Delta}}\!(t_f)=0 }}} \!\!\!\!\!\!\!\!\!\! {\cal D}[q_{\scriptscriptstyle \Delta}]\; \rho_s \!\left(q_{\scriptscriptstyle \Sigma_{\scriptstyle i}} \!+\!{\textstyle \frac{1}{2}}\, q_{\scriptscriptstyle \Delta_{\scriptstyle i}}, q_{\scriptscriptstyle \Sigma_{\scriptstyle i}} \!-\!{\textstyle \frac{1}{2}}\, q_{\scriptscriptstyle \Delta_{\scriptstyle i}} ;t_i \right) \: e^{{i \over \hbar}\, S_{\rm eff} [q_{\mbox{}_{\hspace{-0.1ex}\Delta}},q_{\mbox{}_{\Sigma}}]}. \label{probability 3} \end{equation} At this stage, we introduce two simplifications in our analysis. First, we restrict our evaluation to coarse-grained system variables having significance only up to certain scales, much larger than $\sigma$, so that the random variables $\bar{q}_k$ can be well approximated by continuous random variables. This approximation can be implemented with the use of a set of approximate projection operators $\hat{P}_{\bar{q}}(t)$, with $\bar{q}$ being continuous variables, which satisfy the properties (\ref{proj properties}) in an approximate way (see Refs.~\cite{dowker,halliwell} for an example).
Then, all the sums $\sum_{\{ \bar{q}\} }$ can be replaced by integrals $\int \hspace{-0.2ex} \prod_{k=1}^N d\bar{q}_k$ and the functions ${\cal P}_{\bar{q}}( \{ \bar{q} \} )_{(t_1, \dots , t_N)}$ become probability densities. Second, as long as we are only interested in the dynamics of the system on time scales much larger than $\Delta t_c$ ($\Delta t_c$ is proportional to the decoherence time scale $t_{\scriptscriptstyle \! D}$, which is typically extremely small, see Refs.~\cite{omnes,zurek,joos-zeh} for some examples), we can take the continuous time limit in (\ref{probability 2}). In order to do so, consider the instants $t_k \equiv t_i+k \, \Delta t$, $k = 0, \dots , N+1$, with $\Delta t \equiv (t_f-t_i)/(N+1)$. Introducing functions $\bar{q}(t)$, such that $\bar{q}(t_k) = \bar{q}_k$ (assumed now to be continuous variables), and letting $N \!\rightarrow \! \infty$ in (\ref{probability 2}) [replace $\bar{q}_{{\scriptscriptstyle \Sigma}_k}$ by $\bar{q}_k$], with $(t_f-t_i)$ kept finite (thus, $\Delta t \!\rightarrow \! 0$), we get a probability distribution functional associated to some stochastic variables $\bar{q}(t)$ \cite{feynman-hibbs}: \begin{equation} {\cal P}_{\bar{q}}[\bar{q}] \simeq \int\! {\cal D}[q_{\scriptscriptstyle \Sigma}]\: \gamma^2[q_{\scriptscriptstyle \Sigma} -\bar{q}] \: {\cal P}_{\rm f}[q_{\scriptscriptstyle \Sigma}], \label{prob functional} \end{equation} where $\gamma[q]$ is the functional corresponding to $\prod_{k=1}^N \gamma (q(t_k))$ in the limit $N \!\rightarrow \!\infty$ (some redefinitions of the parameters entering the function $\gamma (q)$ may be needed in order for such a limit to be well defined; see Refs.~\cite{halliwell,halliwell93} for an explicit example of how this limit is taken).
Notice that, taking the continuum limit in time and in the variables $\bar{q}_k$ in (\ref{decoherence condition}), we get a functional $\Phi_{\bar{q}}[j]$ which is the functional Fourier transform of ${\cal P}_{\bar{q}}[\bar{q}]$. Hence, $\Phi_{\bar{q}}[j]$ can be interpreted as the characteristic functional for the stochastic variables $\bar{q}(t)$ \cite{feynman-hibbs}. From the probability functional (\ref{prob functional}) or, equivalently, from the associated characteristic functional [by functional differentiation with respect to the sources $j(t)$], we can compute the Green functions $G_{c \; m_1 \cdots \, m_s}^{n_1 \cdots \, n_r} (t_1^{\prime}, \dots, t_r^{\prime}; t_1^{\prime \prime}, \dots, t_s^{\prime \prime})$ with each of the instants in $\{t_1^{\prime}, \dots , t_r^{\prime} \}$ being separated from $t_i$ and from the remaining instants in this set by intervals much larger than $\Delta t_c$, and similarly for the instants in $\{t_1^{\prime \prime}, \dots , t_s^{\prime \prime} \}$. We can get a good approximation to the path integral (\ref{probability 3}) by expanding $S_{\rm eff} [q_{\scriptscriptstyle \Delta},q_{\scriptscriptstyle \Sigma}]$ in powers of $q_{\scriptscriptstyle \Delta}$ and neglecting terms of higher than quadratic order, {\it i.e.}, we make a Gaussian approximation in this path integral. This expansion can be made using (\ref{effective action 2}) and writing $S_{\rm IF}^{\rm eff}[q_+,q_-]\equiv S_{\rm IF}^{\rm eff} [q_{\scriptscriptstyle \Delta},q_{\scriptscriptstyle \Sigma}]$.
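The Gaussian approximation just described rests, in a finite-dimensional analogue, on the standard integral $\int\! d^n x\, \exp\bigl( i\, c \cdot x/\hbar - x \cdot C_2\, x /2\hbar^2 \bigr) = (2\pi\hbar^2)^{n/2} (\det C_2)^{-1/2} \exp\bigl( -{1 \over 2}\, c \cdot C_2^{-1} c \bigr)$, with $x$ standing in for $q_{\scriptscriptstyle \Delta}$; a brute-force check for $n=2$ (the matrix and source below are illustrative):

```python
import numpy as np

# Numerical check of the n = 2 Gaussian integral underlying the Gaussian
# approximation; C2 must be positive definite (det C2 != 0).
hbar = 1.0
C2 = np.array([[2.0, 0.5],
               [0.5, 1.0]])                     # positive definite
c = np.array([0.3, -0.2])

# Brute-force quadrature on a symmetric grid.
x = np.linspace(-8.0, 8.0, 401)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")
xs = np.stack([X1, X2], axis=-1)
quadratic = np.einsum("...i,ij,...j->...", xs, C2, xs)
linear = c[0] * X1 + c[1] * X2
I_num = np.exp(1j * linear / hbar - quadratic / (2 * hbar**2)).sum() * dx * dx

# Closed-form result: (2 pi hbar^2)^{n/2} det(C2)^{-1/2} exp(-c.C2^{-1}.c/2).
I_exact = (2 * np.pi * hbar**2) / np.sqrt(np.linalg.det(C2)) \
          * np.exp(-0.5 * c @ np.linalg.inv(C2) @ c)
assert np.isclose(I_num.real, I_exact, rtol=1e-6)
assert abs(I_num.imag) < 1e-6                   # odd part integrates to zero
```

The exponent $-{1 \over 2}\, c \cdot C_2^{-1} c$ is precisely the structure that appears in the Gaussian estimate of ${\cal P}_{\rm f}$ derived below, with $c$ playing the role of $C[q]$.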
In this expansion, the dependence of $S_{\rm eff} [q_{\scriptscriptstyle \Delta},q_{\scriptscriptstyle \Sigma}]$ on the velocities $\dot{q}_{\scriptscriptstyle \Delta}(t)$\footnote{We understand that a term depends on $\dot{q}_{\scriptscriptstyle \Delta}(t)$ if it does so before any integration by parts.} (we assume that there is no dependence on time derivatives of higher order) gives rise, after integration by parts, to boundary terms proportional to $q_{{\scriptscriptstyle \Delta}_{\scriptstyle i}}$ (we use that $q_{{\scriptscriptstyle \Delta}_{\scriptstyle f}}=0$). For instance, assuming that $S_{s}^{\rm eff}[q]= \int\! dt \, L_s(q(t),\dot{q}(t),t)$, in the expansion of the terms $S_{s}^{\rm eff}$ we find a boundary term $- p_{s}(q_{\scriptscriptstyle \Sigma_{\scriptstyle i}}, \dot{q}_{\scriptscriptstyle \Sigma_{\scriptstyle i}},t_i) \, q_{\scriptscriptstyle \Delta_{\scriptstyle i}}$, where $p_{s} \equiv \partial L_s/ \partial \dot{q}$ are the canonical momenta. Similarly, if $S_{\rm IF}^{\rm eff}$ depends on $\dot{q}_{\scriptscriptstyle \Delta}(t)$, its expansion will contain some boundary terms. However, since, in general, $S_{\rm IF}^{\rm eff}$ depends non-locally on $q_{\scriptscriptstyle \Delta}(t)$ and $q_{\scriptscriptstyle \Sigma}(t)$, these terms will be more complicated. Note that we are considering models slightly more general than the ones studied by Gell-Mann and Hartle in Refs.~\cite{gell-mann-hartle,hartle}, since we allow for the possibility of an influence action depending on $\dot{q}_{\scriptscriptstyle \Delta}(t)$ and $\dot{q}_{\scriptscriptstyle \Sigma}(t)$. The motivation for considering such a generalization is that we are interested in field theory actions with interaction terms depending on the derivatives of the fields. 
One can show that, when expanding up to quadratic order in $q_{\scriptscriptstyle \Delta}$, the general form for the boundary terms in $S_{\rm IF}^{\rm eff}$ is $- F_1[q_{\scriptscriptstyle \Sigma}](t_i) \, q_{\scriptscriptstyle \Delta_{\scriptstyle i}}+ i F_2[q_{\scriptscriptstyle \Sigma}](t_i) \, q_{\scriptscriptstyle \Delta_{\scriptstyle i}}^2 +i \!\int\! dt \, q_{\scriptscriptstyle \Delta}(t) \, F_3[q_{\scriptscriptstyle \Sigma}](t,t_i) \, q_{\scriptscriptstyle \Delta_{\scriptstyle i}}$, where $F_1$, $F_2$ and $F_3$ are real functionals of $q_{\scriptscriptstyle \Sigma}$, which vanish when $S_{\rm IF}^{\rm eff}$ does not depend on $\dot{q}_{\scriptscriptstyle \Delta}(t)$. Finally, we get the following expansion: \begin{eqnarray} && \hspace{-3.4ex} S_{\rm eff} [q_{\scriptscriptstyle \Delta},q_{\scriptscriptstyle \Sigma}]= S_s^{\rm eff}[q_{\scriptscriptstyle \Sigma} \!+\!{\textstyle \frac{1}{2}}\, q_{\scriptscriptstyle \Delta}] -S_s^{\rm eff}[q_{\scriptscriptstyle \Sigma} \!-\!{\textstyle \frac{1}{2}}\, q_{\scriptscriptstyle \Delta}] +S_{\rm IF}^{\rm eff} [q_{\scriptscriptstyle \Delta},q_{\scriptscriptstyle \Sigma}] = -p_1[q_{\scriptscriptstyle \Sigma}](t_i) \, q_{\scriptscriptstyle \Delta_{\scriptstyle i}}+ i F_2[q_{\scriptscriptstyle \Sigma}](t_i) \, q_{\scriptscriptstyle \Delta_{\scriptstyle i}}^2 \nonumber \\ &&\hspace{-3.4ex} +\, i \!\int\! dt \, q_{\scriptscriptstyle \Delta}(t) \, F_3[q_{\scriptscriptstyle \Sigma}](t,t_i) \, q_{\scriptscriptstyle \Delta_{\scriptstyle i}} \!+\!\! \int\! dt\: q_{\scriptscriptstyle \Delta}(t) C[q_{\scriptscriptstyle \Sigma}](t) +{i \over 2\hbar} \!\int\! 
dt\, dt^{\prime}\, q_{\scriptscriptstyle \Delta}(t) \, q_{\scriptscriptstyle \Delta}(t^{\prime}) \, C_2[q_{\scriptscriptstyle \Sigma}](t,t^{\prime}) +O \!\left(q_{\scriptscriptstyle \Delta}^3 \right) \!, \nonumber \\ \mbox{} \label{eff action expansion} \end{eqnarray} with \begin{equation} p_1[q](t_i) \equiv p_s (q_i,\dot{q}_i,t_i)+F_1[q](t_i), \hspace{5ex} C[q](t) \equiv {\delta S_s^{\rm eff}[q] \over \delta q(t)}+C_1[q](t), \label{C} \end{equation} and \begin{equation} C_k[q_{\scriptscriptstyle \Sigma}](t_1,.\,.\,.\,,t_k)\equiv \left( {\hbar \over i} \right)^{\! k-1} \!\!\! \left. {\delta^k S_{\rm IF}^{\rm eff} [q_{\scriptscriptstyle \Delta},q_{\scriptscriptstyle \Sigma}] \over \delta q_{\scriptscriptstyle \Delta}(t_1) \cdot \cdot \cdot \delta q_{\scriptscriptstyle \Delta}(t_k)} \right|_{q_{\mbox{}_{\hspace{-0.1ex}\Delta}}=0}, \label{C's} \end{equation} where the functional derivatives with respect to $q(t)$ are defined for variations which keep the value of $q(t)$ fixed at $t=t_i$ and $t=t_f$. Substituting the expansion (\ref{eff action expansion}) into Eq.~(\ref{probability 3}), we get a Gaussian path integral, which can be calculated. Note that, since ${\rm Im}\, S_{\rm IF}^{\rm eff} \geq 0$, $C_2[q](t,t^{\prime})$ is positive semi-definite. In order that the Gaussian approximation that we have carried out is valid, we must assume in addition that $C_2[q](t,t^{\prime})$ is strictly positive definite and, thus, $\det C_2[q] \neq 0$. We get \begin{equation} {\cal P}_{\rm f}[q] \simeq N \, W_i[q] \left[ \det \! \left( C_2[q]/ 2 \pi \hbar^2 \right) \right]^{-1/2} e^{-{1 \over 2}\! \int\! dt\, dt^{\prime}\, C[q](t)\, C_2^{-1}[q](t,t^{\prime})\, C[q](t^{\prime}) }, \label{approx probability} \end{equation} where $N$ is a normalization constant, $C_2^{-1}$ is the inverse of $C_2$ defined by \begin{equation} \int \! dt^{\prime \prime}\, C_2(t,t^{\prime \prime}) \, C_2^{-1}(t^{\prime \prime},t^{\prime})= \delta(t-t^{\prime}), \end{equation} $W_i[q] \equiv W\! 
\left(q(t_i),p[q](t_i),\Pi[q](t_i);t_i \right)$, with \begin{equation} W(q,p,\Pi;t_i) \equiv \int\! {dq_0 \over 2 \pi \hbar } \; e^{-{i \over \hbar} q_0 p} \, e^{-{1 \over \hbar} q_0^2 \Pi} \rho_s (q+{\textstyle \frac{1}{2}} \hspace{0.2ex} q_0, q-{\textstyle \frac{1}{2}} \hspace{0.2ex} q_0;t_i ), \label{Wigner funct} \end{equation} and \begin{eqnarray} &&p[q](t_i) \equiv p_1[q](t_i)+ \hbar \! \int \! dt \, dt^{\prime } \, F_3[q](t,t_i) \, C_2^{-1}[q](t,t^{\prime})\, C[q](t^{\prime}), \nonumber \\ &&\Pi[q](t_i) \equiv F_2[q](t_i) - {\hbar \over 2}\! \int \! dt \, dt^{\prime } \, F_3[q](t,t_i) \, C_2^{-1}[q](t,t^{\prime})\, F_3[q](t^{\prime},t_i). \end{eqnarray} Note that the function $W$ defined in (\ref{Wigner funct}) is a generalization of the Wigner function associated to the initial state of the system, and it reduces to the ordinary Wigner function for $\Pi =0$ \cite{wigner}. Note that, in expression (\ref{approx probability}), the momenta $p[q](t_i)$ in this generalized Wigner function are in general different from the canonical momenta $p_s (q_i,\dot{q}_i,t_i)$. When $S^{\rm eff}_{\rm IF}$ does not depend on the velocities $\dot{q}_{\scriptscriptstyle \Delta}(t)$, one has $p[q](t_i)=p_s (q_i,\dot{q}_i,t_i)$ and $\Pi[q](t_i)=0$, and thus $W_i[q]$ is the standard Wigner function. From the definition (\ref{C's}), and using the properties of $S_{\rm IF}^{\rm eff}[q_+,q_-]$, we can see that \begin{eqnarray} C_1[q](t)\!\!\!&=&\!\!\! \left.{\delta \, {\rm Re}\, S_{\rm IF}^{\rm eff}[q_+,q_-] \over \delta q_+(t) }\right|_{q_+=q_-=q} =\left.{\delta S_{\rm IF}^{\rm eff}[q_+,q_-] \over \delta q_+(t) }\right|_{q_+=q_-=q}, \nonumber \\ C_2[q](t,t^{\prime})\!\!\!&=&\!\!\!{\hbar \over 2} \left.
\left[ {\delta^2\, {\rm Im}\, S_{\rm IF}^{\rm eff}[q_+,q_-] \over \delta q_+(t) \delta q_+(t^{\prime}) } -{\delta^2\, {\rm Im}\, S_{\rm IF}^{\rm eff}[q_+,q_-] \over \delta q_+(t) \delta q_-(t^{\prime}) } \right] \right|_{q_+=q_-=q}, \label{C's 2} \end{eqnarray} and then, from (\ref{C}) and (\ref{effective action 2}), we have \begin{equation} C[q](t)=\left.{\delta S_{\rm eff}[q_+,q_-] \over \delta q_+(t) }\right|_{q_+=q_-=q}. \end{equation} Substituting (\ref{approx probability}) into (\ref{prob functional}), we see that the only non-negligible contributions to the path integral in (\ref{prob functional}) come from those paths which do not deviate very far from the paths $q^{\scriptscriptstyle (0)}(t)$ which satisfy $C[q^{\scriptscriptstyle (0)}](t)=0$, that is, which satisfy the semiclassical equation (\ref{semiclassical eq}). This implies that only those paths $\bar{q}(t)$ which always remain close to the semiclassical paths $q^{\scriptscriptstyle (0)}(t)$ will give a non-negligible value to ${\cal P}_{\bar{q}}[\bar{q}]$. In this sense, the mechanism proposed by Gell-Mann and Hartle is a mechanism for decoherence and classicalization of coarse-grained system variables. However, we see that, in general, ${\cal P}_{\bar{q}}[\bar{q}]$ has a complicated functional dependence on $\bar{q}(t)$. Let us then study the deviations from a specific solution of the semiclassical equation, that is, we shall now restrict our considerations to those paths $\bar{q}(t)$ which are distributed around a given solution $q^{\scriptscriptstyle (0)}(t)$ of the semiclassical equation. We can now introduce stochastic variables $\Delta q(t) \equiv \bar{q}(t)- q^{\scriptscriptstyle (0)}(t)$ which describe the deviations from $q^{\scriptscriptstyle (0)}(t)$.
The associated probability distribution functional ${\cal P}_{\!\Delta q}[\Delta q]$ is equal to ${\cal P}_{\bar{q}}[q^{\scriptscriptstyle (0)}\!+\!\Delta q]$ up to a normalization factor, which, from (\ref{prob functional}), is given by \begin{equation} {\cal P}_{\bar{q}}[q^{\scriptscriptstyle (0)}+\Delta q] \simeq \int\! {\cal D}[q]\: \gamma^2[q] \: {\cal P}_{\rm f}[q^{\scriptscriptstyle (0)} +\Delta q+q]. \label{probability 4} \end{equation} In practice, it is difficult to work out the explicit dependence of the probability distribution functional on the characteristic parameters of the coarse-graining, $\sigma$ and $\Delta t_c$, even in simple models \cite{halliwell93,halliwell}. Nevertheless, if such parameters are small enough so that the values of ${\cal P}_{\rm f}[q^{\scriptscriptstyle (0)} +\Delta q+q]$ do not change very much for the different paths $q(t)$ which give a non-negligible contribution in (\ref{probability 4}), the functional (\ref{probability 4}) can be approximated by ${\cal P}_{\rm f}[q^{\scriptscriptstyle (0)} +\Delta q]$. We can make a further approximation by expanding ${\cal P}_{\rm f}[q^{\scriptscriptstyle (0)}+\Delta q]$ around $q^{\scriptscriptstyle (0)}$. This can be done by setting $q_{\scriptscriptstyle \Sigma}= q^{\scriptscriptstyle (0)} +\Delta q$ in (\ref{eff action expansion}), expanding in $\Delta q$, and substituting the result for this expansion in (\ref{probability 3}). The result to lowest non-trivial order is \begin{equation} {\cal P}_{\!\Delta q}[\Delta q] \simeq N[q^{\scriptscriptstyle (0)}] \, W_i[q^{\scriptscriptstyle (0)}\!+\! \Delta q] \, e^{-{1 \over 2} \int\! 
dt\, dt^{\prime}\, C_L[q^{\scriptscriptstyle (0)}+\Delta q](t)\, C_2^{-1}[q^{\scriptscriptstyle (0)}](t,t^{\prime})\, C_L[q^{\scriptscriptstyle (0)}+\Delta q](t^{\prime}) }, \label{approx gaussian probability} \end{equation} where $N[q^{\scriptscriptstyle (0)}]$ is a normalization factor and $C_L[q^{\scriptscriptstyle (0)}+\Delta q]$ is the expansion of $C[q^{\scriptscriptstyle (0)}+\Delta q]$ to linear order in $\Delta q$. Notice that, in this probability functional, the factor $W_i[q^{\scriptscriptstyle (0)}\!+\! \Delta q]$ contains all the contribution arising from the initial state of the system. This generalized Wigner function, even if computed expanding around $q^{\scriptscriptstyle (0)}$, will have in general a complicated non-local dependence on $\Delta q$, except when $S^{\rm eff}_{\rm IF}$ is independent of $\dot{q}_{\scriptscriptstyle \Delta}$, in which case it reduces to the standard Wigner function for the initial state of the system and depends only on $\Delta q_i$ and $\Delta \dot{q}_i$. If the deviations from $q^{\scriptscriptstyle (0)}$ are small enough, we can approximate $W_i[q^{\scriptscriptstyle (0)}+ \Delta q] \simeq W_i[q^{\scriptscriptstyle (0)}]$. Then, with these approximations, the variables $\Delta q$ are distributed in such a way that $C_L[q^{\scriptscriptstyle (0)}\!+\!\Delta q](t)$ are Gaussian stochastic variables characterized by \begin{equation} \left\langle C_L[q^{\scriptscriptstyle (0)}\!+\!\Delta q](t) \right\rangle_c=0, \hspace{7ex} \left\langle C_L[q^{\scriptscriptstyle (0)}\!+\!\Delta q](t)\, C_L[q^{\scriptscriptstyle (0)}\!+\!\Delta q](t^{\prime}) \right\rangle_c= C_2[q^{\scriptscriptstyle (0)}](t,t^{\prime}) . 
\label{gaussian correlators} \end{equation} Thus, the equation of motion for $\Delta q$ is the Langevin equation \begin{equation} C_L[q^{\scriptscriptstyle (0)}\!+\!\Delta q](t) +\xi(t)=0, \label{langevin eq} \end{equation} where $\xi(t)$ is a Gaussian stochastic source with \begin{equation} \left\langle \xi(t) \right\rangle_c=0, \hspace{7ex} \left\langle \xi(t) \, \xi(t^{\prime}) \right\rangle_c= C_2[q^{\scriptscriptstyle (0)}](t,t^{\prime}) . \label{gaussian correlators 2} \end{equation} We should mention that there are very simple models for quantum Brownian motion in which all the actions involved are quadratic in their variables and the interaction terms are independent of the velocities \cite{feynman-vernon,feynman-hibbs,caldeira,hu-paz-zhang,% gell-mann-hartle,hartle,halliwell93,dowker,grabert,brun}. For such models, assuming that the environment is in an initial state of thermal equilibrium, the influence functional can be computed exactly and it is Gaussian. The effective action of Feynman and Vernon in these cases is exactly of the form (\ref{eff action expansion}), with $C_1[q_{\scriptscriptstyle \Sigma}](t)$ linear in $q_{\scriptscriptstyle \Sigma}$, $C_2(t,t^{\prime})$ independent of $q_{\scriptscriptstyle \Sigma}$ and $F_1\!=\!F_2\!=\!F_3\!=\!0$. Thus, for these models, expression (\ref{approx probability}) is actually exact. In these cases, with the approximation ${\cal P}_{\bar{q}}[\bar{q}] \simeq {\cal P}_{\rm f}[\bar{q}]$, one can derive a Langevin equation for the stochastic variables $\bar{q}(t)$, without the need to introduce a specific solution $q^{\scriptscriptstyle (0)}$ of the semiclassical equation. This Langevin equation is simply $C[\bar{q}](t)\!+\!\xi(t)\!=\!0$, where $\xi(t)$ is a Gaussian stochastic source with $\left\langle \xi(t)\right\rangle_c=0$ and $\left\langle \xi(t) \, \xi(t^{\prime})\right\rangle_c= C_2(t,t^{\prime})$. 
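For the simplest of the quadratic quantum Brownian motion models just mentioned, the Langevin equation $C[\bar{q}](t)\!+\!\xi(t)\!=\!0$ reduces, in the Ohmic high-temperature limit, to the familiar form $M\ddot{\bar{q}}+M\gamma\dot{\bar{q}}+M\omega^{2}\bar{q}=\xi(t)$ with white noise $\left\langle \xi(t)\,\xi(t^{\prime})\right\rangle_c=2M\gamma k_{B}T\,\delta(t-t^{\prime})$. The following is a minimal numerical sketch of such an equation; it uses this standard textbook limit rather than the general case treated here, and all parameter values are illustrative:

```python
import numpy as np

# Toy Langevin equation  M q'' + M*gamma*q' + M*omega^2 q = xi(t),
# with Gaussian white noise  <xi(t) xi(t')>_c = 2*M*gamma*kT*delta(t-t').
# All parameter values are illustrative, not taken from the text.
M, gamma, omega, kT = 1.0, 0.5, 1.0, 1.0
dt, n_steps, n_paths = 0.01, 20000, 200

rng = np.random.default_rng(0)
q = np.zeros(n_paths)
p = np.zeros(n_paths)          # p = M dq/dt
qsq = []
for i in range(n_steps):
    # Euler-Maruyama step: the noise increment over dt has
    # variance 2*M*gamma*kT*dt.
    xi = rng.normal(0.0, np.sqrt(2.0 * M * gamma * kT * dt), n_paths)
    p += (-gamma * p - M * omega**2 * q) * dt + xi
    q += (p / M) * dt
    if i > n_steps // 2:       # discard the initial transient
        qsq.append(np.mean(q**2))

# At late times, equipartition gives <q^2> = kT/(M*omega^2).
qsq_mean = float(np.mean(qsq))
print(qsq_mean)
```

For a general non-local kernel $C_2(t,t^{\prime})$, the white-noise increment would have to be replaced by correlated (colored) Gaussian noise with that covariance.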
However, for models with more complicated actions, we are only able to derive effective equations of motion for the deviations $\Delta q$ around a given solution $q^{\scriptscriptstyle (0)}$ of the semiclassical equation. \subsection{\hspace{-2.5ex}. A quick method to obtain the Langevin equation} \label{subsec:quick method} Starting with the effective action of Feynman and Vernon (\ref{effective action 2}), there is a quick way to obtain the Langevin equation (\ref{langevin eq}) for the deviations $\Delta q$ around a specific solution of the semiclassical equation. This method has actually been extensively used in the literature, in the context of quantum Brownian motion \cite{caldeira,hu-matacz94,hu-paz-zhang2}, and also in the context of field theory \cite{greiner,matacz,morikawa,shaisultanov,gleiser}, including some models for gravity interacting with a scalar field \cite{calzettahu,humatacz,husinha,cv96,lomb-mazz,ccv97,campos-hu}. One starts with an expansion of this effective action around a solution $q^{\scriptscriptstyle (0)}(t)$ of the semiclassical equation up to quadratic order in perturbations $\Delta q_{\pm}$ satisfying $\Delta q_+(t_i)=\Delta q_-(t_i)$ and $\Delta q_+(t_f)=\Delta q_-(t_f)$ (in the simplest models, in which this effective action is exactly quadratic in $q_+$ and $q_-$, one works directly with the exact expression). From (\ref{eff action expansion}), it is easy to see that the expansion for the influence action reads \begin{eqnarray} &&\hspace{-10.5ex} S^{\rm eff}_{\rm IF}[q^{\scriptscriptstyle (0)}\!+\!\Delta q_+ , q^{\scriptscriptstyle (0)}\!+\!\Delta q_-]= \int\! dt \left(\Delta q_+(t)\!-\!\Delta q_-(t) \right) C_1[q^{\scriptscriptstyle (0)}\!+\!{\textstyle \frac{1}{2}} \hspace{0.2ex}(\Delta q_+ \!+\! \Delta q_-)](t) \nonumber \\ && \hspace{-1ex} +\,{i \over 2\hbar} \int\! 
dt\, dt^{\prime} \left(\Delta q_+(t)\!-\!\Delta q_-(t) \right) C_2[q^{\scriptscriptstyle (0)}](t,t^{\prime}) \left(\Delta q_+(t^{\prime})\!-\!\Delta q_-(t^{\prime}) \right) +O (\Delta q^3 ), \label{influence action expansion} \end{eqnarray} where it is understood that $C_1$ has to be expanded up to linear order. Using the identity, which follows from a Gaussian path integration, \begin{equation} e^{-{1 \over 2\hbar^2}\! \int\! dt\, dt^{\prime}\, \left(\Delta q_+(t)-\Delta q_-(t) \right)\, C_2[q^{\scriptscriptstyle (0)}](t,t^{\prime})\, \left(\Delta q_+(t^{\prime})-\Delta q_-(t^{\prime}) \right)}= \int\! {\cal D}[\xi]\: {\cal P}_{\xi}[\xi]\, e^{{i \over \hbar} \!\int\! dt\, \xi(t)\, \left(\Delta q_+(t)-\Delta q_-(t) \right) }, \label{identity} \end{equation} where ${\cal P}_{\xi}[\xi]$ is the Gaussian probability distribution functional for the Gaussian stochastic variables $\xi(t)$ characterized by (\ref{gaussian correlators 2}), that is, \begin{equation} {\cal P}_{\xi}[\xi]= \frac{e^{-{1\over2}\!\int\! dt\, dt^{\prime}\, \xi(t) \, C_2^{-1}[q^{\scriptscriptstyle (0)}](t,t^{\prime})\, \xi(t^{\prime}) }} {\int\! {\cal D}\bigl[\bar{\xi}\bigr]\: e^{-{1\over2}\!\int\! d\tau \, d\tau ^{\prime}\, \bar{\xi}(\tau) \, C_2^{-1}[q^{\scriptscriptstyle (0)}](\tau,\tau ^{\prime})\, \bar{\xi}(\tau ^{\prime}) }}, \label{xi probability} \end{equation} we can write in this approximation \begin{equation} \bigl|\hspace{0.2ex}{\cal F}^{\rm eff}_{\rm IF} [q^{\scriptscriptstyle (0)}\!+\!\Delta q_+, q^{\scriptscriptstyle (0)}\!+\!\Delta q_-] \hspace{0.2ex}\bigr|= e^{-{1 \over \hbar}\,{\rm Im}\, S^{\rm eff}_{\rm IF} [q^{\scriptscriptstyle (0)}+\Delta q_+, q^{\scriptscriptstyle (0)}+\Delta q_-]}= \left\langle e^{{i \over \hbar}\! \int\! dt \, \xi(t)\, \left(\Delta q_+(t)-\Delta q_-(t) \right) } \right\rangle_c, \label{identity2} \end{equation} where $\langle \hspace{1.5ex} \rangle_c$ means statistical average over the stochastic variables $\xi(t)$. 
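The content of the identity (\ref{identity}), or equivalently (\ref{identity2}), is that the Gaussian factor is the characteristic functional of the stochastic variables $\xi(t)$. In zero dimensions the path integral collapses to an ordinary Gaussian average: for a single Gaussian variable $\xi$ with variance $C_2$, one has $\langle e^{ik\xi}\rangle_c=e^{-C_2 k^{2}/2}$. This can be checked by a quick Monte Carlo average (the values of $C_2$ and $k$ below are arbitrary):

```python
import numpy as np

# Zero-dimensional analogue of the Gaussian identity (hbar = 1): for a
# single Gaussian variable xi with <xi>_c = 0 and <xi^2>_c = C2, the
# characteristic function is  <exp(i*k*xi)>_c = exp(-C2*k**2/2).
# The values of C2 and k are arbitrary.
rng = np.random.default_rng(1)
C2, k = 0.7, 1.3
xi = rng.normal(0.0, np.sqrt(C2), 2_000_000)

mc = np.mean(np.exp(1j * k * xi))      # statistical average over xi
exact = np.exp(-0.5 * C2 * k**2)
err = abs(mc - exact)
print(err)
```

The sampling error decreases as the inverse square root of the number of realizations, so the Monte Carlo average reproduces the analytic characteristic function to high accuracy.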
Thus, the effect of the imaginary part of the influence action (\ref{influence action expansion}) on the corresponding influence functional is equivalent to the averaged effect of the stochastic source $\xi(t)$ coupled linearly to the perturbations $\Delta q_{\pm}$ (note that, in the above expressions, the perturbations $\Delta q_{\pm}$ are deterministic functions). Notice that expression (\ref{identity}) or, equivalently, (\ref{identity2}) gives the characteristic functional of the stochastic variables $\xi(t)$ \cite{feynman-hibbs}. The influence functional, in the approximation (\ref{influence action expansion}), can then be written as a statistical average over $\xi$: \begin{equation} {\cal F}^{\rm eff}_{\rm IF} [q^{\scriptscriptstyle (0)}\!+\!\Delta q_+, q^{\scriptscriptstyle (0)}\!+\!\Delta q_-]= \left\langle e^{{i \over \hbar}\, {\cal A}^{\rm eff}_{\rm IF}[\Delta q_+,\Delta q_-;\xi] } \right\rangle_c, \end{equation} where \begin{equation} {\cal A}^{\rm eff}_{\rm IF}[\Delta q_+,\Delta q_-;\xi] \equiv {\rm Re}\, S^{\rm eff}_{\rm IF} [q^{\scriptscriptstyle (0)}\!+\!\Delta q_+, q^{\scriptscriptstyle (0)}\!+\!\Delta q_-] +\!\int\! dt \, \xi(t) \left(\Delta q_+(t)-\Delta q_-(t) \right) +O (\Delta q^3 ), \end{equation} in which ${\rm Re}\, S^{\rm eff}_{\rm IF}$ can be read off from expression (\ref{influence action expansion}). The Langevin equation (\ref{langevin eq}) can be easily derived from the action \begin{equation} {\cal A}_{\rm eff}[\Delta q_+,\Delta q_-;\xi] \equiv S_s^{\rm eff}[q^{\scriptscriptstyle (0)}\!+\!\Delta q_+]- S_s^{\rm eff}[q^{\scriptscriptstyle (0)}\!+\!\Delta q_-]+ {\cal A}^{\rm eff}_{\rm IF}[\Delta q_+,\Delta q_-;\xi], \label{new effective action} \end{equation} where $S_s^{\rm eff}[q^{\scriptscriptstyle (0)}\!+\!\Delta q_{\pm}]$ has to be expanded up to second order in the perturbations $\Delta q_{\pm}$. That is, \begin{equation} \left. 
\frac{\delta {\cal A}_{\rm eff}[\Delta q_+,\Delta q_-;\xi]} {\delta \Delta q_+(t)} \right|_{\Delta q_+=\Delta q_-=\Delta q}=0 \label{quick Langevin eq} \end{equation} leads to Eq.~(\ref{langevin eq}). \section{\hspace{-2.5ex}. Effective equations of motion for the gravitational field} \label{sec:Einstein-Langevin} \setcounter{equation}{0} In this section, we shall apply the results of the previous section to derive effective equations of motion for the gravitational field in a semiclassical regime. In order to do so, we will consider the simplest case of a linear real scalar field $\Phi$ coupled to the gravitational field. We shall restrict ourselves to the case of fields defined on a globally hyperbolic manifold ${\cal M}$. In this case, we would consider the metric field $g_{ab}(x)$ as the system degrees of freedom, and the scalar field $\Phi(x)$ and also some ``high-momentum'' gravitational modes, regarded as inaccessible to observation, as the environment variables. Unfortunately, since the form of a complete quantum theory of gravity interacting with matter is unknown, we do not know what these ``high-momentum'' gravitational modes are. Such a fundamental quantum theory might not even be a field theory, in which case the metric and scalar fields would not be fundamental objects. Thus, in this case, we cannot attempt to evaluate the effective actions in Eq.~(\ref{s-e effective actions}) starting from the fundamental quantum theory and integrating out the ``high-momentum'' gravitational modes. What we can do instead is to adopt the usual procedure when dealing with an effective quantum field theory. That is, we shall take for the actions $S_{s}^{\rm eff}[g]$ and $S_{se}^{\rm eff}[g^+,\Phi_+;g^-,\Phi_-]$ the most general local form compatible with general covariance and with the properties of $S_{se}^{\rm eff}$ [these properties are analogous to those of $S_{\rm IF}$ in Eq.~(\ref{influence action properties})] \cite{weinberg,donoghue}. 
The general form for $S_{s}^{\rm eff}[g]$ is \begin{equation} S_{s}^{\rm eff}[g]=\int \! d^4 x \, \sqrt{- g} \left[{1\over 16 \pi G_{B}} \left(R-2\Lambda_{B}\right) + \alpha_{B} C_{abcd}C^{abcd}+\beta_{B} R^2 + \cdots \right], \label{grav action} \end{equation} where $R$ and $C_{abcd}$ are, respectively, the scalar curvature and the Weyl tensor associated to the metric $g_{ab}$, $1/G_B$, $\Lambda_B /G_B$, $\alpha_B$ and $\beta_B$ are bare coupling constants and the dots represent terms of higher order in the curvature [because of the Gauss-Bonnet theorem in four spacetime dimensions, there is no need of considering terms of second order in the curvature different from those written in Eq.~(\ref{grav action})]. Since ${\cal M}$ is a globally hyperbolic manifold, we can foliate it by a family of Cauchy hypersurfaces $\Sigma_{t}$, labeled by a time coordinate $t$. We use the notation ${\bf x}$ for spatial coordinates on each of these hypersurfaces, and $t_{i}$ and $t_{f}$ for some initial and final times, respectively. The integration domain for all the action terms must now be understood as a compact region ${\cal U}$ of the manifold ${\cal M}$, bounded by the hypersurfaces $\Sigma_{t_i}$ and $\Sigma_{t_f}$ ({\it i.e.}, as in the previous section, integrals in $t$ are integrals between $t_i$ and $t_f$). For the matter part of the effective action, let us consider the following ansatz: \begin{equation} S_{se}^{\rm eff}[g^+,\Phi_+;g^-,\Phi_-]=S_m[g^+,\Phi_+] -S_m[g^-,\Phi_-], \label{effective action ansatz} \end{equation} with \begin{equation} S_m[g,\Phi] \equiv -{1\over2} \int\! d^4x \, \sqrt{- g} \left[g^{ab}\partial_a \Phi \hspace{0.2ex} \partial_b \Phi +\left(m^2+ \xi R \right)\Phi^2 + \cdots \right], \label{scalar field action} \end{equation} where $\xi$ is a dimensionless coupling parameter of the field to the scalar curvature, and the dots stand for terms of higher order in the curvature and in the number of derivatives of the scalar field. 
Self-interaction terms for the scalar field could also be included but, for simplicity, we shall ignore them in this paper. One can see that general covariance and the properties of $S_{se}^{\rm eff}[g^+,\Phi_+;g^-,\Phi_-]$ imply that imaginary terms and terms mixing the ``plus'' and ``minus'' fields in this action must necessarily be non-local. Thus, within a local approximation, the ansatz (\ref{effective action ansatz}) is the most general form for this action. We shall comment below on some limitations of this local approximation. In order to simplify the analysis, we neglect the contributions of the higher order terms not written in Eqs.~(\ref{grav action}) and (\ref{scalar field action}). Assuming that the mass of the scalar field is much smaller than the Planck mass, this is a good approximation in a regime where all the characteristic curvature scales are far enough from the Planck scales. The terms in the gravitational Lagrangian density proportional to $R^2$ and $C_{abcd}C^{abcd}$ need to be considered in order to renormalize the matter one-loop ultraviolet divergencies. Assuming the form (\ref{effective action ansatz}) for the matter part of the effective action, we can now introduce the corresponding effective influence functional as in Eq.~(\ref{effective influence functional}). Let us assume that the state of the scalar field in the Schr\"{o}dinger picture at the initial time $t\! =\! t_{i}$ is described by a density operator $\hat{\rho}^{\rm \scriptscriptstyle S}(t_{i})$ (in the notation of the previous section, this was $\hat{\rho}^{\rm \scriptscriptstyle S}_e(t_{i})$, but, here, we drop the subindex $e$ to simplify the notation). If we now consider the theory of a scalar field quantized in a classical background spacetime $({\cal M},g_{ab})$ through the action (\ref{scalar field action}), this state would correspond to a state in the Heisenberg picture described by a density operator $\hat{\rho}[g]$. 
Let $\left\{ |\varphi(\mbox{\bf x})\rangle^{\rm \scriptscriptstyle S} \right\}$ be the basis of eigenstates of the Schr\"{o}dinger picture scalar field operator $\hat{\Phi}^{\rm \scriptscriptstyle S}({\bf x})$: $\hat{\Phi}^{\rm \scriptscriptstyle S}({\bf x}) \, |\varphi\rangle ^{\rm \scriptscriptstyle S}= \varphi(\mbox{\bf x}) \, |\varphi\rangle^{\rm \scriptscriptstyle S}$. The matrix elements of $\hat{\rho}^{\rm \scriptscriptstyle S}(t_{i})$ in this basis will be written as $\rho_{i} \!\left[\varphi,\tilde{\varphi}\right] \equiv \mbox{}^{\rm \scriptscriptstyle S} \langle \varphi|\,\hat{\rho}^{\rm \scriptscriptstyle S}(t_{i}) \, |\tilde{\varphi}\rangle^{\rm \scriptscriptstyle S}$. We can now introduce the effective influence functional as \begin{equation} {\cal F}^{\rm eff}_{\rm IF}[g^+,g^-] \equiv \int\! {\cal D}[\Phi_+]\; {\cal D}[\Phi_-] \; \rho_i \!\left[\Phi_+(t_i),\Phi_-(t_i) \right] \, \delta\!\left[\Phi_+(t_f)\!-\!\Phi_-(t_f) \right]\: e^{i\left(S_{m}[g^+,\Phi_+]-S_{m}[g^-,\Phi_-]\right) }, \label{path integral} \end{equation} and the effective influence action will be given by ${\cal F}^{\rm eff}_{\rm IF}[g^+,g^-] \equiv e^{i S^{\rm eff}_{\rm IF}[g^+,g^-]}$. Of course, trying to show how the mechanism for decoherence and classicalization of the previous section can work in this case would involve some technical difficulties, such as introducing diffeomorphism invariant coarse-grainings and properly eliminating the gauge redundancy (with the use of some suitable Faddeev-Popov method) in the path integrals. We are not going to deal with such issues in this paper. We shall rather assume that they can be suitably implemented without changing the main results for the effective equations of motion. Expression (\ref{path integral}) is actually formal: it is ill-defined and must be regularized in order to get a meaningful quantity for the influence functional. 
We shall formally assume that we can regularize it using dimensional regularization, that is, that we can give meaning to Eq.~(\ref{path integral}) by dimensional continuation of all the quantities that appear in this expression. We should mention, however, that when performing specific calculations, the dimensional regularization procedure may not be in all cases the most suitable one. In this sense, one should understand the following derivation as being formal. Using dimensional regularization, we must substitute the action $S_m$ in (\ref{path integral}) by some generalization to $n$ spacetime dimensions. This can be taken as \begin{equation} S_m[g,\Phi_{n}] = -{1\over2} \int\! d^n x \, \sqrt{- g} \left[g^{ab} \partial_a \Phi_{n} \partial_b \Phi_{n} +\left(m^2+ \xi R \right)\Phi_{n}^2 \right], \label{scalar action} \end{equation} where we use a notation in which we write a subindex $n$ in all the quantities that have different physical dimensions from the corresponding physical quantities in the spacetime of four dimensions. The quantities that do not carry a subindex $n$ have the same physical dimensions as the corresponding ones in four spacetime dimensions, although they should not be confused with such physical quantities. A quantity with a subindex $n$ can always be associated with another one without a subindex $n$; these are related by some mass scale $\mu$; for instance, it is easy to see that $\Phi_{n}=\mu^{{n-4\over 2}}\,\Phi$. In order to write the effective equations for the metric field in dimensional regularization, we need to substitute the action (\ref{grav action}) by some suitable generalization to $n$ spacetime dimensions. We take \begin{equation} S_{s}^{\rm eff}[g]=\mu^{n-4} \!\int \! 
d^n x \,\sqrt{- g} \left[{1\over 16 \pi G_{B}} \left(R-2\Lambda_{B}\right)+ {2\over 3}\,\alpha_{B} \left(R_{abcd}R^{abcd}- R_{ab}R^{ab} \right)+\beta_{B} R^2 \right], \label{grav action in n} \end{equation} where $R_{abcd}$ is the Riemann tensor and, again, the mass parameter $\mu$ has been introduced in order to get the correct physical dimensions. Using the Gauss-Bonnet theorem in four spacetime dimensions, one can see that the action obtained by setting $n\!=\!4$ in (\ref{grav action in n}) is equivalent to (\ref{grav action}). The form of the action (\ref{grav action in n}) is suggested by the Schwinger-DeWitt analysis of the divergencies in the stress-energy tensor in dimensional regularization \cite{bunch}. The effective action of Feynman and Vernon (\ref{effective action 2}) is in our case given by $S_{\rm eff}[g^+,g^-]= S_{s}^{\rm eff}[g^+]-S_{s}^{\rm eff}[g^-] +S^{\rm eff}_{\rm IF}[g^+,g^-]$. Since the action terms (\ref{scalar action}) and (\ref{grav action in n}) contain second order derivatives of the metric, one should also add some boundary terms to them \cite{wald84,humatacz}. The effect of these boundary terms is simply to cancel out the boundary terms that appear when taking variations of $S_{\rm eff}[g^+,g^-]$ that keep the value of $g^+_{ab}$ and $g^-_{ab}$ fixed on the boundary of ${\cal U}$. They guarantee that we can obtain an expansion for $S_{\rm eff}[g^+,g^-]$ analogous to (\ref{eff action expansion}), with no extra boundary terms coming from the integration by parts of terms containing second order derivatives of $g_{ab}^{\scriptscriptstyle \Delta} \equiv g^+_{ab}-g^-_{ab}$. Alternatively, in order to obtain the effective equations for the metric [equations analogous to (\ref{semiclassical eq}) and (\ref{langevin eq})], we can work with the action terms (\ref{scalar action}) and (\ref{grav action in n}) (without boundary terms) and neglect all boundary terms when taking variations with respect to $g^{\pm}_{ab}$. 
From now on, all the functional derivatives with respect to the metric must be understood in this sense. \subsection{\hspace{-2.5ex}. The semiclassical Einstein equation} \label{subsec:semiclassical Einstein} From the action (\ref{scalar action}), we can define the stress-energy tensor functional in the usual way \begin{equation} T^{ab}[g,\Phi_{n}](x) \equiv {2\over\sqrt{- g(x)}} \, \frac{\delta S_m[g,\Phi_{n}]}{\delta g_{ab}(x)}, \label{s-t functional} \end{equation} which yields \begin{equation} T^{ab}[g,\Phi_n]=\bigtriangledown^{a}\Phi_n \bigtriangledown^{b}\hspace{-0.1ex} \Phi_n- {1\over 2}\, g^{ab} \bigtriangledown^{c} \hspace{-0.1ex} \Phi_n \bigtriangledown_{\!c}\hspace{-0.1ex} \Phi_n -{1\over 2}\, g^{ab}\, m^2 \Phi_n^2 +\xi \left( g^{ab} \Box -\bigtriangledown^{a}\! \bigtriangledown^{b} +\, G^{ab} \right) \Phi_n^2 \label{class s-t} \end{equation} where $\bigtriangledown_{\!a}$ is the covariant derivative associated to the metric $g_{ab}$, $\Box \!\equiv\! \bigtriangledown_{\!a}\bigtriangledown^{a}$, and $G_{ab}$ is the Einstein tensor. Working in the Heisenberg picture, we can now formally introduce the stress-energy tensor operator for a scalar field quantized in a classical spacetime background, regularized using dimensional regularization, as \begin{equation} \hat{T}_{n}^{ab}[g] \equiv T^{ab}[ g,\hat{\Phi}_{n}[g]], \hspace{5 ex} \hat{T}^{ab}[g] \equiv \mu^{-(n-4)}\, \hat{T}_{n}^{ab}[g], \label{regul s-t} \end{equation} where $\hat{\Phi}_{n}[g](x)$ is the Heisenberg picture field operator in $n$ spacetime dimensions, which satisfies the Klein-Gordon equation \begin{equation} \left( \Box -m^2- \xi R \right) \hat{\Phi}_{n}=0, \label{Klein-Gordon in n} \end{equation} and where we use a symmetrical ordering (Weyl ordering) prescription for the operators. 
Using Eq.~(\ref{Klein-Gordon in n}), one can write the stress-energy operator in the following way: \begin{equation} \hat{T}_{n}^{ab}[g] = {1\over 2} \left\{ \bigtriangledown^{a}\hat{\Phi}_{n}[g]\, , \, \bigtriangledown^{b}\hat{\Phi}_{n}[g] \right\} + {\cal D}^{ab}[g]\, \hat{\Phi}_{n}^2[g], \label{regul s-t 2} \end{equation} where ${\cal D}^{ab}[g]$ is the differential operator \begin{equation} {\cal D}^{ab}_{x} \equiv \left(\xi-{1\over 4}\right) g^{ab}(x) \Box_{x}+ \xi \left( R^{ab}(x)- \bigtriangledown^{a}_{x} \bigtriangledown^{b}_{x} \right), \label{diff operator} \end{equation} where $R_{ab}$ is the Ricci tensor. From the definitions (\ref{path integral}), (\ref{s-t functional}) and (\ref{regul s-t}), one can see that \begin{equation} \left. {2\over\sqrt{- g(x)}} \, \frac{\delta S^{\rm eff}_{\rm IF}[g^+,g^-]} {\delta g^+_{ab}(x)} \right|_{g^+=g^-=g} \! =\left\langle \hat{T}_n^{ab}(x) \right\rangle \![g], \label{s-t expect value} \end{equation} where the expectation value is taken in the $n$-dimensional spacetime generalization of the state described by $\hat{\rho}[g]$. As in Eq.~(\ref{semiclassical eq}), if we take the functional derivative of $S_{\rm eff}[g^+,g^-]$ with respect to $g^+_{ab}$ and then set $g^+_{ab}=g^-_{ab}=g_{ab}$, we get the semiclassical Einstein equation in dimensional regularization: \begin{equation} {1\over 8 \pi G_{B}} \left( G^{ab}[g]+ \Lambda_{B} g^{ab} \right)- \left({4\over 3}\, \alpha_{B} D^{ab} +2 \beta_{B} B^{ab}\right)\! [g] = \mu^{-(n-4)} \left\langle \hat{T}_{n}^{ab}\right\rangle \! [g], \label{semiclassical eq in n} \end{equation} where the tensors $D^{ab}$ and $B^{ab}$ are defined as \begin{eqnarray} &&\hspace{-6ex} D^{ab} \equiv {1\over\sqrt{- g}} \frac{\delta}{\delta g_{ab}} \int \! d^n x \,\sqrt{- g} \left(R_{cdef}R^{cdef}- R_{cd}R^{cd} \right) = {1\over2}\, g^{ab} \! 
\left( R_{cdef} R^{cdef}- R_{cd}R^{cd}+\Box \hspace{-0.2ex} R \right) \nonumber \\ && \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \hspace{1ex} -\,2R^{acde}{R^b}_{cde}-2 R^{acbd}R_{cd}+4R^{ac}{R_c}^b -3 \hspace{0.2ex}\Box \hspace{-0.2ex} R^{ab} +\bigtriangledown^{a}\!\bigtriangledown^{b}\! \hspace{-0.2ex} R, \label{D} \end{eqnarray} and \begin{equation} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\hspace{-6.3ex} B^{ab} \equiv {1\over\sqrt{- g}} \frac{\delta}{\delta g_{ab}} \int \! d^n x \,\sqrt{- g} \, R^2 = {1\over2}\, g^{ab} R^2-2 R R^{ab}+2 \bigtriangledown^{a}\! \bigtriangledown^{b}\hspace{-0.1ex} R -2 g^{ab}\Box \hspace{-0.2ex} R. \label{B in n} \end{equation} From equation (\ref{semiclassical eq in n}), after renormalizing the coupling constants in order to eliminate the divergencies in $\mu^{-(n-4)}\langle \hat{T}_{n}^{ab}\rangle [g]$ in the limit $n\!\rightarrow \! 4$ and then taking this limit, we will get the semiclassical Einstein equation in the physical spacetime of four dimensions: \begin{equation} {1\over 8 \pi G} \left( G^{ab}[g]+ \Lambda g^{ab} \right)- 2 \left( \alpha A^{ab}+\beta B^{ab} \right)\hspace{-0.3ex}[g]= \left\langle\hat{T}_{R}^{ab}\right\rangle \![g]. \label{semiclassical Einstein eq} \end{equation} In the last equation $1/G$, $\Lambda /G$, $\alpha$ and $\beta$ are renormalized coupling constants, $\langle\hat{T}_{R}^{ab}\rangle [g]$ is the renormalized expectation value of the stress-energy tensor operator, and we have used that, for $n\!=\!4$, $D^{ab}=(3/2) A^{ab}$, where $A^{ab}$ is the local curvature tensor obtained by functional differentiation, with respect to the metric, of the action term corresponding to the Lagrangian density $C_{abcd}C^{abcd}$ [this relation follows from the four-dimensional identity $C_{abcd}C^{abcd}= R_{abcd}R^{abcd}-2R_{ab}R^{ab}+{1\over 3}\,R^2$ combined with the Gauss-Bonnet theorem]. \subsection{\hspace{-2.5ex}. 
The semiclassical Einstein-Langevin equation} \label{subsec:Einstein-Langevin} According to the results of the previous section, assuming that some suitably coarse-grained metric field satisfies the conditions for approximate decoherence and that the approximations of subsection \ref{sec:classicalization}\,\ref{subsec:effective eqs} are valid in a certain regime, small deviations from a given solution $g_{ab}$ of the semiclassical Einstein equation (\ref{semiclassical Einstein eq}) can be described by linear stochastic perturbations $h_{ab}$ to that semiclassical metric. These perturbations satisfy a Langevin equation of the form (\ref{langevin eq}), which shall be called the semiclassical Einstein-Langevin equation. Our next step will be to write the semiclassical Einstein-Langevin equation in dimensional regularization. Let us assume that $g_{ab}$ is a solution of Eq.~(\ref{semiclassical eq in n}) in $n$ spacetime dimensions. The semiclassical Einstein-Langevin equation in dimensional regularization then has the form \begin{eqnarray} {1\over 8 \pi G_{B}}\biggl( G^{ab}_L[g\!+\!h]+ \Lambda_{B} \left(g^{ab}\!-\!h^{ab}\right) \biggr)\! &\!\!\!\!\!\!\!-\!\!\!\!\!\!\!& \!\left({4\over 3}\, \alpha_{B} D^{ab}_L + 2 \beta_{B} B^{ab}_L \right)\![g \!+\! h] = \mu^{-(n-4)} \left\langle \hat{T}_{n}^{ab}\right\rangle \!\!_{\mbox{}_{\scriptstyle L}}[g\!+\!h]\nonumber \\ &&\hspace{30 ex}\!+\,2 \mu^{-(n-4)} \xi_n^{ab}, \label{Einstein-Langevin eq in n} \end{eqnarray} where $h_{ab}$ is a linear stochastic perturbation to $g_{ab}$, $h^{ab}\!\equiv\! g^{ac}g^{bd}h_{cd}$, that is, $g^{ab}\!-h^{ab}\!+O(h^2)$ is the inverse of the metric $g_{ab}\!+\!h_{ab}$, and, as in the previous section, we use a subindex ${\scriptstyle L}$ to denote an expansion up to linear order in $h_{ab}$. 
In this equation, $\langle \hat{T}_{n}^{ab}\rangle [g+h]$ is the expectation value of $\hat{T}_{n}^{ab}[g\!+\!h]$ in the $n$-dimensional spacetime generalization of the state described by $\hat{\rho}[g+h]$, and $\xi_n^{ab}$ is a Gaussian stochastic tensor characterized by the correlators \begin{equation} \left\langle\xi_n^{ab}(x) \right\rangle_{c}\!= 0, \hspace{10ex} \left\langle\xi_n^{ab}(x)\xi_n^{cd}(y) \right\rangle_{c}\!= N_n^{abcd}[g](x,y), \label{correlators in n} \end{equation} with [see Eqs.~(\ref{gaussian correlators 2}) and (\ref{C's 2})] \begin{equation} 2 N_n^{abcd}[g](x,y) \equiv \left. {1\over\sqrt{- g(x)}\sqrt{- g(y)} } \left[ \frac{\delta^2 \, {\rm Im}\, S^{\rm eff}_{\rm IF}[g^+,g^-]} {\delta g^+_{ab}(x)\delta g^+_{cd}(y)} - \frac{\delta^2 \, {\rm Im}\, S^{\rm eff}_{\rm IF}[g^+,g^-]} {\delta g^+_{ab}(x)\delta g^-_{cd}(y)} \right] \right|_{g^+=g^-=g}\!. \label{noise in n} \end{equation} We can write Eq.~(\ref{Einstein-Langevin eq in n}) in a more explicit way by working out the expansion $\langle \hat{T}_{n}^{ab}\rangle \!_{\mbox{}_{\scriptstyle L}}[g+h]$. Since, from Eq.~(\ref{s-t expect value}), we have that \begin{equation} \left\langle \hat{T}_n^{ab}(x) \right\rangle \![g+h]= \left. {2\over\sqrt{-\det (g\!+\!h)(x)}} \, \frac{\delta S^{\rm eff}_{\rm IF} [g\!+\!h^+,g\!+\!h^-]}{\delta h^+_{ab}(x)} \right|_{h^+=h^-=h}\!, \label{perturb s-t expect value} \end{equation} this expansion can be obtained from an expansion of the influence action $S^{\rm eff}_{\rm IF}[g+h^+,g+h^-]$ up to second order in $h^{\pm}_{ab}$ (in this expansion, we can neglect boundary terms). At the same time, we can obtain a more explicit expression for the noise kernel (\ref{noise in n}). To perform this expansion for the influence action, we have to compute the first and second order functional derivatives of $S^{\rm eff}_{\rm IF}[g^+,g^-]$ and then set $g^+_{ab}\!=\!g^-_{ab}\!=\!g_{ab}$. 
If we do so using the path integral representation (\ref{path integral}), we can interpret these derivatives as expectation values of operators in the Heisenberg picture for a scalar field quantized in a classical spacetime background $({\cal M},g_{ab})$ as, for instance, in expression (\ref{s-t expect value}). The relevant second order derivatives are \begin{eqnarray} \left. {1\over\sqrt{- g(x)}\sqrt{- g(y)} } \, \frac{\delta^2 S^{\rm eff}_{\rm IF}[g^+,g^-]} {\delta g^+_{ab}(x)\delta g^+_{cd}(y)} \right|_{g^+=g^-=g} \!\! \!\!\!&=&\!\!\! -H_{\scriptscriptstyle \!{\rm S}_{\scriptstyle n}}^{abcd}[g](x,y) -K_n^{abcd}[g](x,y)+ i N_n^{abcd}[g](x,y), \nonumber \\ \left. {1\over\sqrt{- g(x)}\sqrt{- g(y)} } \, \frac{\delta^2 S^{\rm eff}_{\rm IF}[g^+,g^-]} {\delta g^+_{ab}(x)\delta g^-_{cd}(y)} \right|_{g^+=g^-=g} \!\! \!\!\!&=&\!\!\! -H_{\scriptscriptstyle \!{\rm A}_{\scriptstyle n}}^{abcd} [g](x,y) -i N_n^{abcd}[g](x,y), \label{derivatives} \end{eqnarray} with \begin{eqnarray} N_n^{abcd}[g](x,y) \!\!\!&= &\!\!\! {1\over 8}\, \biggl\langle \biggl\{ \hat{T}_n^{ab}(x)- \left\langle \hat{T}_n^{ab}(x) \right\rangle , \, \hat{T}_n^{cd}(y)- \left\langle \hat{T}_n^{cd}(y)\right\rangle \biggr\} \biggr\rangle [g], \nonumber \\ H_{\scriptscriptstyle \!{\rm S}_{\scriptstyle n}}^{abcd} [g](x,y)\!\!\! &= &\!\!\! {1\over 4}\:{\rm Im} \left\langle {\rm T}^{\displaystyle \ast}\!\! \left( \hat{T}_n^{ab}(x) \hat{T}_n^{cd}(y) \right) \right\rangle \![g], \nonumber \\ H_{\scriptscriptstyle \!{\rm A}_{\scriptstyle n}}^{abcd} [g](x,y) \!\!\!&= &\!\!\! -{i\over 4}\, \left\langle {1\over 2} \left[ \hat{T}_n^{ab}(x), \, \hat{T}_n^{cd}(y) \right] \right\rangle \![g], \nonumber \\ K_n^{abcd}[g](x,y) \!\!\!&= &\!\!\! \left. 
{-1\over\sqrt{- g(x)}\sqrt{- g(y)} } \, \left\langle \frac{\delta^2 S_m[g,\Phi_{n}]}{\delta g_{ab}(x)\delta g_{cd}(y)} \right|_{\Phi_{n}=\hat{\Phi}_{n}}\right\rangle \![g], \label{kernels} \end{eqnarray} using again a symmetrical ordering (Weyl ordering) prescription for the operators in the last of these expressions. All the expectation values in these expressions are in the $n$-dimensional spacetime generalization of the state described by $\hat{\rho}[g]$. In the above equations, $\{ \; , \: \}$ and $[ \; , \: ]$ mean, respectively, the anticommutator and the commutator, and we use the symbol ${\rm T}^{\displaystyle \ast}$ to denote that, first, we have to time order the field operators $\hat{\Phi}_{n}$ and then apply the derivative operators that appear in each term of the product $T^{ab}(x) T^{cd}(y)$, where $T^{ab}$ is the functional (\ref{class s-t}). For instance, \begin{equation} {\rm T}^{\displaystyle \ast}\!\! \left(\hspace{-0.07ex} \bigtriangledown^{a}_{\!\!\! \mbox{}_{x}} \hspace{-0.1ex}\hat{\Phi}_{n}(x)\! \bigtriangledown^{b}_{\!\!\! \mbox{}_{x}}\!\hat{\Phi}_{n}(x)\! \bigtriangledown^{c}_{\!\!\! \mbox{}_{y}}\!\hat{\Phi}_{n}(y)\! \bigtriangledown^{d}_{\!\!\! \mbox{}_{y}}\!\hat{\Phi}_{n}(y)\! \right)\! =\!\!\!\!\lim_{ x_1,x_2 \rightarrow x_{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\! \mbox{}_{\mbox{}_{\mbox{}_{\mbox{}_ {\mbox{}_{\scriptstyle x_3,x_4 \rightarrow y}}}}}} }\!\!\! \bigtriangledown^{a}_{\!\!\! \mbox{}_{x_1}}\!\!\hspace{0.02ex} \bigtriangledown^{b}_{\!\!\! \mbox{}_{x_2}}\!\! \bigtriangledown^{c}_{\!\!\! \mbox{}_{x_3}}\!\! \bigtriangledown^{d}_{\!\!\! \mbox{}_{x_4}}\! {\rm T}\! \left(\hat{\Phi}_{n}(x_1)\hat{\Phi}_{n}(x_2) \hat{\Phi}_{n}(x_3)\hat{\Phi}_{n}(x_4) \right)\!, \label{T star} \end{equation} where ${\rm T}$ is the usual time ordering. Notice that all the kernels that appear in expressions (\ref{derivatives}) are real. 
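The structure of these kernels, and the fact that they are real, can be illustrated with the simplest possible toy analogue: a single harmonic oscillator in its ground state (with $m=\omega=\hbar=1$) standing in for the field, for which $\langle x(t)\,x(0)\rangle = e^{-it}/2$. The symmetrized part, $\cos(t)/2$, is the analogue of the noise kernel, while the commutator $[x(t),x(0)]=-i\sin t$ makes the dissipation-type kernel real as well. A small numerical check in a truncated Fock basis (the truncation size and the time value are arbitrary):

```python
import numpy as np

# Toy analogue of the kernels above: a single harmonic oscillator
# (m = omega = hbar = 1) in its ground state stands in for the field.
# Then <x(t) x(0)> = exp(-i*t)/2: its symmetrized part, cos(t)/2, is the
# analogue of the noise kernel and is real, while the commutator
# [x(t), x(0)] = -i*sin(t), so the dissipation-type kernel is real too.
n = 40                                    # Fock-space truncation (arbitrary)
a = np.diag(np.sqrt(np.arange(1, n)), 1)  # annihilation operator
x = (a + a.T) / np.sqrt(2.0)              # position operator
E = np.arange(n) + 0.5                    # oscillator energies

def x_t(t):
    # Heisenberg picture: x(t) = e^{iHt} x e^{-iHt}, with H diagonal here.
    phase = np.exp(1j * E * t)
    return (phase[:, None] * x) * phase.conj()[None, :]

t = 0.7                                   # arbitrary time
two_point = x_t(t)[0, :] @ x[:, 0]             # <0| x(t) x(0) |0>
noise = 0.5 * (x_t(t) @ x + x @ x_t(t))[0, 0]  # symmetrized part
comm = (x_t(t) @ x - x @ x_t(t))[0, 0]         # commutator part
print(two_point, noise, comm)
```

The symmetrized part comes out real while the full two-point function is complex, mirroring the split of $\langle \hat{T}^{ab}\hat{T}^{cd}\rangle$ into the real kernels above.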
In fact, from (\ref{kernels}), we see that the noise kernel $N_n^{abcd}$, and also the kernel $H_{\scriptscriptstyle \!{\rm A}_{\scriptstyle n}}^{abcd}$, are free of ultraviolet divergences in the limit $n \!\rightarrow \!4$. This is because, for a linear quantum field, the ultraviolet divergences in $\left\langle\hat{T}_n^{ab}(x) \hat{T}_n^{cd}(y)\right\rangle$ are the same as those of $\left\langle\hat{T}_n^{ab}(x)\right\rangle \left\langle\hat{T}_n^{cd}(y)\right\rangle$. Therefore, in the semiclassical Einstein-Langevin equation (\ref{Einstein-Langevin eq in n}), one can perform exactly the same renormalization procedure as the one for the semiclassical Einstein equation (\ref{semiclassical eq in n}). After this renormalization procedure, Eq.~(\ref{Einstein-Langevin eq in n}) will yield the semiclassical Einstein-Langevin equation in the physical spacetime ($n\!=\!4$). It can be written as \begin{equation} {1\over 8 \pi G} \Bigl( G^{ab}_L[g+h]+ \Lambda\left(g^{ab}-h^{ab}\right) \Bigr)- 2 \left( \alpha A^{ab}_L+\beta B^{ab}_L \right)\hspace{-0.3ex} [g+h]=\left\langle \hat{T}_{R}^{ab}\right\rangle \!\!_{\mbox{}_{\scriptstyle L}} [g+h] +2 \xi^{ab} , \label{Einstein-Langevin eq} \end{equation} where $\xi^{ab}$ is a Gaussian stochastic tensor with \begin{equation} \left\langle\xi^{ab}(x) \right\rangle_c = 0, \hspace{6ex} \left\langle\xi^{ab}(x)\xi^{cd}(y) \right\rangle_c = N^{abcd}[g](x,y), \label{correlators} \end{equation} where $N^{abcd} \equiv \lim_{n \rightarrow 4} \mu^{-2 (n-4)} N_n^{abcd}$. Notice from (\ref{kernels}) that the noise kernel $N^{abcd}[g](x,y)$ gives a measure of the lowest order fluctuations of the scalar field stress-energy tensor around its expectation value. Thus, the stochastic metric perturbations $h_{ab}$, solutions of the semiclassical Einstein-Langevin equation (\ref{Einstein-Langevin eq}), account for the back reaction of such matter stress-energy fluctuations on the spacetime geometry.
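Schematically, the cancellation of divergences in the noise kernel can be seen from Wick's theorem: for a linear field in a Gaussian state (suppressing indices and the derivative structure of the stress-energy operators, and writing $G(x,y)$ for a generic two-point function, a notation used only in this illustration),
\begin{equation*}
\left\langle \hat{\Phi}^{2}(x)\, \hat{\Phi}^{2}(y) \right\rangle
= \left\langle \hat{\Phi}^{2}(x) \right\rangle
\left\langle \hat{\Phi}^{2}(y) \right\rangle
+ 2\, G^{2}(x,y),
\end{equation*}
so the subtraction of the uncorrelated product removes precisely the coincidence-limit divergences, and the remaining connected part, $2\,G^{2}(x,y)$, is finite for $x \neq y$.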
For a more detailed analysis of the semiclassical Einstein-Langevin equation and some of its applications, see Ref.~\cite{mv98}. Going back to the expressions in dimensional regularization, which may be useful for calculational purposes, we can now write the expansion of the influence action around a given metric $g_{ab}$. From (\ref{s-t expect value}) and (\ref{derivatives}), taking into account that $S^{\rm eff}_{\rm IF}[g,g]=0$ and that $S^{\rm eff}_{\rm IF}[g^-,g^+]= -S^{\rm eff {\displaystyle \ast}}_{\rm IF}[g^+,g^-]$, we get \begin{eqnarray} &&\hspace{-4ex} S^{\rm eff}_{\rm IF}[g\!+\!h^+,g\!+\!h^-] ={1\over 2} \int\! d^nx\, \sqrt{- g(x)}\, \left\langle \hat{T}_{n}^{ab}(x)\right\rangle \![g] \left(h^+_{ab}(x)\!-\!h^-_{ab}(x) \right) \nonumber \\ &&\hspace{-1.9ex} -{1\over 2} \int\! d^nx\, d^ny \,\sqrt{- g(x)}\sqrt{- g(y)} \left(H_{\scriptscriptstyle \!{\rm S}_{\scriptstyle n}}^{abcd} [g](x,y)\!+\!K_n^{abcd}[g](x,y) \right)\! \left(h^+_{ab}(x)h^+_{cd}(y)\!-\!h^-_{ab}(x)h^-_{cd}(y) \right) \nonumber \\ &&\hspace{-1.9ex} -{1\over 2} \int\! d^nx\, d^ny\, \sqrt{- g(x)}\sqrt{- g(y)}\, H_{\scriptscriptstyle \!{\rm A}_{\scriptstyle n}}^{abcd} [g](x,y) \left(h^+_{ab}(x)h^-_{cd}(y)\!-\!h^-_{ab}(x)h^+_{cd}(y) \right) \nonumber \\ &&\hspace{-1.9ex} +{i\over 2} \int\! d^nx\, d^ny\, \sqrt{- g(x)}\sqrt{- g(y)}\, N_n^{abcd}[g](x,y) \left(h^+_{ab}(x)\!-\!h^-_{ab}(x) \right) \left(h^+_{cd}(y)\!-\!h^-_{cd}(y) \right)+O(h^3). \nonumber \\ \mbox{} \label{expansion 1} \end{eqnarray} From (\ref{kernels}), it is easy to see that the kernels satisfy the symmetry relations \begin{equation} H_{\scriptscriptstyle \!{\rm S}_{\scriptstyle n}}^{abcd}(x,y)= H_{\scriptscriptstyle \!{\rm S}_{\scriptstyle n}}^{cdab}(y,x), \hspace{3 ex} H_{\scriptscriptstyle \!{\rm A}_{\scriptstyle n}}^{abcd}(x,y)= -H_{\scriptscriptstyle \!{\rm A}_{\scriptstyle n}}^{cdab}(y,x), \hspace{3 ex} K_n^{abcd}(x,y) = K_n^{cdab}(y,x).
\label{symmetries} \end{equation} Using these relations, and defining \begin{equation} H_n^{abcd}(x,y)\equiv H_{\scriptscriptstyle \!{\rm S}_{\scriptstyle n}}^{abcd}(x,y) +H_{\scriptscriptstyle \!{\rm A}_{\scriptstyle n}}^{abcd}(x,y), \label{H} \end{equation} we can write the expansion (\ref{expansion 1}) as \begin{eqnarray} S^{\rm eff}_{\rm IF}[g\!+\!h^+,\!\!\!\!&g&\!\!\!\!\!+h^-] ={1\over 2} \int\! d^nx\, \sqrt{- g(x)}\, \left\langle \hat{T}_{n}^{ab}(x)\right\rangle \![g] \, \left[h_{ab}(x) \right] \nonumber \\ &&\hspace{-1ex} -{1\over 2} \int\! d^nx\, d^ny\, \sqrt{- g(x)}\sqrt{- g(y)}\, \left[h_{ab}(x)\right] \left(H_n^{abcd}[g](x,y)\! +\!K_n^{abcd}[g](x,y) \right) \left\{ h_{cd}(y) \right\} \nonumber \\ &&\hspace{-1ex} +{i\over 2} \int\! d^nx\, d^ny\, \sqrt{- g(x)}\sqrt{- g(y)}\, \left[h_{ab}(x) \right] N_n^{abcd}[g](x,y) \left[h_{cd}(y) \right]+O(h^3), \label{expansion 2} \end{eqnarray} where we have used the notation \begin{equation} \left[h_{ab}\right] \equiv h^+_{ab}\!-\!h^-_{ab}, \hspace{5 ex} \left\{ h_{ab}\right\} \equiv h^+_{ab}\!+\!h^-_{ab}. \label{notation} \end{equation} Using this expansion and noting, from (\ref{kernels}), that \begin{equation} K_n^{abcd}[g](x,y)= -{1\over 4} \left\langle \hat{T}_{n}^{ab}(x)\right\rangle \![g] {g^{cd}(x)\over\sqrt{- g(y)}}\, \delta^n(x\!-\!y)-{1\over 2}\,{1\over\sqrt{- g(y)}} \left\langle \left. \frac{\delta T^{ab}[g,\Phi_{n}](x)}{\delta g_{cd}(y)} \right|_{\Phi_{n}=\hat{\Phi}_{n}}\right\rangle \![g], \label{K} \end{equation} we get, from (\ref{perturb s-t expect value}), \begin{equation} \left\langle \hat{T}_n^{ab}(x) \right\rangle \!\!_{\mbox{}_{\scriptstyle L}} [g\!+\!h] = \left\langle \hat{T}_n^{ab}(x) \hspace{-0.1ex}\right\rangle \![g] + \left\langle \hat{T}_n^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab} [g;h](x) \right\rangle \![g] - 2 \!\int\!
d^ny \hspace{0.3ex} \sqrt{- g(y)} \hspace{0.3ex} H_n^{abcd}[g](x,y) \hspace{0.2ex} h_{cd}(y), \label{s-t expect value expansion} \end{equation} where the operator $\hat{T}_n^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab}$ is defined from the term of first order in the expansion $T^{ab}_L[g+h,\Phi_{n}]$ as \begin{equation} T^{ab}_L[g\!+\!h,\Phi_{n}]=T^{ab}[g,\Phi_{n}]+ T^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab}[g,\Phi_{n};h], \hspace{5 ex} \hat{T}_n^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab} [g;h]\equiv T^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab}[g,\hat{\Phi}_{n}[g];h], \label{T(1)} \end{equation} using, as always, a Weyl ordering prescription for the operators in the last definition. Note that the third term in the right hand side of Eq.~(\ref{s-t expect value expansion}) is due to the dependence on $h_{cd}$ of the field operator $\hat{\Phi}_{n}[g+h]$ and of the dimensional regularized version of the density operator $\hat{\rho}[g+h]$. Substituting (\ref{s-t expect value expansion}) into (\ref{Einstein-Langevin eq in n}), and taking into account that $g_{ab}$ satisfies the semiclassical Einstein equation (\ref{semiclassical eq in n}), we can write the Einstein-Langevin equation (\ref{Einstein-Langevin eq in n}) as \begin{eqnarray} &&\hspace{-2ex}{1\over 8 \pi G_{B}}\left( G^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab} [g;h](x)\!-\! \Lambda_{B}\, h^{ab}(x) \right) - {4\over 3}\, \alpha_{B} D^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab} [g;h](x) -2\beta_{B} B^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab} [g;h](x) \nonumber \\ &&\hspace{-2ex}- \mu^{-(n-4)} \hspace{-0.3ex} \left\langle \hspace{-0.1ex} \hat{T}_n^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab} [g;h](x) \hspace{-0.1ex}\right\rangle \![g] \hspace{-0.2ex}+\hspace{-0.2ex} 2 \hspace{-0.2ex}\!\int\! d^ny\, \sqrt{- g(y)}\,\mu^{-(n-4)} H_n^{abcd}[g](x,y)\, h_{cd}(y) \hspace{-0.2ex}=\hspace{-0.2ex} 2 \mu^{-(n-4)} \xi_n^{ab}(x). 
\nonumber \\ \mbox{} \label{Einstein-Langevin eq 2} \end{eqnarray} In the last equation we have used the superscript ${\scriptstyle (1)}$ to denote the terms of first order in the expansions $G^{ab}_L[g+h]$, $D^{ab}_L[g+h]$ and $B^{ab}_L[g+h]$. Thus, for instance, $G^{ab}_L[g+h]\!=\!G^{ab}[g]+ G^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab}[g;h]$. The explicit expressions for the tensors $T^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab}[g,\Phi_{n};h]$, $G^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab}[g;h]$, $D^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab}[g;h]$ and $B^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab}[g;h]$ are given in the Appendix. From $T^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab}[g,\Phi_{n};h]$, we can write an explicit expression for the operator $\hat{T}_n^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab}$. Using the Klein-Gordon equation (\ref{Klein-Gordon in n}), and expressions (\ref{regul s-t 2}) and (\ref{diff operator}) for the stress-energy operator, we can write this operator as \begin{equation} \hat{T}_n^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab} [g;h]=\left({1\over 2}\, g^{ab}h_{cd}-\delta^a_c h^b_d- \delta^b_c h^a_d \right) \hat{T}_{n}^{cd}[g] +{\cal F}^{ab}[g;h]\, \hat{\Phi}_{n}^2[g], \label{T(1) operator} \end{equation} where ${\cal F}^{ab}[g;h]$ is the differential operator \begin{eqnarray} {\cal F}^{ab} \!\!\!\!&\equiv& \!\!\!\!\left(\xi\!-\!{1\over 4}\right)\! \left(h^{ab}\!-\!{1\over 2}\, g^{ab} h^c_c \right)\! \Box+ {\xi \over 2} \left[ \bigtriangledown^{c}\! \bigtriangledown^{a}\! h^b_c+ \bigtriangledown^{c}\! \bigtriangledown^{b}\! h^a_c- \Box h^{ab}- \bigtriangledown^{a}\! \bigtriangledown^{b}\! h^c_c- g^{ab}\! \bigtriangledown^{c}\! \bigtriangledown^{d} h_{cd} \right. \nonumber \\ &&\!\!\!+\left. g^{ab} \Box h^c_c +\left( \bigtriangledown^{a} h^b_c+ \bigtriangledown^{b} h^a_c-\bigtriangledown_{\! c} \hspace{0.2ex} h^{ab}- 2 g^{ab}\! \bigtriangledown^{d}\! h_{cd} + g^{ab}\! \bigtriangledown_{\! c} \! h^d_d \right)\!
\bigtriangledown^{c} -g^{ab} h_{cd} \bigtriangledown^{c}\! \bigtriangledown^{d} \right], \nonumber \\ \mbox{} \label{diff operator F} \end{eqnarray} and it is understood that indices are raised with the background inverse metric $g^{ab}$ and that all the covariant derivatives are associated to the metric $g_{ab}$. Substituting expression (\ref{T(1) operator}) into Eq.~(\ref{Einstein-Langevin eq 2}), and using the semiclassical equation (\ref{semiclassical eq in n}) to get an expression for $\mu^{-(n-4)} \langle \hat{T}_{n}^{ab}\rangle [g]$, we can finally write the semiclassical Einstein-Langevin equation in dimensional regularization as \begin{eqnarray} &&{1\over 8 \pi G_{B}}\Biggl[ G^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab}\!-\! {1\over 2}\, g^{ab} G^{cd} h_{cd}+ G^{ac} h^b_c+G^{bc} h^a_c+ \Lambda_{B} \left( h^{ab}\!-\!{1\over 2}\, g^{ab} h^c_c \right) \Biggr](x) - {4\over 3}\, \alpha_{B}\biggl( D^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab} \nonumber \\ &&\left. -{1\over 2}\, g^{ab} D^{cd} h_{cd}+ D^{ac} h^b_c+D^{bc} h^a_c \right)\! (x) -2\beta_{B}\left( B^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab}\!-\! {1\over 2}\, g^{ab} B^{cd} h_{cd}+ B^{ac} h^b_c+B^{bc} h^a_c \right)\! (x) \nonumber \\ &&- \mu^{-(n-4)}\, {\cal F}^{ab}_x \! \left\langle \hat{\Phi}_{n}^2(x) \right\rangle \![g] +2 \!\int\! d^ny \, \sqrt{- g(y)}\, \mu^{-(n-4)} H_n^{abcd}[g](x,y)\, h_{cd}(y) =2 \mu^{-(n-4)} \xi^{ab}_n(x), \nonumber \\ \mbox{} \label{Einstein-Langevin eq 3} \end{eqnarray} where the tensors $G^{ab}$, $D^{ab}$ and $B^{ab}$ are computed from the semiclassical metric $g_{ab}$, and where we have omitted the functional dependence on $g_{ab}$ and $h_{ab}$ in $G^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab}$, $D^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab}$, $B^{{\scriptscriptstyle (1)}\hspace{0.1ex} ab}$ and ${\cal F}^{ab}$ to simplify the notation. 
Notice that, in Eq.~(\ref{Einstein-Langevin eq 3}), all the ultraviolet divergences in the limit $n \!\rightarrow \!4$, which shall be removed by renormalization of the coupling constants, are in $\left\langle \hat{\Phi}_{n}^2(x) \right\rangle$ and the symmetric part $H_{\scriptscriptstyle \!{\rm S}_{\scriptstyle n}}^{abcd}(x,y)$ of the kernel $H_n^{abcd}(x,y)$, whereas, as we have pointed out above, the kernels $N_n^{abcd}(x,y)$ and $H_{\scriptscriptstyle \!{\rm A}_{\scriptstyle n}}^{abcd}(x,y)$ are free of ultraviolet divergences. Once we have performed such a renormalization procedure, setting $n \!= \!4$ in this equation will yield the physical semiclassical Einstein-Langevin equation, Eq.~(\ref{Einstein-Langevin eq}). Note that, due to the presence of the kernel $H_n^{abcd}(x,y)$ in Eq.~(\ref{Einstein-Langevin eq 3}), this Einstein-Langevin equation will be non-local in the metric perturbation. \subsection{\hspace{-2.5ex}. Discussion} We have seen that effective equations of motion for the metric field of the form (\ref{semiclassical Einstein eq}) and (\ref{Einstein-Langevin eq}) follow from the local approximation (\ref{effective action ansatz}) for the effective action describing the ``effective interaction'' of the metric and the scalar field. A more realistic evaluation of this effective action starting from a fundamental theory of quantum gravity would certainly lead to some real and imaginary non-local terms in this action. In some situations, the contribution of these terms to the effective equations of motion for the metric (note that they would also give some extra terms in the semiclassical equation) might not be negligible and, in any case, one would expect that their role in the decoherence mechanism for the metric field would be important.
This would represent non-trivial effects coming from the ``high-momentum'' modes of quantum gravity, which are not part of the gravitational field described by the classical stochastic metric $g_{ab}+h_{ab}$, but which can be a source of this gravitational field in the same way as the matter fields. The contribution of these neglected terms to the equations for the background metric $g_{ab}$ and for the stochastic metric perturbation $h_{ab}$ would be similar to the contribution of the scalar field through its stress-energy operator, but with this operator replaced by some ``effective'' stress-energy operator of such primordial ``high-momentum'' gravitational modes coupled to the scalar field. These equations would take the form (\ref{semiclassical Einstein eq}) and (\ref{Einstein-Langevin eq}) only when the effect of this ``effective'' stress-energy tensor on the classical spacetime geometry can be neglected. A way of partially modeling this effect would consist of replacing the stress-energy operator $\hat{T}_{n}^{ab}[g]$ by $\hat{T}_{n}^{ab}[g]+\hat{t}_{n}^{ab}[g]$, where $\hat{t}_{n}^{ab}[g]$ is the stress-energy tensor of gravitons quantized in a classical spacetime background $({\cal M},g_{ab})$ \cite{wald84}. We end this paper with some comments on the relation between the semiclassical Einstein-Langevin equation (\ref{Einstein-Langevin eq}) and the Langevin-type equations for stochastic metric perturbations recently derived in the literature \cite{calzettahu,humatacz,husinha,cv96,lomb-mazz,ccv97,campos-hu}. In these previous derivations, one starts with the influence functional (\ref{path integral}), with the state of the scalar field assumed to be an ``in'' vacuum or an ``in'' thermal state, and computes explicitly the expansion for the corresponding influence action around a specific metric background.
One then applies the method of subsection \ref{sec:classicalization}\,\ref{subsec:quick method} to derive a Langevin equation for the perturbations to this background. As we have seen in subsection \ref{sec:classicalization}\,\ref{subsec:quick method}, this method yields the same equations as the one used in this section. However, in most of the previous derivations, one starts with a ``mini-superspace'' model and, thus, the metric perturbations are assumed from the beginning to have a restrictive form. In those cases, the derived Langevin equations do not correspond exactly to our equation, Eq.~(\ref{Einstein-Langevin eq}), but to a ``reduced'' version of this equation, in which only some components of the noise kernel in Eq.~(\ref{correlators}) (or some particular combinations of them) influence the dynamics of the metric perturbations. Only those equations which have been derived starting from a completely general form for the metric perturbations are actually particular cases, computed explicitly, of the semiclassical Einstein-Langevin equation (\ref{Einstein-Langevin eq}) \cite{cv96,lomb-mazz,campos-hu}. \section*{Acknowledgments} We are grateful to Esteban Calzetta, Antonio Campos, Bei-Lok Hu and Albert Roura for very helpful suggestions and discussions. This work has been partially supported by the CICYT Research Project number \mbox{AEN95-0590}, and the European Project number \mbox{CI1-CT94-0004}. \bigskip \bigskip \vspace{1ex} {\noindent \Large \bf Appendix: Expansions around a background metric}
\section{Introduction} \PARstart{O}{ver} the last decade, the growing population in urban areas, without a corresponding increase in road capacity, has led to traffic congestion, increased delays, and environmental concerns \cite{Schrank2019}. Integrating communication technologies along with computational capabilities into connected and automated vehicles (CAVs) has the potential to revolutionize our overwhelmed transportation systems. Through these advancements, our transportation system will transition into an emerging mobility system, in which CAVs can make better operational decisions\textemdash leading to improvements in passenger safety as well as a significant reduction of energy consumption, greenhouse gas emissions, and travel delays \cite{zhao2019enhanced,Melo2017a,ersal2020connected,Mahbub2019ACC,Wadud2016,chalaki2020TCST}. Rigorous evaluation of the performance of CAVs requires a broad spectrum of testing, ranging from numerical simulation to real-world public roads. Recently, the emergence of scaled cities has received significant global attention as a more sustainable CAV testing solution \cite{paull2017duckietown,hyldmar2019fleet,fok2012platform,Beaver2020DemonstrationCity,chalaki2021CSM}. These closed-test facilities use robotic cars to provide safety, complete control of the test-environment variables, and quick, repeatable experiments. A key intermediate step before testing these new technologies in a scaled environment is to use high-fidelity simulations to gather preliminary information about the system's performance in an idealized environment \cite{Wang2010_its}. Several research efforts have been reported in the literature on creating a digital version of the real environment using physics-based simulation software. Zhang and Masoud \cite{zhang2020v2xsim} used Gazebo to create a virtual environment to test CAVs due to its ability to capture microscopic vehicle movement.
The authors selected Gazebo, rather than a game engine, due to concerns about rigorously replicating the full dynamics of an individual vehicle. In other efforts, a simulation framework for CAVs has been linked to the robot operating system (ROS) and game-engine platform Unity \cite{hussein2018ros,tsai2017virtualreality,mizuchi2017interaction}. Tsai et al. \cite{tsai2017virtualreality} demonstrated the validity of hardware-in-the-loop simulation utilizing the ROS-Unity link. Mizuchi et al. \cite{mizuchi2017interaction} introduced virtual reality for multiple users into the environment using Unity, and Yang et al. \cite{yang2016unityproduction} modeled an existing environment within Unity to validate simulated sensors in a variety of weather and lighting conditions. In this work, we recreate a full-scale urban environment in Unity to evaluate the behavior of CAVs operating under various control laws. While other virtual environments have been created, to the best of our knowledge, the environment we report in this paper is the first one that analyzes a transportation network at a system level while it is directly coupled to its physical counterpart. In this paper, we introduce the Information and Decision Science Laboratory's Scaled Smart Digital City (IDS 3D City) in Unity, a full-scale digital recreation of the Information and Decision Science Lab's Scaled Smart City (IDS$^3$C) physical testbed \cite{Beaver2020DemonstrationCity,chalaki2021CSM}. IDS$^3$C is a $1$:$25$ scaled testbed spanning over $400$ square feet, and it is capable of replicating real-world traffic scenarios using up to $50$ ground and $10$ aerial vehicles. Our digital replica can communicate with the central mainframe computer using the user datagram protocol (UDP), allowing users to evaluate the behavior of their algorithms before running a physical experiment in the IDS$^3$C.
Using IDS 3D City, we are also able to rapidly iterate the design of our experiments before deploying them on the physical city. The remainder of the paper proceeds as follows. In Section \ref{sec:simulation}, we introduce IDS 3D City and elaborate on the different features and their interactions. In Section \ref{sec:experiment}, we present a coordination problem of CAVs at a roundabout, and compare the results from IDS 3D City and IDS$^3$C. Finally, we draw concluding remarks and propose some directions for future research in Section \ref{sec:conclusion}. \section{Digital Simulation Environment} \label{sec:simulation} The IDS $3$D City seamlessly integrates the control framework used in its physical counterpart, IDS$^3$C. A schematic of the communication structure between the IDS $3$D City and IDS$^3$C is shown in Fig. \ref{fig:commGraph}. During a physical experiment, a central mainframe computer runs a custom C++ application that generates a separate thread for each CAV in the experiment. Each physical CAV in IDS$^3$C receives a desired trajectory from the mainframe computer over WiFi, and the position and orientation of each CAV are fed back to the mainframe computer through a VICON motion capture system. To mimic the behavior of IDS$^3$C, we send trajectory data over a local UDP socket from the mainframe to the IDS $3$D City application. This trajectory data consists of the desired state of each CAV in the simulation. After each physics update, the position and orientation of each CAV in the IDS $3$D City are broadcast through ROS to a node that mimics the format of VICON measurements. This information is accessed by the mainframe computer, which updates the CAVs' states, executes the control algorithm, and sends new commands over UDP. A major consequence of this design is that we can seamlessly switch between running any individual car in the physical or virtual environment with minimal changes to our input files.
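For illustration, the mainframe-to-simulation trajectory exchange described above can be sketched in a few lines of Python. The wire format below (a vehicle identifier followed by four floating-point fields) is a hypothetical layout chosen for this sketch; the paper does not specify the actual message format used by the C++ mainframe application and the Unity client.

```python
import socket
import struct

# Hypothetical wire format for one desired-state message:
# little-endian uint32 vehicle id, then x, y, yaw, speed as float32.
WAYPOINT_FMT = "<I4f"

def pack_waypoint(vid: int, x: float, y: float, yaw: float, speed: float) -> bytes:
    """Serialize one desired-state message for a single CAV."""
    return struct.pack(WAYPOINT_FMT, vid, x, y, yaw, speed)

def unpack_waypoint(data: bytes) -> dict:
    """Deserialize a desired-state message back into named fields."""
    vid, x, y, yaw, speed = struct.unpack(WAYPOINT_FMT, data)
    return {"id": vid, "x": x, "y": y, "yaw": yaw, "speed": speed}

def send_waypoint(sock: socket.socket, addr, vid, x, y, yaw, speed) -> None:
    """One mainframe thread pushes the desired state of its CAV over UDP."""
    sock.sendto(pack_waypoint(vid, x, y, yaw, speed), addr)
```

In this sketch, each per-CAV thread on the mainframe would call `send_waypoint` once per control update, and the simulation side would call `unpack_waypoint` on every received datagram.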
The IDS $3$D City is also capable of replaying experimental data, allowing users to directly control a vehicle, and streaming a live feed of the virtual cameras attached to each vehicle. In the following subsections, we review the three major aspects of our simulation environment: the Unity game engine, Microsoft AirSim, and ROS\#. \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{Figures/UnityCity.png} \caption{Comparison of the physical and virtual city environments. The mainframe computer can switch between physical and virtual experiment seamlessly.} \label{fig:commGraph} \end{figure*} \subsection{Unity Game Engine} \label{sec:unity} We built a majority of the IDS $3$D City using Unity, a free and highly-customizable game engine with built-in physics and a C\# scripting framework; for a brief history of the Unity game engine, see \cite{hussein2018ros}. We selected Unity over existing simulation packages, such as Gazebo, as it is easy to deploy and performs well on a variety of platforms. Unlike Zhang and Masoud \cite{zhang2020v2xsim}, our interest lies in the system-level coordination of CAVs, not the particular dynamics of any individual CAV. Unity also relies heavily on the entity-component paradigm of software design, which grants us incredible flexibility in the design and control of vehicles in the virtual environment. The built-in Nvidia PhysX engine is open-source, which grants us the ability to modify the physics of the experiment when necessary. Unity is capable of building an executable for Windows, Mac, Linux, and mobile devices, which ensures that the simulation will run natively on all available hardware. Unity's graphical settings are also configurable per device, allowing weaker hardware to access the IDS $3$D City, while more powerful hardware can produce high-fidelity videos and screenshots. 
Furthermore, Unity allows us to explore more accurate mixed-traffic scenarios with built-in virtual reality support. As a first step to creating the IDS $3$D City, we reconstructed the IDS$^3$C's road network at full scale and placed environmental decorations within Unity. The road network is defined in CAD files, which define each road segment as either a straight line or an arc. To handle the simulation logic, we created two manager scripts. The \textit{Experiment Manager} is the primary manager, which controls the experiment clock used for data collection. It also stores the initial conditions of all vehicles, which ensures that an experiment can be repeated without restarting the simulation software. The secondary manager script is the \textit{Vehicle Manager}, which handles all of the vehicles. The vehicle manager spawns each vehicle at its initial position, and if two vehicles overlap, the vehicle manager places the second one behind the first to avoid infeasible initial conditions. The vehicle manager also passes information about the vehicles to the user interface (UI) and data logging tools. To initialize vehicles into the environment, we use Unity's prefab system, which allows us to configure each vehicle based on the initialization data sent from the mainframe computer. For each vehicle, the initialization data includes the control algorithm name, controller parameters, the initial state, and vehicle appearance. We implemented the vehicles as an abstract class; thus, the vehicle manager is flexible enough to initialize and coordinate any additional vehicle types that we may add in the future. A schematic of the key components in our vehicle prefab is presented in Fig. \ref{fig:prefab}, and the behaviors of the AirSim and ROS\# components are explained in the relevant sections that follow.
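The spawning logic of the \textit{Vehicle Manager} can be sketched as follows. This is an illustrative Python rendition of the pattern; the actual implementation is a C\# script inside Unity, and the class names, the minimum spawn gap, and the exact placement rule are our assumptions.

```python
import math
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class InitData:
    controller: str   # control algorithm name
    params: dict      # controller parameters
    x: float          # requested initial position
    y: float

class Vehicle(ABC):
    """Abstract vehicle: the manager can coordinate any concrete subtype."""
    def __init__(self, vid: int, init: InitData):
        self.vid = vid
        self.x, self.y = init.x, init.y

    @abstractmethod
    def step(self, dt: float) -> None: ...

class Car(Vehicle):
    def step(self, dt: float) -> None:
        pass  # the low-level tracking controller would run here

class VehicleManager:
    MIN_GAP = 0.5  # assumed minimum spawn spacing (meters)

    def __init__(self):
        self.vehicles = []

    def spawn(self, init: InitData) -> Vehicle:
        car = Car(len(self.vehicles), init)
        # If the requested position overlaps an existing vehicle, place the
        # new vehicle behind it to avoid infeasible initial conditions.
        for other in self.vehicles:
            if math.hypot(car.x - other.x, car.y - other.y) < self.MIN_GAP:
                car.x = other.x - self.MIN_GAP
        self.vehicles.append(car)
        return car
```

The abstract base class mirrors the design choice noted above: adding a new vehicle type only requires a new subclass, with no change to the manager.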
\begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{Figures/Prefab_Diagram.png} \caption{A diagram showing the different components that make up a single vehicle in the Unity simulation.} \label{fig:prefab} \end{figure} We use the passenger car as the main vehicle type in our simulation. It is controlled by a custom car script, which is a child of the abstract vehicle class. The car script takes a timestamped waypoint as input, which consists of a desired position in $\mathbb{R}^2$, an orientation in $\mathbb{R}$, and a speed in $\mathbb{R}$. This information is passed to a low-level tracking controller to generate a steering angle and throttle command. The steering angle is computed using a modified Stanley controller \cite{thrun2006stanley}, \begin{align} \label{eq:stanley} \delta(t) = &\big(\psi(t) - k_a\cdot v(t)\cdot\dot{\psi}(t)\big) \notag\\ & + \arctan\Big\{ \frac{k_e y_e(t)}{k_s + v(t)} \Big\} - k_y\big(\dot{\psi}(t) - \dot{\psi}_d\big), \end{align} where $\delta(t)$ is the steering angle, $\psi(t)$ is the current yaw angle, $\psi_d(t)$ is the desired yaw angle, $v(t)$ is the current speed, $y_e(t)$ is the lateral tracking error, $k_a, k_e, k_y$ are proportional tracking constants, and $k_s$ is a small constant that ensures the controller can operate at low speeds. The throttle command is generated through a feedforward-feedback controller, i.e., the desired position is tracked using PID control, and we compensate for the vehicle's speed at that point with a feedforward term in the control loop \cite{Spong2004RobotEdition}. The throttle command is sent through a second layer of the controller where it is translated into gas, brake, and handbrake inputs (formally defined in the next section). Finally, the steering angle and throttle commands are sent to the AirSim controller, which updates the state of the vehicle using its own dynamic model. The final major component within Unity is the UI, which is visible in Fig. \ref{fig:city}.
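The steering law \eqref{eq:stanley} can be transcribed directly into code. The following Python sketch uses illustrative gain values, not the ones tuned for the IDS $3$D City.

```python
import math

def stanley_steering(psi, psi_dot, psi_d_dot, v, y_e,
                     k_a=0.1, k_e=2.0, k_y=0.05, k_s=0.5):
    """Modified Stanley steering law transcribed from Eq. (1).

    psi: current yaw angle, psi_dot: current yaw rate,
    psi_d_dot: desired yaw rate, v: current speed,
    y_e: lateral tracking error. Gain values here are illustrative.
    """
    heading_term = psi - k_a * v * psi_dot
    crosstrack_term = math.atan(k_e * y_e / (k_s + v))
    damping_term = -k_y * (psi_dot - psi_d_dot)
    return heading_term + crosstrack_term + damping_term
```

The constant `k_s` keeps the cross-track term well defined as the speed approaches zero, which is why it appears in the denominator of the arctangent.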
The UI displays information about the current experiment and CAVs in a human-readable format. It includes all of the relevant information about each vehicle, including the vehicle's ID, status, current position, and speed. We also included buttons that allow the user to open a preview panel for any vehicle. The preview panel contains a live feed of the camera attached to the CAV, as well as the current steering angle, gas, brake, and handbrake commands. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{Figures/UnityUI.jpg} \caption{The digital simulation UI during one run of the experiment. The vehicle UI nodes are on the left, the experiment controls on top, and the preview panel on the right.} \label{fig:city} \end{figure} \subsection{AirSim} \label{airsim} To model the dynamics and sensors of each vehicle, we included Microsoft AirSim's work-in-progress Unity module\footnote{AirSim: \url{https://microsoft.github.io/AirSim/Unity/}}. We accomplished this by using the AirLib wrapper plugin, which gives us access to AirSim's C++ API while maintaining the Unity code base in C\#. Our vehicle prefabs (Fig. \ref{fig:prefab}) are based on the prefabs contained in AirSim. AirSim provides convenient code packages for vehicles and drones that model physically accurate behavior while remaining fully configurable. Configurable variables include motor torque, steering angle limits, weight, and aerodynamic drag. This allows us to validate our approaches to CAV coordination on a variety of vehicles, and further helps us demonstrate that our control algorithms are independent of the underlying vehicle dynamics. Another major feature of AirSim is its sensor suite. Namely, each vehicle is equipped with an RGB camera to collect qualitative data and to give visual feedback to a human operator. We made several modifications to the AirSim source code, both to fix undesirable behaviors and to customize the vehicles for our use case.
First, we modified the handbrake to affect all four wheels, as opposed to only the two front wheels. We also fixed bugs in the braking behavior: one where extreme braking would occur, and another where the brakes would lock and be unable to move. Next, we adjusted the default parameters of each vehicle for use in the IDS $3$D City. The default parameters in AirSim resulted in significant understeer with our line-tracking controller \eqref{eq:stanley}. We resolved this by increasing the wheel friction in the physics engine, which yielded a higher effective grip. Finally, our low-level tracking controller outputs a normalized throttle command $u_d(t) \in [-1, 1]$; however, the AirSim controller expects three input variables: gas, brake, and handbrake. We map the desired throttle to these variables using an intermediate layer, \begin{align} h(t) &= \begin{cases} 1 & \text{ if } u_d(t) \leq -0.5, \\ 0 & \text{ otherwise}, \end{cases} \\ b(t) &= \max\big\{0, -u_d(t)\big\} \cdot \big(1 - h(t)\big), \\ g(t) &= \max\big\{0, \,\,\,\,u_d(t)\big\} \cdot \big(1 - h(t)\big), \end{align} where $h(t) \in \{0, 1\}$ is the handbrake, $b(t)\in[0,1]$ is the brake command, and $g(t)\in[0,1]$ is the gas command. This results in the AirSim controller tracking the desired speed, and the vehicle only triggers the handbrake when a sufficiently large deceleration is requested. It also guarantees that the vehicle will stop, rather than shifting into reverse, if it overshoots its current waypoint. \subsection{ROS Framework} ROS provides a flexible framework for robotics software, particularly through its standardized communication protocols. These protocols give separate software components the ability to exchange information reliably, while providing access to a wide suite of debugging tools. To introduce ROS functionality into Unity, we integrated Siemens's open-source ROS\# package\footnote{ROS\#: \url{https://github.com/siemens/ros-sharp}}.
In the IDS$^3$C, we use ROS to access VICON motion capture data and determine the state of each vehicle in real time. In the IDS $3$D City, we use ROS\# to mimic the VICON ROS topic by attaching two ROS-specific components, called publisher and client, to the vehicle prefab. The publisher component captures the position and orientation data of the vehicle and composes this information into a timestamped transform message. The client component connects to a ROS server that runs on the mainframe computer and streams the state data of each vehicle to it; the server then broadcasts these data in the same format as the VICON motion capture system. This setup also enables us to run virtual and physical vehicles simultaneously while having access to the state information of all vehicles in real time. \section{Virtual and Physical Experiment} \label{sec:experiment} To demonstrate the capabilities of the IDS $3$D City, we consider a scenario of homogeneous human-driven vehicles operating in a single-lane roundabout, depicted in Fig. \ref{fig:Roundabout}. We consider $N=6$ vehicles entering the roundabout in two groups of $3$, one from the northern entry and one from the eastern entry. Our approach to planning trajectories for each vehicle $i\in\{1, 2, \dots, N\}$ considers double integrator dynamics, \begin{align} \dot{p}_i(t) &= v_i(t), \\ \dot{v}_i(t) &= u_i(t), \end{align} where $p_i(t),v_i(t)\in\mathbb{R}$ are the longitudinal position and speed of vehicle $i$, and $u_i(t)\in\mathbb{R}$ is the control input. We also impose the state and control constraints, \begin{align} 0 \leq v_{\min}&\leq v_i(t) \leq v_{\max}, \\ u_{\min} &\leq u_i(t) \leq u_{\max}, \end{align} where $v_{\min},v_{\max}$ are the minimum and maximum speed limits and $u_{\min},u_{\max}$ are the minimum and maximum control inputs. To control each vehicle, we employ the Intelligent Driver Model (IDM) \cite{Treiber2000}, which is known to mimic the behavior of human drivers.
This model outputs the acceleration for a vehicle $i$ based on the relative state of a preceding vehicle, $k\in\{1, 2, \dots, N\} \setminus \{i\}$, \begin{equation} \label{eq:idm} u_i(t) = u_{\max} \left[ 1 - \left( \frac{v_i(t)}{v_{\max}} \right)^\delta - \left( \frac{s^*(v_i(t),\Delta v_i(t))}{s_i(t)} \right)^2 \right], \end{equation} where $s^*$ is the desired headway of the vehicle, \begin{equation} \label{eq:s_star} s^*(v_i(t),\Delta v_i(t)) = s_0 + \max \left( 0, v_i(t) T + \frac{v_i(t) \Delta v_i(t)}{2 \sqrt{u_{\min} u_{\max}}} \right), \end{equation} where $s_i(t)$ is the bumper-to-bumper distance between vehicles $i$ and $k$, and $\Delta v_i(t) = v_i(t) - v_k(t)$. The constants $s_0, T, \delta$ correspond to the standstill stopping distance, the time headway, and an exponential factor that determines the acceleration and braking behavior, respectively. Standard values for each of these parameters can be found in \cite{Treiber2000}. We designed the roundabout scenario in Fig. \ref{fig:Roundabout} such that the two groups of vehicles would reach the merging point at the same time. To ensure safety, vehicles at the northern entry (Path $1$) must yield to roundabout traffic (Path $2$). We achieved this by placing a virtual stopped vehicle at the position of the yield sign whenever a vehicle from Path $2$ was near the merging point. When the area in front of the merging point was clear, the virtual vehicle was removed and vehicles on Path $1$ were allowed to enter the roundabout. Otherwise, the vehicles traveling along Path $1$ would form a queue and wait for the vehicles along Path $2$ to pass through the merging point. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{Figures/IDS_Lab_IDM.png} \caption{A schematic of the roundabout scenario showing the two paths and the yield sign location.} \label{fig:Roundabout} \end{figure} The speed of each vehicle following Path $1$ is plotted against position in Figs.
\ref{fig:simVel} and \ref{fig:expVel} for the simulation and experiment, respectively. The effect of the yield sign can be seen around $3.3$ m in both cases, where the front vehicle traveling on Path 1 comes to a full stop and a queue begins to form. In simulation, after approximately two seconds, the front vehicle squeezes into a gap and merges with the vehicles on Path 2. This causes the second vehicle on Path 1 to creep forward before coming to a complete stop again. In contrast, the front vehicle in the experiment comes to a complete stop, is unable to merge, and a queue forms behind it. As a result, all vehicles on Path 1 yield to all vehicles on Path 2 before entering the roundabout. This observation demonstrates that while the IDM controller and vehicle dynamics behave similarly, the delays, noise, and disturbances in the physical experiment ultimately prevent the front vehicle from merging early in this particular scenario. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{Figures/NorthLoopSim.png} \caption{Speed vs position for the vehicles on Path $1$ in the simulation with a $0.1$ s moving average filter applied.} \label{fig:simVel} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{Figures/NorthLoopExp.png} \caption{Speed vs position for the vehicles on Path $1$ in the experiment with a $0.1$ s moving average filter applied.} \label{fig:expVel} \end{figure} The position of all vehicles is plotted against time for the simulation and experiment in Figs. \ref{fig:comparisonSim} and \ref{fig:comparisonExp}, respectively. The horizontal black line around $2.1$ m marks one car length upstream from the merging point, and we have translated the reference frame of Path 2 such that the distance to the merging point is equal on both paths, i.e., the same distance corresponds to the same physical position.
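For completeness, the IDM controller \eqref{eq:idm} and \eqref{eq:s_star} together with a forward-Euler update of the double-integrator dynamics can be sketched as follows. This is a sketch only: the parameter values are illustrative (standard IDM values, not the exact values used in our experiments), and the acceleration and braking bounds are taken as positive magnitudes.

```python
import math

# Illustrative parameters: speed limit, max acceleration, comfortable braking
# magnitude, standstill gap, time headway, and the IDM exponent delta.
V_MAX, A_MAX, B_COMF = 15.0, 1.5, 2.0
S0, T_HW, DELTA = 2.0, 1.0, 4.0

def idm_accel(v, dv, s):
    """IDM acceleration for ego speed v, speed difference dv = v - v_front,
    and bumper-to-bumper gap s > 0."""
    s_star = S0 + max(0.0, v * T_HW + v * dv / (2.0 * math.sqrt(A_MAX * B_COMF)))
    return A_MAX * (1.0 - (v / V_MAX) ** DELTA - (s_star / s) ** 2)

def step(p, v, u, dt=0.1):
    """Forward-Euler step of the double-integrator dynamics, with the speed
    clamped to the state constraint [0, V_MAX]."""
    p_next = p + v * dt
    v_next = min(max(v + u * dt, 0.0), V_MAX)
    return p_next, v_next
```

Starting from rest with a distant leader, the vehicle accelerates at nearly the maximum acceleration and relaxes toward the speed limit, while a shrinking gap drives the commanded acceleration negative.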
Therefore, collisions between different colored lines can only occur at distances greater than $2.1$ m. Despite the different vehicle dynamics in the simulation and experiment, Figs. \ref{fig:simVel}--\ref{fig:comparisonExp} demonstrate that both environments result in appropriate IDM behavior, and neither case leads to a collision between vehicles. In addition, these results show that the simulated vehicles have smoother speed profiles compared to the experiment, as expected. Videos of the experiment and simulation, as well as supplemental material on the capabilities of the IDS $3$D City, can be found on our website \url{https://sites.google.com/view/ud-ids-lab/IDS3DCity}. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{Figures/RvsTSim.png} \caption{Position vs time trajectories of the vehicles in the simulation for Path 1 (red) and Path 2 (blue). The horizontal black line corresponds to the position of the merging point.} \label{fig:comparisonSim} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{Figures/RvsTExp.png} \caption{Position vs time trajectories of the vehicles in the experiment for Path 1 (red) and Path 2 (blue). The horizontal black line corresponds to the position of the merging point.} \label{fig:comparisonExp} \end{figure} \section{Conclusion} \label{sec:conclusion} In this work, we presented an overview of our virtual recreation of the IDS$^3$C. Our simulation environment leverages the Unity game engine, AirSim, and ROS\# to control full-scale virtual vehicles, and to verify the behavior of our control algorithms before they are deployed in our physical environment. We demonstrated how the simulated environment hooks into the control code for the physical city, and how this enables us to quickly iterate on the design of an experiment and debug our control algorithms.
In particular, we illustrated that the intelligent driver model behaves properly in a roundabout, and we demonstrated that our control framework is independent of the underlying dynamics of individual vehicles. Ongoing work includes performing experiments that implement our optimal control framework \cite{chalaki2020experimental} and mixed traffic \cite{mahbub2021_platoonMixed}. A potential direction for future research is to fully integrate the virtual vehicles with a physical experiment, resulting in an augmented-reality cyber-physical system. Finally, including the drones from AirSim in the environment would be another research direction. \section*{Acknowledgements} We would like to acknowledge Amanda Kelly for her help with building the virtual city environment. \bibliographystyle{IEEEtran}
\section{Introduction} \label{introduction} Software repositories contain a wealth of valuable information (e.g., source code, bug reports) related to software development. This information can assist software engineers (e.g., architects and developers) in comprehending various aspects of a software system. Architectural information is one of the most important types of information in software development, and it is used not only in the early stages (e.g., architecture design) of development, but also in the later stages of the software development life cycle, such as maintenance and evolution \cite{li2013application}. Architectural information, such as the benefits and drawbacks of certain architectural solutions (e.g., patterns and tactics) in specific application domains, can help architects and related stakeholders conduct architecture evaluation on candidate architectural solutions during the architecting process \cite{hofmeister2007general}. However, architectural information is scattered in various sources of software repositories (e.g., Q\&A sites \cite{soliman2016architectural}, technical blogs and tutorials \cite{soliman2021exploring}, issue tracking systems \cite{soliman2021exploratory}, developer mailing lists \cite{ding2015understanding}, and chat messages \cite{borrego2019towards}), and is often described in a combination of textual and graphical representations \cite{Malavolta2013WhatIN}. This constitutes a large volume of architectural information that is sometimes tacit and not documented at all in those sources \cite{ding2014open}. Consequently, the intricate architectural information of a system, especially of a large and complex system, easily evaporates when it is not documented, which can incur design and implementation issues \cite{Capilla201610YO}.
With the increase in the complexity and size of systems, architectural information management becomes even more challenging, and software engineers need to find the right information and recover it efficiently. However, searching for and recovering relevant architectural information from artifacts (e.g., code, requirements documents) is a challenging task for software engineers \cite{JANSEN20091232}. Therefore, approaches and tools for searching and mining architectural information from development artifacts are much needed to assist the development process (e.g., the architecting process). The Mining Software Repositories (MSR) field analyzes the rich data available in software repositories to uncover interesting and actionable information about software systems \cite{Hassan2008TheRA}. Mining software repositories for Software Architecture (SA) has been a subject of the architecture research community in recent years \cite{soliman2021preface}. SA researchers and practitioners have developed approaches and tools to ease the searching and mining of architectural information from various sources of software repositories in order to support architecting activities \cite{hofmeister2007general}\cite{li2013application}. Some of the supported architecting activities include architecture analysis \cite{velasco2016knowledge}, architecture understanding \cite{do2015keecle}, and architecture recovery \cite{Shahbazian2018RecoveringAD}. However, in our recent industrial survey on architectural information searching activities \cite{de2022developers}, practitioners reported the dispersion and abundance of architectural information in various sources of software repositories as the major challenge they face when seeking architectural information (e.g., architectural tactics): they spend too much time searching and feel overwhelmed.
We also observed that none of our survey participants used the existing approaches and tools (proposed in the literature) to search and mine architectural information from software repositories. One reason is that a gap still exists between research and practice on mining architectural information, and developers might not be familiar with the existing approaches and tools in the literature. Therefore, it is important to systematically investigate the current state of research on mining architectural information. Such a study can help practitioners (e.g., developers) learn about the various types of mined architectural information and the approaches and tools utilized, and it can also help researchers gain insights into the challenges faced when mining architectural information. Further, it can inform researchers and tool builders to develop dedicated approaches and tools that address such challenges. This motivated us to carry out a literature review of the reported research on mining architectural information in software repositories to support architecting activities. There are several methods for conducting literature reviews, such as a Systematic Mapping Study (SMS) and a Systematic Literature Review (SLR). Both are typically utilized to survey the literature on a specific topic. An SLR focuses on investigating, evaluating, and interpreting the available studies related to specific research questions towards a topic or phenomenon \cite{kitchenham2007guidelines}, while an SMS provides an overview of a research area to systematically identify and evaluate the evidence in the literature. One of the main differences between them is that an SMS aims to discover research trends and covers a broad topic in the literature, while an SLR usually has a relatively narrow and in-depth research scope and focuses on specific research questions.
Architectural information has an impact on different levels of software systems and affects various aspects of software development (e.g., development activities). Hence, to establish an overview of this topic area in the context of software development, we decided to conduct an SMS rather than an SLR on the studied topic (i.e., mining architectural information in software development). More specifically, we conducted an SMS \cite{petersen2015guidelines} to investigate the architectural information and its sources mined, the approaches and tools used, the architecting activities supported, and the challenges and issues faced, based on the data collected from 79 primary studies published between 2006 and 2021. This SMS makes the following key contributions: we identified (1) 8 main categories and 29 subcategories of mined architectural information; (2) 13 sources used to mine architectural information; (3) 81 approaches and 53 tools proposed and employed for mining architectural information; (4) 12 architecting activities that can be supported by the mined architectural information; and (5) 4 types and 8 subtypes of challenges faced in mining architectural information; finally, (6) we outlined the key future avenues for research on mining architectural information in software development. The remainder of this paper is organized as follows: Section \ref{researchContext} introduces the context of this SMS. Section \ref{MappingDesign} elaborates on the design and research questions of this SMS. Section \ref{Results} presents the results of each research question. Section \ref{Discussion} discusses the results of the research questions and their implications for researchers and practitioners. Section \ref{ThreatValidity} examines the threats to validity. Finally, Section \ref{ConclusionFurtureWork} concludes this SMS with potential areas of future research.
\section{Research context} \label{researchContext} To clarify the scope of this systematic mapping study, two fundamental concepts need to be explained: “mining software repositories” and “architectural information”. \subsection{Mining software repositories} Studies on mining software repositories collect and analyze the rich data available in software repositories to uncover interesting and actionable information about software projects \cite{Hassan2008TheRA}. Version Control Systems (VCS) (e.g., GitHub), Q\&A sites (e.g., Stack Overflow), and issue tracking systems (e.g., Jira), among others, are examples of sources of software repositories that are commonly used for mining development-related information. These sources are useful for both software practitioners and researchers. For example, practitioners (e.g., architects) can share and learn knowledge about architectural tactics and quality attributes from their peers on Stack Overflow \cite{bi2021mining}. Researchers can benefit from the available data (e.g., source code, commit messages) in software repositories to conduct their research on software development, such as code clone detection \cite{nafi2019clcdsa} and dependency analysis \cite{bavota2014improving}. To achieve this, researchers are required to select software repositories and data sources that fit their research needs, extract data from these repositories, and analyze the data to obtain evidence for answering their research questions. \subsection{Architectural information} Architectural information represents a high-level abstraction of software systems, and it is one of the most important types of information in software development \cite{SA2012}. There are various types of architectural information, such as architectural solutions and architectural decisions, that are intensively used during development. In this section, we introduce the concept of architectural information through these two concrete types.
\textit{Architectural solutions} are the fundamental building blocks in modern software design, and they are used to address architectural concerns. Architectural patterns (e.g., Model–View–Controller), tactics (e.g., resource pooling), and frameworks (e.g., Windows Presentation Foundation (WPF)) are typical architectural solutions. An \textit{architectural decision} is a description of the set of architectural additions, subtractions, and modifications to the architecture, the rationale, the design rules, design constraints, and additional requirements that (partially) realize one or more requirements on a given architecture \cite{jansen2005SoftArch}. Architectural decisions play a crucial role in software architecture, during design, implementation, evolution, reuse, and integration of architectures \cite{jansen2005SoftArch}. Architectural information is often described in different formats, such as textual and graphical representations \cite{Malavolta2013WhatIN}, and this information is recorded in various artifacts, such as code \cite{granchelli2017towards} and architecture documents \cite{ding2014open}, that are contained in diverse sources of software repositories. However, it is very challenging for architects and developers to mine and reuse the architectural information contained in those artifacts due to, for instance, the unstructured representation of this information in texts or graphs. As mentioned in Section \ref{introduction}, mining repositories for SA has been a subject of the architecture research community in recent years \cite{soliman2021preface}. Researchers and practitioners have developed many approaches and tools to facilitate the searching and mining of architectural information from various artifacts contained in software repositories in order to support development.
\section {Mapping study design} \label{MappingDesign} \subsection{Goal and research questions}\label{GoalResearchQuestions} The goal of this SMS, formulated based on the Goal-Question-Metric approach \cite{caldiera1994goal}, is: to \textit{\textbf{analyze}} the primary studies on mining architectural information \textit{\textbf{for the purpose of}} understanding \textit{\textbf{with respect to}} the sources of architectural information, types of mined architectural information, architecting activities supported, approaches and tools employed, and challenges \textit{\textbf{from the point of view of}} researchers and practitioners \textit{\textbf{in the context of}} software development. In order to get a comprehensive overview of mining architectural information in software development, we further decomposed the goal of this SMS into five Research Questions (RQs) as listed in Table \ref{ResearchQuestions}. \begin{table} [!h] \small \caption {Research questions and their rationale} \label{ResearchQuestions} \begin{tabular}{p{5cm}p{10.5cm}} \toprule \textbf{Research Question} & \textbf{Rationale}\\ \midrule \textbf{RQ1.} What architectural information is mined in software development? & SA researchers have mined different types of architectural information, such as architectural decisions \cite{bhat2017automatic}, and architectural tactics and quality attributes \cite{bi2021mining}. This RQ intends to explore the types of architectural information mined to support software development. The answer to this RQ can (i) help practitioners know which types of architectural information have been mined to support development and (ii) help researchers be aware of the types of architectural information that have and have not been mined, so that they can explore how to use the mined architectural information and/or mine the architectural information that has not yet been mined.\\ \textbf{RQ2.} What sources are used to mine architectural information?
& Architectural information is scattered in various sources of software repositories, such as Q\&A sites (e.g., Stack Overflow) \cite{soliman2016architectural}, issue tracking systems (e.g., Jira) \cite{bhat2017automatic}, and technical blogs and tutorials \cite{soliman2021exploring}. Thus, no single source contains all the required architectural information. The selection of a source is determined by the type of data or information that needs to be mined. For example, information related to architectural issues or debt may require selecting issue tracking systems, such as Bugzilla or Jira repositories. Therefore, researchers have mined various sources for the sake of certain architectural information to support development. The answer to this RQ provides insights into the specific sources researchers utilize when mining architectural information, and such insights can (i) motivate other researchers to further investigate architectural information in those sources and (ii) help practitioners know where they can search for architectural information to address their architectural design concerns.\\ \textbf{RQ3.} What architecting activities can be supported by the mined architectural information? & There are various architecting activities (e.g., architecture analysis, architecture synthesis, and architecture evaluation \cite{hofmeister2007general}) that are performed during the architecture life cycle. Each architecting activity may require specific architectural information to be conducted effectively. For example, architectural information, such as the benefits and drawbacks of certain architectural solutions (e.g., architecture patterns and tactics) in specific application domains, can help practitioners (e.g., architects) conduct architecture evaluation.
The answer to this RQ can help practitioners know which architecting activities can be supported by the mined architectural information.\\ \textbf{RQ4.} What approaches and tools are used to mine architectural information? & Many approaches and tools have been proposed and used to mine architectural information. The answer to this RQ can provide practitioners with an overview of the readily available approaches and tools that they can utilize to mine architectural information from software repositories, and it can to some extent inspire researchers to develop new and dedicated approaches and tools.\\ \textbf{RQ5.} What are the challenges in mining architectural information? & Mining architectural information in software repositories faces many challenges. With this RQ, we want to identify and report these challenges as potential future research directions in this area.\\ \bottomrule \end{tabular} \label{table:1} \end{table} \subsection{Mapping study execution} We designed this SMS according to the guidelines for systematic mapping studies proposed by Petersen \textit{et al}. \cite{petersen2015guidelines}. Figure \ref{MappingStudyExecution} illustrates the five phases (i.e., automatic search, study screening, snowballing, data extraction, and data synthesis) that we followed to execute this mapping study. \begin{figure} \centering \includegraphics [scale=0.4] {Figures/MappingStudyExecution} \caption{Mapping study execution} \label{MappingStudyExecution} \end{figure} \subsubsection{Phase 1: Automatic search}\label{AutomaticSearch} We performed the automatic search in seven electronic databases (see Table \ref{ElectronicDatabases}) in three steps (i.e., electronic databases selection, choosing the search scope, and defining the search string). \textit{(i) Electronic databases selection}: The selected electronic databases, shown in Table \ref{ElectronicDatabases}, were chosen based on the guidelines provided by Chen \textit{et al}. \cite{chen2010towards}.
These databases are regarded as the most common and popular databases in the software engineering field, and they are considered appropriate for searching for relevant studies \cite{chen2010towards}. Google Scholar was not included in this SMS because, through a pilot search, we observed that it produced a significant number of irrelevant studies and that its results overlapped with the studies returned from the other databases. \textit{(ii) Choosing the search scope}: We set the starting date to January 2006. This starting year is justified by the milestone paper about the golden age of software architecture in research and practice, published in 2006 by Shaw \textit{et al}. \cite{shaw2006golden}. The end of the search period was set to November 2021, when we started this SMS. \textit{(iii) Defining the search string}: To define the search string used in this SMS, we followed these steps: (1) The PICO (Population, Intervention, Comparison, Outcomes) criteria \cite{kitchenham2007guidelines} were used to develop the search terms. The population in this study is “architectural information”, and the intervention refers to “mining”. Only the population and intervention were considered in developing the search terms, because architectural information mining approaches are not compared with other approaches and the outcomes of using mining approaches are not limited in this SMS. (2) We extracted the major terms (e.g., “architect*”, “mining software repositor*”, “Stack Overflow”) based on the research topics and research questions in the existing relevant studies (e.g., \cite{bi2021mining}\cite{malavolta2021mining}) that were already known to us. (3) We generated a list of synonyms (i.e., topic-related terms) for each extracted major term. For example, the synonyms for the term “architect*” are “design*” and “structur*”.
(4) To compose the search string, we combined each extracted major term and its synonyms with the Boolean operator \textbf{OR}, and (5) we linked the major terms with the Boolean operator \textbf{AND}. Note that, before the formal search and selection, we conducted a pilot search and selection in order to decide the search terms used in the formal search and selection. First, a pilot search can help settle the appropriate search terms for the formal search \cite{petersen2015guidelines}. Two search strings were used in this pilot search, i.e., (“\textit{architecture}” OR “\textit{design}” OR “\textit{structure}”) AND (“\textit{mining software repository}” OR “\textit{repository mining}”) and (“\textit{architecting}” OR “\textit{designing}” OR “\textit{structuring}”) AND (“\textit{mining software repository}” OR “\textit{repository mining}”). We noticed that the results returned by different search terms are complementary to each other; for example, using the term “architecture” in IEEE Xplore did not return all the relevant studies that were retrieved by using the terms “architecting”, “designing”, and “structuring”. Therefore, we decided to include all these terms in the final construction of the search string to avoid missing potentially relevant studies. Moreover, it should be noted that we did not include terms related to specific types of architectural information, such as architecture decision, quality attribute requirements, and architecture smell, in the search for two reasons: (1) part of the goal of this SMS is to identify the types of mined architectural information in software development (i.e., RQ1); therefore, we could not have a relatively comprehensive list of those types before conducting this SMS; (2) including terms related to specific types of architectural information in the search query might bias the search results towards these types.
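Steps (4) and (5) above amount to a simple composition rule, sketched below; the term lists are only an illustrative excerpt of the full search string, and the helper name is ours.

```python
def compose_query(term_groups):
    """Join the synonyms of each major term with OR, quoting multi-word
    terms, then link the resulting groups with AND (steps (4) and (5))."""
    groups = []
    for synonyms in term_groups:
        joined = " OR ".join(f'"{t}"' if " " in t else t for t in synonyms)
        groups.append(f"({joined})")
    return " AND ".join(groups)

# Illustrative excerpt of the major terms and their synonyms.
query = compose_query([
    ["mining software repositor*", "repositor* mining"],
    ["architect*", "design*", "structur*"],
])
```

Running this on the excerpt yields a parenthesized OR-group per major term, joined by AND, mirroring the structure of the final search string shown below.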
Furthermore, some electronic databases have their own limitations on the number of search terms and operators. For example, IEEE Xplore does not allow more than seven wildcards (*) and ScienceDirect does not support more than eight Boolean connectors. Hence, the final search string was adjusted according to the restrictions and settings of each database. Overall, the final search string used in the formal search was defined as: \begin{tcolorbox}[colback=gray!5!white,colframe=gray!75!black] (“\textit{mining software repositor*}” OR “\textit{repositor* mining}”) AND (\textit{architect* OR design* OR structur*}) AND (“\textit{Stack Overflow}” OR \textit{StackOverflow} OR \textit{GitHub} OR “\textit{open source software}” OR “\textit{open source communit*}” OR “\textit{online developer communit*}” OR “\textit{online communit*}” OR “\textit{online developer forum*}” OR “\textit{online forum*}” OR “\textit{question and answer site*}” OR “\textit{question and answer website*}” OR “\textit{Q\&A site*}” OR “\textit{Q\&A website*}” OR “\textit{mailing list*}” OR \textit{gitter} OR \textit{slack} OR \textit{chat} OR “\textit{issue tracker}” OR “\textit{issue tracking}” OR “\textit{issue management}”) \end{tcolorbox} \begin{table} [h!]
\small \caption {Electronic databases used for the automatic search} \label{ElectronicDatabases} \begin{tabular}{p{3cm}p{5.9cm}p{4.9cm}} \toprule \textbf{Database} & \textbf{URL} & \textbf{Search Scope in Database}\\ \midrule ACM Digital Library &\url{https://dl.acm.org/} & Paper title, abstract\\ IEEE Xplore &\url{https://ieeexplore.ieee.org/} & Paper title, keywords, abstract\\ Science Direct &\url{https://www.sciencedirect.com/} & Paper title, keywords, abstract \\ EI Compendex &\url{https://www.engineeringvillage.com/} & Paper title, abstract\\ Springer Link &\url{https://link.springer.com/} & Paper title, abstract \\ Wiley InterScience &\url{https://onlinelibrary.wiley.com/} & Paper title, abstract\\ ISI Web of Science &\url{https://login.webofknowledge.com/} & Paper title, keywords, abstract\\ \bottomrule \end{tabular} \end{table} \subsubsection{Phase 2: Study screening} \label{StudyScreening} We defined the inclusion and exclusion criteria (see Table \ref{InclusionExculusionCriteria}) to screen the studies retrieved from each database (see Table \ref{ElectronicDatabases}). Before the formal study screening (manual inspection), to reach an agreement about the inclusion and exclusion criteria (see Table \ref{InclusionExculusionCriteria}), a pilot study screening was performed, whereby the first two authors randomly selected 100 primary studies from the results of the automatic search (see Figure \ref{SearchAndScreeningResults} in Section \ref{Results}) and checked them independently. Specifically, this study screening process involved the following rounds, and the inclusion and exclusion criteria elaborated in Table \ref{InclusionExculusionCriteria} were applied in each round: (1) In the first round of study screening, the first two authors independently screened the 100 studies by reading the titles. Any uncertain studies (i.e., those that could not be judged from their titles) were temporarily included and kept for the second round.
(2) In the second round of study screening, the first two authors independently screened the studies left from the first round by reading their abstracts. Studies for which the first two authors could not reach a decision were retained for the third round. (3) In the third round of study screening, the first two authors independently screened the remaining studies by reading their full texts. Afterwards, the first two authors held a meeting to compare the pilot study screening results and discuss their disagreements (if any), in order to reconcile them and arrive at a final version in which as many discrepancies as possible were resolved. To measure the inter-rater agreement between the first two authors, we calculated Cohen’s Kappa coefficient \cite{cohen1960coefficient} and obtained an agreement of 0.935. During the formal study screening process, we followed the same rounds of study screening that were used during the pilot study screening process; they are described below. The number of studies selected in each round of the screening process is provided in Section \ref{StudiesScreeningResults}. \textit{(i) First round of study screening (i.e., reading titles)}: Applying the inclusion and exclusion criteria elaborated in Table \ref{InclusionExculusionCriteria}, the first author screened the papers by reading the titles of the studies remaining after the pilot study screening in order to select potential primary studies. He also recorded the key terms that led to the inclusion or exclusion of a specific study; those terms were later used for discussion during the consensus meetings and the reassessment process. To mitigate potential personal bias during study screening, the results of the first round were checked and validated by the second author.
In some cases, where the relevance of a study was still unclear from its title, the first and second authors decided to temporarily include it in the second round of study screening. \textit{(ii) Second round of study screening (i.e., reading abstracts)}: In the second round, the first author read the abstracts of the studies retained from the first round and screened them based on the inclusion and exclusion criteria (in Table \ref{InclusionExculusionCriteria}). He followed the same procedure (i.e., recording the key terms that led to the inclusion or exclusion of a specific study) that was employed in the previous round. As in the first round, to mitigate potential bias, the second author checked and validated the results of the second round. Any disagreements between the two authors (i.e., about whether a study should be included) were discussed and resolved between them. Studies that were hard to judge from their abstracts were kept for the final round. \textit{(iii) Third round of study screening (i.e., reading full texts)}: The first author read the full text of each study retained from the second round and screened the studies based on the selection criteria (in Table \ref{InclusionExculusionCriteria}). He followed the same procedure (i.e., recording the key terms that led to the inclusion or exclusion of a specific study, and validating the screening results with the second author) that was used in the previous rounds. Note that disagreements arose on 11 primary studies (i.e., whether a study should be excluded or included in the final round of study screening). Such disagreements between the first two authors were discussed in meetings involving the third author until a consensus was reached. \begin{table} [h!]
\small \captionsetup{font=scriptsize} \caption{Inclusion (I) and Exclusion (E) criteria for selecting the primary studies} \label{InclusionExculusionCriteria} \begin{tabular}{p{15cm}} \toprule \textbf{Inclusion criterion}\\ \midrule \textbf{I1.} A study that focuses on mining, including searching, extracting, capturing, and retrieving, architectural information in software development.\\ \midrule \textbf {Exclusion criteria}\\ \midrule \textbf{E1.} A study that focuses on architectural information without mining it.\\ \textbf{E2.} A study that focuses on mining other types of information instead of architectural information.\\ \textbf{E3.} A study not written in English.\\ \textbf{E4.} If two papers publish the same work in different venues (e.g., conference and journal), the less mature one is excluded.\\ \textbf{E5.} A study that is grey literature (e.g., a technical report). \\ \bottomrule \end{tabular} \end{table} \subsubsection{Phase 3: Snowballing} To minimize the risk of missing relevant studies during the automatic search, we conducted snowballing by following the approach in \cite{wohlin2014guidelines}. We adopted forward (i.e., collecting the papers citing the selected studies) and backward (i.e., using the references of the selected studies) snowballing, which is also used in many other SMSs, such as \cite{waseem2020systematic}\cite{tian2021impact}. Specifically, we conducted the snowballing process iteratively. We checked the reference lists of the studies selected from the final round of study screening in the automatic database search (in Section \ref{StudyScreening}) as well as the papers citing those selected studies; the studies newly selected in one iteration were then checked in the subsequent iteration. The iterative process terminated when no new papers were selected.
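The iterative snowballing loop described above can be sketched as follows. This is only an illustrative sketch: the actual snowballing was performed manually, and the citation-graph structures, function names, and the tiny example graph are all assumptions introduced for illustration.

```python
# Illustrative sketch of iterative backward + forward snowballing.
# `references` maps a paper to the papers it cites (backward direction);
# `cited_by` maps a paper to the papers citing it (forward direction);
# `is_relevant` stands in for the manual three-round screening.
def snowball(seeds, references, cited_by, is_relevant):
    """Iterate until an iteration yields no newly selected papers."""
    selected = set(seeds)
    frontier = set(seeds)
    while frontier:
        candidates = set()
        for paper in frontier:
            candidates |= set(references.get(paper, []))  # backward snowballing
            candidates |= set(cited_by.get(paper, []))    # forward snowballing
        # Screen only papers not already selected; they seed the next iteration.
        frontier = {p for p in candidates - selected if is_relevant(p)}
        selected |= frontier
    return selected

# Hypothetical example: S1 is a seed; A, C, D pass screening, B does not.
refs = {"S1": ["A", "B"], "A": ["C"]}
cites = {"S1": ["D"]}
result = snowball({"S1"}, refs, cites, lambda p: p in {"A", "C", "D"})
# C is only reachable in the second iteration, via the references of A.
```

Note that the loop terminates exactly as described in the text: once an iteration screens no new relevant papers, the frontier is empty and the process stops.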
Each iteration followed the three rounds of the study screening process described in Section \ref{StudyScreening}, i.e., based on titles, abstracts, and full texts. The primary studies selected through the snowballing process are reported in Section \ref{StudiesScreeningResults}. \subsubsection{Phase 4: Data extraction} To provide the demographic information of the selected studies and answer the five RQs of this SMS (see Table \ref{ResearchQuestions}), we carefully read all the primary studies selected in this SMS (see Figure \ref{MappingStudyExecution}) and extracted the data items listed in Table \ref{DataExtraction}. Before the formal data extraction, the first three authors discussed the meaning of each data item and how to extract the data. To ensure an unambiguous understanding of the data items, the first author conducted a pilot data extraction. Specifically, the first author randomly selected 15 primary studies (from the 79 selected studies, see Figure \ref{SearchAndScreeningResults}) and extracted the data according to the data items in Table \ref{DataExtraction}. When a paper was unclear and the first author was unsure while extracting the data (e.g., when a selected study did not clearly describe a data item defined in Table \ref{DataExtraction}), physical meetings with the second author were scheduled to resolve the confusion. The process continued until all 15 studies had been manually checked. To mitigate subjective bias, the pilot data extraction results were reviewed and validated by two other authors (i.e., the second and third authors) of this study so that the three authors could reach a consensus on the understanding of the extracted data items.
Likewise, in the formal data extraction process, the data extraction was performed by the first author and validated by the second and third authors, and any divergences or ambiguities in the extracted data were discussed together until an agreement was reached. In this way, we ensured that the data extracted in the formal data extraction process are valid. The description of the data items and their relevant RQs are presented in Table \ref{DataExtraction}. Specifically, ten data items were defined to be extracted from the selected studies. Four data items (i.e., D1-D4) are used to extract the demographic details of the selected studies, and the remaining data items (i.e., D5-D10) are used to answer the five RQs (see Table \ref{ResearchQuestions}). The data extraction was subsequently followed by data synthesis, and these two processes were conducted and recorded with the aid of MAXQDA (a qualitative data analysis tool)\footnote{\url{https://www.maxqda.com/}}. \begin{table} \small \caption {Data items extracted from the selected studies with the relevant RQs} \label{DataExtraction} \begin{tabular}{p{0.3cm}p{2cm}p{5cm}p{5cm}p{1.3cm}} \toprule \# &\textbf{Data item} & \textbf{Description} & \textbf{Data analysis approach} & \textbf{Relevant RQ}\\ \midrule D1 & Publication year & The publication year of the study. & Descriptive statistics & Overview\\ D2 & Publication venue & The name of the venue where the study is published. & Descriptive statistics & Overview\\ D3 & Publication type & The type of the study (i.e., journal, conference, or workshop). & Descriptive statistics & Overview\\ D4 & Author type & The type of authors (i.e., academia, industry, or both). & Descriptive statistics & Overview\\ D5 & Architectural information & The type of mined architectural information. & Open coding \& constant comparison and predefined classifications & RQ1 \\ D6 & Source & The source used to mine architectural information.
& Open coding \& constant comparison and descriptive statistics & RQ2 \\ D7 & Architecting activity & The architecting activity that a study claims to support. & Open coding \& constant comparison and predefined classifications & RQ3 \\ D8 & Approach & The approach used to mine architectural information. & Open coding \& constant comparison and descriptive statistics & RQ4 \\ D9 & Tool & The tool used to mine architectural information. & Open coding \& constant comparison and descriptive statistics & RQ4 \\ D10 & Challenge & The challenge faced when mining architectural information. & Open coding \& constant comparison & RQ5 \\ \bottomrule \end{tabular} \end{table} \subsubsection{Phase 5: Data synthesis}\label{DataSynthesis} During this phase, we synthesized the extracted data that we gathered from the previous phase (i.e., Phase 4: Data extraction) in order to answer the five RQs (see Table \ref{ResearchQuestions}). We used open coding \& constant comparison to analyze several data items and answer certain RQs of this SMS (see Table \ref{DataExtraction}). Open coding \& constant comparison are two widely used techniques from Grounded Theory \cite{stol2016grounded} during qualitative data analysis. Grounded Theory (GT) is a bottom-up approach and focuses on theory generation, rather than extending or verifying existing theories \cite{stol2016grounded}. Open coding generates codes for incidents that can be further classified into concepts and categories \cite{stol2016grounded}. Constant comparison is a continuous process for verifying the generated concepts and categories. Both concepts and categories evolve and saturate until they fit the data \cite{stol2016grounded}. Specifically, in this SMS, open coding \& constant comparison were employed for analyzing the data item D6 to answer RQ2 (sources), the data items D8 and D9 to answer RQ4 (approaches and tools), and the data item D10 to answer RQ5 (challenges). 
In addition, descriptive statistics \cite{wohlin2003empirical} were also used for analyzing the data item D6 to answer RQ2 (sources) and the data items D8 and D9 to answer RQ4 (approaches and tools). Descriptive statistics provide quantitative summaries based on the initial description of the extracted data. Moreover, we used open coding \& constant comparison and predefined classifications (i.e., the conceptual model for architectural description in the ISO 42010:2011 standard \cite{6129467}, the categories of architecture decisions by Kruchten \cite{kruchten2004ontology}, and the categories of architectural changes by Williams \textit{et al}. \cite{williams2010characterizing}) for analyzing the data item D5 to answer RQ1 (mined architectural information). On the other hand, we used open coding \& constant comparison and the predefined classifications of architecting activities by Hofmeister \textit{et al}. \cite{hofmeister2007general}, Tang \textit{et al}. \cite{tang2010comparative}, and Li \textit{et al}. \cite{li2013application} for analyzing the data item D7 to answer RQ3. As mentioned above, we utilized the qualitative data analysis tool MAXQDA to support the analysis process. MAXQDA assists human annotators in labeling text segments within their contexts and assigning them to categories. Before the formal data analysis (manual labeling), to reach an agreement on the data items gathered in the previous phase (i.e., Phase 4: Data extraction), we first performed a pilot data analysis. This analysis process involved the following steps: (1) The first and second authors independently checked and read a random sample of 5 primary studies (from the 79 selected studies, see Figure \ref{SearchAndScreeningResults}). (2) The first two authors independently labeled the extracted data with codes that succinctly summarize the data items (see Table \ref{DataExtraction}) for answering the RQs.
(3) The first two authors independently grouped all the codes into higher-level concepts and turned them into categories or subcategories. The grouping process was iterative, in which each author continuously went back and forth between the codes, concepts, categories, and extracted data items to revise and refine them. To improve the reliability of the pilot data analysis results, the first two authors held a meeting and followed a negotiated agreement approach~\cite{campbell2013coding} to compare the coding results, then discussed their disagreements, confusions, and uncertain judgments on the data encoding results, in order to reconcile them and arrive at a final version of the pilot data analysis results in which as many discrepancies as possible were resolved. For example, when answering RQ1 (mined architectural information), the first author was checking the study {[S2]} in order to label the mined architectural information, and he came across these sentences: “\textit{(...) Continuously means that they are analyzed in response to every modification. When analyzing the implementation, the system tries to extract as much architectural information from implementation artifacts as possible (...)}”. In this case, the first author was unsure which label should be assigned to those sentences as the mined architectural information. In a meeting with the second author, they discussed and agreed to label it as “general architectural information”. The first author then carried on with the formal data analysis, following the same steps used during the pilot data analysis. In the following paragraphs, we provide the details of the formal data analysis process. The first author read the full text of each of the remaining primary studies. Subsequently, he summarized the main ideas stated in several sentences in each selected study.
He continued to encode the summarized ideas (from each selected study) to generate codes, concepts, categories, and subcategories. For example, when answering RQ1 (mined architectural information), he referred to these sentences in the selected study {[S27]}: “\textit{(...) Implementation of a tactic often spreads across more than one source file. Consequently, domain topics that motivate an architectural tactic are often not fully presented in the tactical file itself but also appear in neighboring files (i.e., files which use tactical files or are used by tactical files). Therefore, we need to identify the whole context in which a tactic is implemented. The context analysis is comprised of two steps. First, we identify complete tactic instances whose implementation spreads across multiple tactical files. Second, we consider the full technical context of an implemented tactic instance (...) To discover this full context in which a tactic instance is implemented, we use Algorithm 2 to discover the full context in which each tactic instance is implemented (...)}”. In this case, he summarized and encoded those sentences as the code “identify the whole context in which a tactic is implemented”. Afterwards, the first author grouped this code into a higher-level concept (i.e., “tactic and context”). He then applied constant comparison to compare the concepts identified in one summarized idea with the concepts that emerged from other summarized ideas, in order to identify concepts with similar semantic meanings. He proceeded to group similar concepts into main categories and subcategories. For example, the concept “tactic and context” was merged into the subcategory “architectural tactic and context relationship”, which was further categorized into the category “design relationship” (see Table \ref{minedArchitecturalInfo}).
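The code-to-concept-to-category roll-up illustrated above can be pictured with a simple mapping. The sketch below is a hypothetical illustration only (the real coding was done manually, supported by MAXQDA); it is seeded with the single example from the text, and the data structures and function name are our assumptions.

```python
# Illustrative sketch: how raw codes roll up into concepts and then into
# subcategories/categories during open coding & constant comparison.
from collections import defaultdict

# code -> concept (entry taken from the example in the text)
code_to_concept = {
    "identify the whole context in which a tactic is implemented":
        "tactic and context",
}

# concept -> (subcategory, category) (entry taken from the example in the text)
concept_to_category = {
    "tactic and context": ("architectural tactic and context relationship",
                           "design relationship"),
}

def categorize(codes):
    """Group raw codes under their (subcategory, category) pair."""
    grouped = defaultdict(list)
    for code in codes:
        concept = code_to_concept[code]
        grouped[concept_to_category[concept]].append(code)
    return dict(grouped)
```

In constant comparison, both mappings are repeatedly revised as new codes arrive, until the categories saturate; the dictionaries above would be mutable working artifacts rather than fixed tables.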
Moreover, as stated earlier in this section, we used predefined categorizations of architectural information to generate categories and subcategories of mined architectural information for answering RQ1. Among others, we utilized the existing categorization of architectural design decisions provided in \cite{kruchten2004ontology} (i.e., structural, behavioral, and ban decisions) to classify the mined architectural decisions. To mitigate personal bias during the formal data analysis, the second and third authors of this SMS reviewed and validated the generated codes, concepts, categories, and subcategories. The results of this SMS are provided in Section \ref{Results}. \section {Results} \label{Results} We first present the results of the study search (both automatic search and snowballing) and the results of study screening in Section \ref{StudiesScreeningResults}. An overview of the selected studies is presented in Section \ref{StudiesDemographic}, and the results of the five RQs are reported in Section \ref{ResultsOfRQ1} to Section \ref{ResultsOfRQ5}. \subsection{Study search and screening results}\label{StudiesScreeningResults} Figure \ref{SearchAndScreeningResults} illustrates the number of studies that we obtained from the automatic search, each round of study screening, and the snowballing. The automatic search in the seven electronic databases resulted in 22,540 potentially relevant studies. 2,524 studies were shortlisted after the first round of study screening (i.e., by reading titles), 303 primary studies were kept after the second round of study screening (i.e., by reading abstracts), and 66 primary studies were selected after the third round of study screening (i.e., by reading full texts). Afterwards, we conducted the snowballing process (i.e., forward and backward snowballing), and 3,123 candidate studies (references of, and papers citing, the 66 studies retained from the third round of study screening) were returned.
These 3,123 papers were further screened based on titles (197 studies retained), abstracts (33 studies retained), and full texts (13 studies retained). Finally, 79 (i.e., 66+13) primary studies were included in this SMS. The details of the selected 79 primary studies are provided in \ref{SlectedStudies}. \begin{figure} [!h] \centering \includegraphics [scale=0.64] {Figures/StudiesScreeningResults} \caption{Results of study search and screening} \label{SearchAndScreeningResults} \end{figure} \subsection{Overview of the selected studies}\label{StudiesDemographic} In this section, we present an overview of the selected primary studies from three perspectives (i.e., publication years, venues, and author types). \textbf{(a) Distribution of the selected studies over the years} To better understand the publication years of the selected studies, we summarized the number of publications per year (from 2006 to 2021) and analyzed the growth of the publication number over time. Note that there could be other studies published in December 2021 that were not indexed by the selected databases (see Table \ref{ElectronicDatabases}) because we executed our search in November 2021. Figure \ref{YearsAndStudies} shows the distribution of the 79 selected studies over the past sixteen years (i.e., 2006–2021). Very few studies (i.e., 11 studies) were published from 2006 to 2010 on mining architectural information. However, we can see a fair amount of attention on mining architectural information from 2011 to 2021 (2-13 studies were published per year). Overall, we found evidence of an increasing interest in the software architecture community in mining architectural information.
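The selection funnel reported above can be summarized programmatically. The sketch below only restates the counts given in the text; the variable and function names are ours, introduced purely for illustration.

```python
# Study-selection funnel, using the counts reported in the text:
# automatic search 22,540 -> titles 2,524 -> abstracts 303 -> full texts 66;
# snowballing 3,123 -> titles 197 -> abstracts 33 -> full texts 13.
search_funnel = [("automatic search", 22540), ("titles", 2524),
                 ("abstracts", 303), ("full texts", 66)]
snowball_funnel = [("snowballing candidates", 3123), ("titles", 197),
                   ("abstracts", 33), ("full texts", 13)]

def retention(funnel):
    """Yield (stage, count, share retained from the previous stage)."""
    prev = None
    for stage, count in funnel:
        rate = None if prev is None else count / prev
        yield stage, count, rate
        prev = count

# 66 studies from the automatic search plus 13 from snowballing = 79 in total.
total_selected = search_funnel[-1][1] + snowball_funnel[-1][1]
```

Such a summary makes the aggressive early filtering visible: only about 11\% of the automatic-search hits survive title screening, and roughly 0.3\% of them reach the final set.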
\begin{figure} [!h] \centering \includegraphics [scale=0.6] {Figures/YearsAndStudies} \caption{Distribution of the selected studies over the years} \label{YearsAndStudies} \end{figure} \begin{figure} \centering \subfigure[Publication venue] { \begin{minipage}{7cm} \includegraphics[width=7cm, height=5cm]{Figures/StudiesAndVenues.pdf} \label{StudiesAndVenues} \end{minipage} } \quad \subfigure[Author type] { \begin{minipage}{5cm} \includegraphics[width=6cm, height=5cm]{Figures/AuthorTypes.pdf} \label{AuthorTypes} \end{minipage} } \caption{Publication venues and author types based on the distribution of the selected studies} \end{figure} \textbf{(b) Distribution of the selected studies in publication venues} The selected studies were published in 46 venues. We only list the venues that published more than one selected study in Table \ref{PublicationVenues} (see \ref{SelectedStudiesInVenues}). Note that merged conferences were counted as one conference; for example, the Working IEEE/IFIP Conference on Software Architecture (WICSA) and the International Conference on Software Architecture (ICSA) were counted as one conference. As shown in Table \ref{PublicationVenues}, some venues tend to publish more studies on mining architectural information. For instance, the dedicated conferences on software architecture record the most publications (i.e., 17): the International Conference on Software Architecture (ICSA) and the European Conference on Software Architecture (ECSA), with 12 and 5 publications, respectively; ICSA is thus the conference with the largest number of studies. In terms of journals, the Journal of Systems and Software and the IEEE Transactions on Software Engineering published the most studies, with 5 studies each (see Table \ref{PublicationVenues}). As shown in Figure \ref{StudiesAndVenues}, the 79 studies cover three types of publication venues, i.e., conference, journal, and workshop.
Most of the selected studies were published in conferences (i.e., 59.5\%, 47 out of 79) compared with journals (i.e., 31.6\%, 25 out of 79) and workshops (i.e., 8.9\%, 7 out of 79), which indicates that conferences are the main venues to disseminate work on mining architectural information. \textbf{(c) Types of authors in the selected studies} We also looked at the affiliations of the authors of the selected studies to see whether they are from academia, industry, or both. As seen in Figure \ref{AuthorTypes}, most of the selected studies were conducted by researchers only (i.e., 92.4\%, 73 out of 79), and some of the studies are the outcome of collaboration between researchers and practitioners (i.e., 7.6\%, 6 out of 79). For example, in {[S53]}, Juergen \textit{et al}. proposed an approach named “Continuous Architectural Knowledge Integration (CAKI)” that combines the continuous integration of internal and external architectural knowledge sources with enhanced semantic searching, reasoning, and personalization capabilities dedicated to large organizations. Their work involved industrial collaboration with Siemens (a company that conducts large-scale software development projects). Among our selected primary studies, we found no paper that was solely authored by practitioners. These results reveal that the topic of mining architectural information has received less attention from industry compared to academia. \subsection{Results of RQ1: Mined architectural information}\label{ResultsOfRQ1} As described in Section \ref{DataSynthesis}, open coding \& constant comparison and predefined classifications were used to analyze the extracted data item D5 (see Table \ref{DataExtraction}) and identify the architectural information mined from software repositories.
Our data synthesis generated 8 main categories and 29 subcategories of mined architectural information (see Table \ref{minedArchitecturalInfo}), in which \textit{architectural description} (62.02\%, 49 out of 79 studies), \textit{architectural decision} (24.1\%, 19 out of 79 studies), \textit{architectural solution} (18.9\%, 15 out of 79 studies), and \textit{system requirement} (17.7\%, 14 out of 79 studies) are the four most mined types of architectural information supporting the development process. Note that one study may mine more than one type of architectural information; therefore, the total number of studies in Table \ref{minedArchitecturalInfo} is larger than 79. \begin{longtable}{p{7.3em}p{15em}p{15em}p{1.5em}} \caption{Categories and subcategories of mined architectural information} \label{minedArchitecturalInfo} \\\hline \multicolumn{1}{l}{\textbf{Category}} & \multicolumn{1}{l}{\textbf{Subcategory}} & \multicolumn{1}{l}{\textbf{Studies}} & \multicolumn{1}{l}{\textbf{Count}} \\\hline \endfirsthead \multicolumn{4}{c}% {{\bfseries }}\\ \endhead \multicolumn{4}{r}{{}} \\ \endfoot \hline \hline \endlastfoot {Architectural description (62.02\%, 49)} & Architectural model & {[S11]} {[S12]} {[S29]} {[S30]} {[S36]} {[S37]} {[S39]} {[S46]} {[S47]} {[S51]} {[S54]} {[S64]} {[S65]} {[S67]} {[S68]} {[S73]} {[S75]} & \multicolumn{1}{c} {17} \\\cline{2-4} & Architectural view & {[S8]} {[S18]} {[S35]} {[S37]} {[S39]} {[S46]} {[S47]} {[S52]} {[S54]} {[S59]} {[S64]} {[S65]} {[S68]} & \multicolumn{1}{c} {13} \\\cline{2-4} & Architectural rationale & {[S1]} {[S11]} {[S14]} {[S24]} {[S25]} {[S26]} {[S38]} {[S47]} & \multicolumn{1}{c} {8} \\\cline{2-4} & Architectural concern & {[S6]} {[S7]} {[S8]} {[S9]} {[S11]} {[S66]} & \multicolumn{1}{c} {6} \\\cline{2-4} & System of interest & {[S11]} {[S26]} {[S29]} {[S39]} {[S51]} & \multicolumn{1}{c} {5} \\\cline{1-4} {Architectural decision (24.1\%, 19)} & Structural decision & {[S19]} {[S22]} & \multicolumn{1}{c} {2}\\ \cline{2-4} &
Behavioral decision & {[S19]} {[S22]} & \multicolumn{1}{c} {2} \\ \cline{2-4} & Ban or non-existence decision & {[S19]} {[S22]} & \multicolumn{1}{c} {2} \\ \cline{2-4} & General architectural decision & {[S1]} {[S8]} {[S10]} {[S14]} {[S18]} {[S24]} {[S25]} {[S26]} {[S33]} {[S35]} {[S38]} {[S43]} {[S47]} & \multicolumn{1}{c} {13} \\ \cline{1-4} {Architectural solution (18.9\%, 15)} & Architectural tactic & {[S1]} {[S6]} {[S7]} {[S13]} {[S15]} {[S26]} {[S9]} {[S24]} {[S56]} {[S57]} & \multicolumn{1}{c} {10} \\ \cline{2-4} & Architectural pattern &{[S1]} {[S35]} {[S24]} {[S63]} & \multicolumn{1}{c} {4} \\ \cline{2-4} & Framework for addressing scalability & {[S26]} & \multicolumn{1}{c} {1} \\ \cline{1-4} {System requirement (17.7\%, 14)} & Quality attribute & {[S1]} {[S8]} {[S11]} {[S24]} {[S26]} {[S42]} {[S66]} {[S76]} & \multicolumn{1}{c} {8} \\ \cline{2-4} & Functional requirement & {[S1]} {[S43]} {[S45]} {[S55]} {[S76]} & \multicolumn{1}{c} {5} \\ \cline{2-4} & General system requirement & {[S25]} & \multicolumn{1}{c} {1} \\ \cline{1-4} {Architectural change (17.7\%, 14)} & Perfective change & {[S40]} {[S41]} & \multicolumn{1}{c} {2} \\ \cline{2-4} & Corrective change & {[S40]} {[S41]} & \multicolumn{1}{c} {2} \\ \cline{2-4} & Adaptive change & {[S40]} {[S41]} & \multicolumn{1}{c} {2} \\ \cline{2-4} & Preventative change & {[S40]} {[S41]} & \multicolumn{1}{c} {2} \\ \cline{2-4} & General architectural change & {[S1]} {[S18]} {[S35]} {[S46]} {[S47]} {[S77]} & \multicolumn{1}{c} {6} \\ \cline{1-4} {Design relationship (15.1\%, 12)} & Component-Component relationship & {[S12]} {[S20]} {[S59]} {[S67]} {[S74]} & \multicolumn{1}{c} {5} \\ \cline{2-4} & Architectural tactic-Context relationship & {[S6]} {[S27]} & \multicolumn{1}{c} {2} \\ \cline{2-4} & Architectural pattern-Architectural pattern relationship & {[S63]} {[S78]} & \multicolumn{1}{c} {2} \\ \cline{2-4} & Architectural pattern-Quality attribute relationship & {[S31]} & \multicolumn{1}{c} {1} \\ \cline{2-4} & 
Architectural tactic-Quality attribute relationship & {[S6]} & \multicolumn{1}{c} {1} \\ \cline{2-4} & Design pattern-Architectural tactic relationship & {[S21]} & \multicolumn{1}{c} {1} \\ \cline{1-4} {Architectural technical debt (11.4\%, 9)} & Architectural compliance issue & {[S30]} {[S34]} {[S36]} {[S73]} & \multicolumn{1}{c} {4} \\ \cline{2-4} & Architectural anti-pattern & {[S37]} {[S64]} {[S72]} & \multicolumn{1}{c} {3} \\ \cline{2-4} & Architectural smell & {[S29]} {[S71]} & \multicolumn{1}{c} {2} \\ \cline{1-4} {General architectural information (22.8\%, 18)} & &{[S2]} {[S3]} {[S5]} {[S16]} {[S17]} {[S23]} {[S28]} {[S32]} {[S44]} {[S48]} {[S49]} {[S50]} {[S53]} {[S60]} {[S61]} {[S69]} {[S70]} {[S79]} & \multicolumn{1}{c} {18} \\ \cline{1-4} \end{longtable} \textbf{Architectural description} is one of the important sources for communicating and sharing architectural information about the high-level design of a software system \cite{clements2003documenting}. For example, information in an architectural description, such as architectural views, which describe a system from multiple perspectives (e.g., logical view, process view, development view), enables the architecture to be communicated to and understood by different stakeholders (e.g., architects, developers, and project managers) \cite{469759}. However, this information is not always well described or documented during the development process \cite{ding2014open}. Hence, due to the lack of architectural description, software engineers (e.g., developers) may modify related artifacts (e.g., source code) without fully understanding the underlying architecture \cite{perry1992foundations}, which results in gradual degradation of architectural quality \cite{herold2016ead}. Even when this information is not explicitly documented, it is implicitly captured in different artifacts, such as code, requirements, and architecture documents.
62.02\% of the selected studies (49 out of 79) proposed approaches and tools for mining or recovering architectural descriptions of software systems from several artifacts (e.g., code {[S29]}{[S39]}{[S46]}{[S51]}) that are contained in various software repositories (e.g., version control systems, online Q\&A sites) to assist the development process. We further classified the mined architectural information in the \textit{architectural description} category by using the conceptual model for architectural description in the ISO 42010:2011 standard \cite{6129467}, which defines a set of architectural elements that make up the architectural description model \cite{6129467}. These architectural elements have been used to identify and categorize architectural elements in the SA documents of OSS \cite{ding2014open}, and they have also been utilized to categorize architectural information communicated during OSS development \cite{bi2021architecture}. As shown in Table \ref{minedArchitecturalInfo}, we collected five subcategories of mined architectural information (i.e., architectural elements) in the \textit{architectural description} category. \begin{itemize} \item\textit{Architectural model} is usually employed to represent a whole architecture or a part of it~\cite{6129467}. Among other usages, an effective architectural model can be used as the basis for design-time analysis to determine whether the software system will meet desired quality attributes \cite{garlan2004using}. However, creating an effective architectural model of a system, particularly of a large and complex system, and keeping that model consistent with the architectural changes that may occur as the system evolves, is one of the serious problems that confront today's software developers \cite{szvetits2016systematic}. Architectural model is the most mined (17 out of 49 studies) architectural information in the architectural description category.
For example, Granchelli \textit{et al}. {[S39]} proposed a semi-automatic approach based on Model-Driven Engineering and a Domain-Specific Language for mining and recovering architectural models of microservice-based systems from source code. The proposed approach also allows an architect to manually refine the initial extracted architectural model into refined architectural models that suit his/her needs, such as performing architecture change impact analysis. \item\textit{Architectural view} expresses the architecture of the system in accordance with an architecture viewpoint which addresses one or more of the concerns held by the system’s stakeholders \cite{6129467}. Architectural view is reported as the second most common (13 out of 49) mined architectural information in the architectural description category. For instance, an architecture recovery approach and a prototype tool were developed in {[S46]} to help software engineers (e.g., architects) automatically extract and recover various architectural views, including system-level and component-level architectural views, from source code. The mined architectural views can assist in determining, for example, when, how, and to what extent system-level or component-level architectural views evolve during software development and evolution. \item\textit{Architectural rationale} records explanation, justification, and reasoning about architectural design decisions that have been made in architecture design \cite{6129467}. The results in Table \ref{minedArchitecturalInfo} reveal that architectural rationale is the third most common (8 out of 49 studies) mined architectural information in the architectural description category. For example, in {[S24]}, Lopez \textit{et al}. presented an ontology-based approach named TREx (Toeska Rationale Extraction) for extracting, representing, and exploring architectural rationale information from text documents (such as email, wiki, and meeting notes).
Specifically, this approach consists of three components: (1) pattern-based information extraction to recover rationale, (2) ontology-based representation of rationale and architectural concepts, and (3) facet-based interactive exploration of rationale. Initial results from applying TREx suggest that architectural rationale (such as reasons behind architectural decisions) can be semi-automatically extracted and mined from a project’s unstructured text documents. The mined architectural rationale provides several benefits in software development, such as rationale reuse, rationale auditing, and architect training. \item\textit{System of interest} refers to the system whose architecture is under consideration during the development \cite{6129467}. In {[S11]}, Bi \textit{et al}. mined architectural information (including system of interest) that is communicated during the development of Open Source Software (OSS). The mined architectural information (e.g., system of interest) can be used to enrich architecture documentation, especially the documentation of architectural design decisions, in OSS development. \item\textit{Architectural concern} denotes the interest pertaining to the system development and is related to one or more stakeholders \cite{6129467}. For instance, in {[S66]}, Gokyer \textit{et al}. proposed an approach based on NLP and ML techniques to automatically mine architectural concerns from non-functional requirements expressed in plain text. The mined architectural information (i.e., architectural concerns) can guide architects in making design decisions effectively. \end{itemize} \textbf{Architectural decision}: Designing the architecture of a software system typically involves making numerous architectural decisions, and each decision could positively or negatively affect the functional and non-functional properties of the system \cite{shahin2009architectural}.
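Several studies in this category mine decisions by training supervised text classifiers on decision descriptions from issue trackers (e.g., the ML-based classifiers of {[S19]} and {[S22]} discussed below). As a purely illustrative, stdlib-only sketch of such a pipeline (the training sentences, tokenizer, and Naive Bayes model are hypothetical simplifications, not any surveyed study's actual implementation; only the three labels follow Kruchten's ontology):

```python
# Toy sketch: classifying design-decision texts into Kruchten's
# subcategories (structural / behavioral / ban). Training data below is
# invented for demonstration purposes only.
from collections import Counter, defaultdict
import math

TRAIN = [
    ("introduce a separate persistence layer and split the service into components", "structural"),
    ("create a new subsystem for report generation", "structural"),
    ("components communicate asynchronously via a message queue", "behavioral"),
    ("the client retries the request to meet the availability requirement", "behavioral"),
    ("remove the direct database access from the UI module", "ban"),
    ("drop the deprecated interface between cache and scheduler", "ban"),
]

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    """Multinomial Naive Bayes with add-one smoothing."""

    def __init__(self, samples):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> number of samples
        self.vocab = set()
        for text, label in samples:
            self.label_counts[label] += 1
            for w in tokenize(text):
                self.word_counts[label][w] += 1
                self.vocab.add(w)

    def predict(self, text):
        total = sum(self.label_counts.values())
        best, best_score = None, -math.inf
        for label in self.label_counts:
            # log prior + sum of smoothed log likelihoods
            score = math.log(self.label_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in tokenize(text):
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

clf = NaiveBayes(TRAIN)
print(clf.predict("split the monolith into two components"))  # structural (on this toy data)
```

A realistic pipeline would of course use a labeled corpus of real issue reports and richer features (e.g., TF-IDF n-grams) rather than six hand-written sentences.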
Recovering and understanding these design decisions can help inform future architectural decisions and implementation choices, and can avoid introducing architectural inefficiencies \cite{Shahbazian2018RecoveringAD} or architectural decay later \cite{hassaine2012advise}. 24.1\% of the selected studies (19 out of 79) proposed approaches and tools for mining architectural decisions from various sources, such as issue tracking systems (e.g., Jira) and VCS (e.g., GitHub), to support the development. We further categorized the mined architectural decisions into three subcategories by following an existing classification of design decisions provided in \cite{kruchten2004ontology} (see Table \ref{minedArchitecturalInfo}). The mined architectural decisions in this category can help architects and developers utilize these decisions in similar development contexts. \begin{itemize} \item\textit{Structural decision} refers to design decisions that lead to the creation of subsystems, layers, partitions, and components in the logical view of architecture \cite{kruchten2004ontology}. For example, Bhat \textit{et al}. {[S22]} employed an ML-based approach to mine and classify architectural decisions from issue tracking systems (e.g., Jira) into three subcategories, including structural decisions. \item\textit{Behavioral decision} is more related to how the architectural elements interact with each other to provide functionality or to satisfy specific non-functional requirements (e.g., quality attributes) \cite{kruchten2004ontology}. For example, Bhat \textit{et al}. {[S19]} developed an approach with a prototype tool to automatically curate design decision knowledge for architectural decision recommendations. One of the components (i.e., Document Classifier) of the tool could categorize the design decisions described in issue tracking systems (e.g., Jira) into three categories, including behavioral decisions. 
\item\textit{Ban or non-existence decision} refers to design decisions that result in the removal of an architectural artifact or interaction between architectural artifacts \cite{kruchten2004ontology}. In {[S22]}, Bhat \textit{et al}. developed an ML-based approach to mine and classify architectural decisions from issue tracking systems into three subcategories, including ban or non-existence decisions. \item\textit{General architectural decision} refers to a study that has a generic description of architectural decisions (e.g., design decisions in {[S47]}). In other words, this study does not explicitly specify a concrete category or subcategory of mined architectural decisions that belongs to one of the abovementioned categories or subcategories. \end{itemize} \textbf{Architectural solution}: Architectural solutions are the fundamental building blocks in architecture design, and they are used to address architecture design concerns \cite{SA2012}. 18.9\% of the studies (15 out of 79) researched mining architectural solutions to support the development. We further categorized the mined architectural solutions into three subcategories (see Table \ref{minedArchitecturalInfo}). \begin{itemize} \item\textit{Architectural tactic} is a design solution used to satisfy a specific quality attribute of a system (e.g., reliability, performance) \cite{SA2012}. As shown in Table \ref{minedArchitecturalInfo}, 9 out of 79 studies focused on mining architectural tactics from software repositories. For example, Chinnappan \textit{et al}. {[S9]} employed a manual approach (i.e., qualitative data analysis) to mine energy-aware architectural tactics for Robotic Operating System (ROS) based software systems from GitHub and Stack Overflow. To do so, they carried out a multi-phase study that resulted in seven energy-aware architectural tactics (such as energy savings mode and stop current task \& recharge).
To foster the applicability of the identified tactics even beyond the ROS community, they described these tactics in a generic, implementation-independent manner by utilizing diagrams inspired by the UML component and sequence diagram notations. These energy-aware architectural tactics can serve as guidance for roboticists, as well as other developers interested in architecting and implementing energy-aware software. \item\textit{Architectural pattern}: In common architectural pattern documents or collections (such as architecture pattern books), architectural patterns are documented with templates that consist of multiple attributes, such as intent, structure, and context \cite{buschmann2008pattern}. To adapt to modern development, developers are required to consider specific attributes, for example, “known uses”, which exemplifies the use scenarios of architectural patterns in practice \cite{liu2020mining}. However, it is not easy to update the contents of these attributes manually due to the complexity and rapid evolution of architectural technologies \cite{barua2014developers}. To mitigate this problem, Liu \textit{et al.} {[S63]} proposed a semi-automatic approach for mining architecture/design pattern use scenarios and related pattern pairs from Stack Overflow. This mined architectural information can help developers become acquainted with the usage and relatedness of architecture/design patterns in today’s modern software development. \item\textit{Framework for addressing scalability}: Software frameworks provide developers with powerful tools to develop more flexible and less error-prone applications in a more effective way. Software frameworks often help expedite the development process by providing necessary functionality “out of the box”. Providing frameworks for reusability, scalability, and separation of concerns is key to software development \cite{schmidt2003patterns}. For example, M\'{a}rquez \textit{et al}.
{[S26]} proposed a semi-automatic approach for mining frameworks in OSS for addressing scalability concerns in microservice applications, and these frameworks can assist developers in designing scalable microservice applications. \end{itemize} \textbf{System requirement}: Analyzing requirements to relate and map them to their corresponding architectural elements (e.g., architectural components) has become one of the major challenges faced by architects during the development process \cite{casamayor2012functional}. The failure of a high percentage of software projects is often caused by, for example, the lack of proper requirements analysis, incomplete requirement specifications, and changing requirements, among others~\cite{hull2005requirements}. Thus, 17.7\% of the studies (14 out of 79) focused on mining the requirements of systems from repositories for assisting the architecting process. We further categorized the mined architectural information in this category into three subcategories (see Table \ref{minedArchitecturalInfo}). \begin{itemize} \item\textit{Quality attribute}: While all systems have quality attributes or Non-Functional Requirements (NFRs), they may not be explicitly stated in the requirements specification. Furthermore, as some NFRs represent significant properties of the systems, specifically properties of fault-tolerant systems, those NFRs require appropriate analysis effort to ensure they are met during the architecture design~\cite{chung2012non}. When the specified NFRs are not met, the system incurs costly rework. Quality attribute requirement is the most mined (8 out of 79 studies) architectural information in the system requirement category to support the development (see Table \ref{minedArchitecturalInfo}). For instance, in {[S66]}, Gokyer \textit{et al}.
proposed an approach based on NLP and ML techniques for automatically mining NFRs expressed in plain text and mapping these NFRs to architectural elements, such as architectural components, in the problem domain. The mined architectural information can guide architects in making architectural decisions effectively. \item\textit{Functional requirement}: Essentially, a system’s utility is determined by both its functionality and non-functional characteristics \cite{chung2012non}. Even though both functional and non-functional characteristics must be taken into consideration in the development of a quality software system, there has been a lopsided emphasis on the functionality of the system \cite{chung2012non}. 5 out of 79 studies paid attention to mining functional requirements to support the development (see Table \ref{minedArchitecturalInfo}). For example, Casamayor \textit{et al}. {[S43]} presented an approach based on NLP and ML techniques for semi-automatically mining and analyzing functional requirements (from the textual description of requirements) that will become the responsibilities of certain architectural components in the system, in order to help bridge the gap between requirements analysis and architectural design. \item\textit{General system requirement} refers to the study that has a generic description of system requirements (i.e., {[S25]}). In other words, this study does not explicitly specify a concrete category or subcategory of mined system requirements that belongs to one of the abovementioned categories or subcategories. \end{itemize} \textbf{Architectural change}: Software changes are inevitable. There are many reasons for software changes, such as repairing defects and evolving user requirements \cite{williams2010characterizing}. When changes affect the architecture, software engineers need a comprehensive understanding of the causes of architecture changes and their impact \cite{williams2010characterizing}.
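Several of the studies discussed in this category classify commit messages into Williams and Carver's four architectural change categories (perfective, corrective, adaptive, and preventive). As a purely illustrative sketch of that labeling task (the keyword lists and rule-based matching below are invented assumptions for demonstration; the surveyed approaches, such as ArchiNet in {[S40]}, use trained ML classifiers rather than rules):

```python
# Illustrative keyword heuristic for labeling commit messages with the
# Williams-Carver architectural change categories. The keyword lists are
# hypothetical; they do not reproduce any surveyed study's classifier.
RULES = {
    "corrective": {"fix", "bug", "defect", "crash", "fault"},
    "adaptive":   {"port", "migrate", "platform", "upgrade", "standard"},
    "preventive": {"refactor", "restructure", "cleanup", "reengineer"},
    "perfective": {"add", "feature", "improve", "support", "requirement"},
}

def classify_commit(message):
    """Return the first matching change category, or 'general' if none match."""
    words = set(message.lower().split())
    for label, keywords in RULES.items():
        if keywords & words:
            return label
    return "general"

for msg in [
    "Refactor the persistence layer into separate modules",  # preventive
    "Fix null pointer crash in scheduler",                   # corrective
    "Update documentation wording",                          # general
]:
    print(msg, "->", classify_commit(msg))
```

Whole-word matching (rather than substring matching) is used deliberately, so that, for example, "support" does not spuriously trigger the "port" keyword.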
This understanding is important because changes may affect many aspects of a system and introduce complexities into the architecture, which will likely introduce a significant number of architectural issues. 17.7\% of the studies (14 out of 79) mined and analyzed architectural change information from different perspectives (e.g., analyzing the causes of architectural changes \cite{williams2010characterizing}) to support the development. Similar to these studies (i.e., \cite{ding2015causes}\cite{mondal2019exploratory}), we adopted a predefined classification of architectural changes introduced by Williams and Carver \cite{williams2010characterizing} to categorize the mined architectural change information into four subcategories (see Table \ref{minedArchitecturalInfo}). \begin{itemize} \item\textit{Perfective change} refers to architectural changes that result from new or changed requirements. This subcategory of architectural changes improves the system to better meet user needs \cite{williams2010characterizing}. For example, Mondal \textit{et al}. {[S40]} proposed an ML-based approach for mining information related to architectural changes from source code and commit messages, and labeled them into four subcategories, including perfective changes. This mined architectural information can help developers better analyze and characterize the causes and impact of architectural changes prior to their implementation. \item\textit{Corrective change} denotes architectural changes that occur in response to defects in software systems \cite{williams2010characterizing}. Mondal \textit{et al}. {[S41]} proposed an approach for automatically classifying architectural changes into four subcategories (including corrective changes) from commit messages, to reduce the manual effort when performing architectural change analysis (such as analyzing architectural changes for detecting architectural anti-patterns).
\item\textit{Adaptive change} refers to architectural changes that happen when moving to a new environment or platform or accommodating new standards \cite{williams2010characterizing}. In {[S40]}, Mondal \textit{et al.} proposed a text classifier called ArchiNet that classifies architectural changes into four subcategories (including adaptive changes) from commits and source code. \item\textit{Preventive change} refers to architectural changes that take place when restructuring or reengineering the software system \cite{williams2010characterizing}. For instance, Mondal \textit{et al.} {[S41]} conducted experiments with commits to compare the performance of three approaches based on the Latent Dirichlet Allocation (LDA) technique in categorizing architectural changes into four subcategories, including preventive changes. \item\textit{General architectural change} refers to the studies that have a generic description of architectural changes (e.g., in {[S46]}). In other words, these studies do not explicitly specify a concrete category or subcategory of mined architectural changes that belongs to one of the abovementioned categories or subcategories. \end{itemize} \textbf{Design relationship}: The architecture of a system is designed with the aid of architectural elements (e.g., architectural patterns, tactics, and components). However, there are complex interdependency relationships between these elements, and many of the tasks faced by developers today involve the efficient and effective use of these elements together during application design to meet both functional and non-functional goals~\cite{bi2018architecture}. For example, designing with a certain architectural pattern to address specific quality attribute requirements cannot be considered in isolation, but there might be other key architectural elements to be considered, such as design contexts \cite{bedjeti2017modeling}.
15.1\% of the studies (12 out of 79) researched mining design relationships from various sources to aid the development. The mined architectural information in this category provides developers with the relationships between architectural elements (e.g., incompatibility between architectural elements \cite{karthik2019automatic}) to consider when designing with these elements. \begin{itemize} \item\textit{Component-Component relationship}: Software applications may comprise a number of heterogeneous components, such as software components and hardware components, and these components need to interact and communicate with each other in order to satisfy various design concerns. Nevertheless, combining these components together as part of the application stack to meet both functional and non-functional requirements of systems is not easy for developers \cite{karthik2019automatic}. 5 out of 79 studies focused their research on mining component-component relationships to assist the development process. For instance, Karthik \textit{et al}. {[S20]} presented an automatic ML-based approach for mining the relationships between architectural components from the unstructured text of Q\&A site posts. Moreover, the mined component-component relationships were categorized into three types, namely incompatibility, required, and recommended. This mined architectural information can help developers who work with component-based systems to effectively understand their systems. \item\textit{Architectural tactic-Context relationship}: Whilst applying Architectural Tactics (ATs) to address Quality Attributes (QAs) is well explored in existing works, e.g., \cite{mirakhorli2013domain}\cite{bi2021mining}, there are no guidelines for architects who look for information on what considerations (e.g., design contexts) they need to take into account when applying ATs to address QA concerns.
In order to provide architects with such information, Gopalakrishnan \textit{et al.} {[S27]} proposed an ML and text mining based approach for extracting and mining typical design contexts in the source code in which architectural tactics are implemented. The authors went on to perform an in-depth analysis of the relationship between the design contexts in source code and architectural tactics implemented in that code through building inference models, which can be used to predict and recommend the placement of architectural tactics within source code packages/modules. \item\textit{Architectural pattern-Architectural pattern relationship}: Architectural patterns have increasingly become an integral part of architecture design practices \cite{buschmann2007pattern}. Architectural patterns are seldom applied in isolation within an architecture. Individual architectural patterns can only solve specific parts of the design problem \cite{kamal2010mining}. Architectural patterns are often combined with other relevant architectural patterns during architecture design, which gives these patterns the potential to address multiple architecturally significant requirements \cite{avgeriou2005architectural}. For example, the Client-Server and Broker patterns are often used in combination to design distributed system architectures \cite{buschmann1996pattern}. However, combining architectural patterns effectively during architecture design remains a challenging task for software engineers because, for example, the integration of any two architectural patterns can take several forms, and existing architectural pattern languages only mention generic architectural pattern to architectural pattern relationships and do not go into the details of their combination \cite{kamal2010mining}. 
In {[S78]}, Kamal \textit{et al.} conducted a qualitative data analysis and mined the design relationships between various architectural patterns from several sources of software repositories, such as architecture design documents and case studies. The mined architectural information in this subcategory can assist software engineers who work with software systems that require combinations of more than one architectural pattern during the development. \item\textit{Architectural pattern-Quality attribute relationship}: Many architecture design methods consider the use of architectural patterns as a fundamental design concept \cite{rozanski2012software}. When making an effective architectural pattern selection, developers should consider, among other aspects, the impact of that pattern on promoting or inhibiting specific quality attributes. However, for inexperienced developers, this task often requires significant time and effort. The reasons include the large number of architectural patterns, the emergence of new architectural patterns, and the lack of techniques and tools for automatically identifying the most suitable architectural patterns for specific software systems \cite{velasco2016knowledge}. Velasco-Elizondo \textit{et al}. {[S31]} proposed an approach based on an information extraction technique (i.e., entity extraction) and knowledge representation (i.e., ontology) to automatically analyze and mine architectural patterns considering specific quality attributes (e.g., performance) from architectural pattern descriptions. To be specific, the ontology contains two subtypes of ontologies: one is an English grammar-based ontology, which is further categorized into promotes verb, modal verb, etc., and the other is a performance ontology that defines performance-specific concepts (e.g., throughput).
The mined architectural information in this subcategory can help developers select architectural patterns by revealing whether specific quality attributes are promoted or inhibited. \item\textit{Architectural tactic-Quality attribute relationship}: Software systems typically have multiple QAs, and ATs provide established design solutions to address these QAs \cite{SA2012}. However, the information about the relationships between ATs and QAs has not been explored systematically \cite{bi2021mining}. To gather such QA-AT relationship information and help architects make informed design decisions when they apply ATs to address QAs in practice, Bi \textit{et al}. {[S6]} developed a semi-automatic approach for efficiently mining QA-AT discussions by practitioners (i.e., on Stack Overflow). Moreover, the authors analyzed the mined QA-AT discussions to structure the design relationships between ATs and QAs used in practice. \item\textit{Design pattern-Architectural tactic relationship}: Design patterns are often used to design and implement architectural tactics for addressing QAs \cite{mirakhorli2012variability}. Mirakhorli \textit{et al}. reported that an architectural tactic can be designed and implemented differently from one system to another \cite{mirakhorli2012tactic}. In other words, there are multiple ways of designing and implementing an architectural tactic in a system for addressing a specific QA. For example, the heartbeat tactic is a relatively simple tactic used to monitor the availability of a critical component \cite{SA2012}. However, Mirakhorli \textit{et al}. \cite{mirakhorli2012tactic} observed numerous variations in how the heartbeat tactic could be implemented. Therefore, Mirakhorli \textit{et al}. {[S21]} proposed an approach for mining the way developers use design patterns within the implementation of architectural tactics from OSS projects.
Moreover, the authors went on to analyze and mine architectural information, and they reported various relationships between design patterns and architectural tactics that other developers should consider. \end{itemize} \textbf{Architectural technical debt}: Architectural Technical Debt (ATD) is incurred by architecture decisions that consciously or unconsciously compromise system-wide QAs, particularly maintainability and evolvability \cite{li2014architectural}. Typical ATD includes violations of architecture design principles or rules \cite{li2015systematic}. ATD needs to be identified and removed since it is harmful to the system’s long-term health \cite{li2015systematic}. 9 out of 79 studies developed approaches for identifying and mining ATD for different purposes, such as mining ATD for further management in order to keep the accumulated ATD under control {[S37]}. We categorized the mined architectural information related to ATD into three subcategories (see Table \ref{minedArchitecturalInfo}). \begin{itemize} \item\textit{Architectural compliance issue} occurs when the implemented architecture deviates from the intended architecture. The phenomenon of divergence between the intended and implemented architecture is regarded as architecture erosion \cite{perry1992foundations}. An eroded architecture can aggravate the brittleness of the system and decrease architecture sustainability \cite{li2022understanding}. For instance, a software system with an eroded architecture may lead to the deterioration of the engineering quality of the system \cite{de2012controlling}, and make it difficult for developers to understand the internal structure of the system \cite{perry1992foundations}. Even though the consequences of architectural compliance issues have been clearly defined in the literature, there has been less attention to mining architectural information related to such issues in order to assist the development.
Only 4 out of 79 studies researched mining architectural compliance issues. For example, Maffort \textit{et al}. {[S34]} presented an approach that relies on four heuristic models for detecting and mining the absences (i.e., something expected is not found) and divergences (i.e., something prohibited is found) in source-code-based architectures. The mined architectural compliance issues can be used to rapidly raise architectural deviation warnings, without deeply involving architects. \item\textit{Architectural anti-pattern} may occur due to the violation of design principles \cite{baldwin2000design}. Mo \textit{et al}. {[S64]} presented an approach for automatically detecting and mining six types of architectural anti-patterns (e.g., unstable interface, modularity violation groups, and unhealthy inheritance hierarchy), defined as connections among files that violate design principles. Through analyzing the mined architectural anti-patterns, the authors demonstrated that the identified architectural anti-patterns have a significant impact on file bug-proneness and change-proneness. This mined architectural information can be used by architects to pinpoint architectural anti-patterns in the systems, quantify their severity, and determine priorities for refactoring. \item\textit{Architectural smell} may be caused by applying architecture design solutions in an inappropriate context, mixing design fragments that have undesirable system behavior, or applying design abstractions at the wrong level of granularity \cite{garcia2009identifying}. D\'{i}az-Pace \textit{et al.} {[S71]} developed an approach based on Link Prediction (LP) techniques and an ML classification model to mine two types of architectural smells, namely cyclic dependency and hub-like dependency, from OSS projects.
\end{itemize} \textbf{General architectural information} refers to the studies that have a generic description of architectural information (e.g., design information {[S17]}, design discussion {[S48]}, architectural information {[S60]}). In other words, these studies do not explicitly specify a concrete category or subcategory of mined architectural information that belongs to one of the abovementioned categories or subcategories. \begin{tcolorbox}[colback=gray!5!white,colframe=gray!75!black,title=Key Findings of RQ1] \textbf{Finding 1}: 8 main categories and 29 subcategories of architectural information are mined from various sources to assist architecting activities. \textbf{Finding 2}: \textit{Architectural description} (62.02\%, 49 out of 79 studies), \textit{architectural decision} (24.1\%, 19 out of 79 studies), \textit{architectural solution} (18.9\%, 15 out of 79 studies), and \textit{system requirement} (17.7\%, 14 out of 79 studies) are the top four most mined categories of architectural information. \end{tcolorbox} \subsection{Results of RQ2: Sources used for mining architectural information}\label{ResultsOfRQ2} We observed that the selected studies utilized various sources for mining architectural information in order to support the development. We applied open coding and constant comparison to analyze the extracted data item D6 (see Table \ref{DataExtraction}) for answering RQ2 and categorized the reported sources into thirteen core categories (see Figure \ref{SourceMinedFigure}). As shown in Figure \ref{SourceMinedFigure}, the most frequently used source for mining architectural information is \textit{Version Control Systems (VCS)}, e.g., GitHub (used by 45 studies), followed by \textit{Online Q\&A sites}, e.g., Stack Overflow (used by 21 studies), and \textit{Software description and documentation} (used by 11 studies). In the following paragraph, we provide examples of the five most frequently employed sources for mining architectural information.
Ghorbani \textit{et al}. {[S30]} collected a set of open source Java applications from \textit{VCS} (specifically, GitHub) and mined architectural inconsistencies from those applications. Soliman \textit{et al}. {[S4]} collected architecture related posts in \textit{Online Q\&A sites} (specifically, Stack Overflow) and mined architectural knowledge for technology decisions. Gilson \textit{et al}. {[S42]} used \textit{Software description and documentation} (specifically, requirements description) to extract quality attributes in order to help architects make architectural decisions in the early phases of the development. Zalewski \textit{et al}. {[S14]} gathered data from two sources (i.e., \textit{Wiki} and \textit{Online Q\&A sites} (specifically, Stack Exchange)) for mining alternative architectural solutions to a given architectural problem, in order to provide assistance to the decision-making architect. Shahbazian \textit{et al}. {[S35]} crawled several open source projects from \textit{Issue tracking systems} (specifically, Jira) to detect and mine implementation issues that lead to possibly unintentional design decisions and subsequently change the system’s architecture. Note that, since some studies used more than one source, the sum of the sources reported in Figure \ref{SourceMinedFigure} exceeds the total number of the selected studies (i.e., 79 studies). \begin{figure} \centering \includegraphics[width=14cm, height=7cm]{Figures/SourceMinedFigure.pdf} \caption{The sources used for mining architectural information} \label{SourceMinedFigure} \end{figure} \begin{tcolorbox}[colback=gray!5!white,colframe=gray!75!black,title=Key Findings of RQ2] \textbf{Finding 3}: \textit{VCS}, e.g., GitHub, (used by 45 studies), \textit{Online Q\&A sites}, e.g., Stack Overflow, (used by 21 studies), and \textit{Software description and documentation} (used by 11 studies) are the top three most frequently used sources for mining architectural information. 
\end{tcolorbox} \subsection{Results of RQ3: Supported architecting activities}\label{ResultsOfRQ3} We collected the architecting activities that can be supported by the mined architectural information (the results of RQ1). As mentioned in Section~\ref{DataSynthesis}, we employed open coding \& constant comparison, predefined classifications (i.e., ten architecting activities proposed by Hofmeister \textit{et al}. \cite{hofmeister2007general}, Tang \textit{et al}. \cite{tang2010comparative}, and Li \textit{et al}. \cite{li2013application}), and descriptive statistics to analyze the extracted data item D7 (see Table \ref{DataExtraction}) and identify the supported architecting activities. Hofmeister \textit{et al}. \cite{hofmeister2007general} presented a general model for architecture design, which consists of three main activities: architecture analysis, architecture synthesis, and architecture evaluation. Tang \textit{et al}. \cite{tang2010comparative} extended this general model by adding two architecting activities (i.e., architecture implementation and architecture maintenance) to the architecture life cycle. These two architecting activities emphasize that architecture is an important asset not only for design but also for later stages of the software development life cycle. Li \textit{et al}. \cite{li2013application} extended the aforementioned architecting activities by adding five more activities (i.e., architecture recovery, architectural description, architecture understanding, architecture impact analysis, and architecture reuse) from the perspective of Architecture Knowledge (AK) management (e.g., AK reuse) to support the architecture life cycle. As mentioned above, we followed the ten predefined architecting activities when answering RQ3. 
However, through our qualitative data analysis (see Section \ref{DataSynthesis}), we obtained two additional architecting activities, namely “architecture conformance checking” and “all activities” (meaning that a study can support all the other eleven architecting activities reported in this SMS). Thus, we used twelve architecting activities to answer RQ3 in this SMS. Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies} presents the frequency of occurrence of the twelve architecting activities supported by the mined architectural information. Moreover, Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies} provides the relationships between mined architectural information and supported architecting activities as well as relevant studies. These relationships act as a panorama to comprehensively understand the state of the research in mining architectural information. As shown in Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}, the top three most supported architecting activities are \textit{architecture understanding} (40 out of 79 studies), \textit{architecture recovery} (26 out of 79 studies), and \textit{architectural description and documentation} (23 out of 79 studies). We describe the twelve supported architecting activities below. 
\begin{landscape} \begin{longtable}{p{13em}p{19em}p{20em}p{5em}} \caption{Relationships between mined architectural information and supported architecting activities} \label{MappingBetweenArchActivitiesMinedArchInfoAndStudies} \\\hline \textbf{Supported architecting activity} & \multicolumn{1}{l}{\textbf{Mined architectural information}} & \multicolumn{1}{l}{\textbf{Studies}} & \textbf{Count (\%)} \\\hline \endfirsthead \multicolumn{4}{c}% {{\bfseries }}\\ \endhead \multicolumn{4}{r}{{}} \\ \endfoot \hline \hline \endlastfoot \multirow{4}{4em}[14pt]{Architecture Understanding} & Architectural description, Architectural decision, System requirement, Architectural solution, Architectural change, Architectural technical debt, General architectural information, Design relationship & {[S14] }{[S18]} {[S19]} {[S21]} {[S25]} {[S26]} {[S28]} {[S29]} {[S30]} {[S32]} {[S33]} {[S34]} {[S35]} {[S36]} {[S37]} {[S38]} {[S39]} {[S40]} {[S41]} {[S42]} {[S46]} {[S47]} {[S50]} {[S51]} {[S52]} {[S57]} {[S58]} {[S59]} {[S60]} {[S61]} {[S64]} {[S65]} {[S67]} {[S68]} {[S69]} {[S71]} {[S72]} {[S73]} {[S74]} {[S77]} & \multicolumn{1}{c} {40 (50.6\%)}\\ \cline{1-4} \multirow{4}{13em}[14pt]{Architecture Recovery} & Architectural solution, Architectural description, System requirement, Architectural change, Architectural decision, Architectural technical debt, General architectural information & {[S1]} {[S12]} {[S17]} {[S19]} {[S24]} {[S26]} {[S28]} {[S33]} {[S35]} {[S36]} {[S37]} {[S39]} {[S40]} {[S41]} {[S46]} {[S47]} {[S51]} {[S53]} {[S57]} {[S59]} {[S69]} {[S73]} {[S75]} {[S77]} &\multicolumn{1}{c} {26 (32.9\%)}\\ \cline{1-4} \multirow{4}{13em}[14pt]{Architectural Description and Documentation} & Architectural solution, System requirement, Architectural description, Architectural change, Architectural decision, General architectural information & {[S1]} {[S7]} {[S11]} {[S14]} {[S19]} {[S21]} {[S25]} {[S29]} {[S30]} {[S32]} {[S38]} {[S39]} {[S40]} {[S47]} {[S50]} {[S51]} {[S52]} {[S58]} 
{[S60]} {[S61]} {[S63]} {[S73]} {[S77]} & \multicolumn{1}{c} {23 (29.1\%)}\\ \cline{1-4} \multirow{4}{13em}[14pt]{Architecture Maintenance and Evolution} & Architectural decision, Architectural solution, Architectural description, Architectural technical debt, Architectural change, System requirement, Design relationship & {[S5]} {[S19]} {[S25]} {[S29]} {[S30]} {[S32]} {[S36]} {[S37]} {[S41]} {[S46]} {[S57]} {[S59]} {[S64]} {[S65]} {[S67]} {[S68]} {[S69]} {[S71]} {[S74]} {[S76]} {[S79]} & \multicolumn{1}{c} {21 (26.6\%)}\\ \cline{1-4} \multirow{4}{4em}[14pt]{Architecture Implementation} & Architectural decision, General architectural information, Architectural description, Architectural solution & {[S7]} {[S13]} {[S19]} {[S20]} {[S21]} {[S27]} {[S32]} {[S33]} {[S34]} {[S35]} {[S38]} {[S49]} {[S56]} {[S63]} {[S65]} {[S73]} {[S79]} & \multicolumn{1}{c} {17 (21.5\%)}\\ \cline{1-4} \multirow{4}{13em}[14pt]{Architecture Analysis} & System requirement, Architectural description, Architectural technical debt, General architectural information, Architectural decision & {[S2]} {[S8]} {[S14]} {[S29]} {[S31]} {[S32]} {[S37]} {[S42]} {[S43]} {[S45]} {[S55]} {[S66]} {[S76]} {[S79]} & \multicolumn{1}{c} {14 (17.7\%)}\\ \cline{1-4} \multirow{4}{13em}[14pt]{Architecture Reuse} & Architectural decision, System requirement, Architectural solution, Architectural description, Architectural change, General architectural information & {[S1]} {[S13]} {[S19]} {[S24]} {[S25]} {[S26]} {[S28]} {[S33]} {[S36]} {[S55]} {[S56]} {[S60]} {[S78]} & \multicolumn{1}{c} {13 (16.5\%)}\\ \cline{1-4} \multirow{4}{13em}[14pt]{Architecture Synthesis} & Architectural solution, System requirement, Architectural decision, Architectural description, Architectural change, Design relationship & {[S1]} {[S2]} {[S3]} {[S4]} {[S7]} {[S10]} {[S14]} {[S15]} {[S31]} {[S38]} {[S79]} &\multicolumn{1}{c} {11 (13.9\%)}\\ \cline{1-4} \multirow{4}{13em}[14pt]{Architecture Evaluation} & General architectural information, 
Architectural solution, Architectural decision, System requirement, Architectural description & {[S1]} {[S2]} {[S3]} {[S4]} {[S7]} {[S14]} {[S15]} {[S34]} {[S36]} {[S55]} {[S79]} &\multicolumn{1}{c} {11 (13.9\%)}\\ \cline{1-4} \multirow{4}{13em}[14pt]{Architecture Impact Analysis} & General architectural information, Architectural change, Architectural solution, System requirement, Architectural description & {[S32]} {[S34]} {[S35]} {[S39]} {[S40]} {[S41]} {[S47]} {[S49]} {[S55]} {[S57]} & \multicolumn{1}{c} {10 (12.7\%)}\\ \cline{1-4} \multirow{4}{13em}[20pt]{Architecture Conformance Checking} & Architectural description, Architectural technical debt & {[S12]} {[S30]} {[S34]} {[S51]} {[S73]} &\multicolumn{1}{c} {5 (6.3\%)}\\ \cline{1-4} \multirow{4}{14em}[24pt]{All Activities} & Architectural description, General architectural information & {[S16]} {[S44]} {[S54]} &\multicolumn{1}{c} {3 (3.8\%)}\\ \cline{1-4} \end{longtable} \end{landscape} \textbf{\textit{Architecture Understanding (AU)}} is normally conducted to comprehend elements of an architecture design, such as architectural solutions and relationships between architectural elements (e.g., the relationships between architectural components \cite{stevanetic2014exploring}), as well as the corresponding design decisions (why the architecture is designed the way it is). AU helps architects and other concerned stakeholders gain comprehensive knowledge about the architecture of a system \cite{li2013application}. The information gained during AU can be used as input for other architecting activities (such as architecture analysis and implementation) \cite{bengtsson2004architecture}. The AU activity is supported by 50.6\% (40 out of 79) of the studies, and eight types of architectural information (e.g., Architectural description, Architectural decision, System requirement) were mined to assist AU (see Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}). For example, in {[S19]}, Bhat \textit{et al}. 
developed an approach and a prototype tool (Amelie - Decision Explorer (ADeX)) for the automatic curation of design decision knowledge for architectural decision recommendations. One component of the Amelie tool can mine architectural decisions from project artifacts (e.g., project description in MS Excel files, issues), and this information helps stakeholders gain a comprehensive understanding of the entire project's architectural elements. In addition, Kazman \textit{et al}. {[S37]} developed an automatic approach for extracting and mining the files implicated in architecture flaws (i.e., architectural debt) from the development artifacts. This mined architectural information (i.e., architectural debt) can guide developers to explore and understand the underlying architecture of a system for further analysis (e.g., quantifying the return on investment of removing those architecture flaws). \textbf{\textit{Architecture Recovery (ARec)}} is mainly performed to recover architectural elements from system implementation~\cite{riva2000reverse} or documentation~\cite{shahin2013recovering}. The ARec activity is supported by 32.9\% (26 out of 79) of the studies, and seven types of architectural information (such as Architectural solution, Architectural description, System requirement) were mined to aid ARec (see Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}). For instance, Shahbazian \textit{et al}. {[S47]} developed an approach named RecovAr that relies on two existing techniques (i.e., Algorithm for Comprehension-Driven Clustering (ACDC) \cite{tzerpos2000accd} and Architecture Recovery using Concerns (ARC) \cite{garcia2011enhancing}) for automatically recovering design decisions from the readily available history artifacts of projects, such as issue trackers and version control repositories. In {[S53]}, Musil \textit{et al}. 
developed an approach named Continuous Architectural Knowledge Integration (CAKI) that combines the continuous integration of internal and external Architecture Knowledge (AK) sources (such as Wiki, Technical blogs and tutorials, Q\&A sites) together with enhanced semantic reasoning and personalization capabilities dedicated to large organizations. The evaluation results showed that CAKI potentially reduces AK search effort by concurrently yielding more diverse and relevant results. Furthermore, CAKI enables architects to efficiently recover and gradually explore relevant AK from various AK sources. \textbf{\textit{Architectural Description and Documentation (AD)}}: Architecture is described and documented using a collection of architectural elements, such as architectural views and models \cite{clements2003documenting}. The main goals of AD are facilitating the expression and evolution of software systems, providing a blueprint for system development, and supporting the communication between stakeholders \cite{6129467}. The AD activity is mentioned in 29.1\% (23 out of 79) of the studies, and six types of architectural information (such as Architectural solution, System requirement, Architectural description) were mined to support AD (see Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}). In {[S60]}, Happel \textit{et al}. presented an approach and a prototype tool named Ontobrowse that allows software engineers (e.g., architects and developers) to search and document architectural elements (descriptions of services, serviceLayers, interfaces, domain concepts, and architectural design rules) of a Service-Oriented Architecture (SOA) based system from different sources, for instance, issue tracking systems and Subversion. \textbf{\textit{Architecture Maintenance and Evolution (AME)}}: Architecture may evolve during its lifetime to respond to changing requirements \cite{tang2010comparative}. 
Architecture maintenance adjusts the architecture in response to changes and faults \cite{tang2010comparative}. The AME activity is supported by 26.6\% (21 out of 79) of the studies, and seven types of architectural information (such as Architectural decision, Architectural solution, Architectural description, Architectural technical debt) were mined to support AME (see Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}). In {[S37]}, Kazman \textit{et al}. presented a tool called Titan for mining architectural technical debts (specifically, architectural anti-patterns), which incur high maintenance penalties. The mined architectural information can help architects and developers locate architectural anti-patterns in the systems during architectural maintenance, determine the penalty incurred by these debts, and estimate the expected benefits of removing them. \textbf{\textit{Architecture Implementation (AI)}}: During this activity, developers transform the architecture design into code \cite{tang2010comparative}. The AI activity is supported by 21.5\% (17 out of 79) of the studies, and four types of architectural information (such as Architectural decision, General architectural information) were mined to facilitate AI (see Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}). For example, Ali \textit{et al}. {[S79]} developed an ML-based approach for mining AK (such as system requirements, design decisions, architectural solutions) from Stack Overflow in order to assist five architecting activities. Among other usages, the mined AK, including detailed module design (such as module decomposition, coupling, and cohesion), can be utilized by developers during AI. \textbf{\textit{Architecture Analysis (AA)}} aims to define the problems that an architecture needs to address. During AA, Architecturally Significant Requirements (ASRs) are identified from given architectural concerns and context \cite{hofmeister2007general}. 
The AA activity is supported by 17.7\% (14 out of 79) of the studies, and five types of architectural information (such as System requirement, Architectural description) were mined to aid AA (see Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}). In {[S42]}, Gilson \textit{et al}. proposed an approach for extracting and mining quality attributes from user stories in agile development projects. The mined architectural information helps developers identify relevant quality attributes and potential architectural key drivers during AA, and provides a “bigger picture” of potential architectural drivers for early architecture decision making. \textbf{\textit{Architecture Reuse (AReu)}} aims to reuse existing architectural elements, such as architectural solutions, architectural decisions and their rationale, for addressing various architectural problems, and this activity can help achieve an architecture of better quality at a lower cost~\cite{li2013application}. The AReu activity is facilitated by 16.5\% (13 out of 79) of the studies, and six types of architectural information (such as Architectural decision, System requirement, Architectural solution) were mined to support AReu (see Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}). In {[S1]}, Soliman \textit{et al}. used a semi-automatic approach to mine AK, including architectural solutions (e.g., architectural patterns and tactics) and architectural decisions, from issue tracking systems. Among other uses, the mined AK can help practitioners (e.g., architects) determine the architectural scenarios in which reusing AK (e.g., architectural solutions) from issue tracking systems could be the most suitable. 
\textbf{\textit{Architecture Synthesis (AS)}}: In this architecting activity, a set of candidate architectural solutions are proposed to address the ASRs that have been gathered during AA, and the AS activity essentially links the problem space to the solution space during architectural design \cite{hofmeister2007general}. The AS activity is supported by 13.9\% (11 out of 79) of the studies, and six types of architectural information (such as Architectural solution, System requirement, Architectural decision) were mined to aid AS (see Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}). For instance, in {[S4]}, Soliman \textit{et al}. used a qualitative analysis approach to mine architectural knowledge for technology decisions from Stack Overflow and classified the AK into several categories, including solution synthesis. The mined AK in the solution synthesis category provides developers with a set of technology solutions, which have certain characteristics (such as technology features, quality attribute evaluation) for addressing architectural concerns. \textbf{\textit{Architecture Evaluation (AE)}} aims to assess the architectural solutions that are proposed during AS against the ASRs \cite{hofmeister2007general}. During the AE activity, various factors are considered, such as pros and cons, trade-offs, and constraints of architectural solutions, which result in the selection of appropriate architectural solutions for satisfying ASRs. The AE activity is supported by 13.9\% (11 out of 79) of the studies, and five types of architectural information (such as General architectural information, Architectural solution, Architectural decision) were mined to guide AE (see Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}). For instance, Zalewski \textit{et al}. 
{[S14]} proposed an Architecture Decision Support System (ADSS) prototype tool that can mine various types of architectural information (e.g., architectural solutions, architectural decisions and their rationale) from diverse sources, such as Q\&A sites and Wiki. The mined architectural information can help architects evaluate (among others) candidate architectural solutions to a given architectural problem by making architectural decisions when conducting AE. \textbf{\textit{Architecture Impact Analysis (AIA)}} aims to identify directly affected and indirectly influenced elements of an architecture due to an architectural change scenario \cite{bengtsson2004architecture}. The outcome of this activity helps architects comprehend the dependencies between the changed elements and the affected elements in the architecture \cite{li2013application}. The AIA activity is supported by 12.7\% (10 out of 79) of the studies, and five types of architectural information (such as General architectural information, Architectural change, Architectural solution) were mined to aid AIA (see Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}). For example, Anish \textit{et al}. {[S55]} developed an ML-based approach for automatically identifying and mining ASRs from requirements documents, and classifying the mined ASRs into several subcategories based on the types of architectural impacts the mined ASRs can have on the system components, which helps architects perform AIA. \textbf{\textit{Architecture Conformance Checking (ACC)}} is conducted to check and evaluate whether the implemented architecture conforms to the intended architecture, and this activity can help architects identify and correct architectural violations and further avoid constant architecture erosion during the software development life cycle \cite{pruijt2013architecture}. 
The ACC activity is supported by 6.3\% (5 out of 79) of the studies, and two types of architectural information (i.e., Architectural description, Architectural technical debt) were mined to facilitate ACC (see Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}). For example, Ghorbani \textit{et al}. {[S30]} presented DARCY, an approach that automatically detects and mines inconsistent module dependencies within Java applications. The mined architectural information can be used to detect inconsistencies and evaluate the conformance of the implemented architecture when conducting ACC. \textbf{\textit{All Activities}} means that the mined architectural information can support all the abovementioned eleven architecting activities. Only 3.8\% (3 out of 79) of the studies claimed that they can support all the aforementioned architecting activities, and two types of architectural information (i.e., Architectural description, General architectural information) were mined (see Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}). For instance, in {[S54]}, Weinreich and Buchgeher presented an approach named Language for Integrated Software Architecture (LISA), which consists of the LISA model and the LISA toolkit to facilitate the eleven abovementioned architecting activities. The LISA model supports the description of both static and dynamic structures at different levels of abstraction, from classes, packages, and modules to components. Among other things, the LISA model supports the description of architectural solutions, architectural decisions, and architecting activities. The LISA toolkit, which is an extension of the Eclipse IDE, can facilitate the incremental and concurrent development of requirements, architecture, and implementation. For example, to facilitate AA, the approach can be used to capture ASRs mined from various sources (such as issue management systems). 
\begin{tcolorbox}[colback=gray!5!white,colframe=gray!75!black,title=Key Findings of RQ3] \textbf{Finding 4}: Twelve architecting activities can be supported by the mined architectural information, of which \textit{architecture understanding} (50.6\%, 40 out of 79 studies), \textit{architecture recovery} (32.9\%, 26 out of 79 studies), and \textit{architectural description and documentation} (29.1\%, 23 out of 79 studies) are the top three most supported architecting activities. Only a few of the selected studies (3, 3.8\%) proposed approaches for mining architectural information to support all the architecting activities reported in this SMS. \end{tcolorbox} \subsection{Results of RQ4: Approaches and tools}\label{ResultsOfRQ4} We describe the approaches and tools used for mining architectural information in Section \ref{Approach} and Section \ref{Tools}, respectively. \subsubsection{Approaches}\label{Approach} The selected studies have proposed and used different approaches to mine architectural information from different sources. We analyzed and synthesized the approaches used for mining architectural information by following the coding steps for data synthesis as described in Section \ref{DataSynthesis}. We observed that some selected studies named the approaches they used to mine architectural information, while others did not. If a study does not provide a name for the proposed approach, we labelled it as “Unnamed” (see Table \ref{SammazationOf_Autom_Approaches} and Table \ref{SammazationOfSemi-Autom_Approaches}). Moreover, we found that the proposed approaches have different levels of automation (e.g., automatic, semi-automatic). In addition, these approaches have been applied to different types of tasks (e.g., classification, clustering) for mining architectural information from various sources. Thus, in this SMS, we utilized a two-level categorization to categorize the approaches for mining architectural information. 
At the first level, we divided the approaches into three high-level categories based on the level of automation: (1) \textit{automatic}, (2) \textit{semi-automatic}, and (3) \textit{manual}. At the second level, we further categorized the approaches into three categories based on different types of tasks in mining architectural information: (1) \textit{architectural information classification}, (2) \textit{architectural information clustering}, and (3) \textit{architectural information retrieval}. In total, we gathered 81 approaches from the selected studies, of which 61.7\% (i.e., 50 out of 81) are automatic approaches (see Table \ref{SammazationOf_Autom_Approaches}), 32.1\% (i.e., 26 out of 81) are semi-automatic approaches (see Table \ref{SammazationOfSemi-Autom_Approaches}), and 6.2\% (i.e., 5 out of 81) are manual approaches (see Table \ref{SammazationOfManual_Approaches}). Architectural information classification based approaches account for 59.3\% (48 out of 81), followed by clustering based approaches (20.9\%, 17 out of 81), and information retrieval based approaches (19.8\%, 16 out of 81). In this SMS, one selected study may make use of two or more approaches, and consequently the number of collected approaches (81) is greater than the number of the selected studies (79). Furthermore, to ease the application of these approaches in practice, we also present the sources that the proposed approaches can be applied to. \textbf{Architectural information classification} refers to the approaches that mine architectural information based on classification techniques. Architectural information classification based approaches are the most frequently used approaches (59.3\%, 48 out of 81) for mining architectural information. Classification techniques learn from the data input provided to them and then use this learning to classify new observations \cite{kotsiantis2006machine}. 
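To make this concrete, the gist of such a classifier can be sketched in a few lines of self-contained Python; the term dictionary, threshold, and example posts below are illustrative assumptions of ours, not the pipeline of any selected study:

```python
# Hypothetical sketch of dictionary-based binary classification of
# developer posts (illustrative term list and threshold; not the
# classifier of any selected study).
ARCH_TERMS = {"architecture", "component", "layer", "microservice",
              "pattern", "tactic", "coupling", "cohesion", "decision"}

def tokenize(text):
    """Lowercase and strip simple punctuation from whitespace tokens."""
    return [t.strip(".,;:!?()").lower() for t in text.split()]

def is_architectural(post, threshold=2):
    """Label a post as architecture-related if it contains at least
    `threshold` distinct dictionary terms (binary classification)."""
    hits = {t for t in tokenize(post) if t in ARCH_TERMS}
    return len(hits) >= threshold

posts = [
    "We chose a layered architecture to reduce coupling between components.",
    "How do I print the current date in Java?",
]
print([is_architectural(p) for p in posts])  # [True, False]
```

A real classifier in the selected studies would typically replace the hand-written dictionary with a trained model (e.g., an SVM or a neural network over text features), but the input/output shape of the task is the same.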
There are two types of architectural information classification based approaches that were employed in the selected studies: binary classification based approaches (e.g., {[S6]} {[S48]} {[S59]}) and multi-class classification based approaches (e.g., {[S40]} {[S41]} {[S76]} {[S79]}). Architectural information classification based approaches are applied either automatically (see Table \ref{SammazationOf_Autom_Approaches}), semi-automatically (see Table \ref{SammazationOfSemi-Autom_Approaches}), or manually (see Table \ref{SammazationOfManual_Approaches}). For example, in {[S40]}, Mondal \textit{et al.} proposed an approach named ArchiNet for mining and classifying architectural changes from code and commit messages in GitHub. Specifically, they formulated this problem as a multi-class classification task that automatically classifies architectural changes into four categories. In {[S6]}, Bi \textit{et al.} developed a semi-automatic dictionary-based mining approach to extract Quality Attribute (QA) and Architectural Tactic (AT) related discussions from Stack Overflow (SO) posts. They formulated this problem as a binary classification task, which automatically identifies QA-AT related discussions from SO posts. Then the authors went on to manually structure the design relationships between QAs and ATs used in practice, and build a knowledge base of how developers use ATs with respect to QA concerns. In {[S6]}, Malavolta \textit{et al}. employed a manual approach (i.e., qualitative analysis) to mine and elicit evidence-based architectural guidelines for open-source ROS-based software from GitHub, BitBucket, and GitLab. Their qualitative data analysis yielded 39 guidelines for architecting robotics software, which can be used by roboticists to architect their systems to achieve particular quality requirements. \textbf{Architectural information clustering} denotes the approaches that employ clustering based techniques to mine architectural information from various sources. 
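Before turning to these clustering approaches in detail, the underlying idea of grouping similar architectural text items can be sketched as follows; this greedy single-pass (leader) clustering over made-up requirement snippets is a deliberately simplified stand-in for the K-means and hierarchical algorithms reported in the selected studies:

```python
# Hypothetical sketch: greedy "leader" clustering of short requirement
# texts by vocabulary overlap (toy data and threshold of our own; the
# selected studies use K-means or hierarchical clustering instead).
def tokens(text):
    return {t.strip(".,").lower() for t in text.split()}

def jaccard(a, b):
    """Similarity of two token sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def cluster(docs, threshold=0.25):
    clusters = []
    for d in docs:
        for c in clusters:
            # Join the first cluster whose leader (first doc) is similar enough.
            if jaccard(tokens(d), tokens(c[0])) >= threshold:
                c.append(d)
                break
        else:
            clusters.append([d])  # no similar cluster: start a new one
    return clusters

reqs = [
    "store user data in the database",
    "validate user input fields",
    "store session data in the database",
    "validate input before saving",
]
print(cluster(reqs))  # two clusters: storage-related and validation-related
```

The same input/output contract holds for the real algorithms: documents in, groups of similar documents out; what differs is how similarity and cluster membership are computed.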
Architectural information clustering based approaches account for 20.9\% (17 out of 81 approaches). The clustering techniques aim at reducing the amount of data by categorizing or grouping similar data items together into subsets or clusters \cite{saxena2017review}. The goal is to create clusters with internal coherence, placing similar objects in the same group, and assigning dissimilar objects to different groups. In this sense, documents that belong to a certain cluster should be as similar as possible to each other and dissimilar from documents in other clusters. We identified two types of architectural information clustering based approaches employed in the selected studies: partitional clustering (e.g., {[S43]}) and hierarchical clustering (e.g., {[S68]}). In addition, these approaches operate in two ways: automatic and semi-automatic (see Table \ref{SammazationOf_Autom_Approaches} and Table \ref{SammazationOfSemi-Autom_Approaches}). Partitional clustering approaches output an initial partition with a certain number of clusters. Specifically, partitional clustering approaches divide a dataset into a number of groups based upon a certain criterion known as a fitness measure \cite{nanda2014survey}. The fitness measure directly affects the nature of the formation of clusters. Once an appropriate fitness measure is selected, the partitioning task is converted into an optimization problem (e.g., grouping based on minimization of distance or maximization of correlation between patterns) \cite{nanda2014survey}. For example, Casamayor \textit{et al}. {[S43]} used a partitional clustering based approach to semi-automatically mine potential responsibilities of components in a software system to be developed. Specifically, NLP techniques and the K-means algorithm have been applied to automatically cluster candidate responsibilities into groups. 
Firstly, this approach processes requirements documents with the Part-Of-Speech (POS) tagging technique to detect actions, activities, or tasks that will become responsibilities of certain components in the architecture of a system. Afterward, K-means is utilized to group (i.e., partitional clustering) similar responsibilities into architectural components. Hierarchical clustering approaches work by iteratively merging smaller clusters into larger ones, or by splitting larger clusters into smaller ones \cite{saxena2017review}. The key point is the rule used by the approach to decide which smaller clusters to merge or which larger clusters to split. The result is a tree of clusters showing how clusters are related. The output of hierarchical clustering is a hierarchy, a structure that is more informative than an unstructured set of clusters \cite{saxena2017review}. For instance, in {[S68]}, Mitchell and Mancoridis presented an approach that automatically searches and hierarchically groups highly interdependent modules into the same subsystems/clusters and, conversely, groups independent modules into separate subsystems/clusters. The approach was developed to help software engineers perform a variety of understanding and maintenance activities for large and complex systems. \textbf{Architectural information retrieval} covers the approaches that employ Information Retrieval (IR) based techniques \cite{singhal2001modern} to search and retrieve architectural information in various sources. Around 19.8\% (16 out of 81) of the approaches are based on IR techniques, which have been utilized to (semi-)automatically search and retrieve architectural information by using two types of searching mechanisms, namely keyword-based search and semantic search. On the one hand, keyword-based search approaches do not understand polysemy and synonymy \cite{guha2003semantic}. 
Specifically, when looking at a repository, a keyword-based approach looks for the distribution of documents within the repository to find how relevant a document is to the search query of the user \cite{guha2003semantic}. Basically, this means that a document with similar words/terms to those the user types into a search engine will be considered more relevant and will appear at a higher position in the search results \cite{guha2003semantic}. For example, Aman-ul-haq and Ali Babar {[S23]} proposed a tag-based/annotation-based knowledge identification and extraction approach. One of the functionalities of this approach is the keyword-based search functionality, which can search and retrieve desired architectural artifacts in the repository using the keywords attached to each architectural artifact. This approach has been developed to reduce the time and effort required for searching and capturing architecture knowledge in a software repository. On the other hand, semantic search-based approaches understand polysemy and synonymy and know the meaning of the words/terms \cite{guha2003semantic}. Specifically, semantic search-based approaches are designed to understand the context in which the words/terms are used within the documents in order to match these words/terms more accurately to the user search queries \cite{guha2003semantic}. For instance, in {[S13]}, Mujhid \textit{et al.} presented an approach based on IR and program analysis techniques for searching and reusing architectural tactics implemented in OSS projects. Among other algorithms, this approach uses a novel information retrieval algorithm that automatically searches and ranks the source files implementing architectural tactics in the software repositories based on (i) the semantic similarity of a source file to a searched architectural tactic and (ii) the semantic similarity of a source file and its direct dependent files to a technical problem represented in the search query. 
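To make the keyword-based variant concrete, keyword search can be sketched as a minimal TF-IDF ranker over a toy corpus (illustrative data and scoring of our own; not the retrieval algorithm of {[S13]} or any other selected study):

```python
# Hypothetical sketch of keyword-based retrieval: rank documents by the
# summed TF-IDF weight of the query terms they contain (toy corpus).
import math

def tf_idf_rank(query, docs):
    """Return docs sorted by descending TF-IDF score for the query."""
    toks = [d.lower().split() for d in docs]
    n = len(docs)
    def idf(term):
        df = sum(1 for t in toks if term in t)  # document frequency
        return math.log((n + 1) / (df + 1)) + 1  # smoothed IDF
    query_terms = query.lower().split()
    scores = [sum(t.count(term) * idf(term) for term in query_terms)
              for t in toks]
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    return [docs[i] for i in order]

repo = [
    "heartbeat tactic for availability monitoring",
    "caching layer improves response time",
    "heartbeat messages detect node failure",
]
print(tf_idf_rank("heartbeat availability", repo))  # most relevant first
```

A semantic search approach would replace the exact term match with a meaning-aware similarity (e.g., embeddings or an ontology), so that synonyms of a query term still contribute to a document's score.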
The approach was developed to help developers search and reuse implementation examples of architectural tactics for a given technical context. \begin{landscape} \begin{longtable}{p{14em}p{7em}p{10em}p{19em}p{4em}} \caption{Summarization of tasks, automatic approaches, used sources, and mined architectural information from the selected studies} \label{SammazationOf_Autom_Approaches} \\\hline \textbf{Task} & \textbf{Automatic approaches} & \textbf{Used source} & \textbf{Mined architectural information} & \textbf{Studies} \\\hline \endfirsthead \multicolumn{5}{c}% {{\bfseries }}\\ \endhead \multicolumn{5}{r}{{}} \\ \endfoot \hline \hline \endlastfoot \multirow{5}{13em}[14pt]{Architectural information classification (32 approaches)} & Unnamed & Q\&A sites & General architectural information & {[S5]}\\ \cline{2-5} & Unnamed & Q\&A sites & Design relationship (i.e., Component-Quality attribute relationships) & {[S10]}\\ \cline{2-5} & Unnamed & Version control systems & Design relationship (i.e., Component-Component relationship) & {[S12]}\\ \cline{2-5} & Unnamed & Mixed artifacts (Q\&A sites, Technical blogs and tutorials, Presentations and videos, etc.) & Architectural solution (i.e., Architectural tactic) & {[S15]}\\ \cline{2-5} & Unnamed & Chat messages & General architectural information & {[S16]}\\ \cline{2-5} & Unnamed & Version control systems & General architectural information & {[S17]}\\ \cline{2-5} & Unnamed & Version control systems & General architectural decision, Architectural description (i.e., Architectural view), General architectural change & {[S18]}\\ \cline{2-5} & Unnamed & Mixed artifacts (Software description and documentation, Issue tracking systems, etc.)
& Architectural decision (i.e., Structural, Behavioral, Ban or non-existence decisions) & {[S19]}\\ \cline{2-5} & Unnamed & Q\&A sites & Design relationship (i.e., Component-Component relationship) & {[S20]}\\ \cline{2-5} & Unnamed & Issue tracking systems & Architectural decision (i.e., Structural, Behavioral, Ban or non-existence decisions) & {[S22]}\\ \cline{2-5} & Unnamed & Software description and documentation & General architectural information & {[S23]}\\ \cline{2-5} & Unnamed & Software description and documentation & General system requirement, General architectural decision, Architectural description (i.e., Architectural rationale), Architectural solution (i.e., Architectural tactics, Architectural patterns) & {[S25]}\\ \cline{2-5} & Unnamed & Version control systems & Design relationship (i.e., Architectural tactic-Context relationship) & {[S27]}\\ \cline{2-5} & DARCY & Version control systems & Architectural technical debt (i.e., Architectural compliance issue), Architectural description (i.e., Architectural model) & {[S30]}\\ \cline{2-5} & Formal elicitation & Software description and documentation & General architectural decision & {[S33]}\\ \cline{2-5} & Lightweight top-down capture & Software description and documentation & General architectural decision & {[S33]}\\ \cline{2-5} & Lightweight bottom-up capture & Version control systems & General architectural decision & {[S33]}\\ \cline{2-5} & Unnamed & Issue tracking systems & Architectural description (i.e., Architectural view), General architectural decision, General architectural change & {[S35]}\\ \cline{2-5} & ArchiNet & Version control systems & Architectural change (i.e., Perfective change, Corrective change, Adaptive change, Preventative change) & {[S40]}\\ \cline{2-5} & Unnamed & Mixed artifacts (Version control systems, Developer mailing lists) & Architectural change (i.e., Perfective change, Corrective change, Adaptive change, Preventative change) & {[S41]}\\ \cline{2-5} & Unnamed &
Software description and documentation & System requirement (i.e., Quality attributes) & {[S42]}\\ \cline{2-5} & Unnamed & Mixed artifacts (Q\&A sites, Issue tracking systems, Chat messages) & General architectural information & {[S48]}\\ \cline{2-5} & Unnamed & Version control systems & General architectural information & {[S49]}\\ \cline{2-5} & Unnamed & Software description and documentation & System requirement (i.e., Functional requirements) & {[S55]}\\ \cline{2-5} & Unnamed & Code search engine & Architectural solution (i.e., Architectural tactics) & {[S56]}\\ \cline{2-5} & Unnamed & Version control systems & Architectural solution (i.e., Architectural tactics) & {[S57]}\\ \cline{2-5} & IMEAV & Software description and documentation & General architectural information & {[S62]}\\ \cline{2-5} & NFR2AC & Software description and documentation & Architectural description (i.e., Architectural concerns), System requirement (i.e., Quality attributes) & {[S66]}\\ \cline{2-5} & Unnamed & Version control systems & Architectural technical debt (i.e., Architectural smells) & {[S71]}\\ \cline{2-5} & SOMAD & Version control systems & Architectural technical debt (i.e., Architectural anti-patterns) & {[S72]}\\ \cline{2-5} & Unnamed & App Stores & System requirement (i.e., Quality attributes, Functional requirements) & {[S76]}\\ \cline{2-5} & Unnamed & Q\&A sites & General architectural information & {[S79]}\\ \cline{1-5} \multirow{5}{12em}[14pt]{Architectural information clustering (9 approaches)} & Unnamed & Version control systems & Architectural solution (i.e., Architectural patterns), Architectural technical debt (i.e., Architectural compliance issues) & {[S34]}\\ \cline{2-5} & POPCON & Version control systems & Architectural description (i.e., Architectural models), Architectural technical debt (i.e., Architectural compliance issues) & {[S36]}\\ \cline{2-5} & Unnamed & Mixed artifacts (Version control systems, Issue tracking systems) & Architectural technical debt
(i.e., Architectural anti-patterns), Architectural description (i.e., Architectural models, Architectural views) & {[S37]}\\ \cline{2-5} & ARCADE & Version control systems & Architectural description (i.e., Architectural views, Architectural models), General architectural change & {[S46]}\\ \cline{2-5} & RecoverAr & Mixed artifacts (Version control systems, Issue tracking systems) & Architectural description (i.e., Architectural views, Architectural models, Architectural rationale), General architectural decision, General architectural change & {[S47]}\\ \cline{2-5} & Architecture Mining & Version control systems & Architectural description (i.e., Architectural models, System of interest) & {[S51]}\\ \cline{2-5} & Unnamed & Mixed artifacts (Version control systems, Issue tracking systems) & Architectural description (i.e., Architectural models, Architectural views), Architectural technical debt (i.e., Architectural anti-patterns) & {[S64]}\\ \cline{2-5} & R3 & Version control systems & Architectural description (i.e., Architectural models, Architectural views) & {[S65]}\\ \cline{2-5} & ROMANTIC & Version control systems & Architectural description (i.e., Architectural models) & {[S75]}\\ \cline{1-5} \multirow{5}{12em}[14pt]{Architectural information retrieval (9 approaches)} & ArchEngine & Version control systems & Architectural solution (i.e., Architectural tactics) & {[S13]}\\ \cline{2-5} & Unnamed & Wiki, Q\&A sites & General architectural information & {[S14]}\\ \cline{2-5} & SAK-SIR & Mixed artifacts (Technical blogs and tutorials, Online books, research articles, and white papers) & General architectural information & {[S32]}\\ \cline{2-5} & Unnamed & Q\&A sites & Architectural description (i.e., Architectural rationale), General architectural decision & {[S38]}\\ \cline{2-5} & Unnamed & Software description and documentation & General architectural information & {[S44]}\\ \cline{2-5} & CAKI & Mixed artifacts (Wiki, Q\&A sites, Technical blogs and tutorials, etc.)
& General architectural information & {[S53]}\\ \cline{2-5} & Ontobrowse & Version control systems & General architectural information & {[S60]}\\ \cline{2-5} & Unnamed & Version control systems & Architectural description (i.e., Architectural models), Design relationship (i.e., Component-Component relationships) & {[S67]}\\ \cline{2-5} & Unnamed & Version control systems & Design relationship (i.e., Component-Component relationships) & {[S74]}\\ \cline{1-5} \end{longtable} \end{landscape} \begin{landscape} \begin{longtable}{p{14em}p{7em}p{10em}p{19em}p{4em}} \caption{Summarization of tasks, semi-automatic approaches, used sources, and mined architectural information from the selected studies} \label{SammazationOfSemi-Autom_Approaches} \\\hline \textbf{Task} & \textbf{Semi-automatic approaches} & \textbf{Used source} & \textbf{Mined architectural information} & \textbf{Studies} \\\hline \endfirsthead \multicolumn{5}{c}% {{\bfseries }}\\ \endhead \multicolumn{5}{r}{{}} \\ \endfoot \hline \hline \endlastfoot \multirow{5}{13em}[14pt]{Architectural information classification (10 approaches)} & Unnamed & Issue tracking systems & System requirement (i.e., Quality attribute, Functional requirement), General architectural decision, Architectural description (i.e., Architectural rationale), General architectural change, Architectural solution (i.e., Architectural pattern, Architectural tactic) & {[S1]}\\ \cline{2-5} & Unnamed & Mixed artifacts (Version control systems, Q\&A sites, Technical blogs and tutorials, Online books, research articles, and white papers, Presentations and videos, etc.)
& General architectural information & {[S2]}\\ \cline{2-5} & Unnamed & Q\&A sites & General architectural information & {[S4]} \\ \cline{2-5} & Unnamed & Q\&A sites & Design relationship (i.e., Architectural tactic-Quality attribute relationship) & {[S6]} \\ \cline{2-5} & Unnamed & Version control systems & Design relationship (i.e., Design pattern-Architectural tactic relationship) & {[S21]} \\ \cline{2-5} & TREx & Mixed artifacts (Chat messages, Wiki, etc.) & Architectural description (i.e., Architectural rationale) & {[S24]}\\ \cline{2-5} & Unnamed & Software description and documentation & Design relationship (i.e., Architectural pattern-Quality attribute relationship) & {[S31]}\\ \cline{2-5} & Unnamed & Mixed artifacts (Wiki, Technical blogs and tutorials, etc.) & Architectural description (i.e., Architectural view) & {[S52]} \\ \cline{2-5} & LISA & Mixed artifacts (Issue tracking systems, Software description and documentation) & Architectural description (i.e., Architectural model, Architectural view) & {[S54]} \\ \cline{2-5} & Keecle & Version control systems & Architectural description (i.e., Architectural view), Design relationship (i.e., Component-Component relationship) & {[S59]}\\ \cline{1-5} \multirow{5}{12em}[14pt]{Architectural information clustering (9 approaches)} & Unnamed & Mixed artifacts (Version control systems, Wiki, Q\&A sites) & Architectural solution (i.e., Framework for addressing scalability, Architectural tactics), General architectural decision, Architectural description (i.e., System of interest, Architectural rationale) & {[S26]} \\ \cline{2-5} & Unnamed & Version control systems & Architectural description (i.e., Architectural model, System of interest), Architectural technical debt (i.e., Architectural smell) & {[S29]} \\ \cline{2-5} & MicroART & Version control systems & Architectural description (i.e., Architectural model, Architectural view, System of interest) & {[S39]} \\ \cline{2-5} & Unnamed & Software description and documentation &
System requirement (i.e., Functional requirement) & {[S43]} \\ \cline{2-5} & Unnamed & Q\&A sites & Design relationship (i.e., Architectural pattern-Architectural pattern relationship), Architectural solution (i.e., Architectural pattern) & {[S63]} \\ \cline{2-5} & Unnamed & Version control systems & Architectural description (i.e., Architectural model, Architectural view) & {[S68]} \\ \cline{2-5} & Unnamed & Version control systems & General architectural information & {[S69]}\\ \cline{2-5} & Unnamed & Version control systems & Architectural description (i.e., Architectural model), Architectural technical debt (i.e., Architectural compliance issue) & {[S73]}\\ \cline{2-5} & Unnamed & Version control systems & General architectural change & {[S77]}\\ \cline{1-5} \multirow{5}{13em}[28pt]{Architectural information retrieval (7 approaches)} & Unnamed & Q\&A sites & General architectural information & {[S3]}\\ \cline{2-5} & Unnamed & Mixed artifacts (Chat messages, Wiki, etc.) & General architectural information & {[S28]}\\ \cline{2-5} & MaRK-II & Software description and documentation & System requirement (i.e., Functional requirement) & {[S45]} \\ \cline{2-5} & KaitoroCap & Software description and documentation & General architectural information & {[S50]} \\ \cline{2-5} & Semantic-based & Mixed artifacts (Online books, research articles, and white papers) & Architectural solution (i.e., Architectural pattern, Architectural tactic), System requirement (i.e., Quality attributes) & {[S58]} \\ \cline{2-5} & Unnamed & Wiki & General architectural information & {[S61]} \\ \cline{2-5} & Unnamed & Software description and documentation & General architectural information & {[S70]} \\ \cline{1-5} \end{longtable} \end{landscape} \begin{landscape} \begin{longtable}{p{14em}p{7em}p{10em}p{19em}p{4em}} \caption{Summarization of tasks, manual approaches, used sources, and mined architectural information from the selected studies} \label{SammazationOfManual_Approaches} \\\hline \textbf{Task} &
\textbf{Manual approaches} & \textbf{Used source} & \textbf{Mined architectural information} & \textbf{Studies} \\\hline \endfirsthead \multicolumn{5}{c}% {{\bfseries }}\\ \endhead \multicolumn{5}{r}{{}} \\ \endfoot \hline \hline \endlastfoot \multirow{5}{13em}[14pt]{Architectural information classification (5 approaches)} & Qualitative data analysis & Mixed artifacts (Version control systems, Q\&A sites, ROS answers sites) & Architectural solution (i.e., Architectural tactic) & {[S7]} \\ \cline{2-5} & Qualitative data analysis & Version control systems & System requirement (i.e., Quality attribute), General architectural solution, General architectural decision, Architectural description (i.e., Architectural view, Architectural concern) & {[S8]} \\ \cline{2-5} & Qualitative data analysis & Mixed artifacts (Version control systems, Q\&A sites, ROS answers sites) & Architectural solution (i.e., Architectural tactic) & {[S9]} \\ \cline{2-5} & Qualitative data analysis & Developer mailing lists & System requirement (i.e., Quality attribute), Architectural description (i.e., Architectural model, Architectural rationale, Architectural concern, System of interest) & {[S11]} \\ \cline{2-5} & Qualitative data analysis & Online books, research articles, and white papers & Design relationship (i.e., Architectural pattern-Architectural pattern relationship) & {[S78]} \\ \cline{1-5} \end{longtable} \end{landscape} \subsubsection{Tools}\label{Tools} Various tools have been proposed to facilitate searching, extracting, and mining architectural information, making it more convenient for developers to speed up the development process. We explored the selected studies and collected the tools that support mining architectural information. In total, 52 tools were collected from the selected studies. We categorized these tools into two categories: \textit{general tools} 30.8\% (16 out of 52) and \textit{dedicated tools} 69.2\% (36 out of 52).
General tools are provided to analyze, process, and mine various kinds of information, not limited to architectural information. For example, Stanford Parser is a general NLP tool frequently used in textual information preprocessing tasks, and Gokyer \textit{et al}. {[S66]} used Stanford Parser for analyzing the grammar of textual architecture information in software description and documentation. In Table \ref{SummarizationOfGeneralTools}, we provide the collected general tools, their descriptions, and the relevant studies in which these general tools were used. Dedicated tools are developed specifically for mining architectural information. We found that dedicated tools have two levels of automation (i.e., automatic and semi-automatic) and have been applied in different types of tasks (e.g., clustering) for mining architectural information from various sources. Thus, similar to what we did when analyzing architectural information mining approaches, we utilized a two-level categorization to categorize the dedicated tools for mining architectural information. At the first level, we divided these tools into two high-level categories based on the level of automation: (1) \textit{automatic} (86.1\%, 31 out of 36 tools) (see Table \ref{SammazationOfAutomaticTools}) and (2) \textit{semi-automatic} (13.9\%, 5 out of 36 tools) (see Table \ref{SammazationOfSemi-automaticTools}). At the second level, we further categorized the dedicated tools into three categories based on different types of tasks in mining architectural information: (1) \textit{architectural information classification} (36.1\%, 13 out of 36 tools), (2) \textit{architectural information clustering} (33.3\%, 12 out of 36 tools), and (3) \textit{architectural information retrieval} (30.6\%, 11 out of 36 tools). Moreover, to ease the application of these tools in practice, we also present the sources that the proposed tools can be applied on.
Detailed information about the dedicated tools is shown in Tables \ref{SammazationOfAutomaticTools} and \ref{SammazationOfSemi-automaticTools}, which list the categories of different tasks in mining architectural information, tool names, used sources, mined architectural information, URL links, and the relevant studies in which these dedicated tools were proposed. Note that the results in the “URL link” column were collected from the selected studies. “Not provided” means that the selected study did not provide a URL link of the proposed tool. \begin{longtable}{p{8em}p{20em}p{8em}p{5em}} \caption{General tools used to mine architectural information} \label{SummarizationOfGeneralTools} \\\hline \textbf{Tool name} & \textbf{Description} & \textbf{URL link} & \textbf{Studies} \\\hline \endfirsthead \multicolumn{3}{c}% {{\bfseries }}\\ \hline \endhead \multicolumn{3}{r}{{}} \\ \endfoot \hline \hline \endlastfoot {Weka} & It is a collection of machine learning algorithms for data mining tasks. It contains tools for data preparation, classification, regression, clustering, association rules mining, and visualization. It has been mainly used in ten selected studies for data preprocessing, textual architectural information classification, and clustering. & \url{https://www.cs.waikato.ac.nz/ml/weka/} & {[S4]} {[S3]} {[S17]} {[S19]} {[S25]} {[S40]} {[S43]} {[S49]} {[S55]} {[S76]} \\ \cline{1-4} {Stanford POS Tagger} & Stanford POS (Part of Speech) tagger is a piece of software that reads text in some language and assigns parts of speech to each word (and other tokens), such as noun, verb, adjective, etc. It has been employed in six selected studies to automatically assign a grammatical label to every word in a sentence of architectural artifacts.
& \url{https://nlp.stanford.edu/software/tagger.shtml} & {[S25]} {[S52]} {[S69]} {[S66]} {[S43]} {[S45]}\\ \cline{1-4} {GATE API} & It is a framework and graphical development environment for building robust natural language processing tools and applications. Four selected studies used GATE API to develop tools for extracting and mining textual architectural information. & \url{https://gate.ac.uk/} & {[S25]} {[S66]} {[S31]} {[S24]} \\ \cline{1-4} {Stanford Parser} & It is a program that works out the grammatical structure of sentences, for instance, which groups of words go together (as “phrases”) and which words are the subject or object of a verb. Probabilistic parsers use knowledge of language gained from hand-parsed sentences to try to produce the most likely analysis of new sentences. It has been used in three selected studies to analyze the grammar of textual architecture artifacts. & \url{https://nlp.stanford.edu/software/lex-parser.shtml} & {[S43]} {[S66]} {[S69]}\\ \cline{1-4} {Understand} & It is a static analysis commercial tool that focuses on source code comprehension, metrics, and standards testing. It is designed to help maintain and understand large amounts of legacy or newly created source code. It has been used in three selected studies for code comprehension by providing architectural dependency graphs from the source code of the projects for further analysis. & \url{https://www.scitools.com/} & {[S26]} {[S64]} {[S37]} \\ \cline{1-4} {Lucene} & It is a Java library providing powerful indexing and search features, as well as spellchecking, hit highlighting and advanced analysis/tokenization capabilities. Three selected studies have used Lucene for developing keyword-based search tools for searching and retrieving architectural information. & \url{https://lucene.apache.org/} & {[S3]} {[S13]} {[S52]} \\ \cline{1-4} {JavaDoc} & It is a documentation generator for generating API documentation in HTML format from Java code.
One selected study utilized the JavaDoc tool to generate documentation from Java code comments in order to extract architectural information from textual artifacts of the programs. & \url{https://docs.oracle.com/en/java/javase/17/javadoc/javadoc.html} & {[S73]} \\ \cline{1-4} {Acacia} & It is a tool for analyzing code written in the C or C++ programming languages. It has been used in one selected study to transform the code of software projects written in C into a Module Dependency Graph (MDG), which represents the structure of the system’s code components and relations for further analysis. & \url{http://www.research.att.com/sw/tools/Acacia} & {[S68]} \\ \cline{1-4} {Chava} & It is a tool for analyzing source code written in the Java programming language. One selected study employed Chava to analyze the source code of software projects' modules/components (written in Java) in order to learn more about the dependencies between these modules/components for further analysis. & \url{http://www.research.att.com/ciao} & {[S68]} \\ \cline{1-4} {Bunch} & It is a tool developed in the Java programming language that can be used to implement various software clustering algorithms. It has been used in one selected study to automatically partition the structures of software systems in order to identify the clusters of architecturally relevant entities (e.g., modules or components) contained in source code. & \url{http://serg.cs.drexel.edu/} & {[S68]} \\ \cline{1-4} {LingPipe} & It is used for processing text using computational linguistics. LingPipe can be used for tasks such as finding the names of people, organizations, or locations in textual artifacts, and automatically classifying textual artifacts into categories. It has been employed in one selected study to process textual architecture artifacts based on computational linguistics.
& \url{http://www.alias-i.com/lingpipe/} & {[S66]} \\ \cline{1-4} {Tesseract} & It is an open-source OCR engine that can process different types of images and identify the text in them. It has been utilized in one selected study to analyze graphical documents containing architectural diagrams in order to recover architectural information from those diagrams. & \url{https://github.com/tesseract-ocr/tesseract} & {[S62]} \\ \cline{1-4} {Cytoscape} & It is an open-source software platform that is designed to provide a basic set of features for data integration, analysis, and visualization. One selected study has used Cytoscape for various purposes, such as generating architectural dependency diagrams from the source code of the projects for further analysis. & \url{https://cytoscape.org/what_is_cytoscape.html} & {[S26]} \\ \cline{1-4} {Wireshark} & It is a widely used network protocol analyzer tool. It lets you see what’s happening on your network at a microscopic level. One selected study employed this tool to sniff and analyze network packets in the Kubernetes pods that run the application components in order to determine component-to-component interactions that occurred while the application was running. & \url{https://www.wireshark.org/} & {[S29]} \\ \cline{1-4} {Classycle} & Classycle analyzes the static class and package dependencies in Java applications or libraries. One selected study utilized Classycle for finding cyclic dependencies between architecturally significant classes or packages of the applications. & \url{http://classycle.sourceforge.net/} & {[S30]} \\ \cline{1-4} {GitMiner} & It is an advanced search and automation tool for GitHub. This tool aims to facilitate searching for code or code snippets on GitHub through the site’s search page.
It has been used in one selected study to browse through commits in order to identify a commit that removed or added an architectural component in the version history of a software project for the purpose of determining if certain architectural decisions were made in the version history of the projects. & \url{https://github.com/UnkL4b/GitMiner} & {[S18]} \\ \cline{1-4} \end{longtable} \begin{landscape} \begin{longtable}{p{14em}p{6em}p{7em}p{11em}p{8em}p{3em}} \caption{Summarization of tasks, automatic tools, used sources, and mined architectural information from the selected studies} \label{SammazationOfAutomaticTools} \\\hline \textbf{Task} & \textbf{Tool name} & \textbf{Used source} & \textbf{Mined architectural information}& \textbf{URL link} & \textbf{Studies} \\\hline \endfirsthead \multicolumn{6}{c}% {{\bfseries }}\\ \endhead \multicolumn{4}{r}{{}} \\ \endfoot \hline \hline \endlastfoot \multirow{6}{12em}[14pt]{Architectural information clustering (12 tools)} & Arcan & Mixed artifacts (Version control systems, Issue tracking systems) & System requirement (i.e., Quality attribute, Functional requirement), General architectural decision, Architectural description (i.e., Architectural rationale), General architectural change & \url{https://essere.disco.unimib.it/wiki/arcan/} & {[S1]}\\ \cline{2-6} & TypeChef & Version control systems & Component-Component relationship & Not provided & {[S12]} \\ \cline{2-6} & $\mu$Miner & Version control systems & Architectural description (i.e., Architectural model, System of interest), Architectural technical debt (Architectural smell) & \url{https://github.com/di-unipi-socc/microMiner} & {[S29]} \\ \cline{2-6} & ArchLint & Code & Architectural solution (i.e., Architectural pattern), Architectural technical debt (i.e., Architectural compliance issue) & \url{http://aserg.labsoft.dcc.ufmg.br/archlint/} & {[S34]} \\ \cline{2-6} & PopCon & Version control systems & Architectural description (i.e., Architectural model), 
Architectural technical debt (i.e., Architectural compliance issue) & Not provided & {[S36]} \\ \cline{2-6} & Titan & Version control systems & Architectural technical debt (i.e., Architectural anti-pattern), Architectural description (i.e., Architectural model, Architectural view) & Not provided & {[S37]} \\ \cline{2-6} & MicroART & Version control systems & Architectural description (i.e., Architectural model, Architectural view, System of interest) & \url{https://github.com/microart/microART-Tool} & {[S39]} \\ \cline{2-6} & ARCADE-Controller & Code & Architectural description (i.e., Architectural view, Architectural model), General architectural change & Not provided & {[S46]} \\ \cline{2-6} & SOUND & Mixed artifacts (Version control systems, Software description and documentation) & General architectural information & Not provided & {[S69]} \\ \cline{2-6} & Arcan & Version control systems & Architectural technical debt (i.e., Architectural smell) & \url{https://essere.disco.unimib.it/wiki/arcan/} & {[S71]} \\ \cline{2-6} & SIC & Version control systems & Architectural model, Architectural technical debt (i.e., Architectural compliance issue) & Not provided & {[S73]} \\ \cline{2-6} & ArchSlice & Version control systems & General architectural change & \url{https://github.com/akm523/archslice} & {[S77]} \\ \cline{1-6} \multirow{6}{13em}[14pt]{Architectural information classification (11 tools)} & QuABaseBD & Mixed artifacts (Technical blogs and tutorials, Q\&A sites, etc.) & Architectural solution (i.e., Architectural tactics) & Not provided & {[S15]} \\ \cline{2-6} & ArchiKCo & Chat messages & General architectural information & Not provided & {[S16]} \\ \cline{2-6} & ADeX & Mixed artifacts (Software description and documentation, Issue tracking systems, etc.)
& Architectural decision (i.e., Structural, Behavioral, Ban or non-existence decisions) & \url{https://amelietor-9f8c3.firebaseapp.com} & {[S19]} \\ \cline{2-6} & AAKET & Mixed artifacts (Chat messages, Software description and documentation) & General architectural information & Not provided & {[S23]} \\ \cline{2-6} & Recommender system & Version control systems & Design relationship (i.e., Architectural tactic-Context relationship) & Not provided & {[S27]} \\ \cline{2-6} & DARCY & Version control systems & Architectural technical debt (i.e., Architectural compliance issue), Architectural description (i.e., Architectural model) & \url{https://sites.google.com/view/darcy-project/home} & {[S30]} \\ \cline{2-6} & DecisionStickies & Software description and documentation & General architectural decision & Not provided & {[S33]} \\ \cline{2-6} & Eclipse plug-in & Version control systems & General architectural decision & Not provided & {[S33]} \\ \cline{2-6} & Formal elicitation & Software description and documentation & General architectural decision & Not provided & {[S33]} \\ \cline{2-6} & ArcheR & Requirements document & System requirement (i.e., Functional requirement) & Not provided & {[S55]} \\ \cline{2-6} & NFR2AC & Software description and documentation & Architectural description (i.e., Architectural concern), System requirement (i.e., Quality attribute) & Not provided & {[S66]} \\ \cline{1-6} \multirow{6}{12em}[14pt]{Architectural information retrieval (8 tools)} & Architectural Tactic Recommender System & Version control systems & Architectural solution (i.e., Architectural tactic) & \url{https://design.se.rit.edu/ArchEngine/} & {[S13]} \\ \cline{2-6} & ADSS & Mixed artifacts (Wiki, Q\&A sites, etc.)
& General architectural information & Not provided & {[S14]} \\ \cline{2-6} & Unakite & Q\&A sites & Architectural description (i.e., Architectural rationale), General architectural decision & \url{https://unakite.info} & {[S38]} \\ \cline{2-6} & AK-Finder & Software description and documentation & General architectural information & \url{http://softcode.nl/AK-Finder/index.php} & {[S44]} \\ \cline{2-6} & CAKI system & Mixed artifacts (Version control systems, Wiki, Q\&A sites, Online books, research articles, and white papers, etc.) & General architectural information & Not provided & {[S53]} \\ \cline{2-6} & CASE & Mixed artifacts (Wiki, Q\&A sites, etc.) & Architectural solution (i.e., Architectural pattern, Architectural tactic), System requirement (i.e., Quality attributes) & Not provided & {[S58]} \\ \cline{2-6} & Ontobrowse & Mixed artifacts (Version control systems, Issue tracking systems) & General architectural information & Not provided & {[S60]} \\ \cline{2-6} & MaRK & Software description and documentation & System requirement (i.e., Functional requirement) & \url{https://github.com/lawenliu/MaRK} & {[S45]} \\ \cline{2-6} \end{longtable} \end{landscape} \begin{landscape} \begin{longtable}{p{14em}p{6em}p{7em}p{11em}p{8em}p{3em}} \caption{Summarization of tasks, semi-automatic tools, used sources, and mined architectural information from the selected studies} \label{SammazationOfSemi-automaticTools} \\\hline \textbf{Task} & \textbf{Tool name} & \textbf{Used source} & \textbf{Mined architectural information} & \textbf{URL link} & \textbf{Studies} \\\hline \endfirsthead \multicolumn{6}{c}% {{\bfseries }}\\ \endhead \multicolumn{5}{r}{{}} \\ \endfoot \hline \hline \endlastfoot \multirow{4}{12em}[10pt]{Architectural information retrieval (3 tools)} & HyperOnto & Wiki & General architectural information & Not provided & {[S61]} \\ \cline{2-6} & KaitoroCap & Software description and documentation & General architectural information & Not provided & {[S50]} \\ \cline{2-6} & Archie & Code
search engines & System requirement (i.e., Quality attribute), Architectural solution (i.e., Architectural tactic) & \url{https://github.com/ArchieProject/Archie-Smart-IDE} & {[S56]}{[S57]}\\ \cline{1-6} \multirow{6}{12em}[14pt]{Architectural information classification (2 tools)} & Toeska & Mixed artifacts (Chat messages, Wiki, etc.) & Architectural description (i.e., Architectural rationale) & Not provided & {[S24]} \\ \cline{2-6} & LISA toolkit & Mixed artifacts (Issue tracking system, Software description and documentation) & Architectural description (i.e., Architectural model, Architectural view) & Not provided & {[S54]} \\ \cline{1-6} \end{longtable} \end{landscape} \begin{tcolorbox}[colback=gray!5!white,colframe=gray!75!black,title=Key Findings of RQ4] \textbf{Finding 5}: We gathered 81 approaches for mining architectural information, of which 61.7\% (i.e., 50 out of 81) are automatic approaches, 32.1\% (i.e., 26 out of 81) are semi-automatic approaches, and 6.2\% (i.e., 5 out of 81) are manual approaches. Architectural information classification is the most common task supported by the approaches (59.3\%, 48 out of 81 approaches) in mining architectural information. \textbf{Finding 6}: We collected 52 tools for mining architectural information, of which 30.8\% (i.e., 16 out of 52) are general tools and 69.2\% (i.e., 36 out of 52) are dedicated tools. Weka and the Stanford POS Tagger are the most commonly used general tools in mining architectural information. Concerning dedicated tools, architectural information classification is the most common task supported by the dedicated tools (36.1\%, 13 out of 36) in mining architectural information. \end{tcolorbox} \subsection{Results of RQ5: Challenges}\label{ResultsOfRQ5} The challenges refer to the obstacles or issues encountered in mining architectural information.
As mentioned in Section \ref{DataSynthesis}, we used open coding and constant comparison to analyze the extracted data item (i.e., D10, see Table \ref{DataExtraction}) and identified the challenges faced in mining architectural information. Our data synthesis yielded four categories, of which one was categorized as “Others” (referring to codes that do not fit into the already generated subcategories), and eight subcategories. Table \ref{Challenges} shows these categories and subcategories along with the relevant studies. \begin{longtable}{p{8em}p{15em}p{12em}p{2em}} \caption{Categorization of the challenges encountered in mining architectural information} \label{Challenges} \\\hline \multicolumn{1}{l}{\textbf{Type}} & \multicolumn{1}{l}{\textbf{Subtype}} & \multicolumn{1}{l}{\textbf{Studies}} & \multicolumn{1}{l}{\textbf{Count}} \\\hline \endfirsthead \multicolumn{4}{c}% {{\bfseries \tablename\ \thetable{} -- continued from previous page}} \\\hline \multicolumn{1}{l}{\textbf{Type}} & \multicolumn{1}{l}{\textbf{Subtype}} & \multicolumn{1}{l}{\textbf{Studies}} & \multicolumn{1}{l}{\textbf{Count}} \\\hline \endhead \hline \multicolumn{4}{r}{{Continued on next page}} \\ \endfoot \hline \hline \endlastfoot \multirow{4}{6em}{Architectural element description (17)} & Vagueness or ambiguity in architectural element description & {[S1]} {[S15]} {[S18]} {[S20]} {[S40]} {[S41]} {[S42]} {[S45]} {[S77]} & \multicolumn{1}{c} {9} \\ \cline{2-4} & Incompleteness in architectural element description & {[S1]} {[S6]} {[S10]} {[S15]} {[S40]} {[S48]} & \multicolumn{1}{c} {6} \\ \cline{2-4} & Redundancy in architectural element description & {[S40]} {[S45]} & \multicolumn{1}{c} {2} \\ \cline{1-4} \multirow{3}{7em}{Architectural dataset (17)} & Size of architectural dataset for ML model training & {[S4]} {[S13]} {[S15]} {[S25]} {[S40]} {[S41]} {[S50]} {[S55]} {[S64]} {[S72]} & \multicolumn{1}{c} {10} \\ \cline{2-4} & Quality of architectural dataset for ML model training
& {[S1]} {[S6]} {[S13]} {[S14]} {[S15]} {[S24]} {[S40]} & \multicolumn{1}{c} {7} \\ \cline{1-4} \multirow{3}{7em}{Approach and tool (13)} & Applicability of approaches to heterogeneous architectural data & {[S17]} {[S24]} {[S25]} {[S27]} {[S32]} {[S48]} {[S64]} & \multicolumn{1}{c} {7} \\ \cline{2-4} & Limitation of approaches and tools & {[S14]} {[S27]} {[S59]} {[S64]} & \multicolumn{1}{c} {4} \\ \cline{2-4} & Feature selection & {[S5]}{[S20]} & \multicolumn{1}{c} {2} \\ \cline{1-4} \multirow{3}{7em}[13pt]{Others (10)} & & {[S10]} {[S13]} {[S14]} {[S28]} {[S34]} {[S43]} {[S45]} {[S48]} {[S55]} {[S59]} & \multicolumn{1}{c} {10} \\ \cline{1-4} \end{longtable} \textit{\textbf{Architectural element description}}: As shown in Figure \ref{SourceMinedFigure} (the results of RQ2 in Section \ref{ResultsOfRQ2}), the selected studies used diverse sources for mining architectural information. However, the quality of architectural element descriptions in these sources is critical for architectural information mining approaches and tools. 21.5\% (17 out of 79) of the studies reported challenges related to architectural element descriptions encountered when mining architectural information. We further categorized these challenges into three subcategories (see Table \ref{Challenges}). \textit{Vagueness or ambiguity in architectural element description}: Architectural elements, including architectural patterns, architectural components, and architectural rationale (e.g., benefits and drawbacks of architectural solutions), are not clearly described in certain sources, which poses challenges for architectural information mining approaches (e.g., manual or automatic approaches) and tools. 11.4\% (9 out of 79) of the studies reported the challenge related to vagueness or ambiguity in architectural element descriptions in different sources of software repositories. For example, in {[S1]}, Soliman \textit{et al}.
proposed a semi-automatic approach for mining AK (including architectural components, architectural rationale, and system requirements) from issue tracking systems (e.g., Jira). However, the authors argued that it is quite challenging to accurately identify and mine AK concepts in issue tracking systems due to, for example, vagueness and implicitness in architectural concept descriptions in those sources. Specifically, the authors stated that architectural concepts described vaguely or implicitly are difficult to identify and mine accurately, since their meanings are unclear, may not be determinable from their contexts, and a single description may even carry more than one meaning. \textit{Incompleteness in architectural element description}: Some architectural elements are incompletely described in certain sources, and this may cause various problems (e.g., lower accuracy of ML/DL-based approaches) in mining architectural information. 7.6\% (6 out of 79) of the studies reported the problem related to incompleteness in architectural element descriptions in various sources of software repositories. For example, Karthik and Medvidovic {[S20]} proposed a DL-based approach to automatically mine inter-component relationships from Stack Overflow posts. However, the authors reported that due to the incompleteness in architectural component descriptions in Stack Overflow posts, the approach can include irrelevant architectural components and exclude relevant information (such as version information of architectural components). Furthermore, Ven \textit{et al}. {[S18]} claimed that it is difficult to identify and mine architectural decisions made in commit messages, because such decisions are sometimes described in a cryptic, short, or incomplete manner.
\textit{Redundancy in architectural element description}: Redundant descriptions in architectural datasets induce redundant features. Empirical evidence from the feature selection literature shows that, along with irrelevant features, redundant features also affect the performance of Deep Learning (DL)/ML algorithms and thus should be eliminated as well \cite{yu2003feature}. Moreover, redundant descriptions lead to performing the same operations repeatedly, which is labor-intensive and time-consuming in mining architectural information. 2.5\% (2 out of 79) of the studies mentioned the challenge related to redundancy in architectural element descriptions in various sources of software repositories. For example, in {[S45]}, Lian \textit{et al}. proposed an ML-based approach built on NLP and clustering techniques for assisting architects in identifying requirements knowledge on components from a collection of domain documents. However, the authors reported challenges related to redundant descriptions of various architectural elements (e.g., system components and features) in those documents. \textbf{\textit{Architectural dataset}} is a critical asset for several architectural information mining approaches and tools to extract real and significant architectural information from the datasets. However, there are no public and suitable datasets for certain architecture design tasks. The architectural dataset is reported as a challenge in 21.5\% (17 out of 79) of the studies. We further categorized the challenges in the architectural dataset category into two subcategories (see Table \ref{Challenges}). \textit{Size of the architectural dataset for ML/DL model training}: The size of the dataset matters, specifically the size of the dataset used to train DL- or ML-based approaches for mining architectural information, since it can influence the performance (e.g., accuracy rate) of these approaches.
12.7\% (10 out of 79) of the studies stated the challenge related to the size of the architectural dataset in mining architectural information. For example, Gilson \textit{et al}. {[S42]} proposed an ML-based approach for extracting and mining quality attributes from user stories for early architecture decision making. However, the authors mentioned that the approach did not perform well due to the size of the dataset (i.e., the dataset is too small) used in the training of the ML models, and thus the approach needs to be improved with larger datasets. \textit{Quality of the architectural dataset for ML/DL model training}: Architectural datasets vary much in quality, including bias, noise, imbalance, and mislabelling. Many factors may impact the quality of architectural datasets, such as the source of the dataset, whether the dataset has been preprocessed, and the ground truth in the dataset \cite{yang2020survey}, and any of these factors will influence the performance of the approaches or tools used. The quality of the architectural dataset is reported as a challenge in 8.9\% (7 out of 79) of the selected studies. In {[S24]}, L\'{o}pez \textit{et al}. proposed an approach named Toeska Rationale Extraction (TREx) for extracting, recovering, and exploring architectural rationale information from various sources, such as Wikis, meeting notes, and emails. However, the authors stated that the effectiveness (e.g., accuracy) of the approach in extracting data (e.g., architectural rationale) is heavily dependent on the quality of the available textual architectural artifacts. \textbf{\textit{Approach and tool}} denotes the challenges related to approaches or tools used in mining architectural information, and these challenges were reported in 16.5\% (13 out of 79) of the studies. \textit{Applicability of approaches to heterogeneous architectural data}: As pointed out by Canfora \textit{et al}.
\cite{canfora2015defect}, most software projects are heterogeneous (e.g., in terms of size, domain specifications, architecture patterns, programming languages, and software models). Therefore, due to this heterogeneity, specific architectural mining approaches and tools can hardly be generalized or applied to mine architectural information from various architectural data. 8.9\% (7 out of 79) of the studies reported the problem related to the applicability of the proposed approaches to different architectural data. For example, in {[S48]}, Mahadi \textit{et al}. proposed an ML classification-based approach for mining design discussions in several sources (including VCS (e.g., GitHub), issue tracking systems (e.g., Jira), and Q\&A sites (e.g., Stack Overflow)). However, the authors reported the challenge of poor performance (i.e., accuracy) when applying this ML classifier trained on an architectural dataset from one source (e.g., GitHub) to a different architectural dataset from another source (e.g., Stack Overflow). \textit{Limitation of approaches and tools}: The results highlight that 5.1\% (4 out of 79) of the studies considered the limitation of architectural information mining approaches and tools as a serious challenge. For example, in {[S27]}, Gopalakrishnan \textit{et al}. proposed an approach that mines architectural tactics from project source code. However, this approach is limited to nine architectural tactics (such as audit trail, authentication, and heartbeat). To mine additional architectural tactics, the approach will need to be extended, which will require additional training based on source code that implements other tactics. \textit{Feature selection}: As more candidate features are introduced to better represent various research domains, studies have extracted and used a greater variety of features to train architectural information mining approaches.
However, not all of the used features are beneficial to improving the performance (e.g., accuracy) of architectural information mining approaches. First, some interdependent features do not fit well when applied to architectural information mining approaches. Second, using too many features may result in overfitting \cite{yang2022predictive}. Therefore, how to select the most suitable features for mining architectural information has become a critical challenge. In {[S20]}, Karthik and Medvidovic proposed an approach that uses noun chunking to identify inter-component relationships from the Stack Exchange website. However, this approach can exclude some relevant information from the noun chunk, and sometimes words unrelated to architectural components get included as part of the noun chunk. The authors stated that a supervised ML approach for the Named Entity Recognition problem could be used to improve the accuracy. \textbf{Others} refer to other factors (such as time and labor constraints) that could influence the effectiveness of architectural information mining approaches. For example, in {[S43]}, Casamayor \textit{et al}. presented a text mining approach for analyzing textual requirements in order to obtain a functional decomposition of a system into responsibilities, which can guide the design of the architecture. However, the authors reported that applying this approach required considerable time and effort to manually annotate the textual data. \begin{tcolorbox}[colback=gray!5!white,colframe=gray!75!black,title=Key Findings of RQ5] \textbf{Finding 7}: 4 types and 8 subtypes of challenges were identified in mining architectural information. \textbf{Finding 8}: The challenges in mining architectural information are mainly derived from the descriptions of architectural elements, wherein vagueness or ambiguity in architectural element description is a prevalent challenge.
\textbf{Finding 9}: There exists only a limited number of quality datasets of appropriate size available for architectural information mining approaches. \textbf{Finding 10}: There is still a lack of effective approaches and tools applicable to architectural data from diverse sources when mining architectural information. \end{tcolorbox} \section{Discussion}\label{Discussion} In this section, we revisit the findings of this SMS by interpreting the results in Section \ref{AnalysisResults} and discussing their implications for researchers and practitioners in Section \ref{ImplicationsResearchersAndPractitioners}. \subsection{Analysis of results}\label{AnalysisResults} \subsubsection{Mined architectural information could support architecting activities}\label{Discussion_RQ1 and RQ3} The results of RQ1 (see Section \ref{ResultsOfRQ1}) reveal that various categories and subcategories of architectural information have been mined in software development, such as \textit{architectural description}, \textit{architectural decision}, \textit{system requirement}, and \textit{design relationship} (see Table \ref{minedArchitecturalInfo}). Moreover, the results show that \textit{architectural description} is the most mined category of architectural information. The potential reason could be that an architectural description serves as a blueprint for system development, especially for effectively and successfully developing large and complex systems \cite{clements2003documenting}. In addition, the information in an architectural description, such as architectural views, which describe a system from multiple perspectives (e.g., process view and development view), enables the architecture to be communicated to, and understood by, different stakeholders. It also allows stakeholders to verify whether the architecture will address their concerns \cite{6129467}.
The core benefits of mining architectural descriptions are reducing architectural knowledge vaporization and speeding up the maintenance of software systems \cite{borrego2019towards}. These benefits can be achieved by capturing, sharing, and reusing architectural information (such as architectural views, architectural models, and architectural rationale) in architectural descriptions, thus reducing the time that new developers need to familiarize themselves with an existing system and potentially speeding up the development. \textit{Architectural decision} is the second most mined category of architectural information. The reason could be that researchers and practitioners consider architectural decisions as one of the indispensable ingredients that can drive the architecture design of a system \cite{jansen2005SoftArch}. Architectural decisions involve important choices on core components and connectors, and the overall software-intensive system, to satisfy and balance stakeholders’ concerns \cite{zimmermann2010architectural}. Mining and capturing architectural decisions promises significant practical benefits: lower costs of architectural change, increased reuse of architectures, and less erosion of architectural design. These benefits are obtained by offering architects access to the decisions that led to the architecture, rather than only the resulting outcomes and artifacts from those decisions \cite{jansen2005SoftArch}. These practical benefits have therefore encouraged researchers to work on the topic of mining architectural decisions from development artifacts in order to support the development process. The mined architectural information could support a variety of architecting activities (see Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}). Each architecting activity may require specific architectural information for it to be carried out effectively.
For example, during architecture evaluation, architectural information such as the benefits and drawbacks of certain architectural solutions (e.g., architectural patterns and tactics) can help architects evaluate the architectural solutions (e.g., choosing suitable architectural patterns according to the positively and negatively affected quality attributes) in specific application domains. The results of RQ3 show that 12 architecting activities can be supported by the mined architectural information, such as \textit{architecture understanding}, \textit{architecture recovery}, and \textit{architecture maintenance and evolution} (see Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}), where \textit{architecture understanding} is the most supported activity (i.e., 50.6\%, 40 out of 79). The mined architectural information that supports AU includes architectural description, architectural decision, system requirement, and architectural solution (see Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}). The architecture of a system should be well understood before any corrections or changes can be applied to it. AU is a prerequisite for and supportive of other architecting activities (such as AA, AI, AME, and AE). Architectural elements, such as architectural decisions, are vital in achieving the desired software quality, and a good understanding of these architectural elements and their relationships is important in supporting AU \cite{stevanetic2014exploring}. Understanding the architectural decisions that make up an architecture is also a key factor for maintaining the software system \cite{shahin2014architectural}. For example, AU enables maintainers to gain a comprehensive overview of the complex dependency relationships between architectural components and connectors.
Therefore, these facts drive researchers and practitioners to explore various sources of software repositories and mine several types of architectural information in order to facilitate AU during the architecting process. Moreover, architectural information is sometimes tacit and not documented at all \cite{ding2014open}. With the increase in the size and complexity of systems, architectural information management becomes more challenging, and stakeholders need to find the right information and recover it efficiently. Searching for and recovering relevant architectural information from various artifacts is a challenging task. With a large volume of artifacts, applying mining approaches can greatly facilitate the task of recovering the desired architectural information. These facts motivate researchers and practitioners to turn their attention to developing architectural information mining approaches and tools for recovering architectural information from diverse artifacts contained in software repositories. Thus, in this SMS, we found that \textit{architecture recovery} is the second most supported architecting activity (32.9\%, 26 out of 79). Architecture can be discovered from various artifacts, such as source code, software documentation, physical file organizations, etc. \cite{ducasse2009software}. During our data synthesis, we found that most of the studies supporting AR recovered architectural information, such as architectural description (e.g., architectural views, models, and rationale), from source code (e.g., {[S39]}{[S47]}). \textbf{The relationships between the mined architectural information (results of RQ1) and supported architecting activities (results of RQ3)}: In Table \ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}, we show the relationships between mined architectural information and supported architecting activities through the results of RQ1 (see Section \ref{ResultsOfRQ1}) and RQ3 (see Section \ref{ResultsOfRQ3}).
This table can help researchers and practitioners know which architecting activity (e.g., architecture maintenance and evolution) can be supported by what types of mined architectural information (e.g., architectural technical debt) during the development. \subsubsection{Various approaches and tools are used to mine architectural information from diverse sources}\label{Discussion_RQ4 and RQ2} The results of RQ4 show that various approaches and tools (see Section \ref{Approach} and Section \ref{Tools}) have been proposed and employed to mine architectural information from diverse sources (results of RQ2, see Figure \ref{SourceMinedFigure}), for example, \textit{VCS} (e.g., GitHub), \textit{Online Q\&A sites} (e.g., Stack Overflow), \textit{Software description and documentation}, \textit{Wiki}, \textit{Issue tracking systems}, and \textit{Chat messages}. As shown in Figure \ref{SourceMinedFigure}, \textit{VCS} (56.9\%, 45 out of 79 studies) is the most frequently used source for mining architectural information, followed by \textit{Online Q\&A sites} (26.6\%, 21 out of 79 studies). One reason is that these two sources are pervasively used in software development. For example, tens of millions of software practitioners use VCSs such as GitHub, committing source code and updating related artifacts there, and \textit{Online Q\&A sites} (e.g., Stack Overflow) are among the most visited online developer communities for searching and sharing software development related information, including architectural information \cite{de2022developers}. These facts lead researchers and practitioners to rely first on these two sources to find valuable data when mining architectural information.
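To give a flavor of what mining architectural information from a VCS can look like, the following toy sketch flags potentially architecture-related commit messages with a simple keyword heuristic. It is purely illustrative and not one of the surveyed tools; the keyword list and example commit messages are hypothetical, and the surveyed studies typically train ML/DL classifiers rather than matching keywords.

```python
# Toy illustration only -- NOT a surveyed tool. The keyword list and the
# example commit messages are hypothetical; real approaches in the surveyed
# studies usually rely on trained ML/DL classifiers instead.

ARCH_KEYWORDS = {  # illustrative terms, not a validated vocabulary
    "architecture", "architectural", "layer", "component", "refactor",
    "decouple", "dependency", "pattern", "microservice", "interface",
}

def looks_architectural(commit_message):
    """Flag a commit message that contains any architecture-related keyword."""
    words = {w.strip(".,;:()").lower() for w in commit_message.split()}
    return not words.isdisjoint(ARCH_KEYWORDS)

commits = [
    "Refactor persistence layer to decouple storage from the domain model",
    "Fix typo in README",
]
flagged = [m for m in commits if looks_architectural(m)]
```

Even this naive heuristic hints at why classification is the dominant mining task: the core step is deciding whether a piece of text carries architectural information at all, which trained classifiers do far more reliably than keyword matching.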
Regarding the proposed and employed approaches and tools, we found that the research on mining architectural information has focused its efforts on automating the proposed approaches (61.7\%, 50 out of 81) (see Table \ref{SammazationOf_Autom_Approaches}) for mining architectural information from various sources. On the other hand, the findings show that 52 tools have been proposed and utilized for mining architectural information, of which 30.8\% (16 out of 52) are general tools and 69.2\% (36 out of 52) are dedicated tools. The general tools are usually employed to support the dedicated tools in mining architectural information. Architectural information classification based approaches (59.3\%, 48 out of 81) and dedicated tools that support architectural information classification (36.1\%, 13 out of 36) are the most frequently used approaches and tools in mining architectural information. One potential reason could be that classification based approaches and tools normally require an existing classification, which is common in architectural information mining scenarios, such as the ISO 42010:2011 standard \cite{6129467} for the classification of architecture elements and the architectural decision ontology~\cite{kruchten2004ontology} for the classification of architectural decisions. \textbf{The relationships between the approaches and tools (results of RQ4) and used sources (results of RQ2) for mining architectural information}: In Tables \ref{SammazationOf_Autom_Approaches}, \ref{SammazationOfSemi-Autom_Approaches}, \ref{SammazationOfManual_Approaches}, \ref{SammazationOfAutomaticTools}, and \ref{SammazationOfSemi-automaticTools}, we show the relationships between the tasks in mining architectural information, architectural information mining approaches and tools, the sources used for mining architectural information, and the mined architectural information.
These tables can provide suggestions to researchers and practitioners about which categories of architectural information (e.g., architectural change, design relationship) can be mined by which architectural information mining approaches and tools from which sources (e.g., VCS, developer mailing lists) of software repositories. \subsection{Implications for researchers and practitioners}\label{ImplicationsResearchersAndPractitioners} \textbf{Investigating rarely mined types of architectural information}: The results of RQ1 show that, although 8 categories and 29 subcategories of architectural information have been mined to support the architecting activities, certain types of architectural information are rarely mined, such as design relationship and architectural technical debt. Therefore, researchers can propose dedicated approaches or tools for mining these rarely explored types of architectural information to support the development. For example, there has been little attention to mining architectural information related to Architectural Technical Debt (ATD), such as architectural conformance issues, architectural anti-patterns, and architectural smells, which is a critical type of architectural information for \textit{architecture maintenance and evolution}. ATD compromises system-wide quality attributes, particularly maintainability and evolvability \cite{li2014architectural}, and ATD needs to be identified and well managed since it is harmful to the system’s long-term health \cite{li2015systematic}. Thus, studies on mining architectural technical debt to facilitate architecture maintenance and evolution would be a major contribution to architecture research.
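As a minimal illustration of one ATD-related analysis, the sketch below detects a cyclic dependency between modules, a well-known architectural smell. It is a toy example under stated assumptions: the module names and the dependency map are hypothetical, and real ATD approaches would first recover such dependencies from source code or build artifacts before analyzing them.

```python
# Toy illustration only -- not a surveyed ATD tool. The module names and the
# dependency map are hypothetical; real approaches recover dependencies from
# source code or build artifacts before analyzing them.

def find_cycle(deps):
    """Return one cyclic dependency as a list of modules, or None."""
    visiting, visited = set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for nxt in deps.get(node, []):
            if nxt in visiting:                 # back edge => cycle found
                return path[path.index(nxt):] + [nxt]
            if nxt not in visited:
                cycle = dfs(nxt, path)
                if cycle:
                    return cycle
        visiting.discard(node)
        visited.add(node)
        path.pop()
        return None

    for module in deps:
        if module not in visited:
            cycle = dfs(module, [])
            if cycle:
                return cycle
    return None

# A cyclic dependency (an architectural smell): ui -> service -> storage -> ui
deps = {"ui": ["service"], "service": ["storage"], "storage": ["ui"]}
cycle = find_cycle(deps)  # ["ui", "service", "storage", "ui"]
```

Reporting such cycles is only the detection half of ATD management; relating them to maintenance cost and prioritizing their repayment is exactly the kind of research the implication above calls for.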
\textbf{Exploration of less used sources for mining architectural information}: The results of RQ2 show that some sources are rarely used in mining architectural information, such as developer mailing lists, technical blogs and tutorials, chat messages (e.g., IRC), presentations and videos (e.g., YouTube), App Stores (e.g., Google Play Store), code search engines (e.g., Searchcode), and search engines (e.g., Google). As a future direction, we encourage more work on the exploration of mining architectural information shared in those sources, since it will improve the comprehensiveness of the mined architectural information and strengthen the evidence of academic study results through, e.g., triangulation~\cite{wohlin2012experimentation}. For example, Soliman \textit{et al}. \cite{soliman2021exploring} found that technical blogs and tutorials are a common source of architecture knowledge on the Web. However, technical blogs and tutorials have not yet been empirically researched for their architecture knowledge, and thus current architectural information mining approaches (e.g., {[S15]}) could be extended to mine the architecture knowledge in technical blogs and tutorials. \textbf{Assisting rarely supported architecting activities}: Regarding the supported architecting activities, the results of RQ3 reveal that certain architecting activities, such as \textit{architecture understanding}, \textit{architecture recovery}, and \textit{architectural description and documentation}, gain much attention in the literature, whereas other activities, such as \textit{architecture synthesis}, \textit{architecture evaluation}, \textit{architecture impact analysis}, and \textit{architecture conformance checking}, receive less attention. Moreover, very few studies can facilitate “all activities” (see Table~\ref{MappingBetweenArchActivitiesMinedArchInfoAndStudies}).
For example, Architecture Conformance Checking (ACC) can help architects identify and correct architectural violations and further avoid constant architecture erosion during the software development life cycle \cite{pruijt2013architecture}. ACC approaches and tools analyze the architectures implemented in source code to mine, for example, architectural compliance issues, and report the violations identified in the architectures. Given these benefits, it is of paramount importance to develop dedicated approaches and tools for assisting ACC during the architecting process. \textbf{Developing tools to better support the approaches}: We observed that many dedicated tools (see Table \ref{SammazationOfAutomaticTools} and \ref{SammazationOfSemi-automaticTools}) used to mine architectural information are implemented to support architectural information mining approaches (see Table \ref{SammazationOf_Autom_Approaches}, Table \ref{SammazationOfSemi-Autom_Approaches}, and Table \ref{SammazationOfManual_Approaches}). However, of the 81 proposed architectural information mining approaches, more than half (i.e., 45 approaches) are still not supported by tools. Thus, researchers and practitioners should develop architectural information mining tools to better support these approaches. For instance, in {[S78]}, Kamal and Avgeriou proposed a manual approach for mining architectural pattern-architectural pattern relationships from books, research articles, and white papers. While this approach can assist architects in reusing architectural patterns during the architecting process, it could be even more helpful if it were (semi-)automated and supported by a dedicated tool. \textbf{Open source benchmark architectural datasets for architecture research}: The results of RQ5 report the challenges related to the quality and size of architectural datasets (e.g., lack of adequate training data {[S42]}).
The quality of architecture-related artifacts is important for architectural information mining approaches and tools. A significant portion of research has aimed to improve the quality of architectural datasets, including acquiring features, formulating data, cleaning data, and obtaining expert labelling of data. For instance, in the task of semi-automatic architectural information classification, the ground truth of the training data is crucial to the final results. The unstructured nature of the text and documents used makes labeling data and selecting features a challenge. Given these problems, it is important to construct benchmark architectural datasets to facilitate architecture research. A number of studies have worked on alleviating these problems by constructing large open source architectural datasets from software repositories (e.g., \cite{zogaan2017automated}\cite{garcia2021constructing}) to support specific architectural research topics. However, sufficiently large and high-quality architectural datasets are not yet available for certain architecture research topics. Therefore, the software architecture research community should work towards the construction of quality benchmark architectural datasets (e.g., training datasets) for various architecture research topics, such as architecture analysis and evaluation.

\textbf{Towards innovative approaches and tools for mining architectural information}: The results of RQ5 also highlight the challenge related to architectural element descriptions, such as vagueness or ambiguity in architectural element descriptions, which may lead to, for example, poor performance of architectural information mining approaches and tools. Therefore, researchers should develop more robust approaches and tools that can deal with vagueness or ambiguity in architectural element descriptions when mining architectural information.
Moreover, the results of RQ5 report the limited applicability of the approaches to heterogeneous architectural datasets, and across software projects and artifact types. For example, an architectural information mining approach trained on a dataset from online Q\&A sites did not work well on an architectural dataset from GitHub \cite{mahadi2020cross}. As pointed out by Canfora \textit{et al}. \cite{canfora2015defect}, most software projects are heterogeneous and have various feature spaces and metric distributions. Certain context factors (e.g., size, domain, and programming language) may have a huge impact on data heterogeneity. Thus, the architecture research community should address the limited applicability of architectural information mining approaches by accommodating data heterogeneity in architectural datasets.

\textbf{Adopting the proposed approaches and tools}: In our recent industrial survey on how developers search for architectural information \cite{de2022developers}, we found that none of the participants (practitioners) used the existing approaches and tools (proposed in the literature) to search for and mine architectural information from software repositories. However, according to the results of RQ4, many approaches and tools have been proposed and employed in mining architectural information; for example, the ArchEngine tool in {[S13]} can help developers search for and mine architectural tactics from available software projects for later reuse. Thus, practitioners should pay attention to the latest academic research results, and are encouraged to apply and adopt the proposed architectural information mining approaches and tools according to their needs and project contexts, which can help identify not only the limitations of those approaches and tools but also the real needs of practitioners in mining architectural information in practice.
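The rule-based analysis that ACC approaches perform, as discussed earlier in this section, can be illustrated with a minimal sketch. The layer names and dependency rules below are hypothetical; real ACC tools extract dependencies from actual source code and check them against a documented intended architecture:

```python
# Minimal sketch of rule-based architecture conformance checking.
# ALLOWED encodes a hypothetical intended layered architecture:
# each module may only depend on the modules listed for it.
ALLOWED = {
    "ui": {"service"},
    "service": {"repository"},
    "repository": set(),
}

def check_conformance(dependencies):
    """Return the dependencies that violate the intended architecture."""
    violations = []
    for source, target in dependencies:
        if target not in ALLOWED.get(source, set()):
            violations.append((source, target))
    return violations

# Dependencies as they might be extracted from code (hypothetical example):
deps = [("ui", "service"), ("service", "repository"), ("ui", "repository")]
print(check_conformance(deps))  # → [('ui', 'repository')], a layering violation
```

In this sketch, the direct dependency from \texttt{ui} to \texttt{repository} bypasses the service layer and is reported as a violation, which is the kind of architectural compliance issue ACC tools surface for architects.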
\section{Threats to Validity}\label{ThreatValidity}
In this section, we discuss the potential threats to the validity of our SMS, as well as the measures that we took to mitigate these threats, according to the guidelines in \cite{wohlin2012experimentation} and \cite{zhou2016map}.

\subsection{Construct validity}
Construct validity concerns whether the theoretical and conceptual constructs are correctly interpreted and measured. In this SMS, the main threats to construct validity concern study search and study screening:

\textbf{Study search}. Finding all relevant primary studies is a challenge for any SMS or literature review-based study. To tackle this issue, a search strategy was prepared and executed in our research. Search terms were constructed from the strings identified by checking the titles and keywords of relevant publications already known to the authors. Nevertheless, relevant studies may still have been omitted, which would affect the completeness of the retrieved results. To mitigate this threat, we searched in seven popular electronic databases (see Table \ref{ElectronicDatabases}) that publish software engineering research \cite{chen2010towards}. Before the formal search, a pilot search was performed to assess and refine the suitability of the search terms and the quality of the retrieved results. Moreover, to further ensure the completeness of the study search, we used the “snowballing” technique \cite{wohlin2014guidelines} to include any potentially relevant studies that might have been missed during the automatic search.

\textbf{Study screening}. Whether or not to include a study mainly depends on the judgment of the researchers involved in the study screening, which inevitably introduces personal bias due to the limitations of personal knowledge. To mitigate this selection bias, we first defined a set of inclusion and exclusion criteria (see Table \ref{InclusionExculusionCriteria}) for the study selection.
Secondly, before the formal study screening (manual inspection), to reach an agreement about the inclusion and exclusion criteria, a pilot study screening was performed whereby the first two authors randomly selected 100 studies (from the 22,540 retrieved studies) and checked them independently by following the three rounds of study screening (see Section \ref{StudyScreening}). To measure the inter-rater agreement between the first two authors, we calculated the Cohen’s Kappa coefficient \cite{cohen1960coefficient} and obtained an agreement of 0.935. The results of the pilot study screening were checked and examined by the first two authors of this study, who also discussed the uncertain studies to reach an agreement and reduce the risk of omitting relevant studies.

\subsection{Internal validity}
Internal validity pertains to aspects of the study design that have a potential impact on the results. In this SMS, the main threats to internal validity concern the extracted data and the synthesis of the results:

\textbf{Data extraction}. The first potential threat to internal validity stems from the quality of the extracted data, which might have a negative impact on the data synthesis and categorization results. Several measures were taken to mitigate the bias of the researchers who conducted the data extraction. First, to ensure the consistency of the data extraction results, the first three authors of this study discussed and formulated the data items to reach a consensus on the content of the data to be extracted. In addition, before the formal data extraction, the first author conducted a pilot data extraction with 15 selected studies. The pilot data extraction results from the first author were checked by the second and third authors against the description of each data item. Any disagreement was discussed and resolved together to reach a shared understanding of the data items (specified in Table \ref{DataExtraction}).
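For reference, the Cohen's Kappa coefficient used above to quantify inter-rater agreement is $\kappa = (p_o - p_e)/(1 - p_e)$, where $p_o$ is the observed agreement and $p_e$ the agreement expected by chance. A minimal sketch follows; the include/exclude decisions shown are hypothetical illustrations, not our actual screening data:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters labelling the same items."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    # observed agreement: fraction of items both raters labelled identically
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # chance agreement: assume each rater labels independently with
    # probabilities given by their own label frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include/exclude decisions from a pilot screening:
r1 = ["inc", "exc", "exc", "inc", "exc", "exc", "exc", "inc"]
r2 = ["inc", "exc", "exc", "inc", "exc", "inc", "exc", "inc"]
print(cohens_kappa(r1, r2))  # → 0.75
```

Values near 1 indicate near-perfect agreement beyond chance, so the 0.935 reported above reflects a very high consistency between the two screeners.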
\textbf{Data synthesis}. The quality of the data synthesis may affect the correctness of the answers to our five RQs (see Table \ref{ResearchQuestions}). Researchers may have their own understanding of the data synthesis, for instance, of the categorization of the extracted data. To minimize personal bias, we performed a pilot data synthesis before the formal one. Specifically, the first two authors randomly chose 5 primary studies (from the 79 selected studies). They independently read the full text of these 5 studies and encoded the extracted data items (see Table \ref{DataExtraction}) from those studies in order to answer the five RQs. To improve the reliability of the pilot data synthesis results, the first two authors held a meeting and followed a negotiated agreement approach~\cite{campbell2013coding}: they compared the data encoding results, discussed their disagreements, confusion, and uncertain judgments, and reconciled them to arrive at a final version of the pilot data synthesis results in which as many discrepancies as possible were resolved.

\subsection{External validity}
External validity concerns the extent to which the results and findings of a study can be generalized. This SMS provides an overview of the state of the art of the studies on mining architectural information in software development. Although we took measures to increase the completeness of our study search, there are still limitations and we may have missed relevant studies on mining architectural information in this SMS. To improve the completeness of the selected studies, we conducted the search process over the seven most popular digital databases that publish software engineering-related studies \cite{chen2010towards}. Moreover, the snowballing technique was employed in the study search to mitigate the possibility of missing relevant studies.
\subsection{Reliability}
Reliability refers to whether the study would provide the same results and findings when replicated by other researchers. In this SMS, the study screening, data extraction, and data synthesis were conducted manually by following the designed protocol. A potential threat to the reliability of the SMS is personal bias in data extraction and synthesis. To alleviate this threat, we took measures (e.g., pilot data extraction and synthesis) to mitigate such bias between the authors. Moreover, the results from the categorization stage were cross-checked by at least two authors of the study.

\section{Conclusions} \label{ConclusionFurtureWork}
In this mapping study, we aimed to give a comprehensive overview of the current state of research on mining architectural information. Specifically, we investigated the mined architectural information, the sources used, the supported architecting activities, and the approaches and tools employed, as well as the challenges faced when mining architectural information. To achieve this goal, we automatically searched for papers in seven electronic databases and complemented the automatic search with the snowballing technique. We finally selected 79 primary studies published in 46 venues for further data extraction and synthesis to answer the five RQs. We summarize the main results and findings of this SMS as follows: \begin{itemize} \item 8 categories and 29 subcategories of architectural information have been mined to support software development. Architectural description is the most mined category of architectural information. \item Various sources have been used to mine architectural information, such as \textit{VCS} (e.g., GitHub), \textit{Online Q\&A sites} (e.g., Stack Overflow), \textit{Software description and documentation}, \textit{Wiki}, and \textit{Issue tracking systems}.
\item 12 architecting activities can be supported by the mined architectural information, where \textit{architecture understanding} (50.6\%, 40 out of 79 studies), \textit{architecture recovery} (32.9\%, 26 out of 79 studies), and \textit{architectural description and documentation} (29.1\%, 23 out of 79 studies) are the top three most supported architecting activities. \item 81 approaches and 52 tools have been proposed and employed to mine architectural information. Architectural information classification is the task most frequently supported by the approaches (59.3\%, 48 out of 81 approaches) in mining architectural information. \item The challenges in mining architectural information mainly stem from the descriptions of architectural elements, wherein vagueness or ambiguity in architectural element descriptions is the most prevalent challenge. Moreover, only a limited number of quality datasets of appropriate size are available for architectural information mining approaches. These challenges should receive more attention in future studies. \end{itemize} The results and findings of this SMS provide meaningful implications for both researchers and practitioners in the software architecture community. Strong industrial evidence is needed to assess the effectiveness of the proposed architectural information mining approaches and tools. In addition, dedicated approaches should be proposed to tackle the reported challenges in mining architectural information, such as the vagueness or ambiguity in architectural element descriptions. Moreover, we encourage more collaboration between researchers and practitioners to close the gap between academia and industry and address the existing challenges, such as constructing large-scale and quality architectural datasets for architectural information mining approaches. \section*{Acknowledgment} This work is funded by the National Natural Science Foundation of China (NSFC) with Grant No. 62172311.
\begin{appendix} \section{Selected Studies}\label{SlectedStudies} \noindent{[S1]} M. Soliman, M. Galster, P. Avgeriou, An exploratory study on architectural knowledge in issue tracking systems, in: Proceedings of the 15th European Conference on Software Architecture (ECSA), 2021, pp. 117–133.\vspace{2mm}\\ {[S2]} M. Soliman, M. Wiese, Y. Li, M. Riebisch, P. Avgeriou, Exploring web search engines to find architectural knowledge, in: Proceedings of the 18th IEEE International Conference on Software Architecture (ICSA), 2021, pp. 162–172.\vspace{2mm}\\ {[S3]} M. Soliman, A. R. Salama, M. Galster, O. Zimmermann, M. Riebisch, Improving the search for architecture knowledge in online developer communities, in: Proceedings of the 15th IEEE International Conference on Software Architecture (ICSA), 2018, pp. 186–195.\vspace{2mm}\\ {[S4]} M. Soliman, M. Galster, A. R. Salama, M. Riebisch, Architectural knowledge for technology decisions in developer communities: An exploratory study with Stack Overflow, in: Proceedings of the 13th Working IEEE/IFIP Conference on Software Architecture (WICSA), 2016, pp. 128–133.\vspace{2mm}\\ {[S5]} F. Tian, F. Lu, P. Liang, M. A. Babar, Automatic identification of architecture smell discussions from Stack Overflow, in: Proceedings of the 32nd International Conference on Software Engineering and Knowledge Engineering (SEKE), 2020, pp. 451–456.\vspace{2mm}\\ {[S6]} T. Bi, P. Liang, A. Tang, X. Xia, Mining architecture tactics and quality attributes knowledge in Stack Overflow, Journal of Systems and Software 180 (2021) 111005.\vspace{2mm}\\ {[S7]} I. Malavolta, K. Chinnappan, S. Swanborn, G. A. Lewis, P. Lago, Mining the ROS ecosystem for green architectural tactics in robotics and an empirical evaluation, in: Proceedings of the 18th International Working Conference on Mining Software Repositories (MSR), 2021, pp. 300–311.\vspace{2mm}\\ {[S8]} I. Malavolta, G. A. Lewis, B. Schmerl, P. Lago, D.
Garlan, Mining guidelines for architecting robotics software, Journal of Systems and Software 178 (2021) 110969. \vspace{2mm}\\ {[S9]} K. Chinnappan, I. Malavolta, G. A. Lewis, M. Albonico, P. Lago, Architectural tactics for energy-aware robotics software: A preliminary study, in: Proceedings of the 15th European Conference on Software Architecture (ECSA), 2021, pp. 164–171.\vspace{2mm}\\ {[S10]} M. Borg, I. Lennerstad, R. Ros, E. Bjarnason, On using active learning and self-training when mining performance discussions on stack overflow, in: Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering (EASE), 2017, pp. 308–313.\vspace{2mm}\\ {[S11]} T. Bi, W. Ding, P. Liang, A. Tang, Architecture information communication in two OSS projects: The why, who, when, and what, Journal of Systems and Software 181 (2021) 111035.\vspace{2mm}\\ {[S12]} S. Nadi, T. Berger, C. K\"{a}stner, K. Czarnecki, Mining configuration constraints: Static analyses and empirical results, in: Proceedings of the 36th International Conference on Software Engineering (ICSE), 2014, pp. 140–151. \vspace{2mm}\\ {[S13]} M. J. Mujhid, J. C. Santos, R. Gopalakrishnan, M. Mirakhorli, A search engine for finding and reusing architecturally significant code, Journal of Systems and Software 130 (2017) 81–93.\vspace{2mm}\\ {[S14]} A. Zalewski, K. Borowa, K. Lisocki, Supporting architectural decision-making with data retrieved from online communities, in: Proceedings of the 16th International Conference on Dependability and Complex Systems (DepCoS-RELCOMEX), 2021, pp. 496–509.\vspace{2mm}\\ {[S15]} I. Gorton, R. Xu, Y. Yang, H. Liu, G. Zheng, Experiments in curation: Towards machine-assisted construction of software architecture knowledge bases, in: Proceedings of the 14th IEEE International Conference on Software Architecture (ICSA), 2017, pp. 79–88.\vspace{2mm}\\ {[S16]} G. Borrego, A. L. Mor\'{a}n, R. R. Palacio, A. Vizca\'{i}no, F. O. 
Garc\'{i}a, Towards a reduction in architectural knowledge vaporization during agile global software development, Information and Software Technology 112 (2019) 68–82.\vspace{2mm}\\ {[S17]} G. Viviani, M. Famelis, X. Xia, C. Janik-Jones, G. C. Murphy, Locating latent design information in developer discussions: A study on pull requests, IEEE Transactions on Software Engineering 47 (7) (2019) 1402–1413. \vspace{2mm}\\ {[S18]} J. S. v. d. Ven, J. Bosch, Making the right decision: supporting architects with design decision data, in: Proceedings of the 7th European Conference on Software Architecture (ECSA), 2013, pp. 176–183. \vspace{2mm}\\ {[S19]} M. Bhat, C. Tinnes, K. Shumaiev, A. Biesdorf, U. Hohenstein, F. Matthes, Adex: A tool for automatic curation of design decision knowledge for architectural decision recommendations, in: Proceedings of the 6th IEEE International Conference on Software Architecture Companion (ICSA-C), 2019, pp. 158–161. \vspace{2mm}\\ {[S20]} S. Karthik, N. Medvidovic, Automatic detection of latent software component relationships from online Q\&A sites, in: Proceedings of the 7th IEEE/ACM International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE), 2019, pp. 15–21. \vspace{2mm}\\ {[S21]} M. Mirakhorli, P. M\"{a}der, J. Cleland-Huang, Variability points and design pattern usage in architecture tactics, in: Proceedings of the 20th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE), 2012, pp. 1–11. \vspace{2mm}\\ {[S22]} M. Bhat, K. Shumaiev, A. Biesdorf, U. Hohenstein, F. Matthes, Automatic extraction of design decisions from issue management systems: a machine learning based approach, in: Proceedings of the 11th European Conference on Software Architecture (ECSA), 2017, pp. 138–154. \vspace{2mm}\\ {[S23]} Aman-ul-haq, M. A. 
Babar, Tool support for automating architectural knowledge extraction, in: Proceedings of the 4th Workshop on SHAring and Reusing architectural Knowledge (SHARK), 2009, pp. 49–56. \vspace{2mm}\\ {[S24]} C. L\'{o}pez, V. Codocedo, H. Astudillo, L. M. Cysneiros, Bridging the gap between software architecture rationale formalisms and actual architecture documents: An ontology-driven approach, Science of Computer Programming 77 (1) (2012) 66–80. \vspace{2mm}\\ {[S25]} B. Rogers, Y. Qiao, J. Gung, T. Mathur, J. E. Burge, Using text mining techniques to extract rationale from existing documentation, in: Proceedings of the 9th International Conference on Design Computing and Cognition (DCC), 2014, pp. 457–474. \vspace{2mm}\\ {[S26]} G. M\'{a}rquez, M. M. Villegas, H. Astudillo, An empirical study of scalability frameworks in open source microservices-based systems, in: Proceedings of the 37th IEEE International Conference of the Chilean Computer Science Society (SCCC), 2018, pp. 1–8. \vspace{2mm}\\ {[S27]} R. Gopalakrishnan, P. Sharma, M. Mirakhorli, M. Galster, Can latent topics in source code predict missing architectural tactics?, in: Proceedings of the 39th IEEE/ACM International Conference on Software Engineering (ICSE), 2017, pp. 15–26. \vspace{2mm}\\ {[S28]} A. M. Figueiredo, J. C. Dos Reis, M. A. Rodrigues, Improving access to software architecture knowledge: An ontology-based search approach, International Journal Multimedia and Image Processing 2 (1/2) (2012) 124–149. \vspace{2mm}\\ {[S29]} J. Soldani, G. Muntoni, D. Neri, A. Brogi, The $\mu$TOSCA toolchain: Mining, analyzing, and refactoring microservice-based architectures, Software: Practice and Experience 51 (7) (2021) 1591–1621. \vspace{2mm}\\ {[S30]} N. Ghorbani, J. Garcia, S. Malek, Detection and repair of architectural inconsistencies in Java, in: Proceedings of the 41st IEEE/ACM International Conference on Software Engineering (ICSE), 2019, pp. 560–571. \vspace{2mm}\\ {[S31]} P. Velasco-Elizondo, R.
Mar\'{i}ın-Pi\H{n}a, S. Vazquez-Reyes, A. Mora-Soto, J. Mejia, Knowledge representation and information extraction for analysing architectural patterns, Science of Computer Programming 121 (2016) 176–189. \vspace{2mm}\\ {[S32]} M. L. Rold\'{a}an, S. Gonnet, H. Leone, An ontology-based approach for sharing, integrating, and retrieving architectural knowledge, Electronic Notes in Theoretical Computer Science 339 (2018) 43-62. \vspace{2mm}\\ {[S33]} L. Lee, P. Kruchten, Customizing the capture of software architectural design decisions, in: Proceedings of the 21st Annual Canadian Conference on Electrical and Computer Engineering (CCECE), 2008, pp. 693–698. \vspace{2mm}\\ {[S34]} C. Maffort, M. T. Valente, R. Terra, M. Bigonha, N. Anquetil, A. Hora, Mining architectural violations from version history, Empirical Software Engineering 21 (3) (2016) 854–895. \vspace{2mm}\\ {[S35]} A. Shahbazian, D. Nam, N. Medvidovic, Toward predicting architectural significance of implementation issues, in: Proceedings of the 15th International Working Conference on Mining Software Repositories (MSR), 2018, pp. 215–219. \vspace{2mm}\\ {[S36]} R. Talwadker, D. Aggarwal, Popcon: Mining popular software configurations from community, in: Proceedings of the 13th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), 2019, pp. 1--6. \vspace{2mm}\\ {[S37]} R. Kazman, Y. Cai, R. Mo, Q. Feng, L. Xiao, S. Haziyev, V. Fedak, A. Shapochka, A case study in locating the architectural roots of technical debt, in: Proceedings of the 37th IEEE International Conference on Software Engineering (ICSE), 2015, pp. 179-188. \vspace{2mm}\\ {[S38]} M. X. Liu, J. Hsieh, N. Hahn, A. Zhou, E. Deng, S. Burley, C. Taylor, A. Kittur, B. A. Myers, Unakite: Scaffolding developers’ decision-making using the web, in: Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST), 2019, pp. 67-80. \vspace{2mm}\\ {[S39]} G. Granchelli, M. Cardarelli, P. 
Di Francesco, I. Malavolta, L. Iovino, A. Di Salle, Towards recovering the software architecture of microservice-based systems, in: Proceedings of the 14th IEEE International Conference on Software Architecture Workshops (ICSAW), 2017, pp. 46–53. \vspace{2mm}\\ {[S40]} A. K. Mondal, B. Roy, S. S. Nath, K. A. Schneider, Archinet: A concept-token based approach for determining architectural change categories, In: Proceedings of the 33rd International Conference on Software Engineering and Knowledge Engineering (SEKE), 2021, pp. 7-14. \vspace{2mm}\\ {[S41]} A. K. Mondal, B. Roy, K. A. Schneider, An exploratory study on automatic architectural change analysis using natural language processing techniques, in: Proceedings of the 19th International Working Conference on Source Code Analysis and Manipulation (SCAM), 2019, pp. 62–73. \vspace{2mm}\\ {[S42]} F. Gilson, M. Galster, F. Georis, Extracting quality attributes from user stories for early architecture decision making, in: Proceedings of the 16th IEEE International Conference on Software Architecture Companion (ICSA-C), 2019, pp. 129–136. \vspace{2mm}\\ {[S43]} A. Casamayor, D. Godoy, M. Campo, Functional grouping of natural language requirements for assistance in architectural software design, Knowledge-Based Systems 30 (2012) 78-86. \vspace{2mm}\\ {[S44]} K. A. de Graaf, A. Tang, P. Liang, A. Khalili, Querying software architecture knowledge as linked open data, in: Proceedings of the 14th IEEE International Conference on Software Architecture Workshops (ICSAW), 2017, pp. 272–277. \vspace{2mm}\\ {[S45]} X. Lian, W. Liu, L. Zhang, Assisting engineers extracting requirements on components from domain documents, Information and Software Technology 118 (2020) 106196. \vspace{2mm}\\ {[S46]} P. Behnamghader, D. M. Le, J. Garcia, D. Link, A. Shahbazian, N. Medvidovic, A large-scale study of architectural evolution in open-source software systems, Empirical Software Engineering 22 (3) (2017) 1146–1193. 
\vspace{2mm}\\ {[S47]} A. Shahbazian, Y. K. Lee, D. Le, Y. Brun, N. Medvidovic, Recovering architectural design decisions, in: Proceedings of the 15th IEEE International Conference on Software Architecture (ICSA), 2018, pp. 95–104. \vspace{2mm}\\ {[S48]} A. Mahadi, K. Tongay, N. A. Ernst, Cross-dataset design discussion mining, in: Proceedings of the 27th International Conference on Software Analysis, Evolution and Reengineering (SANER), 2020, pp. 149–160. \vspace{2mm}\\ {[S49]} A. Shakiba, R. Green, R. Dyer, FourD: do developers discuss design? revisited, in: Proceedings of the 2nd International Workshop on Software Analytics (SWAN), 2016, pp. 43–46. \vspace{2mm}\\ {[S50]} M. T. Su, J. Hosking, J. Grundy, Capturing architecture documentation navigation trails for content chunking and sharing, in: Proceedings of the 9th Working IEEE/IFIP Conference on Software Architecture (WICSA), 2011, pp. 256–259. \vspace{2mm}\\ {[S51]} M. Chaabane, I. B. Rodriguez, K. Drira, M. Jmaiel, Mining approach for software architectures’ description discovery, in: Proceedings of the 14th IEEE/ACS International Conference on Computer Systems and Applications (AICCSA), 2017, pp. 879–886. \vspace{2mm}\\ {[S52]} M. Nicoletti, J. A. Diaz-Pace, S. Schiaffino, Towards software architecture documents matching stakeholders’ interests, in: Proceedings of the 2nd International Conference on Advances in New Technologies, Interactive Interfaces, and Communicability (ADNTIIC), 2011, pp. 176–185. \vspace{2mm}\\ {[S53]} J. Musil, F. J. Ekaputra, M. Sabou, T. Ionescu, D. Schall, A. Musil, S. Biffl, Continuous architectural knowledge integration: Making heterogeneous architectural knowledge available in large-scale organizations, in: Proceedings of the 14th IEEE International Conference on Software Architecture (ICSA), 2017, pp. 189–192. \vspace{2mm}\\ {[S54]} R. Weinreich, G. Buchgeher, Towards supporting the software architecture life cycle, Journal of Systems and Software 85 (3) (2012) 546–561. 
\vspace{2mm}\\ {[S55]} P. R. Anish, B. Balasubramaniam, J. Cleland-Huang, R. Wieringa, M. Daneva, S. Ghaisas, Identifying architecturally significant functional requirements, in: Proceedings of the 5th International Workshop on the Twin Peaks of Requirements and Architecture (TwinPeaks), 2015, pp. 3–8. \vspace{2mm}\\ {[S56]} D. E. Krutz, M. Mirakhorl, Architectural clones: toward tactical code reuse, in: Proceedings of the 31st Annual ACM Symposium on Applied Computing (SAC), 2016, pp. 1480–1485. \vspace{2mm}\\ {[S57]} M. Mirakhorli, J. Cleland-Huang, Detecting, tracing, and monitoring architectural tactics in code, IEEE Transactions on Software Engineering 42 (3) (2015) 205–220. \vspace{2mm}\\ {[S58]} R. Duddukuri, T. Prabhakar, Helping architects in retrieving architecture documents: A semantic based approach, in: Proceedings of the 1st International Workshop on Semantic Matchmaking and Resource Retrieval (SMR), 2006, pp. 113-120. \vspace{2mm}\\ {[S59]} L. do Nascimento Vale, M. d. A. Maia, Keecle: Mining key architecturally relevant classes using dynamic analysis, in: Proceedings of the 31st IEEE International Conference on Software Maintenance and Evolution (ICSME), 2015, pp. 566–570. \vspace{2mm}\\ {[S60]} H.-J. Happel, S. Seedorf, M. Schader, Ontology-enabled documentation of service-oriented architectures with Ontobrowse semantic wiki, PRIMIUM-Process Innovation for Enterprise Software (2009) 61-80. \vspace{2mm}\\ {[S61]} T. L. Babu, M. S. Ramaiah, T. Prabhakar, D. Rambabu, Archvoc–towards an ontology for software architecture, in: Proceedings of the 2nd Workshop on Sharing and Reusing Architectural Knowledge-Architecture, Rationale, and Design Intent (SHARK/ADI), 2007, pp. 5–10. \vspace{2mm}\\ {[S62]} E. Maggiori, L. Gervasoni, M. Antúnez, A. Rago, and J. A. D\'{i}az-Pace, Towards recovering architectural information from images of architectural diagrams, in: Proceedings of the 15th Argentine Symposium on Software Engineering (ASSE), 2014, pp. 36–50. 
\vspace{2mm}\\ {[S63]} D. Liu, Z.-L. Ren, Z.-T. Long, G.-J. Gao, H. Jiang, Mining design pattern use scenarios and related design pattern pairs: A case study on online posts, Journal of Computer Science and Technology 35 (5) (2020) 963–978. \vspace{2mm}\\ {[S64]} R. Mo, Y. Cai, R. Kazman, L. Xiao, Q. Feng, Architecture anti-patterns: Automatically detectable violations of design principles, IEEE Transactions on Software Engineering 47 (5) (2019) 1008–1028. \vspace{2mm}\\ {[S65]} G. Bavota, M. Gethers, R. Oliveto, D. Poshyvanyk, A. d. Lucia, Improving software modularization via automated analysis of latent topics and dependencies, ACM Transactions on Software Engineering and Methodology 23 (1) (2014) 1–33. \vspace{2mm}\\ {[S66]} G. Gokyer, S. Cetin, C. Sener, M. T. Yondem, Non-functional requirements to architectural concerns: {ML} and {NLP} at crossroads, in: Proceedings of the 3rd International Conference on Software Engineering Advances (ICSEA), 2008, pp. 400–406. \vspace{2mm}\\ {[S67]} I. Sora, C. B. Chirila, Finding key classes in object-oriented software systems by techniques based on static analysis, Information and Software Technology 116 (2019) 106176. \vspace{2mm}\\ {[S68]} B. S. Mitchell, S. Mancoridis, On the automatic modularization of software systems using the bunch tool, IEEE Transactions on Software Engineering 32 (3) (2006) 193–208\vspace{2mm}\\ {[S69]} Y. Zhang, R. Witte, J. Rilling, V. Haarslev, Ontology-based program comprehension tool supporting website architectural evolution, in: Proceedings of the 8th IEEE International Symposium on Web Site Evolution (WSE), 2006, pp. 41–49.\vspace{2mm}\\ {[S70]} R. C. de Boer, Architectural knowledge discovery: Why and how?, ACM SIGSOFT Software Engineering Notes 31 (5) (2006) 1–4. \vspace{2mm}\\ {[S71]} J. A. D\'{ı}az-Pace, A. Tommasel, D. 
Godoy, Towards anticipation of architectural smells using link prediction techniques, in: Proceedings of the 18th IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM), 2018, pp. 62–71. \vspace{2mm}\\ {[S72]} M. Nayrolles, N. Moha, P. Valtchev, Improving SOA antipatterns detection in service-based systems by mining execution traces, in: Proceedings of the 20th Working Conference on Reverse Engineering (WCRE), 2013, pp. 321–330. \vspace{2mm}\\ {[S73]} V. Bandara, I. Perera, Identifying software architecture erosion through code comments, in: Proceedings of the 18th International Conference on Advances in ICT for Emerging Regions (ICTer), 2018, pp. 62–69. \vspace{2mm}\\ {[S74]} A. Zaidman, S. Demeyer, Automatic identification of key classes in a software system using web mining techniques, Journal of Software Maintenance and Evolution: Research and Practice 20 (6) (2008) 387–417. \vspace{2mm}\\ {[S75]} S. Chardigny, A. Seriai, M. Oussalah, D. Tamzalit, Extraction of component-based architecture from object-oriented systems, in: Proceedings of the 7th Working IEEE/IFIP Conference on Software Architecture (WICSA), 2008, pp. 285–288. \vspace{2mm}\\ {[S76]} M. Lu, P. Liang, Automatic classification of non-functional requirements from augmented app user reviews, in: Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering (EASE), 2017, pp. 344–353. \vspace{2mm}\\ {[S77]} A. K. Mondal, C. K. Roy, K. A. Schneider, B. Roy, S. S. Nath, Semantic slicing of architectural change commits: Towards semantic design review, in: Proceedings of the 15th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), 2021, pp. 1–6. \vspace{2mm}\\ {[S78]} A. W. Kamal, P. Avgeriou, Mining relationships between the participants of architectural patterns, in: Proceedings of the 4th European Conference on Software Architecture (ECSA), 2010, pp. 401–408.\vspace{2mm}\\ {[S79]} M. Ali, H.
Mushtaq, M. B. Rasheed, A. Baqir, T. Alquthami, Mining software architecture knowledge: Classifying stack overflow posts using machine learning, Concurrency and Computation: Practice and Experience 33 (16) (2021) e6277. \vspace{2mm}\\ \section{Distribution of the Selected Studies in Publication Venues}\label{SelectedStudiesInVenues} \begin{longtable}{p{0.07cm}p{11.5cm}p{1.5cm}p{0.3cm}} \caption{Distribution of the selected studies in publication venues} \label{PublicationVenues} \\ \hline \multicolumn{1}{l} {\textbf{\#}} & \multicolumn{1}{l}{\textbf{Publication venue}} & \multicolumn{1}{l}{\textbf{Type}} & \multicolumn{1}{l}{\textbf{No.}}\\ \hline \endfirsthead \multicolumn{3}{l}% {{\bfseries }}\\ \endhead \multicolumn{3}{r}{{}} \\ \endfoot \hline \hline \endlastfoot 1 & International Conference on Software Architecture (ICSA) & Conference & 12\\ 2 & International Conference on Software Engineering (ICSE) & Conference & 5\\ 3 & European Conference on Software Architecture (ECSA) & Conference & 5\\ 4 & Journal of Systems and Software & Journal & 5\\ 5 & IEEE Transactions on Software Engineering & Journal & 5\\ 6 & Information and Software Technology & Journal & 3\\ 7 & Empirical Software Engineering & Journal & 2\\ 8 & Science of Computer Programming & Journal & 2\\ 9 & International Conference on Evaluation and Assessment in Software Engineering (EASE) & Conference & 2\\ 10 & International Working Conference on Mining Software Repositories (MSR) & Conference & 2\\ 11 & International Conference on Software Engineering and Knowledge Engineering (SEKE) & Conference & 2\\ 12 & International Conference on Software Architecture Workshop (ICSAW) & Workshop & 2\\ 13 & Workshop on Sharing and Reusing Architectural Knowledge (SHARK) & Workshop & 2\\ \end{longtable} \section{Abbreviations used in this SMS} \begin{longtable}{p{10em}p{24em}} \caption{Abbreviations used in this SMS} \label{Abbreviation} \\ \hline \multicolumn{1}{l} {\textbf{Abbreviation}} &
\multicolumn{1}{l}{\textbf{Full name}}\\ \hline \endfirsthead \multicolumn{2}{l}% {{\bfseries }}\\ \endhead \multicolumn{2}{r}{{}} \\ \endfoot \hline \hline \endlastfoot AA & Architecture Analysis\\ ACC & Architecture Conformance Checking\\ AD & Architectural Description and Documentation\\ AE & Architecture Evaluation\\ AI & Architecture Implementation\\ AIA & Architecture Impact Analysis\\ AK & Architecture Knowledge\\ AME & Architecture Maintenance and Evolution\\ ARec & Architecture Recovery\\ AReu & Architecture Reuse\\ AS & Architecture Synthesis\\ ASR & Architecturally Significant Requirement\\ AT & Architectural Tactic\\ ATD & Architectural Technical Debt\\ AU & Architecture Understanding\\ DL & Deep Learning\\ LDA & Latent Dirichlet Allocation\\ LP & Link Prediction\\ MDG & Module Dependency Graph\\ ML & Machine Learning\\ MSR & Mining Software Repositories\\ MaRK & Mining Requirements Knowledge\\ NFR & Non-Functional Requirement\\ NLP & Natural Language Processing\\ OSS & Open Source Software\\ POS & Part of Speech\\ QA & Quality Attribute\\ Q\&A & Questions and Answers\\ RQ & Research Question\\ SA & Software Architecture\\ SMS & Systematic Mapping Study\\ SLR & Systematic Literature Review\\ SOA & Service-Oriented Architecture\\ VCS & Version Control Systems\\ \end{longtable} \end{appendix}
\section{Introduction} Optical frequency combs \cite{diddams2020optical} have revolutionized the field of metrology, with applications including optical atomic clocks \cite{oelker2019demonstration, nemitz2016frequency, stern2020direct}, distance measurement \cite{minoshima2000high, ye2004absolute,coddington2009rapid, na2020ultrafast, riemensberger2020massively}, and microwave generation \cite{fortier2011generation, nakamura2020coherent, liu2020photonic}, to name a few. Since the discovery of dissipative Kerr soliton combs (hereafter called soliton combs) \cite{Herr_soliton, kippenberg2018dissipative} in high-Q microresonators as potentially fully chip-scale optical frequency combs \cite{stern2018battery,shen2020integrated}, soliton combs have attracted significant attention. A soliton comb is a mode-locked state with high coherence that produces ultra-short pulse trains. The comb spacing of soliton combs can exceed 100 GHz because of the very short cavity length, which enables unique applications beyond research laboratories, such as ranging (including light detection and ranging, LiDAR) \cite{trocha2018ultrafast,riemensberger2020massively}, coherent optical telecommunication \cite{marin2017microresonator}, and mmW/THz wave generation \cite{zhang2019terahertz}. Continuous scanning of the comb modes further enables attractive applications such as massively parallel frequency-modulated (FM) CW LiDAR \cite{riemensberger2020massively}, high-resolution LiDAR based on FM-comb LiDAR \cite{kuse2019frequency}, and broadband, high-frequency-resolution spectroscopy \cite{yu2017microresonator, kuse2020continuous, lin2020broadband}. In these applications, scan range and speed are both important: a large scan range enables high depth resolution, while fast scanning shortens the measurement time in LiDAR.
In spectroscopy, a large scan range eliminates blind frequency areas caused by the sparsity of the comb modes and enables a frequency resolution determined by the linewidth of the comb modes rather than by the comb mode spacing. Fast scanning likewise shortens the measurement time. To scan the comb modes over a large range (\textgreater 100 GHz), both the pump CW laser and the resonance frequency of the microresonator have to be scanned simultaneously \cite{riemensberger2020massively,liu2020monolithic}, because the soliton exists only when the pump CW laser is red-detuned from the resonance frequency by a few GHz. A microheater deposited on a microresonator has been utilized for large-range comb mode scanning \cite{xue2016thermal,Gaeta_heater}. By implementing a feedback loop based on Pound-Drever-Hall (PDH) locking or soliton power locking to fix the detuning while the comb modes are scanned, comb mode scanning of 190 GHz \cite{kuse2020continuous} and 31 GHz \cite{lin2020broadband} has been demonstrated. In these demonstrations, a pump CW laser based on an external cavity diode laser (ECDL) was used, which impedes the simultaneous realization of a large scan range and fast scanning. The scan range and speed of the comb modes of a soliton comb pumped by an ECDL would be at best around 30 GHz and 1 ms, respectively, limited by the PZT inside the ECDL (a larger scan range is possible if a slower scan speed is acceptable). Although a compact semiconductor laser with an external fiber Bragg grating (FBG) has also been used for the generation of a soliton comb, the scan speed of such a laser cannot be fast (\textgreater 1 ms) due to the slow response of the FBG \cite{raja2020chip}. Alternatively, distributed feedback (DFB) lasers are more promising for large-range, fast scanning because the frequency of DFB lasers can be scanned over a wide range (\textgreater 100 GHz) and rapidly (\textless 1 ms) without any mode-hopping.
Very recently, soliton combs have been generated from DFB lasers by utilizing self-injection locking (SIL). In SIL \cite{stern2018battery, voloshin2019dynamics,raja2019electrically,shen2020integrated}, a DFB laser is butt-coupled to a high-Q Si$_3$N$_4$ (SiN)-based microresonator (\textgreater 10\textsuperscript{7}), and Rayleigh back-scattering from the microresonator is injected into the DFB laser, forcing the oscillation frequency of the pump CW laser to coincide with the resonance frequency of the microresonator (or to be offset from it in some cases) when the resonance frequency is within the injection locking range of the DFB laser. However, it is not clear whether a DFB laser with SIL allows large-range, fast comb mode scanning, because the detuning between the frequency of the injected DFB laser and a resonance frequency of the high-Q microresonator is inherently determined by the optical phase of the back-scattered light and the optical power coupled to the microresonator, which also makes it difficult to access a single soliton comb. Therefore, for comb mode scanning, independent control of the resonance frequency and the pump CW laser might be more convenient. However, to the best of our knowledge, a soliton comb from a DFB laser without the use of SIL has not been demonstrated. In this paper, we demonstrate the generation of a single soliton comb from a microresonator pumped with a DFB laser without implementing SIL. Since SIL is not used, the system is suitable for large-range and fast comb mode scanning. On the other hand, the thermo-optic effect induced in the microresonator when a chaotic comb transitions to a soliton comb has to be overcome.
There are several methods to overcome the thermo-optic effect, such as fast tuning of the pump frequency by a carrier-suppressed single-sideband modulator (CS-SSB modulator) \cite{Papp_OL_SSB18, stone2018thermal}, fast tuning of the resonance frequency by pump power modulation \cite{Kippenberg_SiN16}, and the utilization of an auxiliary CW laser \cite{niu2018repetition,zhou2019soliton,zhang2019sub, lu2019deterministic}. In our demonstration, we simply modulate the injection current of the DFB laser, which is enough to access a stable single soliton comb because of the fast scan speed of the DFB laser (\textgreater 10 GHz/$\mu$s). Compared with conventional soliton comb systems that use an ECDL as the pump CW laser together with additional optical modulators, our method not only simplifies the system but is also readily applicable to large-range, fast comb mode scanning. \section{Result} \subsection{Experimental Setup} Figure 1(a) shows the experimental setup. A DFB laser with an oscillation wavelength of around 1548 nm is used as the pump CW laser. The output from the DFB laser is amplified by an Er-doped fiber amplifier (EDFA) with one forward and two backward pump laser diodes. The maximum output from the EDFA is approximately 500 mW. The output from the EDFA is coupled into a high-Q SiN microresonator through a lensed fiber. The free spectral range (FSR) and Q of the microresonator are approximately 540 GHz and 10$^6$ (Fig. 1(b)), respectively. The on-chip power is about 200 mW. The output from the microresonator is split into two by a 1 $\times$ 2 optical splitter. One output is used to measure the optical spectrum of the generated soliton combs with an optical spectrum analyzer (OSA). The other is directed to a notch filter (NF), which rejects the residual pump light from the DFB laser so that the time evolution of the generated comb power can be measured. The output from the notch filter is monitored by an oscilloscope (OSC).
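The relation between injection-current steps and frequency excursion can be sketched numerically. In the sketch below, only the \textgreater 10 GHz/$\mu$s scan speed is taken from the text; the current-tuning coefficient is a hypothetical, order-of-magnitude value assumed for illustration:

```python
# Back-of-the-envelope sketch of current-tuning of a DFB laser.
# TUNING_GHZ_PER_MA is a typical order-of-magnitude value assumed for
# illustration; it is NOT a value reported in this work.

TUNING_GHZ_PER_MA = -0.5     # hypothetical DFB current-tuning coefficient
SCAN_SPEED_GHZ_PER_US = 10.0  # scan speed quoted in the text

def frequency_excursion(delta_i_ma):
    """Frequency shift (GHz) for a given injection-current step (mA)."""
    return TUNING_GHZ_PER_MA * delta_i_ma

def scan_time_us(excursion_ghz):
    """Time (us) to traverse an excursion at the quoted scan speed."""
    return abs(excursion_ghz) / SCAN_SPEED_GHZ_PER_US

# Under these assumptions, a 100 mA step (e.g., 150 mA -> 250 mA, as in
# Fig. 1(c)) shifts the frequency by about 50 GHz in about 5 us.
```

Such a step is far larger and faster than a PZT-actuated ECDL can provide, which is the point exploited in the following procedure.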
The oscillation frequency of the DFB laser is controlled by changing the injection current, which is set by an arbitrary waveform generator (AWG). The modulation speed can be as high as 10 GHz/$\mu$s, as shown in Fig. 1(c). When the injection current of the DFB laser is increased, the output power of the DFB laser increases, which can affect the resonance frequency of the microresonator through the thermo-optic effect. However, in our experiment, the power coupled to the waveguide remains almost constant because the EDFA is operated in the saturation regime. \begin{figure}[h!] \centering\includegraphics[width=13.3cm]{newcon_Fig1.pdf}\label{setup} \caption{(a) Schematic of the experimental setup to generate a soliton comb. (b) The red curve shows the measured resonance, and the blue curve shows a Lorentzian fit. (c) Frequency shift of the DFB laser when a step function changes the injection current from 150 mA to 250 mA (red), 350 mA (green), and 450 mA (blue).} \end{figure} \subsection{Results of DFB laser direct modulation setup} To access a stable soliton comb, the frequency of the DFB laser has to be controlled to follow the resonance frequency of the microresonator, since the speed of the thermal dynamics of the microresonator is comparable to the frequency scan speed of the DFB laser. A procedure to find an appropriate injection-current signal for generating a single soliton comb is shown in Figs. 2(a) and 2(b), which show the control signals for the injection current and the resulting comb powers, respectively. In the first step, the amount of the increase in the injection current is determined. Increasing the injection current decreases the oscillation frequency of the DFB laser. When the increase of the injection current is not large enough (Fig.
2(a) - i), a chaotic comb is generated, gradually relaxing to a primary comb due to the decrease of the resonance frequency caused by the increase of the intracavity power (Fig. 2(b) - i). When the increase of the injection current is set correctly, a soliton step is observed (Fig. 2(b) - ii). \begin{figure}[h!] \centering\includegraphics[width=13.3cm]{newcon_Fig2.pdf}\label{solitonimage}\caption{(a) Illustrations of the control signals of the DFB laser. The insets show the relationship between the wavelength of the pump CW laser and the resonance wavelength of the microresonator. The red arrow indicates the controlled parameter in each procedure. (b) Illustrations of the transmitted comb power without the residual pump laser. The purple and light blue areas show the time when the chaotic comb and the (multi-)soliton state exist, respectively.} \end{figure} The increase in the injection current should not be too large; it should be set just above the value at which the soliton step is observed. Although a soliton step is observed in Fig. 2(b) - ii, the soliton cannot be kept long (\textless 1 $\mu$s) because of the thermo-optic effect. The thermo-optic effect shifts the resonance frequency of the microresonator to higher frequencies, increasing the detuning between the frequency of the pump CW laser and the resonance frequency. In the second step, the time at which to decrease the injection current is determined in order to extend the lifetime of the soliton. In this step, the control signal to the injection current is instantaneously turned off (Fig. 2(a) - iii). Note that the frequency of the DFB laser begins to increase a short time after the control signal is turned off because of the delayed response of the DFB laser. When the frequency of the DFB laser begins to increase right after the transition from the chaotic to the soliton comb, the lifetime of the soliton comb is extended because the change in the detuning is mitigated, and a single soliton state follows (Fig. 2(b) - iii).
In the third step, the offset of the control signal, which fixes the final frequency of the DFB laser, is determined (Fig. 2(a) - iv) to keep the single soliton comb for a long time (Fig. 2(b) - iv). The control signal used in this report, determined according to the above procedure, is shown in Fig. 3(a). Note that we gradually reduce the control signal slightly between 0.1 ms and 0.4 ms to compensate for the shift of the resonance frequency caused by the slow thermal effect in the microresonator, as shown in the inset of Fig. 3(a). Figure 3(b) shows the time evolution of the frequency of the DFB laser (blue), measured by an imbalanced Mach-Zehnder interferometer, and the comb power (red). From 0 to 1 $\mu$s, the decrease in the frequency of the DFB laser decreases the detuning. At 1.2 $\mu$s, the detuning becomes red (the frequency of the DFB laser is smaller than the resonance frequency), and the chaotic comb transitions to the soliton state. During the soliton state, the frequency of the DFB laser begins to increase, causing the comb power to decrease (red curve in Fig. 3(b)), since the comb power is proportional to the detuning. When the detuning is further decreased, a single soliton comb is obtained. Finally, after applying the slow control signal shown in the inset of Fig. 3(a), the detuning is fixed, and the single soliton comb is maintained for a few hours without any feedback loops. The optical spectrum of the soliton comb is shown in Fig. 3(c). The comb spacing corresponds to the FSR of the microresonator, and the 3 dB bandwidth is 55 nm. \begin{figure}[h!] \centering\includegraphics[width=13.3cm]{newcon_Fig3.pdf}\label{combpower} \caption{(a) The control signal for the injection current of the DFB laser. The inset shows the control signal on a short time span. (b) Frequency change of the DFB laser (blue) with the time evolution of the comb power (red).
The purple and light blue areas show the time when the chaotic comb and the (multi-)soliton state exist, respectively. (c) Optical spectrum of the generated soliton comb.} \end{figure} \section{Conclusion} In conclusion, we demonstrated the generation of a soliton comb pumped by a DFB laser, in which the injection current of the DFB laser is appropriately controlled. The thermo-optic effect in the microresonator, induced especially when the chaotic comb transitions to the soliton comb, is overcome by the fast scanning of the DFB laser. The demonstrated method does not require any additional optical components such as a CS-SSB modulator, an AOM, or an auxiliary cooling laser, enabling a simpler comb system than those with bulky ECDLs. More importantly, since DFB lasers have a larger and faster mode-hop-free scan range than ECDLs, soliton combs from DFB lasers can readily be applied to high-resolution spectroscopy \cite{kuse2020continuous} and LiDAR \cite{kuse2019frequency} with large-range, fast scanning. The linewidth of DFB lasers is two orders of magnitude worse than that of ECDLs, which may be a significant drawback of soliton combs generated from DFB lasers in terms of phase noise. However, in Ref. \cite{nishimoto2020investigation}, we showed that the phase noise of the comb modes is not equal to that of the pump CW laser but is limited by the thermo-refractive noise of the microresonator when an ECDL is used as the pump CW laser. Furthermore, the relative phase noise between the comb modes, which is important for mmW/THz wireless communications with soliton combs, differs little between soliton combs generated from a DFB laser and from an ECDL \cite{nishimoto2020investigation}. Therefore, we believe that soliton combs generated from DFB lasers are very attractive because of these advantages: a simple system and large-range, fast scanning.
\section*{Funding} This work was financially supported by JST PRESTO (JPMJPR1905), Japan Society for the Promotion of Science (19H00871), Cabinet Office, Government of Japan (Subsidy for Reg. Univ. and Reg. Ind. Creation), Research Foundation for Opto-Science and Technology, and Nakatani Foundation for Advancement of Measuring Technologies in Biomedical Engineering. \\ \section*{Disclosures} The authors declare no conflicts of interest.
\section{Introduction} \label{sec:intro} \IEEEPARstart{I}{n} cognitive radios, detecting white spaces and determining channel occupancy in a dynamic radio environment is essential for opportunistic access to radio resources. The simplest and most widely used method of assessing the availability of radio resources is energy detection, whose efficiency in terms of probability of detection and probability of false alarm has been analyzed extensively in the literature (see \cite{Urkowitz1967, Sharma2015, Ali2017}). However, these studies consider a static primary user (PU) signal, which is either active or inactive within the entire detection period. In realistic scenarios, the PU signal can dynamically switch between active and inactive states while detection is in progress. The detection of a discontinuous PU is considered in \cite{Penna2011, Saad2016, MacDonald2017, Duzenli2019} by redesigning the detection algorithms; in these works, however, signal, noise, and PU traffic parameters are assumed to be known \textit{a priori}, while the methods to obtain/estimate these necessary parameters are not described. Many studies show that any uncertainty in estimating the parameters of the received signal seriously limits the ability of the detector to correctly assign energy to a particular activity state \cite{Mariani2011}. Consequently, the correct operation of the energy detector in a dynamic radio channel requires accurately estimating (a) PU traffic parameters, such as the average and current duration of PU activity and its channel occupancy ratio, and (b) signal and noise parameters, such as the noise variance and signal-to-noise ratio (SNR). Importantly, the estimation of all these parameters, and the ensuing detection performance, starts with the accurate splitting of the received energy samples into signal and noise samples.
In this letter, we present a practical algorithm for energy samples separation---marking of signal and noise samples in a received time frame---of a dynamic PU signal. The algorithm applies rank order filtering, earlier studied for signal spectrum analysis only \cite{Taher2014, Agarwal2017, Nikonowicz2019}, to temporal signal analysis by redesigning the signal processing and samples marking processes. We evaluate the algorithm in terms of marking signal samples and complete samples separation with respect to SNR and different PU activity factors, and also examine the execution time of the separation process. In addition, its performance is compared with well-known reference methods from the literature \cite{Toma2020, Iwata2019, Chien2019}, followed by an appraisal of its utility for channel occupancy estimation. To assess the accuracy of these operations, a semi-experimental simulation setup of packet-based PU transmission is designed, where the background distortion comes from radio frequency (RF) noise traces captured with a National Instruments software-defined radio (SDR), the USRP-2900. The proposed solution, with its appealing performance, provides a convenient basis, although not the subject of this article, for parameter estimation in the subsequent detection of intermittent PU signals. The rest of the article is organized as follows. Section~\ref{sec:system} gives the motivation for samples separation, and Section~\ref{sec:algorithm} describes the proposed separation algorithm. Section~\ref{sec:simulation} explains the simulation methodology and shows numerical results. Finally, Section~\ref{sec:conclusion} gives the concluding remarks. \section{Motivation for Samples Separation} \label{sec:system} To motivate the need for samples separation, we restate the energy detection (ED) process for dynamic PU signals.
Consider the ED-based sampling of a dynamic PU modeled as an alternating renewal process, similar to \cite{Penna2011, Saad2016, MacDonald2017}, and shown in Fig.~\ref{fig:transmission}. At any time instant, the PU is either in the ON (active) or OFF (idle) state; state transitions occur at random time instances, and the state holding times are exponentially distributed with means $\tau$ and $\mu$, respectively. The energy detector collects signal samples $x_{n}, n=1\dots N$ in a detection interval of duration $T$, which is independent of the PU ON/OFF process. As the signal is sampled at a specific frequency $f_s$, the total number of collected samples is $N = f_{s}T$, and $N_{0}$ and $N_{1}$ represent the numbers of samples corresponding to hypotheses $H_{0}$ and $H_{1}$ (subject to detection). As $N\rightarrow \infty$, the normalized occupancy/absence rate ${N_{i}}/{N}$ approaches its average value $p_{i}, i\in \{0,1\}$. \begin{figure}[!t] \centering \includegraphics[trim=0 0 0 15,clip,width=0.90\linewidth]{transmission.pdf} \caption{Illustration of a dynamic PU signal activity along with the energy sampling process.} \label{fig:transmission} \end{figure} A predominant model to characterize energy detection performance for such a dynamic PU scenario, in terms of the test statistic ($\beta$), probability of detection ($P_{d}$), and probability of false alarm ($P_{f}$), is as follows \cite{Penna2011} \begin{equation} \beta{}=p_0\frac{1}{N_0}\sum_{n=1}^{N_0}{\left\vert{}w_{n}\right\vert{}}^2+p_1\frac{1}{N_1}\sum_{n=1}^{N_1}{\left\vert{}x_{n}+w_{n}\right\vert{}}^2 , \label{eq:beta} \end{equation} \begin{equation} P_d=Q\left[\sqrt{N}\frac{\frac{\gamma{}}{{\sigma{}}_w^2}-\left(1+p_1\rho{}\right)}{\sqrt{1+p_1\left({\rho{}}^2+2\rho{}\right)}}\right] , \label{eq:detprob} \end{equation} \begin{equation} P_f=Q\left[\sqrt{N}\left(\frac{\gamma{}}{{\sigma{}}_w^2}-1\right)\right] , \label{eq:falseprob} \end{equation} where $\gamma$ is the decision threshold, $\sigma_{w}^2$ is the noise
variance, and $\rho$ is the SNR. To implement this detection model, or any other involving PU transition probabilities (e.g., \cite{MacDonald2017}), several necessary parameters, such as the noise variance, SNR, and instantaneous and mean occupancy/absence rates, are assumed to be known. In practice, the very first step in extracting these parameters is samples separation. Fig.~\ref{fig:system} summarizes the different stages of the energy detection process while indicating the source and demand of the necessary parameters at each stage. In this context, our objective is to develop a samples separation algorithm that is effective in extracting the necessary (detection-related) parameters of a bursty PU signal. We assume a specific (exponential) distribution of the PU idle/active states; however, the design of the separation algorithm is generic and blind to the PU activity pattern. \section{Algorithm Design and Description} \label{sec:algorithm} In this section, we describe the design of the proposed algorithm, which finds its motivation in rank order filtering (ROF). ROF, a commonly used image processing technique, sorts the input values in ascending order and outputs the value found at a given rank. The selected input value becomes the output, without any calculation performed on the input values. The two special operations of ROF are \textit{erosion}---equivalent to the lowest rank, as it returns the minimum of the input set---and \textit{dilation}---equivalent to the highest rank, as it returns the maximum. Erosion and dilation, besides being useful in image processing, can also be effectively used in impulse noise reduction and noise power estimation, as demonstrated in \cite{Taher2014, Agarwal2017, Nikonowicz2019}. These studies iteratively increase the size of the filters used on the power spectrum samples.
Through filtration, the peak values of the spectrum are reduced until the difference between the noise floors obtained in the $i$-th and $(i+1)$-th iterations falls below a predetermined threshold. Although effective, the algorithms in \cite{Taher2014, Agarwal2017, Nikonowicz2019} are dedicated only to estimating spectrum parameters and are burdened with the following disadvantages that limit their usefulness for time-domain analysis of dynamic PU behavior: \begin{itemize} \item The selection of an appropriate threshold for the noise floor difference is problematic due to the lack of an unambiguous interpretation. \item The process carried out on spectrum samples strongly benefits from the processing gain provided by the fast Fourier transform (FFT) \cite{Lyons2004}. Although the transition between the frequency and time domains can be performed quite efficiently, for simple energy detectors it is not imperative to transform the signal into the frequency domain. \end{itemize} \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{system.pdf} \caption{Block diagram of an energy detection process for discontinuous PU signals: source and demand of necessary parameters.} \label{fig:system} \end{figure} With these considerations, we design a new separation technique, by reconstructing the ROF-based solutions, for accurate extraction of parameters from intermittent PU transmission, while keeping the separation process as simple as possible to enable low-complexity detection. The main steps of the proposed algorithm are as follows: \textbf{Initialization---energy vector}: The separation algorithm starts with the conversion of an $N$-sample signal time frame, i.e., $x=[x_{1},\dots x_{N}]$, into a vector of energy samples, $y_{n}=|x_{n}|^{2}$. Afterward, a moving average (MAV) of a small size $m_{\mathrm{init}} \ll N$ is used to initially reduce the noise variance.
Because the recursive formulation of the $m$-sized MAV, $\bar{y}_{n}=\bar{y}_{n-1}+\frac{1}{m}(y_{n}-y_{n-m})$, requires only one addition, one subtraction, and one division per sample, the formula is independent of the number of samples $N$, and the runtime complexity per sample is constant, i.e., $\mathcal{O}(1)$. Thus, the complexity of the pre-processing preceding filtering is kept to a minimum. \textbf{LOOP process---ROF}: In this step, the initially averaged energy vector $\bar{y}$ is iteratively filtered by consecutive erosion and dilation operations. Consistently increasing the size $k$ of the $movmin$ and $movmax$ filters allows finding the size $m_{\mathrm{sec}}$ for which the energy decrease relative to the energy after the previous filtration, $e'_{k}=\frac{e_{k-1}-e_{k}}{e_{k-1}}$, is the highest. Searching for the maximum removes the need to set a threshold, as was typical in earlier works. The resulting filter size $m_{\mathrm{sec}}$ is interpreted as the probable longest continuous signal duration in the analyzed frame and is used in the second moving average, $\bar{\bar{y}}$. \textbf{Samples marking process---derivative evaluation}: The identification of signal and noise samples is based on the evaluation of the derivative of the double-averaged energy vector, $y'_{n}=\bar{\bar{y}}_{n}-\bar{\bar{y}}_{n-1}$. Intervals in which the derivative is positive and wider than the assumed minimal signal width threshold $\lambda_{\mathrm{msw}}$ indicate signal samples. The threshold $\lambda_{\mathrm{msw}}$ can be interpreted as the resolution of the algorithm, i.e., the minimum detectable signal duration. Due to the influence of the noise variance, maintaining the required interval continuity is problematic: positive intervals potentially indicating the signal can be split by single samples with non-positive values.
Therefore, before assessing whether the width of a positive interval meets the $\lambda_{\mathrm{msw}}$ condition, adjacent intervals separated by a single non-positive sample are combined. This simple step significantly improves the accuracy of the algorithm for weak signals. \begin{algorithm}[H] \small \caption{Pseudo-code of the samples separation algorithm} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE $x$, $m_{\textrm{init}}$, $\lambda_{\textrm{msw}}$ \ENSURE noise, signal \\ \textit{Initialization} : \STATE{$y \!=\! (\textrm{abs}(x))^2 \!$, $\bar{y}\! =\! \textrm{mav}(y, m_{\textrm{init}})$, $e\! =\! \textrm{sum}(\bar{y})/\textrm{length}(\bar{y})$} \\ \textit{Parallelized LOOP process:} \FOR{$k = 2 \textrm{~to length}(\bar{y})/2$} \STATE{$erosion = \textrm{movmin}(\bar{y},k)$} \STATE{$dilation = \textrm{movmax}(erosion,k)$} \STATE{$e_{k} = \textrm{sum}(dilation)/\textrm{length}(dilation)$} \STATE{$e'_{k} = (e_{k-1}-e_{k})/e_{k-1}$} \ENDFOR \STATE{$[value, index] = \textrm{max}(e'(2:end))$, $\bar{\bar{y}} = \textrm{mav}(\bar{y},index)$} \\ \textit{Samples marking process:} \FOR {$j = 1 \textrm{~to length}(\bar{\bar{y}})-1$} \STATE{$y'_{j}$ = $\bar{\bar{y}}_{j+1}-\bar{\bar{y}}_{j}$} \ENDFOR \FOR{$l = 3 \textrm{~to length}(y')$} \IF{$y'_{l}>0$} \STATE{$y'_{l} = 1$} \IF {$y'_{l-2}>0$} \STATE{$y'_{l-1} = 1$} \ENDIF \ELSE \STATE{$y'_{l}=0$} \ENDIF \ENDFOR \FOR {$i = 2 \textrm{~to length}(y')$} \IF {$y'_{i}$} \STATE{$y'_{i}=y'_{i-1}+1$} \ENDIF \ENDFOR \FOR{$n = 1 \textrm{\ to length}(y')$} \IF{$y'_{n}>\lambda_{\textrm{msw}}$} \STATE{$mark(n-\lambda_{\textrm{msw}}:n) = 1$} \ENDIF \ENDFOR \STATE{signal = $y \cdot mark$, $\textrm{noise} = y - \textrm{signal}$} \RETURN {noise, signal} \end{algorithmic} \end{algorithm} The above-presented solution results in a simple yet effective separation technique for the detection of packet-based PU signals.
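A minimal Python sketch of the separation procedure described in Algorithm 1 is given below. This is a simplified re-implementation for illustration, not the authors' code: the moving average is written non-recursively with edge-replicated padding, the rank-order filters are naive (non-parallelized) loops, and the default parameter values `m_init=4` and `msw=8` are assumptions.

```python
import numpy as np

def mav(y, m):
    """m-sample moving average (centered, non-recursive for clarity)."""
    if m <= 1:
        return np.asarray(y, dtype=float).copy()
    padded = np.pad(np.asarray(y, dtype=float), m // 2, mode="edge")
    return np.convolve(padded, np.ones(m) / m, mode="valid")[: len(y)]

def rof(y, k, op):
    """Rank-order filter: op=np.min gives erosion, op=np.max dilation."""
    padded = np.pad(y, k // 2, mode="edge")
    return np.array([op(padded[i:i + k]) for i in range(len(y))])

def separate(x, m_init=4, msw=8, k_max=None):
    """Sketch of Algorithm 1: returns a boolean mark (True = signal)."""
    y = np.abs(np.asarray(x)) ** 2            # energy vector
    ybar = mav(y, m_init)                     # initial averaging
    if k_max is None:
        k_max = len(ybar) // 2
    # LOOP process: find the filter size with the largest relative
    # energy drop between consecutive opening (erosion+dilation) passes
    e_prev, best_drop, best_k = ybar.mean(), -np.inf, 2
    for k in range(2, k_max):
        opened = rof(rof(ybar, k, np.min), k, np.max)
        e_k = opened.mean()
        drop = (e_prev - e_k) / e_prev
        if drop > best_drop:
            best_drop, best_k = drop, k
        e_prev = e_k
    ybb = mav(ybar, best_k)                   # second moving average
    # Samples marking: positive derivative runs wider than msw
    pos = np.diff(ybb) > 0
    for i in range(1, len(pos) - 1):          # bridge single-sample gaps
        if pos[i - 1] and pos[i + 1]:
            pos[i] = True
    mark = np.zeros(len(y), dtype=bool)
    run = 0
    for i, p in enumerate(pos):
        run = run + 1 if p else 0
        if run > msw:
            mark[max(0, i - msw):i + 1] = True
    return mark
```

In a production implementation, the naive `rof` loop would be replaced by an O(N) sliding min/max (e.g., a monotonic deque), and the loop over `k` can be parallelized as the pseudo-code notes.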
The pseudo-code of the proposed separation scheme is given in Algorithm 1 with the notations: $m_{\textrm{init}}$-- initial size of the moving average, $\lambda_{\textrm{msw}}$-- minimal signal width, $x$-- received signal, $y$-- processed signal, $e$-- energy, $e'$-- energy decrease, and $y'$-- differential. \section{Simulation Setup and Results} \label{sec:simulation} To assess the performance of the proposed algorithm, we have developed a simulation setup that supports random ON/OFF traffic behavior of the PU. For background noise, the simulator uses radio-frequency (RF) noise I/Q traces sampled in a bandwidth of 5 MHz centered around 868 MHz, i.e., the first channel of the IEEE 802.15.4 standard. The RF noise traces were collected in an open university area using a National Instruments USRP-2900. We recorded an average noise power of $-110.7$ dBm, which is normalized to 1 mW in the simulations. \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{timeframe_differntial.pdf} \caption{An instance of an analyzed time frame: (top) the time frame with RF noise and PU pulse signals of randomly distributed durations; (bottom) splitting of signal and noise samples based on the processed sign of the differential.} \label{fig:timeframe} \end{figure} In the simulations, the subjects of analysis are non-overlapping time frames, where each frame contains 1024 noise samples to which a PU signal is added as a rectangular pulse, as shown in Fig.~\ref{fig:timeframe}(top). Both the signal pulse durations and the subsequent noise-only durations are exponentially distributed, with mean values swept from 10\% to 30\% and from 90\% to 70\% of the time frame, respectively. The sample splitting is based on the positive or non-positive differential processed in accordance with Algorithm~1, as depicted in Fig.~\ref{fig:timeframe}(bottom).
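A time frame of this kind can be synthesized as in the sketch below. It substitutes white complex Gaussian noise for the recorded RF traces, and all parameter defaults (frame length, SNR, mean ON/OFF fractions) are illustrative assumptions rather than values fixed by the setup above.

```python
import numpy as np

def make_frame(n=1024, mean_on=0.2, mean_off=0.8, snr_db=5.0, rng=None):
    """One analysis frame: unit-power complex Gaussian noise plus rectangular
    PU pulses whose ON and OFF durations are exponentially distributed.
    Returns the frame and the ground-truth 0/1 occupancy pattern."""
    if rng is None:
        rng = np.random.default_rng()
    noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    amp = np.sqrt(10.0 ** (snr_db / 10.0))   # pulse amplitude for target SNR
    pattern = np.zeros(n, dtype=int)
    t = int(rng.exponential(mean_off * n))   # frame starts inside an OFF period
    while t < n:
        on = max(1, int(rng.exponential(mean_on * n)))
        pattern[t:t + on] = 1
        t += on + max(1, int(rng.exponential(mean_off * n)))
    return noise + amp * pattern, pattern
```

The returned `pattern` plays the role of the known reference against which the marking accuracy is scored.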
As the basic reference method, we study the estimation of primary-user activity based on the idle/busy periods determined by short spectrum sensing decisions \cite{Umebayashi2014, Iwata2019, Toma2020}. This method, however, requires noise-floor information, which we obtain using the extended Forward Consecutive Mean Excision (FCME) algorithm with the Welch FFT \cite{Umebayashi2014, Iwata2019}. For the simulation of the FCME-based algorithm, a 64-sample periodogram is adopted along with an energy detection window with a length of 5\% of the analyzed time frame and a false alarm probability of 1\%. As the second reference method (operating in the time domain), we adopt linear discriminant analysis (LDA). LDA is used in statistics and pattern recognition as a basic mathematical tool to separate two classes of objects, applied here as the Fisher discriminant function \cite{Chien2019}. The performance of the algorithms is measured by the average percentage of samples correctly assigned to the groups of signal-with-noise or noise-only samples, compared to a known pattern generated independently for each time frame. Fig.~\ref{fig:signalmark} shows the marking/separation accuracy for the signal group, indicating a decrease in the assignment accuracy with decreasing SNR. The comparison indicates that the proposed ROF-based samples separation (RSS) performs close to the FCME-based solution and significantly better than LDA (i.e., the reference time-domain method). Moreover, the accuracy of RSS remains almost independent of the mean occupancy of the time frame.
\begin{figure}[!t] \centering \includegraphics[width=0.90\linewidth]{signal_mark_3.pdf} \caption{Signal samples marking efficiency as an average ratio of correctly recognized signal samples.} \label{fig:signalmark} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.90\linewidth]{total_mark_3.pdf} \caption{Total separation efficiency as an average ratio of correctly recognized signal and noise samples.} \label{fig:totalmark} \end{figure} The total marking accuracy, measured as an average ratio of correctly marked signal and noise samples, is shown in Fig.~\ref{fig:totalmark}. Note that the ability to correctly recognize the signal samples directly affects the overall separation efficiency. With decreasing SNR, the efficiency of RSS drops by the percentage of signal presence in the frame, which in extreme cases is entirely recognized as noise. Thus, the inaccuracy of RSS is mainly limited to false-negative errors, i.e., hard-to-recognize signal samples are assigned to the noise group, while noise is rarely classified as signal, keeping false-positive errors low. At high SNR, the accuracy does not drop by more than the assumed 5\% resolution threshold. \begin{figure}[!t] \centering \includegraphics[width=0.90\linewidth]{time_complexity_1.pdf} \caption{Time complexity as an average execution time of the separation process for a given number of samples.} \label{fig:complexity} \end{figure} Fig.~\ref{fig:complexity} compares the average execution time of the proposed RSS algorithm with that of the reference methods. The results were obtained by averaging over a thousand single-threaded function calls performed on a quad-core 3.07 GHz Intel Xeon W3550. The timing analysis shows that the proposed RSS method exhibits significantly lower complexity than the reference solutions, especially for small numbers of samples.
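Part of the favourable scaling comes from the fact that each erosion/dilation pass can itself run in linear time. A standard monotonic-deque sliding minimum computes $movmin$ in $\mathcal{O}(n)$ total time regardless of the window size $k$ (the sliding maximum is symmetric); this is an illustrative implementation, not necessarily the one timed in the experiments:

```python
from collections import deque

def sliding_min(y, k):
    """Minimum over every window of size k in O(n) total time."""
    q, out = deque(), []            # q holds indices of increasing values
    for i, v in enumerate(y):
        while q and y[q[-1]] >= v:  # drop entries that can never be a minimum
            q.pop()
        q.append(i)
        if q[0] <= i - k:           # drop the index that left the window
            q.popleft()
        if i >= k - 1:
            out.append(y[q[0]])     # front of the deque is the window minimum
    return out
```

Each index is pushed and popped at most once, which gives the amortized constant cost per sample.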
These time differences matter because sample separation is only a supporting process for the effective estimation of channel parameters and primary-user detection. \begin{figure}[!t] \centering \includegraphics[width=0.90\linewidth]{occupancy_3.pdf} \caption{Accuracy of the mean channel occupancy estimation.} \label{fig:occupancy} \end{figure} In the exemplary application, the marking of signal-with-noise samples can be used directly to estimate the average occupancy of the PU signal under test. As shown in Fig.~\ref{fig:occupancy}, for a low PU occupancy of 10--20\% and SNR above 0 dB, the proposed algorithm maintains a high accuracy of 1--2\%. With increasing occupancy and weak signals, the algorithm noticeably loses efficiency. Therefore, for a secondary user subject to strong PU signals with a moderate activity factor, the proposed algorithm proves to be a well-matched, easy-to-implement, and accurate separation technique. \section{Conclusion} \label{sec:conclusion} Motivated by the need to estimate several vital parameters for the energy detection of a dynamic PU, we developed a new algorithm for the accurate separation of signal and noise samples in the received-signal time frame. The algorithm has its roots in rank-order-filtering-based spectral analysis, which we recast for low-complexity time-domain analysis of bursty signals. We evaluated the algorithm in terms of its accuracy in recognizing signal samples, complete sample separation, time complexity, and utility in channel occupancy estimation. The algorithm separates samples with an accuracy of 87\% even for weak signals with SNR close to 0 dB. For strong and narrow pulses, it provides up to 97\% correct sample recognition and remains competitive with solutions of twice its complexity.
The achieved accuracy, together with a simple design, makes the proposed solution a convenient basis for obtaining information required for effective energy detection. \section*{Acknowledgment} This work was supported by the Polish Ministry of Science and Higher Education within the status activity task 08/83/SBAD/4737 in 2019, and by the Swedish Knowledge Foundation under grant Research Profile NIIT. \bibliographystyle{IEEEtran}
\section{Introduction} In this paper we establish local Lipschitz continuity of harmonic maps from non-smooth metric measure spaces with synthetic Ricci curvature lower bounds in the sense of the $\RCD$ theory with values into ${\sf CAT}(0)$ metric spaces with non-positive sectional curvature. This answers a question raised several times in the recent literature, see \cite{GigliPasqualettoSoultanis20,DiMarinoGiglietal21,GigliTulyenev20,GigliTulyenev21,Guo21}. Building on top of this regularity result, we prove a Bochner-Eells-Sampson type inequality with a Hessian term, which is expected to be fundamental for future applications. \medskip A smooth map $u:M^n\to N^k$ between Riemannian manifolds is called harmonic when the \emph{tension field} \begin{equation*} \Delta u:=\mathrm{tr}\nabla\left(\mathop{}\!\mathrm{d} u\right) \end{equation*} vanishes identically, as a section of the pull-back bundle $u^*TN$. There are several examples of harmonic maps: harmonic functions when the target space is $\mathbb{R}$, geodesics when the source space is $\mathbb{R}$, isometries, conformal maps, holomorphic maps between K\"ahler manifolds, and inclusions of volume minimizing submanifolds. Their role is ubiquitous in Geometric Analysis.\\ The basic question is then the existence of harmonic maps, under suitable assumptions. The problem was approached from a parabolic perspective by Eells-Sampson \cite{EellsSampson64} and subsequently by Hamilton \cite{Hamilton}, based on the long-time behaviour of the non-linear heat equation \begin{equation*} \frac{\mathop{}\!\mathrm{d} }{\mathop{}\!\mathrm{d} t}u(x,t)=\Delta u(x,t)\, .
\end{equation*} Later on, the variational perspective was put forward by Hildebrandt-Kaul-Widman \cite{HildebrandtKaul1,HildebrandtKaul2} (see also the subsequent work of Schoen \cite{Schoen}), building on top of the interpretation of harmonic maps as critical points of the energy functional \begin{equation*} E(u):=\int_M\abs{\mathop{}\!\mathrm{d} u}^2\mathop{}\!\mathrm{d}\mathrm{vol}_M\, . \end{equation*} Very much intertwined with the question of existence, there is the issue of regularity.\\ If we denote by $\mathrm{Ric}_M$ and $\mathrm{R}^N$ the Ricci curvature tensor of $M$ and the Riemann curvature tensor of $N$, respectively, and by $\{e_{\alpha}\}_{1\le \alpha\le n}$ an orthonormal basis of $TM$, the Bochner-Eells-Sampson formula for harmonic maps \begin{equation}\label{eq:BES} \Delta\frac{1}{2}\abs{\mathop{}\!\mathrm{d} u}^2=\abs{\nabla \mathop{}\!\mathrm{d} u}^2+\mathrm{Ric}_M(\nabla u,\nabla u)-\sum_{\alpha,\beta=1,\dots,n}\langle\mathrm{R}^N\left(u_*e_{\alpha},u_{*}e_{\beta}\right)u_*e_{\alpha},u_*e_{\beta}\rangle \end{equation} hints towards a prominent role of lower bounds on the Ricci curvature of $M$ and upper bounds on the sectional curvature of $N$ in developing a regularity theory. Indeed, if we assume that $\mathrm{Ric}_M\ge K$ and $\mathrm{R}^N\le 0$, then from \eqref{eq:BES} we obtain that \begin{equation}\label{eq:Bochnerineq} \Delta\frac{1}{2}\abs{\mathop{}\!\mathrm{d} u}^2\ge K\abs{\mathop{}\!\mathrm{d} u}^2\, . \end{equation} A priori, local $L^{\infty}$-estimates for $\abs{\mathop{}\!\mathrm{d} u}$ can be derived from \eqref{eq:Bochnerineq} via the classical De Giorgi-Moser iteration. Smoothness then follows from elliptic regularity, see \cite{EellsSampson64,SchoenUhlenbeck}. We refer also to the more recent work of Sturm \cite{Sturm05} for a different, probabilistic interpretation of the curvature conditions on source and target spaces in the theory of harmonic maps, deeply related to the developments of the present note.
\medskip In the last thirty years, starting from the work of Gromov-Schoen \cite{GromovSchoen}, there has been growing interest in developing a theory of harmonic maps between spaces more general than Riemannian manifolds and possibly non smooth. This has required a completely new set of ideas and techniques, as neither isometric embeddings into Euclidean spaces, nor local charts are in general available.\\ The analysis in \cite{GromovSchoen} was dedicated to maps from smooth source spaces with values into locally finite Riemannian simplicial complexes, with striking applications in Geometric Group Theory. Korevaar-Schoen \cite{KorevaarSchoen,KorevaarSchoen2}, and independently Jost \cite{Jost94,Jost95}, later developed a general theory of Sobolev and harmonic maps with values into metric spaces with non-positive curvature in the sense of Alexandrov. From the variational perspective, the curvature assumption on the target guarantees convexity of the energy functional.\\ In \cite{KorevaarSchoen,KorevaarSchoen2} source spaces are smooth manifolds and the authors obtain local Lipschitz continuity of harmonic maps. In \cite{Jost94,Jost95} source spaces are locally compact metric spaces with a Dirichlet form and the author obtains local H\"older continuity, assuming a uniform scale invariant Poincar\'e inequality. \smallskip For the sake of the applications, Lipschitz continuity and suitable versions of the Bochner-Eells-Sampson inequality \eqref{eq:BES} are two cornerstones. An example from \cite{Koskelaetal} shows that, in general, doubling and Poincar\'e assumptions on the source space do not guarantee Lipschitz regularity of harmonic functions, even in the scalar valued case. Meanwhile, the developments of the theory of Alexandrov spaces motivated the conjecture that harmonic maps from metric spaces with sectional curvature bounded from below with values into metric spaces with non-positive curvature should be locally Lipschitz. 
The conjecture was formulated by Lin \cite{Lin97} and, in a more open form, by Jost \cite{Jost98}, and it has been recently settled by Zhang-Zhu \cite{ZhangZhu18}.\\ We mention also \cite{EellsFuglede,Gregori,KuwaeShioya,Fuglede,Chen95,DaskaMese08,DaskaMese10} for previous, related developments of the theory of harmonic maps between metric spaces, without the aim of being complete in this list. \medskip Nowadays, there is a well-established theory of metric measure spaces with lower bounds on the Ricci curvature in a synthetic sense, the so-called $\RCD(K,N)$ metric measure spaces $(X,\mathsf{d},\mathfrak{m})$. Here $K\in\mathbb{R}$ plays the role of a synthetic lower bound on the Ricci curvature and $1\le N<\infty$ plays the role of a synthetic upper bound on the dimension, in the sense of the Lott-Sturm-Villani ``Curvature-Dimension'' condition \cite{Sturm06I,Sturm06II,LottVillani09}. The ``Riemannian'' assumption, formulated in terms of linearity of the heat flow, is added to enforce Hilbertian behaviour, ruling out the much broader class of Finsler geometries, which the Curvature-Dimension condition does not exclude a priori.\\ We address the reader to \autoref{sub:GARCD} (see also the survey \cite{Ambrosio18} and references therein) for the relevant background on the theory of $\RCD(K,N)$ metric measure spaces. Here we just remark that the $\RCD$ theory is fully consistent with the theory of smooth (weighted) Riemannian manifolds with (weighted) Ricci curvature bounded from below and with the theory of Alexandrov spaces with sectional curvature bounded from below. Furthermore, the $\RCD(K,N)$ condition is stable with respect to the measured Gromov-Hausdorff topology, under cone and spherical suspension constructions and under quotients by actions of groups of measure-preserving isometries.
\medskip The role of the lower Ricci curvature bound on the source space for the regularity of harmonic maps in the classical theory, together with the remarkable developments of Geometric Analysis on $\RCD(K,N)$ spaces in recent years, gives strong motivation for a theory of harmonic maps from $\RCD(K,N)$ spaces with values into ${\sf CAT}(0)$ metric spaces with non-positive curvature. A theory of Sobolev maps in this framework has been developed by Gigli-Tyulenev in \cite{GigliTulyenev21}, where existence and uniqueness of solutions to the Dirichlet problem have also been obtained. Local H\"older regularity of harmonic maps has been recently obtained by Guo in \cite{Guo21}, along the lines of the previous \cite{Jost97,Lin97}.\\ The question of local Lipschitz regularity of harmonic maps from $\RCD(K,N)$ spaces to ${\sf CAT}(0)$ spaces has been raised several times in the recent literature, starting from \cite{GigliPasqualettoSoultanis20} and later in \cite{DiMarinoGiglietal21,GigliTulyenev20,GigliTulyenev21,Guo21}. \smallskip The first main result of this paper is a positive answer to this question in full generality, which can also be considered a complete answer to the question raised in \cite{Jost98}. Indeed, we fully generalize the Lipschitz regularity result for smooth manifolds by Eells-Sampson \cite{EellsSampson64} to the natural synthetic framework.\\ We address the reader to \autoref{subsec:energyharmonics} for the introduction of the relevant background and terminology about harmonic maps in this setting. Below, with the notation $\mathrm{osc}\, u$ we indicate the oscillation of the harmonic map $u$, which is locally bounded thanks to \cite{Guo21}. \begin{theorem}[cf. with \autoref{mainthcore}]\label{thm:main} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space for some $K\in\mathbb{R}$ and $1\le N<\infty$. Let $(Y,\mathsf{d}_Y)$ be a ${\sf CAT}(0)$ space and let $\Omega\subset X$ be an open domain.
Assume that $u:\Omega\to Y$ is a harmonic map. Then for any $0<R\le 1$ there exists a constant $C=C(K,N,R)>0$ such that if $B_{2R}(q)\Subset \Omega$ for some point $q\in X$, then for any $x,y\in B_{R/16}(q)$ it holds \begin{equation}\label{eq:Lipintro} \mathsf{d}_Y(u(x),u(y))\le C(K,N,R)\left(\left(\fint_{B_{R}(q)}\abs{\mathop{}\!\mathrm{d} u(z)}^2\mathop{}\!\mathrm{d}\mathfrak{m}(z)\right)^{\frac{1}{2}}+\mathrm{osc}_{\overline{B}_R(q)}u\right)\mathsf{d}(x,y)\, . \end{equation} \end{theorem} We postpone to the second part of the introduction a description of the strategy of the proof of \autoref{thm:main}. As we shall see, the key step will be establishing a (very) weak form of the Bochner-Eells-Sampson inequality \eqref{eq:Bochnerineq}. Towards this goal, we will need to introduce several original ideas with respect to the previous literature. \medskip As of today, there seems to be no notion of Hessian available for harmonic maps from $\RCD(K,N)$ metric measure spaces with values into ${\sf CAT}(0)$ metric spaces. This would be required to give a meaning to a version of \eqref{eq:BES} in this setting, where only the curvature terms are disregarded. Nevertheless, we are able to prove a Bochner-Eells-Sampson inequality for harmonic maps where a Hessian-type term \begin{equation*} \abs{\nabla \abs{\mathop{}\!\mathrm{d} u}}\le \abs{\nabla \mathop{}\!\mathrm{d} u} \end{equation*} appears. Below we shall denote by $\lip u$ the pointwise Lipschitz constant of a harmonic map $u:\Omega\to Y$, defined by \begin{equation*} \lip u(x):=\limsup_{y\to x}\frac{\mathsf{d}_Y(u(x),u(y))}{\mathsf{d}(x,y)}\, . \end{equation*} We remark that, by \autoref{thm:main}, the pointwise Lipschitz constant of a harmonic map is locally bounded. \begin{theorem}[cf. with \autoref{thm:Bochner}]\label{thm:Bochnerintro} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space for some $K\in\mathbb{R}$, $1\le N<\infty$, and let $(Y,\mathsf{d}_Y)$ be a ${\sf CAT}(0)$ space. 
Let $\Omega\subset X$ be an open domain and let $u:\Omega\to Y$ be a harmonic map. Then $\lip u\in W^{1,2}_{{\rm loc}}(\Omega)\cap L^{\infty}_{{\rm loc}}(\Omega)$ and \begin{equation}\label{eq:bochnerwithhessianintro} \Delta \frac{\abs{\lip u}^2}{2}\ge \abs{\nabla \lip u}^2+K\abs{\lip u}^2\, ,\quad\text{on $\Omega$}\, , \end{equation} in the sense of distributions. \end{theorem} We refer to the very recent work \cite{ZhangZhongZhu19} by Zhang-Zhong-Zhu for an analogous statement under the assumption that the source space is a smooth $N$-dimensional Riemannian manifold with Ricci curvature bounded from below by $K$ and to \cite{Freidin19,FreidinZhang20} for previous instances of Bochner-Eells-Sampson formulas (without Hessian-type terms) for maps with values into ${\sf CAT}(k)$ spaces and with source smooth Riemannian manifolds and polyhedra, respectively. \smallskip The validity of a Bochner inequality for scalar-valued maps defined on a non-smooth $\RCD$ space (even without Hessian term) is a very deep result: it was proved for $\RCD(K,\infty)$ spaces by Ambrosio-Gigli-Savar\'e \cite{AGSDuke} (see also \cite{AGS15} by the same authors for the reverse implication). The dimensional improvement for $\RCD^*(K,N)$ spaces was established independently by Erbar-Kuwada-Sturm \cite{EKS} and by Ambrosio-Savar\'e and the first author \cite{AmbrosioMondinoSavare19} (together with the reverse implication). The fact that the scalar Bochner inequality (without Hessian) ``self-improves'' to estimate the norm of the Hessian was noticed in the setting of $\Gamma$-calculus by Bakry \cite{Bakry85} and subsequently proved in the non-smooth setting of $\RCD$ spaces by Savar\'e \cite{Savare14} and Gigli \cite{Gigli18}.
\smallskip To the best of our knowledge, \autoref{thm:Bochnerintro} is the first instance of a Bochner-Eells-Sampson inequality with Hessian-type term for harmonic maps when both the source and the target spaces are non-smooth.\\ We remark that the appearance of the Hessian-type term in \eqref{eq:bochnerwithhessianintro} is expected to have fundamental importance in the future developments of the theory, see for instance the discussion in the introduction of \cite{ZhangZhongZhu19}. \subsection*{Strategy of the proof}\label{subsec:strat} There are two fundamental difficulties in the proof of the local Lipschitz continuity for harmonic maps in this setting. The first one is the need to deduce the information coming from the combination of the Bochner-Eells-Sampson formula \eqref{eq:BES} with the curvature constraints on the source space from ``lower order considerations'', independent of any regularity. This is a fundamental issue in Geometric Analysis on non-smooth spaces, which requires a new argument in the present situation and is well illustrated already at the level of scalar-valued harmonic functions. The second key point is the need to turn the combination of harmonicity, which is understood here in variational terms, with the curvature constraints of the target into a differential inequality. This difficulty is tied to the non-linearity of the variational problem and requires an original idea. \medskip We illustrate the first idea in the case of scalar-valued harmonic functions $u:\Omega\to\mathbb{R}$, where $\Omega\subset X$ is an open domain and $(X,\mathsf{d},\mathfrak{m})$ is an $\RCD(0,N)$ metric measure space, for the sake of simplicity.
The case of general lower Ricci curvature bounds $K\in\mathbb{R}$ would introduce additional error terms without affecting the general strategy.\\ Notice that local Lipschitz estimates in this case follow from Harnack's inequality as soon as one is able to prove that \begin{equation}\label{eq:nablauSH} \Delta \abs{\nabla u}^2\ge 0\, , \end{equation} in the sense of distributions. For smooth Riemannian manifolds, and under the assumption that $u$ is smooth, the estimate \eqref{eq:nablauSH} follows from Bochner's inequality. For $\RCD(0,N)$ spaces we refer to \cite{Jiang12,ZhangZhu16} for a distributional approach, building on top of weak versions of Bochner's inequality (see also \cite{AGS15,EKS,AmbrosioMondinoSavare19}). Here we present a different strategy, better suited to generalization to ${\sf CAT}(0)$-valued harmonic maps. This approach was developed in the case of source spaces with sectional curvature bounded from below in the Alexandrov sense by Petrunin and Zhang-Zhu in \cite{Petrunin96,ZhangZhu12}. We remark that the strategy has strong analogies with the so-called two-point maximum principle, see \cite{CrandallIshiiLions,Korevaar,Kruzkov} and \cite{Andrews} for a recent survey. \smallskip We assume that $u$ is continuous (a property that is usually proved via the De Giorgi-Moser iteration and Harnack's inequality) and we wish to bootstrap the continuity to local Lipschitz continuity. We introduce the evolution via the Hopf-Lax semigroup (up to a sign) \begin{equation*} \mathcal{Q}^tu(x):=\sup_{y\in \Omega}\left\{u(y)-\frac{\mathsf{d}^2(x,y)}{2t}\right\}\, . \end{equation*} Then we claim that $\Delta \mathcal{Q}^tu\ge 0$, locally and for $t>0$ sufficiently small. On a smooth Riemannian manifold, neglecting the regularity issues, this inequality can be proved with a computation using the second variation of the arc length and Jacobi fields, see for instance the survey by Andrews \cite{Andrews} for similar arguments.
On Alexandrov spaces with lower sectional curvature bounds, the statement is proved by Zhang-Zhu \cite{ZhangZhu12} (following an argument proposed by Petrunin in the unpublished \cite{Petrunin96}) relying on several perturbation arguments and on Petrunin's second variation formula for the arc-length \cite{Petrunin98}. None of these approaches is available in the framework of $\RCD(K,N)$ metric measure spaces. Roughly speaking, the main reason is that they rely on estimates for second order variations ``in single directions'', from which Laplacian estimates are deduced, as a second step, by averaging. \smallskip In turn, on $\RCD$ spaces there is an entirely different strategy, where the need to estimate second order variations in single directions is completely overcome. The sub-harmonicity $\Delta \mathcal{Q}^tu\ge 0$ follows from the interplay between Optimal Transport and the Heat Flow in this setting \cite{SturmVonRenesse,Kuwada10,AGSDuke}, by further developing an argument found by the authors in \cite{MondinoSemola21}. Denoting by $P_s$ the Heat Flow \begin{equation*} \frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} s}P_su=\Delta P_su\, , \end{equation*} the so-called Kuwada duality \cite{Kuwada10} guarantees that \begin{equation*} P_s\mathcal{Q}^tu(x)\ge P_su(y)-\frac{\mathsf{d}^2(x,y)}{2t}\, , \end{equation*} for any $x,y\in X$ and for any $s>0$. Therefore, formally \begin{equation}\label{eq:propagation} \liminf_{s\to 0}\frac{P_s\mathcal{Q}^tu(x)-\mathcal{Q}^tu(x)}{s}\ge \liminf_{s\to 0}\frac{P_su(x_t)-u(x_t)}{s}=\Delta u(x_t)=0\, , \end{equation} where $x_t\in \Omega$ is any point such that \begin{equation*} \mathcal{Q}^tu(x)=u(x_t)-\frac{\mathsf{d}^2(x,x_t)}{2t}\, . \end{equation*} This (formally) shows that $\Delta \mathcal{Q}^tu\ge 0$. As $u$ is harmonic, also \begin{equation}\label{eq:subha} \Delta \left( \frac{\mathcal{Q}^tu-u}{t}\right)\ge 0\, .
\end{equation} If we recall that the Hopf-Lax semigroup solves the Hamilton-Jacobi equation \begin{equation*} \frac{\mathop{}\!\mathrm{d} \mathcal{Q}^tu}{\mathop{}\!\mathrm{d} t}-\frac{1}{2}\abs{\nabla \mathcal{Q}^tu}^2=0\, , \end{equation*} see for instance \cite{AGS14}, then we easily infer that a local gradient estimate for $u$ follows, again formally, from a uniform (as $t\downarrow 0$) local $L^{\infty}$-estimate for \begin{equation}\label{eq:diffsemi} \frac{\mathcal{Q}^tu-u}{t}\, . \end{equation} Indeed, taking the limit as $t\downarrow 0$, it holds $(\mathcal{Q}^tu-u)/t\to \frac{1}{2}\abs{\nabla u}^2$. In order to get the sought local, uniform estimate for \eqref{eq:diffsemi}, it is now sufficient to use Harnack's inequality for sub-harmonic functions, thanks to \eqref{eq:subha}. \smallskip It is possible to interpret the strategy outlined above in terms of the two-variable functions $F_t:\Omega\times \Omega\to\mathbb{R}$, \begin{equation}\label{eq:twovarF} F_t(x,y):=u(y)-\frac{\mathsf{d}^2(x,y)}{2t}\, . \end{equation} Second variation arguments exploit both the harmonicity of $u$, so that the first term above solves the Laplace equation with respect to the $y$ variable, and the lower Ricci curvature bound of the ambient space, encoded in the behaviour of the squared distance on the product $X\times X$. \medskip In order to deal with harmonic maps with values into ${\sf CAT}(0)$ spaces $(Y,\mathsf{d}_Y)$, the non-linearity of the target is a fundamental issue, as there is no clear counterpart of \eqref{eq:twovarF}.\\ A key idea borrowed from \cite{ZhangZhu18} (see also \cite[Section 5]{Andrews}) is to consider the function of two variables \begin{equation}\label{eq:Gproduct} G_t(x,y):=\mathsf{d}_Y(u(x),u(y))-\frac{\mathsf{d}^2(x,y)}{2t}\, .
\end{equation} The strategy is to exploit the curvature constraints of the source and the target spaces and the harmonicity of the map $u$ in terms of the behaviour of the function $G_t$ on the product $X\times X$. \smallskip The lower Ricci curvature bound enters into play as in the scalar-valued case, through the Wasserstein contractivity of the Heat Flow, and it controls the second term on the right-hand side of \eqref{eq:Gproduct}. Notice that, again, this is a fundamentally different approach with respect to the previous \cite{EellsSampson64,KorevaarSchoen,ZhangZhu18} and with respect to the strategy illustrated in \cite{Andrews}. It shares some similarities with \cite{Sturm05} where, however, the perspective on harmonic maps was parabolic rather than variational. \medskip The ${\sf CAT}(0)$ condition on the target is combined with the assumption that $u$ is harmonic to control the first term on the right-hand side of \eqref{eq:Gproduct}. In particular, the combination of these two assumptions, neglecting the regularity issues, leads to the inequality \begin{equation}\label{eq:Lapla1} \Delta \left(\mathsf{d}_Y^2(u(\cdot),u(x_0))-\mathsf{d}^2_Y(u(\cdot),P)\right)(x_0)\le 0\, , \end{equation} for any $x_0\in\Omega$ and for any $P\in Y$.
Moreover, the non-positive curvature assumption allows a decoupling of the two variables in \eqref{eq:Gproduct}, via a quadrilateral comparison finding its roots in \cite{Reshetniak} (see \autoref{lemma:elemCAT} for the precise statement), and it leads from \eqref{eq:Lapla1} to the differential inequality \begin{equation}\label{eq:Lapla2} \Delta_xf(x_0)+\Delta_yg(y_0)\ge 0\, , \end{equation} for suitably constructed auxiliary functions $f:X\to\mathbb{R}$ and $g:X\to\mathbb{R}$ such that \begin{equation*} f(x)+g(y)\le \mathsf{d}_Y(u(x),u(y))\, ,\quad\text{for any $x,y\in X$}\, \end{equation*} and \begin{equation*} f(x_0)+g(y_0)=\mathsf{d}_Y(u(x_0),u(y_0))\, , \end{equation*} see \eqref{eq:rewritten} and the subsequent discussion for the details of the construction.\\ The combination of \eqref{eq:Lapla2}, together with the Wasserstein contractivity of the Heat Flow to control the second term in \eqref{eq:Gproduct}, proves the sub-harmonicity of the function \begin{equation*} \mathcal{G}^t(x):=\sup_{y\in\Omega}\left\{\mathsf{d}_Y(u(x),u(y))-\frac{\mathsf{d}^2(x,y)}{2t}\right\}\, . \end{equation*} This will be rigorously proved in \autoref{sec:propagation}. The local Lipschitz regularity of $u$ will be established in \autoref{sec:Lip}, via a variant of the aforementioned argument presented for scalar valued harmonic maps, where the sub-harmonicity of the function $\mathcal{G}^t$ plays the role of \eqref{eq:subha}. \smallskip A major difficulty that we encounter in making rigorous the strategy above is that we are able to verify \eqref{eq:Lapla1} (and therefore also \eqref{eq:Lapla2}) only away from a set of negligible measure in the source domain, see \autoref{prop:intw}. This is in perfect analogy with similar situations in the classical viscosity theory of Partial Differential Equations, see for instance \cite{CrandallIshiiLions,CaffarelliCabre95}, and with the case of Alexandrov spaces \cite{Petrunin96,ZhangZhu18}. 
We overcome this issue with an original perturbation argument of independent interest, in the spirit of Jensen's approximate maximum principle \cite{Jensen88} for semi-concave functions. In the Euclidean theory, perturbation arguments usually rely on the affine structure. In \cite{Petrunin96,ZhangZhu18} the authors employ a combination of two perturbation arguments: the first one is required to move the minimum of a given function near to a regular point in the sense of the Alexandrov theory and it is achieved with a small additive perturbation using Perelman's concave functions. The second one uses the existence of concave bi-Lipschitz coordinates as a replacement of the Euclidean affine functions. In the present situation these techniques seem out of reach. Indeed, the existence of concave auxiliary functions heavily relies on the synthetic lower bound on the sectional curvature and the existence of bi-Lipschitz coordinates goes much beyond the present regularity theory of spaces with lower Ricci curvature bounds, even in the ``non-collapsed'' case.\\ Perturbations will be constructed using distance functions, by further developing an idea introduced by Cabr\'e \cite{Cabre98}, with a different aim, in the setting of smooth Riemannian manifolds with lower sectional curvature bounds (see also the subsequent \cite{Kim04,WangZhang13}). The proof of the key estimate will require several new ingredients with respect to the case of Riemannian manifolds and will rely, again, on a new interpretation of the interplay between Optimal Transport and Heat Flow through Kuwada's duality, see \autoref{sec:perturbation}. For more details, we refer the reader to the first few pages of \autoref{sec:perturbation}, where we present the general perturbation strategy, the difficulties of the present setting and the new ideas developed in the paper. 
\medskip Once local Lipschitz continuity has been established, the Bochner-Eells-Sampson inequality with Hessian type term \eqref{eq:bochnerwithhessianintro} will be proved via a bootstrap argument. The main idea, borrowed from the recent \cite{ZhangZhongZhu19}, is to run the same arguments above replacing the squared distance $\mathsf{d}^2(x,y)$ with a power $\mathsf{d}^p(x,y)$ for $1<p<\infty$ and then to let $p\to\infty$. While \cite{ZhangZhongZhu19} considers smooth Riemannian manifolds, following the strategy of \cite{ZhangZhu18}, in the present context the analysis is possible thanks to the Wasserstein contractivity of the Heat Flow in any Wasserstein space with distance $W_p$ for $1\le p\le \infty$, see \cite{Savare14}. Combined with a version of Kirchheim's metric differentiability theorem \cite{Kirchheim94}, recently established in \cite{GigliTulyenev21}, this will lead to the sought Bochner-Eells-Sampson inequality in \autoref{sec:Bochner}. \medskip \textbf{Acknowledgements.} The authors are supported by the European Research Council (ERC), under the European Union Horizon 2020 research and innovation programme, via the ERC Starting Grant “CURVATURE”, grant agreement No. 802689. \section{Preliminaries} This preliminary section is meant to introduce some basic material and to set the terminology that we shall adopt in the paper. In \autoref{sub:GARCD} we collect some mostly well-known preliminaries about Geometric Analysis on $\RCD(K,N)$ metric measure spaces. In \autoref{subsec:energyharmonics} we introduce the relevant background and terminology about Sobolev and harmonic maps from $\RCD(K,N)$ spaces into ${\sf CAT}(0)$ spaces. \subsection{Geometric Analysis tools on $\RCD(K,N)$ spaces}\label{sub:GARCD} Throughout the paper, $(X,\mathsf{d},\mathfrak{m})$ will be a metric measure space, i.e. $(X,\mathsf{d})$ is a complete and separable metric space endowed with a non-negative Borel measure which is finite on bounded sets. 
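\smallskip The model examples to keep in mind are smooth weighted Riemannian manifolds: if $(M,g)$ is a complete $n$-dimensional Riemannian manifold and $V\in C^{\infty}(M)$, then it is well known that the metric measure space $(M,\mathsf{d}_g,e^{-V}\mathrm{vol}_g)$ satisfies the $\RCD(K,N)$ condition, to be recalled below, for $N>n$ if and only if the Bakry--\'Emery lower bound \begin{equation*} \mathrm{Ric}_g+\mathrm{Hess}\, V-\frac{\mathop{}\!\mathrm{d} V\otimes\mathop{}\!\mathrm{d} V}{N-n}\ge Kg \end{equation*} holds; when $V$ is constant and $N\ge n$, this reduces to the classical lower Ricci curvature bound $\mathrm{Ric}_g\ge Kg$.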
\smallskip Given $f:X\to \mathbb{R}$, we denote by $\lip f$ the slope of $f$ defined as \begin{equation*} \lip f (x_{0}):=\limsup_{x\to x_{0}} \frac{|f(x)-f(x_{0})|}{\mathsf{d}(x, x_{0})} \; \text{ if $x_{0}$ is not isolated}\, , \quad \lip f(x_{0})=0 \; \text{ otherwise}\, . \end{equation*} We denote by $C(X)$ the space of continuous functions and by $\Lip(X)$ (resp. $\Lipb (X),$ $\Lipbs(X)$) the space of Lipschitz functions on $(X, \mathsf{d})$ (resp. bounded Lipschitz functions, and Lipschitz functions with bounded support). Analogous notations will be used for the spaces of continuous and Lipschitz functions on an open domain $\Omega\subset X$.\\ We will indicate by $B_r(x)$ the open ball $B_r(x):=\{y\in X\, :\, \mathsf{d}(x,y)<r\}$ for $r>0$ and $x\in X$ and by $\overline{B_r(x)}:=\{y\in X\, :\, \mathsf{d}(x,y)\le r\}$ the closed ball, for $x\in X$ and $r>0$. \smallskip The Cheeger energy (introduced in \cite{Cheeger99} and further studied in \cite{AGS14}) is defined as the $L^{2}$-lower semicontinuous envelope of the functional $f \mapsto \frac{1}{2} \int_{X} (\lip f)^2 \, \mathop{}\!\mathrm{d} \mathfrak{m}$, i.e.: \begin{equation*} {\sf Ch}(f):=\inf \left\{ \liminf_{n\to \infty} \frac{1}{2} \int_{X} (\lip f_n)^2\, \mathop{}\!\mathrm{d} \mathfrak{m} \, :\, f_n\in \Lip(X), \; f_{n}\to f \text{ in }L^{2}(X,\mathfrak{m}) \right \}\, . \end{equation*} If ${\sf Ch}(f)<\infty$, it was proved in \cite{Cheeger99,AGS14} that the set $$ G(f):= \left\{g \in L^{2}(X,\mathfrak{m}) \, :\, \exists \, f_{n} \in \Lip(X), \, f_n\to f, \, \lip f_n \rightharpoonup h\geq g \text{ in } L^{2}(X,\mathfrak{m}) \right\} $$ is closed and convex; therefore it admits a unique element of minimal norm called \textit{minimal weak upper gradient} and denoted by $|\nabla f|$. The Cheeger energy can then be represented by integration as $${\sf Ch}(f):=\frac{1}{2} \int_{X} |\nabla f|^{2} \mathop{}\!\mathrm{d} \mathfrak{m}\, . 
$$ It is not difficult to see that ${\sf Ch}$ is a $2$-homogeneous, lower semi-continuous, convex functional on $L^{2}(X,\mathfrak{m})$, whose proper domain ${\rm Dom}({\sf Ch}):=\{f \in L^{2}(X,\mathfrak{m})\,:\, {\sf Ch}(f)<\infty\}$ is a dense linear subspace of $L^{2}(X,\mathfrak{m})$. It then admits an $L^{2}$-gradient flow which is a continuous semigroup of contractions $(P_{t})_{t\geq 0}$ in $L^{2}(X,\mathfrak{m})$, whose continuous trajectories $t \mapsto P_{t} f$, for $f \in L^{2}(X,\mathfrak{m})$, are locally Lipschitz curves from $(0,\infty)$ with values into $L^{2}(X,\mathfrak{m})$. \smallskip Throughout the paper, we will assume that ${\sf Ch}: {\rm Dom}({\sf Ch})\to \mathbb{R}$ satisfies the parallelogram identity (i.e. it is a quadratic form) or, equivalently, that $P_t: L^{2}(X,\mathfrak{m}) \to L^{2}(X,\mathfrak{m})$ is a linear operator for every $t\geq 0$. This condition is known in the literature as \textit{infinitesimal hilbertianity}, after \cite{AGSDuke, Gigli15}. If $(X,\mathsf{d}, \mathfrak{m})$ is infinitesimally hilbertian, then ${\rm Dom}({\sf Ch})$ endowed with the norm $\|f\|_{H^{1,2}}^2:= \|f\|_{L^2}^2+ 2 {\sf Ch}(f)$ is a Hilbert space (in general it is only a Banach space) that will be denoted by $W^{1,2}(X, \mathsf{d},\mathfrak{m})$. \medskip The main subject of our investigation will be the so-called $\RCD(K,N)$ metric measure spaces $(X,\mathsf{d},\mathfrak{m})$, i.e. 
infinitesimally Hilbertian metric measure spaces with Ricci curvature bounded from below and dimension bounded from above, in the synthetic sense.\\ The Riemannian Curvature Dimension condition $\RCD(K,\infty)$ was introduced in \cite{AGSDuke} (see also the more recent \cite{AGMS} for the current axiomatization) coupling the Curvature Dimension condition $\CD(K,\infty)$, previously proposed in \cite{Sturm06I,Sturm06II} and independently in \cite{LottVillani09}, with the infinitesimally Hilbertian assumption, corresponding to the Sobolev space $W^{1,2}$ being Hilbert.\\ The combination of the Curvature-Dimension condition for finite $1\le N<\infty$ with the linearity of the Heat Flow subsequently led to the notions of $\RCD(K,N)$ and $\RCD^*(K, N)$ spaces, corresponding to $\CD(K, N)$ (resp. $\CD^*(K, N)$, see \cite{BacherSturm10}) coupled with linear Heat Flow. The class $\RCD(K,N)$ was proposed in \cite{Gigli15}, motivated by the validity of the sharp Laplacian comparison and of the Cheeger-Gromoll splitting theorem. The (a priori more general) $\RCD^*(K,N)$ condition was thoroughly analysed in \cite{EKS} and (subsequently and independently) in \cite{AmbrosioMondinoSavare19} (see also \cite{CavallettiMilman21} for the equivalence between $\RCD^*$ and $\RCD$ in the case of finite reference measure). \medskip We avoid giving a detailed introduction to this notion, addressing the reader to the survey \cite{Ambrosio18} and references therein for the relevant background. Below we recall some of the main properties that will be relevant for our purposes. \smallskip Unless otherwise stated, from now on we assume that $(X,\mathsf{d},\mathfrak{m})$ is an $\RCD(K,N)$ metric measure space for some $K\in\mathbb{R}$ and $1\le N<\infty$. \medskip We begin by recalling the notion of Laplacian. 
\begin{definition}\label{def:laplacian} The Laplacian $\Delta:D(\Delta)\to L^2(X,\mathfrak{m})$ is a densely defined linear operator whose domain consists of all functions $f\in W^{1,2}(X,\mathsf{d},\mathfrak{m})$ satisfying \begin{equation*} \int hg\mathop{}\!\mathrm{d}\mathfrak{m}=-\int \nabla h\cdot\nabla f\mathop{}\!\mathrm{d}\mathfrak{m} \quad\text{for any $h\in W^{1,2}(X,\mathsf{d},\mathfrak{m})$} \end{equation*} for some $g\in L^2(X,\mathfrak{m})$. The unique $g$ with this property is denoted by $\Delta f$. \end{definition} As a consequence of the infinitesimal hilbertianity, it is easily checked that $\Delta$ is an (unbounded) linear operator. More generally, we say that $f\in W^{1,2}_{{\rm loc}}(X,\mathsf{d},\mathfrak{m})$ is in the domain of the measure valued Laplacian, and we write $f\in D(\boldsymbol{\Delta})$, if there exists a Radon measure $\mu$ on $X$ such that, for every $\psi\in\Lip_c(X)$, it holds \begin{equation*} \int\psi\mathop{}\!\mathrm{d}\mu=-\int\nabla f\cdot\nabla \psi\mathop{}\!\mathrm{d}\mathfrak{m}\, . \end{equation*} In this case we write $\boldsymbol{\Delta}f:=\mu$. If moreover $\boldsymbol{\Delta}f\ll\mathfrak{m}$ with $L^{2}_{{\rm loc}}$ density we denote by $\Delta f$ the unique function in $L^{2}_{{\rm loc}}(X,\mathfrak{m})$ such that $\boldsymbol{\Delta}f=\Delta f\, \mathfrak{m}$ and we write $f\in D_{{\rm loc}}(\Delta)$. When there is no risk of confusion, we will adopt the simpler notation $\Delta$ even for the measure valued Laplacian. Notice that the definition makes sense even under the assumption that $f\in W^{1,p}_{{\rm loc}}(X,\mathsf{d},\mathfrak{m})$ for some $1\le p<\infty$, and we will rely on this observation later. \medskip We shall also consider the Laplacian on open sets, imposing Dirichlet boundary conditions. Let us first introduce the local Sobolev space with Dirichlet boundary conditions. 
\begin{definition} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space and let $\Omega\subset X$ be an open and bounded domain. Then we let $W^{1,2}_{0}(\Omega)$ be the $W^{1,2}(X,\mathsf{d},\mathfrak{m})$ closure of $\Lip_c(\Omega,\mathsf{d})$. \end{definition} We also introduce the local Sobolev space (i.e. without imposing Dirichlet boundary conditions). \begin{definition} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space and let $\Omega\subset X$ be an open and bounded domain. We say that a function $f\in L^2(\Omega,\mathfrak{m})$ belongs to the local Sobolev space $W^{1,2}(\Omega,\mathsf{d},\mathfrak{m})$ if \begin{itemize} \item[(i)] $f\phi\in W^{1,2}(X,\mathsf{d},\mathfrak{m})$ for any $\phi\in\Lip_c(\Omega,\mathsf{d})$; \item[(ii)] $\abs{\nabla f}\in L^2(\Omega,\mathfrak{m})$. \end{itemize} Above, it is understood that $f\phi$ is extended by $0$ outside of $\Omega$. Notice that $\abs{\nabla f}$ is well defined on any $\Omega'\subset \Omega$ (and hence on $\Omega$) as $\abs{\nabla (f\phi)}$ for some $\phi\in \Lip_c(\Omega)$ such that $\phi\equiv 1$ on $\Omega'$. \end{definition} \begin{definition} Let $f\in W^{1,2}(\Omega)$. We say that $f\in D(\Delta,\Omega)$ if there exists a function $h\in L^2(\Omega,\mathfrak{m})$ such that \begin{equation*} \int_{\Omega}gh\mathop{}\!\mathrm{d}\mathfrak{m}=-\int_{\Omega}\nabla g\cdot\nabla f\mathop{}\!\mathrm{d}\mathfrak{m}\, ,\quad\text{for any $g\in W^{1,2}_0(\Omega,\mathsf{d},\mathfrak{m})$}\, . \end{equation*} In this case, we set $\Delta f:=h$. \end{definition} The Heat Flow $P_t$, previously defined as the $L^2(X,\mathfrak{m})$-gradient flow of ${\sf Ch}$, can be equivalently characterised by the following property: for any $u\in L^2(X,\mathfrak{m})$, the curve $t\mapsto P_tu\in L^2(X,\mathfrak{m})$ is locally absolutely continuous in $(0,+\infty)$ and satisfies \begin{equation*} \frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t}P_tu=\Delta P_tu \quad\text{for $\mathscr{L}^1$-a.e. $t\in(0,\infty)$}\, . 
\end{equation*} Under our assumptions the Heat Flow provides a linear, continuous and self-adjoint contraction semigroup in $L^2(X,\mathfrak{m})$. Moreover, $P_t$ extends to a linear, continuous and mass preserving operator, still denoted by $P_t$, in all the $L^p$ spaces for $1\le p<+\infty$. \medskip It has been proved in \cite{AGSDuke,AGMS} that, on $\RCD(K,\infty)$ metric measure spaces, the dual heat semigroup $\bar{P}_t:\mathcal{P}_2(X)\to\mathcal{P}_2(X)$ of $P_t$, defined by \begin{equation*} \int_X f \mathop{}\!\mathrm{d} \bar{P}_t \mu := \int_X P_t f \mathop{}\!\mathrm{d} \mu\qquad\quad \forall \mu\in \mathcal{P}_2(X),\quad \forall f\in \Lipb(X)\, , \end{equation*} is $K$-contractive (w.r.t. the $W_2$-distance) and, for $t>0$, maps probability measures into probability measures absolutely continuous w.r.t. $\mathfrak{m}$. Then, for any $t>0$, we can introduce the so-called \textit{heat kernel} $p_t:X\times X\to[0,+\infty)$ by \begin{equation*} p_t(x,\cdot)\mathfrak{m}:=\bar{P}_t\delta_x\, . \end{equation*} As there is no risk of confusion, we will mostly adopt the notation $P_t$ also for the dual Heat Flow defined on probability measures with finite second order moment. We shall denote by $P_t\delta_x$ the heat kernel (measure) centred at $x\in X$ at time $t>0$. \smallskip A key property of the heat kernel is the so-called stochastic completeness: for any $x\in X$ and for any $t>0$ it holds \begin{equation}\label{eq:stochcompl} \int_X p_t(x,y)\mathop{}\!\mathrm{d}\mathfrak{m}(y)=1\, . \end{equation} Let us recall a classical regularity result for solutions of the Poisson equation, see \cite{Jiang12}. \begin{proposition} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space. Let $\Omega\subset X$ be an open and bounded domain. If $g\in L^{\infty}(\Omega)$ and $f\in W^{1,2}(\Omega)$ satisfies $\Delta f=g$ on $\Omega$, then $f$ is locally Lipschitz. 
\end{proposition} Following \cite{Gigli15}, we introduce the notion of Laplacian bound in the sense of distributions. \begin{definition}\label{def:distributions} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space and let $\Omega\subset X$ be an open domain. Let $f\in W^{1,2}(\Omega)$ and $\eta\in L^{\infty}(\Omega)$. Then we say that $\Delta f\le \eta$ in the sense of distributions if the following holds. For any non-negative function $\phi\in\Lip_c(\Omega)$, \begin{equation*} -\int_{\Omega}\nabla f\cdot\nabla \phi\mathop{}\!\mathrm{d}\mathfrak{m}\le \int_{\Omega}\phi\eta\mathop{}\!\mathrm{d}\mathfrak{m}\, . \end{equation*} \end{definition} The following can be obtained with minor modifications from \cite{KinnunenMartio02}. We refer to \cite[Corollary 3.5]{ZhangZhu18} for the case of Alexandrov spaces; the proof works verbatim in the present setting. \begin{proposition}\label{prop:poissonhelp} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space. Let $\Omega\subset X$ be an open domain, $g\in L^{\infty}(\Omega)$ and $f\in W^{1,2}_{{\rm loc}}(\Omega)\cap C(\Omega)$. Then the following are equivalent: \begin{itemize} \item[(i)] $\Delta f\le g$ in the sense of distributions; \item[(ii)] for any open domain $\Omega'\Subset\Omega$, if $v\in W^{1,2}(\Omega')$ solves $\Delta v=g$ on $\Omega'$ and $f-v\in W^{1,2}_0(\Omega')$, then $v\le f$ in $\Omega'$. \end{itemize} \end{proposition} The following is a slight extension of \cite[Proposition 3.27]{MondinoSemola21}. We consider only constant upper bounds for the Laplacian, but we remove the Lipschitz continuity assumption. \begin{proposition}\label{prop:hfmeanLapla} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space for some $K\in\mathbb{R}$ and $1\le N<\infty$. Let $\Omega\subset X$ be an open domain and let $f:\Omega\to\mathbb{R}$ be continuous and bounded. 
If $f\in W^{1,2}(\Omega)$ admits a measure valued Laplacian on $\Omega$, with \begin{equation}\label{eq:assumedC} \Delta f\le C\, ,\quad\text{on $\Omega$} \end{equation} in the sense of distributions for some constant $C\in\mathbb{R}$, then the following holds. For any domain $\Omega'\Subset\Omega$ and for any function $g:X\to\mathbb{R}$ with polynomial growth and such that $f\equiv g$ on $\Omega'$, it holds \begin{equation}\label{eq:limsuptocompute} \limsup_{t\to 0}\frac{P_tg(x)-g(x)}{t}\le C\, ,\quad\text{for any $x\in \Omega'$}\, . \end{equation} \end{proposition} \begin{proof} Thanks to \cite[Lemma 2.53]{MondinoSemola21} the value of the $\limsup$ in \eqref{eq:limsuptocompute} is independent of the chosen extension $g$ of $f$. In particular, thanks to \cite{MondinoNaber19,AmbrosioMondinoSavare}, we can choose a regular cut-off function $\phi:X\to[0,1]$ with compact support inside $\Omega$ and such that $\phi\equiv 1$ on $\Omega'$, Lipschitz and with bounded Laplacian. Then we consider $g:=f\phi$, where it is understood that $g\equiv 0$ outside of $\Omega$. By the standard Leibniz rule for the measure valued Laplacian, $g$ admits a measure valued Laplacian on $X$. Moreover, \begin{equation*} \Delta g=f\Delta\phi +2\nabla f\cdot\nabla \phi+\phi\Delta f\, . \end{equation*} Hence, for any $x\in X$ and for any $t\ge 0$, \begin{equation*} P_tg(x)-g(x)\le \int_0^tP_s\left(f\Delta\phi +2\nabla f\cdot\nabla \phi+\phi\Delta f\right)(x)\mathop{}\!\mathrm{d} s\, . \end{equation*} Applying \cite[Lemma 2.53]{MondinoSemola21} to the first two terms above, we easily obtain that \begin{equation*} \limsup_{t\to 0}\frac{P_tg(x)-g(x)}{t}\le \limsup_{t\to 0}\frac{1}{t}\int_0^tP_s\left(\phi\Delta f\right)(x)\mathop{}\!\mathrm{d} s\le C\, , \end{equation*} where we employed \eqref{eq:assumedC} and the comparison principle for the Heat Flow to obtain the last inequality. 
\end{proof} We recall that $\RCD(K,N)$ metric measure spaces $(X,\mathsf{d},\mathfrak{m})$ are \emph{strongly rectifiable spaces}, according to \cite[Definition 2.18]{GigliTulyenev21} (see also the previous \cite{GigliPasqualetto21}). This condition means that for any $\varepsilon>0$ there exists a countable covering of $X$, up to a set of $\mathfrak{m}$-measure $0$, by Borel sets $U_i^{\varepsilon}$, such that all these sets are $(1+\varepsilon)$-biLipschitz to Borel subsets of $\mathbb{R}^n$ with charts $\phi_i^{\varepsilon}$, for a given $n\in\mathbb{N}$ independent of $i$, and it holds \begin{equation*} c_i\mathscr{L}^n|_{\phi_i^{\varepsilon}(U_i^{\varepsilon})}\le \left(\phi_{i}^{\varepsilon}\right)_{\sharp}\left(\mathfrak{m}|_{U_i^{\varepsilon}}\right)\le (c_i+\varepsilon)\mathscr{L}^n|_{\phi_i^{\varepsilon}(U_i^{\varepsilon})}\, , \end{equation*} for some constant $c_i>0$, where we denoted by $\left(\phi_{i}^{\varepsilon}\right)_{\sharp}$ the pushforward operator for measures through $\phi_{i}^{\varepsilon}$.\\ This statement follows from the combination of \cite{MondinoNaber19}, \cite{KellMondino,DePhilippisMarcheseRindler,GigliPasqualetto21} and \cite{BrueSemolaCPAM}. \smallskip We refer again to \cite[Definition 2.18]{GigliTulyenev21} for the notion of \emph{aligned} family of atlases $\mathcal{A}^{\varepsilon_n}$, for a given sequence $\varepsilon_n\downarrow 0$, of a strongly rectifiable space that will be relevant for the subsequent developments of the paper. The unique natural number $1\le n\le N$ such that the above holds will be called the \emph{essential dimension} of the $\RCD(K,N)$ metric measure space $(X,\mathsf{d},\mathfrak{m})$. \subsection{Energy and non-linear harmonic maps}\label{subsec:energyharmonics} The subject of this paper is harmonic maps from open subsets of $\RCD(K,N)$ metric measure spaces to ${\sf CAT}(0)$ spaces. Let us recall the relevant background and terminology. 
The main reference for this presentation is \cite{GigliTulyenev21}. We refer to \cite{KorevaarSchoen} for the original notions and results in the case when the source space is a smooth Riemannian manifold. \begin{definition}\label{def:ksloc} Let $(X,\mathsf{d},\mathfrak{m})$ be a metric measure space, $Y_{\bar{y}}=(Y,\mathsf{d}_Y,\bar y)$ a pointed complete metric space, $\Omega\subset X$ open and $u\in L^2(\Omega,Y_{\bar y})$. \\For every $r>0$ we define ${\sf ks}_{2,r}[u,\Omega]:\Omega\to[0,\infty]$ as \[ {\sf ks}_{2,r}[u,\Omega](x):= \left\{\begin{array}{ll} \displaystyle{\Big|\fint_{B_r(x)} \frac{\mathsf{d}^2_Y(u(x),u(y))}{r^2}\,\mathop{}\!\mathrm{d} \mathfrak{m}(y) \Big|^{1/2}}&\qquad\text{if }B_r(x)\subset\Omega,\\ 0&\qquad\text{otherwise} \end{array} \right. \] and say that $u\in{\sf KS}^{1,2}(\Omega,Y_{\bar y})$ provided \begin{equation} \label{eq:defkso} \mathrm{E}_2^\Omega(u):=\sup\limsup_{r\downarrow0}\int_\Omega \varphi\,{\sf ks}^2_{2,r}[u,\Omega]\,\mathop{}\!\mathrm{d}\mathfrak{m}<\infty, \end{equation} where the $\sup$ is taken among all $\varphi:X\to[0,1]$ continuous and such that $\supp(\varphi)$ is compact and contained in $\Omega$. \end{definition} We recall that the notion of metric differentiability, originally formulated in \cite{Kirchheim94} for maps $u:\mathbb{R}^n\to Y$, where $(Y,\mathsf{d}_Y)$ is a metric space, has been extended to the case when the source space is a strongly rectifiable metric measure space in \cite[Definition 3.3]{GigliTulyenev21}, with the introduction of the notion of approximately metrically differentiable map. Moreover, in \cite{GigliTulyenev21} it is proved that maps $u\in{\sf KS}^{1,2}(\Omega,Y_{\bar y})$ are $\mathfrak{m}$-a.e. approximately metrically differentiable. We shall denote the metric differential of $u$ as $\mathrm{md}_{\cdot }u$; the metric differential is a semi-norm on $\mathbb{R}^n$ whenever $(X,\mathsf{d},\mathfrak{m})$ is an $\RCD(K,N)$ metric measure space with essential dimension $1\le n\le N$. 
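\smallskip To fix the ideas with an elementary smooth example: if $u:\Omega\subset\mathbb{R}^n\to\mathbb{R}^k$ is a smooth map, then $u$ is metrically differentiable at every $x\in\Omega$, with \begin{equation*} \mathrm{md}_xu(v)=\abs{Du(x)v}\, ,\quad\text{for any $v\in\mathbb{R}^n$}\, , \end{equation*} where $Du(x)$ denotes the classical differential. Moreover, since $\fint_{B_1(0^n)}v_iv_j\mathop{}\!\mathrm{d}\mathscr{L}^n(v)=\delta_{ij}/(n+2)$, averaging over the unit ball gives \begin{equation*} \fint_{B_1(0^n)}\abs{Du(x)v}^2\mathop{}\!\mathrm{d}\mathscr{L}^n(v)=\frac{\abs{Du(x)}_{\sf HS}^2}{n+2}\, , \end{equation*} a computation which motivates the notion of $2$-size introduced below and the normalizing factor $(n+2)^{-1/2}$ relating the energy density to the Hilbert--Schmidt norm of the differential.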
\begin{definition} Given a seminorm $\norm{\cdot}$ on $\mathbb{R}^n$, we define its $2$-size as \begin{equation*} S_2^2(\norm{\cdot }):=\fint_{B_1(0^n)}\norm{v}^2\mathop{}\!\mathrm{d}\mathscr{L}^n(v)\, . \end{equation*} \end{definition} In the next statement, we recall the existence of the energy density and its representation in terms of the $2$-size of the metric differential (proved in \cite[Theorem 3.13]{GigliTulyenev21}), and the identification result between the metric differential and differentials $\mathop{}\!\mathrm{d} u$ of metric valued Sobolev maps in the sense of \cite{GigliPasqualettoSoultanis20} (see also \cite[Definition 4.5]{GigliTulyenev21}) obtained in \cite[Theorem 4.12]{GigliTulyenev21}. Such identification plays an important role in some of the subsequent developments of the theory, see for instance \autoref{thm:laplacomp} below. \begin{theorem}\label{thm:ksomega} Let $(X,\mathsf{d},\mathfrak{m})$ be locally uniformly doubling, supporting a Poincar\'e inequality and strongly rectifiable, $\Omega\subset X$ open and $Y_{\bar{y}}=(Y,\mathsf{d}_Y,{\bar y})$ a pointed and complete space. Then the following hold: \begin{itemize} \item[(i)] ${\sf KS}^{1,2}(\Omega,Y_{\bar y})=W^{1,2}(\Omega,Y_{\bar y})$ as sets. \item[(ii)] For any $u\in {\sf KS}^{1,2}(\Omega,Y_{\bar y})$, there is a function $\mathrm{e}_2[u]\in L^2(X)$, called \emph{$2$-energy density} of $u$, such that \[ {\sf ks}_{2,r}[u,\Omega]\quad\to\quad \mathrm{e}_2[u]\qquad\text{ $\mathfrak{m}$-a.e.\ and in $L^2_{{\rm loc}}(\Omega)$ as $r\downarrow 0$}\, . 
\] \item[(iii)] Any $u\in{\sf KS}^{1,2}(\Omega,Y_{\bar y})$ is approximately metrically differentiable $\mathfrak{m}$-a.e.\ in $\Omega$ (here we extend $u$ on the whole $X$ declaring it to be constant outside $\Omega$ to apply the definition of approximate metric differentiability) and it holds \begin{equation} \label{eq:endenso} \mathrm{e}_2[u](x)=S_2(\mathrm{md}_x(u))=S_2(\mathop{}\!\mathrm{d} u)(x)\qquad\mathfrak{m}\text{-a.e.}\ x\in\Omega. \end{equation} \item[(iv)] The functional $\mathrm{E}^\Omega_2:L^2(\Omega,Y_{\bar y})\to[0,+\infty]$ defined by \eqref{eq:defkso} is lower semicontinuous and can be written as \[ \mathrm{E}_2^\Omega(u) :=\left\{\begin{array}{ll} \displaystyle{\int_\Omega\mathrm{e}_2^2[u]\,\mathop{}\!\mathrm{d}\mathfrak{m}},&\qquad\text{ if }u\in{\sf KS}^{1,2}(\Omega,Y_{\bar y}),\\ +\infty,&\qquad\text{ otherwise}. \end{array} \right. \] \end{itemize} \end{theorem} We recall that a metric space $(Y,\mathsf{d}_Y)$ satisfies the ${\sf CAT}(0)$ condition if for any points $x_0,x_1\in Y$ and for any minimizing geodesic $\gamma:[0,1]\to Y$ connecting them the parallelogram inequality \begin{equation} \mathsf{d}_Y^2(\gamma_t,y)\le (1-t)\mathsf{d}_Y^2(x_0,y)+t\mathsf{d}_Y^2(x_1,y)-t(1-t)\mathsf{d}^2_Y(x_0,x_1) \end{equation} holds for any $t\in [0,1]$ and for any $y\in Y$. \medskip An outcome of the \emph{universal infinitesimal Hilbertianity} of ${\sf CAT}(0)$ spaces, previously proved in \cite{DiMarinoGiglietal21}, is the following representation of the energy density as Hilbert-Schmidt norm of the differential, obtained in \cite[Proposition 6.7]{GigliTulyenev21}. \begin{proposition}[Energy density as Hilbert-Schmidt norm] Let $(X,\mathsf{d},\mathfrak{m})$ be a strongly rectifiable space of dimension $n\in\mathbb{N}$ with uniformly locally doubling measure and supporting a Poincar\'e inequality (in particular these assumptions hold if it is a $\RCD(K,N)$ space for some $K\in\mathbb{R}$ and $N\in[1,\infty)$) and $\Omega\subset X$ an open set. 
Let $Y_{\bar{y}}=(Y,\mathsf{d}_Y,{\bar y})$ be a pointed ${\sf CAT}(0)$ space and $u\in{\sf KS}^{1,2}(\Omega,Y_{\bar y})$. \\Then the energy density $\mathrm{e}_2[u]$ admits the representation formula \[ \mathrm{e}_2[u]=(n+2)^{-\frac12}|\mathop{}\!\mathrm{d} u|_{\sf HS}\quad\mathfrak{m}\text{-a.e. }, \] where $n$ is the dimension of $X$. \end{proposition} It is possible to consider Sobolev spaces with prescribed boundary values also in the metric-valued case. This is key in order to establish existence of harmonic maps, defined as minimizers of the energy functional with prescribed boundary conditions. \begin{definition} Let $(X,\mathsf{d},\mathfrak{m})$ be a metric measure space, $Y_{\bar{y}}=(Y,\mathsf{d}_Y,{\bar y})$ a pointed complete metric space, $\Omega\subset X$ open and $\bar{u}\in L^2(\Omega,Y_{\bar y})$. Then the space ${\sf KS}^{1,2}_{\bar{u}}(\Omega,Y_{\bar y})\subset {\sf KS}^{1,2}(\Omega,Y_{\bar y})$ is defined as \begin{equation*} {\sf KS}^{1,2}_{\bar{u}}(\Omega,Y_{\bar y}):=\left\{u\in {\sf KS}^{1,2}(\Omega,Y_{\bar y})\, :\, \mathsf{d}_Y(\bar{u},u)\in W^{1,2}_0(\Omega) \right\}\, . \end{equation*} Moreover, the energy functional $\mathrm{E}_{2,\bar{u}}^\Omega:L^2(\Omega,Y)\to [0,\infty]$ is defined as \[ \mathrm{E}_{2,\bar{u}}^\Omega= \left\{\begin{array}{ll} \int_{\Omega}\mathrm{e}_2^2[u]\,\mathop{}\!\mathrm{d}\mathfrak{m}&\qquad\text{if $u\in {\sf KS}^{1,2}_{\bar{u}}(\Omega,Y_{\bar y})$},\\ +\infty&\qquad\text{otherwise .} \end{array} \right. 
\] \end{definition} We recall from \cite{GigliTulyenev21} that if $(X,\mathsf{d},\mathfrak{m})$ is an $\RCD(K,N)$ metric measure space, $Y_{\bar{y}}=(Y,\mathsf{d}_Y,{\bar y})$ is a pointed ${\sf CAT}(0)$ space, $\Omega\subset X$ is an open domain such that $\mathfrak{m}(X\setminus\Omega)>0$ and $\bar{u}\in{\sf KS}^{1,2}(\Omega,Y_{\bar y})$, then the energy functional $\mathrm{E}_{2,\bar{u}}^\Omega$ is convex and lower semicontinuous from $L^2(\Omega,Y)$ to $[0,\infty]$ and it admits a unique minimizer, see \cite[Theorem 6.4]{GigliTulyenev21}. The statement generalizes the previous \cite[Theorem 2.2]{KorevaarSchoen}, dealing with smooth source spaces. We shall call any such minimizer, for a given boundary datum, a harmonic map. \medskip When both $(X,\mathsf{d})$ and $(Y,\mathsf{d}_Y)$ are isometric to smooth Riemannian manifolds, $u:X\to Y$ is harmonic and $f:Y\to\mathbb{R}$ is a $\lambda$-convex function, then the chain rule easily yields that \begin{equation*} \Delta (f\circ u)\ge \lambda\abs{\mathop{}\!\mathrm{d} u}_{\sf HS}^2\, , \end{equation*} see for instance \cite[Chapter 8]{Jost17}.\\ This is generalized to the present setting in \cite[Theorem 4.1]{GigliNobili21}. See also \cite{LytchakStadler20} for the case of maps with Euclidean source and ${\sf CAT}(0)$ target. \begin{theorem}\label{thm:laplacomp} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space with essential dimension $1\le n\le N$. Let $(Y,\mathsf{d}_Y)$ be a ${\sf CAT}(0)$ space and $\Omega\subset X$ be open and bounded. Let $u\in {\sf KS}^{1,2}(\Omega,Y)$ be harmonic and let $f:Y\to\mathbb{R}$ be a $\lambda$-convex function, for some $\lambda\in\mathbb{R}$. Then $f\circ u\in D(\boldsymbol{\Delta},\Omega)$ and $\boldsymbol{\Delta}(f\circ u)$ is a signed Radon measure such that \begin{equation}\label{eq:Laplaboundcompo} \boldsymbol{\Delta}(f\circ u)\ge \lambda\abs{\mathop{}\!\mathrm{d} u}^2_{\sf HS}\mathfrak{m}\, . 
\end{equation} \end{theorem} With respect to \cite[Theorem 4.1]{GigliNobili21}, there is a significant modification on the right-hand side of \eqref{eq:Laplaboundcompo}, namely in \cite[Equation (4.17)]{GigliNobili21} the bound \begin{equation*} \boldsymbol{\Delta}(f\circ u)\ge \frac{\lambda}{n+2}\abs{\mathop{}\!\mathrm{d} u}^2_{\sf HS}\mathfrak{m}\, \end{equation*} is obtained under the same assumptions. We report below the detailed computation showing that the stronger bound \eqref{eq:Laplaboundcompo} actually holds and fixing a typo in \cite{GigliNobili21}. \smallskip Borrowing the notation from \cite{GigliNobili21}, we start from the bound \begin{equation*} \abs{\mathop{}\!\mathrm{d} u_t}_{\sf HS}^2\le e^{-2\lambda tg}\left(\abs{\mathop{}\!\mathrm{d} u}_{\sf HS}^2-2t\langle\mathop{}\!\mathrm{d} g, \mathop{}\!\mathrm{d} (f\circ u)\rangle+Ct^2\right)\, ,\quad\text{$\mathfrak{m}$-a.e. in $\Omega$}\, . \end{equation*} Integrating over $\Omega$, subtracting and taking the limsup as $t\downarrow 0$, we get \begin{equation*} \limsup_{t\to 0}\frac{\mathrm{E}_2^\Omega(u_t)-\mathrm{E}_2^\Omega(u)}{t}\le -\frac{2}{n+2}\int_{\Omega}\left(\lambda g\abs{\mathop{}\!\mathrm{d} u}_{\sf HS}^2+\langle\mathop{}\!\mathrm{d} g,\mathop{}\!\mathrm{d} (f\circ u)\rangle\right)\mathop{}\!\mathrm{d}\mathfrak{m}\, , \end{equation*} where we notice that, with respect to \cite[Equation (4.12)]{GigliNobili21}, the denominator $(n+2)$ is \emph{in front of all the terms in the integral}. Then, since $u$ is harmonic and $u_t$ is a competitor for the variational problem in the definition of harmonic maps, \begin{equation*} \mathrm{E}_2^\Omega(u_t)-\mathrm{E}_2^\Omega(u)\ge 0\, ,\quad\text{for any $t\ge 0$}\, , \end{equation*} therefore \begin{equation*} \frac{1}{n+2}\int_{\Omega}\left(\lambda g\abs{\mathop{}\!\mathrm{d} u}_{\sf HS}^2+\langle\mathop{}\!\mathrm{d} g,\mathop{}\!\mathrm{d} (f\circ u)\rangle\right)\mathop{}\!\mathrm{d}\mathfrak{m}\le 0\, ,\quad\text{for any $g\in \Lipbs(X)$, $g\ge 0$}\, . 
\end{equation*} Hence \begin{equation*} \Delta (f\circ u)\ge \lambda \abs{\mathop{}\!\mathrm{d} u}_{\sf HS}^2\, , \end{equation*} in the sense of distributions on $\Omega$. \begin{remark}\label{rm:dist2} Let $(Y,\mathsf{d}_Y)$ be a ${\sf CAT}(0)$ space. Then, for any point $P\in Y$, the function $f_P(\cdot):=\mathsf{d}_Y^2(P,\cdot)$ is $2$-convex. It follows from \autoref{thm:laplacomp} that \begin{equation*} \boldsymbol{\Delta}(f_P\circ u)\ge 2(n+2)\mathrm{e}_2^2[u]\, \mathfrak{m}\, . \end{equation*} \end{remark} Let us recall that non-linear harmonic maps from domains inside $\RCD(K,N)$ metric measure spaces to ${\sf CAT}(0)$ spaces are locally H\"older continuous in the interior. This has been proved in full generality in \cite[Corollary 1.7]{Guo21} extending previous results from \cite{Jost97,Lin97}. \begin{theorem}\label{thm:continuity} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space. Let $(Y,\mathsf{d}_Y)$ be a ${\sf CAT}(0)$ space and $\Omega\subset X$ be open and bounded. Let $u\in {\sf KS}^{1,2}(\Omega,Y)$ be harmonic. Then $u$ is locally H\"older continuous on $\Omega$. \end{theorem} We recall the notion of pointwise Lipschitz constant of a continuous function $u:\Omega\to Y$, defined as \begin{equation*} \lip u(x):=\limsup_{y\to x}\frac{\mathsf{d}_Y(u(x),u(y))}{\mathsf{d}(x,y)}=\limsup_{r\to 0}\sup_{y\in B_r(x)}\frac{\mathsf{d}_Y(u(x),u(y))}{r}\, . \end{equation*} Notice that since $u$ is continuous, the pointwise Lipschitz constant coincides with the approximate local Lipschitz constant, which, in turn, can be identified with the norm of the metric differential, see \cite{GigliTulyenev21}. \begin{proposition}\label{prop:lipfinite} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space and let $(Y,\mathsf{d}_Y)$ be a ${\sf CAT}(0)$ space. Let $\Omega\subset X$ be an open and bounded domain and let $u\in{\sf KS}^{1,2}(\Omega,Y)$ be a harmonic map. 
There exists a constant $c=c(n)$, where $n$ is the essential dimension of $(X,\mathsf{d},\mathfrak{m})$, such that \begin{equation*} \lip u(x)=\norm{\mathrm{md}_xu}\le c(n)\abs{\mathop{}\!\mathrm{d} u}(x)\, ,\quad\text{for $\mathfrak{m}$-a.e. $x \in \Omega$}\, . \end{equation*} \end{proposition} \begin{proof} The statement has been proved in \cite{GigliTulyenev21}. Notice that the assumption that $u$ is harmonic here plays a role only in the identification between approximate local Lipschitz constant and pointwise Lipschitz constant. Otherwise, the statement holds for any Sobolev function. \end{proof} \section{Auxiliary results} The aim of this section is to collect some technical results that will be useful for the proof of the main theorems. Often, the statements are the natural counterparts of analogous results proved in \cite{ZhangZhu18}. The main difference is the use of the Heat Flow as a substitute for averages on balls. This choice makes the connection with Laplacian estimates more transparent and it allows us to cover the \emph{weighted} case, corresponding to $\RCD(K,N)$ spaces with essential dimension $1\le n<N$, which is not considered in \cite{ZhangZhu18}. \medskip Let us recall a special case of the quadrilateral comparison from \cite[Corollary 2.1.3]{KorevaarSchoen} (see also the previous \cite{Reshetniak}). \begin{lemma}\label{lemma:elemCAT} Let $(Y,\mathsf{d}_Y)$ be a ${\sf CAT}(0)$ space. Let $\{P,Q,R,S\}$ be an ordered set of points in $Y$ and let us denote by $Q_m$ the midpoint of $QR$. Then \begin{align*} \left(\mathsf{d}_{Y}(P,S)-\mathsf{d}_Y(Q,R)\right)\mathsf{d}_Y(Q,R)\ge& \left(\mathsf{d}_Y^2(P,Q_m)-\mathsf{d}_Y^2(P,Q)-\mathsf{d}_Y^2(Q_m,Q)\right)\\ &+\left(\mathsf{d}_Y^2(S,Q_m)-\mathsf{d}_Y^2(S,R)-\mathsf{d}_Y^2(Q_m,R)\right)\,. \end{align*} \end{lemma} The following statement corresponds to \cite[Proposition 5.4]{ZhangZhu18}, originally proved for an Alexandrov space with curvature bounded below as source. 
We report here the strategy of the proof and indicate the minor changes that are needed in the present more general setting. We consider an $\RCD(K,N)$ metric measure space $(X,\mathsf{d},\mathfrak{m})$ and endow $X\times X$ with the canonical product structure. \begin{proposition}\label{prop:lapladist} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space and let $(Y,\mathsf{d}_Y)$ be a ${\sf CAT}(0)$ space. Let $\Omega\subset X$ be an open and bounded domain and let $u\in{\sf KS}^{1,2}(\Omega,Y)$ be a harmonic map. Then, denoting \begin{equation*} f(x,y)=\mathsf{d}_Y(u(x),u(y))\, , \end{equation*} it holds that $f\in W^{1,2}(\Omega\times\Omega)$ and \begin{equation*} \boldsymbol{\Delta}_{X\times X}f\ge 0\, . \end{equation*} \end{proposition} \begin{proof} The proof is divided into three steps. We first prove that, for any $p\in Y$, the function $\Omega\ni x\mapsto \mathsf{d}_Y(p,u(x))$ is sub-harmonic. Then we check that $f\in W^{1,2}(\Omega\times\Omega)$. Finally, we combine the two statements to infer that $f$ is sub-harmonic. \medskip \textbf{Step 1.} The fact that, for any point $p\in Y$, the function $\Omega\ni x\mapsto \mathsf{d}_Y(u(x),p)$ is sub-harmonic follows from the convexity of the function $\mathsf{d}_Y(\cdot,p)$ on $Y$, ensured by the ${\sf CAT}(0)$ condition, and from \autoref{thm:laplacomp}. \medskip \textbf{Step 2.} In order to verify that $f\in W^{1,2}(\Omega\times\Omega)$ we just need to observe that, when $x\in\Omega$ is fixed, $f_x(y):=f(x,y)$ is in $W^{1,2}(\Omega)$ and \begin{equation*} \int_{\Omega}\abs{\nabla f_x(y)}^2\mathop{}\!\mathrm{d}\mathfrak{m}(y)\le \int_{\Omega}\abs{\mathop{}\!\mathrm{d} u}^2\mathop{}\!\mathrm{d}\mathfrak{m}\, , \end{equation*} since $q\mapsto \mathsf{d}_Y(q,p)$ is a $1$-Lipschitz function.
Analogously, for any $y\in\Omega$ fixed, the function $f^y(x):=f(x,y)$ is in $W^{1,2}(\Omega)$ and it holds \begin{equation*} \int_{\Omega}\abs{\nabla f^y(x)}^2\mathop{}\!\mathrm{d}\mathfrak{m}\le \int_{\Omega}\abs{\mathop{}\!\mathrm{d} u}^2\mathop{}\!\mathrm{d}\mathfrak{m}\, . \end{equation*} The conclusion that $f\in W^{1,2}(\Omega\times \Omega)$ follows from the tensorization of Cheeger energies and Sobolev spaces. \medskip \textbf{Step 3.} In order to verify that $f$ is sub-harmonic on $\Omega\times \Omega$ we rely again on the tensorization of the Cheeger energies. We consider any non-negative Lipschitz function $\phi$ with compact support in $\Omega\times \Omega$. Then \begin{equation*} \nabla_{X\times X} \phi\cdot \nabla_{X\times X} f(x,y)=\nabla_x\phi\cdot \nabla_xf(x,y)\, +\, \nabla_y \phi\cdot\nabla _yf(x,y)\, ,\quad\text{$\mathfrak{m}\otimes \mathfrak{m}$-a.e. on $\Omega\times \Omega$}\, . \end{equation*} Then we compute \begin{align*} \int_{\Omega\times\Omega}\nabla_{X\times X} \phi\cdot \nabla_{X\times X} f(x,y)\mathop{}\!\mathrm{d}\mathfrak{m}(x)\mathop{}\!\mathrm{d}\mathfrak{m}(y)=&\int_{\Omega}\int_{\Omega}\left(\nabla_x\phi\cdot \nabla_xf(x,y)\right)\mathop{}\!\mathrm{d}\mathfrak{m}(x)\mathop{}\!\mathrm{d}\mathfrak{m}(y)\\ &+\int_{\Omega}\int_{\Omega} \left(\nabla_y \phi\cdot\nabla _yf(x,y)\right)\mathop{}\!\mathrm{d}\mathfrak{m}(x)\mathop{}\!\mathrm{d}\mathfrak{m}(y)\\ =&-\int_{\Omega}\int_{\Omega}\phi(x,y)\mathop{}\!\mathrm{d} \Delta _xf(x,y)\\ &-\int_{\Omega}\int_{\Omega}\phi(x,y)\mathop{}\!\mathrm{d}\Delta_yf(x,y)\\ \le &\, 0\, , \end{align*} where the last inequality follows from Step 1. The sub-harmonicity of $f$ on the product follows. \end{proof} Below we will apply the (global) Heat Flow to functions that are only locally defined on some open domain $\Omega\subset X$, where $(X,\mathsf{d},\mathfrak{m})$ is an $\RCD(K,N)$ metric measure space. 
It is understood that a point $x\in \Omega$ is fixed and we consider any global extension with polynomial growth of $f|_{U_x}$, where $U_x$ is a neighbourhood of $x$. None of the statements is affected by the specific choice of the extension, thanks to \cite[Lemma 2.53, Lemma 2.54]{MondinoSemola21}. \medskip We will need some asymptotic mean value inequalities, playing the counterpart of \cite[Proposition 3.2, Corollary 4.7, Corollary 5.6, Lemma 6.4]{ZhangZhu18} in the present setting. Here a key difference with respect to the strategy in \cite{ZhangZhu18} is that we will consider the short time asymptotics of the Heat Flow, rather than the asymptotics of averages on balls for small radii. We make a brief digression in order to motivate this choice. \smallskip On a smooth $n$-dimensional Riemannian manifold $(M,g)$, for any smooth function $f:M\to\mathbb{R}$ it holds \begin{equation*} \Delta f(x)=2(n+2)\left(\lim_{r\to 0}\frac{\fint_{B_r(x)}f\mathop{}\!\mathrm{d}\mathrm{vol}-f(x)}{r^2}\right)\, ,\quad\text{for any $x\in M$}\, , \end{equation*} where $\Delta$ denotes the Laplace-Beltrami operator.\\ The connection between asymptotics of averages on balls and the Laplacian is more delicate on $\RCD(K,N)$ metric measure spaces. This is well illustrated already at the level of smooth, weighted Riemannian manifolds. Indeed, if $(M,g,e^{-\phi}\mathrm{vol})$ is a smooth weighted Riemannian manifold and, as above, $f:M\to\mathbb{R}$ is a smooth function, then, denoting by $\Delta_{\phi}$ the weighted Laplacian associated to the metric measure structure $(M,\mathsf{d},e^{-\phi}\mathrm{vol})$ and by $\Delta$ the Laplace-Beltrami operator on $(M,g)$, it holds \begin{equation*} \Delta_{\phi}f(x)=\Delta f(x)-\nabla\phi(x)\cdot\nabla f(x)\, , \end{equation*} while \begin{equation*} \lim_{r\to 0}\frac{\fint_{B_r(x)}f\mathop{}\!\mathrm{d}\left(e^{-\phi}\mathrm{vol}\right)-f(x)}{r^2}=\frac{1}{n+2}\left[\frac{1}{2}\Delta f(x)-\nabla f(x)\cdot\nabla\phi(x)\right]\, .
\end{equation*} We notice that an extra factor depending on the gradient of the function at $x$ appears in the weighted case. Nevertheless, the identification between asymptotic of averages on balls and Laplacian, up to dimensional constants, holds at critical points. In particular it holds at local minima. \begin{proposition}\label{prop:lappoin} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space for some $K\in\mathbb{R}$ and $1\le N<\infty$ with essential dimension $1\le n\le N$. Let $(Y,\mathsf{d}_Y)$ be a ${\sf CAT}(0)$ space and let $\Omega\subset X$ be an open and bounded domain. Let $u\in {\sf KS}^{1,2}(\Omega, Y)$ be harmonic. Then, for $\mathfrak{m}$-a.e. $x\in \Omega$, \begin{equation}\label{eq:expansionheat} P_t\mathsf{d}_Y^2(u(\cdot),u(x))(x)=2 (n+2)\mathrm{e}^2_2[u](x)t+o(t)\, ,\quad\text{as $t\to 0$}\, . \end{equation} \end{proposition} \begin{proof} We rely on the theory developed in \cite{GigliTulyenev21} and on the convergence and stability theory from \cite{AmbrosioHonda17}. The strategy is to employ a blow-up argument. The rescalings of the map $u$ (more specifically, of the function $\mathsf{d}_Y(u(\cdot),u(x))$) are controlled thanks to \cite{GigliTulyenev21}, while the short time behaviour of the heat kernel is controlled through a classical blow-up argument near regular points. \medskip \textbf{Step 1.} We consider $n$-regular points $x\in X$. In particular, $\mathfrak{m}$-a.e. point $x\in X$ is $n$-regular and it holds that \begin{equation}\label{eq:scaling} X_i:=\left(X,r_i^{-1}\mathsf{d},\frac{1}{\mathfrak{m}(B_{r_i}(x))}\mathfrak{m},x\right)\to \left(\mathbb{R}^n,\mathsf{d}_{\mathrm{eucl}},c_n\mathscr{L}^n,0^n\right)\, , \end{equation} in the pointed measured Gromov-Hausdorff sense for any sequence $(r_i)_{i}$ such that $r_i\to 0$. \smallskip One of the outcomes of \cite{GigliTulyenev21} is the fact that, given $u\in {\sf KS}^{1,2}(\Omega, Y)$ as in the statement, for $\mathfrak{m}$-a.e. 
$x\in\Omega$ there exists a semi-norm $\mathrm{md}_xu:\mathbb{R}^n\to[0,\infty)$ such that the sequence of functions $f_i:=\mathsf{d}_Y(u(\cdot),u(x))/r_i$, considered along the sequence $X_i$ defined in \eqref{eq:scaling}, converges strongly in $L^2_{{\rm loc}}$ to $\mathrm{md}_xu:\mathbb{R}^n\to[0,\infty)$. This statement can be verified combining \cite[Proposition 3.6]{GigliTulyenev21} (see also \cite[Definition 3.3]{GigliTulyenev21} for the definition of approximate metric differentiability) with the continuity of $u$ from \autoref{thm:continuity} to turn approximate limits into full limits.\\ This argument extends \cite{Kirchheim94}, dealing with the case of Euclidean source space, and \cite{Cheeger99}, dealing with the case of scalar valued functions (see also \cite{Ambrosioetalembedding} for a proof tailored to the $\RCD$ setting and the recent \cite{HondaSire21}). \medskip \textbf{Step 2.} In this second step we compute \begin{equation*} \lim_{t\to 0}\frac{1}{t}P_t\left(\mathsf{d}_Y^2(u(y),u(x))\right)(x) \end{equation*} in terms of $\mathrm{md}_xu$ and relate it with \begin{align}\label{eq:fromGT21} \lim_{r\to 0}\fint_{B_r(x)}\frac{\mathsf{d}_Y^2(u(y),u(x))}{r^2}\mathop{}\!\mathrm{d}\mathfrak{m}(y)=&\, \mathrm{e}^2_2[u](x)=S^2_2(\mathrm{md}_x(u))\\ =&\fint_{B_1(0^n)}\abs{\mathrm{md}_xu(v)}^2\mathop{}\!\mathrm{d}\mathscr{L}^n(v)\, . \end{align} Let us notice that, denoting by $P_t^{\mathbb{R}^n}$ the standard Heat Flow on $\mathbb{R}^n$, \begin{equation}\label{eq:heatRn} P_1^{\mathbb{R}^n}\abs{\mathrm{md}_xu(\cdot)}^2(0^n)=\frac{1}{(4\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^n}e^{-\frac{\abs{v}^2}{4}}\abs{\mathrm{md}_xu(v)}^2\mathop{}\!\mathrm{d}\mathscr{L}^n(v)\, . \end{equation} Moreover, $\mathrm{md}_xu$ is a seminorm, hence \begin{equation}\label{eq:homogeneous} \fint_{B_r(0^n)}\abs{\mathrm{md}_xu(v)}^2\mathop{}\!\mathrm{d}\mathscr{L}^n(v)=r^2\fint_{B_1(0^n)}\abs{\mathrm{md}_xu(v)}^2\mathop{}\!\mathrm{d}\mathscr{L}^n(v)\, ,\quad\text{for any $r>0$}\, . 
\end{equation} The combination of \eqref{eq:heatRn} with \eqref{eq:homogeneous} gives that \begin{equation}\label{eq:euclidide} P_1^{\mathbb{R}^n}\abs{\mathrm{md}_xu(\cdot)}^2(0^n)=2(n+2)\fint_{B_1(0^n)}\abs{\mathrm{md}_xu(v)}^2\mathop{}\!\mathrm{d}\mathscr{L}^n(v)\, . \end{equation} Notice that, for any sequence $(r_i)_{i\in\mathbb{N}}$ such that $r_i\downarrow 0$, as $f_i$ converge locally in $L^2$ to $\mathrm{md}_xu(\cdot)$, $f_i^2$ converge locally in $L^1$ to $\abs{\mathrm{md}_xu(\cdot)}^2$ along the sequence $X_i$. Therefore, since \begin{equation*} \frac{1}{t}P_t\mathsf{d}_Y^2(u(\cdot),u(x))(x)=P_1^{X_i}f_i^2(x)\, ,\quad\text{for $t=r_i^2$}\, , \end{equation*} by the scaling properties of the Heat Flow and of the heat kernel, a stability argument using \cite[Lemma 4.11]{AmbrosioBrueSemola19} (see also \cite{AmbrosioHonda17,Ambrosioetalembedding}) proves that \begin{equation}\label{eq:stability} \lim_{t\to 0}\frac{1}{t}P_t\mathsf{d}_Y^2(u(\cdot),u(x))(x)=P_1^{\mathbb{R}^n}\abs{\mathrm{md}_xu(\cdot)}^2(0^n)\, . \end{equation} The combination of \eqref{eq:stability} with \eqref{eq:euclidide} and \eqref{eq:fromGT21} proves \eqref{eq:expansionheat}. \end{proof} \begin{remark} As a consistency check, we notice that, thanks to \autoref{rm:dist2}, \begin{equation*} \Delta \mathsf{d}_Y^2(u(\cdot),u(x))\ge 2 (n+2)\mathrm{e}^2_2[u]\mathfrak{m}\, , \end{equation*} in the sense of distributions on $\Omega$. Hence, by a slight variant of \autoref{prop:hfmeanLapla}, we can verify that \begin{align*} \liminf_{t\to 0}\frac{1}{t}P_t\mathsf{d}_Y^2(u(\cdot),u(x))(x) \ge & \liminf_{t\to 0}\frac{1}{t}\int _0^t\left(P_s\Delta \mathsf{d}_Y^2(u(\cdot),u(x))\right)(x)\mathop{}\!\mathrm{d} s\\ \ge &\liminf_{t\to 0}\frac{2(n+2)}{t}\int_0^t P_s\left(\mathrm{e}^2_2[u]\right)(x)\mathop{}\!\mathrm{d} s\, . \end{align*} If $x$ is a Lebesgue point for $\mathrm{e}^2_2[u]$, which is true for $\mathfrak{m}$-a.e.
$x\in\Omega$, then by \cite[Lemma 2.54]{MondinoSemola21} \begin{equation*} P_s\left(\mathrm{e}^2_2[u]\right)(x)\to \mathrm{e}^2_2[u](x)\, , \quad\text{as $s\to 0$}\, . \end{equation*} Hence \begin{equation*} \int_0^t P_s\left(\mathrm{e}^2_2[u]\right)(x)\mathop{}\!\mathrm{d} s=t\mathrm{e}^2_2[u](x)+o(t)\, , \quad\text{as $t\to 0$}\, . \end{equation*} Therefore \begin{equation*} P_t\mathsf{d}_Y^2(u(\cdot),u(x))(x)\ge 2(n+2)t\mathrm{e}^2_2[u](x)+o(t)\, , \quad\text{as $t\to 0$}\, , \end{equation*} which yields one of the inequalities in \eqref{eq:expansionheat}. \end{remark} The following proposition generalises \cite[Corollary 5.6]{ZhangZhu18} to an $\RCD$ source space. \begin{proposition}\label{prop:changeP} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space with essential dimension $1\le n\le N$. Let $(Y,\mathsf{d}_Y)$ be a ${\sf CAT}(0)$ space and let $\Omega\subset X$ be an open and bounded domain. Let $u\in {\sf KS}^{1,2}(\Omega, Y)$ be harmonic.\\ Then for $\mathfrak{m}$-a.e. $x_0\in \Omega$ it holds \begin{equation}\label{eq:changepoint} -P_t\mathsf{d}_Y^2(u(\cdot),P)(x_0)+\mathsf{d}_Y^2(u(x_0),P)\le - 2 (n+2)\mathrm{e}^2_2[u](x_0)t +o(t)\, ,\quad\text{as $t\to 0$}\, , \end{equation} for any $P\in Y$. \end{proposition} \begin{proof} We consider the function \begin{equation}\label{eq:transl} x\mapsto -\mathsf{d}_Y^2(u(x),P)+\mathsf{d}_Y^2(u(x_0),P) \end{equation} and notice that it vanishes at $x_0$, by its very definition. We claim that the statement holds for any point $x_0$ such that \autoref{prop:lipfinite} holds, \autoref{prop:lappoin} holds and $x_0$ is a Lebesgue point for the energy density $\mathrm{e}^2_2[u]$.\\ Indeed, since the function $-\mathsf{d}_Y^2(\cdot,P)$ appearing in \eqref{eq:transl} is $(-2)$-concave by the ${\sf CAT}(0)$ condition, by \autoref{thm:laplacomp} we have \begin{equation*} \Delta\left(-\mathsf{d}_Y^2(u(\cdot),P)+\mathsf{d}_Y^2(u(x_0),P) \right)\le -2(n+2)\mathrm{e}^2_2[u]\, \mathfrak{m}\, .
\end{equation*} Hence, applying the Heat Flow and taking into account the considerations above, arguing as in the proof of \autoref{prop:hfmeanLapla} we get \begin{equation*} -P_t\mathsf{d}_Y^2(u(\cdot),P)(x_0)+\mathsf{d}_Y^2(u(x_0),P)\le -2(n+2)tP_t\left(\mathrm{e}^2_2[u]\right)(x_0)+o(t)\, ,\quad\text{as $t\to 0$}\, . \end{equation*} If $x_0$ is a Lebesgue point for $\mathrm{e}^2_2[u]$, then by \cite[Lemma 2.54]{MondinoSemola21} \begin{equation*} -2(n+2)tP_t\left(\mathrm{e}^2_2[u]\right)(x_0)+o(t)=-2(n+2)t(\mathrm{e}^2_2[u](x_0)+o(1))+o(t)\, ,\quad \quad\text{as $t\to 0$}\, . \end{equation*} The claimed \eqref{eq:changepoint} follows. \end{proof} The following asymptotic inequality generalises \cite[Lemma 6.4]{ZhangZhu18} to the present setting. \begin{proposition}\label{prop:intw} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space. Let $(Y,\mathsf{d}_Y)$ be a ${\sf CAT}(0)$ space and let $\Omega\subset X$ be an open and bounded domain. Let $u\in {\sf KS}^{1,2}(\Omega, Y)$ be harmonic.\\ For any $z\in\Omega$ and $P\in Y$, let us set \begin{equation*} w_{z,P}(\cdot):=\mathsf{d}_Y^2(u(\cdot),u(z))-\mathsf{d}_Y^2(u(\cdot),P)+\mathsf{d}_Y^2(P,u(z))\, . \end{equation*} Then for $\mathfrak{m}$-a.e. $x_0\in\Omega$ it holds that \begin{equation}\label{eq:asymChin} \limsup_{t\to 0} \frac{1}{t}P_t \left(w_{x_0,P}(\cdot)\right)(x_0)\le 0\, , \end{equation} for every $P\in Y$. \end{proposition} \begin{proof} The statement follows from the combination of \autoref{prop:lappoin} with \autoref{prop:changeP}. \end{proof} Let us recall the Laplacian comparison for $\RCD(K,N)$ spaces from \cite{Gigli15}. \begin{theorem}\label{thm:Laplaciancomparison} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space for some $K\in\mathbb{R}$ and $1\le N<\infty$. Fix $p\in X$. 
Then the function $\mathsf{d}^2(p,\cdot)$ admits a locally measure valued Laplacian, bounded from above: \begin{equation}\label{eq:Laplacecomparison} \Delta \mathsf{d}^2(p,\cdot)\le f_{K,N}(\mathsf{d}(\cdot, p))\mathfrak{m}\, , \end{equation} where $f_{K,N}:[0,\infty)\to(0,\infty)$ is a continuous function. When $K=0$, we can take $f_{0,N}:=2N$ and, more generally, $f_{K,N}(0)=2N$ for any $K\in\mathbb{R}$. \end{theorem} The following observations will be useful for our future purposes. \begin{lemma}\label{lemma:elemdistprod} The following elementary identity holds: \begin{equation*} \mathsf{d}_X^2(x,y)=2\mathsf{d}^2_{X\times X}\left((x,y),D\right)\, , \quad\text{for any $x,y\in X$}\, , \end{equation*} where \begin{equation*} D:=\{(z,z)\, : \, z\in X\} \end{equation*} is the diagonal in the product space $X\times X$ and \begin{equation*} \mathsf{d}^2_{X\times X}\left((x,y),D\right):=\inf_{w\in X}\left\{\mathsf{d}^2\left((x,y),(w,w)\right)\right\}\,. \end{equation*} In particular, if $(X,\mathsf{d},\mathfrak{m})$ is an $\RCD(K,N)$ metric measure space, then \begin{equation*} (x,y)\mapsto \mathsf{d}_X^2(x,y)\, , \end{equation*} as a function on $X\times X$, has locally measure valued Laplacian locally bounded from above by a continuous function. \end{lemma} \begin{proof} The first part of the statement is completely elementary. \smallskip In order to prove the second part of the statement, we observe that $X\times X$ is an $\RCD(K,2N)$ metric measure space. Then we recall that the Laplacian comparison for distance functions (squared) from points extends naturally to a Laplacian comparison for distance functions (squared) from closed sets in this setting, see for instance \cite{CavallettiMondino20}.
Hence \begin{equation*} \Delta_{X\times X}\mathsf{d}^2(\cdot,\cdot)\le f_{K,2N}\left(\mathsf{d}(\cdot,\cdot)/2\right)\, \end{equation*} in the sense of distributions on $X\times X$, where $f_{K,2N}$ is the function appearing in the classical Laplacian comparison for distance functions, see \autoref{thm:Laplaciancomparison}. \end{proof} We recall that, given probability measures $\mu$ and $\nu$ on $X$, an admissible transport plan between $\mu$ and $\nu$ is a probability measure $\Pi$ on $X\times X$ whose push-forwards via the projections onto the first and second factors are $\mu$ and $\nu$, respectively. More precisely, if $\pi_1:X\times X\to X$ is defined by $\pi_1(x,y)=x$ and $\pi_2:X\times X\to X$ is defined by $\pi_2(x,y)=y$, then \begin{equation} \Pi\left(\pi_1^{-1}(A)\right)=\mu(A)\, ,\quad \Pi\left(\pi_2^{-1}(B)\right)=\nu(B)\, , \end{equation} for any Borel sets $A,B\subset X$. \begin{lemma}\label{lemma:operatordistprod} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space for some $K\in\mathbb{R}$ and $1\le N<\infty$. Let $(p,q)\in X\times X$ and $(x_0,y_0)\in X\times X$. For any $r>0$, let $\Pi_r$ be an admissible transport plan between the heat kernels $P_r\delta_{x_0}$ and $P_r\delta_{y_0}$. Then \begin{align*} \limsup_{r\to 0}&\frac{\int \mathsf{d}^2_{X\times X}\left((x,y), (p,q)\right)\mathop{}\!\mathrm{d} \Pi_r(x,y)-\mathsf{d}^2_{X\times X}\left((x_0,y_0), (p,q)\right)}{r}\\ &\le 2f_{K,N}\left(\max\{\mathsf{d}(x_0,p),\mathsf{d}(y_0,q)\}\right)\, , \end{align*} where $f_{K,N}:[0,\infty)\to (0,\infty)$ is defined in \eqref{eq:Laplacecomparison}. \end{lemma} \begin{proof} By the very definition of the product metric measure structure on $X\times X$ it holds \begin{equation*} \mathsf{d}^2_{X\times X}\left((x,y), (p,q)\right)=\mathsf{d}^2_X(x,p)+\mathsf{d}^2_X(y,q)\, .
\end{equation*} Therefore, denoting by $\Pi$ an admissible plan between probabilities $\mu,\nu\in \mathcal{P}(X)$, it holds \begin{align*} \int_{X\times X} \mathsf{d}^2_{X\times X}\left((x,y), (p,q)\right)\mathop{}\!\mathrm{d} \Pi(x,y)&= \int _{X\times X}\left( \mathsf{d}^2_X(x,p)+\mathsf{d}^2_X(y,q)\right)\mathop{}\!\mathrm{d} \Pi(x,y)\\ &=\int _X \mathsf{d}^2_X(x,p)\mathop{}\!\mathrm{d}\mu(x)+\int _X\mathsf{d}^2_X(y,q)\mathop{}\!\mathrm{d}\nu(y)\, . \end{align*} In particular, with the notation of the statement, \begin{align*} \int_{X\times X} \mathsf{d}^2_{X\times X}\left((x,y), (p,q)\right)\mathop{}\!\mathrm{d} \Pi_r(x,y)&-\mathsf{d}^2_{X\times X}\left((x_0,y_0), (p,q)\right)\\ = P_r\left(\mathsf{d}^2(\cdot,p)\right)(x_0)-&\mathsf{d}^2(x_0,p)+P_r\left(\mathsf{d}^2(\cdot,q)\right)(y_0)-\mathsf{d}^2(y_0,q)\, . \end{align*} Hence \begin{align*} \limsup_{r\to 0}&\frac{1}{r}\left[\int_{X\times X} \mathsf{d}^2_{X\times X}\left((x,y), (p,q)\right)\mathop{}\!\mathrm{d} \Pi_r(x,y)-\mathsf{d}^2_{X\times X}\left((x_0,y_0), (p,q)\right)\right] \\ \le\, & \limsup_{r\to 0}\frac{1}{r}\left[P_r\left(\mathsf{d}^2(\cdot,p)\right)(x_0)-\mathsf{d}^2(x_0,p)\right]+\limsup_{r\to 0}\frac{1}{r}\left[P_r\left(\mathsf{d}^2(\cdot,q)\right)(y_0)-\mathsf{d}^2(y_0,q)\right]\\ \le\, & 2f_{K,N}\left(\max\{\mathsf{d}(x_0,p),\mathsf{d}(y_0,q)\}\right)\, , \end{align*} where the last inequality follows from \eqref{eq:Laplacecomparison}. \end{proof} \begin{lemma} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space. Let $p,q\in X$ and let $\Pi_r$ be an admissible transport plan between the heat kernels at time $r>0$ from $p$ and $q$, $P_r\delta_p$ and $P_r\delta_q$, respectively.
Then it holds \begin{align} 0\le &\liminf_{r\to 0}\frac{\int_{X\times X}\left(\mathsf{d}^2(x,p)+\mathsf{d}^2(y,q)\right)\mathop{}\!\mathrm{d}\Pi_r(x,y)}{r}\label{eq:below}\\ \le & \limsup_{r\to 0}\frac{\int_{X\times X}\left(\mathsf{d}^2(x,p)+\mathsf{d}^2(y,q)\right)\mathop{}\!\mathrm{d}\Pi_r(x,y)}{r}\le 4N\, .\label{eq:above} \end{align} \end{lemma} \begin{proof} The inequality \eqref{eq:below} is trivial, since the right hand side is the integral of a non-negative function. \smallskip In order to prove \eqref{eq:above} we notice that, since $\Pi_r$ is an admissible transport plan between $P_r\delta_p$ and $P_r\delta_q$, it holds \begin{equation} \int_{X\times X}\left(\mathsf{d}^2(x,p)+\mathsf{d}^2(y,q)\right)\mathop{}\!\mathrm{d}\Pi_r(x,y)=P_r\left(\mathsf{d}^2(\cdot, p)\right)(p)+P_r\left(\mathsf{d}^2(\cdot,q)\right)(q)\, . \end{equation} The conclusion follows from the Laplacian comparison \autoref{thm:Laplaciancomparison}, which, since $f_{K,N}(0)=2N$, bounds each of the two terms by $2Nr+o(r)$ as $r\to 0$. \end{proof} \section{Approximate maximum principles and perturbation arguments}\label{sec:perturbation} In this section we construct perturbations of functions with measure valued Laplacian uniformly bounded from above, with the aim of slightly moving around their minimum points. This ability will be fundamental for the subsequent developments of the note. The idea finds its roots in Jensen's approximate maximum principle for semiconcave functions \cite{Jensen88}. In the Euclidean space, perturbations are constructed thanks to affine functions, which perturb the function at the first order without affecting it at the second order, as they have vanishing Hessian.\\ The principle was later extended to smooth manifolds with sectional curvature bounded from below in \cite{Cabre98}, where the fundamental novelty is that perturbations are constructed through distance functions (squared), as affine functions do not have a natural counterpart.
The idea was further developed in \cite{Kim04} and \cite{WangZhang13}, where manifolds with non-negative Ricci curvature and general lower Ricci curvature bounds, respectively, were considered.\\ A different strategy to construct well behaved perturbations on Alexandrov spaces with sectional curvature bounded from below was proposed independently in the unpublished manuscript \cite{Petrunin96} and later studied in \cite{ZhangZhu12}. The idea is to combine two perturbation arguments. The first one is used to move the minimum to a regular point in the sense of Alexandrov geometry. Then the concave, biLipschitz coordinate functions are employed as a replacement of affine functions in the proof of the Euclidean approximate maximum principle. A key observation is that the concave coordinate functions might affect the behaviour of the original function at second order (while affine functions do not) but the error has the right sign. \smallskip Here we partially generalize the strategy put forward in \cite{Cabre98,Kim04,WangZhang13} to the setting of $\RCD(K,N)$ metric measure spaces. See also \cite{Corderoetal01,Savin07}.\\ Before doing so, we recall the classical smooth Riemannian result and briefly outline its proof from \cite{WangZhang13}. Let us introduce some notation: given a sufficiently regular function $u:\Omega\to\mathbb{R}$, where $\Omega$ is an open domain inside a smooth Riemannian manifold and $E$ is a compact set, for any $a>0$ we let \begin{equation} A_a(E,\Omega,u):=\left\{x\in \overline{\Omega}\, :\, \exists \, y\in E\, :\, \inf_{\overline{\Omega}}\left(u+\frac{a}{2}\mathsf{d}^2_y\right)=u(x)+\frac{a}{2}\mathsf{d}^2_y(x) \right\}\, . \end{equation} \begin{theorem}\label{thm:ABPsmooth} Let $(X,\mathsf{d},\mathfrak{m})$ be a smooth metric measure space with weighted Ricci curvature bounded from below by $-K\le 0$. Let $\Omega\subset X$ be an open domain and let $u:\Omega\to\mathbb{R}$ be a $C^2$ function.
Then, for any compact set $E\subset X$ and for any $a>0$ such that \begin{equation} A_{a}(E,\Omega,u)\subset \Omega\, , \end{equation} it holds: \begin{equation}\label{eq:estsmooth} \mathfrak{m}(E)\le \int_{A_a}\exp\left(\frac{1}{2}K\left(\frac{\abs{\nabla u}^2}{a}\right)+\frac{\Delta u}{a}\right)\mathop{}\!\mathrm{d} \mathfrak{m}\, . \end{equation} \end{theorem} We outline the strategy, borrowed from \cite{WangZhang13} (see also the previous \cite{Kim04,Cabre98,McCann01,Corderoetal01}). First, we claim that the map $T:X\to X$ defined by $T(x):=\exp_x(a^{-1}\nabla u(x))$ is surjective from $A_a(E,\Omega,u)$ onto $E$. This can be easily verified since for any $y\in E$ there exists $x\in A_a(E,\Omega,u)$ such that \begin{equation*} \inf_{\overline{\Omega}}\left(u+\frac{a}{2}\mathsf{d}^2_y\right)=u(x)+\frac{a}{2}\mathsf{d}^2_y(x)\, . \end{equation*} The first variation implies that \begin{equation*} y=\exp_x(a^{-1}\nabla u(x))\, . \end{equation*} Then, we interpolate between the identity map and the map $T$ via the maps $T^t$ defined by \begin{equation*} T^t(x):=\exp_x(ta^{-1}\nabla u(x))\, , \quad \text{for all } t\in[0,1]\,. \end{equation*} Since $T=T^1$ is surjective from $A_a(E,\Omega,u)$ onto $E$, in order to get \eqref{eq:estsmooth} it is sufficient to estimate its Jacobian determinant. We set \begin{equation*} J(t,x):=\lim_{r\to 0}\frac{\mathfrak{m}(T^t(B_r(x)))}{\mathfrak{m}(B_r(x))}\, . \end{equation*} Since the weighted Ricci curvature of $(X,\mathsf{d},\mathfrak{m})$ is bounded from below by $-K$, setting $l(t,x):=\log J(t,x)$, $l$ is well defined for any $t\in[0,1)$ and it satisfies the second order differential inequality \begin{equation*} l''(t,x)\le \frac{K}{2}\left(\frac{\abs{\nabla u(x)}}{a}\right)^2\, . \end{equation*} Moreover, the initial conditions \begin{equation}\label{eq:initialsmooth} l(0,x)=0\, ,\quad l'(0,x)=a^{-1}\Delta u(x) \end{equation} are met.
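To fix the ideas, we record the elementary integration behind this comparison: since $l(0,x)=0$ and $l'(0,x)=a^{-1}\Delta u(x)$, integrating the differential inequality twice yields \begin{equation*} l(t,x)\le \frac{t}{a}\Delta u(x)+\frac{K t^2}{4}\left(\frac{\abs{\nabla u(x)}}{a}\right)^2\, ,\quad\text{for every $t\in[0,1)$}\, , \end{equation*} so that, letting $t\uparrow 1$ and exponentiating, the Jacobian $J(1,x)=e^{l(1,x)}$ is controlled by an exponential of $\frac{\Delta u(x)}{a}$ and of $K$ times the squared gradient term, as in \eqref{eq:estsmooth}.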
By ODE comparison, we get a bound for $l$ at all later times and therefore we bound the Jacobian $J(t,x)$ for all later times $t\in[0,1]$. Then \eqref{eq:estsmooth} follows by integration of the Jacobian bound thanks to the area formula. \begin{remark} For suitably chosen $a$ and $y$, the function \begin{equation*} u_{a,y}:=u+\frac{a}{2}\mathsf{d}^2_y\, , \end{equation*} is a small perturbation of $u$. In particular, for $a$ very small the minima should converge to the minima of the function $u$. Moreover, if $x$ is a minimum point of $u_{a,y}$, then, neglecting the regularity issues, \begin{equation*} 0\le \Delta u_{a,y}(x)=\Delta u(x)+\frac{a}{2}\Delta \mathsf{d}^2_y(x)\le \Delta u(x)+\frac{a}{2}C_{K,N}\, . \end{equation*} Hence \begin{equation*} \Delta u(x)\ge -\frac{a}{2}C_{K,N}\, . \end{equation*} This formal computation suggests that \autoref{thm:ABPsmooth} could be useful to move around minimum points of $u$ via the perturbation $u_{a,y}$, still controlling the Laplacian from below. Notice that, at a qualitative level, \eqref{eq:estsmooth} shows that if $E$ has positive measure, then the set of touching points $A_a(E,\Omega,u)$ has positive measure too. \end{remark} Two deep difficulties in implementing such a strategy in the non-smooth setting of $\RCD(K,N)$ spaces are the following: \begin{itemize} \item In a first step, one proves estimates that are \emph{pointwise along the geodesics} (using Jacobi fields computations on each single geodesic) and then, in a second step, such estimates are integrated to get the desired integral bounds. In the non-smooth setting, Jacobi fields computations are not available and typically one works with Wasserstein geodesics (which correspond to ``sets of positive measure'' in the space of geodesics) in order to take advantage of optimal transport tools.
\item The initial conditions \eqref{eq:initialsmooth} (which play a key role in the argument) are met since the differential at the origin of the exponential map in a smooth Riemannian manifold is the identity map of the tangent space. This observation has no clear counterpart in the non-smooth setting. \end{itemize} To overcome these difficulties, we introduce several new ingredients. A fundamental role is played by the Hopf-Lax semigroup and by the fact that it preserves Laplacian upper bounds (compare with the recent \cite{MondinoSemola21} by the authors). A key observation is that the Hopf-Lax semigroup plays a role similar to that of the exponential map, with the two-fold advantage of not needing smoothness of the ambient space and of communicating well with the synthetic lower Ricci bounds (thanks to a deep duality discovered by Kuwada \cite{Kuwada10}, see also \cite{AGS15} for the extension to the $\RCD$ setting). The theory of Regular Lagrangian Flows \cite{AmbrosioTrevisan} (see also \cite{GigliViolo21} for some useful localised versions) is then a key tool in order to develop an Eulerian counterpart, based on the continuity equation (well suited for the non-smooth $\RCD$ setting), of the smooth Lagrangian perspective given by classical Jacobi field computations along geodesics. \smallskip We will not look for the sharpest possible estimate, regarding the dependence on the various parameters, but rather for a quantitative one sufficient for the subsequent purposes of the present note. \begin{theorem}\label{thm:mainperturbJensen} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space for some $K\in\mathbb{R}$ and $1\le N<\infty$. Let $\Omega\subset X$ be an open domain and let $u:\Omega\to\mathbb{R}$ belong to $W^{1,2}_{{\rm loc}}(\Omega)\cap C(\Omega)$ with a strict minimum point on $\Omega$.
Let us assume that $u$ admits measure valued Laplacian on $\Omega$ with \begin{equation} \Delta u\le L\mathfrak{m}\, \quad\text{on $\Omega$}\, , \end{equation} for some constant $L>0$. Then, for any compact set $E\subset X$ and for any $a>0$ sufficiently small so that \begin{equation} A_{a}(E,\Omega,u)\subset \Omega\, , \end{equation} the following estimate holds: \begin{equation}\label{eq:quantest} \mathfrak{m}(E)\le C(K,N,\Omega,a,L)\, \mathfrak{m}(A_a(E,\Omega,u))\, , \end{equation} for some explicit constant $C(K,N,\Omega,a,L)>0$.\\ In particular, if $\mathfrak{m}(E)>0$, then $\mathfrak{m}(A_a(E,\Omega,u))>0$. \end{theorem} \begin{proof} The proof is divided into four steps. We are going to consider a $W_2$-geodesic formally induced by the map $T^t(x):=\exp_x(ta^{-1}\nabla u(x))$ between a suitable probability measure concentrated on $A_{a}(E,\Omega,u)\subset \Omega$ and the normalized restriction of $\mathfrak{m}$ to $E$ and then reversing time. We view this Wasserstein geodesic as a solution of the continuity equation, where the vector field is explicitly determined through the Hopf-Lax semigroup from $u$, see \cite{GigliHan15}. The propagation of Laplacian bounds via the Hopf-Lax semigroup (see the previous \cite{MondinoSemola21}) provides uniform one sided estimates for the divergence of the vector field along the solution of the continuity equation. To conclude, we will combine these one sided estimates with a regularization procedure from \cite{GigliMosconi15} and with \cite[Proposition 5.3]{GigliViolo21}, which is a local version of the estimates obtained in \cite{AmbrosioTrevisan}, to get uniform bounds for the density $\rho_t$ of the interpolant $\mu_t$, which will ultimately show \eqref{eq:quantest}. \medskip \textbf{Step 1.} Let us consider a ball $B_R(q)\Subset \Omega$, where $q\in\Omega$ is the strict minimum point of $u$. Up to adding a constant, which does not affect the statement, we assume that $u(q)<0$.
Then, we consider a continuous extension with compact support of $u$ from $B_R(q)$ to $X$ such that $\inf_Xu=u(q)$. Since there is no risk of confusion, we shall keep the notation $u$ also for the global extension of the original function $u:\Omega\to\mathbb{R}$. Then we let \begin{equation} u^c_a(y):=\inf_{x\in X}\left\{u(x)+\frac{a}{2}\mathsf{d}^2(x,y)\right\}\, . \end{equation} For $a>0$ sufficiently small and for $y\in B_R(q)$, it is easy to verify that the infimum can be restricted to the original set of definition, i.e. \begin{equation} u^c_a(y):=\inf_{x\in \overline{\Omega}}\left\{u(x)+\frac{a}{2}\mathsf{d}^2(x,y)\right\}\, . \end{equation} Hence \begin{equation} \frac{u^c_a(y)}{a}=\inf_{x\in\overline{\Omega}}\left\{\frac{1}{2}\mathsf{d}^2(x,y)-\left(\frac{-u(x)}{a}\right)\right\}=\inf_{x\in X}\left\{\frac{1}{2}\mathsf{d}^2(x,y)-\left(\frac{-u(x)}{a}\right)\right\}\, . \end{equation} In particular, the couple $(\phi,\psi):=(-a^{-1}u,a^{-1}u^c_a)$ verifies \begin{equation} \phi(x)+\psi(y)\le \frac{1}{2}\mathsf{d}^2(x,y)\, ,\quad\text{for any $x,y\in X$}\, , \end{equation} the function $\psi$ is $c$-concave and for any $y\in E$ there exists $x\in A_a(E,\Omega,u)$ such that \begin{equation}\label{eq:almostdual} \phi(x)+\psi(y)=\frac{1}{2}\mathsf{d}^2(x,y)\, . \end{equation} Let $\mu_0:=\frac{1}{\mathfrak{m}(E)}\mathfrak{m}\res E$ be the probability measure with constant density w.r.t. $\mathfrak{m}$ concentrated on $E$.\\ Since $\psi$ is a $c$-concave function and $\mu_0$ is absolutely continuous with respect to $\mathfrak{m}$ with bounded density and bounded support, we can consider the Wasserstein geodesic $(\mu_s)_{s\in[0,1]}$ induced by exponentiation from $\mu_0$ by $\psi$, borrowing the terminology from \cite{GigliRajalaSturm}.\\ We shall denote by $\psi^c$ the function obtained from $\psi$ by $c$-duality, i.e. \begin{equation} \psi^c(x):=\inf_{y\in X}\left\{\frac{\mathsf{d}^2(x,y)}{2}-\psi(y)\right\}\, , \quad\text{for any $x\in X$}\, . 
\end{equation} It is easy to verify that $\psi$ and $\psi^c$ are Lipschitz functions with compact support. Moreover, \begin{equation} \psi^c(x)+\psi(y)\le\frac{1}{2}\mathsf{d}^2(x,y)\, ,\quad\text{for any $x,y\in X$} \end{equation} and for any $y\in E$ there exists $x\in A_a(E,\Omega,u)$ such that \begin{equation}\label{eq:dual} \psi^c(x)+\psi(y)=\frac{1}{2}\mathsf{d}^2(x,y)\, . \end{equation} The above imply that $-a^{-1}u\le \psi^c$ everywhere and $-a^{-1}u=\psi^c$ on $ A_a(E,\Omega,u)$.\\ Furthermore, $\mu_1$ is concentrated on $A_a(E,\Omega,u)$ and, by \cite{GigliRajalaSturm} and \eqref{eq:dual}, we get that $(\psi,\psi^c)$ is an optimal couple of Kantorovich potentials for the geodesic $(\mu_t)_{t\in[0,1]}$. Again by \cite{GigliRajalaSturm}, $\mu_t\ll\mathfrak{m}$ for every $t\in[0,1)$. By the general theory of Wasserstein geodesics on geodesic metric spaces, it holds \begin{align*} &\mathcal{Q}_t(-\psi)+\mathcal{Q}_{1-t}(-\psi^c)\ge 0, \quad \text{everywhere}\\ &\mathcal{Q}_t(-\psi)+\mathcal{Q}_{1-t}(-\psi^c)=0\, ,\quad\text{on $\mathrm{supp}\, \mu_t$, for any $t\in [0,1]$.} \end{align*} Moreover, it is easy to verify that \begin{align} &\mathcal{Q}_{1-t}(a^{-1}u)\ge \mathcal{Q}_{1-t}(-\psi^c)\, , \quad \text{everywhere} \label{eq:conf1}\\ &\mathcal{Q}_{1-t}(a^{-1}u)= \mathcal{Q}_{1-t}(-\psi^c)\, ,\quad\text{on $\mathrm{supp}\, \mu_t$ for every $t\in[0,1]$.} \label{eq:conf2} \end{align} \medskip \textbf{Step 2.} In this step we introduce regularized potentials that induce the Wasserstein geodesic $(\mu_t)_{t\in[0,1]}$, whose existence is guaranteed by \cite{GigliMosconi15}. 
Indeed, for any $t\in (0,1)$ by \cite[Theorem 3.13]{GigliMosconi15} there exists a Lipschitz function with compact support $\eta_t:X\to\mathbb{R}$ such that \begin{equation}\label{eq:inteta} -\mathcal{Q}_t(-\psi)\le \eta_t\le \mathcal{Q}_{1-t}(-\psi^c) \end{equation} and $\eta_t\in D(\Delta)$ with \begin{equation}\label{eq:roughlapla} \norm{\Delta \eta_t}_{L^{\infty}}\le C(t)<\infty\, , \end{equation} where $C:(0,1)\to(0,\infty)$ is a continuous function depending on $\norm{\phi}_{\infty}$, $K$ and $N$, blowing up near the boundary points.\\ The combination of \eqref{eq:inteta} with \eqref{eq:conf1} and \eqref{eq:conf2} proves that \begin{equation}\label{eq:pinchetat} \eta_t\le \mathcal{Q}_{1-t}(a^{-1}u)\, , \end{equation} everywhere and \begin{equation}\label{eq:touchetat} \eta_t= \mathcal{Q}_{1-t}(a^{-1}u)\, ,\quad\text{on $\mathrm{supp}\, \mu_t\, ,$ for every $t\in[0,1]$.} \end{equation} Arguing as in the proof of \cite[Proposition 3.7]{BrueSemolaCPAM} it is possible to verify that the couple $(\mu_t,-\nabla\eta_t)$ is a solution of the continuity equation \begin{equation} \frac{\partial\mu_t}{\partial t}+\div (-\nabla \eta_t\mu_t)=0\, ,\quad\text{for every $t\in (0,1)$}\, . \end{equation} Thanks to \eqref{eq:roughlapla}, the theory of Regular Lagrangian Flows on $\RCD$ spaces from \cite{AmbrosioTrevisan} can be applied between any intermediate times $0<s<r<1$ along the solution of the continuity equation $(\mu_t,-\nabla\eta_t)$. \medskip \textbf{Step 3.} The goal of this step is to uniformly control the positive part of the Laplacian of $\eta_t$ for any $t\in[0,1]$.\\ This follows indeed from the assumption that $u$ has measure valued Laplacian with uniformly bounded positive part combined with \eqref{eq:pinchetat}, \eqref{eq:touchetat}, and with a variant of the argument introduced in \cite[Section 4]{MondinoSemola21}, in turn building on top of \cite{Kuwada10,AGS15}. 
We will first obtain a uniform bound for the positive part of the Laplacian of $\mathcal{Q}_s(a^{-1}u)$, for any $s\in [0,1]$.\\ Let us consider $s\in[0,1]$ and for any $x\in B_R(q)$ we let $x_s\in B_R(q)$ be a point such that \begin{equation*} \mathcal{Q}_s(a^{-1}u)(x)=\inf_{y\in X}\left\{\frac{\mathsf{d}^2(x,y)}{2s}+a^{-1}u(y)\right\}=\frac{\mathsf{d}^2(x,x_s)}{2s}+a^{-1}u(x_s)\, \end{equation*} and $x_s$ minimizes the distance from $x$ among all points with the property above. Then \begin{equation}\label{eq:ineqgeneral} \mathcal{Q}_s(a^{-1}u)(z)\le \frac{\mathsf{d}^2(z,y)}{2s}+a^{-1}u(y)\, ,\quad\text{for any $z,y\in X$} \end{equation} and \begin{equation}\label{eq:idr0} \mathcal{Q}_s(a^{-1}u)(x)=\frac{\mathsf{d}^2(x,x_s)}{2s}+a^{-1}u(x_s)\, . \end{equation} In particular, we can easily deduce the classical estimate for the Hopf-Lax semigroup \begin{equation}\label{eq:boundHL} \frac{\mathsf{d}^2(x,x_s)}{s}\le a^{-1}\abs{u(x)-u(x_s)}\le a^{-1}\mathrm{osc}_{\Omega}u\, ,\quad\text{for any $x\in B_R(q) $}\, . \end{equation} For any $r\ge 0$, we let $\Pi_r$ be the optimal transport plan for quadratic cost between probability measures $P_r\delta_x$ and $P_r\delta_{x_s}$, where we denoted by $P_r\delta_{p}$ the heat kernel measure with centre $p\in X$ at time $r$. Then we integrate both sides of \eqref{eq:ineqgeneral} with respect to $\Pi_r$ on $X\times X$ and get \begin{align}\label{eq:conseqcontract} P_r\mathcal{Q}_s\left(a^{-1}u\right)(x)\le &\, \frac{W_2^2(P_r\delta_x,P_r\delta_{x_s})}{2s}+a^{-1}P_ru(x_s)\\ \le &\, \frac{e^{-2Kr}\mathsf{d}^2(x,x_s)}{2s}+a^{-1}P_ru(x_s)\, . 
\end{align} Subtracting \eqref{eq:idr0} from both sides, dividing by $r$ and taking the $\limsup$ as $r\downarrow 0$, taking into account the Wasserstein contractivity of the Heat Flow under the $\RCD(K,\infty)$ condition \cite{AGSDuke,AGMS}, we formally get \begin{equation*} \Delta \mathcal{Q}_s\left(a^{-1}u\right)(x)\le -\frac{K\mathsf{d}^2(x,x_s)}{s}+a^{-1}\Delta u(x_s)\, , \end{equation*} which gives a uniform upper bound for the positive part of the Laplacian of $\mathcal{Q}_s\left(a^{-1}u\right)$ as soon as we can uniformly bound $\mathsf{d}^2(x,x_s)/s$ with respect to $x$ and $s$ and uniformly bound the positive part of $\Delta u$. Next, we make the above argument rigorous. \smallskip Recall that by construction of the regularized Kantorovich potentials, $\eta_t$ belongs to the domain of the Laplacian. In particular, see \cite[Lemma 2.56]{MondinoSemola21} for instance, for $\mathfrak{m}$-a.e. $x\in X$ it holds \begin{equation*} \Delta \eta_t(x)=\lim_{r\to 0}\frac{P_r\eta_t(x)-\eta_t(x)}{r}\, . \end{equation*} Since $\eta_t(x)=\mathcal{Q}_{1-t}(a^{-1}u)(x)$ for every $x\in \supp(\mu_t)$, we can use \eqref{eq:conseqcontract} and \eqref{eq:idr0} to get \begin{equation*} \frac{P_r\eta_t(x)-\eta_t(x)}{r}\le \left(\frac{e^{-2Kr}-1}{2r}\right)\frac{\mathsf{d}^2(x,x_{1-t})}{1-t}+a^{-1}\frac{P_ru(x_{1-t})-u(x_{1-t})}{r}\, ,\quad\text{for any $r>0$}\,. \end{equation*} Hence, taking the $\limsup$ as $r\to 0$ we infer that \begin{equation*} \Delta\eta_t(x)\le -K\frac{\mathsf{d}^2(x,x_{1-t})}{1-t}+a^{-1}\limsup_{r\to 0}\frac{P_ru(x_{1-t})-u(x_{1-t})}{r}\, ,\quad\text{for $\mathfrak{m}$-a.e. $x\in\supp (\mu_t)$}\, . \end{equation*} Since by assumption $\Delta u\le L$ on $\Omega$, employing \autoref{prop:hfmeanLapla} and then \eqref{eq:boundHL}, we obtain that \begin{equation}\label{eq:uniformupper} \Delta\eta_t(x)\le -K\frac{\mathsf{d}^2(x,x_{1-t})}{1-t}+a^{-1}L\le -Ka^{-1}\mathrm{osc}_{\Omega}u+a^{-1}L\, , \end{equation} for $\mathfrak{m}$-a.e. 
$x\in\supp(\mu_t)$ and for every $t\in(0,1]$. \medskip \textbf{Step 4.} Let us complete the proof of \eqref{eq:quantest} combining the previous ingredients with a limiting argument and \cite[Proposition 5.3]{GigliViolo21}. \smallskip We assume $K\le 0$ and set \begin{equation} C:=-Ka^{-1}\mathrm{osc}_{\Omega}u+a^{-1}L\, >\, 0\, . \end{equation} Notice that, by the very definition of $\mu_0:=\rho_0\mathfrak{m}$ it holds \begin{equation} \rho_0\equiv \frac{1}{\mathfrak{m}(E)}\, ,\quad\text{ $\mathfrak{m}$-a.e. on $E$}\, ,\quad\rho_0\equiv 0\, ,\quad \text{$\mathfrak{m}$-a.e. outside of $E$}\, , \end{equation} hence $\norm{\rho_0}_{\infty}=1/\mathfrak{m}(E)$.\\ By the $\RCD(K,N)$ condition (actually essentially non-branching plus $\MCP(K,N)$ would suffice, see \cite[Theorem 1.1]{CavallettiMondino17}, after \cite{Rajala12}), \begin{equation}\label{eq:cont0} \norm{\rho_t}_{L^{\infty}}\le 1/\mathfrak{m}(E)+o(1)\, , \quad\text{as $t\to 0$.} \end{equation} On the other hand, we can apply \cite[Proposition 5.3]{GigliViolo21} (see in particular equation (5.8) therein) in combination with the upper bound for the positive part of the Laplacian of $\eta_t$ in \eqref{eq:uniformupper} to obtain that \begin{equation} \sup_{t\in [s,r]}\norm{\rho_t}_{L^{\infty}}\le \norm{\rho_s}_{L^{\infty}}e^{(r-s)C}\, ,\quad\text{for any $0<s<r<1$}\, . \end{equation} Taking the limit as $s\to 0$ and taking into account \eqref{eq:cont0}, we get \begin{equation}\label{eq:unidensest} \sup_{t\in [0,r]}\norm{\rho_t}_{L^{\infty}}\le \norm{\rho_0}_{L^{\infty}}e^{rC}\le \norm{\rho_0}_{L^{\infty}}e^C\, ,\quad\text{for any $0\le r<1$}\, . \end{equation} To conclude, we observe that the probability measures $\mu_t$ weakly converge to $\mu_1$ as $t\to 1$, as they converge in Wasserstein distance. Then \eqref{eq:unidensest} implies that $\mu_1\ll\mathfrak{m}$ and, setting $\mu_1=\rho_1\mathfrak{m}$, it holds \begin{equation} \norm{\rho_1}_{L^{\infty}}\le \frac{e^C}{\mathfrak{m}(E)}\, . 
\end{equation} As $\mu_1$ is concentrated on $A_a(E,\Omega,u)$, we conclude that \begin{equation} \mathfrak{m}(E)\le \mathfrak{m}(A_a(E,\Omega,u))\, e^C\, . \end{equation} \end{proof} \begin{remark} It seems likely that a refinement of the proof of \autoref{thm:mainperturbJensen} could lead to a sharper estimate more in the spirit of \eqref{eq:estsmooth}, when we additionally assume that $u$ has Laplacian locally in $L^2$. However, this is not needed for the main results of the present note and thus, for the sake of brevity, we leave it for future investigation. \end{remark} \begin{corollary}\label{cor:perturb} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space. Let $\Omega\subset X$ be an open domain and let $u\in W^{1,2}_{{\rm loc}}(\Omega)\cap C(\Omega)$ be such that: \begin{itemize} \item[(i)] $u$ admits locally measure valued Laplacian on $\Omega$ with $\Delta u\le L\mathfrak{m}$ in the sense of distributions on $\Omega$ for some constant $L\ge 0$; \item[(ii)] $u$ has a strict local minimum at some $x\in \Omega$. \end{itemize} Then for any set of full $\mathfrak{m}$-measure $B\subset \Omega$ and for any natural number $n>0$ there exist $0<a_n<1/n$ and $y_n\in\Omega$ with $\mathsf{d}(x,y_n)<1/n$ such that the function \begin{equation} \Omega\ni z\mapsto u(z)+a_n\mathsf{d}^2(z,y_n) \end{equation} admits a minimum at a point $\bar{x}_n\in B$ with $\mathsf{d}(\bar{x}_n,x)<1/n$. \end{corollary} \begin{proof} It is sufficient to apply \autoref{thm:mainperturbJensen} with $E=B_{1/n}(x)$ and $a$ sufficiently small so that, for any $y\in B_{1/n}(x)$, it holds $\mathsf{d}(x,\bar{x})<1/n$ for any minimum point $\bar{x}$ of the function \begin{equation} \Omega\ni z\mapsto u(z)+a\, \mathsf{d}^2(z,y)\, . \end{equation} As \autoref{thm:mainperturbJensen} shows that the set of all possible minimum points when $y$ varies in $B_{1/n}(x)$ has positive measure, clearly it intersects any set of full measure. 
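In quantitative terms, \eqref{eq:quantest} applied with $E=B_{1/n}(x)$ gives
\begin{equation*}
\mathfrak{m}\left(A_{a}(B_{1/n}(x),\Omega,u)\right)\ge \frac{\mathfrak{m}\left(B_{1/n}(x)\right)}{C(K,N,\Omega,a,L)}>0\, ,
\end{equation*}
and a set of positive $\mathfrak{m}$-measure necessarily intersects every set of full $\mathfrak{m}$-measure.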
\end{proof} \section{The key propagation theorem}\label{sec:propagation} In this section we consider an $\RCD(K,N)$ metric measure space $(X,\mathsf{d},\mathfrak{m})$, for some $K\in\mathbb{R}$ and $1\le N<\infty$, a ${\sf CAT}(0)$ space $(Y,\mathsf{d}_Y)$, an open domain $\Omega\subset X$ and a harmonic map $u:\Omega\to Y$, as in the statement of \autoref{thm:main}. The goal is to prove a propagation estimate analogous to \cite[Lemma 6.7]{ZhangZhu18}, see \autoref{prop:mainestimate}. This is a key technical step for the proof of the Lipschitz continuity of harmonic maps from $\RCD(K,N)$ spaces to ${\sf CAT}(0)$ spaces. Let us stress some fundamental differences between our proof and the one in \cite{ZhangZhu18}. The proof in \cite{ZhangZhu18} uses the second variation formula and parallel transport for Alexandrov spaces \cite{Petrunin98}. It builds on a strategy introduced in \cite{Petrunin96} (see also \cite{ZhangZhu12}). Moreover, it relies on a delicate perturbation argument finding its roots again in \cite{Petrunin96} in the Alexandrov case and similar to the one used in the Euclidean viscosity theory of PDEs to prove the approximate maximum principle for semiconvex functions, see \cite{Jensen88,CaffarelliCabre95}.\\ In order to prove the estimate on non smooth $\RCD$ spaces, we give a completely new argument. The substitute for the second variation formula is the analysis of the interplay between optimal transport and the Heat Flow under lower Ricci curvature bounds, which finds its roots in \cite{SturmVonRenesse,Kuwada10,AGS15} and was further explored by the authors in \cite{MondinoSemola21}. The substitute for the perturbation argument in the Alexandrov theory has been obtained in \autoref{sec:perturbation}. \smallskip We first introduce some terminology. 
\\For any domain $\Omega'$ compactly contained in $\Omega\subset X$, for any $t>0$ and for any $0\le \lambda\le 1$ we set \begin{equation}\label{eq:introft} f_t(x,\lambda):=\inf_{y\in\Omega'}\left\{\frac{e^{-2K\lambda}\mathsf{d}^2(x,y)}{2t}-\mathsf{d}_Y(u(x),u(y))\right\}\, . \end{equation} We denote by $S_t(x,\lambda)$ the set of those points where the infimum is achieved, i.e. \begin{equation*} S_t(x,\lambda):=\left\{y\in \Omega'\, :\, f_t(x,\lambda)=\frac{e^{-2K\lambda}\mathsf{d}^2(x,y)}{2t}-\mathsf{d}_Y(u(x),u(y))\right\}\, . \end{equation*} Notice that \begin{equation}\label{eq:singosc} 0\ge f_t(x,\lambda)\ge-\mathrm{osc}_{\overline{\Omega'}}u=-\max_{x,y\in \overline{\Omega'}}\mathsf{d}_Y(u(x),u(y))\, , \end{equation} where we recall that $u$ is continuous, see \autoref{thm:continuity}, and thus its oscillation is finite on any bounded set. \smallskip The following is obtained in \cite[Lemma 6.1]{ZhangZhu18} and the proof works without any modification in the present setting, so we omit it. It is a variant of the classical mild regularity properties for the evolution via the Hopf-Lax semigroup (see for instance \cite{AGS14}) in this non-linear setting. \begin{lemma}\label{lemma:firstft} With the notation above, let us set for any open domain $\Omega''\Subset\Omega'$, \begin{equation}\label{eq:C*t0} C_*:=2\, \mathrm{osc}_{\overline{\Omega'}}\, u+2\, , \quad t_0:=\frac{\mathsf{d}^2(\Omega'',\partial\Omega')}{4C_*}\, . \end{equation} Then, for each $t\in (0,t_0)$, the following hold: \begin{itemize} \item[(i)] For each $\lambda\in [0,1]$ and $x\in\Omega''$, it holds $S_t(x,\lambda)\neq\emptyset$ and \begin{equation*} f_t(x,\lambda)=\min_{\overline{B_{\sqrt{C_*t}}(x)}}\left\{\frac{e^{-2K\lambda}\mathsf{d}^2(x,y)}{2t}-\mathsf{d}_Y(u(x),u(y))\right\}\, . 
\end{equation*} \item[(ii)] For each $\lambda\in[0,1]$, the function $x\mapsto f_t(x,\lambda)$ is in $C(\Omega'')\cap W^{1,2}(\Omega'')$ and satisfies the following energy estimate \begin{equation*} \int_{\Omega''}\abs{\nabla_x f_t(x,\lambda)}^2\mathop{}\!\mathrm{d}\mathfrak{m}\le C(K,N)\frac{{\rm diam}^2(\Omega')}{t^2}\mathfrak{m}(\Omega'')+C(K,N)\int_{\Omega''}\abs{\mathop{}\!\mathrm{d} u}_{\mathrm{HS}}^2\mathop{}\!\mathrm{d}\mathfrak{m}\, . \end{equation*} \item[(iii)] For any $x\in\Omega''$, the function $\lambda\mapsto f_t(x,\lambda)$ is Lipschitz with \begin{equation*} \abs{f_t(x,\lambda)-f_t(x,\lambda')}\le C_*e^{-2K}\abs{\lambda-\lambda'}\, ,\quad\text{for any $\lambda,\lambda'\in [0,1]$}\, . \end{equation*} \item[(iv)] The function $(x,\lambda)\mapsto f_t(x,\lambda)$ is in $C(\Omega''\times[0,1])$ and $W^{1,2}(\Omega''\times[0,1])$, where $X\times [0,1]$ is endowed with the canonical product structure. \end{itemize} \end{lemma} For $\Omega''\Subset\Omega'$, let $t_0>0$ be given by \eqref{eq:C*t0} and, for all $t\in (0, t_{0})$, consider the function $L_{t,\lambda}:\Omega''\to[0,\infty)$ defined by \begin{equation*} L_{t,\lambda}(x):=\mathsf{d}(x,S_t(x,\lambda))=\min_{y\in S_t(x,\lambda)}\mathsf{d}(x,y)\, . \end{equation*} The following corresponds to \cite[Lemma 6.2]{ZhangZhu18}, whose proof works with no modifications in the present setting, as it relies only on metric arguments, therefore it is omitted. \begin{lemma}\label{lemma:tech2} Let $\Omega''\Subset\Omega'$ and $t_0>0$ be given as above. Then for any $t\in (0,t_0)$ it holds that \begin{itemize} \item[(i)] The function $(x,\lambda)\mapsto L_{t,\lambda}(x)$ is lower semicontinuous on $\Omega''\times [0,1]$. \item[(ii)] For each $\lambda \in [0,1]$, \begin{equation*} \norm{L_{t,\lambda}}_{L^{\infty}(\Omega'')}\le \sqrt{C_*t}\, . 
\end{equation*} \end{itemize} \end{lemma} Below we report \cite[Lemma 6.3]{ZhangZhu18}, whose proof works again with no modifications in the present setting, and is therefore omitted. \begin{lemma}\label{lemma:tech3} Let $\Omega''\Subset\Omega'$ and $t_0>0$ be given as above. Then for any $t\in (0,t_0)$ it holds that \begin{equation}\label{eq:lemmatech3} \liminf_{\mu\to 0^+}\frac{f_t(x,\lambda+\mu)-f_t(x,\lambda)}{\mu}\ge -e^{-2K\lambda}\frac{K}{t}L^2_{t,\lambda}(x)\, , \end{equation} for any $\lambda\in [0,1)$ and any $x\in\Omega''$. \end{lemma} \begin{remark} Since $\lambda\mapsto f_t(x,\lambda)$ is a Lipschitz function for every $x\in\Omega''$, \eqref{eq:lemmatech3} can be turned into an inequality between derivatives valid $\mathscr{L}^1$-a.e. on $(0,1)$. \end{remark} In the case when $(X,\mathsf{d},\mathfrak{m})$ is an $\RCD(0,N)$ metric measure space, we can remove the dependence on the additional parameter $\lambda$ and set \begin{equation*} f(t,x):=\inf_{y\in\Omega'}\left\{\frac{\mathsf{d}^2(x,y)}{2t}-\mathsf{d}_Y(u(x),u(y))\right\}\, . \end{equation*} Our goal is to prove that for any $p\in\Omega'$ there exist a neighbourhood $U_p$ of $p$ and a positive time $t_p>0$ such that $f(t,\cdot)$ is superharmonic on $U_p$ for any $0<t<t_p$. See \autoref{prop:mainestimate} below for the general statement when $(X,\mathsf{d},\mathfrak{m})$ is an $\RCD(K,N)$ space for general $K\in\mathbb{R}$. \smallskip As mentioned in the discussion at the beginning of the section, the proof of the propagation estimate \autoref{prop:mainestimate} requires several new ideas with respect to \cite{ZhangZhu18}. We introduce the first new ingredient, avoiding technicalities for the sake of this presentation. 
In particular, for simplicity of presentation, we do not care about the maps being defined only on open domains and assume harmonicity to hold globally.\\ Assume that \begin{equation}\label{eq:eqtime0} f(t,x_0)=\frac{\mathsf{d}^2(x_0,y_0)}{2t}-\mathsf{d}_Y(u(x_0),u(y_0))\, , \end{equation} and \begin{equation}\label{eq:ineqsem} f(t,x)\le \frac{\mathsf{d}^2(x,y)}{2t}-\mathsf{d}_Y(u(x),u(y))\, ,\quad\text{for any $x, y\in X$}\, . \end{equation} Let us consider then the evolutions of Dirac deltas through the Heat Flow at points $x_0$ and $y_0$ and denote them by $P_s\delta_{x_0}$, $P_s\delta_{y_0}$ respectively. Let us also denote by $\Pi_{s,x_0,y_0}$ an optimal transport plan for quadratic cost between $P_s\delta_{x_0}$ and $P_s\delta_{y_0}$. Notice that $\Pi_{s,x_0,y_0}$ is a probability measure on $X\times X$ whose first and second marginals are $P_s\delta_{x_0}$ and $P_s\delta_{y_0}$, respectively. \medskip Let us integrate both sides in \eqref{eq:ineqsem} with respect to $\Pi_{s,x_0,y_0}$. Then we obtain \begin{equation}\label{eq:1tobound} \int _{X\times X}f(t,x)\mathop{}\!\mathrm{d}\Pi_{s,x_0,y_0}(x,y)\le \int_{X\times X}\left(\frac{\mathsf{d}^2(x,y)}{2t}-\mathsf{d}_Y(u(x),u(y))\right)\mathop{}\!\mathrm{d}\Pi_{s,x_0,y_0}(x,y)\, . \end{equation} Notice now that the integrand at the left hand side is independent of $y$. The first marginal of $\Pi_{s,x_0,y_0}$ is $P_s\delta_{x_0}$, hence \begin{equation*} \int _{X\times X}f(t,x)\mathop{}\!\mathrm{d}\Pi_{s,x_0,y_0}(x,y)=\int_Xf(t,x)\mathop{}\!\mathrm{d} P_s\delta_{x_0}=P_s(f(t,\cdot))(x_0)\, . \end{equation*} Since $\Pi_{s,x_0,y_0}$ is an optimal transport plan between $P_s\delta_{x_0}$ and $P_s\delta_{y_0}$ for quadratic cost, the $\RCD(0,\infty)$ condition implies that \begin{equation}\label{eq:2usecontr} \int_{X\times X}\frac{\mathsf{d}^2(x,y)}{2t}\mathop{}\!\mathrm{d} \Pi_{s,x_0,y_0}(x,y)=\frac{1}{2t}W_2^2(P_s\delta_{x_0},P_s\delta_{y_0})\le \frac{1}{2t}\mathsf{d}^2(x_0,y_0)\, , \end{equation} for any $s>0$. 
\smallskip We are left to bound the term \begin{equation}\label{eq:explaintobound} -\int_{X\times X}\mathsf{d}_Y(u(x),u(y))\mathop{}\!\mathrm{d} \Pi_{s,x_0,y_0}(x,y)\, . \end{equation} There are two possibilities. Either $u(x_0)=u(y_0)$ and then \eqref{eq:explaintobound} is trivially bounded above by $0$, or $u(x_0)\neq u(y_0)$ in which case we can argue as follows. Let us apply \autoref{lemma:elemCAT} with points $\{u(x),u(x_0),u(y_0),u(y)\}$. Then, denoting by $z_0$ the midpoint of $u(x_0)u(y_0)$, we obtain \begin{align*} &\left(\mathsf{d}_{Y}(u(x),u(y))-\mathsf{d}_Y(u(x_0),u(y_0))\right)\mathsf{d}_Y(u(x_0),u(y_0)) \\& \qquad \ge \left(\mathsf{d}_Y^2(u(x),z_0)-\mathsf{d}_Y^2(u(x),u(x_0))-\mathsf{d}_Y^2(z_0,u(x_0))\right)\\ &\qquad \quad +\left(\mathsf{d}_Y^2(u(y),z_0)-\mathsf{d}_Y^2(u(y),u(y_0))-\mathsf{d}_Y^2(z_0,u(y_0))\right)\, , \end{align*} for any $x,y$. With the notation introduced in \autoref{prop:intw}, this can be rewritten as \begin{equation*} \left(\mathsf{d}_{Y}(u(x),u(y))-\mathsf{d}_Y(u(x_0),u(y_0))\right)\ge -\frac{1}{\mathsf{d}_Y(u(x_0),u(y_0))}\left(w_{x_0,z_0}(x)+w_{y_0,z_0}(y)\right)\, , \end{equation*} for any $x,y\in X$. Hence \begin{equation}\label{eq:rewritten} -\mathsf{d}_Y(u(x),u(y))\le -\mathsf{d}_Y(u(x_0),u(y_0))+\frac{1}{\mathsf{d}_Y(u(x_0),u(y_0))}\left(w_{x_0,z_0}(x)+w_{y_0,z_0}(y)\right)\, . \end{equation} Integrating both sides of \eqref{eq:rewritten} w.r.t. 
$\Pi_{s,x_0,y_0}$, we can estimate \begin{align} \nonumber -\int_{X\times X}&\mathsf{d}_Y(u(x),u(y))\mathop{}\!\mathrm{d} \Pi_{s,x_0,y_0}(x,y)\le -\mathsf{d}_Y(u(x_0),u(y_0))\\ \nonumber &+\frac{1}{\mathsf{d}_Y(u(x_0),u(y_0))}\int_{X\times X}\left(w_{x_0,z_0}(x)+w_{y_0,z_0}(y)\right)\mathop{}\!\mathrm{d} \Pi_{s,x_0,y_0}\\ \le -\mathsf{d}_Y&(u(x_0),u(y_0))+\frac{1}{\mathsf{d}_Y(u(x_0),u(y_0))}\left(P_sw_{x_0,z_0}(\cdot)(x_0)+P_sw_{y_0,z_0}(\cdot)(y_0)\right)\, ,\label{eq:3bounded} \end{align} where the last inequality is due to the fact that the marginals of $\Pi_{s,x_0,y_0}$ are the heat kernel measures centred at $x_0$ and $y_0$ at time $s$. The combination of \eqref{eq:1tobound} with \eqref{eq:2usecontr}, \eqref{eq:3bounded} and \autoref{prop:intw}, assuming for the sake of this presentation that $x_0$ and $y_0$ are such that the asymptotic estimate \eqref{eq:asymChin} holds, proves that \begin{equation} \limsup_{s\to 0}\frac{\left(P_sf(t,\cdot)(x_0)-f(t,x_0)\right)}{s}\le 0\, , \end{equation} which yields, formally, super-harmonicity of $f(t,\cdot)$. \medskip A key additional difficulty in making the strategy above rigorous is that the asymptotic in \autoref{prop:intw} is not available at every point, but only at $\mathfrak{m}$-a.e. point. In order to deal with this issue, we will rely on \autoref{thm:mainperturbJensen} and \autoref{cor:perturb}. Moreover, there will be error terms to deal with in the case of an $\RCD(K,N)$ space, with general $K\in\mathbb{R}$. \smallskip For the proof of the key propagation results we will rely on the following technical statement. \begin{lemma}\label{lemma:extend} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space for some $K\in\mathbb{R}$ and $1\le N<\infty$. Let $(x,y)\in X\times X$ and $U\subset X\times X$ be an open neighbourhood of $(x,y)$. Let $F:U\to\mathbb{R}$ be a continuous function with a local minimum at $(x,y)$. 
For any $r>0$, let us denote by $\Pi_r$ an admissible transport plan between $P_r\delta_x$ and $P_r\delta_y$ (which is a probability measure on $X\times X$). Then, for any open set $V\Subset U$ and for any function $G:X\times X\to \mathbb{R}$ continuous and bounded such that $F\equiv G$ on $V$, it holds \begin{equation}\label{eq:liminfPi} \liminf_{r\to 0}\frac{\int_{X\times X}G(z,w)\mathop{}\!\mathrm{d} \Pi_r(z,w)-G(x,y)}{r}\ge 0\, . \end{equation} \end{lemma} \begin{proof} We divide the proof in two steps. First we verify that the value of the $\liminf$ in \eqref{eq:liminfPi} depends only on the behaviour of $G$ in a neighbourhood of $(x,y)$. Then we complete the proof with a particular choice of the global bounded and continuous extension of $F$. \medskip \textbf{Step 1}. It is sufficient to prove the following claim. If $H\in \Cb(X\times X)$ and $H\equiv 0$ in a neighbourhood of $(x,y)$, then \begin{equation}\label{eq:Hprove} \lim_{r\to 0}\frac{\int_{X\times X}H(z,w)\mathop{}\!\mathrm{d} \Pi_r(z,w)}{r}=0\, . \end{equation} We can assume without loss of generality that $H\ge 0$ on $X\times X$, up to substituting $H$ with $\abs{H}$. If $H$ is non-negative, continuous, bounded and it vanishes in a neighbourhood of $(x,y)$, then there exist bounded and continuous, non-negative functions $H_1,H_2:X\to [0,\infty)$ identically vanishing in a neighbourhood of $x$ and $y$ respectively, such that \begin{equation*} 0\le H(z,w)\le H_1(z)+H_2(w)\, ,\quad\text{for any $z,w\in X$}\, . \end{equation*} Since $\Pi_r$ has marginals $P_r\delta_x$ and $P_r\delta_y$ on the first and second component respectively, it holds \begin{align}\label{eq:split} \int_{X\times X}H(z,w)\mathop{}\!\mathrm{d}\Pi_r(z,w)\le& \int_{X\times X}\left(H_1(z)+H_2(w)\right)\mathop{}\!\mathrm{d} \Pi_r(z,w)\\ = &\, P_rH_1(x)+P_rH_2(y)\, . 
\end{align} Since $H_1$ vanishes identically in a neighbourhood of $x$ and $H_2$ vanishes identically in a neighbourhood of $y$, it follows from \cite[Lemma 2.53]{MondinoSemola21} that \begin{equation}\label{eq:useMS21} \lim_{r\to 0}\frac{P_rH_1(x)}{r}=\lim_{r\to 0}\frac{P_rH_2(y)}{r}=0\, . \end{equation} Combining \eqref{eq:split} with \eqref{eq:useMS21} we obtain \eqref{eq:Hprove}. \medskip \textbf{Step 2.} Given what we obtained in the previous step, it is sufficient to choose any open domain $W\Subset U$ such that \begin{equation*} F(x,y)=\min_{(p,q)\in W}F(p,q)\, . \end{equation*} Then, for any $n\in\mathbb{N}$ we set $F_n:W\to\mathbb{R}$ by \begin{equation*} F_n(p,q):=F(p,q)+\frac{1}{n}\left(\mathsf{d}^2(x,p)+\mathsf{d}^2(y,q)\right)\, . \end{equation*} For any $n\in\mathbb{N}$, $F_n$ admits a strict minimum at $(x,y)$ on $W$. Then we can extend $F_n$ to a global bounded and continuous function $G_n:X\times X\to\mathbb{R}$ such that $G_n$ admits a global minimum at $(x,y)$. In particular \begin{equation}\label{eq:estforn} \liminf_{r\to 0}\frac{\int_{X\times X}G_n(z,w)\mathop{}\!\mathrm{d} \Pi_r(z,w)-G_n(x,y)}{r}\ge 0\, , \end{equation} since $G_n(z,w)\ge G_n(x,y)$ for any $z,w\in X$ and $\Pi_r$ is a probability measure.\\ Taking into account \autoref{lemma:operatordistprod}, \eqref{eq:estforn} shows that \begin{equation*} \liminf_{r\to 0}\frac{\int_{X\times X}G(z,w)\mathop{}\!\mathrm{d} \Pi_r(z,w)-G(x,y)}{r}\ge -\frac{2N}{n}\, ,\quad\text{for any $n\in\mathbb{N}$, $n\ge 1$}\, , \end{equation*} where $G$ denotes any bounded and continuous extension of $F$ to $X\times X$. Taking the limit as $n\to\infty$ we obtain \eqref{eq:liminfPi}. \end{proof} The statement below is the counterpart of \cite[Lemma 6.7]{ZhangZhu18} in the present setting. It is the main technical tool for the proof of the Lipschitz continuity of harmonic maps from $\RCD(K,N)$ spaces to ${\sf CAT}(0)$ spaces.\\ As we anticipated, there are some fundamental differences between our proof and the proof in \cite{ZhangZhu18}. 
The first one is that in \cite{ZhangZhu18} the authors build on the parallel transport and second variation formula for Alexandrov spaces from \cite{Petrunin98}, in order to estimate second variations in each direction and then average up the estimates. We will estimate averages w.r.t.\ the heat kernel directly (i.e. Laplacians) relying on the interplay between Heat Flow and optimal transport on $\RCD$ spaces. The second fundamental difference is in the perturbation argument pursued in \autoref{sec:perturbation}. \begin{proposition}\label{prop:mainestimate} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space for some $K\le 0$ and $1\le N<\infty$ and let $(Y,\mathsf{d}_Y)$ be a ${\sf CAT}(0)$ space. Let $\Omega\subset X$ be an open domain and let $u:\Omega\to Y$ be a harmonic map. Let $f_t(x,\lambda)$ be as in \eqref{eq:introft}. Then for any $p\in \Omega'$ there exist a neighbourhood $U_p=B_{R_p}(p)$ of $p$ and a time $t_p>0$ such that, for any $0<t<t_p$ and any $\lambda\in [0,1]$, the function $U_p\ni x\mapsto f_t(x,\lambda)$ is a super-solution of the Poisson equation \begin{equation*} \Delta f_t(\cdot,\lambda)=-e^{-2K\lambda}\frac{K}{t}L_{t,\lambda}^2\, , \quad \text{on } U_{p}\, . \end{equation*} \end{proposition} \begin{proof} The proof will be divided into three steps. In Step 1 we set up the contradiction argument via an auxiliary function. In Step 2 we perturb the auxiliary function to achieve a strict minimum at a \emph{sufficiently regular} point. In Step 3 we reach a contradiction with a second variation argument. \smallskip \textbf{Step 1}. Following the proof in \cite{ZhangZhu18}, we notice that it is sufficient to prove that $U_p\ni x\mapsto f_t(x,\lambda)$ is a super-solution of the Poisson equation \begin{equation*} \Delta f_t(\cdot,\lambda)=-e^{-2K\lambda}\frac{K}{t}L_{t,\lambda}^2+\theta \quad \text{ on $U_p$, for any $\theta>0$. 
} \end{equation*} Let us suppose by contradiction that the claim above is not true for some $t>0$, $\lambda\in [0,1]$ and $\theta_0>0$. Then, by \autoref{prop:poissonhelp}, there exists an open domain $B\subset U_p$ such that, denoting by $v\in W^{1,2}(B)$ the solution of the Poisson problem \begin{equation}\label{eq:solvePoiss} \Delta v= -e^{-2K\lambda}\frac{K}{t}L_{t,\lambda}^2+\theta_0\, ,\quad\text{on $B$}\, , \end{equation} with $v=f_t(\cdot,\lambda)$ on $\partial B$ (i.e. $v-f_t(\cdot,\lambda)\in W^{1,2}_0(B)$), it holds that \begin{equation*} \min_{x\in B}\left\{f_t(x,\lambda)-v(x)\right\}<0=\min_{x\in \partial B}\left\{f_t(x,\lambda)-v(x)\right\}\, . \end{equation*} In particular, $f_t(\cdot,\lambda)-v$ achieves a minimum in the interior of $B$. Let $\bar{x}\in B$ be any such minimum point. Define the function $H:B\times U\to \mathbb{R}$ by \begin{equation*} H(x,y):=\frac{e^{-2K\lambda}\mathsf{d}^2(x,y)}{2t}-\mathsf{d}_Y(u(x),u(y))-v(x)\, . \end{equation*} Let $\bar{y}\in S_{t}(\bar{x},\lambda)\Subset U$ be such that \begin{equation*} \mathsf{d}(\bar{x},\bar{y})=L_{t,\lambda}(\bar{x})\, . \end{equation*} By the very definition of $S_{t}(\bar{x},\lambda)$, $H$ has a minimum at $(\bar{x},\bar{y})$. \smallskip \textbf{Step 2}. We perturb $H$ to achieve a strict minimum at $(\bar{x},\bar{y})$, with a controlled perturbation. To this aim, we consider \begin{equation}\label{eq:choice delta} 0<\delta_0<C(K,N,{\rm diam}(U))\theta_0\, \end{equation} and define the function $H_1:B\times U\to\mathbb{R}$ by \begin{equation*} H_1(x,y):=H(x,y)+\delta_0\mathsf{d}^2(\bar{x},x)+\delta_0\mathsf{d}^2(\bar{y},y)\, . \end{equation*} Since $(\bar{x},\bar{y})$ is a minimum for $H$, $(\bar{x},\bar{y})$ is the unique strict minimum for $H_1$ in $B\times U$. 
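Indeed, since $H\ge H(\bar{x},\bar{y})$ on $B\times U$, for every $(x,y)\in B\times U$ with $(x,y)\neq (\bar{x},\bar{y})$ it holds
\begin{equation*}
H_1(x,y)\ge H(\bar{x},\bar{y})+\delta_0\mathsf{d}^2(\bar{x},x)+\delta_0\mathsf{d}^2(\bar{y},y)>H(\bar{x},\bar{y})=H_1(\bar{x},\bar{y})\, .
\end{equation*}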
\smallskip The next goal is to perturb again $H_1$ in order to make it achieve its minimum at a point $(\tilde{x},\tilde{y})$ such that $\tilde{x}$ and $\tilde{y}$ are \emph{good points} for \autoref{prop:intw} and $\tilde{x}$ is a Lebesgue point for \begin{equation*} x\mapsto e^{-2K\lambda}\frac{K}{t}L_{t,\lambda}^2(x)\, . \end{equation*} We notice that $\mathfrak{m}\otimes\mathfrak{m}$-a.e. point in $B\times U$ verifies these two properties. \smallskip We wish to construct perturbations with the tools developed in \autoref{sec:perturbation}.\\ Let us observe that $H_1$ has measure valued Laplacian on $B\times U$ with positive part bounded from above by a constant \begin{equation*} C=C\left({\rm diam} U,{\rm diam} B,\lambda,t,\delta_0,\norm{L_{t,\lambda}}_{L^{\infty}(B)}\right)\, . \end{equation*} In order to verify this claim we consider the various terms appearing in the definition of $H_1$ separately. The function \begin{equation*} (x,y)\mapsto \frac{e^{-2K\lambda}\mathsf{d}^2(x,y)}{2t} \end{equation*} has measure valued Laplacian on $X\times X$ with positive part bounded above by a constant $C(K,N,t,{\rm diam} U,{\rm diam} B)$ thanks to \autoref{lemma:elemdistprod}. The function \begin{equation}\label{eq:introG} (x,y)\mapsto \delta_0\mathsf{d}^2(\bar{x},x)+\delta_0\mathsf{d}^2(\bar{y},y)=:G(x,y) \end{equation} has measure valued Laplacian with positive part bounded by $C(K,N,\delta_0,{\rm diam} U,{\rm diam} B)$ thanks to the Laplacian comparison and the tensorization of Sobolev spaces. The function \begin{equation*} (x,y)\mapsto -\mathsf{d}_Y(u(x),u(y)) \end{equation*} has non-positive measure valued Laplacian, thanks to \autoref{prop:lapladist}. Moreover, \begin{equation*} \Delta_{x,y}(-v)=-\Delta_xv=e^{-2K\lambda}\frac{K}{t}L_{t,\lambda}^2-\theta_0\, , \end{equation*} by the very construction of $v$, see \eqref{eq:solvePoiss}. 
Hence the function $(x,y)\mapsto -v(x)$ has Laplacian bounded by $C(K,N,t,\lambda,{\rm diam} U,{\rm diam} B)$, thanks to \autoref{lemma:tech2} (ii). \smallskip For $\mathfrak{m}\otimes \mathfrak{m}$-a.e. $(z,z')\in B\times U$, $z$ is a Lebesgue point of \begin{equation}\label{eq:Lebztilde} z\mapsto e^{-2K\lambda}\frac{K}{t}L_{t,\lambda}^2(z)\, \end{equation} and both $z$ and $z'$ are such that, setting \begin{equation*} w_{q,P}(\cdot):=\mathsf{d}_Y^2(u(\cdot),u(q))-\mathsf{d}_Y^2(u(\cdot),P)+\mathsf{d}_Y^2(P,u(q))\, , \end{equation*} it holds \begin{equation*} \limsup_{t\to 0} \frac{1}{t}P_t \left(w_{q,P}(\cdot)\right)(q)\le 0\, , \end{equation*} for every $P\in Y$, for $q=z$ and $q=z'$. This is a consequence of \autoref{prop:intw}. Combining the above observations with \autoref{cor:perturb} we obtain that, for every $\mu>0$ sufficiently small, there exist $a_{\mu}\in B$, $b_{\mu}\in U$ such that the function $H_{a,b,\mu}:B\times U\to\mathbb{R}$ defined by \begin{equation}\label{eq:introG1} H_{a,b,\mu}(x,y):=H_1(x,y)+\mu\mathsf{d}^2(a_{\mu},x)+\mu\mathsf{d}^2(b_{\mu},y)=:H_1(x,y)+G_{1,\mu}(x,y)\, \end{equation} achieves a strict minimum at a point $(\tilde{x}_{\mu},\tilde{y}_{\mu})\in B\times U$ which verifies the good properties mentioned above. We choose any of the triples $(a,b,\mu)$ such that these conditions are met and set $H_{2,\mu}:=H_{a,b,\mu}$. \smallskip \textbf{Step 3}. Let us see how to reach a contradiction from the construction of the previous two steps.\\ For the first part of this step, $\mu>0$ will be fixed and we will avoid the subscript $\mu$ for the minimum point $(\tilde{x}_{\mu},\tilde{y}_{\mu})$ to simplify the notation.\\ For any $s>0$, let us denote by $P_s\delta_{\tilde{x}}$ and $P_s\delta_{\tilde{y}}$ the heat kernel measures at time $s$ centred at $\tilde{x}$ and $\tilde{y}$ respectively. Moreover, we denote by $\Pi_s$ the optimal transport plan for quadratic cost between $P_s\delta_{\tilde{x}}$ and $P_s\delta_{\tilde{y}}$. 
In particular, $\Pi_s$ is a probability measure on $X\times X$ whose first and second marginals are $P_s\delta_{\tilde{x}}$ and $P_s\delta_{\tilde{y}}$, respectively, and \begin{equation*} \int_{X\times X}\mathsf{d}^2(x,y)\mathop{}\!\mathrm{d} \Pi_s(x,y)\le \int_{X\times X}\mathsf{d}^2(x,y)\mathop{}\!\mathrm{d} \Pi(x,y)\, , \end{equation*} for any probability measure $\Pi$ on $X\times X$ with the same marginals. Moreover, \begin{equation*} \int_{X\times X}\mathsf{d}^2(x,y)\mathop{}\!\mathrm{d} \Pi_s(x,y)\le e^{-2Ks}\mathsf{d}^2(\tilde{x},\tilde{y})\, , \end{equation*} by the Wasserstein contractivity of the Heat Flow on $\RCD(K,\infty)$ spaces, see \cite{AGS15}. \smallskip From now on, when integrating continuous functions that are not globally defined against heat kernel measures, or against optimal transport plans between heat kernel measures, we always understand that global extensions with controlled growth at infinity have been chosen. The independence of the particular choice is justified by \cite[Lemma 2.53]{MondinoSemola21} and the first step of the proof of \autoref{lemma:extend}. We set \begin{equation}\label{eq:minpoint} H_{2,\mu}(x,y):=\frac{e^{-2K\lambda}\mathsf{d}^2(x,y)}{2t}+F_{\mu}(x,y)\, , \end{equation} where $F_{\mu}$ collects all the remaining terms in the definition of $H_{2,\mu}$, and claim that \begin{equation*} \liminf_{s\to 0}\frac{\int_{X\times X}H_{2,\mu}(x,y)\mathop{}\!\mathrm{d} \Pi_s(x,y)-H_{2,\mu}(\tilde{x},\tilde{y})}{s}\ge 0\, . \end{equation*} Since $(\tilde{x},\tilde{y})$ is a minimum of $H_{2,\mu}$ on $B\times U$, this is indeed a consequence of \autoref{lemma:extend}.
\smallskip On the other hand, let us estimate \begin{align} \nonumber \limsup_{s\to 0}&\frac{\int_{X\times X}H_{2,\mu}(x,y)\mathop{}\!\mathrm{d} \Pi_s(x,y)-H_{2,\mu}(\tilde{x},\tilde{y})}{s}\\ \le& \frac{e^{-2K\lambda}}{2t}\limsup_{s\to 0}\frac{\int_{X\times X}\mathsf{d}^2(x,y)\mathop{}\!\mathrm{d}\Pi_s(x,y)-\mathsf{d}^2(\tilde{x},\tilde{y})}{s}\\ +&\limsup_{s\to 0}\frac{\int_{X\times X}-\mathsf{d}_Y(u(x),u(y))\mathop{}\!\mathrm{d} \Pi_s+\mathsf{d}_Y(u(\tilde{x}),u(\tilde{y}))}{s}\label{eq:lapla2dY}\\ +&\limsup_{s\to 0}\frac{\int_{X\times X}\left(-v(x)\right)\mathop{}\!\mathrm{d}\Pi_s(x,y)+v(\tilde{x})}{s}\label{eq:lapla2v}\\ +&\limsup_{s\to 0}\frac{\int_{X\times X}G(x,y)\mathop{}\!\mathrm{d}\Pi_s(x,y)-G(\tilde{x},\tilde{y})}{s}\label{eq:lapla2G}\\ +&\limsup_{s\to 0}\frac{\int_{X\times X}G_{1,\mu}(x,y)\mathop{}\!\mathrm{d}\Pi_s(x,y)-G_{1,\mu}(\tilde{x},\tilde{y})}{s}\, ,\label{eq:lapla2G1} \end{align} where the functions $G$ and $G_{1,\mu}$ have been introduced in \eqref{eq:introG} and \eqref{eq:introG1} respectively. We estimate each of the five terms separately.\\ We start by observing that \begin{equation}\label{eq:estbyContraction} \frac{e^{-2K\lambda}}{2t}\limsup_{s\to 0}\frac{\int_{X\times X}\mathsf{d}^2(x,y)\mathop{}\!\mathrm{d}\Pi_s(x,y)-\mathsf{d}^2(\tilde{x},\tilde{y})}{s}\le -K \frac{e^{-2K\lambda}}{t}\mathsf{d}^2(\tilde{x},\tilde{y})\, . \end{equation} Let us deal with \eqref{eq:lapla2dY}. There are two possibilities. Either $u(\tilde{x})=u(\tilde{y})$, in which case $-\mathsf{d}_Y(u(x),u(y))\le 0=-\mathsf{d}_Y(u(\tilde{x}),u(\tilde{y}))$ for every $(x,y)\in X\times X$ and therefore it trivially holds \begin{equation*} \limsup_{s\to 0}\frac{\int_{X\times X}-\mathsf{d}_Y(u(x),u(y))\mathop{}\!\mathrm{d} \Pi_s+\mathsf{d}_Y(u(\tilde{x}),u(\tilde{y}))}{s}\le 0\, . \end{equation*} Otherwise $u(\tilde{x})\neq u(\tilde{y})$, in which case we can argue as follows. For any $(x,y)\in X\times X$, let us apply \autoref{lemma:elemCAT} with points $\{u(x),u(\tilde{x}),u(\tilde{y}),u(y)\}$.
Then, denoting by $\tilde{z}$ the midpoint of the segment $u(\tilde{x})u(\tilde{y})$, we obtain \begin{align*} &\left(\mathsf{d}_{Y}(u(x),u(y))-\mathsf{d}_Y(u(\tilde{x}),u(\tilde{y}))\right)\mathsf{d}_Y(u(\tilde{x}),u(\tilde{y})) \\ &\quad \ge \left(\mathsf{d}_Y^2(u(x),\tilde{z})-\mathsf{d}_Y^2(u(x),u(\tilde{x}))-\mathsf{d}_Y^2(\tilde{z},u(\tilde{x}))\right)\\ &\qquad +\left(\mathsf{d}_Y^2(u(y),\tilde{z})-\mathsf{d}_Y^2(u(y),u(\tilde{y}))-\mathsf{d}_Y^2(\tilde{z},u(\tilde{y}))\right)\, , \quad \text{ for any $x,y\in X$.} \end{align*} With the notation introduced in \autoref{prop:intw}, this can be rewritten as \begin{equation*} \left(\mathsf{d}_{Y}(u(x),u(y))-\mathsf{d}_Y(u(\tilde{x}),u(\tilde{y}))\right)\ge -\frac{1}{\mathsf{d}_Y(u(\tilde{x}),u(\tilde{y}))}\left(w_{\tilde{x},\tilde{z}}(x)+w_{\tilde{y},\tilde{z}}(y)\right)\, , \end{equation*} for any $x,y\in X$. Hence \begin{equation*} -\mathsf{d}_Y(u(x),u(y))\le -\mathsf{d}_Y(u(\tilde{x}),u(\tilde{y}))+\frac{1}{\mathsf{d}_Y(u(\tilde{x}),u(\tilde{y}))}\left(w_{\tilde{x},\tilde{z}}(x)+w_{\tilde{y},\tilde{z}}(y)\right)\, . \end{equation*} Integrating w.r.t. $\Pi_s$, we can estimate \begin{align*} -\int_{X\times X}&\mathsf{d}_Y(u(x),u(y))\mathop{}\!\mathrm{d} \Pi_s(x,y)\le -\mathsf{d}_Y(u(\tilde{x}),u(\tilde{y}))\\ &+\frac{1}{\mathsf{d}_Y(u(\tilde{x}),u(\tilde{y}))}\int_{X\times X}\left(w_{\tilde{x},\tilde{z}}(x)+w_{\tilde{y},\tilde{z}}(y)\right)\mathop{}\!\mathrm{d} \Pi_s(x,y)\\ \le& -\mathsf{d}_Y(u(\tilde{x}),u(\tilde{y}))+\frac{1}{\mathsf{d}_Y(u(\tilde{x}),u(\tilde{y}))}\left(P_sw_{\tilde{x},\tilde{z}}(\cdot)(\tilde{x})+P_sw_{\tilde{y},\tilde{z}}(\cdot)(\tilde{y})\right)\, . 
\end{align*} Therefore \begin{align*} \limsup_{s\to 0}&\frac{-\int_{X\times X}\mathsf{d}_Y(u(x),u(y))\mathop{}\!\mathrm{d} \Pi_s(x,y)+\mathsf{d}_Y(u(\tilde{x}),u(\tilde{y}))}{s} \\ &\leq \frac{1}{\mathsf{d}_Y(u(\tilde{x}),u(\tilde{y}))}\left(\limsup_{s\to 0}\frac{1}{s}P_sw_{\tilde{x},\tilde{z}}(\cdot)(\tilde{x})+\limsup_{s\to 0}\frac{1}{s}P_sw_{\tilde{y},\tilde{z}}(\cdot)(\tilde{y})\right)\le 0\, , \end{align*} where the last inequality follows from the good choice of the points $\tilde{x}$ and $\tilde{y}$ in combination with \autoref{prop:intw}. \smallskip In order to estimate \eqref{eq:lapla2v}, we observe that, by \eqref{eq:solvePoiss}, the assumption that $\tilde{x}$ is a Lebesgue point as in \eqref{eq:Lebztilde}, and a minor variant of \cite[Lemma 2.56]{MondinoSemola21} (see also \autoref{prop:hfmeanLapla}), \begin{align*} \limsup_{s\to 0}\frac{\int_{X\times X}\left(-v(x)\right)\mathop{}\!\mathrm{d}\Pi_s(x,y)+v(\tilde{x})}{s}=&\limsup_{s\to 0}\frac{\int_X\left(-v(x)\right)\mathop{}\!\mathrm{d} P_s\delta_{\tilde{x}}(x)+v(\tilde{x})}{s}\\ =&\limsup_{s\to 0}\frac{-P_sv(\tilde{x})+v(\tilde{x})}{s}=K\frac{e^{-2K\lambda}}{t}L^2_{t,\lambda}(\tilde{x})-\theta_0\, . \end{align*} Above, the first equality is justified since the first marginal of $\Pi_s$ is $P_s\delta_{\tilde{x}}$, by the very definition. \smallskip We are left to estimate \eqref{eq:lapla2G} and \eqref{eq:lapla2G1}, which are dealt with by similar arguments. In order to estimate \eqref{eq:lapla2G}, notice that we can pass to the marginals to obtain \begin{equation*} \int _{X\times X}G(x,y)\mathop{}\!\mathrm{d} \Pi_s(x,y)-G(\tilde{x},\tilde{y})=\delta_0\left(P_s\mathsf{d}^2(\bar{x},\cdot)(\tilde{x})-\mathsf{d}^2(\bar{x},\tilde{x})+P_s\mathsf{d}^2(\bar{y},\cdot)(\tilde{y})-\mathsf{d}^2(\bar{y},\tilde{y})\right)\, .
\end{equation*} Hence, by the Laplacian comparison, \begin{equation*} \limsup_{s\to 0}\frac{\int _{X\times X}G(x,y)\mathop{}\!\mathrm{d} \Pi_s(x,y)-G(\tilde{x},\tilde{y})}{s}\le \delta_0\, C(K,N,{\rm diam} U,{\rm diam} B)\, . \end{equation*} For similar reasons, \begin{equation*} \limsup_{s\to 0}\frac{\int_{X\times X}G_{1,\mu}(x,y)\mathop{}\!\mathrm{d}\Pi_s(x,y)-G_{1,\mu}(\tilde{x},\tilde{y})}{s}\le \mu\, C(K,N,{\rm diam} U,{\rm diam} B)\, . \end{equation*} Combining the various terms controlled above and taking into account \eqref{eq:minpoint}, we obtain that \begin{equation}\label{eq:lastalign} \begin{split} 0\le & \limsup_{s\to 0}\frac{\int_{X\times X}H_{2,\mu}(x,y)\mathop{}\!\mathrm{d} \Pi_s(x,y)-H_{2,\mu}(\tilde{x},\tilde{y})}{s}\\ \le& -K \frac{e^{-2K\lambda}}{t}\mathsf{d}^2(\tilde{x},\tilde{y}) +K\frac{e^{-2K\lambda}}{t}L^2_{t,\lambda}(\tilde{x})-\theta_0+\delta_0C(K,N,{\rm diam} U,{\rm diam} B)\\ &+\mu C(K,N,{\rm diam} U,{\rm diam} B)\, . \end{split} \end{equation} Now we let $\mu\downarrow 0$. Recall that there is an implicit dependence of the minimum point $(\tilde{x},\tilde{y})$ of $H_{2,\mu}$ on the parameter $\mu$ in all the computations above. Since \begin{equation*} H_{2,\mu}=H_1+G_{1,\mu}\, , \end{equation*} where $G_{1,\mu}$ has been introduced in \eqref{eq:introG1} and $H_1$ has a unique strict minimum at $(\bar{x},\bar{y})$, it is elementary to verify that the minimum points $(\tilde{x}_{\mu},\tilde{y}_{\mu})$ of $H_{2,\mu}$ converge to $(\bar{x},\bar{y})$ as $\mu\to 0$.\\ Let us rewrite \eqref{eq:lastalign} with the explicit dependence of the minimum points on the parameter $\mu$ as \begin{align*} 0\le& -K \frac{e^{-2K\lambda}}{t}\mathsf{d}^2(\tilde{x}_{\mu},\tilde{y}_{\mu}) +K\frac{e^{-2K\lambda}}{t}L^2_{t,\lambda}(\tilde{x}_{\mu})-\theta_0\\ &+\delta_0C(K,N,{\rm diam} U,{\rm diam} B) +\mu C(K,N,{\rm diam} U,{\rm diam} B)\, .
\end{align*} Then we notice that $\mathsf{d}^2(\tilde{x}_{\mu},\tilde{y}_{\mu})\to\mathsf{d}^2(\bar{x},\bar{y})$ as $\mu\to 0$ and that \begin{equation*} \mathsf{d}^2(\bar{x},\bar{y})=L^2_{t,\lambda}(\bar{x})\le \liminf_{\mu\to 0}L^2_{t,\lambda}(\tilde{x}_{\mu})\, , \end{equation*} by lower semicontinuity, see \autoref{lemma:tech2} (i). Taking the limit as $\mu\downarrow 0$, we obtain \begin{equation*} \theta_0\le \delta_0C(K,N,{\rm diam} U,{\rm diam} B)\, , \end{equation*} which is a contradiction as soon as $\delta_0>0$ is sufficiently small. \end{proof} Arguing along the lines of the proof of \cite[Corollary 6.9]{ZhangZhu18}, it is possible to extend \autoref{prop:mainestimate} to any domain $\Omega''\Subset\Omega'$. \begin{corollary}\label{cor:tomoreglobal} With the same notation as in \autoref{prop:mainestimate} above, for any open domain $\Omega''\Subset\Omega'$, there exists $t_1>0$ such that for any $0<t<t_1$ and for any $\lambda\in[0,1]$ the function \begin{equation*} x\mapsto f_t(x,\lambda) \end{equation*} is a super-solution of the equation \begin{equation*} \Delta f_t(\cdot,\lambda)=-e^{-2K\lambda}\frac{K}{t}L_{t,\lambda}^2\, ,\quad\text{on $\Omega''$}\, . \end{equation*} \end{corollary} \section{Lipschitz continuity}\label{sec:Lip} In this section we complete the proof of \autoref{thm:main}. The main differences from \cite{ZhangZhu18} are contained in the previous \autoref{sec:perturbation} and \autoref{sec:propagation}, corresponding to the key ingredients for the proof. In this section, we adapt \cite[Section 6]{ZhangZhu18} with minor modifications. \medskip We keep the notation of the previous section. In particular we recall that $(X,\mathsf{d},\mathfrak{m})$ is an $\RCD(K,N)$ metric measure space, $\Omega\subset X$ is an open domain, $(Y,\mathsf{d}_Y)$ is a ${\sf CAT}(0)$ space and $u:\Omega\to Y$ is a harmonic map. Moreover, we recall that the functions $f_t(\cdot,\cdot)$ were defined in \eqref{eq:introft}.
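For the reader's convenience, we also recall that, up to the precise choice of the domain over which the infimum is taken, \eqref{eq:introft} reads as the Hopf--Lax-type formula \begin{equation*} f_t(x,\lambda)=\inf_{y}\left\{\frac{e^{-2K\lambda}\mathsf{d}^2(x,y)}{2t}-\mathsf{d}_Y(u(x),u(y))\right\}\, , \end{equation*} and that $L_{t,\lambda}(x)$ denotes the minimal distance between $x$ and the set $S_t(x,\lambda)$ of points attaining the infimum.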
\smallskip We consider local weak solutions and super/sub-solutions of the heat equation in space time. Let us introduce some terminology. Given an open domain $G\subset X$ and an open interval $I=(a,b)\subset \mathbb{R}$, we shall refer to the domain $G\times I\subset X\times \mathbb{R}$ as a parabolic cylinder in space time. When $G=B_r(x_0)$ for some $x_0\in X$ and $r>0$ and $I=I_{r}(\lambda_0)=(\lambda_0-r^2,\lambda_0+r^2)$, we use the notation \begin{equation*} Q_r(x_0,\lambda_0):=B_r(x_0)\times I_{r}(\lambda_0)=B_r(x_0)\times (\lambda_0-r^2,\lambda_0+r^2)\, . \end{equation*} We recall the notion of weak solution and of weak super/sub-solution of the heat equation adopted in \cite{ZhangZhu18}. \begin{definition} Let $Q=G\times I$ be a parabolic cylinder in space time, for some open domain $G\subset X$ and open interval $I\subset (0,\infty)$. A function $g\in W^{1,2}_{{\rm loc}}(Q)$ is said to be a weak super-solution of the heat equation \begin{equation*} \Delta g(x,\lambda)=\frac{\partial g}{\partial \lambda} \end{equation*} if it satisfies \begin{equation*} - \int_Q\nabla g\cdot \nabla \phi \mathop{}\!\mathrm{d}\mathfrak{m} \mathop{}\!\mathrm{d}\mathscr{L}^1\le \int _Q\frac{\partial g}{\partial \lambda}\phi\mathop{}\!\mathrm{d}\mathfrak{m}\mathop{}\!\mathrm{d}\mathscr{L}^1\, , \end{equation*} for any non-negative function $\phi\in\Lip_c(Q)$. We call $g$ a sub-solution if $-g$ is a super-solution. We call $g$ a solution if it is both a sub-solution and a super-solution. \end{definition} The following is \cite[Lemma 6.12]{ZhangZhu18}, which works without any modification in the present context. \begin{lemma} Let $Q=G\times I$ be a parabolic cylinder. Let us consider a function $g\in W^{1,2}_{{\rm loc}}(G\times I)$. If for $\mathscr{L}^1$-a.e.
$\lambda\in I$ it holds that $g(\cdot,\lambda)$ is a super-solution of the equation \begin{equation} \Delta_xg(\cdot,\lambda)=\frac{\partial g}{\partial\lambda}(\cdot,\lambda)\, ,\quad\text{on $G$} \end{equation} then $g$ is a super-solution of the heat equation \begin{equation} \Delta g=\frac{\partial g}{\partial \lambda}\, ,\quad\text{on $Q$}\, . \end{equation} \end{lemma} Arguing as in the proof of \cite[Proposition 6.13]{ZhangZhu18}, combining \autoref{lemma:tech3} with \autoref{cor:tomoreglobal}, we obtain the following. \begin{proposition}\label{prop:superheat} Let $\Omega''\Subset \Omega'$ and $t_*:=\min\{t_0,t_1\}$, where $t_0$ and $t_1$ are given by \autoref{lemma:firstft} and \autoref{cor:tomoreglobal}, respectively. Then, for each $t\in (0,t_*)$, the function $f_t(\cdot,\cdot)$ is a super-solution of the heat equation \begin{equation} \Delta f_t=\frac{\partial f_t}{\partial\lambda}\, ,\quad\text{on $\Omega''\times(0,1)$}\, . \end{equation} \end{proposition} The strategy to obtain local Lipschitz continuity from the previous results is borrowed from \cite{ZhangZhu18}. We outline the main steps, referring to \cite{ZhangZhu18} for the details of the proofs.\\ In the case $K=0$, where there are no additional error terms, the outcome of our previous constructions is that all the functions $f_t$ are super-solutions of the Laplace equation $\Delta f_t=0$. The strategy is to use this information in combination with a Harnack inequality to promote integral estimates to point-wise estimates. Taking the derivative w.r.t. $t$ of $f_t$ and applying again Harnack's inequality we will obtain uniform estimates on the point-wise Lipschitz constant of $u$. \medskip Let us consider $0<R\le 1$ and let us assume that $B_{2R}(q)\Subset \Omega'$ for some $q\in X$. 
Let $t_*>0$ be given by \autoref{prop:superheat} for $\Omega''=B_{2R}(q)$ and, for each $t\in (0,t_*)$ and each $\lambda\in (0,1)$, we define the function $x\mapsto \abs{\nabla^-f_t(x,\lambda)}$ on $B_{2R}(q)$ by \begin{equation*} \abs{\nabla ^-f_t(x,\lambda)}:=\limsup_{r\to 0}\sup_{y\in B_r(x)}\frac{\left(f_t(x,\lambda)-f_t(y,\lambda)\right)_+}{r}\, , \end{equation*} where $(a)_+:=\max\{0,a\}$. Set \begin{equation*} \bar{t}:=\min\left\{t_*,\frac{R^2}{64+64\mathrm{osc}_{\overline{\Omega}'}u} \right\} \end{equation*} and \begin{equation*} v(t,x,\lambda):=-f_t(x,\lambda)\, ,\quad\text{for any $(t,x,\lambda)\in (0,\bar{t})\times B_{R/2}(q)\times[0,1]$}\, . \end{equation*} Below we state the counterpart of \cite[Sublemma 6.16]{ZhangZhu18} in our setting. The proof in \cite{ZhangZhu18} is based only on metric arguments, and the assumption that $(X,\mathsf{d})$ is an Alexandrov space with curvature bounded from below is never used; therefore it works verbatim in the present setting. \begin{lemma}\label{lemma:derivat} With the notation above, for any $(t,x,\lambda)\in (0,\bar{t})\times B_{R/4}(q)\times (0,1)$ it holds \begin{equation*} \frac{\partial^+}{\partial t}v(t,x,\lambda):=\limsup_{s\to 0}\frac{v(t+s,x,\lambda)-v(t,x,\lambda)}{s}\le \left(\lip u(x)\right)^2+\abs{\nabla ^-f_t(x,\lambda)}^2\, . \end{equation*} \end{lemma} Below we state the counterpart of \cite[Sublemma 6.17]{ZhangZhu18} in our setting. Also in this case, the proof in \cite{ZhangZhu18} is based only on metric arguments, relying only on \autoref{lemma:firstft} and \autoref{lemma:tech2} to obtain the inequality \begin{equation*} \abs{v(t,x,\lambda)-v(t',x,\lambda)}\le e^{-2K}\frac{{\rm diam}^2(\Omega')}{2a^2}\abs{t-t'}\, , \end{equation*} for any $t,t'\ge a>0$ and any $(x,\lambda)\in B_{R/4}(q)\times (0,1)$.
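To give an idea of where the last displayed inequality comes from: by \eqref{eq:introft}, $f_t(x,\lambda)$ is an infimum, over $y$, of functions which are Lipschitz in the variable $t$ on $[a,\infty)$, uniformly in $y$. Indeed, assuming as one may that $K\le 0$ (up to lowering the lower Ricci bound), so that $e^{-2K\lambda}\le e^{-2K}$ for every $\lambda\in[0,1]$, for any $t,t'\ge a>0$ it holds \begin{equation*} \left|\frac{e^{-2K\lambda}\mathsf{d}^2(x,y)}{2t}-\frac{e^{-2K\lambda}\mathsf{d}^2(x,y)}{2t'}\right|=\frac{e^{-2K\lambda}\mathsf{d}^2(x,y)}{2}\,\frac{\abs{t-t'}}{tt'}\le e^{-2K}\frac{{\rm diam}^2(\Omega')}{2a^2}\abs{t-t'}\, , \end{equation*} and an infimum of a family of uniformly Lipschitz functions is Lipschitz with the same constant.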
\begin{lemma}\label{lemma:calHt} With the notation above, let $\mathcal{H}:(0,\bar{t})\to\mathbb{R}$ be defined by \begin{equation}\label{eq:defHt} \mathcal{H}(t):=\frac{1}{\mathfrak{m}(B_{R/4}(q))}\int_{B_{R/4}(q)\times (\frac14,\frac34)}v(t,x,\lambda)\mathop{}\!\mathrm{d}\mathfrak{m}(x)\mathop{}\!\mathrm{d}\mathscr{L}^1(\lambda)\, . \end{equation} Then the function $\mathcal{H}$ is locally Lipschitz on $(0,\bar{t})$. \end{lemma} The next step is to get integral estimates for $(x,\lambda)\mapsto \abs{\nabla ^-f_t(x,\lambda)}^2$ and $x\mapsto \lip^2u(x)$, and to employ them in combination with \autoref{lemma:derivat} to bound the derivative with respect to time of $t\mapsto \mathcal{H}(t)$. \smallskip By \autoref{prop:lipfinite}, $\lip^2u(x)\le c(n)^2\abs{\mathop{}\!\mathrm{d} u(x)}^2$ for $\mathfrak{m}$-a.e. $x\in\Omega$, where $n$ is the essential dimension of $(X,\mathsf{d},\mathfrak{m})$. By integration \begin{equation}\label{eq:estlipintegrated} \int_{B_{R/4}(q)}\lip^2u(x)\mathop{}\!\mathrm{d}\mathfrak{m}(x)\le C(n)\int_{B_{R/4}(q)}\abs{\mathop{}\!\mathrm{d} u(x)}^2\mathop{}\!\mathrm{d}\mathfrak{m}(x)\, . \end{equation} The integral bound for $(x,\lambda)\mapsto \abs{\nabla ^-f_t(x,\lambda)}^2$ is based on \autoref{prop:superheat} and the following Harnack inequality for sub-solutions of the heat equation, see \cite{Sturm95II}, or \cite{MarolaMasson13} for a proof under doubling and Poincar\'e conditions. \begin{proposition}\label{prop:Harnack} Let $G\times I$ be a parabolic cylinder in $X\times\mathbb{R}$ and let $g$ be a non-negative locally bounded sub-solution of the heat equation $\Delta g=\frac{\partial g}{\partial \lambda}$ on $Q_r\subset G\times I$. 
Then there exists a constant $C=C(K,N,{\rm diam} G)$ such that \begin{equation} \mathrm{ess}\sup_{Q_{r/2}}g\le \frac{C}{r^2\mathfrak{m}(B_r(x))}\int_{Q_r}g\mathop{}\!\mathrm{d}\mathfrak{m}\mathop{}\!\mathrm{d}\mathscr{L}^1=\bar{C}\fint_{Q_r}g\mathop{}\!\mathrm{d}\mathfrak{m}\mathop{}\!\mathrm{d}\mathscr{L}^1\, . \end{equation} \end{proposition} The statement below corresponds to \cite[Lemma 6.15]{ZhangZhu18}, whose proof works in the present setting with no modifications. We outline the strategy, referring the reader to \cite{ZhangZhu18} for more details. \begin{proposition}\label{prop:estgradf+} With the notation above, there exists a constant $C=C(K,N,R)>0$ such that \begin{equation}\label{eq:intestmain} \frac{1}{\mathfrak{m}(B_R(q))}\int_{B_{R}(q)\times (\frac14,\frac34)}\abs{\nabla^-f_t(x,\lambda)}^2\mathop{}\!\mathrm{d}\mathfrak{m}(x)\mathop{}\!\mathrm{d}\mathscr{L}^1(\lambda)\le C(K,N,R)\left(\mathrm{osc}_{\overline{\Omega}'}u\right)^2\, , \end{equation} for any $0<t<t_*$. \end{proposition} \begin{proof} We start by recalling that, by \eqref{eq:singosc}, \begin{equation}\label{eq:uniboundft} 0\ge f_t(x,\lambda)\ge-\mathrm{osc}_{\overline{\Omega}'}u\, , \end{equation} for any domain $\Omega'\Subset\Omega$ and for any $t>0$ and any $0\le \lambda\le 1$. \smallskip The first step in the proof of \cite[Lemma 6.15]{ZhangZhu18} is based on a maximal function argument, which works verbatim in the present setting, as it relies only on the local doubling and Poincar\'e properties of the metric measure space $(X,\mathsf{d},\mathfrak{m})$ that are guaranteed by the $\RCD(K,N)$ condition for $1\le N<\infty$. We refer to \cite{GigliTulyenev21} for similar arguments.
\smallskip The argument leading to equation (6.52) in the second step of the proof of \cite[Lemma 6.15]{ZhangZhu18} builds on \autoref{lemma:firstft} (ii), (iii), the outcome of the previous step and the uniform boundedness of the local maximal function operator from $L^2$ to $L^2$ on balls, which is a consequence of the uniform local doubling property of $\RCD(K,N)$ spaces. Therefore the proof works verbatim in the present setting. \smallskip In order to get equation (6.55) in \cite{ZhangZhu18}, the authors apply a parabolic Caccioppoli inequality to $-f_t(\cdot,\cdot)$, which is a non-negative sub-solution of the heat equation thanks to \autoref{prop:superheat}. The Caccioppoli inequality is obtained in \cite[Lemma 4.1]{MarolaMasson13} and holds also in the present setting, since it requires only doubling and Poincar\'e conditions. Combined with \eqref{eq:uniboundft}, this leads to \begin{equation}\label{eq:propint} \int_{B_R(q)\times (\frac14,\frac34)}\abs{\nabla f_t(x,\lambda)}^2\mathop{}\!\mathrm{d}\mathfrak{m}(x)\mathop{}\!\mathrm{d}\mathscr{L}^1(\lambda)\le C(K,N,R)\mathfrak{m}(B_{2R}(q))\left(\mathrm{osc}_{\overline{\Omega}'}u\right)^2\, . \end{equation} In order to get equation (6.56) in \cite{ZhangZhu18}, the authors fix $(x,\lambda)\in B_R(q)\times (0,1)$ and observe that the function $\left(f_t(x,\lambda)-f_t(\cdot,\cdot)\right)_+$ is a non-negative sub-solution of the heat equation on $B_R(q)\times (0,1)$, thanks to \autoref{prop:superheat}.
Then they apply the Harnack inequality \autoref{prop:Harnack} to obtain a uniform estimate, see equation (6.56), which is then integrated over $B_R(q)\times (\frac14,\frac34)$ and combined with \eqref{eq:propint} and the outcome of the first step to get the sought bound \begin{equation} \int_{B_{R}(q)\times (\frac14,\frac34)}\abs{\nabla^-f_t(x,\lambda)}^2\mathop{}\!\mathrm{d}\mathfrak{m}(x)\mathop{}\!\mathrm{d}\mathscr{L}^1(\lambda)\le C(K,N,R)\mathfrak{m}(B_{2R}(q))\left(\mathrm{osc}_{\overline{\Omega}'}u\right)^2\, , \end{equation} from which \eqref{eq:intestmain} follows by the uniform local doubling property of $(X,\mathsf{d},\mathfrak{m})$. \end{proof} We are ready to prove the main theorem of this paper, which we restate below for the sake of readability. \begin{theorem}\label{mainthcore} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space for some $K\in\mathbb{R}$ and $1\le N<\infty$. Let $(Y,\mathsf{d}_Y)$ be a ${\sf CAT}(0)$ space and let $\Omega\subset X$ be an open domain. Assume that $u:\Omega\to Y$ is a harmonic map. Then for any $0<R\le 1$ there exists a constant $C=C(K,N,R)>0$ such that if $B_{2R}(q)\Subset \Omega$ for some point $q\in X$, then for any $x,y\in B_{R/16}(q)$ it holds \begin{equation*} \mathsf{d}_Y(u(x),u(y))\le C(K,N,R)\left(\left(\fint_{B_{R}(q)}\abs{\mathop{}\!\mathrm{d} u(z)}^2\mathop{}\!\mathrm{d}\mathfrak{m}(z)\right)^{\frac{1}{2}}+\mathrm{osc}_{\overline{B}_R(q)}u\right)\mathsf{d}(x,y)\, . \end{equation*} \end{theorem} \begin{proof} The proof follows closely the one of \cite[Theorem 1.4]{ZhangZhu18}, without relevant modifications. We outline the strategy. \smallskip Let $\mathcal{H}(t)$ be the function defined in \eqref{eq:defHt}.
Thanks to \autoref{lemma:calHt} we can apply the dominated convergence theorem to estimate \begin{align*} \frac{\mathop{}\!\mathrm{d}^+}{\mathop{}\!\mathrm{d} t}\mathcal{H}(t):=&\limsup_{s\downarrow 0}\frac{\mathcal{H}(t+s)-\mathcal{H}(t)}{s}\\ \le &\frac{1}{\mathfrak{m}(B_{R/4}(q))}\int_{B_{R/4}(q)\times (\frac14,\frac34)}\left[\left(\lip u(x)\right)^2+\abs{\nabla ^-f_t(x,\lambda)}^2\right]\mathop{}\!\mathrm{d}\mathfrak{m}(x)\mathop{}\!\mathrm{d}\mathscr{L}^1(\lambda)\, , \end{align*} where the inequality follows from \autoref{lemma:derivat}. \smallskip Combining \autoref{prop:estgradf+} with \eqref{eq:estlipintegrated}, for any $t\in (0,\bar{t})$, we can estimate \begin{equation}\label{eq:boundd+} \frac{\mathop{}\!\mathrm{d}^+}{\mathop{}\!\mathrm{d} t}\mathcal{H}(t)\le C(K,N,R)\left(\fint_{B_{R}(q)}\abs{\mathop{}\!\mathrm{d} u(x)}^2\mathop{}\!\mathrm{d}\mathfrak{m}(x)+\left(\mathrm{osc}_{\overline{\Omega}'}u\right)^2\right)\, . \end{equation} Borrowing the notation from \cite{ZhangZhu18}, we set \begin{equation*} \mathcal{A}_{u,R}:=\left(\fint_{B_{R}(q)}\abs{\mathop{}\!\mathrm{d} u(x)}^2\mathop{}\!\mathrm{d}\mathfrak{m}(x)\right)^{\frac{1}{2}}+\mathrm{osc}_{\overline{B}_R(q)}u\, . \end{equation*} Then \eqref{eq:boundd+} implies that \begin{equation}\label{eq:boundrightderivative} \frac{\mathop{}\!\mathrm{d}^+}{\mathop{}\!\mathrm{d} t}\mathcal{H}(t)\le 2C(K,N,R)\mathcal{A}_{u,R}^2\, ,\quad\text{for any $0\le t\le \bar{t}$}\, . \end{equation} It easily follows from \autoref{lemma:firstft} (i) and the continuity of $u$ that \begin{equation*} \lim_{t\to 0}v(t,x,\lambda)=0\, ,\quad\text{for any $(x,\lambda)\in B_{R/4}(q)\times (0,1)$}\, . \end{equation*} Since $v$ is uniformly bounded by \eqref{eq:uniboundft}, we can apply the dominated convergence theorem to infer that \begin{equation*} \lim_{t\downarrow 0}\mathcal{H}(t)=0\, . 
\end{equation*} Combining this with \eqref{eq:boundrightderivative} and the local Lipschitz continuity of $\mathcal{H}$, see \autoref{lemma:calHt}, we get \begin{equation*} \mathcal{H}(t)\le 2C(K,N,R)\mathcal{A}_{u,R}^2t\, ,\quad\text{for any $t\in (0,\bar{t})$}\, . \end{equation*} Let us notice that, for any $t\in (0,\bar{t})$, the function $v(t,\cdot,\cdot)$ is a non-negative sub-solution of the heat equation on the cylinder $B_{R/2}(q)\times (0,1)$ by \autoref{prop:superheat}, hence so is $v/t$. Using the Harnack inequality \autoref{prop:Harnack} we obtain \begin{align}\label{eq:estvt} \nonumber\sup_{B_{R/8}(q)\times (\frac38,\frac58)}\frac{v(t,x,\lambda)}{t}\le& \frac{C}{R^2\mathfrak{m}(B_{R/4}(q))}\int_{B_{R/4}(q)\times (\frac14,\frac34)}\frac{v(t,x,\lambda)}{t}\mathop{}\!\mathrm{d}\mathfrak{m}(x)\mathop{}\!\mathrm{d}\mathscr{L}^1(\lambda)\\ \le & \bar{C} \mathcal{A}_{u,R}^2\, ,\quad\text{for any $t\in(0,\bar{t})$}\, . \end{align} Let us see how to complete the local Lipschitz estimate. Let us consider $x,y\in B_{R/8}(q)$. We apply \eqref{eq:estvt} with $\lambda=1/2$ and get \begin{equation}\label{eq:estalmostfinal} \frac{\mathsf{d}_Y(u(x),u(y))}{t}-e^{-K}\frac{\mathsf{d}^2(x,y)}{2t^2}\le \frac{v(t,x,\frac{1}{2})}{t}\le \bar{C}\mathcal{A}^2_{u,R}\, . \end{equation} In particular, if \begin{equation*} \mathsf{d}(x,y)<e^{K/2}\mathcal{A}_{u,R}\bar{t}\, , \end{equation*} then employing \eqref{eq:estalmostfinal} with $t:=\mathsf{d}(x,y)/(e^{K/2}\mathcal{A}_{u,R})$, we obtain \begin{equation}\label{eq:finalalm} \mathsf{d}_Y(u(x),u(y))\le\left(\bar{C}+\frac{1}{2}\right)e^{-K/2}\mathcal{A}_{u,R}\mathsf{d}(x,y)=\tilde{C}\mathsf{d}(x,y)\, . \end{equation} The above shows that the local Lipschitz estimate holds uniformly for points sufficiently close, i.e. when $\mathsf{d}(x,y)<e^{K/2}\mathcal{A}_{u,R}\bar{t}$.\\ If $\mathsf{d}(x,y)\ge e^{K/2}\mathcal{A}_{u,R}\bar{t}$, then we consider a minimizing geodesic $\gamma:[0,\mathsf{d}(x,y)]\to X$ connecting $x$ to $y$.
Then we choose $N\ge 1$ and points $\gamma(t_i)$ along $\gamma$ with $\gamma(t_0)=\gamma(0)=x$ and $\gamma(t_N)=\gamma(\mathsf{d}(x,y))=y$ in such a way that $\mathsf{d}(\gamma(t_i),\gamma(t_{i+1}))<e^{K/2}\mathcal{A}_{u,R}\bar{t}$, and we apply repeatedly \eqref{eq:finalalm} between $\gamma(t_i)$ and $\gamma(t_{i+1})$ to get \begin{equation*} \mathsf{d}_Y(u(x),u(y))\le\sum_{i=0}^{N-1}\mathsf{d}_Y(u(\gamma(t_i)),u(\gamma(t_{i+1})))\le \tilde{C} \sum_{i=0}^{N-1}\mathsf{d}(\gamma(t_i),\gamma(t_{i+1}))=\tilde{C}\mathsf{d}(x,y)\, , \end{equation*} which concludes the proof of the local Lipschitz continuity of $u$. Above we used the triangle inequality for the first inequality, \eqref{eq:finalalm} for the second inequality and the choice of the points $\gamma(t_i)$ along the minimizing geodesic between $x$ and $y$ for the last equality. \end{proof} \section{Bochner inequality with Hessian-type term}\label{sec:Bochner} The goal of this section is to prove the following. \begin{theorem}\label{thm:Bochner} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space for some $K\in\mathbb{R}$, $1\le N<\infty$, and let $(Y,\mathsf{d}_Y)$ be a ${\sf CAT}(0)$ space. Let $\Omega\subset X$ be an open domain and let $u:\Omega\to Y$ be a harmonic map. Then $\lip u\in W^{1,2}_{{\rm loc}}(\Omega)\cap L^{\infty}_{{\rm loc}}(\Omega)$ and \begin{equation}\label{eq:bochnerwithhessian} \Delta \frac{\abs{\lip u}^2}{2}\ge \abs{\nabla \lip u}^2+K\abs{\lip u}^2\, ,\quad\text{on $\Omega$} \end{equation} in the sense of distributions. \end{theorem} The above \eqref{eq:bochnerwithhessian} is a weak Bochner inequality. The term $\abs{\nabla \lip u}^2$ on the right-hand side is a Hessian-type term, and the appearance of such terms in Bochner inequalities for harmonic maps between singular spaces is a delicate issue.
\smallskip Already for scalar valued maps defined on a non-smooth $\RCD$ space, the validity of the Bochner inequality (even without Hessian term) is a deep result: it was proved for $\RCD(K,\infty)$ spaces in \cite{AGSDuke} (see also \cite{AGS15} for the reverse implication); the dimensional improvement for $\RCD^*(K,N)$ spaces was established independently in \cite{EKS} and \cite{AmbrosioMondinoSavare19} (together with the reverse implication). The fact that the scalar Bochner inequality (without Hessian) ``self-improves'' to estimate the norm of the Hessian was noticed in the smooth setting of $\Gamma$-calculus in \cite{Bakry85} and then obtained in the non-smooth setting of $\RCD$ spaces in \cite{Savare14} and \cite{Gigli18}. For \emph{smooth} harmonic maps between \emph{smooth} Riemannian manifolds, a Bochner-type identity was proved in the seminal work \cite{EellsSampson64}. For harmonic maps into singular spaces, obtaining a Bochner inequality is a delicate problem. When the domain $\Omega$ has non-negative sectional curvature and the target $Y$ is a non-positively curved simplicial complex, some weak forms of Bochner-type inequalities have been obtained in \cite{Chen95, KorevaarSchoen}. The list of contributions on the topic is then quite long, until \cite{ZhangZhongZhu19} proved the validity of the Bochner-type inequality \eqref{eq:bochnerwithhessian} for harmonic maps $u:\Omega\to Y$, where $\Omega$ is a \emph{smooth} domain of an $n$-dimensional Riemannian manifold with $\mathrm{Ric}\geq K$, and $Y$ is a ${\sf CAT}$ space. To the best of our knowledge, \autoref{thm:Bochner} is the first result about the validity of a Bochner-type inequality with Hessian-type term for harmonic maps when both the source and the target spaces are non-smooth. \medskip The proof of \autoref{thm:Bochner} will follow the strategy in \cite{ZhangZhongZhu19}, dealing with the case of smooth source spaces.
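To orient the reader, let us informally recall the shape of the computation in the smooth setting, which \eqref{eq:bochnerwithhessian} generalizes. If $u:(M^n,g)\to (N,h)$ is a smooth harmonic map between smooth Riemannian manifolds, the Eells--Sampson Bochner formula from \cite{EellsSampson64} gives \begin{equation*} \frac{1}{2}\Delta\abs{\mathop{}\!\mathrm{d} u}^2=\abs{\nabla\mathop{}\!\mathrm{d} u}^2+\sum_i\langle \mathop{}\!\mathrm{d} u(\mathrm{Ric}^M(e_i)),\mathop{}\!\mathrm{d} u(e_i)\rangle-\sum_{i,j}\langle R^N(\mathop{}\!\mathrm{d} u(e_i),\mathop{}\!\mathrm{d} u(e_j))\mathop{}\!\mathrm{d} u(e_j),\mathop{}\!\mathrm{d} u(e_i)\rangle\, , \end{equation*} for any local orthonormal frame $(e_i)$. If $\mathrm{Ric}^M\ge K$ and $N$ has non-positive sectional curvature, the Ricci term is bounded below by $K\abs{\mathop{}\!\mathrm{d} u}^2$ and the curvature term of the target is non-negative, so that \begin{equation*} \frac{1}{2}\Delta\abs{\mathop{}\!\mathrm{d} u}^2\ge \abs{\nabla\mathop{}\!\mathrm{d} u}^2+K\abs{\mathop{}\!\mathrm{d} u}^2\ge \abs{\nabla\abs{\mathop{}\!\mathrm{d} u}}^2+K\abs{\mathop{}\!\mathrm{d} u}^2\, , \end{equation*} where the last inequality is Kato's inequality. This is the smooth counterpart of \eqref{eq:bochnerwithhessian}, with $\abs{\mathop{}\!\mathrm{d} u}$ in place of $\lip u$.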
The fundamental novelty, in the same spirit as in the previous sections, will be the use of the interplay between optimal transport and heat flow on $\RCD$ spaces as a replacement of computations via the second variation of arc length and parallel transport in the smooth setting.\\ For any $q\in(1,2]$, for any domain $\Omega'$ compactly contained in $\Omega$ and for any $t>0$, we consider the auxiliary function $f_t:\Omega'\to\mathbb{R}$ defined by \begin{equation}\label{eq:defftB} f_t(x):=\inf_{y\in\Omega'}\left\{\frac{\mathsf{d}^p(x,y)}{pt^{p-1}}-\mathsf{d}_Y(u(x),u(y))\right\}\, , \end{equation} where $p:=q/(q-1)$. Notice that if $q\in (1,2]$, then $p\in [2,\infty)$. We will avoid stressing the dependence on $q$, as it will always be clear from the context.\\ Moreover, we shall denote by $S_t(x)$ the set of those points attaining the infimum in \eqref{eq:defftB} and we introduce a function $L_t:\Omega'\to\mathbb{R}$ via \begin{equation*} L_t(x):=\min_{z\in S_t(x)}\mathsf{d}(x,z)\, . \end{equation*} We choose $B_R(o)\subset X$ such that $B_{2R}(o)\Subset \Omega$. We set \begin{equation*} l_0:=\sup_{x,y\in B_{2R}(o)}\frac{\mathsf{d}_Y(u(x),u(y))}{\mathsf{d}(x,y)}<\infty\, , \end{equation*} where the finiteness of the local Lipschitz constant follows from \autoref{mainthcore}.\\ The proof of \autoref{thm:Bochner} will depend on some intermediate results. \begin{lemma}\label{lemma:elem} There exists a constant $C=C(p,l_0)>0$ such that for any $0<t<t^*$, it holds \begin{equation} L_t\le Ct\,, \quad\, 0\le -f_t\le Ct\, ,\quad\text{on $B_R(o)$}\, . \end{equation} Moreover, $f_t$ is Lipschitz on $B_R(o)$ and $L_t$ is lower semicontinuous on $B_R(o)$. \end{lemma} The proof of \autoref{lemma:elem} is completely elementary and based on the local Lipschitz continuity of $u:\Omega\to Y$, therefore we omit it.
We refer to \cite[Lemma 4.1]{ZhangZhongZhu19} for the detailed proof in the context of maps from smooth Riemannian manifolds to ${\sf CAT}(k)$ spaces, which is based on metric arguments and therefore works verbatim in the present setting. \medskip We introduce a notion of metric differentiability in the present context. \begin{definition} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space with essential dimension $1\le n\le N$ and let $(Y,\mathsf{d}_Y)$ be a complete metric space. Let $\Omega\subset X$ be an open domain and let $u:\Omega\to Y$ be a Lipschitz map. Given a point $x\in\Omega$, we say that $u$ is metrically differentiable at $x$ if the following hold: \begin{itemize} \item[(i)] $x$ is an $n$-regular point of $(X,\mathsf{d},\mathfrak{m})$; \item[(ii)] the functions $g_i:X\to[0,\infty)$ defined by $g_i(y):=\mathsf{d}_Y(u(y),u(x))/r_i$ considered along the pmGH converging sequence $X_i:=(X,r_i^{-1}\mathsf{d},\left(\mathfrak{m}(B_{r_i}(x))\right)^{-1}\mathfrak{m},x)$ for $r_i\downarrow 0$ converge locally uniformly to a semi-norm $\mathrm{md}_x u:\mathbb{R}^n\to [0,\infty)$, which is independent of the chosen sequence, up to composition with Euclidean isometries. \end{itemize} \end{definition} \begin{proposition} Let $(X,\mathsf{d},\mathfrak{m})$ be an $\RCD(K,N)$ metric measure space and let $(Y,\mathsf{d}_Y)$ be a ${\sf CAT}(0)$ space. Let $\Omega\subset X$ be an open domain and let $u:\Omega\to Y$ be a harmonic map. Then $u$ is metrically differentiable at $\mathfrak{m}$-a.e. $x\in\Omega$. \end{proposition} \begin{proof} The statement follows from \autoref{thm:ksomega} (iii) (see also the proof of \autoref{prop:lappoin}), which gives approximate metric differentiability for general Sobolev functions, combined with \autoref{mainthcore}. 
Indeed, as the function $u$ is locally Lipschitz, the functions $X_i\ni y\mapsto \mathsf{d}_Y(u(x),u(y))/r_i$, where $X_i:=(X,r_i^{-1}\mathsf{d},\left(\mathfrak{m}(B_{r_i}(x))\right)^{-1}\mathfrak{m},x)$, are uniformly Lipschitz on $B^{X_i}_{1}(x)$. Moreover, they all vanish at $x$. Therefore the Ascoli--Arzel\`a theorem for pmGH converging sequences of metric spaces shows that they converge locally uniformly up to subsequences. Since we already know that the sequence converges to a semi-norm on $\mathbb{R}^n$ in $H^{1,2}_{{\rm loc}}$, the whole sequence converges locally uniformly to this semi-norm. \end{proof} \begin{proposition}\label{prop:nonlinearHL} Let $q\in (1,\infty)$ and $p\in (1,\infty)$ be such that $1/p+1/q=1$ and let $f_t$ be as above. Then, for any $x\in B_R(o)$, it holds \begin{equation}\label{eq:lowerbound} \liminf_{t\to 0}\frac{f_t(x)}{t}\ge -\frac{1}{q}\left(\lip u(x)\right)^q\, . \end{equation} Moreover, if $u$ is metrically differentiable at $x$, then \begin{equation}\label{eq:limit0} \lim_{t\to 0}\frac{f_t(x)}{t}=-\frac{1}{q}\left(\lip u(x)\right)^q\, ,\quad \lim_{t\to 0}\frac{L_t(x)}{t}=\left(\lip u(x)\right)^{\frac{q}{p}}\, . \end{equation} \end{proposition} \begin{proof} The proof of \eqref{eq:lowerbound} is elementary and based on the inequality \begin{equation*} \frac{a^p}{p}+\frac{b^q}{q}\ge ab\, \quad\text{for any $a,b\in [0,\infty)$}\, . \end{equation*} We refer to the proof of \cite[Lemma 4.4]{ZhangZhongZhu19} for the detailed argument. \\In order to prove \eqref{eq:limit0}, let us fix a metric differentiability point $x\in \Omega$. We choose $\xi\in\mathbb{R}^n$ with $\norm{\xi}=1$ such that \begin{equation*} \mathrm{md}_x u(\xi)=\norm{\mathrm{md}_x u}=\lip u(x)\,
\end{equation*} Then we consider points $y_t$ such that $\mathsf{d}(x,y_t)=t\left(\lip u(x)\right)^{q/p}$ and $y_t$ converge to the point $\left(\lip u(x)\right)^{q/p}\xi\in \mathbb{R}^n$ along the family $X_t:=(X,t^{-1}\mathsf{d},\left(\mathfrak{m}(B_t(x))\right)^{-1}\mathfrak{m},x)$ as $t\downarrow 0$.\\ By metric differentiability, \begin{equation*} \mathsf{d}_Y(u(y_t),u(x))=\mathsf{d}(x,y_t)\mathrm{md}_x u(\xi)+o(\mathsf{d}(x,y_t))=t\left(\lip u(x)\right)^q+o(t)\, ,\quad\text{as $t\to 0$}\, . \end{equation*} By the very definition of $f_t$, \begin{align*} \frac{f_t(x)}{t}\le & \frac{\mathsf{d}^p(x,y_t)}{pt^{p}}-\frac{\mathsf{d}_Y(u(x),u(y_t))}{t}\\ =&\frac{\left(\lip u(x)\right)^q}{p}-\left(\lip u(x)\right)^q+o(1)\\ =&-\frac{\left(\lip u(x)\right)^q}{q}+o(1)\, , \end{align*} which proves that \begin{equation*} \limsup_{t\to 0}\frac{f_t(x)}{t}\le -\frac{\left(\lip u(x)\right)^q}{q}\, . \end{equation*} The verification of the second limit in \eqref{eq:limit0} is completely analogous and we refer to the proof of \cite[Lemma 4.4]{ZhangZhongZhu19} for the detailed argument. \end{proof} \begin{proposition}\label{prop:Ellipticequation} With the same notation introduced above and under the same assumptions, it holds \begin{equation}\label{eq:ellipticbis} \Delta f_t\le -K\frac{L_t^p}{t^{p-1}}\, ,\quad\text{on $B_{2R}(o)$}\, , \end{equation} in the sense of distributions. \end{proposition} \begin{proof} The proof is completely analogous to the proof of \autoref{prop:mainestimate}, building on the contractivity of the Heat Flow in Wasserstein spaces of order $p$, instead of the Wasserstein spaces of order $2$. Therefore we omit the details and point out the only relevant differences in the argument.
\medskip The only modification needed with respect to Step 2 in the proof of \autoref{prop:mainestimate} is the observation that the function \begin{equation*} B_R(o)\times B_R(o)\ni (x,y)\mapsto \mathsf{d}^p(x,y) \end{equation*} has measure valued Laplacian bounded above by a constant $C(K,N,p,R)$ on $B_R(o)\times B_R(o)$, for any $p>2$. The statement follows from the chain rule for the measure valued Laplacian, writing $\mathsf{d}^p(x,y)=\mathsf{d}^2(x,y)\cdot\mathsf{d}^{p-2}(x,y)$ and recalling \autoref{lemma:elemdistprod}. \smallskip With respect to Step 3 in the proof of \autoref{prop:mainestimate}, here we consider optimal transport plans between heat kernels for the cost $\mathsf{d}^p$. The Wasserstein $W_p$ contraction estimate for the Heat Flow under the $\RCD(K,\infty)$ condition (see \cite[Theorem 4.4]{Savare14}) guarantees that for any pair of points $w,z\in X$ and for any $s>0$ there exists an admissible transport plan $\Pi_s$ between the heat kernels $P_s\delta_w$ and $P_s\delta_z$ such that \begin{equation*} \int_{X\times X}\mathsf{d}^p(x,y)\mathop{}\!\mathrm{d}\Pi_s\le e^{-Kps}\mathsf{d}^p(w,z)\, . \end{equation*} Then we replace the optimal transport plans for the quadratic cost with optimal transport plans for the cost $\mathsf{d}^p$ in the proof of \autoref{prop:mainestimate}. This replaces the estimate \eqref{eq:estbyContraction}. All the subsequent estimates work verbatim, as they only rely on the fact that $\Pi_s$ is an admissible transport plan between the heat kernel measures and not on the optimality for a specific cost. \smallskip Following the proof of \autoref{prop:mainestimate} we obtain \eqref{eq:ellipticbis}. \end{proof} \medskip \begin{proof}[Proof of \autoref{thm:Bochner}] In order to prove \eqref{eq:bochnerwithhessian} we use two limiting arguments.
The first one will be aimed at proving that $\left(\lip u\right)^q\in W^{1,2}(B_{R}(o))$ and \begin{equation*} \Delta \frac{\left(\lip u\right)^q}{q}\ge K\left(\lip u\right)^q\, \quad\text{on $B_{R}(o)$}\, , \end{equation*} in the sense of distributions for any $q\in(1,2]$. In the second step we will take the limit as $q\to 1$ and obtain \eqref{eq:bochnerwithhessian}. \medskip \textbf{Step 1}. We notice that the functions $-f_t/t$ are uniformly bounded by \autoref{lemma:elem}. Moreover \begin{equation*} \Delta \left(-\frac{f_t}{t}\right)\ge K\frac{L_t^p}{t^p}\ge -\abs{K}C\, ,\quad\text{on $B_{2R}(o)$}\, , \end{equation*} for some constant $C>0$, thanks to \eqref{eq:ellipticbis} and \autoref{lemma:elem} again. By Caccioppoli's inequality, the energies \begin{equation*} \frac{1}{t^2}\int _{B_{R}(o)}\abs{\nabla f_t}^2\mathop{}\!\mathrm{d}\mathfrak{m} \end{equation*} are uniformly bounded. Hence, taking the limit as $t\to 0$ and taking into account \eqref{eq:limit0} we obtain that $\left(\lip u\right)^q\in W^{1,2}(B_R(o))$. Moreover, with the help of \autoref{prop:nonlinearHL}, we can divide by $t>0$ and pass to the limit as $t\downarrow 0$ in \eqref{eq:ellipticbis}, to obtain that, for any $q\in(1,2]$, \begin{equation}\label{eq:Bochnerq} \Delta \frac{\left(\lip u\right)^q}{q}\ge K\left(\lip u\right)^q\,, \quad\text{on $B_{R}(o)$}\, , \end{equation} in the sense of distributions. \medskip \textbf{Step 2.} In this step we argue as in the first one, proving uniform estimates with respect to $q\in(1,2]$ and then taking the limit as $q\downarrow 1$.\\ Notice that the functions $\left(\lip u\right)^q/q$ are uniformly bounded. Moreover, they have Laplacians uniformly bounded from below, thanks to \eqref{eq:Bochnerq}. Hence, by the Caccioppoli inequality they have uniformly bounded $W^{1,2}$ energies on $B_{R/2}(o)$.
Therefore we can pass to the $L^2$ limit as $q\downarrow 1$, to obtain that $\lip u\in W^{1,2}(B_{R/2}(o))$ and \begin{equation}\label{eq:Bochner1} \Delta \lip u\ge K\lip u\, ,\quad\text{on $B_{R/2}(o)$ ,} \end{equation} in the sense of distributions.\\ By the chain rule, \eqref{eq:Bochner1} implies that \begin{equation*} \Delta \frac{\abs{\lip u}^2}{2}\ge \abs{\nabla \lip u}^2+K\abs{\lip u}^2\,, \quad\text{on $B_{R/2}(o)$}\, , \end{equation*} in the sense of distributions.\\ As the statement is clearly local, the proof is complete. \end{proof}
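Let us also record, at a purely formal level, the chain rule computation behind the last step: writing $f:=\lip u\in W^{1,2}(B_{R/2}(o))\cap L^{\infty}(B_{R/2}(o))$, $f\ge 0$, one has \begin{equation*} \Delta \frac{\abs{f}^2}{2}=f\,\Delta f+\abs{\nabla f}^2\ge \abs{\nabla f}^2+K\abs{f}^2\, ,\quad\text{on $B_{R/2}(o)$}\, , \end{equation*} where the inequality follows from \eqref{eq:Bochner1}. In the present non-smooth setting this computation is made rigorous by the chain and Leibniz rules for the measure valued Laplacian, which apply since $f$ is bounded and nonnegative.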
\section{Introduction} The Totally Asymmetric Simple Exclusion Process (TASEP) is a stochastic process describing a collection of hard-core particles which perform random, totally directed walks on a regular one-dimensional lattice, subject to the constraint that each lattice site may host at most one particle. More specifically, the updating rules are defined as follows: each particle attempts to jump to the neighboring lattice site on its right with a given rate, which can be set to 1 without any loss of generality, and the jump is actually performed if and only if the target site is empty at that time instant; otherwise the jump is rejected. Jumps in the opposite direction are forbidden. The model possesses a particle-hole symmetry, i.e., it is symmetric with respect to a simultaneous replacement of particles with holes (and vice versa) and an inversion of the direction of motion. A detailed introduction, definitions and a review of important results obtained for this process can be found in \cite{derrida98}. TASEP on a finite chain of $N$ sites attains a non-equilibrium steady state which depends on the boundary conditions used. The two most typical choices of the latter are: (a) periodic boundary conditions, i.e., the chain forms a ring, so that the number of particles initially introduced into the system is conserved, and (b) open boundaries: at the left extremity the chain is attached to an infinite reservoir of particles maintained at a constant chemical potential, while at the right extremity there is another infinite reservoir, also at a constant chemical potential, smaller than the one on the left. Consequently, particles are injected into the system at the left boundary at a constant rate $\alpha$, provided that the leftmost site ($j = 1$) is empty at that time instant, and, whenever they reach the rightmost site $j = N$, they are removed with a constant rate $\beta$. A sketch of such a model is presented in \fig{f:03}.
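As a quick numerical illustration of the updating rules above (a sketch added for convenience; the function names and the tiny system size are our own illustrative choices, not part of the original formulation), one can assemble the continuous-time Markov generator of the open-boundary TASEP for a small chain and obtain the steady state by solving $\frac{d}{dt}P(\sigma|t)=0$ directly:

```python
import itertools
import numpy as np

def tasep_generator(N, alpha, beta):
    """Markov generator M of the open-boundary TASEP on N sites:
    M[c2, c1] is the rate of the transition c1 -> c2, and
    M[c1, c1] = -(total escape rate from c1)."""
    states = list(itertools.product((0, 1), repeat=N))
    index = {s: i for i, s in enumerate(states)}
    M = np.zeros((2 ** N, 2 ** N))
    for s in states:
        i = index[s]
        moves = []
        if s[0] == 0:                          # injection at site j = 1, rate alpha
            moves.append((alpha, (1,) + s[1:]))
        if s[-1] == 1:                         # removal at site j = N, rate beta
            moves.append((beta, s[:-1] + (0,)))
        for j in range(N - 1):                 # bulk hops to the right, unit rate
            if s[j] == 1 and s[j + 1] == 0:
                t = list(s)
                t[j], t[j + 1] = 0, 1
                moves.append((1.0, tuple(t)))
        for rate, t in moves:
            M[index[t], i] += rate
            M[i, i] -= rate
    return states, M

def steady_state(N, alpha, beta):
    """Solve M P = 0 together with the normalization sum(P) = 1."""
    states, M = tasep_generator(N, alpha, beta)
    A = M.copy()
    A[0, :] = 1.0                              # replace one (redundant) balance equation
    b = np.zeros(2 ** N)
    b[0] = 1.0
    P = np.linalg.solve(A, b)
    return dict(zip(states, P))
```

For $N=2$ and $\alpha=\beta=1$ this yields the stationary weights $(1,1,2,1)/5$ for the configurations $(00,01,10,11)$, in agreement with the matrix ansatz recalled in Sec.~\ref{a}.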
Note that in the former case the steady state is very simple: all configurations respecting the conservation of the number of particles are equiprobable (see, e.g., \cite{krapivskybook}). On the contrary, in the latter case the system evolves towards an out-of-equilibrium steady state with a non-trivial particle density distribution, which has been determined via a matrix ansatz in \cite{DEHP}. Combinatorial interpretations of the steady-state weights of different configurations had been obtained even earlier in terms of pairs of paths (for $\alpha=\beta=1$) in \cite{Shapiro82}, and also in terms of weighted permutation tableaux \cite{Corteel07} and weighted binary trees \cite{Viennot07}. \begin{figure}[h] \epsfig{file=tet-f01.eps,width=10cm} \caption{TASEP on a chain containing $N=11$ sites. The chain is attached to a reservoir of particles at $j = 1$ which ``adds'' particles to the system with the constant rate $\alpha$ whenever this site is empty. The particles are removed from the system with the constant removal rate $\beta$ at the site $j = N$.} \label{f:03} \end{figure} In this paper we demonstrate that the generating function of the steady-state TASEP with open boundaries can be represented in terms of partition functions of a 1D hard-core lattice gas at a negative fugacity (i.e., at a purely imaginary chemical potential) and with one adsorbing lattice site. To show that, we exploit a bijection (first discussed in \cite{haug}) between the TASEP and the so-called ``heaps of pieces'' (HP) model \cite{viennot-rev}. Further on, we take advantage of a theorem, first established by X. Viennot in \cite{viennot-rev}, which links the HP model and a certain model of a lattice gas of hard-core objects. The paper is organized as follows. In Sec. \ref{a} we recall the matrix ansatz for the TASEP with open boundaries. In Sec.
\ref{b} we describe the HP model, present the definitions of the so-called Mikado ordering and of the \L{}ukasiewicz paths and establish a connection between the TASEP and the HP model via a direct enumeration of the Mikado orderings. Next, in Sec. \ref{c} we recall the Viennot theorem and eventually show that the generating function of the steady-state TASEP on a chain with open boundaries can be represented in terms of partition functions of a 1D hard-core lattice gas with a negative fugacity, one adsorbing site and a special kind of boundary conditions. Finally, in the Discussion we present a brief summary of the results and outline some open questions. In Appendix I we recall the approach to an enumeration of (1+1)D heaps based on the geometric group theory, and in Appendix II we outline the connection between \L{}ukasiewicz paths introduced in the main text, the Brownian excursions and the Young tableaux. \section{Matrix Ansatz for the TASEP on a chain with open boundaries} \label{a} We start by recalling the matrix ansatz for the steady state of the TASEP model on an $N$-site chain with constant entrance, $\alpha$, and exit, $\beta$, rates \cite{DEHP}. To this end, we first introduce two formal operators $D$ and $E$, which satisfy the relation \begin{equation} DE = D+E \,, \label{eq:10} \end{equation} and two vectors $\langle {\bf V}_{out}|$ and $|{\bf V}_{in}\rangle$, such that \begin{equation} D|{\bf V}_{in}\rangle = \beta^{-1}|{\bf V}_{in}\rangle; \quad \langle {\bf V}_{out}|E = \alpha^{-1} \langle {\bf V}_{out}|. \label{eq:11} \end{equation} Then, the probability of observing any given configuration in the steady state is proportional to a matrix element of the form $\langle {\bf V}_{out}|...|{\bf V}_{in}\rangle$, where in place of dots one should insert a sequence of $N$ operators $D$ and $E$, with $D$ and $E$ corresponding to occupied and empty sites, respectively. 
To write this down in more formal terms, introduce occupation numbers of the sites of a chain, $\sigma_i$ ($1\le i\le N$), such that $\sigma_i=1$ if the $i$-th site is occupied by a particle and $\sigma_i=0$ otherwise, and define the probability $P(\sigma|t)$ to have a set of occupation numbers, $\sigma = \{\sigma_1,\sigma_2,...,\sigma_N\}$, at time instant $t$. In the steady state, \begin{equation} \frac{d}{dt}P(\sigma|t)=0 \,. \label{eq:12} \end{equation} Dropping the argument $t$, one next writes the probability $P(\sigma)$ in the steady state as follows \begin{equation} P(\sigma)=\frac{1}{Z_N(\alpha,\beta)}f(\sigma), \label{eq:13} \end{equation} where the weight $f(\sigma)$ of the configuration $\{\sigma_1,\sigma_2,...,\sigma_N\}$ is \begin{equation} f(\sigma)=\left<{\bf V}_{out}\right|\prod_{i=1}^N\left(\sigma_i D+(1-\sigma_i)E \right) \left|{\bf V}_{in}\right>. \label{eq:14} \end{equation} For example, the weight $f(\sigma)$ of the configuration shown in \fig{f:03} is $f=\langle {\bf V}_{out}| E D E D D D E D E E D |{\bf V}_{in}\rangle$. The normalization factor $Z_N$, which is often called the (non-equilibrium) partition function, is given by \begin{equation} Z_N(\alpha,\beta) = \sum_{\sigma_1\in\{0,1\}}...\sum_{\sigma_N\in\{0,1\}} f(\sigma_1,\sigma_2,...,\sigma_N) = \left<{\bf V}_{out}\right|\left(D+E \right)^N \left|{\bf V}_{in}\right> . \label{eq:15} \end{equation} Except for some particular values of $\alpha$ and $\beta$, the algebra defined by \eq{eq:10} and \eq{eq:11} has no finite-dimensional representations. However, there exist many infinite-dimensional ones, among which the most interesting for us is the one constructed in the following way.
Take \begin{equation} \langle {\bf V}_{out}|= (1,\alpha^{-1},\alpha^{-2}, \alpha^{-3},\dots),\quad |{\bf V}_{in}\rangle=(1,0, 0, 0, \dots)^{\top} \label{boundaries} \end{equation} and choose the infinite-dimensional matrices $D$ and $E$ in the form \begin{equation} D = \left( \begin{array}{cccccccc} \frac{1}{\beta} & \frac{1}{\beta} & \frac{1}{\beta} & \frac{1}{\beta} & \frac{1}{\beta} & \frac{1}{\beta} & \ldots \\ 0 & 1 & 1 & 1 & 1 & 1 & \ldots \\ 0 & 0 & 1 & 1 & 1 & 1 & \ldots \\ 0 & 0 & 0 & 1 & 1 & 1 & \ldots \\ 0 & 0 & 0 & 0 & 1 & 1 & \ldots \\ 0 & 0 & 0 & 0 & 0 & 1 & \ldots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{array}\right); \quad E = \left( \begin{array}{cccccccc} 0 & 0 & 0 & 0 & 0 & 0 & \ldots \\ 1 & 0 & 0 & 0 & 0 & 0 & \ldots \\ 0 & 1 & 0 & 0 & 0 & 0 & \ldots \\ 0 & 0 & 1 & 0 & 0 & 0 & \ldots \\ 0 & 0 & 0 & 1 & 0 & 0 & \ldots \\ 0 & 0 & 0 & 0 & 1 & 0 & \ldots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{array}\right). \label{eq:17} \end{equation} Then it can be checked directly that both conditions \eq{eq:10} and \eq{eq:11} are fulfilled. In what follows we show that the partition function \eq{eq:15} can be interpreted as a result of a direct enumeration of weighted heaps of pieces in (1+1)D for a special choice of the weights of heaps. \section{Connection between the TASEP and the HP model} \label{b} \subsection{Definition of the HP model} A heap of pieces is a collection of elements which are piled together along the vertical axis. If two elements intersect or touch each other in their horizontal projections, then the resulting heap depends on the order in which these two were placed: the element placed second lies above the element placed first. Such rules resemble the famous {\it tetris} computer game, in which pieces of various shapes are dropped down along the vertical direction until they hit the already deposited elements.
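Returning for a moment to the explicit representation \eq{boundaries}--\eq{eq:17}: it is easy to check numerically with finite truncations of $D$ and $E$ (the snippet below is our own illustration). A truncation of size $M\ge N+2$ reproduces $Z_N$ of \eq{eq:15} exactly, since each application of $E$ shifts the support of $|{\bf V}_{in}\rangle$ down by one row while $D$ is upper triangular, so $(D+E)^N|{\bf V}_{in}\rangle$ is supported on the first $N+1$ components.

```python
import numpy as np
from math import comb

def DE_matrices(M, beta):
    """M x M truncations of the infinite matrices D and E of eq. (17)."""
    D = np.triu(np.ones((M, M)))     # ones on and above the diagonal
    D[0, :] = 1.0 / beta             # first row holds 1/beta
    E = np.diag(np.ones(M - 1), -1)  # ones on the first subdiagonal
    return D, E

def Z_tasep(N, alpha, beta):
    """Z_N(alpha, beta) = <V_out| (D + E)^N |V_in>, eq. (15)."""
    M = N + 2                        # large enough for the truncation to be exact
    D, E = DE_matrices(M, beta)
    V_in = np.zeros(M)
    V_in[0] = 1.0
    V_out = alpha ** (-np.arange(M, dtype=float))
    return V_out @ np.linalg.matrix_power(D + E, N) @ V_in
```

For $\alpha=\beta=1$ one finds $Z_1=2$, $Z_2=5$, $Z_3=14$, $Z_4=42$, i.e., the Catalan numbers $C_{N+1}$, while the relations \eq{eq:10}--\eq{eq:11} hold for the truncated matrices up to their last row and column.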
A heap has a base -- a set of all possible positions in the direction orthogonal to the vertical axis. Bases of various forms can be considered, including lattices in various dimensions and, more generally, arbitrary fixed graphs. In turn, the shapes of the pieces, as well as the rules of their interaction, can also vary. The concept of a heap of pieces was apparently first proposed in 1969 in the work of P. Cartier and D. Foata \cite{cartier}, in which they considered monoids generated by some alphabet with special commutation relations. A variety of models, as well as new combinatorial results and their links with statistical physics, were reviewed in \cite{viennot-rev}. The (1+1)D HP model on square and triangular lattices has been exhaustively studied in the literature and has played the role of a testing ground for several approaches -- from purely combinatorial \cite{viennot-rev, betrema, bousquet3}, to the ones based on the diagonalization of the spatial transfer matrix and the Bethe Ansatz computations \cite{hakim1,dhar1,dhar2, dhar}. Apart from the enumeration of growing heaps, some other problems in pure mathematics and in mathematical physics are connected to the HP model. For example, various aspects \cite{hakim1, viennot1, bousquet1, bousquet2} of the enumerative combinatorics of partitions are related to the growth of (1+1)D heaps. In \cite{vershik} the statistics of growing heaps has been linked to the statistics of two-dimensional growing braids; in \cite{anim-math} the general asymptotic theory of directed two-dimensional lattice paths in half-planes and quarter-planes has been reviewed. One of the main questions in the study of the HP problem is the analysis of the asymptotic behavior of the partition function \begin{equation} Z_N \sim N^{\theta}\Lambda^N, \label{Lambda-def} \end{equation} which enumerates all allowed distinct configurations of $N$-particle heaps ($N\to \infty$) over a given base graph.
In the case when the base is a $D$-dimensional lattice of linear extent $n$, the critical exponent, $\theta$, is universal and depends only on the space dimensionality, while $\Lambda$ depends on $n$, on the lattice geometry, on the shape of the pieces, and also on how the interactions between them are defined. Here we discuss heaps of {\it square} pieces which cannot touch each other by their {\it side} faces, while top and bottom faces are allowed to touch (see \fig{f:sample} for a typical configuration of such a heap). \begin{figure}[h] \epsfig{file=tet-f02.eps, width=4cm} \caption{A sketch of a particular configuration of a heap with $N=11$ pieces in a bounding box of size $n=5$.} \label{f:sample} \end{figure} One can imagine a heap of pieces such as the one in \fig{f:sample} as resulting from some deposition process, with pieces falling down from $y=+\infty$ until they reach the lowest possible position respecting the constraint that no pieces have common vertical faces. Numbers inside the falling blocks designate the sequential discrete moments of time at which the corresponding piece is added to the heap. However, there is an important distinction between the enumeration of configurations in the HP problem and the enumeration of different states in a sequence of falling blocks. In the HP problem, as described above, we are interested in the total number of possible configurations which respect the rules of heap formation (in this case -- the absence of touching side faces of the squares). Thus all allowed heaps are assigned equal weights. On the other hand, in a deposition problem, although the total set of allowed heaps is the same, there is no such equiprobability: some heaps are obtained more often than others. Let us therefore stress that in what follows we consider just the {\it combinatorial} HP problem rather than the dynamical deposition one.
There exists a connection (first revealed in \cite{haug}) between the partition function of the (1+1)D heap of square pieces with no touching vertical faces and the partition function of the steady state of the TASEP with open boundary conditions. We describe this connection in the subsequent parts of this section. \subsection{Mikado ordering and transfer matrix approach to the HP model} Let us outline the computation of the partition function $Z_N(n)$ of the (1+1)D heap of square pieces of the type shown in \fig{f:sample}. First, we introduce a unique enumeration of heaps; then we show that, given that enumeration, it is possible to write a transfer matrix equation for $Z_N(n)$. Finally, we notice that this equation resembles the one for the partition function of the TASEP with open boundaries. \begin{figure}[ht] \epsfig{file=tet-f03.eps, width=12cm} \caption{Particular realization of a heap. The heap in (a) is obtained by the sequential dropping of bricks and reads as the word $W_a=g_3g_1g_5g_1g_2g_4g_5g_2g_3g_4g_2g_1$; the same heap in (b) is obtained by the sequential dropping of bricks corresponding to another sequence $W_b=g_5g_3g_4g_5g_1g_1g_2g_2g_3g_2g_1g_4$; (c) the unique ``Mikado ordering'' of pieces, see the text for description.} \label{f:01} \end{figure} As we noticed above, each heap can be thought of as a result of some deposition process. However, as shown in \fig{f:01}a,b, different deposition sequences (depicted by numbers inside the pieces) can lead to the same geometric heap. It is thus essential to define a rule that allows one to enumerate the pieces of a heap in a unique way. To do that, note that each heap has at least one piece which satisfies the following two conditions: (i) if it is removed, the remaining part is itself a valid heap, and (ii) if it is redeposited (i.e., deposited from above into the same column), the original heap is recovered. We call the set of such ``allowed'' pieces the ``roof'' of the heap.
In order to enumerate the pieces in a unique way we proceed as follows. We fix the position of the {\it rightmost} element in the roof of the heap and remove this piece. The remaining heap has one piece fewer and an updated roof, so one can repeat the removal procedure until the heap is empty. As a result, e.g., for the heap shown in \fig{f:02}c, we get the following order of removed pieces \begin{equation} \overleftarrow{W}=g_4\,g_5\,g_1\,g_2\,g_3\,g_4\,g_5\,g_2\,g_2\,g_3\,g_1\,g_1, \label{eq:04} \end{equation} where we use the letters $g_i$ (``generators'', in the notation of Appendix 1, where the underlying group-theoretical construction is outlined) to denote pieces in the $i$-th column. We call such an enumeration procedure the {\it Mikado ordering} because it resembles the famous Mikado game, the goal of which consists in a sequential removal of sticks from a pile, one-by-one, without disturbing the rest of the pile. By construction, each heap has a unique Mikado ordering. Moreover, the inverse Mikado ordering \begin{equation} \overrightarrow{W}=g_1\,g_1\,g_3\,g_2\,g_2\,g_5\,g_4\,g_3\,g_2\,g_1\,g_5\,g_4, \label{eq:04a} \end{equation} corresponds to a specific sequence of deposition of pieces that results in the heap shown in \fig{f:01}c. This proves that each Mikado ordering produces a unique heap, i.e., there is a one-to-one correspondence between heaps and their Mikado orderings. So, given a particular configuration of a heap (no matter how it is created), we associate with it a unique sequence of letters constructed according to the Mikado rule. It is natural to represent the Mikado orderings by graphs as shown in \fig{f:02}, where the horizontal coordinate is the position of a piece in the Mikado ordering (ordered from right to left as in \eq{eq:04}) and the vertical coordinate is the coordinate of a piece (the index of the generator $g$). One can interpret such graphs as discrete-space walks on the interval $x=1,\dots,n$.
At each step the walker either moves up by an arbitrary number of units, or stays at the same position, or moves one unit down. Paths satisfying these conditions are known in the literature \cite{luk1,luk2} as the \L{}ukasiewicz paths. Clearly, there is a one-to-one correspondence between such paths and the Mikado-ordered HPs. Interestingly, there exists a mapping between the \L{}ukasiewicz paths, the standard Dyck paths (discrete one-dimensional directed walks for which only increments of $\pm 1$ are allowed) and the Young tableaux; we discuss this connection in Appendix 2. Now, it is possible to calculate the number of Mikado orderings (and thus, the total number of heaps) as follows. Let $Z_N(x,x_0|n)$ be the total number of heaps with the Mikado ordering of pieces starting with a piece positioned at $x$ and ending with a piece positioned at $x_0$ ($1\le x,x_0 \le n$). The function $Z_N(x,x_0|n)$ satisfies the recursion scheme of the form \begin{equation} \left\{\begin{array}{l} \disp Z_{N+1}(x,x_0|n) = \sum_{x'=1}^{x+1} Z_N(x',x_0|n),\;\; x=1,\dots,n; \medskip \\ Z_{N=0}(x,x_0|n) = \delta_{x,x_0} \end{array} \right. \label{eq:05} \end{equation} Indeed, the Mikado ordering dictates that at each step one takes the rightmost piece off the roof of the heap. Thus, if at sequential time moments the pieces are removed at positions $x$ and $x'$, respectively, then either $x-x' > 1$ (both pieces belong to the roof at the initial step and $x$ is to the right of $x'$, so it is removed first) or $|x-x'| \leq 1$ (the piece at position $x$ originally blocks the piece at position $x'$, but the latter gets released after the piece at $x$ is removed). It is easy to verify (see, e.g., \cite{haug}) that this constraint is sufficient, i.e., that any sequence of pieces respecting the rule $x_{i} \geq (x_{i-1} -1)$ for all $i=1,\dots,N$ can be obtained as a valid Mikado ordering (note, however, that a similar statement is not true for heaps of pieces in higher dimensions \cite{npt20}).
The allowed sequences of pieces are schematically depicted in \fig{f:02}c (see Appendix 1 for more details). \begin{figure}[ht] \epsfig{file=tet-f04.eps, width=16cm} \caption{(a) \L{}ukasiewicz path corresponding to the Mikado ordering shown in figure (c). (b) the allowed steps of the \L{}ukasiewicz walk: if $g_x$ is followed by $g_y$, $y$ cannot be larger than $x+1$.} \label{f:02} \end{figure} It is convenient to rewrite the recursion \eq{eq:05} in a matrix form as follows \begin{equation} Z_N(x,x_0|n) = \langle {\bf X}_{out}\,|\, T^N(n)\,|\, {\bf X}_{in} \rangle; \quad {\bf X}_{in}=(\overbrace{0,...,0,1}^{x_0},0,...,0)^{\top}, \; {\bf X}_{out}=(\overbrace{0,...,0,1}^{x},0,...,0), \label{eq:07} \end{equation} where the transfer matrix $T(n)$ reads \begin{equation} T(n)=\left( \begin{array}{cccccccc} 1 & 1 & 1 & 1 & \ldots & 1 \\ 1 & 1 & 1 & 1 & \ldots & 1 \\ 0 & 1 & 1 & 1 & \ldots & 1 \\ 0 & 0 & 1 & 1 & \ldots & 1 \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \ldots & 1 \end{array}\right); \label{eq:06} \end{equation} while the partition function enumerating all possible heaps is given by \begin{equation} Z_N(n) = \langle {\bf Y}_{out}\,|\, T^N(n)\,|\, {\bf Y}_{in} \rangle; \qquad {\bf Y}_{in}= (1,1,1,...,1,1)^{\top}, \quad {\bf Y}_{out} = (1,1,1,1,...,1). \label{eq:08} \end{equation} Thus, the growth rate $\Lambda(n)$ defined by \eq{Lambda-def} is determined by the largest eigenvalue of the transfer matrix \eq{eq:06}. The corresponding computation has been repeatedly discussed in the literature (see, e.g., \cite{vershik}), and $\Lambda(n)$ is given by \begin{equation} \Lambda(n) = 4\cos^2\frac{\pi}{n+2}\bigg|_{n\gg 1} \approx 4-\frac{4\pi^2}{n^2}. \label{eq:09} \end{equation} In particular, in a large bounding box of base $n\gg 1$ the growth rate saturates at the value $\lambda_{\infty} = \lim_{n\to\infty}\Lambda(n) = 4$. Now, for our purposes it is essential to notice a striking similarity between Eq. \eq{eq:15} and Eqs.
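The transfer-matrix representation \eq{eq:07}--\eq{eq:09} can be cross-checked numerically (an illustrative sketch of ours, not part of the derivation): the matrix elements of $T^N(n)$ must agree with a brute-force count of the sequences obeying the Mikado rule, and the largest eigenvalue of $T(n)$ gives the growth rate $\Lambda(n)$.

```python
import itertools
import numpy as np

def T_heap(n):
    """Transfer matrix of eq. (6): T_{ij} = 1 for j >= i - 1 (1-based indices)."""
    T = np.zeros((n, n))
    for i in range(n):
        T[i, max(i - 1, 0):] = 1.0
    return T

def Z_heap(N, n):
    """Eq. (8): sum of all matrix elements of T(n)^N."""
    return np.linalg.matrix_power(T_heap(n), N).sum()

def Z_heap_bruteforce(N, n):
    """Direct count of sequences (x_1, ..., x_{N+1}) in {1, ..., n}^{N+1}
    obeying the Mikado rule x_{i+1} >= x_i - 1."""
    return sum(
        1
        for seq in itertools.product(range(1, n + 1), repeat=N + 1)
        if all(b >= a - 1 for a, b in zip(seq, seq[1:]))
    )
```

For instance, $Z_2(3)=21$ by both methods, and the largest eigenvalue of $T(n)$ computed numerically equals $2$ for $n=2$ and $(3+\sqrt{5})/2$ for $n=3$, in agreement with $4\cos^2\frac{\pi}{n+2}$.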
\eq{eq:07} and \eq{eq:08}. Indeed, in the limit $n \to \infty$ the transfer matrix \eq{eq:06} coincides with the matrix $(D+E)$, given by \eq{eq:17} for the case of $\beta = 1$. Thus, for $n \to \infty$ one gets \begin{equation} Z_N^{\text{TASEP}} (\alpha, \beta = 1) = \alpha \lim_{n\to\infty} \sum_{y=1}^{n} \disp \alpha^{-y} Z_{N}^{\text{HP}}(x=1|y), \label{beta1} \end{equation} representing the partition function of the TASEP with open boundaries as a weighted sum over partition functions of heaps of pieces with the topmost piece at $x=1$ (such heaps are called ``pyramids'' in Viennot's notation \cite{viennot-rev}), and we took into account the particular forms of $|{\bf V}_{in}\rangle$, $\langle {\bf V}_{out}|$ given by \eq{boundaries} to arrive at the formula \eq{beta1}. \begin{figure}[ht] \epsfig{file=tet-f05.eps, width=12cm} \caption{Two examples of HP pyramids (right) and their corresponding \L{}ukasiewicz paths (left). Note that, according to the mapping introduced in the text, these two configurations correspond to the same TASEP configuration shown below. If the ``\L{}ukasiewicz walker'' touches the bottom line $x=1$, it gets the weight $\beta^{-1}$ and the very first step, $g_{x_0}$, carries the weight $\alpha^{1-x_0}$. The last step is always at the position $x_N=1$.} \label{f:05} \end{figure} \subsection{Weighted \L{}ukasiewicz paths and TASEP-HP analogy for $\beta \neq 1$} It is easy to generalize \eq{beta1} to the case of arbitrary $\beta$: one just needs to assign an additional weight $\beta^{-1}$ to a heap every time a piece with coordinate $x=1$ appears. One can rationalize this by considering an adsorbing vertical wall at $x=0$, so that the pieces in the leftmost column acquire an additional energy compared to the pieces in other columns. In the \L{}ukasiewicz path interpretation (see \fig{f:05}) it means that each path acquires the weight $\beta^{-1}$ every time it touches the horizontal axis.
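As an independent cross-check of this weighting (a sketch of ours, not part of the original derivation), one can evaluate the matrix element $\langle {\bf V}_{out}|T_{\beta}(n)^N|{\bf V}_{in}\rangle$ with exact rational arithmetic and compare it with the closed-form TASEP normalization of Ref.~\cite{DEHP}; for $n \geq N+2$ the matrix element no longer depends on $n$:

```python
from fractions import Fraction
from math import factorial

def Z_via_transfer_matrix(N, alpha, beta, n):
    # T_beta(n): first row filled with 1/beta; row x >= 2 has ones in
    # columns x-1, ..., n (cf. the matrix T_beta in the text)
    T = [[(Fraction(1) / beta if x == 1 else Fraction(1)) if y >= x - 1 else Fraction(0)
          for y in range(1, n + 1)] for x in range(1, n + 1)]
    v = [Fraction(1)] + [Fraction(0)] * (n - 1)          # V_in = (1, 0, ..., 0)
    for _ in range(N):
        v = [sum(T[i][j] * v[j] for j in range(n)) for i in range(n)]
    # V_out = (1, 1/alpha, 1/alpha^2, ...)
    return sum(alpha ** (-k) * v[k] for k in range(n))

def Z_dehp(N, alpha, beta):
    # Derrida-Evans-Hakim-Pasquier closed form for the TASEP normalization:
    # Z_N = sum_p p (2N-1-p)!/(N!(N-p)!) * sum_{k=0}^{p} alpha^{-k} beta^{-(p-k)}
    total = Fraction(0)
    for p in range(1, N + 1):
        comb = Fraction(p * factorial(2 * N - 1 - p), factorial(N) * factorial(N - p))
        total += comb * sum(alpha ** (-k) * beta ** (k - p) for k in range(p + 1))
    return total

a, b = Fraction(1, 3), Fraction(2, 5)
for N in range(1, 5):
    assert Z_via_transfer_matrix(N, a, b, N + 2) == Z_dehp(N, a, b)
```

The exact agreement for several $N$ illustrates that the heap weighting reproduces the open-boundary TASEP normalization.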
This problem can be reinterpreted as an adsorption of an ideal polymer at a point-like potential well \cite{grosberg-khokhlov} in 1D. Similar weighted sums over random walk trajectories arise in the context of wetting \cite{wetting}, or path-counting on regular graphs with a defect \cite{tnk, nvt17}. As a result, one gets the following mapping \begin{equation} Z_N^{\text{TASEP}} (\alpha, \beta) = \lim_{n\to\infty}\langle {\bf V}_{out}|T_{\beta}(n)^N|{\bf V}_{in} \rangle = \alpha \beta \sum_{\text{all~pyramids~of~size~$N$}} \alpha^{-y} \beta^{-(\#(x=1))}, \label{mapping_main} \end{equation} where the summation runs over all configurations of pyramids of $N$ pieces, $y$ is the coordinate of the last piece in the Mikado ordering (i.e. the leftmost piece in the lowest layer), and $\#(x=1)$ is the number of pieces with the coordinate $x=1$. For example, both pyramids shown in \fig{f:05} have weight $\alpha^{-1} \beta^{-1}$ because the coordinate of the leftmost piece in the lowest layer is $y=2$ and there are 2 pieces with coordinate equal to 1. Note that despite this mapping, {\it there is no one-to-one correspondence} between TASEP configurations and heap configurations. Indeed, while the matrix $E$ (corresponding to an empty site in the TASEP setting) can be identified with a descending step of the corresponding \L{}ukasiewicz path, the matrix $D$ corresponds to the summation over all permitted horizontal or ascending steps in the path. 
Thus, the weight of a given $N$-site TASEP configuration can be calculated as a weighted sum within the HP model according to the following rules: \begin{enumerate} \item[(a)] Summation over HP configurations runs over all possible Mikado-ordered sequences with $N+1$ pieces, in which the first piece is $g_1$, any sequence $g_i g_k$ with $k\le i$ corresponds to a particle, and a sequence $g_i g_{i+1}$ corresponds to a hole at the corresponding position of the TASEP configuration, \item[(b)] The first letter, $g_y$, in the normally ordered word carries a weight $\alpha^{-y}$, \item[(c)] Each generator $g_1$ carries a weight $\beta^{-1}$, \item[(d)] The weight of all other generators is $1$, \item[(e)] In order to obtain the standard form of the weight one should multiply the result by $\alpha \beta$. However, since all weights are defined up to a common multiplicative constant, this last step carries no additional meaning and is done only for the purposes of comparison with the conventional formulae \cite{DEHP}. \end{enumerate} \subsection{Generating function of the stationary TASEP via enumeration of weighted heaps} Given the specific form of the transfer matrix \begin{equation} T_{\beta}(n)=\left(\begin{array}{cccccc} \frac{1}{\beta} & \frac{1}{\beta} & \frac{1}{\beta} & \ldots & \frac{1}{\beta} & \frac{1}{\beta} \\ 1 & 1 & 1 & \ldots & 1 & 1 \\ 0 & 1 & 1 & \ldots & 1 & 1 \\ 0 & 0 & 1 & \ldots& 1 & 1 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & 1 & 1 \end{array}\right), \label{eq:19} \end{equation} it is possible to calculate the right-hand side of \eq{mapping_main} exactly. The result is, of course, known \cite{DEHP} but it is instructive: (i) to provide a calculation of the matrix element $\langle {\bf V}_{out}|T_{\beta}(n)^N|{\bf V}_{in} \rangle$ for arbitrary $n$ and (ii) to discuss the interpretation of the well-known stationary TASEP phases in terms of the HP model and the \L{}ukasiewicz paths.
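The rules above can be tested directly (our own sketch; we read the word in the direction suggested by the transfer matrix, so the walk starts at height $y_0$ with weight $\alpha^{1-y_0}$, ends at height $1$, a down-step encodes a hole, a stay- or up-step encodes a particle, and every visit of height $1$ before the final step contributes $\beta^{-1}$). The accumulated weights reproduce the matrix-product weights obtained from the DEHP algebra $DE = D+E$, $\langle W|E = \alpha^{-1}\langle W|$, $D|V\rangle = \beta^{-1}|V\rangle$ \cite{DEHP}:

```python
from fractions import Fraction
from itertools import product

def walk_weights(N, alpha, beta):
    # accumulate the weight of every TASEP configuration (tuple of 'D'/'E')
    weights = {}
    for y in product(range(1, N + 2), repeat=N + 1):
        if y[-1] != 1:
            continue
        config = []
        for a, b in zip(y, y[1:]):
            if b == a - 1:
                config.append('E')        # descending step  <-> hole
            elif b >= a:
                config.append('D')        # stay/ascending   <-> particle
            else:
                break                     # forbidden step (down by more than 1)
        else:
            w = alpha ** (1 - y[0])
            for h in y[:-1]:              # visits of height 1 before the last step
                if h == 1:
                    w = w / beta
            key = tuple(config)
            weights[key] = weights.get(key, Fraction(0)) + w
    return weights

def mp_weight(word, alpha, beta):
    # reduce <W| word |V> with DE = D + E until the word is normal-ordered E...E D...D
    for i in range(len(word) - 1):
        if word[i] == 'D' and word[i + 1] == 'E':
            return (mp_weight(word[:i] + ('D',) + word[i + 2:], alpha, beta)
                    + mp_weight(word[:i] + ('E',) + word[i + 2:], alpha, beta))
    return alpha ** (-word.count('E')) * beta ** (-word.count('D'))

a, b, N = Fraction(1, 2), Fraction(3, 4), 3
ww = walk_weights(N, a, b)
for word in product('DE', repeat=N):
    assert ww[word] == mp_weight(word, a, b)
```

Dividing the accumulated weight by $Z_N$ gives the stationary probability of the corresponding TASEP configuration.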
Consider the vector ${\bf Z}_N = (Z_N(1), Z_N(2),...,Z_N(n))^{\top}$ defined by the recurrence relation \begin{equation} {\bf Z}_{N+1} = T_{\beta}(n) {\bf Z}_N; \quad {\bf Z}_{N=0}={\bf V}_{in} =(1,0,...,0,0)^{\top} \label{eq:20} \end{equation} and introduce the generating function ${\bf W}(s) \equiv (W(s,1),W(s,2),\dots,W(s,n))^{\top} = \sum_{N=0}^{\infty} {\bf Z}_N s^N$. Then \begin{equation} \frac{1}{s}({\bf W}(s) - {\bf Z}_0) = T_{\beta}(n) {\bf W}(s);\;\;\; {\bf W}(s) = - \left(s T_{\beta}(n) - I \right)^{-1} {\bf Z}_0, \label{eq:21} \end{equation} where $I$ is the identity matrix. The elements of the vector ${\bf W}(s)$ can be obtained as \begin{equation} W(s,k) = \frac{\det B(k)}{\det (T_{\beta}(n) - \frac{1}{s} I)}=\frac{v_{n,k}}{u_n}, \label{eq:22} \end{equation} where the matrix $B(k)$ is obtained from $ (T_{\beta}(n) - \frac{1}{s} I)$ by replacing the $k$-th column with $(-1/s,0,...,0,0)^{\top}$: \begin{equation} B(k) = \left(\begin{array}{cccccc} 1/\beta -1/s & 1/\beta &\ldots &{\bf -1/s} &\ldots &1/\beta \medskip \\ 1 & 1 -1/s & \ldots& {\bf 0} & \ldots &1 \medskip \\ 0 & 1 & \ldots& {\bf 0} & \ldots & 1 \\ 0 & 0 & \ldots& {\bf 0} & \ldots & 1 \medskip \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \medskip \\ 0 & 0 & \ldots & {\bf 0} & \ldots & 1-1/s \end{array}\right), \label{eq:23} \end{equation} and $v_{n,k}$ and $u_n$ are short-hand notations for the numerator and denominator of \eq{eq:22}, respectively. The denominator $u_n$ satisfies, with respect to $n$, the following recurrence relation \begin{equation} \begin{cases} u_{n+2}=-\frac{1}{s}u_{n+1}-\frac{1}{s}u_n; \medskip \\ u_0=1; \medskip \\ u_1=\frac{1}{\beta}-\frac{1}{s}.
\end{cases} \label{eq:25} \end{equation} The solution of \eq{eq:25} has a form \begin{equation} u_n = C_1 p_1^n + C_2 p_2^n, \label{eq:26} \end{equation} where $p_1$ and $p_2$ are the roots of the quadratic equation \begin{equation} p^2 =-\frac{1}{s} p - \frac{1}{s} \label{eq:27} \end{equation} and $C_1$ and $C_2$ are determined from the initial conditions $u_0 = C_1 + C_2$, $u_1 = C_1 p_1 +C_2 p_2$. After some algebra one gets \begin{equation} u_n = u_n(s,\beta)= \frac{\frac{1}{\beta}-\frac{1}{s}-p_2}{p_1-p_2}\,p_1^n- \frac{\frac{1}{\beta}-\frac{1}{s}-p_1}{p_1-p_2}\,p_2^n =\frac{s}{\sqrt{1-4s}} \left(\left(p_1 + \frac{1}{\beta}\right) p_1^n - \left(p_2 + \frac{1}{\beta}\right) p_2^n \right), \label{eq:29} \end{equation} where \begin{equation} p_{1,2}=\frac{-1\pm\sqrt{1-4s}}{2s}. \label{p12} \end{equation} In turn, the determinants in the numerator of \eq{eq:22}, $v_{n,k}=\det B(k)$, can be expressed as: \begin{equation} v_{n,k}(s)= \frac{(-1)^k}{s} u_{n-k}(s,\beta=1). \label{eq:31} \end{equation} Introduce now a generating function \begin{equation} \Xi_n (s,\alpha, \beta) = \sum_{N=0}^{\infty} \langle {\bf V}_{out}|T_{\beta}(n)^N|{\bf V}_{in} \rangle s^N = \sum_{k=1}^n W(s,k) \alpha^{-k+1}, \label{Xi} \end{equation} and substitute eqs. \eq{eq:22}, \eq{eq:29} and \eq{eq:31} into $\Xi_n (s,\alpha, \beta)$ to get \begin{equation} \Xi_n (s,\alpha, \beta)= -\frac{1}{s} \frac{(p_1+1)\dfrac{p_1^n - (-\alpha)^{-n}}{p_1+\alpha^{-1}}-(p_2+1)\dfrac{p_2^n - (-\alpha)^{-n}}{p_2+\alpha^{-1}}}{(p_1+\beta^{-1})p_1^n - (p_2+\beta^{-1})p_2^n}. 
\label{Xi_n} \end{equation} In the vicinity of $s=0$ \eq{Xi_n} has a well-defined limit for $n \to \infty$ \begin{equation} \begin{array}{rll} \Xi (s,\alpha, \beta) &= &\displaystyle \sum_{N=0}^{\infty} Z_N^{\text{TASEP}} (\alpha, \beta) s^N = \lim_{n \to \infty} \Xi_n (s,\alpha, \beta)= -\frac{1}{s}\frac{p_2+1}{(p_2+\alpha^{-1})(p_2+\beta^{-1})} = \medskip \\ &= & \displaystyle 2\frac{\sqrt{1-4s}+1-2s}{\left(\sqrt{1-4s}+1-2s\alpha^{-1}\right)\left(\sqrt{1-4s}+1-2s\beta^{-1}\right)}, \end{array} \label{TASEP_Xi} \end{equation} which generates the partition functions of the stationary TASEP. Note that the $\alpha \leftrightarrow \beta$ symmetry arises in the $n\to \infty$ limit \eq{TASEP_Xi}, while the expression \eq{Xi_n} does not have this symmetry for any finite $n$ (indeed, it is a polynomial in $\alpha^{-1}$ but an infinite series in $\beta^{-1}$). The large-$N$ behavior of the partition function $Z_N^{\text{TASEP}} (\alpha, \beta)$, and, in particular, the stationary flow, \begin{equation} I = \lim_{N \to \infty} \frac{\langle {\bf V}_{out}|T^{N-1}|{\bf V}_{in}\rangle }{\langle {\bf V}_{out}|T^N|{\bf V}_{in}\rangle } = \lim_{N \to \infty} \frac{Z_{N-1} (\alpha, \beta)}{Z_N (\alpha, \beta)}, \end{equation} is controlled by the smallest (in terms of absolute value) singularity of $\Xi (s,\alpha, \beta)$. Depending on the particular values of $\alpha$ and $\beta$ it could be: \begin{itemize} \item[(i)] the square-root singularity, $I^* = s_1 = 1/4$, corresponding to the maximal flow phase, \item[(ii)] the pole $I^{**} = s_2 (\beta)= \beta (1-\beta)$ corresponding to the high density phase, \item[(iii)] the pole $I^{***} = s_3 (\alpha)= \alpha(1-\alpha)$ corresponding to the low density phase.
\end{itemize} The transition between these phases occurs at \begin{equation} \begin{array}{rcl} s_1 = s_2 (\beta) & \rightarrow & \beta = 1/2; \medskip \\ s_1 = s_3 (\alpha) & \rightarrow & \alpha = 1/2; \medskip \\ s_2 (\beta) = s_3(\alpha) & \rightarrow & \beta = \alpha; \end{array} \label{eq:derrida} \end{equation} in full agreement with \cite{DEHP}. It is instructive to discuss the interpretation of the TASEP phase transitions \eq{eq:derrida} in terms of the \L{}ukasiewicz paths. The three phases of the stationary TASEP described above (maximal flow, high density and low density) correspond to situations in which typical \L{}ukasiewicz paths are: (i) freely diffusing, (ii) pinned to the adsorbing wall, and (iii) fully elongated, respectively. The transition between the diffusive and pinned states is indeed known to occur at a pinning weight $\beta^{-1} = 2$ (see, e.g., \cite{tnk}). In \cite{krug94} it was shown that a path confined between two adsorbing walls with pinning weights $\beta^{-1}, \alpha^{-1}$ is analogous to the TASEP with open boundaries. Here, instead of adsorption to the second wall we have an elongated phase of \L{}ukasiewicz paths, which can be thought of as a result of the paths' stretching in an external field acting on the first link of the path. The transition between this force-induced phase and the adsorbed phase resembles to some extent the unzipping of DNA under external force \cite{DNA1, DNA2}. The TASEP -- \L{}ukasiewicz paths correspondence also elucidates the $\alpha \leftrightarrow \beta$ symmetry, i.e., the symmetry between the attractive field $U(x) = \delta_{1,x}\log \beta$, acting on all links of the \L{}ukasiewicz paths at a single point $x=1$, and the repulsive field $V(x) = x \log \alpha$ acting only on the end link of the \L{}ukasiewicz path but at any $x$. To the best of our knowledge, this rather nontrivial symmetry has never been discussed before.
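The three regimes are easy to observe numerically (a floating-point sketch of ours): for $n > N$ the matrix element gives the exact $Z_N$, and the ratio $Z_{N-1}/Z_N$ converges to the singularity governing the corresponding phase; the $\alpha \leftrightarrow \beta$ symmetry of the $n\to\infty$ regime is visible as well:

```python
def Z_sequence(Nmax, alpha, beta, n):
    # Z_N = <V_out| T_beta(n)^N |V_in> in floating point, N = 0..Nmax
    T = [[(1.0 / beta if x == 1 else 1.0) if y >= x - 1 else 0.0
          for y in range(1, n + 1)] for x in range(1, n + 1)]
    v = [1.0] + [0.0] * (n - 1)
    out = [alpha ** (-k) for k in range(n)]
    Z = [1.0]
    for _ in range(Nmax):
        v = [sum(T[i][j] * v[j] for j in range(n)) for i in range(n)]
        Z.append(sum(o * w for o, w in zip(out, v)))
    return Z

N = 60
for alpha, beta, flow in [(0.2, 0.6, 0.2 * 0.8),   # low density:  I -> alpha(1-alpha)
                          (0.6, 0.2, 0.2 * 0.8),   # high density: I -> beta(1-beta)
                          (0.9, 0.9, 0.25)]:       # maximal flow: I -> 1/4
    Z = Z_sequence(N, alpha, beta, N + 2)
    ratio = Z[N - 1] / Z[N]
    tol = 0.02 if flow == 0.25 else 1e-6           # 1/N corrections at the branch point
    assert abs(ratio - flow) < tol

# the alpha <-> beta symmetry of Z_N in the n -> infinity regime (n > N)
Zab = Z_sequence(20, 0.3, 0.7, 25)[-1]
Zba = Z_sequence(20, 0.7, 0.3, 25)[-1]
assert abs(Zab - Zba) < 1e-9 * abs(Zab)
```

The slower convergence in the maximal flow case reflects the power-law corrections characteristic of a square-root singularity, as opposed to the geometric convergence at a pole.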
\section{TASEP and HP from the underlying lattice gas} \label{c} \subsection{Viennot theorem} In this section we explain how the HP problem and the steady-state TASEP are connected (by virtue of the mapping described in the previous section) with the partition function of a one-dimensional gas of particles with hard-core interactions. This connection is based on a theorem first proved in \cite{viennot-rev}, which links the generating function of a heap of pieces with the generating function of a single layer of the heap. We start by stating the general formulation of the theorem, and then apply it to the particular case of the heap of square pieces with no common vertical sides. Assume that $Z_N$ is the partition function of a heap constructed over some given graph $\cal G$ as a base, where the vertices of the graph $\cal G$ designate possible locations of the elementary pieces, and the edges of the graph connect the pairs of vertices which cannot be simultaneously occupied in a single layer (in our particular case the graph is just a chain of $n$ vertices). Let $\Xi (s)$ be the corresponding generating function (grand canonical partition function): \begin{equation} \Xi(s) = \sum_{N=0}^{\infty} Z_N s^N \equiv \sum_{\text{allowed configurations}} s^{\# \text{~of pieces}}, \label{eq:34} \end{equation} where $Z_0 = 1$ accounts for the empty heap. Define also the partition function $\Theta(k)$ of all possible distinct configurations of $k$ elementary pieces in a single layer, i.e. all possible subsets of $k$ vertices of $\cal G$, such that no edge has both of its ends included in the subset, and the corresponding generating function \begin{equation} \Omega(s) = 1 + \sum_{k=1}^{k_{\max}} \Theta(k) s^k, \label{eq:35} \end{equation} where $k_{\max}$ is the maximal possible size of such a subset. In this formulation $\Omega(s)$ is the partition function of a hard-core lattice gas on $\cal G$ with fugacity $s$.
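For the chain of $n$ vertices relevant here, $\Omega(s)$ is the independence polynomial of the path graph. A quick enumeration (our own sketch) confirms the standard counts $\Theta(k) = \binom{n-k+1}{k}$ and the Fibonacci value $\Omega(1) = F_{n+2}$:

```python
from itertools import combinations
from math import comb

def theta(n, k):
    # number of k-element subsets of the n-vertex chain with no two
    # adjacent vertices occupied (hard-core constraint)
    return sum(1 for sub in combinations(range(n), k)
               if all(b - a > 1 for a, b in zip(sub, sub[1:])))

n = 7
coeffs = [theta(n, k) for k in range(n // 2 + 2)]
assert coeffs == [comb(n - k + 1, k) for k in range(n // 2 + 2)]

# Omega(1) = total number of hard-core configurations = Fibonacci F_{n+2}
fib = [1, 1]
while len(fib) < n + 2:
    fib.append(fib[-1] + fib[-2])
assert sum(coeffs) == fib[n + 1]
```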
\begin{figure}[ht] \epsfig{file=tet-f06.eps, width=12cm} \caption{Sample configurations of particles with hard-core interactions on a segment ($D=1$) (a) and on a square lattice ($D=2$) (b). Particles are denoted by filled elementary units (small circles for chain, squares for two-dimensional lattice), crosses mark positions which are forbidden for the particles.} \label{f:06} \end{figure} Then the theorem \cite{viennot-rev} states that \begin{equation} \Xi(s) = \frac{1}{\Omega(-s)}. \label{eq:37} \end{equation} For completeness, we present here a sketch of the proof. Consider the product $\Xi(s) \Omega(t)$, which enumerates configurations in the direct product of: \begin{enumerate} \item[(a)] The set of {\it all possible heaps}, enumeration of whose pieces is generated by $s$, \item[(b)] The set of {\it all possible single layers}, enumeration of whose pieces is generated by $t$. \end{enumerate} For brevity, call the first set ``a heap of $s$-pieces'', and the second set -- ``a layer of $t$-pieces''. Consider now an element of the direct product (i.e. a pair of a heap and a layer), and put the heap on top of the layer, i.e. put the layer at the bottom floor, so that all $t$-pieces have vertical coordinate 0, and then put the heap on top of it (i.e. shift all vertical coordinates of the elements of the heap by 1). The resulting configuration is, generally speaking, not a heap of pieces itself: it is possible that some pieces of the $s$-heap are not supported from below by the elements of the $t$-layer. If it is the case, we allow such pieces to fall down to the underlying layer until no further rearrangements are possible. 
The generating function of all resulting structures can be written in the following way \begin{equation} \Xi(s) \Omega(t) = \sum_{\alpha} t^{n_\alpha} \sum_{\beta}s^{n_\beta} F_{\alpha,\beta}(s), \label{eq:38} \end{equation} where $\alpha$ and $\beta$ enumerate all possible configurations of $t$- and $s$-pieces in the lowest layer (the $t$-pieces form the original layer configuration, while the $s$-pieces are those that have fallen down from the heap), $n_{\alpha,\beta}$ are the respective numbers of pieces in the lowest layer, and $F_{\alpha,\beta}(s)$ is the generating function of all heaps that can be placed on top of a fixed lowest layer configuration. Now, the crucial idea is that $F_{\alpha,\beta}(s)$ is a function of only the {\it total} configuration of the lowest layer, $\alpha \cup \beta$, and not of the way the pieces are separated into $s$-type and $t$-type. Therefore, \begin{equation} \Xi(s) \Omega(t) = \sum_{\alpha\cup\beta} F_{\alpha \cup \beta}(s) \sum_{\alpha} t^{n_\alpha} s^{n_\beta} = \sum_{\alpha\cup\beta} F_{\alpha \cup \beta}(s) (s+t)^{n_{\alpha \cup \beta}}, \label{eq:39} \end{equation} where the first sum runs over all possible configurations of the lowest layer, and the second -- over all possible separations of the lowest-level pieces into $s$- and $t$-types. The last equation reflects the fact that each piece can be assigned to either $s$- or $t$-type independently of the others. Note now that \eq{eq:39} is radically simplified for $t = -s$. Indeed, only the term with $n_{\alpha \cup \beta}=0$ (i.e., the term corresponding to an empty layer and an empty heap) survives, and therefore \begin{equation} \Xi (s) \Omega(-s) = 1, \label{eq:40} \end{equation} completing the proof of Viennot's theorem.
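The identity \eq{eq:40} can be checked by direct enumeration (our own sketch): for the $n$-column chain we list heaps through their Mikado orderings, build $\Omega(s)$ from the hard-core configurations, and verify that all coefficients of $\Xi(s)\,\Omega(-s)$ beyond the constant term vanish:

```python
from itertools import product, combinations

def heap_numbers(n, Nmax):
    # number of heaps of N pieces on n columns; heaps <-> Mikado sequences
    # x_1, ..., x_N with x_{i+1} >= x_i - 1
    Z = [1]                                 # N = 0: the empty heap
    for N in range(1, Nmax + 1):
        Z.append(sum(1 for seq in product(range(1, n + 1), repeat=N)
                     if all(b >= a - 1 for a, b in zip(seq, seq[1:]))))
    return Z

def omega_poly(n):
    # independence polynomial of the n-vertex chain (coefficients in s)
    om, k = [], 0
    while True:
        c = sum(1 for sub in combinations(range(n), k)
                if all(b - a > 1 for a, b in zip(sub, sub[1:])))
        if c == 0:
            return om
        om.append(c)
        k += 1

n, K = 3, 6
Z = heap_numbers(n, K)
om = omega_poly(n)                          # [1, 3, 1] for n = 3
# coefficients of Xi(s) * Omega(-s) up to order K
conv = [sum((-1) ** j * om[j] * Z[k - j] for j in range(len(om)) if k - j >= 0)
        for k in range(K + 1)]
assert conv == [1] + [0] * K
```

The same check goes through for any $n$, which is a convenient sanity test of any heap-counting code.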
The function $Z_N$ can be obtained from $\Xi(s)$ in a standard way \begin{equation} Z_N = \frac{1}{2\pi i} \oint \frac{\Xi(s)}{s^{N+1}}ds \label{eq:41} \end{equation} and, therefore, the growth rate, i.e., the leading large-$N$ asymptotics of the partition function \eq{Lambda-def}, is controlled by the singularity of $\Xi(s)$ with the smallest absolute value. Taking into account \eq{eq:37} this means that \begin{equation} \Lambda = -s_*^{-1}, \label{eq:42} \end{equation} where $s_*$ is the negative number smallest in absolute value among all zeros and non-pole singularities of the generating polynomial $\Omega(s)$. By virtue of \eq{eq:40}, the combinatorics of (D+1)-dimensional HPs can be reformulated as a problem of calculating the grand canonical partition function of a D-dimensional ``hard-square lattice gas'', which in turn can be thought of as a D-dimensional Ising model with a finite magnetic field in the limit of strong antiferromagnetic coupling. The negative Yang-Lee zero closest to the origin is associated with a point where the thermodynamic functions of a hard-core gas in the thermodynamic limit are known to exhibit a ``non-physical'' singularity on the negative real fugacity axis \cite{Groeneveld62,Gaunt65,Gaunt69,Assis13}. This point is sometimes called ``the Lee-Yang critical point'' \cite{Bouttier02}. This is a remarkably general feature of systems with repulsive interactions, which have pressure-function singularities for complex values of the chemical potential (see, e.g., \cite{Taradiy19}). It was argued that systems with repulsive interactions possess universal properties associated with the dominant singularity of the Mayer fugacity series \cite{Poland84,Baram87}. Subsequently, it was shown that this singularity can indeed be identified with the Yang-Lee edge singularity \cite{Lai95,Todo99}.
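As a minimal example of \eq{eq:42} (a sketch of ours): for $n=4$ the gas polynomial is $\Omega_4(s) = 1+4s+3s^2$ with zeros $-1/3$ and $-1$, so $s_* = -1/3$ and $\Lambda = 3$, which coincides with the Perron eigenvalue of the transfer matrix $T(4)$:

```python
from math import cos, pi, sqrt

# Omega_4(s) = 1 + 4s + 3s^2; its zeros are -1/3 and -1
s_star = (-4 + sqrt(16 - 12)) / 6          # zero closest to the origin

# Perron eigenvalue of T(4) (entries T[x][y] = 1 iff y >= x - 1) by power iteration
T = [[1.0 if y >= x - 1 else 0.0 for y in range(1, 5)] for x in range(1, 5)]
v = [1.0] * 4
lam = 0.0
for _ in range(300):
    w = [sum(T[i][j] * v[j] for j in range(4)) for i in range(4)]
    lam = max(w)                           # eigenvalue estimate
    v = [x / lam for x in w]               # renormalize to avoid overflow

assert abs(s_star + 1.0 / 3.0) < 1e-12
assert abs(lam + 1.0 / s_star) < 1e-9      # Lambda = -1/s_*
assert abs(lam - 4.0 * cos(pi / 6.0) ** 2) < 1e-9
```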
\subsection{Generating function of a 1D hard-core lattice gas with an adsorbing site} The Viennot theorem, as described above, is formulated for heaps with identical layers and is directly applicable to unweighted and unrestricted heaps. In the terminology of section III the statement of the theorem can be written as follows \begin{equation} \Xi_n^\text{HP} (s)= \sum_{N=0}^{\infty} Z_{N,n}^{\text{HP}} (\beta) s^N = \Omega_n^{-1}(-s,\beta), \label{viennotHP} \end{equation} where $Z_{N,n}^{\text{HP}}(\beta)$ is the total partition function of all heaps in the $n\times \infty$ box with an adsorbing left wall, \begin{equation} Z_{N,n}^{\text{HP}}(\beta)=\langle 1,1,\dots,1|T_{\beta}^N(n)| 1,1,\dots,1\rangle, \label{ZHP} \end{equation} $T_{\beta}(n)$ is given by \eq{eq:19}, and $\Omega_n(s)$ is the grand partition function of the corresponding one-layer problem, i.e., a 1D lattice gas with hard-core interactions (two pieces cannot occupy adjacent sites) and statistical weight $\beta^{-1}$ associated with the leftmost site ($n$ here is the number of accessible lattice sites). For $\beta =1$ this partition function obeys the equation \begin{equation} \Omega_{n+2}(s,1) = \Omega_{n+1}(s,1) + s \Omega_{n}(s,1),\,\, \Omega_0 =1,\,\, \Omega_1= 1+s. \label{eq:Q} \end{equation} Solving \eq{eq:Q} similarly to \eq{eq:25}, we get \begin{equation} \Omega_n(s,1)=\frac{1+s-q_2}{q_1-q_2} q_1^{n}-\frac{1+s-q_1}{q_1-q_2} q_2^{n}; \qquad q_{1,2}=\frac{1\pm \sqrt{1+4s}}{2}. \label{eq:Zn} \end{equation} In the general case $\Omega_n(s,\beta)$ satisfies the recursion \begin{equation} \Omega_n(s,\beta) = \Omega_{n-1}(s,1) + \frac{s}{\beta} \Omega_{n-2}(s,1).
\label{Zbeta} \end{equation} Substituting \eq{eq:Zn} into \eq{Zbeta} and collecting the terms leads to the following expression for $\Omega_n(s,\beta)$: \begin{equation} \Omega_n(s,\beta) = \frac{1}{\sqrt{1+4s}}\left(\left(q_1+\frac{s}{\beta}\right) q_1^n - \left(q_2+\frac{s}{\beta}\right) q_2^n\right); \qquad q_{1,2} = \frac{1\pm \sqrt{1+4s}}{2}. \label{Zbeta2} \end{equation} Together with \eq{viennotHP} this allows one to recover the grand partition function of a heap of pieces \begin{equation} \Xi_n^\text{HP}(s,\beta) = \Omega_n^{-1}(-s,\beta). \label{HP_Viennot} \end{equation} Note that despite the formal presence of square roots in \eq{Zbeta2}, $\Omega_n(s,\beta)$ is a polynomial of order $\lfloor(n+1)/2\rfloor$ in $s$, and thus the growth rate of the HP is controlled by its largest negative zero. Hence the growth rate of a HP, given by \eq{eq:09} in the case of $\beta = 1$, is governed by the Lee-Yang zero of the partition function of the corresponding one-dimensional gas. To check this correspondence, we invite the reader to re-derive \eq{eq:09} directly from \eq{Zbeta2}. Now, the mapping described in section III links the weights of TASEP configurations in the stationary state with the enumeration of {\it weighted pyramids} in the HP model. Weighted pyramids are not heaps of identical layers, and thus the Viennot theorem is not directly applicable to them. However, the partition function of pyramids $\Xi_n (s,\alpha, \beta)$, given by \eq{Xi}, which converges to the generating function of the stationary TASEP in the large-$n$ limit, is very similar to the partition function of all HP configurations \eq{viennotHP}--\eq{ZHP}: essentially, they correspond to different matrix elements of the same matrix $\left(s T_{\beta}(n) - I \right)^{-1}$ (see \eq{eq:21}). It is therefore not surprising that the determinant of the matrix $s T_{\beta}(n) - I$ is \begin{equation} \det\left (s T_{\beta}(n) - I \right) = s^n u_n(s,\beta) = (-1)^n\,\Omega_n(-s,\beta).
\label{viennot1} \end{equation} Recall that the partition function, $W_n(s,k)$, of all the pyramids which have the last piece at position $k$, is a quotient of such determinants (see \eq{eq:22}, \eq{eq:31}) \begin{equation} W_n(s,k)=\frac{v_{n,k}}{u_n} = \frac{(-1)^k}{s} \frac{u_{n-k}(\beta = 1)}{u_n(\beta)} = s^{k-1} \frac{ \Omega_{n-k}(-s,1)}{ \Omega_n(-s,\beta)}. \end{equation} Thus, the partition function of weighted pyramids \eq{Xi_n} can be written in terms of the partition function of the 1D lattice gas with hard-core interactions $\Omega_n(-s,\beta)$. Indeed, \begin{equation} \Xi_n (s,\alpha, \beta) = \left(\frac{s}{\alpha}\right)^{n-1} \frac{1}{\Omega_n(-s,\beta)} \sum_{k=1}^n \left(\frac{\alpha}{s}\right)^{n-k} \Omega_{n-k}(-s,1) = \left(\frac{s}{\alpha}\right)^{n-1} \frac{\tilde{\Omega}_n(-s,\alpha/s)}{\Omega_n(-s,\beta)}, \label{eq:Xi} \end{equation} where we introduced the partial generating function \begin{equation} \tilde{\Omega}_n(s,t) = \sum_{m=0}^{n-1} \Omega_{m}(s,1) t^m. \label{eq:Omega} \end{equation} Note that $\tilde{\Omega}_n(-s,\alpha/s)$ is, with respect to $1/s$, a polynomial of power $n-1$, so $\Xi_n (s,\alpha, \beta)$ converges to a finite value for $s \to 0$. It is possible to take the large-$n$ limit of this expression explicitly and get back to the formula \eq{TASEP_Xi} for the grand partition function of the stationary TASEP, $\Xi (s,\alpha, \beta) = \lim_{n \to \infty} \Xi_n (s,\alpha, \beta)$ (note once again that the $\alpha \leftrightarrow \beta$ symmetry appears only in the limiting formula). This establishes the desired connection between the TASEP problem with open boundary conditions and the partition function of a 1D hard-core lattice gas on a strip with an adsorbing boundary.
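The one-layer partition functions entering these formulae are easy to cross-check numerically (our own sketch): enumerate all hard-core configurations on $n$ sites, weighting a configuration by $s$ per particle and by an extra $\beta^{-1}$ if the leftmost site is occupied, and compare with the recursions \eq{eq:Q} and \eq{Zbeta}:

```python
from fractions import Fraction
from itertools import combinations

def omega_bruteforce(n, s, beta):
    # sum over hard-core configurations: weight s^(#particles), extra 1/beta
    # if site 1 is occupied
    total = Fraction(0)
    for k in range(n + 1):
        for sub in combinations(range(1, n + 1), k):
            if all(b - a > 1 for a, b in zip(sub, sub[1:])):
                w = s ** k
                if sub and sub[0] == 1:
                    w = w / beta
                total += w
    return total

def omega_recursion(n, s, beta):
    # Omega_{m+2}(s,1) = Omega_{m+1}(s,1) + s Omega_m(s,1), Omega_0 = 1, Omega_1 = 1+s;
    # Omega_n(s,beta) = Omega_{n-1}(s,1) + (s/beta) Omega_{n-2}(s,1), with Omega_{-1} := 1
    om = [Fraction(1), 1 + s]
    while len(om) < n + 1:
        om.append(om[-1] + s * om[-2])
    if n == 0:
        return Fraction(1)
    return om[n - 1] + (s / beta) * (om[n - 2] if n >= 2 else Fraction(1))

for n in range(1, 7):
    assert omega_bruteforce(n, Fraction(2, 7), Fraction(3, 5)) == \
           omega_recursion(n, Fraction(2, 7), Fraction(3, 5))
```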
\section{Discussion} \label{d} In this paper we studied the multiple connections among basic classical models of statistical physics: (i) the 1D lattice gas with hard-core interactions, (ii) the 1D TASEP with open boundary conditions, (iii) the enumeration of (1+1)D heaps of square pieces with hard-core repulsion in the horizontal direction, and (iv) an ideal (1+1)D polymer chain represented by a \L{}ukasiewicz path. By exploiting various mappings between these problems, and the X. Viennot theorem connecting the partition function of a heap of pieces with that of a single layer of pieces, we were eventually able to show that the partition function of the steady-state TASEP with open boundary conditions can be expressed in terms of a quotient of partition functions \eq{eq:Xi}--\eq{eq:Omega} of a one-dimensional hard-core lattice gas with an adsorbing site at the boundary and negative fugacity. Although all the individual mappings used here were already present in the literature, this final result has not, to the best of our knowledge, been reported before. It provides, in our opinion, an important advance in connecting the considered statistical systems. Another interesting and previously unknown observation is the connection between the three phases in the steady-state TASEP with open boundary conditions and the three states of an ideal polymer chain on a half-line with an adsorbing wall and an external field acting on the end link. The latter connection highlights a non-trivial hidden symmetry between an adsorbing potential acting on all the links in the vicinity of the wall and a repulsive field, which is independent of the distance to the wall but acts only on the end monomer. Notably, Viennot's theorem can be exploited further to establish connections between the Yang-Lee zeros of the D-dimensional lattice gas with excluded volume interactions and the enumeration of a (D+1)-dimensional heap of pieces (see \cite{npt20}).
This is a nice example of a problem in which Yang-Lee zeros have a direct physical meaning. \begin{acknowledgments} We are grateful to S. Redner and D. Dhar for many illuminating discussions. The work of S.N. is supported by the BASIS Foundation in frameworks of the grant 19-1-1-48-1. \end{acknowledgments}
\section{Introduction\label{sec:intro}} Gluons are ubiquitous at the LHC, and gluon fusion is among the phenomenologically most interesting production mechanisms. Specifically, the production of final states including one or more Higgs bosons is typically dominated by gluon fusion, with a virtual top-quark loop mediating the interaction to the Higgs bosons. Precise predictions for such processes are indispensable for measuring the properties of the Higgs boson. On the one hand, gluon fusion processes experience large $K$-factors.\footnote{ See~\cite{Ahrens:2008qu,Ahrens:2008nc,Ebert:2017uel} for a discussion of ``timelike'' logarithms in gluon fusion and their resummation which reduces the size of perturbative corrections significantly.} Examples include a $K$-factor of $2.3$ for single Higgs and $1.7$ for Higgs pair production at next-to-leading order (NLO)~\cite{Spira:1995rr,Harlander:2005rq,Anastasiou:2006hc,Aglietti:2006tp,Anastasiou:2016cez,Borowka:2016ehy,Borowka:2016ypz} which clearly demonstrates the importance of taking higher-order corrections into account. On the other hand, calculating these higher-order corrections is extremely challenging. Gluon fusion is a loop-induced process, and the top-quark mass introduces an additional scale in the loop integrals. While the NLO corrections to single-Higgs production have been known analytically for some time~\cite{Spira:1995rr,Harlander:2005rq,Anastasiou:2006hc,Aglietti:2006tp}, the calculation of NLO corrections to processes with more than one final-state particle is still the subject of ongoing work. For di-Higgs production, which requires the evaluation of two-loop integrals with four scales, numerical results have only become available recently~\cite{Borowka:2016ehy,Borowka:2016ypz}. To make higher-order computations feasible, an effective field theory (EFT), where the top quark has been integrated out in the limit of an infinite top-quark mass, $m_t \to \infty$, has been used extensively in the literature.
In this approximation, results are available at NNNLO for single Higgs production~\cite{Anastasiou:2015ema,Anastasiou:2016cez} and at NNLO for Higgs pair production~\cite{deFlorian:2013jea,Grigo:2014jma}, and for other gluon fusion processes, e.g.~$gg \to ZZ$, $gg\to H j$ at NNLO~\cite{Boughezal:2013uia,Chen:2014gva,Boughezal:2015dra,Boughezal:2015aha} and $gg\to HZ$. Beyond the infinite top mass limit, several results have also been obtained in the large-$m_t$ expansion (LME) for a number of processes listed below: \begin{itemize} \item $gg\to H$: up to $1/m_t^6$ at NNLO~\cite{Harlander:2009bw,Pak:2009bx,Harlander:2009mq,Pak:2009dg,Harlander:2009my}, including $gg\to Hg$ at NLO \item $gg\to HH$: up to $1/m_t^{12}$ in \cite{Grigo:2015dia} and $1/m_t^8$ in \cite{Degrassi:2016vss} at NLO; up to $1/m_t^4$ at NNLO~\cite{Grigo:2015dia} \item $gg\to HZ$: up to $1/m_t^8$ \cite{Hasselhuhn:2016rqt} at NLO \item $gg\to ZZ$: up to $1/m_t^{12}$ in \cite{Campbell:2016ivq} and $1/m_t^8$ in \cite{Caola:2016trd} at NLO \end{itemize} The expansions can be rescaled with the exact leading order (LO) result \begin{equation} \text{d}\sigma_\text{NLO}^\text{rescaled LME}/\text{d}X = \frac{\text{d}\sigma_\text{NLO}^\text{LME}/\text{d}X}{\text{d}\sigma_\text{LO}^\text{LME}/\text{d}X}\,\text{d}\sigma_\text{LO}^\text{exact}/\text{d}X\,, \label{eq:LME_rescaled} \end{equation} where $\text{d}\sigma/\text{d}X$ indicates the differential cross section with respect to some quantity $X$. For inclusive Higgs production this yields good agreement with the exact NLO result~\cite{Spira:1995rr,Harlander:2005rq,Anastasiou:2006hc,Aglietti:2006tp}. The comparison with the exact Higgs pair production result has however revealed the shortcomings of the approximation~\eqref{eq:LME_rescaled} for this process~\cite{Borowka:2016ehy,Borowka:2016ypz}. This issue is especially pronounced when distributions are considered.
Here, we advocate a different approach, based on conformal mapping and the construction of Pad\'e approximations from expansions in different kinematical regimes of the amplitude. This strategy was first introduced for heavy-quark current correlators $\Pi^{(j)}(q^2/(4m_q^2))$~\cite{Broadhurst:1993mw,Fleischer:1994ef} and applied successfully up to four-loop order~\cite{Chetyrkin:1998ix,Kiyo:2009gb,Hoang:2008qy}. The approximation can be improved systematically by including more information from the various kinematic limits. In fact, the three-loop approximation is indistinguishable from the results of an exact numeric computation~\cite{Maier:2017ypu}. In \cite{Fleischer:1994ef}, it was also shown for the decay $H\to \gamma\gamma$ that a Pad\'e reconstruction of the top mass effects from the asymptotic expansion in a large top mass yields excellent agreement with the full NLO decay rate. As for heavy-quark correlators and the $H\to \gamma\gamma$ decay rate, the amplitude for Higgs production in gluon fusion only depends on one ratio of scales and the application of the method is straightforward. However, the amplitudes for the remaining processes listed above depend on 4-5 scales. Pad\'e approximations based on the LME terms alone have been used to reconstruct the interference contribution in $gg\to ZZ$~\cite{Campbell:2016ivq}. An attempt to reconstruct the $gg\to HZ$ cross section has been made in~\cite{Hasselhuhn:2016rqt}.\footnote{The method presented below depends crucially on the analytic structure of the amplitude, whereas \cite{Hasselhuhn:2016rqt} considers Pad\'e approximants to the differential cross section, which is not an analytic function of the ratio $\hat{s}/(4m_t^2)$ near $m_t\to\infty$.
Therefore, the approach used in~\cite{Hasselhuhn:2016rqt} does not yield an adequate description above the top threshold and the improvement from employing a conformal mapping is marginal.} In this work, we show how such an approximation can be improved drastically by also taking into account expansions in other kinematic regions, using Higgs pair production as an example. Measuring di-Higgs production at the LHC allows a direct determination of the trilinear Higgs boson self-coupling $\lambda_3$~\cite{Djouadi:1999rca, Dolan:2012rv, Baglio:2012np}, which serves as a probe of the shape of the Higgs potential and is a crucial test of the mechanism of electroweak symmetry breaking in nature. While the couplings of the Higgs boson to the gauge bosons and third-generation fermions have been firmly established to be Standard Model-like within 10--20\%~\cite{Khachatryan:2016vau,ATLAS:2017bic,Sirunyan:2017elk}, constraining the trilinear self-coupling is highly challenging. With $3000\,\text{fb}^{-1}$ of data, the estimated bounds are $0.2 <\lambda_3/\lambda_3^{SM} < 7.0$ (neglecting systematic uncertainties) \cite{ATLASbbbb}. Current bounds from Higgs pair production final states constrain the trilinear Higgs self-coupling to $-8.8 <\lambda_3/\lambda_3^{SM} < 15.0$ \cite{CMSbbgaga}. Under the assumption that only the trilinear Higgs self-coupling is modified, bounds can be obtained from single Higgs production through the electroweak corrections \cite{McCullough:2013rea,Gorbahn:2016uoy,Degrassi:2016wml,Bizon:2016wgr} or from electroweak precision observables \cite{Degrassi:2017ucl,Kribs:2017znd}. However, the current bounds are still above the limits from perturbativity \cite{DiLuzio:2017tfn}. Precise theory predictions are crucial in the extraction of $\lambda_3$ from the cross section measurements. It is evident already at leading order that the LME alone is not sufficient.
In fact, as shown in Figure~\ref{fig:intro}, the cross section is dominated by energies of about 400\,GeV, whereas the LME breaks down at the top pair-production threshold around $2m_t \approx 350\,$GeV. As we will show, constructing Pad\'e approximations from the LME can ameliorate this problem to some degree, but not solve it completely. The reason for this is that, above the top threshold, the production amplitude receives non-analytic contributions, which cannot be reproduced by the purely rational Pad\'e approximants. Incorporating these non-analytic threshold corrections enhances the quality of the approximation dramatically in the dominant kinematic region and thus leads to a much improved prediction for the total cross section. \begin{figure}[t!] \centering \includegraphics[width=13cm]{Figures/MHHLME} \caption{Invariant mass distribution of the Higgs boson pair for the full LO cross section (dark blue) and the large mass expansion (LME) up to $\mathcal{O}(1/m_t^8)$ as given in Ref.~\cite{Degrassi:2016vss} (red-dashed).\label{fig:intro}} \end{figure} The outline of this paper is as follows: In Section~\ref{sec:method} we introduce our method for single Higgs production and then show how it can be generalized to the case of Higgs pair production. The computation of the additional input terms from the expansion around the top threshold is described in Section~\ref{sec:threshold}. In Section~\ref{sec:numerics} we perform a detailed comparison of both the LO and NLO Pad\'e approximation with the full LO result and the recent NLO results~\cite{Borowka:2016ehy,Borowka:2016ypz}, respectively. We conclude in Section~\ref{sec:conclusion} and offer an outlook on possible applications of our method.
\section{The method\label{sec:method}} We first discuss the construction of a Pad\'e approximation for the simple case of the virtual amplitude $\mathcal{A}_{gg\to H^{(*)}}$ in Section~\ref{sec:ggH_Pade} and then generalize the approach to Higgs pair production in Section~\ref{sec:ggHH_Pade}. \subsection{Pad\'e approximation for $\mathbf{gg\to H^{(*)}}$\label{sec:ggH_Pade}} The LO diagram for the production of an off-shell Higgs in gluon fusion is shown in Figure~\ref{fig:ggH} (left). The corresponding amplitude can be expressed through a dimensionless form factor $F_\triangle$ that only depends on the variable $z=(\hat{s}+i0)/(4m_t^2)$ \begin{equation} \mathcal{A}^{\mu\nu}_{ab}(g(p_1,\mu,a),g(p_2,\nu,b)\to H^{(*)}(p_H)) = \frac{y_t\hat{s}}{\sqrt{2}m_t}\frac{\alpha_s}{2\pi}\delta_{ab}T_FA_1^{\mu\nu}F_\triangle(z) \label{eq:AggH_def} \end{equation} where $\hat{s}=(p_1+p_2)^2=p_H^2$, $y_t=\sqrt{2}m_t/v$ is the top Yukawa coupling, $T_F=1/2$ and \begin{equation} A_1^{\mu\nu} = g^{\mu\nu}-\frac{p_1^\nu p_2^\mu}{p_1\cdot p_2}. \label{eq:A1_proj} \end{equation} The form factor $F_\triangle$ is normalized such that \begin{equation} F_\triangle\xrightarrow[]{m_t\to \infty} \frac{4}{3} + {\cal O}(\alpha_s). \end{equation} The leading-order contribution to the form factor is analytic in the entire complex plane with the exception of a branch cut for real $z\geq1$ due to on-shell $t\bar{t}$ cuts. At NLO, massless cuts like the one shown in the right of Figure~\ref{fig:ggH} introduce a branch cut starting at $z=0$. 
However, the branch cut can be made explicit \begin{eqnarray} F_\triangle & = & F_\triangle^{1l} + \frac{\alpha_s}{\pi}\,F_{\triangle}^{2l} + \mathcal{O}(\alpha_s^2)\nonumber\\ & = & F_\triangle^{1l} + \frac{\alpha_s}{\pi}\left[C_F F_{\triangle,C_F}^{2l} + C_A \left(F_{\triangle,C_A}^{2l}+F_{\triangle,C_A,\ln}^{2l}\ln(-4z)\right)\right] + \mathcal{O}(\alpha_s^2), \label{eq:Ftri_splitting} \end{eqnarray} such that all the $F_{\triangle,x}^{il}$ (with $i=1,2$ and $x=C_F, C_A, (C_A,\ln)$) on the right-hand side are again analytic except for real $z\geq1$. In $F_{\triangle,C_A}^{2l}$, IR divergences in the amplitude have been subtracted as described in Ref.~\cite{Degrassi:2016vss}. We can now apply the conformal transformation~\cite{Fleischer:1994ef} \begin{equation} z = \frac{4\omega}{(1+\omega)^2} \label{eq:conf_map} \end{equation} to map the entire complex $z$ plane onto the unit disc $|\omega|\leq1$ while the branch cut at $z\geq1$ is mapped onto the perimeter. The physical branch $\im(z) > 0$ corresponds to the upper semicircle, starting at $\omega(z=1) = 1$ and ending at $\omega(z\to \infty + i0) = -1$. With this mapping, the $F_{\triangle,x}^{il}$ are analytic functions of $\omega$ inside the unit circle. We approximate them using a Pad\'e ansatz \begin{equation} [n/m](\omega) = \frac{\sum\limits_{i=0}^n a_i \omega^i}{1 + \sum\limits_{j=1}^m b_j \omega^j} \label{eq:Pade_ansatz} \end{equation} with a total of $n+m+1$ coefficients. They can be fixed by imposing conditions stemming from known expansions of the approximated function. In many cases it is found that diagonal Pad\'e approximants with $n=m$ provide the best description. Indeed, we find that this also holds for our analysis. We therefore discard approximants that are too far away from the diagonal, as detailed below.
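As a concrete illustration of the mapping~\eqref{eq:conf_map} and the ansatz~\eqref{eq:Pade_ansatz}, the following self-contained numerical sketch (our own addition, not part of the original analysis) constructs a diagonal $[2/2]$ approximant for the LO triangle form factor from its first five LME coefficients and compares it to the well-known closed-form expression below threshold. Only the normalization $F_\triangle\to4/3$ and the map $z=4\omega/(1+\omega)^2$ are taken from the text; all series coefficients are derived in the code.

```python
import math

# Toy check of the conformal-mapping + Pade strategy on the LO triangle form
# factor, whose standard closed form below threshold (0 < z < 1) reads
#   F(z) = 2/z^2 * [z + (z-1) * arcsin(sqrt(z))^2],   F -> 4/3 as z -> 0.

def F_exact(z):
    return 2.0 / z**2 * (z + (z - 1.0) * math.asin(math.sqrt(z))**2)

# LME (Taylor) coefficients of F in z, derived from the classical series
# arcsin^2(x) = (1/2) * sum_n (2x)^(2n) / (n^2 * binom(2n, n)).
N = 5  # keep the orders z^0 .. z^4, as in the text
s = [0.0] * (N + 3)
for n in range(1, N + 3):
    s[n] = 0.5 * 4.0**n / (n**2 * math.comb(2 * n, n))
a = [2.0 * (s[k + 1] - s[k + 2]) for k in range(N)]  # a[0] = 4/3

def poly_mul(p, q, order):
    r = [0.0] * order
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j < order:
                r[i + j] += pi * qj
    return r

# Compose with z(w) = 4w/(1+w)^2 = sum_m 4*(-1)^(m-1)*m*w^m to obtain the
# Taylor coefficients c0..c4 of F as a function of the mapped variable w.
zw = [0.0] + [4.0 * (-1.0)**(m - 1) * m for m in range(1, N)]
c = [a[0]] + [0.0] * (N - 1)
zpow = [1.0] + [0.0] * (N - 1)
for k in range(1, N):
    zpow = poly_mul(zpow, zw, N)
    c = [ci + a[k] * zi for ci, zi in zip(c, zpow)]

# Diagonal [2/2] Pade from c0..c4: denominator 1 + b1*w + b2*w^2 (Cramer).
det = c[2] * c[2] - c[1] * c[3]
b1 = (-c[3] * c[2] + c[4] * c[1]) / det
b2 = (-c[4] * c[2] + c[3] * c[3]) / det
p0 = c[0]
p1 = c[1] + b1 * c[0]
p2 = c[2] + b1 * c[1] + b2 * c[0]

def pade(z):
    # inverse conformal map for real z < 1
    w = (1.0 - math.sqrt(1.0 - z)) / (1.0 + math.sqrt(1.0 - z))
    return (p0 + p1 * w + p2 * w**2) / (1.0 + b1 * w + b2 * w**2)

z_test = 0.9  # close to threshold, where the plain LME degrades
lme = sum(ak * z_test**k for k, ak in enumerate(a))
print(F_exact(z_test), pade(z_test), lme)
```

In this toy setup the mapped approximant reproduces the exact form factor at $z=0.9$ to better than one percent, while the truncated LME itself deviates at the few-percent level, illustrating why the conformal mapping is beneficial close to threshold.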
\begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Figures/ggH} \caption{\label{fig:ggH} The LO diagram for Higgs production in gluon fusion (left) and an example of an NLO diagram that contains a branch cut starting at $\hat{s}=0$ (right).} \end{center} \end{figure} The LME for the form factor $F_\triangle$ has been given up to terms of the order $z^4$ in~\cite{Aglietti:2006tp}. The conformal mapping~\eqref{eq:conf_map} transforms this into constraints on the derivatives of the Pad\'e approximant at $\omega=0$. Furthermore, the form factor vanishes for $z\to\infty$ as $F_\triangle(z)=\mathcal{O}(1/z)$ since $\hat{s}\sim z$ has been factored out in~\eqref{eq:AggH_def}. In a direct approach this would imply the constraint $[n/m](\omega=-1)=0$. Instead, we construct the Pad\'e approximant for the rescaled form factor \begin{equation} [n/m](\omega) \simeq \left[1+a_R\,z(\omega)\right] F_\triangle(z(\omega)), \end{equation} where $a_R$ is a free parameter. This serves a double purpose. First, it removes the spurious constraint at $\omega=-1$ which implies that the dimensionality of the non-linear system of equations that determines the coefficients of the Pad\'e approximant is reduced by one. Second, the variation of the parameter $a_R$ allows us to test the stability of the ansatz and to assign an uncertainty to the reconstruction. A set of Pad\'e approximants with $n+m=4$ can be constructed based only on the constraints from the LME up to $\mathcal{O}(z^4)$. The Pad\'e ansatz \eqref{eq:Pade_ansatz} has $m$ poles in the $\omega$ plane. Here, and in the remainder of this work, we eliminate a subset of Pad\'e approximants based on the positions of these poles. Since the amplitude is analytic inside the unit disc, the canonical selection criterion is to exclude approximants with poles at $|\omega|\leq1+\delta$, where $\delta>0$ should be chosen such that no unphysical resonances, caused by nearby poles, are observed in the amplitude.
We find, however, that this criterion proves too restrictive as it excludes almost all approximants. Thus, we relax the selection criterion and exclude approximants with poles in the region corresponding to values of $z$ with $0\leq\text{Re}(z)\leq8$ and $-1\leq\text{Im}(z)\leq1$, thereby excluding poles in the vicinity of the phenomenologically relevant region $0.13\lesssim z\lesssim5$. We have checked the stability of the results under variation of the exclusion region. The result is shown in Figure~\ref{fig:FboxLME} and compared to the exact expression for the form factor~\cite{Spira:1995rr,Harlander:2005rq,Anastasiou:2006hc,Aglietti:2006tp}. At LO the agreement is good, whereas at NLO the Pad\'e curves become unstable under variations of $a_R$ and $n/m$ and show significant deviations from the exact result for energies near and above the top threshold $z\gtrsim1$. \begin{figure} \begin{center} \includegraphics[width=0.7\textwidth]{Figures/FtriPadeLMT}\\[0.3cm] \includegraphics[width=0.7\textwidth]{Figures/FtriNLOPadeLMT} \caption{\label{fig:FboxLME} Pad\'e approximants for $F_\triangle$ at LO (top) and NLO (bottom) constructed using only the LME up to the order $1/m_t^8$ as input. Shown are the real/imaginary part of the Pad\'e approximants (blue/orange) and the exact results (black). We constructed in total 20 approximants of the types [1/3], [2/2] and [3/1] for random values of $a_R$ in the range [0.1,10], while approximants with poles in the rectangle $\text{Re}(z)\in[0,8]$ and $\text{Im}(z)\in[-1,1]$ have been excluded since they can cause unphysical resonances in the form factor.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.7\textwidth]{Figures/FtriPadefull}\\[1cm] \includegraphics[width=0.7\textwidth]{Figures/FtriNLOPadefull} \caption{\label{fig:FboxLMEThr} We show the same comparison as in Figure~\ref{fig:FboxLME} but for Pad\'e approximants based on the LME and the threshold expansion. 
Only [5/2], [4/3], [3/4] and [2/5] approximants were constructed at LO and only [4/2], [3/3] and [2/4] approximants were constructed at NLO.} \end{center} \end{figure} We can gain some insight into this deviation by studying the expansion of the form factor around the top threshold. In particular we are interested in the non-analytic terms in the expansion in $(1-z)$ which can be determined with the help of a factorization formula as discussed below in Section~\ref{sec:threshold}. Our results take the form \begin{eqnarray} F_\triangle^{1l} & \mathop{\asymp}\limits^{z\to1} & 2\pi (1-z)^{3/2} + \frac{13\pi}{3} (1-z)^{5/2} + \mathcal{O}\left((1-z)^{7/2}\right), \label{eq:triangle_threshold1}\\ % F_{\triangle,C_F}^{2l} & \mathop{\asymp}\limits^{z\to1} & \pi^2 (1-z) \ln(1-z) - \frac{\pi(40 - 3 \pi^2)}{12} (1-z)^{3/2} + \frac{2\pi^2}{3} (1-z)^2 \ln(1-z) \nonumber\\ & & + \mathcal{O}\left((1-z)^{5/2}\right),\label{eq:triangle_threshold2}\\ % F_{\triangle,C_A}^{2l} & \mathop{\asymp}\limits^{z\to1} & - \frac{\pi\left(3\pi ^2-4\right)}{12}(1-z)^{3/2} + \mathcal{O}\left((1-z)^{5/2}\right),\label{eq:triangle_threshold3}\\ % F_{\triangle,C_A,\ln}^{2l} & \mathop{\asymp}\limits^{z\to1} & \mathcal{O}\left((1-z)^{5/2}\right),\label{eq:triangle_threshold4} \end{eqnarray} where we have used the symbol $\asymp$ to denote that terms that are analytic in $(1-z)$ have been dropped on the right-hand side. We observe that threshold logarithms $\ln(1-z)$, which cannot be reproduced by the Pad\'e ansatz, appear at NLO. Having determined the coefficients of the logarithmic terms at the first two orders we can however subtract them from the form factor and apply the Pad\'e approximation to the subtracted function. 
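The non-analytic terms can also be checked numerically. For real $z>1$ on the physical branch the terms analytic in $(1-z)$ are real, so the imaginary part of the exact LO form factor must be reproduced by the continuation of~\eqref{eq:triangle_threshold1} alone, with $(1-z)^{3/2}\to i\,(z-1)^{3/2}$ and $(1-z)^{5/2}\to -i\,(z-1)^{5/2}$ for $z+i0$. The short sketch below performs this cross-check; it is an illustration of ours, using the standard analytic continuation of $\arcsin^2\sqrt{z}$ above the cut, and is not part of the original computation.

```python
import math

# Cross-check of the leading non-analytic threshold terms of F_triangle^{1l}:
# just above threshold, Im F(z + i0) must agree with
#   2*pi*(z-1)^(3/2) - (13*pi/3)*(z-1)^(5/2)
# up to corrections of order (z-1)^(7/2).

def imF_exact(z):
    # Im F_triangle^{1l}(z + i0) for z > 1, from the standard continuation
    # arcsin^2(sqrt(z)) -> -(1/4)*[L - i*pi]^2 with L = log((1+s)/(1-s)),
    # s = sqrt(1 - 1/z), so that Im F = pi*(z-1)*L/z^2.
    s = math.sqrt(1.0 - 1.0 / z)
    L = math.log((1.0 + s) / (1.0 - s))
    return math.pi * (z - 1.0) * L / z**2

def imF_threshold(z):
    # imaginary part of the first two non-analytic threshold terms
    return 2.0 * math.pi * (z - 1.0)**1.5 \
        - (13.0 * math.pi / 3.0) * (z - 1.0)**2.5

print(imF_exact(1.01), imF_threshold(1.01))
```

Just above threshold ($z=1.01$) the two expressions agree to a few parts in $10^{6}$, the residual difference being of the expected order $(z-1)^{7/2}$.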
Taking a function $f(z)$ with the threshold expansion \begin{equation} f(z) \,\,\mathop{\asymp}\limits^{z\to1}\,\, c_1 \sqrt{1-z} + c_2 (1-z)\ln(1-z) + c_3 (1-z)^{3/2} + c_4 (1-z)^2\ln(1-z) + \dots \end{equation} as an example we define \begin{equation} \tilde{f}(z) = f(z) - c_2 s_2(z) - \left(c_4 - \frac{c_2}{3}\right) s_4(z), \end{equation} where $s_{2,4}$ are constructed such that their leading non-analytic terms in the threshold expansion are given by $(1-z)\ln(1-z)$ and $(1-z)^2\ln(1-z)$, respectively. In addition, the subtraction terms must be analytic around $z = 0$ and at most logarithmically divergent for $z \to \infty$.\footnote{In principle, non-logarithmic poles of the form $z^n$ are also allowed, but these have to cancel against corresponding poles in the Pad\'e approximation.} Apart from these constraints, the exact form of the subtraction functions is arbitrary. Our choice for the functions $s_{2,4}$ can be found in Appendix~\ref{sec:subtractions}. The threshold expansion of $\tilde{f}$ is free of logarithms up to and including the order $(1-z)^2$. An improved approximation of the original function $f$ is then given by \begin{equation} f(z) \simeq [n/m]_{\tilde{f}}(\omega(z)) + c_2 s_2(z) + \left(c_4 - \frac{c_2}{3}\right) s_4(z), \end{equation} where the Pad\'e approximant $[n/m]_{\tilde{f}}$ is constructed from the expansion terms of the subtracted function $\tilde{f}$. In addition, the non-integer powers of $(1-z)$ in eqs.~\eqref{eq:triangle_threshold1} -- \eqref{eq:triangle_threshold4} imply constraints on the derivatives of the Pad\'e approximation at $\omega=1$. By using all the available constraints we can construct approximants with a total of $n+m+1=8$ coefficients at LO and $n+m+1=7$ coefficients at NLO. The results are given in Figure~\ref{fig:FboxLMEThr} and show perfect agreement with the exact LO form factor in the full energy range. At NLO the agreement is excellent up to $z\sim2.5$ where tiny deviations begin to emerge. 
For very large $z$, outside the phenomenologically relevant energy range, the approximants have unphysical extrema. We suspect that they could be removed by including information from the small $m_t$ expansion (SME) of the form factors in the construction. An alternative implementation is obtained by performing additional subtractions for the root terms by employing the functions $s_{1,3,5}$ in Appendix~\ref{sec:subtractions}, thereby removing all known non-analytic terms in the expansion. This yields the same number of constraints on the Pad\'e approximant. In the following we will only use the subtraction functions $s_{2,4}$, since we find no significant differences between the two approaches. \subsection{Pad\'e approximation for $\mathbf{gg\to HH}$\label{sec:ggHH_Pade}} The amplitude for the process $gg\to HH$ can be parametrized by two dimensionless form factors $F_{1,2}$ \begin{equation} \mathcal{A}^{\mu\nu}_{ab}(g(p_1,\mu,a),g(p_2,\nu,b)\to H(p_3)H(p_4)) = y_t^2\,\frac{\alpha_s}{2\pi}\,\delta_{ab}T_F\, z\left[A_1^{\mu\nu}F_1 + A_2^{\mu\nu}F_2\right], \label{eq:AggHH_def} \end{equation} where $\hat{s}=(p_1+p_2)^2$, $\hat{t}=(p_1-p_3)^2$, $\hat{u}=(p_1-p_4)^2$, $\hat{s}+\hat{t}+\hat{u} = 2m_H^2$, $A_1^{\mu\nu}$ is given in~\eqref{eq:A1_proj} and \begin{equation} A_2^{\mu\nu} = g^{\mu\nu}+ \frac{ p_3^2\, p_1^{\nu}\, p^{\mu}_2 - 2\left(p_3\cdot p_2\right)p_1^{\nu}\,p_3^{\mu} - 2\left(p_3\cdot p_1\right)p_3^{\nu}\,p_2^{\mu} + 2 \left(p_1\cdot p_2\right) p_3^{\mu}\,p_3^{\nu}} {p_T^2\left(p_1\cdot p_2\right)}, \label{eq:A2_proj} \end{equation} with \begin{equation} p_T^2 = \frac{\hat{t}\hat{u} - m_H^4}{\hat{s}}\,. \end{equation} Given that there are four independent scales the dimensionless form factors depend on three ratios \begin{equation} F_i = F_i\left( r_H\equiv \frac{m_H^2}{\hat{s}}, r_{p_T}\equiv \frac{p_T^2}{\hat{s}}, z \right), \hspace{2cm}i=1,2. 
\label{eq:FiGGHH} \end{equation} This implies that their analytic structure is much more complicated than was the case for $F_\triangle$. For instance, there are branch cuts in the complex $\hat{t}$ and $\hat{u}$ planes above the thresholds $\hat{t}\geq4m_t^2$ and $\hat{u}\geq4m_t^2$. These are, however, not kinematically accessible for external momenta that are both real and on shell. Furthermore, for $z \geq 1/r_H \geq 4$ there is also a discontinuity from cuts corresponding to the processes $gg\to t\bar{t}H$ and $H\to t\bar{t}$ which are, however, not accessible for the physical Higgs and top masses. In the limit of small quark masses, $z\to\infty$, where this type of cut is present, the recent analytical computation of the NLO virtual amplitudes for Higgs plus jet production~\cite{Melnikov:2016qoc,Melnikov:2017pgf} has revealed a rather complicated structure of logarithms in the soft and (in particular) the collinear limit which is presently not fully understood. Here, we take a practitioner's approach and note that when $r_H$ and $r_{p_T}$ are kept fixed we can separate massless cuts as in~\eqref{eq:Ftri_splitting} and again end up with functions that are analytic in $z$ apart from a branch cut for real $z>1$. Therefore it is possible to approximate the top-quark mass dependence of the form factors at a given phase-space point, i.e. for fixed $m_H^2$, $\hat{s}$ and $p_T^2$, by constructing a Pad\'e approximant that describes the dependence on the variable $z$. We find that the inclusion of the top threshold terms, as described for the triangle form factor~\eqref{eq:Ftri_splitting} in Section~\ref{sec:ggH_Pade}, is of even greater importance for the construction of Pad\'e approximants for the form factors~\eqref{eq:FiGGHH} than for $F_\triangle$. The computation of these terms is described in the following Section~\ref{sec:threshold} and our results are given in Appendix~\ref{sec:ggHH_Results}.
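For concreteness, the three ratios in~\eqref{eq:FiGGHH} can be computed from a phase-space point specified by the partonic energy and the Higgs scattering angle in the $gg$ centre-of-mass frame. The helper below is our own illustration (the numerical mass values are merely indicative); it also makes explicit that $p_T^2=(\hat{s}/4)\,\beta^2\sin^2\theta$ with $\beta^2=1-4m_H^2/\hat{s}$, i.e. the square of the Higgs transverse momentum.

```python
import math

# Kinematics helper for the ratios in eq. (FiGGHH). Inputs: partonic CM
# energy sqrt(s_hat) and cos(theta) of one Higgs in the gg CM frame.
# Illustrative mass values in GeV (not a precision choice).
M_T, M_H = 173.0, 125.0

def hh_ratios(sqrt_s_hat, cos_theta):
    s_hat = sqrt_s_hat**2
    beta = math.sqrt(1.0 - 4.0 * M_H**2 / s_hat)        # Higgs velocity
    t_hat = M_H**2 - 0.5 * s_hat * (1.0 - beta * cos_theta)
    u_hat = 2.0 * M_H**2 - s_hat - t_hat                # enforces s+t+u = 2 mH^2
    pT2 = (t_hat * u_hat - M_H**4) / s_hat              # = (s_hat/4) beta^2 sin^2(theta)
    z = s_hat / (4.0 * M_T**2)
    return z, M_H**2 / s_hat, pT2 / s_hat               # z, r_H, r_pT

# Example point in the dominant region around 400 GeV:
z, rH, rpT = hh_ratios(400.0, 0.3)
print(z, rH, rpT)
```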
Readers who are mostly interested in the phenomenological aspects may prefer to proceed to Section~\ref{sec:numerics}. There, we assess the reliability of our approach for Higgs pair production by performing a detailed comparison with the exact NLO results. \section{The amplitude near threshold\label{sec:threshold}} In this section the computation of the non-analytic terms in the threshold expansion of the form factors defined in Section~\ref{sec:method} is described. Factorization formulae for the inclusive production cross section of heavy-particle pairs near threshold have been developed in \cite{Beneke:2003xh,Beneke:2004km,Beneke:2009rj,Beneke:2010da,Falgari:2012hx} and applied to a number of processes~\cite{Beneke:2007zg,Actis:2008rb,Beneke:2010mp, Jantzen:2013gpa,Beneke:2017rdn,Beneke:2011mq,Beneke:2012wb,Beneke:2016kvz}. The approach is based on the factorization of forward-scattering amplitudes which are related to the inclusive cross section by the optical theorem. We have extended the factorization formula to the $gg\to H^{(*)},HH,HZ,ZZ$ amplitudes. Only the basic aspects are sketched here and the reader is referred to the original literature~\cite{Beneke:2003xh,Beneke:2004km,Beneke:2009rj,Beneke:2010da,Falgari:2012hx} for a detailed derivation and discussion. \subsection{Structure of the amplitude near threshold\label{sec:structure}} Near the threshold, $z\to1$, the top quarks can only be on shell if they are non-relativistic. This implies a large hierarchy between the top mass $m_t$, its typical momentum $m_t\sqrt{1-z}$ and its kinetic energy $m_t(1-z)$ which set the hard, soft and ultrasoft scale, respectively. Therefore, an effective field theory (EFT) can be constructed by integrating out the hard and soft scale. Then, the only dynamical modes left are non-relativistic top quarks, collinear and ultrasoft gluons and the external fields. 
The EFT describes the interactions of the remaining modes and is based on \emph{potential non-relativistic QCD} (PNRQCD)~\cite{Pineda:1997bj,Pineda:1997ie,Beneke:1998jj,Beneke:1999qg,Brambilla:1999xf,Beneke:2013jia} and \emph{Soft Collinear Effective Theory} (SCET)~\cite{Bauer:2000ew,Bauer:2000yr, Bauer:2001yt,Beneke:2002ph,Beneke:2002ni,Becher:2014oda}. The amplitudes for $gg\to F$ with final states $F=H^{(*)},HH,HZ,ZZ$ are given by the master formula~(cf.~\cite{Beneke:2003xh,Beneke:2004km}) \begin{eqnarray} i\mathcal{A}_{gg\to F} & \mathop{=}\limits^{z\to1} & \sum\limits_{k,l}C_{gg\to t\bar{t}}^{(k)}\,C_{t\bar{t}\to F}^{(l)}\, \int d^4x\Braket{F|T\left[i\mathcal{O}_{t\bar{t}\to F}^{(l)}(x)i\mathcal{O}_{gg\to t\bar{t}}^{(k)}(0)\right]|gg}_\text{EFT} \nonumber \\ & & + C_{gg\to F}\Braket{F|i\mathcal{O}_{gg\to F}(0)|gg}_\text{EFT}, \label{eq:Master_eq} \end{eqnarray} where the matrix elements have to be evaluated in the EFT. In analogy with~\cite{Beneke:2003xh,Beneke:2004km}, we call the contributions in the first and second line of~\eqref{eq:Master_eq} the 'resonant' and 'non-resonant' amplitude, respectively. This structure is shown in Figure~\ref{fig:Factorization} in diagrammatic form. \begin{figure} \begin{center} \includegraphics[width=0.9\textwidth]{Figures/Factorization} \caption{\label{fig:Factorization} Graphical representation of the terms in the master formula~\eqref{eq:Master_eq}. The diagram on the left (right) corresponds to the 'resonant' ('non-resonant') part of the amplitude. The shaded area indicates that Coulomb exchanges between the top quark pair are resummed.} \end{center} \end{figure} The 'resonant' part in the first line of~\eqref{eq:Master_eq} contains the contributions that involve a non-relativistic top quark pair, i.e. a top pair that is close to being on resonance. This entails that only a soft spatial momentum can be exchanged between the initial and final state.
Since the incoming gluons contain hard momentum components, they must be connected by a hard subgraph. The same holds for the two final state particles. Integrating out these hard subgraphs yields local production operators \begin{equation} \left[\mathcal{O}_{gg\to t\bar{t}}^{(k)}\right]^{\mu\nu} = \mathcal{A}_c^{\perp\mu}\mathcal{A}_{\bar{c}}^{\perp\nu}\,\psi^\dagger\Gamma^{(k)}\chi, \label{eq:production_op} \end{equation} that annihilate the incoming gluons and create a non-relativistic top pair and local annihilation operators \begin{equation} \mathcal{O}_{t\bar{t}\to F}^{(l)} = \chi^\dagger\Gamma^{(l)}\psi\,\phi_F^\dagger, \label{eq:annihilation_op} \end{equation} that annihilate the top pair and create the final-state particles. Here $\mathcal{A}_{\bar{c}}^{\perp}$ is the collinear gluon field given in~\cite{Beneke:2010da}, the non-relativistic two-component spinor fields $\psi$ and $\chi$ annihilate a top quark and produce an anti-top quark, respectively, $\Gamma^{(k)}$ contains a combination of Pauli matrices, $SU(3)_c$ generators and, potentially, covariant derivatives, and $\phi_F^\dagger$ represents a combination of fields that produces the final state. Both types of operators have associated hard-matching coefficients that absorb the higher-order corrections from hard modes. The propagation of the non-relativistic top pair is subject to a non-local color Coulomb interaction that manifests as $\alpha_s/\sqrt{1-z}$ corrections in the amplitude. These so-called Coulomb singularities can be resummed to all orders within PNRQCD. The 'resonant' contribution contains non-analytic $\sqrt{1-z}$ and $\ln(1-z)$ terms that correspond to on-shell cuts of the non-relativistic top pair. Contributions where a hard momentum component is exchanged between the initial and the final state are contained in the 'non-resonant' part in the second line of~\eqref{eq:Master_eq}.
In the EFT they are represented by the matrix element of the local operator \begin{equation} \left[\mathcal{O}_{gg\to F}\right]^{\mu\nu} = \mathcal{A}_c^{\perp\mu}\mathcal{A}_{\bar{c}}^{\perp\nu}\,\phi_F^\dagger, \label{eq:nonres_op} \end{equation} that annihilates the incoming state and creates the final state. Since the top quarks cannot be on shell near threshold when they carry hard momentum, there are no discontinuities associated with $t\bar{t}$ cuts. Therefore, this contribution admits a Taylor expansion in $(1-z)$ once massless cuts have been separated as described in Section~\ref{sec:ggH_Pade}. The computation of this contribution is very involved since already the leading term in the Taylor expansion has the complexity of the full amplitude evaluated directly at the threshold $z=1$. However, we expect the Pad\'e approximation to predict this unknown analytic part of the amplitude very accurately, even when using only the LME as input. Indeed, as we showed explicitly in Section~\ref{sec:ggH_Pade}, adding the knowledge of just the non-analytic terms near threshold is already sufficient to reconstruct the full top-quark mass dependence with high accuracy. Therefore we can safely ignore the non-resonant contribution and only focus on the much simpler factorizable part. \subsection{Computation of the non-analytic terms\label{sec:computation}} In this section we describe the computation of the 'resonant' part of the amplitude~\eqref{eq:Master_eq}. We adopt here the non-relativistic power counting where $\alpha_s\sim\sqrt{1-z}$ and denote the $k$th order in this counting by nrN$^k$LO to distinguish it from the fixed-order expansion in the strong coupling constant. At nrLO, the matrix element is given by a non-relativistic Green function which resums the $1/\sqrt{1-z}$ enhanced effects from the ladder-exchange of Coulomb gluons as indicated in Figure~\ref{fig:LO}.
Hence, at any loop order, the leading non-analytic term in the threshold expansion of the amplitude can be determined by expanding the nrLO result to the respective order in $\alpha_s$. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Figures/LO} \caption{\label{fig:LO} Matrix element at leading order in the power counting $\alpha_s\sim\sqrt{1-z}$.} \end{center} \end{figure} Up to nrNNLO, terms of the relative order \begin{equation} \frac{\mathcal{A}_\text{'resonant'}}{\mathcal{A}_0(z=1)} \sim \sqrt{1-z}^{\,2l+1} \sum\limits_{k=0}^\infty \left(\frac{\alpha_s}{\sqrt{1-z}}\right)^k \times \begin{cases} \begin{array}{ll} 1 & \text{nrLO},\\ \alpha_s,\sqrt{1-z} & \text{nrNLO},\\ \alpha_s^2,\alpha_s \sqrt{1-z}, (1-z)\hspace{1cm} & \text{nrNNLO}, \end{array} \end{cases} \label{eq:scaling} \end{equation} must be included, where $\mathcal{A}_0(z=1)$ is the LO amplitude evaluated \emph{at} the top threshold, $l=0,1,\dots$ denotes the angular momentum of the top pair and the global factor $\sqrt{1-z}$ accounts for the suppression of the phase-space near threshold. \begin{figure}[t] \begin{center} \includegraphics[width=0.78\textwidth]{Figures/power_counting} \caption{\label{fig:poco} Relation between relativistic (LO, NLO, NNLO) and non-relativistic (nrLO, nrNLO, nrNNLO) power counting up to next-to-next-to-leading order. The axes show the powers of $\alpha_s$ and $\sqrt{1-z}$ in the various coefficients represented by the markers. Note that the normalization is chosen such that $\alpha_s^0$ corresponds to LO.} \end{center} \end{figure} Fig.~\ref{fig:poco} illustrates the relation between different orders in standard relativistic perturbation theory and in the non-relativistic effective theory. For example, the following terms on the right-hand side of Eq.~\eqref{eq:scaling} contribute to the fixed-order expansion up to NLO: \begin{itemize} \item The nrLO terms with relative factors $\sqrt{1-z}^{\,2l+1}, \alpha_s\sqrt{1-z}^{\,2l}$. 
\item The nrNLO terms with relative factors $\sqrt{1-z}^{\,2l+2}, \alpha_s\sqrt{1-z}^{\,2l+1}$. \item The nrNNLO terms with relative factors $\sqrt{1-z}^{\,2l+3}, \alpha_s\sqrt{1-z}^{\,2l+2}$. \end{itemize} For the processes $gg\to H^{(*)}$ and $gg\to HH$ there is no contribution from S-wave $t\bar{t}$ states due to parity and C-parity conservation.\footnote{The $H$ and $HH$ final states have even parity and C-parity and the $t\bar{t}$ state with angular momentum $l$ and spin $s=0,1$ has $P=(-1)^{l+1}$ and $C=(-1)^{l+s}$. Thus, $l$ is one ($H$) or odd ($HH$) and $s=1$.} The leading 'resonant' contribution therefore contains the P-wave Green function~\cite{Beneke:2013kia} which is suppressed by $(1-z)$ near threshold. We want to determine the 'resonant' amplitude up to nrNLO in the scaling~\eqref{eq:scaling}, which contains the next-to-leading non-analytic terms in the threshold expansion at any loop order. In addition, we compute the first two terms in the fixed-order expansion of the nrNNLO result in $\alpha_s$, i.e. those of relative orders $(1-z)^{5/2}$ and $\alpha_s(1-z)^2$. They correspond to the next-to-next-to-leading threshold terms for the one- and two-loop amplitudes which we study in Section~\ref{sec:numerics}. The matrix elements in~\eqref{eq:Master_eq} receive corrections from the higher-order non-local potentials and the dynamical modes contained in the EFT. The EFT contains no interactions of collinear modes with non-relativistic modes or between collinear modes of different directions. They cannot be present because the combination of the involved momenta yields hard modes which have been integrated out. Therefore the only collinear corrections at nrNLO are from the left diagram in Figure~\ref{fig:CollinearUltrasoft}. The corresponding loop integral is scaleless and therefore vanishes in dimensional regularization.
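The bookkeeping encoded in the list above can be cross-checked mechanically. The sketch below (our own illustration) generates all 'resonant' terms of~\eqref{eq:scaling} up to nrNNLO, labels each by its powers of $\alpha_s$ and $\sqrt{1-z}$ relative to the LO amplitude at threshold, and keeps those that enter the fixed-order amplitude up to NLO, i.e. with at most one power of $\alpha_s$; for the P-wave case $l=1$ the surviving labels are exactly the ones quoted in the bullet points with $2l+1=3$.

```python
# Power-counting bookkeeping for eq. (scaling) (a cross-check of ours).
# A term is labelled by (nr order, n_alpha, n_sqrt): its powers of alpha_s
# and sqrt(1-z) relative to the LO amplitude at threshold.

L_ANG = 1          # P-wave top pair, as relevant for gg -> H, HH
MAX_K = 3          # Coulomb insertions (alpha_s/sqrt(1-z))^k; enough for NLO

def terms_up_to_nlo():
    extras = {0: [(0, 0)],                 # nrLO:   1
              1: [(1, 0), (0, 1)],         # nrNLO:  alpha_s, sqrt(1-z)
              2: [(2, 0), (1, 1), (0, 2)]} # nrNNLO: alpha_s^2, alpha_s sqrt, (1-z)
    out = set()
    for order, ex in extras.items():
        for (ea, eu) in ex:
            for k in range(MAX_K + 1):
                n_alpha = k + ea                  # total power of alpha_s
                n_sqrt = 2 * L_ANG + 1 - k + eu   # total power of sqrt(1-z)
                if n_alpha <= 1:                  # fixed order up to NLO
                    out.add((order, n_alpha, n_sqrt))
    return out

expected = {                    # the bullet list above, with 2l+1 = 3
    (0, 0, 3), (0, 1, 2),       # nrLO:   sqrt^3, alpha_s*sqrt^2
    (1, 0, 4), (1, 1, 3),       # nrNLO:  sqrt^4, alpha_s*sqrt^3
    (2, 0, 5), (2, 1, 4),       # nrNNLO: sqrt^5, alpha_s*sqrt^4
}
print(terms_up_to_nlo() == expected)
```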
\begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{Figures/Collinear} \includegraphics[width=0.45\textwidth]{Figures/Soft} \caption{\label{fig:CollinearUltrasoft} nrNLO diagrams involving collinear (left) and ultrasoft (right) gluon radiation. Both loop integrals are scaleless and vanish in dimensional regularization.} \end{center} \end{figure} Ultrasoft gluons couple to the collinear and non-relativistic sector as well as to the P-wave production and annihilation operators. The exchange of ultrasoft gluons between the collinear states shown in the diagram on the right of Figure~\ref{fig:CollinearUltrasoft} yields only scaleless integrals. The interactions in the EFT must be multipole expanded. At leading order in the multipole expansion ultrasoft gluons couple to the net color charge of the $t\bar{t}$ state since the large wavelength $\lambda\sim1/(m_t(1-z))$ gluons cannot resolve the spatial separation $a_\text{B}\sim1/(m_t\sqrt{1-z})$ of the top pair. The first non-vanishing term in the multipole expansion for color singlet states is therefore the chromoelectric term $\psi^\dagger\,\mathbf{x}\cdot\mathbf{E}\,\psi$ which is suppressed by $\alpha_s^{1/2}\sqrt{1-z} \sim \alpha_s^{3/2}$. Similarly the ultrasoft gluon term in the covariant derivative in the P-wave operators is suppressed by $\alpha_s^{1/2}\sqrt{1-z} \sim \alpha_s^{3/2}$ with respect to the derivative term. A single insertion of either of these subleading terms vanishes by rotational invariance~\cite{Falgari:2012hx}. Thus, contributions from the subleading interactions require at least two insertions and first appear at nrNNNLO. The effects of higher-order potentials enter as corrections to the non-relativistic Green function. The nrNNNLO S-wave and nrNLO P-wave Green functions have been computed for $t\bar{t}$ production in $e^+e^-$ collisions near threshold \cite{Beneke:2015kwa,Beneke:2013kia}. 
We determine the $\alpha_s^{0,1}$ terms in the nrNNLO P-wave Green function in Appendix~\ref{sec:Pwave}. Up to the considered order the resonant amplitudes hence take the simple factorized form \begin{equation} \mathcal{A}_\text{resonant} = \sum\limits_{k,l}\,\mathcal{N}_{kl}(1-z)\,C_{gg\to t\bar{t}}^{(k)}\,C_{t\bar{t}\to F}^{(l)}\,G_{S,P}(1-z). \label{eq:resonant_factorization} \end{equation} The Wilson coefficients $C_{gg\to t\bar{t}}^{(k)},\,C_{t\bar{t}\to F}^{(l)}$ are perturbative in $\alpha_s$ and independent of~$z$. We can compute them via matching to the full Standard Model, i.e.~by performing a Taylor expansion of the on-shell amplitudes for $gg\to t\bar{t},\, t\bar{t}\to F$ around the top threshold and comparing to the matrix elements of the effective operators $\mathcal{O}_{gg\to t\bar{t}}^{(k)},\,\mathcal{O}_{t\bar{t}\to F}^{(l)}$. Subleading terms in the Taylor expansion in $(1-z)$ correspond to higher-dimensional operators, which contain derivatives acting on the non-relativistic top and anti-top fields. Since $(1-z)\sim\alpha_s^2$, we only require matrix elements with at most one subleading operator up to nrNNLO. The normalization factors $\mathcal{N}_{kl}$ are either $z$-independent, if the operators $\mathcal{O}_{gg\to t\bar{t}}^{(k)}$ and $\mathcal{O}_{t\bar{t}\to F}^{(l)}$ are of leading order in the non-relativistic expansion, or proportional to $(1-z)\sim\alpha_s^2$, if one of the operators is of subleading order. 
To achieve the accuracies specified in~\eqref{eq:scaling} we require the following ingredients: \begin{itemize} \item nrLO: \begin{itemize} \item the tree-level coefficients $C_{gg\to t\bar{t}}^{(k)},\,C_{t\bar{t}\to F}^{(l)}$ \item the nrLO Green function $G_{S,P}(1-z)$ \end{itemize} \item nrNLO: the above and \begin{itemize} \item the one-loop coefficients $C_{gg\to t\bar{t}}^{(k)},\,C_{t\bar{t}\to F}^{(l)}$ \item the nrNLO Green function $G_{S,P}(1-z)$ \end{itemize} \item the order $\alpha_s^{0,1}$ terms at nrNNLO: the above and \begin{itemize} \item the tree-level coefficients $C_{gg\to t\bar{t}}^{(k)},\,C_{t\bar{t}\to F}^{(l)}$ for the $(1-z)$-suppressed operators \item the $\alpha_s^{0,1}$ terms in the nrNNLO Green function $G_{S,P}(1-z)$ \end{itemize} \item nrNNLO: the above and \begin{itemize} \item the two-loop coefficients $C_{gg\to t\bar{t}}^{(k)},\,C_{t\bar{t}\to F}^{(l)}$ \item the nrNNLO Green function $G_{S,P}(1-z)$ \end{itemize} \end{itemize} As mentioned before, it is sufficient to know the nrNNLO terms proportional to $\alpha_s^0$ and $\alpha_s^1$ in order to construct approximations to two-loop (NLO) fixed-order amplitudes (cf.\ Fig.~\ref{fig:poco}). The remaining nrNNLO terms of the relative order $\alpha_s^2(1-z)^{3/2}$ will be important for the reconstruction of the three-loop amplitude. Since their determination requires the calculation of the two-loop matching coefficients $C_{gg\to t\bar{t}}^{(k)}$ and $C_{t\bar{t}\to F}^{(l)}$ as the most complicated ingredient, we postpone this to future work. The one-loop coefficients $C_{t\bar{t}\to F}^{(l)}$ are finite after field and mass renormalization. The one-loop coefficients $C_{gg\to t\bar{t}}^{(k)}$, however, require additional IR subtractions since the virtual amplitude by itself is not IR safe. 
Our results for the threshold expansion of the form factors are given in~\eqref{eq:triangle_threshold1}--\eqref{eq:triangle_threshold4} and Appendix~\ref{sec:ggHH_Results} together with the details of the IR subtractions. Together with the nrNLO expression for the P-wave Green function~\cite{Beneke:2013kia} these results are sufficient to determine the leading and next-to-leading non-analytic terms in the threshold expansion of the form factors at \emph{any order} in $\alpha_s$. Another interesting, yet more involved, application of our formalism is Higgs plus jet production. Here, we comment briefly on this case, but leave a more careful assessment to future work. The amplitudes $gg\to Hg$, $gq\to Hq$ and $q\bar{q}\to Hg$ obey the same structure as~\eqref{eq:Master_eq} near the top threshold, but the corresponding `resonant' matrix elements are more complicated since the final state now contains a color-charged particle. Ultrasoft gluons can then be exchanged between the initial state, the final state and the intermediate top pair, which is in a color octet state and no longer decouples. In \cite{Beneke:2009rj,Beneke:2010da,Falgari:2012hx} it was demonstrated for arbitrary color structures that the `resonant' matrix elements in forward-scattering amplitudes factorize into the convolution of a non-relativistic Green function, therein called the potential function, and an ultrasoft function, therein called the soft function. At leading power this follows from field transformations that decouple the collinear and non-relativistic fields from the ultrasoft fields. The extension to higher orders requires a careful assessment of the subleading interactions and was performed to NNLL in~\cite{Beneke:2009rj,Beneke:2010da,Falgari:2012hx}. Following these derivations we identified no aspect that would obstruct the extension to Higgs plus jet production and therefore conjecture that an analogous factorization formula holds for the corresponding amplitudes. 
\section{Comparison with the exact result\label{sec:numerics}} As a proof of the method, we compare our results at LO and NLO with the results in full top mass dependence for Higgs pair production. While the LO Higgs pair production cross section has been known in full mass dependence since the late 1980s \cite{Eboli:1987dy,Glover:1987nx, Plehn:1996wb}, the computation of the NLO QCD corrections is quite involved, due to the many scales of the problem. The first work on the NLO corrections was based on the heavy top mass limit \cite{Dawson:1998py} reweighted with the matrix elements squared of the full LO results (HEFT). The real corrections in full top mass dependence have been computed in \cite{Maltoni:2014eza, Frederix:2014hta}, while the virtual corrections have been kept in HEFT. The computation of the virtual corrections in full top mass dependence became available only recently in \cite{Borowka:2016ehy, Borowka:2016ypz}. \subsection{Numerical setup\label{sec:numerical_setup}} For the numerical evaluation we choose a centre-of-mass energy of $\sqrt{s}=14\text{ TeV}$. The Higgs boson mass has been set equal to $m_{H}=125\text{ GeV}$ and the top quark mass to $m_t=173\text{ GeV}$. We do not account for bottom quark loops as they contribute less than 1\% at LO. We have adopted the PDF set {\tt NNPDF3.0} \cite{Ball:2014uwa}. The strong coupling constant is set to $\alpha_s(M_Z)=0.118$ at LO and NLO. The renormalization scale has been set to $M_{HH}/2$, where $M_{HH}$ denotes the invariant mass of the Higgs boson pair, as suggested by the NNLL soft gluon resummation performed in \cite{Shao:2013bz,deFlorian:2015moa}. 
\par We construct our Pad\'e approximants at LO (NLO) as described in Section~\ref{sec:method} by solving numerically the 8 (7) equations from the LME \cite{Degrassi:2016vss} and threshold expansion, given in Section \ref{sec:ggH_Pade} and Appendix~\ref{sec:ggHH_Results}, by means of the {\tt FORTRAN} routine {\tt MINPACK}~\cite{minpack}.\footnote{We provide a {\tt FORTRAN} routine of the Pad\'e approximated matrix elements upon request.} For every phase space point we construct a total of 100 Pad\'e approximants $[n/m]$, where $a_{R}$ takes a random value in $[0.1,10]$, $n,m \in [1,6]$ at LO and $n,m \in [1,5]$ at NLO, and take the mean value. From that we obtain an error estimate on every form factor by taking the standard deviation. For the computation of the cross section or the virtual corrections we add the errors stemming from the different form factors in quadrature. Pad\'e approximants with poles in $\text{Re}(z) \in [0,8]$ and $\text{Im}(z)\in [-1,1]$ were excluded, since functions with nearby poles in the complex plane could have an unwanted resonant behaviour. The running time per phase space point for the construction of 100 Pad\'e approximants at NLO is usually below 6 s. \subsection{Comparison at LO} In Table \ref{tab:LOcxn} we give the results for the LO cross section in different approximations. The first row, $[n/m]$ w/o THR, symbolizes the cross section obtained with Pad\'e approximants constructed without input from the threshold expansion, where $n,m \in [1,3]$ and approximants with poles as described above have been excluded. The result we obtain when including the threshold information and using the specifications described in Section~\ref{sec:numerical_setup} is denoted by $[n/m]$. 
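The construction and filtering steps described above can be sketched as follows. This is a minimal Python illustration with placeholder Taylor coefficients; the authors' {\tt FORTRAN} implementation, the $a_R$ rescaling and the actual LME/threshold input are not reproduced here.

```python
import numpy as np

def pade(c, n, m):
    """Build the [n/m] Pade approximant P_n(z)/Q_m(z) from the Taylor
    coefficients c[0..n+m] of the function around z = 0.  Returns the
    polynomial coefficients p (degree n) and q (degree m, with q[0] = 1),
    both in ascending powers of z."""
    c = np.asarray(c, dtype=float)
    # Denominator: sum_{j=1..m} q_j c_{n+k-j} = -c_{n+k} for k = 1..m
    A = np.array([[c[n + k - j] if n + k - j >= 0 else 0.0
                   for j in range(1, m + 1)] for k in range(1, m + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, -c[n + 1:n + m + 1])))
    # Numerator from the Cauchy product of the denominator and the series
    p = np.array([sum(q[j] * c[i - j] for j in range(min(i, m) + 1))
                  for i in range(n + 1)])
    return p, q

def has_excluded_pole(q, re_lim=(0.0, 8.0), im_lim=(-1.0, 1.0)):
    """Pole criterion of the text: discard approximants whose denominator
    has a root with Re(z) in [0, 8] and Im(z) in [-1, 1]."""
    roots = np.roots(q[::-1])  # np.roots expects descending powers
    return any(re_lim[0] <= r.real <= re_lim[1] and
               im_lim[0] <= r.imag <= im_lim[1] for r in roots)
```

For each phase space point one would repeat this for randomly drawn $(n,m)$ and rescaling parameter, keep the approximants passing the pole criterion, and quote their mean and standard deviation, as described above.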
With $[n/n\pm 1, 3]$ we symbolize the results we find when only the Pad\'e approximants [5/2], [4/3], [3/4] and [2/5] are used.\footnote{ Note however that these are mainly [5/2] and [4/3] Pad\'e approximants, as [3/4] and [2/5] are usually excluded by our pole criterion.} Finally, we give the full LO cross section (obtained with {\tt HPAIR} \cite{hpair}) in the fourth row of Table \ref{tab:LOcxn}. As can be inferred from the table, the Pad\'e approximants provide a very good approximation for the full cross section, in particular if only the most diagonal and next-to-diagonal Pad\'e approximants are constructed. The threshold expansion proves to be essential for a good approximation. As expected, the standard deviation computed from the construction of 100 $[n/m]$ Pad\'e approximants with random $a_R$ and different $n,m$ becomes smaller if we construct only the most diagonal and next-to-diagonal Pad\'e approximants. \begin{table} \begin{center} \renewcommand{\arraystretch}{1.2} \begin{tabular}{cc} \toprule & $\sigma$ [fb] \\ \midrule $[n/m]$ w/o THR & $19.9\pm 5.4$ \\ $[n/m]$ & $21.7 \pm 1.1$\\ $[n/n\pm 1,3] $& $ 21.3 \pm 0.4 $ \\ full & 21.3 \\ \bottomrule \end{tabular} \end{center} \caption{Numbers for the total LO cross section and standard deviation from the construction of 100 Pad\'e approximants. \label{tab:LOcxn} } \end{table} \begin{figure}[ht!] \centering \includegraphics[width=14cm]{Figures/MHHdist_LO} \caption{Invariant Higgs mass distribution for the full LO cross section (dark blue), the $[n/ n\pm 1,3]$ Pad\'e approximants (pink line) and the Pad\'e approximants constructed without threshold expansion (light blue). The standard deviations of the Pad\'e lines are shown by the semi-transparent regions with the corresponding color. 
The pink band is barely wider than the width of the curves and hardly visible.\label{fig:LOMHH}} \end{figure} \par In Fig.~\ref{fig:LOMHH} we show the invariant Higgs mass distribution for the full result (dark blue), the $[n/n \pm 1,3]$ Pad\'e approximants (pink) and the Pad\'e approximants without the threshold expansion (light blue). While the $[n/n\pm 1,3]$ Pad\'e approximants fit the shape of the invariant mass distribution in full mass dependence almost perfectly, the approximation where the threshold expansion is not included (hence the approximation is only built from the LME) fits the shape only for small invariant mass. The error on the construction of the approximation including the threshold expansion is rather small, whereas if the approximation is constructed only from the LME, the error becomes much larger, in particular above the threshold. \par We thus conclude that at LO our approximation of the mass effects by Pad\'e approximants works well as long as the conditions obtained from the threshold expansion are included. Using only nearly diagonal Pad\'e approximants leads to a result with a smaller error and values closer to the true result. \subsection{Comparison at NLO} Finally, we compare our results to the computation of the NLO corrections in full top mass dependence of Ref.~\cite{Borowka:2016ehy, Borowka:2016ypz}. In the framework of Ref.~\cite{Heinrich:2017kxx} a grid and an interpolation function with numerical values for the virtual corrections of Ref.~\cite{Borowka:2016ehy, Borowka:2016ypz} have been provided. 
\\ In order to fit the conventions of Ref.~\cite{Heinrich:2017kxx} we define the finite part of the virtual corrections as \begin{equation} \begin{split} \mathcal{V}_{fin}=&\frac{\alpha_s^2(\mu_R)}{16 \pi^2}\frac{\hat{s}^2}{128 v^4}\Bigg[\left|\mathcal{M}_{born}\right|^2\left(C_A \pi^2 - C_A \log^2\left(\frac{\mu_R^2}{\hat{s}}\right)\right)\\ +& 2 \left\{(F_1^{1l})^*\left(F_1^{2l,[n/m]}+F_1^{2\Delta}\right)+(F_2^{1l})^*\left(F_2^{2l, [n/m]}+F_2^{2\Delta}\right)+\text{h.c.}\right\}\Bigg] \end{split} \end{equation} with \begin{equation} \left|\mathcal{M}_{born}\right|^2=\left|F_1^{1l} \right|^2 + \left|F_{2}^{1l}\right|^2 \end{equation} and $F_1$ defined in eq.~\eqref{eq:f1}. For $F_x^{2l,[n/m]}$ we use the matrix elements constructed with the Pad\'e approximant $[n/m]_{\tilde{f}}$. All other matrix elements are used in full top mass dependence. The form factors $F_x^{2\Delta}$ stem from the double triangle contribution to the virtual corrections and can be expressed in terms of one-loop integrals. They are given in Ref.~\cite{Degrassi:2016vss} in full top mass dependence. In the heavy top mass limit they become \begin{equation} F^{2\Delta}_1\to \frac{4}{9}, \hspace*{0.5cm} F^{2\Delta}_2\to -\frac{4}{9}\frac{p_T^2}{2\hat{t}\hat{u}}(\hat{s}- 2 m_H^2). \end{equation} The contribution of the double triangle diagrams to the virtual corrections is only of the order of a few per cent \cite{Grober:2015cwa}. 
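As a quick numerical illustration of the heavy-top limits quoted above, one can evaluate them at a sample phase-space point. The sketch below is only a sanity check of the formulas, not part of the actual computation; it assumes massless initial-state partons, so that $p_T^2=(\hat{t}\hat{u}-m_H^4)/\hat{s}$.

```python
def f2_delta_heavy_top(s_hat, t_hat, u_hat, mH):
    """Heavy-top limit of the double-triangle form factor:
    F_2^{2 Delta} -> -(4/9) * p_T^2 / (2 t u) * (s - 2 mH^2),
    with p_T^2 = (t*u - mH**4) / s for massless initial-state partons
    (an assumption spelled out in the lead-in, not taken from the text)."""
    pt2 = (t_hat * u_hat - mH**4) / s_hat
    return -4.0 / 9.0 * pt2 / (2.0 * t_hat * u_hat) * (s_hat - 2.0 * mH**2)

# F_1^{2 Delta} -> 4/9 is kinematics-independent in this limit.
F1_DELTA_HEAVY_TOP = 4.0 / 9.0
```

For central scattering at $\sqrt{\hat{s}}=400$ GeV and $m_H=125$ GeV one finds a small negative value for $F_2^{2\Delta}$, consistent with the per-cent-level size of the double-triangle contribution noted above.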
\begin{table} \begin{center} \renewcommand{\arraystretch}{1.15} \begin{tabular}{cr@{.}lc c c c c} \toprule \multicolumn{3}{c}{} & \multicolumn{4}{c}{$\mathcal{V}_{fin}\times 10^4$}\\ \cmidrule(l){4-7} $M_{HH} $\,[GeV] & \multicolumn{2}{c}{$p_T$\,[GeV]}& HEFT & $[n/m]$ & $[n/n\pm 0, 2]$ & full \\ \midrule 336.85 & 37 & 75 & $0.912$ & $0.996 \pm 0.004$ & $0.990 \pm 0.001$ & $0.996 \pm 0.000$ \\ 350.04 & 118 & 65 & $1.589$ & $1.933 \pm 0.012$ & $1.937 \pm 0.010$ & $1.939 \pm 0.061$ \\ 411.36 & 163 & 21 & $4.894$ & $4.326 \pm 0.183$ & $4.527 \pm 0.069$ & $4.510 \pm 0.124$ \\ 454.69 & 126 & 69 & $6.240$ & $5.300 \pm 0.192$ & $5.114 \pm 0.051$ & $5.086 \pm 0.060$ \\ 586.96 & 219 & 87 & $7.797$ & $4.935 \pm 0.583$ & $5.361 \pm 0.281$ & $4.943 \pm 0.057$ \\ 663.51 & 94 & 55 & $8.551$ & $5.104 \pm 1.010$ & $4.096 \pm 0.401$ & $4.120 \pm 0.018$ \\ \bottomrule \end{tabular} \end{center} \caption{Numbers for the virtual corrections for some representative phase space points for the HEFT result reweighted with the full Born cross section (as in Ref.~\cite{Dawson:1998py}), the Pad\'e-approximated ones and the full calculation \cite{Heinrich:2017kxx}. \label{tab:NLOVfin} } \end{table} \par In Table~\ref{tab:NLOVfin} we compare values for the full computation of the virtual corrections obtained from the grid of Ref.~\cite{Heinrich:2017kxx}, the HEFT results rescaled with the full Born cross section (as e.g.~implemented in {\tt HPAIR}), and the Pad\'e approximations including all possible approximants without poles in $\text{Re}(z) \in [0,8]$ and $\text{Im}(z)\in[-1,1]$ (called $[n/m]$) and the ones where we only construct diagonal [3/3] and next-to diagonal [4/2] and [2/4] approximants (called $[n/n\pm 0, 2]$). The errors given in the table are, in case of the Pad\'e-approximated results, due to the construction of the different approximants and due to the rescaling with $a_R$. For the full results the error stems from internal binning in the grid. 
As can be inferred from the table, the Pad\'e construction approximates the full result quite well. It provides a much better approximation than the HEFT results with a generally reliable error estimate. While up to $M_{HH}=450\text{ GeV}$ the Pad\'e method provides an excellent approximation on the level of $\lesssim 2\%$, for larger invariant masses and $p_T$ the results worsen gradually. As already anticipated from the LO results, constructing only diagonal and next-to-diagonal Pad\'e approximants improves both the error estimate and the agreement of the central values with the full result. Indeed, we find that constructing only diagonal Pad\'e approximants gives results even closer to the full result. Since this no longer allows for a reliable error estimate (the error would then stem solely from the variation of $a_R$), we do not discuss this here any further. \par \begin{figure}[ht!] \centering \includegraphics[width=14cm]{Figures/comp_NLO_wthres} \caption{Finite part of the virtual corrections, $\mathcal{V}_{fin}$, as a function of $M_{HH}$ for $p_T=100\text{ GeV}$. The light blue points are the reweighted HEFT results, the pink points the virtual corrections in full top mass dependence from the interpolation function provided with Ref.~\cite{Heinrich:2017kxx}, the dark blue points are from the diagonal and off-diagonal Pad\'e approximants with their standard deviation and the turquoise points with standard deviation are the Pad\'e approximants constructed without the threshold expansion. \label{fig:NLOcomp}} \end{figure} In Fig.~\ref{fig:NLOcomp} we show for $p_T=100\text{ GeV}$ the virtual corrections ${\cal V}_{fin}$ for varying $M_{HH}$ for the Pad\'e approximations $[n/n\pm 0,2]$, the Pad\'e approximants constructed only from the LME, the full result and the reweighted HEFT results. Again, we can see that contrary to the HEFT results the Pad\'e approximation can reproduce the correct scaling with the invariant mass of the full result. 
The quality of the approximation is improved significantly with the inclusion of the threshold expansion. The error of the Pad\'e approximation increases with the invariant mass. Note that the full result has, apart from the aforementioned error from the internal binning, also an error due to the interpolation procedure. We do not quantify this error, but in comparison to the HEFT grid provided with Ref.~\cite{Heinrich:2017kxx} we conclude that while in the range up to $M_{HH}\lesssim 570 \text{ GeV}$ this error is negligible, it will be a few \% for larger $M_{HH}$. The comparison with the numerical results of~\cite{Heinrich:2017kxx} demonstrates that our prescription for the uncertainty related to the construction of Pad\'e approximants also provides a reasonable error estimate at NLO. \par In conclusion, we see that for the NLO corrections the Pad\'e approximation reproduces the correct scaling behaviour for small and moderate invariant mass and $p_T$. Since the cross section peaks around $M_{HH}\approx 400\text{ GeV}$ and $p_T\approx 150 \text{ GeV}$ this will lead to a reliable approximation and reliable error estimate also for the full cross section. It can be expected that both the error and the difference with respect to the full result improve once more input is used (i.e. higher orders in the threshold expansion, higher orders in the LME, possibly input from a small mass expansion). \section{Conclusions and outlook\label{sec:conclusion}} We have reconstructed the top-quark mass dependence of the one- and two-loop virtual amplitudes for Higgs pair production in gluon fusion with Pad\'e approximants based on the LME of the amplitude~\cite{Degrassi:2016vss} and new analytic results near the top threshold $\hat{s}=4m_t^2$. We observe perfect agreement of the one-loop results with the exact expressions once the additional conditions from the threshold terms are imposed. 
Significant deviations are observed when only the LME is used to construct Pad\'e approximants, but we still find agreement within the uncertainty estimate of our reconstruction, which is based on variation of the rescaling parameter $a_R$ and the use of different $[n/m]$ approximants. At the two-loop level the full result can be reproduced in the entire phenomenologically relevant range within typical uncertainties ranging from below $\pm 3\%$ in the region $M_{HH}\leq450$ GeV up to about $\pm 20\%$ for $M_{HH}=700$ GeV. Thus, our method allows for a determination of the total cross section including top-quark mass effects at NLO where the uncertainty due to the reconstruction is negligible compared to the scale uncertainty which is of the size of $\pm13$\%~\cite{Borowka:2016ehy, Borowka:2016ypz}. This represents considerable progress compared to the rescaled HEFT and LME approximations where a reliable uncertainty estimate is not possible. Our method can also be systematically improved by including higher orders in the LME or threshold expansions. We expect even better behaviour if one also considers the leading term in the small-mass expansion $z\to\infty$ which corresponds to the bottom-quark contribution expanded for small $m_b$. An approach for computations in this limit has recently been introduced \cite{Mueller:2015lrx,Melnikov:2016qoc,Melnikov:2017pgf}. Furthermore our results strongly suggest that the combination of the Pad\'e approximants of the NLO virtual corrections with the exact evaluation of the real corrections~\cite{Maltoni:2014eza, Frederix:2014hta} can reproduce differential distributions to high accuracy. There is a large number of possible applications for our method. To further increase the precision for Higgs pair production one needs to consider NNLO QCD corrections. The rescaled HEFT approximation for the NNLO corrections increases the cross section by 18\%~\cite{Borowka:2016ypz} which exceeds the estimate from scale variation at NLO. 
An NNLO computation which retains the full top-quark mass effects is clearly out of reach of the current technology. On the other hand, the LME has already been computed up to $1/m_t^4$ in~\cite{Grigo:2015dia} and we have determined the first two non-analytic terms in the threshold expansion. This presently available input only allows for the construction of Pad\'e approximants with $n+m=3$ where we do not expect stable behaviour, but a calculation of two or three more expansion parameters would allow the evaluation of NNLO corrections in the soft-virtual approximation of~\cite{Grigo:2015dia,deFlorian:2012za}. Additionally, one can study the NLO electroweak corrections involving top-quark loops. Of particular interest are the contributions involving additional Higgs bosons which alter the dependence of the cross section on the values of the Higgs self couplings. It is straightforward to apply our method to $gg \to HZ$ and the top-quark mediated $gg \to ZZ$ amplitude, as well as at higher orders in perturbation theory. In all these cases, results in the LME have been obtained at two loops~\cite{Hasselhuhn:2016rqt,Campbell:2016ivq,Caola:2016trd} and for $gg\to H^{(*)}$ even at three loops~\cite{Harlander:2009bw,Pak:2009bx,Harlander:2009mq,Pak:2009dg,Harlander:2009my}. The determination of the threshold terms only requires the computation of the respective one-loop matching coefficients in~\eqref{eq:resonant_factorization}. Another phenomenologically very interesting case is Higgs plus jet production. The construction of Pad\'e approximants is also possible here but the computation of the threshold expansion is more involved as we outlined in Section~\ref{sec:computation}. Beyond the LME results, the leading term in the small-mass expansion is also known for the relevant two-loop amplitudes~\cite{Melnikov:2016qoc,Melnikov:2017pgf}. Hence, the effects of this additional input on the reconstruction of top-quark mass effects can be studied in this case. 
\subsubsection*{Acknowledgements} We thank Johannes Schlenk for useful discussions and Matthias Kerner for clarifications regarding the grid of Ref.~\cite{Heinrich:2017kxx}. RG acknowledges useful discussions with Fabrizio Caola, Keith Ellis and Sebastian Kirchner in an early stage of this project. RG and AM are supported by a European Union COFUND/Durham Junior Research Fellowship under the EU grant number 609412.
\section{Introduction} \label{sec:intro} \par\noindent The nature of non--equilibrium phenomena is diverse and rich, and a theory encompassing them is still in the making \cite{Kubo91,EvMo,Gall14,Bertini15}. Such a task requires, in particular, understanding the coupling with external agents or reservoirs that may locally allow the condition of detailed balance or its violation \cite{Lebowitz,Maes,BePuRoVu,Conti2013,CdMP1}. In this work we provide numerical results and a theory explaining the onset of stationary currents in deterministic conservative reversible systems made of $N$ point particles. Such currents are generated by non--equilibrium phase transitions, which result in a deterministic model of a battery that preserves phase space volumes and is invariant under time reversal \cite{BDL10}. Flows and oscillations produced by this mechanism resemble those observed in biological systems or chemical reactions, cf.\ Refs. \cite{Kur84,Wil12} for classical and quantum oscillators, and Ref. \cite{Zha17} for experiments on time crystals. In particular, our single component deterministic model shows a realization of \textit{uphill currents}, {\em i.e.}\ currents opposing the driving fields, thus providing an instance of the so-called {\it negative absolute mobility} \cite{Eich03,Ros05,Muk18}. A theoretical description of uphill diffusions was given in \cite{CdMP3,CGGV18} for stochastic spin models coupled to external reservoirs; moreover, in \cite[Sec.4.5]{ACCG19} uphill currents were also obtained from the scaling limit of inhomogeneous random walks on a lattice. A non--equilibrium phase transition occurring in a deterministic particle system was recently observed in a model with two cavities connected by a single channel, allowing no stationary currents \cite{CCMRR2020}. 
The transition amounts to switching from a homogeneous state, in which approximately the same number of particles lies in each urn, to an inhomogeneous state in which almost all particles gather in a single urn. The model studied in \cite{CCMRR2020} was also amenable to a stochastic interpretation, in terms of time-dependent Markov chains. In this work, we investigate the nature of the steady state in a two-urns model equipped also with a second channel, which permits closing the system into a circuit. The main question we address here is twofold. First, we shed light on the existence of non--equilibrium phase transitions for the circuit model, in which the second channel is designed to contrast the formation of particle gradients between the urns. Furthermore, we also discuss the emergence of stationary currents, flowing through the circuit and sustained by the phase transitions. We shall thus unveil a non--trivial phase diagram for our model, revealing that phase transitions indeed occur in certain regions of the parameter space and are always accompanied by stationary currents. The work is organized as follows. In Sec. \ref{sec:sec1} we introduce our model and also present the numerical results of our deterministic dynamics. In Sec. \ref{sec:sec2} we tackle the theoretical investigation of the model by means of probabilistic arguments, and also compare the theoretical prediction with the numerical results. We also highlight strengths and limitations of the probabilistic model. More details on the probabilistic derivation are deferred to Appendix \ref{sec:app}. Finally, conclusions are drawn in Sec. \ref{sec:sec3}. \section{The model} \label{sec:sec1} \par\noindent Our model consists of $N$ point particles that move in straight lines with speed $v=1$, and collide elastically with hard walls. Hence, from collision to collision, the particles follow these equations of motion: ${\bf \dot q}={\bf p}$ and ${\bf \dot p}={\bf 0}$. 
Therefore, their speed is preserved while their velocity is reflected with respect to the normal to the boundary of the table, at the collision point. The billiard table is made of two circular \emph{urns} of radius $r$, connected by two rectangular channels of widths $w,w'$ and lengths $\ell,\ell'$, called, respectively, first and second channel, cf.\ Fig.~\ref{fig:model}. The two urns will be referred to, in the sequel, as urn 1 and urn 2, respectively. The first channel is divided into two parts, each of length $\ell/2$, called \emph{gates}. Periodic boundary conditions are imposed by letting the second channel close the table into a circuit. \begin{figure}[h] \includegraphics[width = 0.45\textwidth]{figN-ccrr_circ01.pdf} \caption{The billiard table: the $N$ point particles are represented as small disks and velocities are represented by arrows. The two grey shaded regions, in the first channel, are the gates in which the bounce--back mechanism separately acts. The horizontal component of the velocity of the particles contained in one gate and moving toward the other is reversed whenever their number is larger than the prescribed threshold value $T$.} \label{fig:model} \end{figure} This constitutes an ergodic billiard \cite{Bunimovich,Bunim05}. We now add the \emph{bounce--back} mechanism in the first channel: when the number of particles in one gate that are moving toward the other gate exceeds a threshold $T$, the horizontal component of the velocity of those particles is reversed. The particles coming from the other gate are unaffected by this mechanism and continue their motion. Although this dynamics is deterministic, time reversible and preserves phase space volumes, it can produce non--equilibrium phase transitions, because the bounce--back mechanism implements a sort of negative feedback that promotes the onset of a non--equilibrium steady state. For $T \ge N$, the usual ergodic billiard dynamics is realized \cite{Bunimovich,SzaszB}. 
Thus, for large $N$, the vast majority of time is spent in a state in which approximately the same number of particles resides in each urn. That state, like any other, is abandoned to reach still other states, with frequency given by the ratio of the respective phase space volumes \cite{Lebowitz1999}, therefore no state is strictly stable. Nevertheless, the lifetime of the homogeneous phase rapidly becomes so long, with growing $N$, that such a lack of stability becomes physically irrelevant even at moderately large $N$, consistently with Boltzmann's explanation of his $H$-theorem \cite{Kac59,Cercignani2006}. For $T < N$, ergodicity also guarantees that, sooner or later, the threshold will be exceeded in a gate: as long as that event does not occur, the dynamics is like the ergodic one, which eventually leads to a stationary homogeneous state. When the threshold is exceeded, the standard dynamics is interrupted by the bounce--back. As evidenced in \cite{CCMRR2020}, for large $N$ and sufficiently small $T/N$, a larger concentration of particles in one urn leads to an increased frequency of activation of the bounce--back mechanism in the adjacent gate, while particles can flow in from the other urn, amplifying the effect. As a consequence, one urn gets depleted of particles, while the other urn increases its population, until a steady state is reached in which the flows of particles per unit time in the two directions equalize. In this scenario, a microscopic fluctuation suffices to trigger the transition, even when starting from the homogeneous phase. In this work, the phenomenology is much richer: the second channel allows particles to flow freely, contrasting the trend toward inhomogeneous states. Both homogeneous and inhomogeneous states can thus be realized, and the latter support stationary self-sustained currents, as in a battery. 
Each half of the table is now made of one urn, of the adjacent gate and of the adjacent semi--channel of length $\ell'/2$ of the second channel, cf.\ Fig.~\ref{fig:model}. Letting $N_1$ and $N_2$ be the number of particles in the two halves, with $N=N_1 + N_2$, we define the \emph{mass spread} by \begin{equation} \chi = \left| N_1 - N_2 \right|/N \label{mass} \end{equation} For simplicity, we define the net \emph{current} by taking the absolute value of the difference of the number of particles coming from opposite directions and crossing the vertical line separating the two gates, and by then dividing this quantity by the elapsed time $t$ \cite{Spohn91}. Namely, let $n_{12}(t)$ and $n_{21}(t)$ denote the number of particles that cross, during the time interval $[0,t]$, the vertical line separating the two gates in the direction from urn 1 to urn 2 and in the opposite direction, respectively. The net current flowing in a given channel is thus given by the ratio \begin{equation} \frac{\left| n_{12}(t)-n_{21}(t)\right|}{t} \label{netJ} \end{equation} In the large $t$ limit, such a discretely defined current settles to a stationary value related to the asymptotic billiard current. Due to the symmetry of the model, it is irrelevant whether the net current is positive or negative. The model has been simulated as follows. Our numerical algorithm updates, at each time step, the positions of all particles by moving them along straight lines, in the direction of their velocities, over a distance $v\ \delta t$. Here $\delta t$ denotes the time interval between two consecutive collisions of the particles with the physical boundaries of the table and also with the fictitious vertical lines marking either the boundary of each gate with the adjacent urn or the junction between the two gates in the first channel. 
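The rules entering the dynamics and the observables defined in Eqs.~\eqref{mass} and \eqref{netJ} can be condensed into the following schematic sketch. It is not the event-driven billiard code used for the simulations: the gate geometry is reduced to an interval in the horizontal coordinate, and only the elastic reflection law, the bounce--back rule of the first channel and the two observables are illustrated.

```python
import numpy as np

def reflect(v, n_hat):
    """Elastic reflection off a hard wall with inward unit normal n_hat:
    v' = v - 2 (v . n_hat) n_hat, so the speed is preserved."""
    return v - 2.0 * np.dot(v, n_hat) * n_hat

def bounce_back(x, vx, gate, threshold, toward_right=True):
    """Bounce-back rule for one gate of the first channel: if the number
    of particles inside the gate (here simplified to an interval in the
    horizontal coordinate) moving toward the other gate exceeds the
    threshold T, reverse the horizontal velocity of exactly those
    particles; all others are unaffected."""
    inside = (x >= gate[0]) & (x <= gate[1])
    moving = (vx > 0) if toward_right else (vx < 0)
    selected = inside & moving
    if np.count_nonzero(selected) > threshold:
        vx = np.where(selected, -vx, vx)
    return vx

def mass_spread(n1, n2):
    """Mass spread chi = |N_1 - N_2| / N of the first equation above."""
    return abs(n1 - n2) / (n1 + n2)

def net_current(n12, n21, t):
    """Net current |n_12(t) - n_21(t)| / t of the second equation above."""
    return abs(n12 - n21) / t
```

In the full dynamics the same rule is applied separately to each gate, with the direction toward the other gate chosen accordingly.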
Elastic reflections at the boundaries of the billiard table are implemented\footnote{The direction of the inward normal at the junction points between the urns and the channels depends on the origin of the colliding particle. Namely, a junction point is considered as belonging to the urn or to the channel if the incoming particle is originally located, respectively, in the urn or in the channel.}. The initial datum is chosen by fixing arbitrarily the number of particles inside each urn and selecting their positions and velocities at random with uniform distribution. Nevertheless, the values attained in the steady state by the observables in Eqs. \eqref{mass} and \eqref{netJ} turned out to be insensitive to the initial datum. In particular, in all our simulations we verified that, for any value of the parameters of the model, the same stationary values of the mass spread and the net current are numerically reached by starting both from $\chi(0)=0$ and from $\chi(0)=1$. In Fig.~\ref{fig:numris}, we have $N=10^3$, $w=0.3$, $r=1$, $\ell=1$, $\ell'=1$, and varying values of $T/N$ and $w'$. Initially, $N/2$ particles lie in each urn, while the channels are empty. Positions and velocities are taken at random with uniform distribution. The stationary values of $\chi$ and of the net current are computed averaging post--collision data, namely, right after the collision of each particle with the walls of the table. Simulations last $10^8$ collisions, corresponding, on average, to $10^5$ collisions per particle. \begin{figure} \begin{picture}(100,145)(90,0) \put(0,0){ \includegraphics[width = 0.55\textwidth, height=0.3\textwidth]{figN-ccrr_circ02.pdf} } \end{picture} \caption{Stationary values of the mass spread $\chi$ (left panel) and net current (right panel) for $N=10^3$, $w=0.3$, $r=\ell=\ell'=1$. The initial condition yields $\chi(0)=0$. In the left panel, the red pixels denote the homogeneous phase, whereas the other pixels refer to the inhomogeneous phase. 
The white lines mark the theoretical boundary between homogeneous and inhomogeneous steady states (see Eq. \eqref{transition}). } \label{fig:numris} \end{figure} For $T/N$ in an interval that depends on $w'$, and for small $w$ and $w'$, an inhomogeneous phase is observed together with a stationary net current flowing in the circuit. In particular, Fig.~\ref{fig:numris} shows that for $w'$ below a certain critical value, three different regimes are produced by variations of $T/N$, corresponding to two non--equilibrium phase transitions. The agreement between the theoretical solid white lines and the numerical results is imperfect because our theoretical calculations rely on probabilistic arguments, which are justified by the ergodic hypothesis, in the large $N$ and small $w,w'$ limits. Hence, the theory better describes the simulations as $N$ grows. In Ref.~\cite{CCMRR2020}, where only the first channel is present, the growth of $N$ produces only one interface between homogeneous and inhomogeneous phases, which occurs at a specific $T/N$ value, for fixed geometrical parameters. The second channel results, instead, in a more complex phenomenology, because the free motion of particles passing through it tends to equilibrate $N_1$ and $N_2$. Therefore, two contrasting mechanisms are at work, and their interplay, between $w'$ and $T/N$ in particular, determines the steady state. \begin{figure*} \centering \begin{subfigure}[] {\includegraphics[width=0.38\textwidth]{figN-ccrr_circ04a.pdf}} \end{subfigure} \hskip -1.5 cm \begin{subfigure}[] {\includegraphics[width=0.38\textwidth]{figN-ccrr_circ04b.pdf}} \end{subfigure} \hskip -1.5 cm \begin{subfigure}[] {\includegraphics[width=0.38\textwidth]{figN-ccrr_circ04c.pdf}} \end{subfigure} \caption{Currents as functions of time, for $r =\ell = \ell' = 1$, $0.01 = w'\ll w = 0.3$, $N = 10^3$, and initial datum such that $\chi(0)=0$.
For $T = 2$ (panel (a)) and $T = 18$ (panel (c)), a homogeneous steady state with zero net current is reached. For $T = 7$ (panel (b)), a stationary net current with $N_1\ne N_2$ arises. Disks (squares) represent numerically computed flows out of urn 1 (urn 2) in channel 1; triangles (diamonds) represent flows out of urn 1 (urn 2) in channel 2. Black dashed lines are the theoretical values (obtained from the probabilistic model) of the stationary currents; solid red (blue) lines denote the net currents in the first (second) channel. The parameter $\eta$ is plotted in the insets as a function of $N_2/N$, while the red disks indicate the stable state reached by the deterministic dynamics.}\label{fig:comp05} \end{figure*} \begin{figure}[h] \includegraphics[width = 0.4\textwidth]{figN-ccrr_circ04d.pdf} \caption{Mass spread with the same values of the parameters and of the threshold considered in Fig.~\ref{fig:comp05}: homogeneous and inhomogeneous states are both stable (blue curve, $T=2$); only the inhomogeneous state is stable (red curve, $T=7$); only the homogeneous state is stable (black curve, $T=18$). The dotted and dashed lines indicate the theoretical values of the mass spread for the inhomogeneous states at $T=2$ and $T=7$, respectively. The initial datum is such that $\chi(0)=1$.} \label{fig:comp06} \end{figure} In fact, higher $T/N$ values make the bounce--back mechanism less likely, hence particles flow more easily through the first channel, while lower values make particles more likely to bounce back. Flow through the second channel decreases or increases when $w'$ does. Figure~\ref{fig:comp05} shows how two different phase transitions can be encountered. For $T=2$ (panel (a)), we are in the left region of Fig.~\ref{fig:numris}. Here, a homogeneous phase arises, because in the first channel particles frequently bounce back, making left and right flows vanish, but $N_1$ and $N_2$ equalize thanks to the second channel.
For $T=18$ (panel (c)), in the region to the right of Fig.~\ref{fig:numris}, the first channel allows particles to flow almost freely, with zero net current, while just a few particles cross the second channel, because it is much smaller than the first: $w' = w/30$. For $T=7$ (panel (b)), we fall in the centre of Fig.~\ref{fig:numris}, where an inhomogeneous state, characterized by $N_1\neq N_2$ and by a stationary net current, persists for longer than our simulations. The reason is that the bounce--back phenomenon is only partly mitigated by the flow through the second channel. As the net current in the first channel flows \textit{uphill}, {\em i.e.}\ against the population gradient, and \textit{downhill} in the second channel, the first channel acts like an \textit{emf}. The numerical results illustrated in Fig.~\ref{fig:comp05} agree with excellent numerical accuracy with the theoretical prediction (black dashed lines) discussed in Sec. \ref{sec:sec2}. Stationary uphill currents in the presence of a non--equilibrium phase transition have been previously observed for stochastic dynamics in \cite{CdMP2,CdMP3,CGGV18}; an analogous behavior has also been identified in \cite{CC17} for locally perturbed zero--range processes. In these cases, uphill currents stem either from a local inhomogeneity in the jump rates, or from the non--equilibrium coupling of the bulk dynamics with external reservoirs, which breaks detailed balance. Our deterministic conservative dynamics accounts, instead, for the work done by the bounce--back mechanism. This phenomenology can be understood by introducing a variation of the probabilistic model of Ref.~\cite{CCMRR2020}, which agrees with our deterministic dynamics in the large $N$ and small $w,w'$ limits. Details can be found in the Appendix.
\section{Theoretical derivation} \label{sec:sec2} \par\noindent Using the uniformity of the distribution of the particles and of their velocities, one first obtains that the number of particles in urn 1, say, entering channel 1 (or channel 2) per unit time is given by $N_1 w v/(\pi A)$ (or by $N_1 w' v/(\pi A)$), where \begin{equation} \label{eq000} \begin{array}{rcl} A & \!\!=& \!\! \pi r^2 -r^2\arcsin\frac{w}{2r}+\frac{1}{4}w\sqrt{(2r)^2-w^2} \\ & \!\!& \!\! \phantom{\pi r^2} -r^2\arcsin\frac{w'}{2r}+\frac{1}{4}w'\sqrt{(2r)^2-w'^2} \end{array} \end{equation} The number of particles leaving urn 1 and successfully crossing the first channel, per unit time, is then reduced by the bounce--back mechanism to \begin{equation} N_1 {w v \over \pi A} {\Gamma[T,N_1 w \ell/(4 A)] \over (T-1)!} \end{equation} $ \label{eq020} \Gamma[y,x] = \int_x^\infty s^{y-1}e^{-s}\,\textup{d}s \;, \ y>0 \,, $ being the Euler incomplete $\Gamma$ function. Thus, in the probabilistic model, the number of particles leaving urn 1 per unit time, and reaching urn 2, minus those going from urn 2 to urn 1 is given by: \begin{equation} \label{eq007} \eta = \frac{N_1v}{\pi A} \Big[ w \frac{\Gamma[T,\lambda_1]}{(T-1)!} + w' \Big] - \frac{N_2v}{\pi A} \Big[ w \frac{\Gamma[T,\lambda_2]}{(T-1)!} + w' \Big] \end{equation} where $\lambda_i=N_i w \ell/(4 A)$, for $i=1,2$. Correspondingly, a steady state implies $\eta = 0$, an equation that can be solved for $N_2$. From Eq. \eqref{netJ} it is immediately seen that the condition of stationarity amounts to the equality between the net current flowing uphill in the first channel and the net current flowing downhill in the second channel, as in a circuit. Inspection shows that $N_2=N/2$ is a solution of $\eta=0$ and also that, for certain parameter values, $\eta$ changes sign in intervals not containing $N/2$.
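As a cross-check of Eq.~\eqref{eq007}, $\eta$ can be evaluated numerically; for an integer threshold $T$ the ratio $\Gamma[T,x]/(T-1)!$ reduces to a Poisson tail sum, so no special-function library is needed. The following Python sketch (our own illustration, with unit speed $v=1$) verifies that $\eta(N/2)=0$ and that, for parameters in the inhomogeneous region, $\eta$ changes sign inside $(N/2,N)$:

```python
import math

def area(r, w, wp):
    """Urn area A of Eq. (eq000)."""
    def gate(width):
        return -r*r*math.asin(width/(2*r)) + 0.25*width*math.sqrt(4*r*r - width*width)
    return math.pi*r*r + gate(w) + gate(wp)

def gamma_ratio(T, x):
    """Gamma[T, x]/(T-1)! for an integer threshold T, i.e. the regularized
    upper incomplete gamma function, written as a Poisson tail sum."""
    return math.exp(-x) * sum(x**k / math.factorial(k) for k in range(T))

def eta(n2, N, T, w, wp, r=1.0, ell=1.0, v=1.0):
    """Net particle flux from urn 1 to urn 2, Eq. (eq007)."""
    A = area(r, w, wp)
    def flux(n):
        lam = n * w * ell / (4*A)           # lambda_i = N_i * w * ell / (4 A)
        return n * v / (math.pi*A) * (w*gamma_ratio(T, lam) + wp)
    return flux(N - n2) - flux(n2)
```

For $N=10^3$, $T=7$, $w=0.3$, $w'=0.01$ one finds $\eta(N/2)=0$ by symmetry, $\eta>0$ just above $N/2$, and $\eta<0$ at $N_2=N$, so a non-trivial zero exists between them.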
Given its continuity, in those cases in which $\eta$ has more than one zero in $[N/2,N]$, one may ask which of the steady states of the probabilistic model is stable. Given the smoothness of $\eta$, the linear stability is given by the sign of $\left( {\partial\eta}/{\partial N_2} \right)$: if positive the steady state is unstable, if negative it is stable. The points at which this derivative vanishes delimit the domains of stability of different steady states; hence, as a definition of the theoretical transition line, we shall consider the locus of points such that \begin{equation} \frac{\partial\eta}{\partial N_2}\biggr\rvert_{N_2=N/2}=0 \quad .\label{transition} \end{equation} In other words, we collect the points where the homogeneous solution of the equation $\eta=0$ becomes unstable. The stability criterion based on the derivative of $\eta$ is illustrated in Fig.~\ref{fig:comp05}. The inset in the panel (a) of Fig.~\ref{fig:comp05} shows two stable steady states for the probabilistic model, but only the homogeneous one is actually observed in the simulations. This is in accord with the initial condition being homogeneous. Possible departures from this state, with $N=10^3$, are expected to be extremely rare. The panel (c) illustrates a case in which the homogeneous state is stable, is the only steady state for the probabilistic model, and is reached with the deterministic dynamics. In the panel (b), the homogeneous state is unstable for the probabilistic model, and is not observed in the simulations, despite initially $N_1=N_2$. While this shows that the probabilistic model describes quite well the currents in the deterministic dynamics, Fig.~\ref{fig:comp06} shows that some difference remains at finite $N$. In particular it reports the behavior of the mass spread, for the same values of the parameters and of the threshold considered in Fig.~\ref{fig:comp05}, with an initial datum yielding $\chi(0)=1$. 
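The stability criterion can also be checked numerically by a finite--difference derivative of $\eta$, as defined in Eq.~\eqref{eq007}, at $N_2=N/2$ (a self-contained Python sketch under our own conventions, with $v=r=\ell=1$ and an integer threshold $T$): a positive derivative flags the homogeneous state as unstable, cf.~Eq.~\eqref{transition}.

```python
import math

def eta(n2, N, T, w, wp, r=1.0, ell=1.0, v=1.0):
    """Eq. (eq007); Gamma[T,x]/(T-1)! is evaluated as a Poisson tail sum."""
    A = (math.pi*r*r
         - r*r*math.asin(w/(2*r))  + 0.25*w *math.sqrt(4*r*r - w*w)
         - r*r*math.asin(wp/(2*r)) + 0.25*wp*math.sqrt(4*r*r - wp*wp))
    Q = lambda x: math.exp(-x) * sum(x**k / math.factorial(k) for k in range(T))
    flux = lambda n: n * v/(math.pi*A) * (w*Q(n*w*ell/(4*A)) + wp)
    return flux(N - n2) - flux(n2)

def homogeneous_unstable(N, T, w, wp, h=1e-3):
    """Sign of d(eta)/dN2 at N2 = N/2: positive means the homogeneous
    solution of eta = 0 is linearly unstable."""
    d = (eta(N/2 + h, N, T, w, wp) - eta(N/2 - h, N, T, w, wp)) / (2*h)
    return d > 0
```

With $N=10^3$, $w=0.3$, $w'=0.01$ this reproduces the three regimes of Fig.~\ref{fig:comp05}: the homogeneous state is stable at $T=2$ and $T=18$ and unstable at $T=7$.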
The red line shows the convergence to an inhomogeneous steady state, also evidenced in the panel (b) of Fig.~\ref{fig:comp05}. The black line shows the establishment of a stable homogeneous state (see panel (c) of Fig.~\ref{fig:comp05}), in which the mass spread rapidly drops to values around 0. Finally, the blue line illustrates the case, highlighted in the panel (a) of Fig.~\ref{fig:comp05}, in which a homogeneous state and an inhomogeneous state are both stable for the probabilistic model. Starting at $\chi(0)=1$, the deterministic dynamics converges toward and lingers over the inhomogeneous steady state, but then it moves away, eventually converging to the homogeneous state. Therefore, for our finite $N$, the lifetime of the inhomogeneous state is short, while that of the homogeneous state is very long. That is why the latter looks globally attracting for the deterministic dynamics. In the left panel of Fig.~\ref{fig:comp01}, a horizontal slice of Fig.~\ref{fig:numris}, net currents are plotted as functions of $T/N$ for fixed $w'$. Theoretical predictions and data from simulations are compared, which reveals that for small values of $w'$ the match is good already at $N=10^3$. The right panel of Fig.~\ref{fig:comp01} highlights the {\em battery} phenomenon, with the first channel generating the \textit{emf}. The resulting net current, at fixed $w'$, is linear with the mass spread $\chi$, and its slope increases with $w'$, closely following the theoretical prediction. The linearity is better realized for smaller $w'$, consistently with the conditions for the applicability of the probabilistic model to the deterministic dynamics.
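The linearity of the net current in $\chi$ can in fact be read off directly from the probabilistic model: the second channel carries no bounce--back factor, so its contribution to Eq.~\eqref{eq007} is exactly linear in the population difference. Denoting by $J_2$ the downhill current carried by the second channel (a notation introduced here only for this remark), and taking $N_1>N_2$,
\begin{equation*}
J_2 = \frac{w' \, v}{\pi A} \left( N_1 - N_2 \right) = \frac{w' \, v \, N}{\pi A} \, \chi \, ,
\end{equation*}
and at stationarity this downhill current equals the uphill current in the first channel; the slope of the current--$\chi$ relation is thus proportional to $w'$, consistently with the right panel of Fig.~\ref{fig:comp01}.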
\begin{figure} \includegraphics[width = 0.27\textwidth]{figN-ccrr_circ05b.pdf} \hskip -1.2 cm \includegraphics[width = 0.27\textwidth]{figN-ccrr_circ06a.pdf} \caption{Net currents as a function of $T/N$ (left panel) and of $\chi$ (right panel) for $N=10^3$, $w=0.3$, $r=\ell=\ell'=1$, $w'=0.0102$ (circles), $0.0204$ (squares), $0.0306$ (triangles), $0.0612$ (diamonds). The dashed lines correspond to the stable solutions of $\eta=0$, see Eq. \eqref{eq007}. } \label{fig:comp01} \end{figure} \section{Conclusions} \label{sec:sec3} \par\noindent We considered a deterministic conservative reversible particle system undergoing a non--equilibrium phase transition, induced by a bounce--back mechanism in one of the channels. Numerical simulations of the deterministic dynamics reveal the existence of a rich phase diagram in the plane $w'$--$T/N$, which includes states with stationary density gradients and stationary currents. Remarkably, the relation between the mass spread and the net current turns out to be linear for small values of $w'$, in agreement with the basic tenets of ohmic transport \cite[Chapter 4]{dGM}. The numerical simulations of the deterministic dynamics are also supported by a theoretical analysis based on probabilistic arguments. The match between numerical and analytical results, which strictly requires the $N \rightarrow \infty$ and $w'\rightarrow 0$ limits, is strikingly good even for moderately large $N$ and moderately small values of $w'$. Interestingly, the regime in which the probabilistic model may be meaningfully applied to the deterministic dynamics is relatively easy to achieve in practice. Some relevant open questions still lie ahead; one, in particular, concerns the existence of phase transitions and stationary currents when considering different geometries of the channels and/or of the cavities, or by adding long--range particle interactions.
Further challenging mathematical questions concern the investigation of the thermodynamic limit of our model, the relaxation of the particle system toward a nonequilibrium steady state, which could even exhibit anomalous behavior \cite{Ryabov}, and applications to the modelling of physical and chemical kinetics. \vskip 15pt \noindent{\bf Acknowledgements.} ENMC and MC thank Karlstad University for its kind hospitality. OR is grateful to the Sapienza Universit\`a di Roma and the University of L'Aquila for their kind hospitality and thankfully acknowledges partial financial support of the GS Magnussons fond. LR has been partially supported by Ministero dell'Istruzione, dell'Universit\`a e della Ricerca (MIUR) Grant No. E11G18000350001 ``Dipartimenti di Eccellenza 2018--2022''. The authors are grateful to the Laboratorio di Calcolo of SBAI, Sapienza Universit\`a di Roma.
\section{Introduction} \label{intro} The seminal work of \cite{P1958} has set the foundations for the study of the solar wind and provided an analytical solution to the problem of a spherically symmetric, isothermal, pressure--driven wind. The wind starts as a subsonic flow from the surface of the Sun, gets accelerated due to the thermal pressure gradient, exceeds the sound speed and becomes a supersonic flow. While our current understanding of the solar wind is enriched by the observations accumulated over half a century \citep{Kohl:1998, Marsch:2006}, Parker's picture of a pressure--driven wind encapsulates a substantial part of the underlying physics. This picture has been extended by considering a polytropic gas equation of state (EOS), rotation and magnetic fields, treated with analytical solutions \citep{P1960, P1964a, P1964b, P1964c, P1965, P1966, Mel2004} and numerical solutions, e.g.~\cite{Weber1967, Suess:1975, Keppens1999, K2011, Wat2012}. While the main motivation and application of these studies has been the solar wind, stellar winds have been explored in several types of stars with radiation--driven winds \citep{Castor:1975, Mattsson:2010}. For instance, in the CAK formalism \citep{Castor:1975} the Parker analysis is enriched with radiation pressure. In more recent attempts, non--local--thermal equilibrium effects are included \citep{Sundqvist:2019}. Such winds impact the stellar environment including extra--solar planets and their atmospheres \citep{Preusse:2005, See:2014} but also the winds originating from extra--solar planets themselves \citep{Tian2005, Stone2009}. All these models explore a wide variety of parameters, beyond the ones measured in the solar system, creating an array of models applicable to diverse physical conditions.
Furthermore, the problem of spherical accretion formulated by Bondi \cite{Bondi:1952} is mathematically similar to the wind problem except for the boundary conditions that correspond to zero velocity at infinity and maximum supersonic velocity at the surface of the star. Among the variety of wind prescriptions, here we restrict our simulations to thermally driven winds to present the CPS method as a mathematical tool that can be implemented in this area of research. From a mathematical perspective, the sonic point corresponds to a singularity of the equations. In the case of an isothermal wind, the integration of the equations through the singularity is straightforward, leading to the Bernoulli integral and a smooth solution. The introduction of a polytropic index, other than $\gamma=1$, which corresponds to the isothermal wind, leads to a more complicated problem that can only be solved numerically, or approximately through a semi--analytical approach. Numerical solutions are not straightforward due to the presence of the critical point. For instance, application of the shooting method and integration forward from the stellar surface requires fine tuning to converge to the solution that passes smoothly through the critical point. The numerical solution of \cite{Keppens1999} employs a different approach by integrating the dynamical evolution of the time-dependent Euler equations until the system relaxes to an equilibrium, which then corresponds to the solar wind solution. While this approach leads to solutions that pass smoothly through the sonic point, it is computationally more demanding, being time--dependent, compared to a solution of the equilibrium differential equations. Another option is the integration on either side of the critical point and interpolation of the two solutions so that they match at the critical point \citep{P1964a}.
This approach provides a smooth curve; however, it does not address the equation at the critical point, and the solution found may correspond to a different curve, which is similar to the solution in question but with some deviations at the critical point. In this paper, we propose and implement an alternative approach by integrating the equation on a complex contour and applying the numerical method of ``complex plane strategy'' (CPS). This method has been used in several other astrophysical initial value problems that suffer from mathematical singularities, such as stellar polytropic models \citep{G1988}, the solar system and the Jovian satellites \citep{G1993, GV1994}, white dwarf models \citep{GH1992}, and the general--relativistic polytropic neutron star models \citep{PG2003, GKa2008, GK2014, GK2015}. The main strength of this method is that it can numerically solve equations with singularities without the need for a piece--wise integration, which omits the critical points, or for fine--tuning; rather, it allows the direct numerical integration of the equation in the entire domain. This comes at the expense of converting the real--valued problem to a complex--valued one and integrating along a contour in the complex plane. In this work, we apply the CPS to the problem of a polytropic wind. We reproduce the analytical isothermal solution, using it as a benchmark for the numerical calculation. We extend our study to the polytropic solution and we discuss the results. The structure of this paper is as follows: We present the mathematical setup of the problem in \S~\ref{setup}, where we also analyse the critical points and illustrate how the CPS treats them. We present our results in \S~\ref{results} by comparing them against the existing results and extending them to various problems. We discuss our results in \S~\ref{discussion}. We conclude in \S~\ref{conclusion}.
\section{Problem setup} \label{setup} \subsection{Isothermal wind} \label{isosetup} The simplest model for the solar wind is that of an isothermal spherically symmetric flow described by a system of two ordinary differential equations arising from the conservation laws of mass and momentum, respectively. The equations take the following form (\cite{P1958}, cf. equations~(10), (11)) \begin{equation} \frac{d}{dr} \left( \rho \, v \, r^2 \right) = 0 \, , \label{syseq1} \end{equation} \begin{equation} \rho \, v \frac{dv}{dr} \, + \frac{dP}{dr} \, +\rho \, \frac{G \, M_{\odot}}{r^2} = 0 \, , \label{syseq2} \end{equation} where $P$ is the pressure, $\rho$ is the density, $v$ is the radial velocity, $G$ is Newton's constant, $r$ is the distance from the centre of the Sun and $M_{\odot}$ is the solar mass. As the flow is isothermal, the pressure and density are connected through the following relation $P = \rho v_{s,b}^2$, where the sound speed is expressed in the following form \begin{equation} v^2_{s,b}=\left( 2 \, k \, T_b/m_p \right) \, , \label{v_so1} \end{equation} with $k$ the Boltzmann constant and $m_p$ the proton mass, with protons obeying a Maxwell--Boltzmann distribution of temperature $T_b$. As the flow is isothermal, the sound speed is constant throughout the flow. We can conveniently rewrite the system of equations~(\ref{syseq1})--(\ref{syseq2}) in dimensionless form by introducing the following variables: the dimensionless radius $\xi=r/R_{\odot}$ where $R_{\odot}$ is the solar radius, the Mach number $M(\xi)=v(\xi)/v_{s,b}$ and the dimensionless density $\bar{\rho}(\xi)=\rho(\xi)/\rho_b$ where $\rho_b$ is the density of the solar wind at $R_{\odot}$. Substituting the dimensionless quantities, the system takes the following form \begin{equation} \frac{d}{d\xi} \left( \bar{\rho} \, M \, \xi^2 \right) = 0 \, , \label{syseq3} \end{equation} \begin{equation} \bar{\rho} \, M \frac{dM}{d\xi}\, + \frac{d\bar{\rho}}{d\xi} \, +\lambda\frac{\bar{\rho}}{\xi^2} = 0\, .
\label{syseq4} \end{equation} The parameter $\lambda$ is defined as follows \citep{P1958} \begin{equation} \lambda = \frac{1}{2}\left( \frac{v_{esc}}{v_{s,b}} \right)^2 = \frac{G \, M_{\odot} \, m_p}{2 \, R_{\odot} \, k \, T_b} \, , \label{eqlam} \end{equation} where the solar escape velocity is $v_{esc}= \sqrt{\frac{2\, G\, M_{\odot}}{R_{\odot}}}$. Indicatively, at the solar corona the above quantities take the values of $v_{esc} \approx 617$~km/sec, $v_{s,b} \approx 130$~km/sec and $T_b \approx 10^6$~K \citep{P1958}. Substitution of the continuity equation into the momentum equation leads to an ordinary differential equation for the Mach number as a function of the dimensionless radius $\xi$ \begin{equation} \frac{1}{M^2} \, \frac{dM^2}{d\xi} = \frac{2}{\xi^2} \, \frac{2 \, \xi - \lambda}{M^2-1} \, . \label{macheq} \end{equation} The above equation is the so--called Mach equation, which can be conveniently written in the following form \begin{equation} \frac{dM}{d\xi} = \frac{M}{\xi^2} \, \frac{2 \, \xi - \lambda}{M^2-1} \, . \label{veleq} \end{equation} It is evident from the above equations that the critical point is located at $\xi_c=\frac{\lambda}{2}$ where the kinetic energy of an elementary volume is equal to its thermal energy. At the critical point the flow velocity becomes equal to the sound speed, thus the flow becomes transonic. Beyond this point the solar wind becomes supersonic. The Mach equation is a non--linear ordinary differential equation that can be integrated by separation of variables. Integration leads to the Bernoulli equation \begin{equation} \ln M -\frac{M^2}{2} = \ln B - \ln \xi^2 -\frac{\lambda}{\xi} \, , \label{bereq} \end{equation} where $B$ is the Bernoulli integral, representing energy. The value of $B$ can be determined by solving equation~(\ref{bereq}) for $M=1$ and $\xi=\lambda/2$, obtaining \begin{equation} B_c = \frac{\lambda^2 \, e^{\frac{3}{2}}}{4} \, .
\label{bcrit} \end{equation} If we choose a different value for $B\neq B_c$, this will lead us to another family of solutions, either the ``breeze'' solutions that never become supersonic or solutions that remain supersonic for the entire flow. One can also obtain expressions for $f(M,\xi)$ that satisfy equation (\ref{bereq}), but are not physically acceptable as they have two different values for the velocity at the same point. Each solution arises from a different set of boundary conditions at $\xi_c$ \citep{Fitz2014}. In particular, if the Mach number at $\xi=\xi_c$ is $M_c > 1$, the corresponding value of solar wind velocity is supersonic everywhere, otherwise, if $M_c < 1$, it remains subsonic everywhere, leading to the breeze solutions. We can further obtain the Bondi accretion solution where the velocity is zero at infinity, reaches the sound speed at the critical point and becomes supersonic close to the star, which corresponds to accretion, instead of wind \citep{Bondi:1952}. By applying our method we can discriminate between the wind and the Bondi solution by using the Mach number at the critical point as a parameter. We discuss this issue in \S~\ref{Isores}. \subsection{Polytropic wind} \label{polysetup} The isothermal model postulates that the wind is heated very efficiently and reaches a constant temperature everywhere immediately. If a more realistic approach is adopted by assuming a less efficient heating mechanism, the solar wind problem can be approximated by a polytropic equation relating the pressure $P$ to the density through a power law of the following form \begin{equation} P = K \, \rho^{\gamma} \, , \label{poleq} \end{equation} where $K$ is the polytropic constant and $\gamma$ is the polytropic index. A special case is the adiabatic atmosphere with a polytropic index $\gamma=5/3$. In this case, a fluid element does not exchange heat as it propagates away from the surface of the Sun.
A softly heated atmosphere has $ 1.1 < \gamma < 5/3$, the limited heated atmosphere corresponds to $\gamma = 1.1$, and the intensively heated atmosphere corresponds to $1.0 < \gamma < 1.1$ \citep{P1965rev}. Recent measurements suggest that the effective polytropic index of the solar corona is $\gamma = 1.1 \pm 0.02$ \citep{VD2011}. The polytropic relation~(\ref{poleq}) can be combined with the EOS of an ideal gas \begin{equation} P = \frac{\kappa}{\bar{m}} \, \rho \, T \, , \label{poleqT} \end{equation} where $\bar{m}$ is the average molecular mass of the fluid, with a typical value of $0.6 \, m_p$ in the case of the solar wind \citep{Mercier2015}, which is the value we use in this study for the polytropic model. This allows us to express the sound speed at the base of the solar wind, $v_{s,b}$, for a polytropic gas of index $\gamma$ in the following form \begin{equation} v^2_{s,b} = \frac{dP}{d\rho} \mathrel{\bigg|}_{\rho=\rho_b}= \gamma \, K \, \rho_b^{\gamma-1} = \frac{\gamma \, \kappa T_b}{\bar{m}} \, , \label{v_sb} \end{equation} where index $b$ refers to quantities at the base of the solar wind. As in the isothermal model, the base of the solar wind is located at the solar radius $R_{\odot}$ and the order of magnitude of the density there is $\rho_b = 10^8\bar{m}~$cm$^{-3}$ \citep{Newkirk1967, Mercier2015}. Similarly to the isothermal case, we define dimensionless quantities for the radius $\xi=r/R_{\odot}$, density $\bar{\rho}(\xi)=\rho(\xi)/\rho_b$ and the Mach number \begin{equation} M(\xi)=v(\xi)/v_s(\xi)=M_o(\xi)/\sqrt{\bar{\rho}^{\gamma-1}} \, , \label{machnum} \end{equation} where $M_o(\xi)= v(\xi)/v_{s,b}$ is the ratio of the flow velocity over the sound speed at the base of the corona ($R_{\odot}$). Since the temperature is not constant, the sound speed is a function of radius as well, thus $v_s=v_s(\xi)$.
Substituting the dimensionless expressions for $r, \, \rho$, $v$ into equation~(\ref{machnum}), and $dP/d\rho$ by equation~(\ref{v_sb}) in the conservation laws of mass equation~(\ref{syseq1}) and momentum equation~(\ref{syseq2}), we deduce a system of two ordinary differential equations that describes the polytropic model of the solar wind \begin{equation} \frac{d}{d\xi} \left( \bar{\rho} \, M_o \, \xi^2 \right) = 0 \, , \label{syseqpol1} \end{equation} and \begin{equation} M_o \frac{dM_o}{d\xi}\, + \frac{\lambda}{\xi^2} + \frac{1}{\gamma-1} \, \frac{d}{d\xi}\bar{\rho}^{\gamma-1} = 0\, , \label{syseqpol2} \end{equation} where we introduce the dimensionless parameter $\lambda = \frac{G \, M_{\odot}}{r_b \, v^2_{s,b}}$ and $\mu=\bar{\rho} \, M_o \, \xi^2=\frac{\dot{M}}{4 \, \pi \, \bar{\rho}_b \, v_{s,b} \, r_b^2}$ is the dimensionless mass--loss rate or flow parameter, with $\dot{M}$ the mass--loss rate. The parameter $\mu$ can be evaluated by the following expression \begin{equation} \mu=\frac{v_b}{v_{s,b}} = \frac{\lambda^2}{2} \left[ \frac{2-2 \, \lambda \, (\gamma-1)}{5-3 \, \gamma} \right]^{\frac{1}{\gamma-1}-\frac{3}{2}}\, , \label{eqmiu} \end{equation} where we have integrated the Bernoulli equation from the base up to the critical point. Higher order terms of the ratio $v_b/v_{s,b}\ll 1$ vanish, and we then obtain the expression by solving the remaining equation for $v_b/v_{s,b}$, which is another form of the mass--loss rate. In addition, $\mu$ determines the density at the critical point via the equation \begin{equation} \bar{\rho}_c = \frac{\mu}{M_o \, \xi^2} \mathrel{\bigg|}_{\xi=\xi_c, \, M_o=M_c} = \left( \frac{4 \, \mu}{\lambda^2} \right)^{\frac{2}{5-3\, \gamma}}\, , \label{eqrho} \end{equation} which follows from equation~(\ref{eqmiu}) combined with the expression~(\ref{critm}) for $M_c$. The value $\mu = \lambda^2/4$ corresponds to $\bar{\rho}_c = 1.0$ and the solution is identical to the one of the isothermal model ($\gamma=1$).
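To give a feeling for the numbers, the chain of equations~(\ref{v_sb}), (\ref{eqmiu}) and (\ref{eqrho}) can be evaluated directly (a Python sketch; the rounded constants and the base temperature $T_b = 2\times 10^{6}\,$K are illustrative choices of ours, picked so that the constraint $\lambda(\gamma-1)<1$ required by equation~(\ref{eqmiu}) is satisfied):

```python
import math

# Rounded physical constants (SI) and illustrative coronal parameters
G, Msun, Rsun = 6.674e-11, 1.989e30, 6.957e8
kB, mp        = 1.381e-23, 1.6726e-27
gamma, Tb     = 1.1, 2.0e6          # effective polytropic index, base temperature
mbar          = 0.6 * mp            # mean molecular mass

v_sb = math.sqrt(gamma * kB * Tb / mbar)     # sound speed at the base, Eq. (v_sb)
lam  = G * Msun / (Rsun * v_sb**2)           # dimensionless parameter lambda

def mass_loss_parameter(lam, gamma):
    """Dimensionless mass-loss rate mu = v_b / v_{s,b}, Eq. (eqmiu);
    requires lam*(gamma - 1) < 1 and gamma < 5/3."""
    base = (2 - 2*lam*(gamma - 1)) / (5 - 3*gamma)
    return 0.5 * lam**2 * base**(1/(gamma - 1) - 1.5)

def critical_density(lam, gamma, mu):
    """Density at the critical point, Eq. (eqrho)."""
    return (4*mu / lam**2) ** (2/(5 - 3*gamma))

mu    = mass_loss_parameter(lam, gamma)
rho_c = critical_density(lam, gamma, mu)
```

With these numbers one obtains $v_{s,b}$ of order $10^2$~km/sec and $\lambda \approx 6.3$, while the consistency check quoted above is recovered exactly: $\mu=\lambda^2/4$ gives $\bar{\rho}_c=1$.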
The system of equations~(\ref{syseqpol1})--(\ref{syseqpol2}) leads to the dimensionless Bernoulli integral \begin{equation} \frac{M_o^2}{2} - \frac{\lambda}{\xi} + \frac{1}{\gamma-1} \, \left( \frac{\mu}{M_o \, \xi^2} \right)^{\gamma-1} = E\, , \label{bereqpol} \end{equation} where $E$ is the dimensionless specific energy of the flow. The pair of parameters $(\mu, E)$ determines the flow and $M_o$ is a function of $(\xi, \mu, E)$. Because of the existence of the critical point one needs to integrate the ordinary differential equation numerically. Setting to zero the first derivative in $\xi$ of equation~(\ref{bereqpol}) gives the equation of the Mach number in the polytropic model (cf. equation~(3) in \citep{Habbal1983}, and equation~(2.2a) in \citep{Bailyn1985}, with $f=1, \, d=0$) \begin{equation} \frac{1}{M_o^2} \frac{dM_o^2}{d\xi} = \frac{2}{\xi} \frac{2-\frac{\lambda}{\xi}\left( \frac{1}{\bar{\rho}^{\gamma-1}} \right)}{\frac{M_o^2}{\bar{\rho}^{\gamma-1}}-1} \, . \label{macheqpol1} \end{equation} Then by substituting equation~(\ref{machnum}) into equation~(\ref{macheqpol1}) we obtain the Mach equation for the polytropic model \begin{equation} \frac{dM_o}{d\xi} = \frac{M_o}{\xi} \frac{2-\frac{\lambda}{\xi} \frac{1}{\bar{\rho}^{\gamma-1}}}{\frac{M_o^2}{\bar{\rho}^{\gamma-1}}-1} \, . \label{macheqpol} \end{equation} For $\gamma=1$ we recover the isothermal wind equation~(\ref{veleq}). The critical point corresponds to $M=1$ and $\xi= \xi_c =\frac{\lambda}{2 M_o^2(\xi_c)}$, where $M_c=M_o(\xi_c)$. We refer to this in \S~\ref{paths}. Solving the last equation~(\ref{macheqpol}) amounts to directly solving the system of equations~(\ref{syseqpol1}--\ref{syseqpol2}); therefore, the solution of equation~(\ref{macheqpol}) provides the profile of a polytropic wind.
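It is straightforward to verify numerically that at the critical point the numerator and the denominator of equation~(\ref{macheqpol}) vanish simultaneously (a Python sketch; the critical values are built from equations~(\ref{eqrho}) and (\ref{critm}), and the numbers $\lambda=6.3$, $\gamma=1.1$, $\mu=0.017$ are illustrative):

```python
def critical_point(lam, gamma, mu):
    """Critical point of the polytropic Mach equation, from Eqs. (eqrho), (critm)."""
    rho_c = (4*mu / lam**2) ** (2/(5 - 3*gamma))   # density at the critical point
    Mc  = rho_c ** ((gamma - 1)/2)                 # M_c = sqrt(rho_c^(gamma-1))
    xic = (lam/2) / rho_c**(gamma - 1)             # xi_c = (lam/2)/rho_c^(gamma-1)
    return Mc, xic

def mach_num_den(Mo, xi, lam, gamma, mu):
    """Numerator and denominator of the right-hand side of Eq. (macheqpol)."""
    rho = mu / (Mo * xi**2)                        # continuity: rho * Mo * xi^2 = mu
    num = 2 - (lam/xi) / rho**(gamma - 1)
    den = Mo**2 / rho**(gamma - 1) - 1
    return num, den
```

Both outputs vanish, to rounding error, for any admissible choice of $(\lambda, \gamma, \mu)$, since the critical values are constructed precisely so that the flow is transonic there.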
\subsection{Mathematical singularities of the problem} \label{paths} The solution of the non--linear ordinary differential equation~(\ref{veleq}) by a direct numerical integration requires special treatment of the singularities of the problem. In particular, the critical point \begin{equation} \left( M_c=1, \qquad \, \xi_c=\lambda/2 \right) \, , \label{critp} \end{equation} in the isothermal wind ordinary differential equation~(\ref{veleq}), leads to an expression where the derivative $\frac{dM}{d\xi}$ becomes equal to an undefined expression $\left( \frac{0}{0} \right)$. This case can be resolved using the l'Hospital rule. Application of the CPS leads to the removal of the previous mathematical singularity and allows the solution of equation~(\ref{veleq}) by a direct numerical integration, without the aid of the Bernoulli integral obtained by integrating the right and the left side of (\ref{bereq}). Furthermore, CPS permits the calculation of the slope $\frac{dM}{d\xi}$ at each point of the function $M(\xi)$ without application of the l'Hospital rule. In the less trivial case of the polytropic non--linear equation~(\ref{macheqpol}), a similar critical point occurs at $( M_c,\, \xi_c)$. The evaluation of the parameters of the critical point is not as straightforward as in the isothermal wind, and its location is a function of the physical quantities of the polytropic fluid, $\xi_c(\lambda, \mu, \gamma)$, or equivalently of the density at the critical point $\bar{\rho}_c$, which is given by equation~(\ref{eqrho}) (figures~\ref{figisopolden}, \ref{figisopolgam}) \begin{equation} \left( M_c = \sqrt{\bar{\rho}^{\gamma-1}}, \qquad \, \xi_c = \frac{\lambda}{2} \, \frac{1}{\bar{\rho}^{\gamma-1}} \right) \, .
\label{critm} \end{equation} The critical point, where $M=1$ and $\xi=\xi_c$, can be calculated as a function of the parameters $\lambda, \, \mu$ as \begin{equation} \left( M_c = \left(\frac{4 \, \mu}{\lambda^2}\right)^\frac{\gamma-1}{5-3\, \gamma}, \quad \, \xi_c = \left(\frac{\lambda}{2}\right)^\frac{\gamma+1}{5-3\, \gamma} \left(\frac{1}{\mu^{2\, \gamma-2}}\right)^\frac{1}{5-3\, \gamma} \right) \, . \label{critm2} \end{equation} \subsection{Numerical integration on the complex plane} \label{integration} CPS is a suitable numerical method for the integration of ordinary differential equations on the complex plane, either along a real interval when the independent variable $r$ is real, or along a complex contour when $r$ is complex, in order to detour critical points. The key idea is to transform the real--valued functions of a real variable, the ``real distance'', $r \in \mathbb{R}$, into complex--valued functions of the ``complex distance'', $r \in \mathbb{C}$, which constitutes the complex path along which the integration proceeds \citep{GV1994, GK2014}. A similar implementation has been developed in problems of dynamics, where the time plays the role of the independent variable (``complex time'', e.g. \cite{Orendt2009}). This way, any indeterminate forms such as $\left(\frac{0}{0}\right)$ appearing in the equations are removed. Provided that the imaginary parts of the physical quantities remain small, one can ensure that the numerical solution approaches the actual solution of the problem. We are interested in the real--valued function of one real variable $M(\xi)$ ($\xi$ being the ``real distance''), and we detour the critical points discussed in \S~\ref{paths}. The initial conditions for equation~(\ref{veleq}) are \begin{equation} M(\xi_i) =1, \label{incon} \end{equation} at $\xi_i=\lambda/2$. The initial conditions of equation~(\ref{macheqpol}) are \begin{equation} M_o(\xi_i) = M_c, \label{inconpol} \end{equation} at $\xi_i=\xi_c$, where $M_c$ is given by equation~(\ref{critm}). 
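To make the closed--form critical point concrete, the following Python sketch (an illustration with arbitrary test values of $\lambda$, $\mu$, $\gamma$, not the Fortran code used for the actual runs) evaluates $(\bar{\rho}_c, M_c, \xi_c)$ from the two critical--point conditions combined with the continuity relation $\bar{\rho}=\mu/(M_o\,\xi^2)$ implied by equation~(\ref{bereqpol}), and verifies that the numerator and the denominator of equation~(\ref{macheqpol}) vanish there simultaneously, while the limit $\gamma \to 1$ recovers the isothermal value $\xi_c=\lambda/2$.

```python
def critical_point(lam, mu, gamma):
    """Critical point of the polytropic Mach equation, obtained from the
    two conditions M_o^2 = rho_c^(gamma-1) and xi_c = lam/(2 rho_c^(gamma-1)),
    combined with the continuity relation rho = mu/(M_o xi^2)."""
    rho_c = (4.0 * mu / lam**2) ** (2.0 / (5.0 - 3.0 * gamma))
    M_c = rho_c ** ((gamma - 1.0) / 2.0)
    xi_c = 0.5 * lam / rho_c ** (gamma - 1.0)
    return rho_c, M_c, xi_c

def mach_numer_denom(xi, M_o, lam, mu, gamma):
    """Numerator and denominator of dM_o/dxi in the polytropic Mach equation."""
    rho = mu / (M_o * xi**2)                 # continuity relation
    rg = rho ** (gamma - 1.0)
    numer = 2.0 - (lam / xi) / rg
    denom = M_o**2 / rg - 1.0
    return numer, denom

if __name__ == "__main__":
    lam, mu, gamma = 11.55, 4.0e-4, 1.10     # arbitrary illustrative values
    rho_c, M_c, xi_c = critical_point(lam, mu, gamma)
    n, d = mach_numer_denom(xi_c, M_c, lam, mu, gamma)
    print(rho_c, M_c, xi_c, n, d)            # n and d are both ~0 (the 0/0 form)
```

The $\gamma \to 1$ limit gives $M_c = 1$ and $\xi_c = \lambda/2$, in agreement with the isothermal critical point.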
Here we note that the index $i$ indicates the value at the starting point of the integration, where the initial conditions are applied. In the framework of CPS, we introduce the function $M(\xi)=\bar{M}(\xi)+i\, \breve{M}(\xi)$ as a complex function of a complex variable $\xi=\bar{\xi}+i\,\breve{\xi}$. The integration contour $\xi$ corresponds to a straight--line segment parallel to the real axis, at a small imaginary distance $i\,\breve{\xi}$ from it. As a result, the initial conditions~(\ref{incon}) become \begin{equation} M_i = \bar{M}_i + i\, \breve{M}_i \, , \label{icon1} \end{equation} at $\xi_i=\bar{\xi}_i+i\,\breve{\xi}_i$, where the imaginary part $\breve{M}_i$ is either zero or small compared to $\bar{M}_i$. More specifically, the values of the real and imaginary parts of each quantity at $\xi_i$ are \begin{subequations} \begin{equation} \bar{M}_i =1 \, , \qquad \breve{M}_i=10^{-4} \, \label{icon2a} \end{equation} \begin{equation} \bar{\xi}_i =\frac{\lambda}{2} \, , \qquad \breve{\xi}_i=10^{-4} \, . \label{icon2c} \end{equation} \end{subequations} In polytropic models, we introduce the complex function $M_o(\xi)=\bar{M}_o(\xi)+i\, \breve{M}_o(\xi)$ in the differential equation~(\ref{macheqpol}). The values of the real parts of the initial conditions, $\bar{M}_o (\xi_i)$ and $\bar{\xi}_i$, are given by equation~(\ref{critm2}). All the other values of the initial conditions remain as the corresponding ones in equations~(\ref{icon2a}) and (\ref{icon2c}), respectively. In the differential equation~(\ref{macheqpol}), we define the function $\bar{\rho}(\xi)=\mathrm{Re}(\bar{\rho}(\xi))+i\,\mathrm{Im}(\bar{\rho}(\xi))$ as a complex function of the complex variable $\xi$. At the critical point~(\ref{critm2}), the value of $\mathrm{Re}(\bar{\rho}(\xi_i))$ is given by equation~(\ref{eqrho}) and the imaginary part is set to $\mathrm{Im}(\bar{\rho}(\xi_i))=0$. 
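As a minimal illustration of this setup (a Python sketch with a classical fixed--step Runge--Kutta scheme, not the dcrkf54.f95 solver used for the actual computations; the step size, the end point and the particular form in which the conserved Bernoulli quantity is written are our own choices), equation~(\ref{veleq}) can be integrated directly from the complexified critical point, the complex arithmetic removing the $0/0$ form at $M=1$:

```python
# CPS sketch for the isothermal Mach equation
#   dM/dxi = (M/xi) * (2 - lam/xi) / (M^2 - 1),
# integrated from the critical point xi_c = lam/2 displaced by a small
# imaginary part eps (illustrative value of lam corresponding to xi_c = 5.77).
import cmath

def rhs(xi, M, lam):
    return (M / xi) * (2.0 - lam / xi) / (M * M - 1.0)

def integrate(lam, eps=1e-4, h=1e-3, xi_end=50.0):
    xi = lam / 2.0 + 1j * eps        # contour parallel to the real axis
    M = 1.0 + 1j * eps               # M_i = 1 + i*eps: the 0/0 form is lifted
    while xi.real < xi_end:
        # classical fourth-order Runge-Kutta step in complex arithmetic
        k1 = rhs(xi, M, lam)
        k2 = rhs(xi + 0.5 * h, M + 0.5 * h * k1, lam)
        k3 = rhs(xi + 0.5 * h, M + 0.5 * h * k2, lam)
        k4 = rhs(xi + h, M + h * k3, lam)
        M += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        xi += h
    return xi, M

def bernoulli(xi, M, lam):
    # one form of the isothermal Bernoulli integral, conserved along solutions
    return 0.5 * M * M - cmath.log(M) - 2.0 * cmath.log(xi) - lam / xi

if __name__ == "__main__":
    lam = 11.54                      # 2*xi_c for T_b = 1e6 K (see Isothermal results)
    xi, M = integrate(lam)
    print(xi.real, M.real)
```

The real part of the returned $M$ traces the accelerating Parker branch; according to the discussion in \S~\ref{Isores}, a negative initial $\breve{M}_i$ would instead tip the integration onto the Bondi accretion branch.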
After the numerical integration we keep the real part $\mathrm{Re}(\bar{\rho}(\xi))$, which is used together with $\bar{M}_o(\xi)$ in equation~(\ref{machnum}) to give the solution for the Mach function $\bar{M}(\xi)$. We integrate equation~(\ref{veleq}) on two complex contours, one from the point $\xi_{start}=\xi_i$ forward to $\xi_{end1}=100 \times \lambda$ and another backward to $\xi_{end2}=0$. The accuracy of the solutions is related to the choice of the imaginary part in the CPS. The estimation of the accuracy of this method is achieved through the solution of the equations on a closed complex contour (the initial and the final point being the same). The integration error is equal to the difference between the solution at the end point and the one at the start point, which formally should be equal to zero \citep{GV2012}. We then focus on the implementation of CPS in this problem by integrating equation~(\ref{veleq}) on two different complex contours: the ``direct'' one described in the previous paragraph, and a square contour defined by the vertices $\xi_1 \, (\bar{\xi}_i, 0)$, $\xi_2 \, (10, 0)$, $\xi_3 \, (10, 1)$, $\xi_4 \, (\bar{\xi}_i, 1)$, where the first value in the coordinates is the real part $\bar{\xi}$ and the second is the imaginary part $\breve{\xi}$. The integration step is $10^{-2}$ along both the real and the imaginary axis, and the initial conditions are the same as in equations~(\ref{icon2a}) and (\ref{icon2c}). The real parts of the numerical solutions in both cases are identical, as we can see in figure~\ref{square-zoom-2}. In the upper panel, we show the deviation between the square contour and the line contour. The lower panel refers both to the closed complex contour $\xi$ that we use to integrate the ordinary differential equation and to the real part of the Mach number function, $\bar{M}$, that we obtain from the application of CPS. 
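The closed--contour error estimate just described can be sketched as follows (illustrative Python; the rectangle, the starting value and the step counts are arbitrary, and the contour is deliberately kept away from the critical point $\xi_c=\lambda/2$ so that the integrand stays regular along the path): integrating around the closed contour must return the initial value of $M$, and the mismatch measures the integration error.

```python
# Closed-contour accuracy check for the isothermal Mach equation (sketch).
def rhs(xi, M, lam):
    return (M / xi) * (2.0 - lam / xi) / (M * M - 1.0)

def rk4_segment(z0, z1, M, lam, n=1000):
    """Classical RK4 along the straight segment z0 -> z1 (complex step h)."""
    h = (z1 - z0) / n
    z = z0
    for _ in range(n):
        k1 = rhs(z, M, lam)
        k2 = rhs(z + 0.5 * h, M + 0.5 * h * k1, lam)
        k3 = rhs(z + 0.5 * h, M + 0.5 * h * k2, lam)
        k4 = rhs(z + h, M + h * k3, lam)
        M += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        z += h
    return M

def closed_contour_error(lam=11.54, M0=1.5 + 0j):
    # rectangle on the supersonic branch, away from xi_c = lam/2 = 5.77
    corners = [8.0 + 0j, 10.0 + 0j, 10.0 + 1j, 8.0 + 1j, 8.0 + 0j]
    M = M0
    for z0, z1 in zip(corners[:-1], corners[1:]):
        M = rk4_segment(z0, z1, M, lam)
    return abs(M - M0)   # formally zero; in practice the integration error

if __name__ == "__main__":
    print(closed_contour_error())
```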
\begin{figure} \includegraphics[width=0.5\textwidth]{figs/square-flat.png} \quad \includegraphics[width=0.5\textwidth]{figs/square-contour.jpg} \caption{\label{square-zoom-2} Solution of equation~(\ref{veleq}), $\bar{M}$ vs $\bar{\xi}$, under initial conditions $\bar{M}_i=1$ and $\breve{M}_i=10^{-4}$, via the linear complex contour~(\ref{icon2c}) and a square complex contour (upper figure). Solution of equation~(\ref{veleq}), $\bar{M}$, along the square integration contour in the complex plane, $\xi_1 \, (\bar{\xi}_i, 0)$, $\xi_2 \, (10, 0)$, $\xi_3 \, (10, 1)$, $\xi_4 \, (\bar{\xi}_i, 1)$, where $\bar{\xi}_i=5.45$ (lower figure).} \end{figure} The physical meaning of the solution is encapsulated in its real part $\bar{M}$, whereas the imaginary part $\breve{M}$ is an auxiliary quantity allowing the integration of the equation, and needs to remain small, similar to its initial value. Therefore, the values of the imaginary parts of the functions involved in our problem, $\breve{M}$, $\breve{M}_o$, and $\mathrm{Im}(\bar{\rho}(\xi))$, converge to sufficiently small quantities, of the same or smaller order of magnitude as the initial ones. In general, since the real part of the solution $\bar{M}$ must remain physically meaningful, and given that we already know the expected behaviour of the solution, we choose empirically the appropriate initial values of the imaginary parts of the functions ($\breve{M}$, $\breve{M}_o$, etc.), such that the value of $\breve{M}$ is much smaller than the value of $\bar{M}$ and the latter remains physically acceptable. This approach reflects the main aim of CPS, which is to remove the singularity at the critical point by assuming a small imaginary part of the integration contour, $\breve{\xi}_i$, as well as a small imaginary part (at the same point) of each complex function in comparison to its real part. Larger imaginary parts may lead to other solutions without physical meaning. 
For example, they can be related to a discontinuity at the critical point. In the present problem the value of $\breve{M}$ acts as a parameter which for positive values leads to the Parker solution and for negative values to the Bondi one. This issue is discussed in \S~\ref{Isores}. At this point we need to clarify some properties related to the CPS. The CPS does not calculate the integral of a complex function along a closed contour in the complex plane (such as equations~(\ref{syseqpol2}) and (\ref{bereqpol})), but integrates numerically an ordinary differential equation of a complex function (such as equations~(\ref{veleq}) and (\ref{macheqpol})) on the complex plane. This problem does not contain poles in the region of interest but only one singularity, which is used as the initial point of the numerical integration, as discussed in \S~\ref{paths}. This is the reason why CPS is an appropriate method for solving this problem. With some appropriate modifications, CPS can also be applied to ordinary differential equations containing logarithmic or power--law forms, as presented in detail in \cite{GK2014}. \section{Results} \label{results} The code for our simulations is written in Fortran and for compiling we use the GNU Fortran compiler, gfortran, which belongs to the GNU Compiler Collection (http://gcc.gnu.org/) and is licensed under the GNU General Public License (http://www.gnu.org/licenses/gpl.html). It has been installed by the TDM-GCC ``Compiler Suite for Windows'' (http://tdm-gcc.tdragon.net/), which is free software distributed under the terms of the GPL. We use the Fortran package dcrkf54.f95 \citep{GV2012}, a Runge--Kutta--Fehlberg code of fourth and fifth order, appropriately modified for the solution of complex initial value problems with highly complex expressions for their ordinary differential equations, along contours (not necessarily simple or closed) prescribed as continuous chains of straight--line segments. 
In what follows, we refer to and plot only the real part of the solution, $\bar{M}(\xi)$, and $\xi$ in the figures refers to $\bar{\xi}$. \subsection{Isothermal Wind results} \label{Isores} We integrate the Mach equation~(\ref{veleq}) assuming the initial conditions in equations~(\ref{icon2a})--(\ref{icon2c}). The integration is performed on a complex contour with imaginary part $\breve{\xi}_i=10^{-4}$ and $\breve{M}_i=10^{-4}$. We obtain the Parker isothermal wind solution, i.e.\ the one resulting from the Bernoulli equation~(\ref{bereq}) for $M_c=1$ and $B=B_c$, which is the physically acceptable solution. We set the temperature at the base of the corona, $T_b$, which is used in the solution of equations~(\ref{v_so1}) and (\ref{eqlam}). We note that a choice of a negative initial imaginary part, i.e.~$\breve{M}_i=-1 \times 10^{-4}$, leads to the Bondi accretion solution. Integration of the Mach equation~(\ref{veleq}) with initial conditions equal to $\bar{M}_i=1.5, \, 2.0$ and $\breve{M}_i=10^{-4}$ via a complex contour~(\ref{icon2c}) leads to the solution of the Bernoulli equation~(\ref{bereq}) for $M_c>1$ and $B<B_c$, which corresponds to a supersonic solar wind velocity everywhere, with a local minimum in the vicinity of the critical point. We further integrate the Mach equation~(\ref{veleq}) assuming the initial conditions $\bar{M}_i=2 \times 10^{-1}, \,5 \times 10^{-1}$ and $\breve{M}_i=10^{-4}$ on the same complex contour~(\ref{icon2c}). We obtain the solution of the Bernoulli equation~(\ref{bereq}) for $M_c<1$ and $B<B_c$, which corresponds to a breeze, a flow that is everywhere subsonic, with a local maximum in the vicinity of the critical point. While this solution is mathematically valid, it cannot represent the flow of the solar wind, as it never becomes supersonic. The above solutions are plotted in figure~(\ref{fig:figall}), where the temperature of the isothermal wind is set to $T_b = 10^6$K and the critical point is located at $\bar{\xi}_i=5.77$. 
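The quoted location of the critical point can be cross--checked in physical units (a back--of--the--envelope sketch; it assumes the standard Parker expression $r_c = G M_\odot \bar{m}/(2 k_B T_b)$ with $\bar{m}=m_p/2$, as in the isothermal models, and that $\xi$ is measured in solar radii):

```python
# Sonic-radius check for the isothermal Parker wind (illustrative sketch).
# r_c = G M_sun m_bar / (2 k_B T_b), with m_bar = m_p/2 for the isothermal models.
GM_SUN = 1.32712440018e20   # m^3 s^-2 (solar standard gravitational parameter)
K_B    = 1.380649e-23       # J/K
M_P    = 1.67262192369e-27  # kg
R_SUN  = 6.957e8            # m (IAU nominal solar radius)

def sonic_radius(T_b, m_bar=M_P / 2.0):
    """Critical (sonic) radius of the isothermal wind, in solar radii."""
    return GM_SUN * m_bar / (2.0 * K_B * T_b) / R_SUN

if __name__ == "__main__":
    print(sonic_radius(1.0e6))  # close to 5.78, cf. xi_i = 5.77 quoted above
    print(sonic_radius(4.0e6))  # close to 1.44, cf. the isothermal model at T_b = 4e6 K
```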
\begin{figure} \includegraphics[width=1.0\textwidth]{figs/figall.png} \caption{\label{fig:figall} The solutions obtained with the CPS of $\bar{M}$ versus $\bar{\xi}$, of all cases for an isothermal spherical flow.} \end{figure} As shown in figure~(\ref{fig:figall}), we can distinguish the unphysical solutions from the physical ones. The Bondi accretion solution and the ones noted as $M_c > 1$ have a super--sonic flow at the base of the corona. This behaviour is not consistent with a static solar photosphere. Solutions with $M_c < 1$ correspond to a sub--sonic flow in the vicinity of $\xi_c$ and cannot describe the solar corona \citep{Fitz2014}. We investigate the role of the complex contour in the integration of the Mach equation~(\ref{veleq}) by repeating the previous computations via a real contour, where $\breve{\xi}=0$. For subsonic $(\bar{M}_i < 1)$ or supersonic solutions $(\bar{M}_i > 1)$ we recover the same solutions as in figure~(\ref{fig:figall}). If we use the initial conditions of equation~(\ref{icon2a}), the profile of the velocity changes and we obtain an unphysical solution, with a discontinuity in the derivative of $M(\xi)$, instead of the expected smooth solution. The solutions for $\bar{M}(\xi)$, for the integration of the complex function $M(\xi)$ on the real and complex contours $\bar{\xi}$, $\xi$, respectively, are shown in figure~(\ref{fig4uni}). The upper panel contains the solutions of equation~(\ref{veleq}) for integration on a complex contour with imaginary part $\breve{\xi}=10^{-2},\, 10^{-4}, \, 0$, where $\bar{M}_i=1$ and $\breve{M}_i=10^{-4}$, for $T_b = 4 \times 10^6$K. The lower panel shows the solutions of equation~(\ref{veleq}) for integration on the same complex contour but for initial conditions where the real part remains the same, $\bar{M}_i=1$, while the imaginary part is set to $\breve{M}_i=10^{-2}$. In all cases the solutions for $\breve{\xi}=10^{-2}$ and $10^{-4}$ are identical to each other. 
In addition, the behaviour of the solution for $\breve{\xi} > 0$ does not change even if $\breve{\xi} < 0$, so these plots are omitted in figure~(\ref{fig4uni}). The form of the solution changes only for the integration on the real contour. This result confirms the validity of CPS as an appropriate method, which provides the physical solutions of the solar wind problem while avoiding numerical pathologies and without restrictions on the values of its parameters. We note that there is the restriction $\bar{M}_i \neq 0$, because of the singularity which appears in the denominator of equation~(\ref{veleq}) at the critical point. Also, we examine the case of integration on a complex contour $\breve{\xi}=10^{-4}$, starting from the critical point with $\bar{M}_i=1$ and $\breve{M}_i=-10^{-2}, \, -10^{-4}$, for $T_b = 4 \times 10^6$K, and recover the Bondi accretion solution in both cases. \begin{figure} \includegraphics[width=0.5\textwidth]{figs/fig-m4.png} \quad \includegraphics[width=0.5\textwidth]{figs/fig-m2.png} \caption{\label{fig4uni}Solution of equation~(\ref{veleq}), $\bar{M}$ vs $\bar{\xi}$, for $\breve{\xi}=10^{-2}$, $\breve{\xi}=10^{-4}$, and a real contour with $\breve{\xi}=0$, under initial conditions with real part $\bar{M}_i=1$ and two different values of the imaginary part, $\breve{M}_i=10^{-4}$ (upper panel) and $\breve{M}_i=10^{-2}$ (lower panel).} \end{figure} In figure~\ref{figvsvr}, we plot the CPS solution for the Parker solar wind, where $v = M \, v_s$ and $r$ is the distance from the Sun. We verify that for temperatures in the physically accepted range, $T\sim 1$-$2\times 10^6$K, the flow velocity is in the range of hundreds of kilometers per second at 1 AU. The critical surface, where the solar wind makes the transition from subsonic to supersonic flow, lies a few solar radii away from the Sun (i.e., $r_c\sim 5\,R_\odot$) \citep{Priest1987, Fitz2014}. 
\begin{figure} \includegraphics[width=1.0\textwidth]{figs/vs_v_r.png} \caption{\label{figvsvr} Solar wind velocity $v$ versus distance $r$ from the solar corona for various temperatures $T_b$. The dashed line marks the orbit of the Earth, at $150 \times 10^6$ km. We mention that the centre of the Sun corresponds to the origin of the axes in all other figures of this study.} \end{figure} \subsection{Polytropic winds} Since we have been able to recover the Parker isothermal wind solution, we further investigate the more complicated solutions of polytropic models. We integrate equation~(\ref{macheqpol}) with the initial conditions~(\ref{critm2}) for the real parts of equations~(\ref{icon2a}) and (\ref{icon2c}). Equation~(\ref{macheqpol}) is an ordinary differential equation for $M_o$, the flow velocity scaled to the speed of sound at the base of the solar corona; therefore, we use equation~(\ref{machnum}) to recover the Mach number $M$ at the end of the numerical integration. The initial conditions mentioned above correspond to $\bar{M}_o$. We have made three groups of numerical runs, so as to investigate the behaviour of the numerical solution with respect to the following three parameters: the polytropic index, the flow parameter (mass--loss rate), and the temperature. First, we illustrate the role of the polytropic index $\gamma$ and of the flow parameter $\mu$, or equivalently of the density at the critical point, $\bar{\rho}_c$, in the velocity. Here, we set the above quantities as well as the temperature at the coronal base, $T_b$. These quantities are needed for the evaluation of $\lambda$ and of the sound speed at the coronal base, $v_{s,b}$. We also explore the solutions for different mass--loss rate parameters. Then, we construct the solution for the solar wind by providing only the values of ($\gamma, \, T_b$) and, after that, we calculate the values of $\mu$, $\bar{\rho}_c$ and $\lambda$. All these computations are done with an integration step $10^{-3}$, both forward and backward. 
The imaginary parts have values $\breve{\xi}_i=10^{-4}$ and $\breve{M}_i=10^{-4}$. \subsubsection{Polytropic index dependence} In figure~(\ref{figisopolgam}), we plot the solution of $\bar{M}$ versus $\xi$ for the isothermal model with $\bar{M}_i=1$, in comparison to four cases of the polytropic model with indices $\gamma = 1.00, \, 1.05, \, 1.08, \, 1.10$. All results are obtained for given temperature and mass loss at the surface of the Sun, $T_b=4\times10^6$K and $\mu= 4 \times 10^{-4}$. We confirm that in each polytropic model the location of the critical point, $\left( \bar{M}_i, \, \bar{\xi}_i \right)$, changes with the polytropic index. One important consequence of the application of CPS in the polytropic model of the solar wind is that we find the physical solutions, with $M_o \rightarrow 0$ as $\xi \rightarrow 0$, for every value of $\gamma$; from the computational and mathematical point of view, we can even extend our calculations to the range $\gamma > 3/2$, although in this region there are no wind solutions with physical meaning. As mentioned in \cite{Bailyn1985}, above this limit the solution becomes supersonic, with $M_o \rightarrow \infty$ as $\xi \rightarrow 0$, which is a nonphysical behaviour and should be rejected. We compare the solutions for the Parker isothermal model against the polytropic models with different $\gamma$ in figure~(\ref{figisopolgam}), where we keep the same flow parameter, $\mu= 4 \times 10^{-4}$, and $T_b$ is given for each case from equation~(\ref{eqmiu}). We note that in each case the density at the critical point varies according to equation~(\ref{eqrho}) for the same value of the parameter $\mu$. The deviation between the solution for the isothermal model, noted in the figure as (im), and the corresponding one for $\gamma = 1.00$, noted in the figure as (pm4), is due to the different assumption for the molecular mass of the fluid. 
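A polytropic run of this kind can be sketched in the same way (illustrative Python with arbitrary demonstration values of $\lambda$, $\mu$, $\gamma$, not the tabulated models of the figures): the integration starts from the critical point of equation~(\ref{macheqpol}) displaced by a small imaginary part, and conservation of the dimensionless energy $E$ of equation~(\ref{bereqpol}) serves as an accuracy check.

```python
# CPS sketch for the polytropic Mach equation (illustrative parameters only).
def rho(xi, M_o, mu):
    return mu / (M_o * xi**2)            # continuity: rho = mu/(M_o xi^2)

def rhs(xi, M_o, lam, mu, gamma):
    rg = rho(xi, M_o, mu) ** (gamma - 1.0)
    return (M_o / xi) * (2.0 - (lam / xi) / rg) / (M_o**2 / rg - 1.0)

def energy(xi, M_o, lam, mu, gamma):
    # dimensionless Bernoulli integral E, conserved along solutions
    return (0.5 * M_o**2 - lam / xi
            + rho(xi, M_o, mu) ** (gamma - 1.0) / (gamma - 1.0))

def integrate(lam, mu, gamma, eps=1e-4, h=1e-3, xi_end=40.0):
    # critical point from the closed-form expressions, displaced by i*eps
    rho_c = (4.0 * mu / lam**2) ** (2.0 / (5.0 - 3.0 * gamma))
    M_o = rho_c ** ((gamma - 1.0) / 2.0) + 1j * eps
    xi = 0.5 * lam / rho_c ** (gamma - 1.0) + 1j * eps
    E0 = energy(xi, M_o, lam, mu, gamma)
    while xi.real < xi_end:
        k1 = rhs(xi, M_o, lam, mu, gamma)
        k2 = rhs(xi + 0.5 * h, M_o + 0.5 * h * k1, lam, mu, gamma)
        k3 = rhs(xi + 0.5 * h, M_o + 0.5 * h * k2, lam, mu, gamma)
        k4 = rhs(xi + h, M_o + h * k3, lam, mu, gamma)
        M_o += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        xi += h
    return xi, M_o, E0

if __name__ == "__main__":
    xi, M_o, E0 = integrate(11.55, 4.0e-4, 1.10)
    print(xi.real, M_o.real)
```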
In figure~(\ref{figisopolgam}) it is shown that, as the polytropic index increases, the critical point shifts to a larger distance from the star and the transition from subsonic to supersonic flow becomes smoother, leading to lower velocities at large distances. \begin{figure} \includegraphics[width=1.0\textwidth]{figs/figisopolgam2.png} \caption{\label{figisopolgam} Solutions of $\bar{M}$ versus $\bar{\xi}$ for the isothermal model with $\bar{M}_i=1$, in comparison to four different cases of the polytropic model with $\gamma = 1.00, \, 1.05, \, 1.08, \, 1.10$. All results are with $T_b=4\times10^6$ K and $\mu= 4 \times 10^{-4}$.} \end{figure} \subsubsection{Flow parameter dependence} We have also studied the role of the flow parameter $\mu$, or equivalently of the density $\bar{\rho}_c$, in the polytropic model, by solving model pm1 ($\gamma = 1.10$) for five different values of the flow parameter, $\mu = 4\times 10^{0}, \, 4\times10^{-1}, \, 4\times10^{-2}, \, 4\times10^{-3}, \, 4\times10^{-4}$, and comparing against pm4 ($\gamma = 1.00$, $\mu = 4\times 10^{-4}$). We plot the solutions in figure~(\ref{figisopolden}). \begin{figure} \includegraphics[width=1.0\textwidth]{figs/figisopolden2.png} \caption{\label{figisopolden} Solutions of $\bar{M}$ versus $\bar{\xi}$ for five cases of the polytropic model with $\gamma = 1.10$ and $\mu = 4 \times 10^{-4}, \, 4 \times 10^{-3}, \, 0.04, \, 0.4, \, 4.0$, respectively. All results are for $T_b=4\times 10^6$K.} \end{figure} In both figures~\ref{figisopolgam} and \ref{figisopolden}, the intersection of the horizontal line $M=1$ with each curve is at a different point, so the critical point changes in each case. The variation of the critical point with the value of $\gamma$ is discussed in \S~4.1 of \cite{Keppens1999}, where the different positions are indicated in their figure~1. 
Also, as the flow parameter decreases, while keeping the same polytropic index, the transition point shifts further from the star and the wind makes the transition from subsonic to supersonic with a smaller derivative. This leads to lower velocities. Following the opposite route, from the profile pm1e (identical to pm1 shown in figure~\ref{figisopolgam}) towards pm1a with high $\mu$, we notice that the increase of $\mu$ leads to higher velocities. As a result, the profiles of pm1b and pm1a intersect the profile of pm4. \subsubsection{Temperature dependence} We have also performed a second group of numerical experiments, to study the role of the temperature at the coronal base, $T_b$, and of the polytropic index, $\gamma$, in polytropic models. We plot models with $\gamma = 1.11, \, 1.10, \, 1.08, \, 1.05, \, 1.0$ for two different temperatures, $T_b=1.5\times10^6, \, 4\times10^6$K, in figures~(\ref{T15vargamma}, \ref{T4vargamma}). We further plot the isothermal models at the same temperatures, for $\bar{M}_i=1$ and $\breve{M}_i=10^{-4}$, which are marked in these figures as $T_b=1.5$ and $4$, respectively, for comparison. These computations have been made with an integration step $10^{-3}$, both forward and backward from the critical point. The imaginary parts have values $\breve{\xi}_i=10^{-4}$ and $\breve{M}_i=10^{-4}$, and the initial conditions take the values calculated from equation~(\ref{critm2}). In figure~(\ref{T15vargamma}), we notice that for low $T_b$ the increase of $\gamma$ does not change the values of $\mu$ and $\bar{\rho}_c$ drastically, but both are higher than the corresponding ones for the models in figure~(\ref{T4vargamma}), where the value of $T_b$ is higher. Specifically, the difference in $\mu$ is from 3 to 5 orders of magnitude and in $\bar{\rho}_c$ from 3 to 6 orders of magnitude. 
This fact leads to larger differences in the values of the initial conditions (see equation~(\ref{critm2})) between the models in figure~(\ref{T15vargamma}) than between the models in figure~(\ref{T4vargamma}). In figure~(\ref{T15vargamma}), the differences in the values of $\mu$ and $\bar{\rho}_c$ between the models are small, but their absolute magnitudes (because of $T_b$) are high, so the corresponding curves are very different. In figure~(\ref{T4vargamma}), the differences in the values of $\mu$ and $\bar{\rho}_c$ between the models are larger, but their absolute magnitudes (because of $T_b$) are low, so the corresponding curves are only slightly different. Examining the isothermal model, or polytropic models with the same $\gamma$, we find that as $T_b$ increases $\mu$ increases as well. For example, the order of magnitude of $\mu$ is $10^{-1}$ in the case of $T_b=4\times10^6$K and $10^{-5}$--$10^{-3}$ in the case of $T_b=1.5\times10^6$K, according to (\ref{eqmiu}). From our numerical experiments we verify that for $T_b \geq 3\times10^6$K the solutions cross each other, because the value of $\mu$ is comparable to the values of $\lambda$, $M_o$, and $\xi$ in equation~(\ref{macheqpol}). Consequently, the resulting differential equation is quite different in such cases. This is shown in figure~(\ref{T4vargamma}). The same effect is present in figure~(\ref{figisopolden}), where the curve of pm4 intersects the others. If we compare the solution of the model $T_b=1.5\times10^6$K with the corresponding one for $\gamma = 1.0$ in figure~(\ref{T15vargamma}), and the one of $T_b= 4\times10^6$ with the curve for $\gamma = 1.0$ in figure~(\ref{T4vargamma}), we note that they are slightly different, although they refer to the same model. This difference is due to the different assumption for the molecular mass of the fluid. In isothermal models we assume $\bar{m}=m_p/2$, but in polytropic models we assume $\bar{m}=0.6m_p$ (see \S~\ref{polysetup}). 
\begin{figure} \includegraphics[width=1.0\textwidth]{figs/T15c.png} \caption{\label{T15vargamma} Solutions of $\bar{M}$ versus $\bar{\xi}$ for five cases of the polytropic model with $\gamma = 1.11, \, 1.10, \, 1.08, \, 1.05, \,1.0$, compared with the corresponding isothermal model at the same temperature. All results are for $T_b=1.5\times10^6$ K, with $\mu= 2 \times 10^{-1}$ for the first three models and $\mu= 1.5$ for the last one. Also, $\bar{\rho}_c = 5.1 \times 10^{-2}, \,5.5 \times 10^{-2}, \, 6.0 \times 10^{-2}, \,0.5$, respectively.} \end{figure} \begin{figure} \includegraphics[width=1.0\textwidth]{figs/T4c.png} \caption{\label{T4vargamma} Solutions of $\bar{M}$ versus $\bar{\xi}$ for five cases of the polytropic model with $\gamma = 1.11, \, 1.10, \, 1.08, \, 1.05, \,1.0$, compared with the corresponding isothermal model at the same temperature. All results are for $T_b=4\times10^6$ K, with $\mu= 6.2 \times 10^{-6}, \,1.2 \times 10^{-4}, \, 9.1 \times 10^{-4}, \, 10.7$, respectively. Also, $\bar{\rho}_c = 2.5 \times 10^{-8}, \,1.25 \times 10^{-6}, \, 2.1 \times 10^{-5}, \,0.5$, respectively. The x-coordinates of their critical points are $\bar{\xi}_i=1.81, \, 1.80, \, 1.78, \, 1.76, \, 1.73$, and $ 1.44$ for the isothermal model.} \end{figure} \section{Discussion} \label{discussion} The purpose of this study is to revisit the problem of the solar wind by solving it via the CPS, demonstrating that this numerical technique is applicable and useful in astrophysical problems with critical points, singularities, or undefined and indeterminate forms in the integration contours of their differential equations. While the CPS is not widely used, it is indeed an appropriate numerical method that avoids the mathematical pathologies that appear in such initial value problems. Ordinary differential equations are integrated directly, without using other algebraic techniques, and one obtains the physically acceptable solutions. 
In particular, here the existence of a critical point does not allow, using conventional methods, the integration from this particular point; rather, one needs to start from a point just before or after it. The implementation of CPS allows us to use exactly the critical point as the initial point of the numerical integration and then move forward and backward in order to obtain the physical solution, at the expense of using a complex--valued function. If we try to solve equations~(\ref{macheq}) or (\ref{macheqpol}) by a simple ``shooting method'' using a common solver, we have to approximate the initial condition $M_c=1$ at the critical point, before and after it, by the supersonic and subsonic solutions, respectively, and then use interpolation at this point to obtain the physical solution. For instance, if we would like to get the physical solution for the isothermal model, we can solve equation~(\ref{macheq}) for two different initial conditions, $M_c = 1^+, 1^-$, and in each trial shift the initial condition closer to unity, observing how the solutions approach the physical one as $M_c \rightarrow 1$ in figure~\ref{fig:figall}. This treatment needs multiple runs until fine tuning is achieved, and it additionally requires an interpolation. On the other hand, by using the l'Hospital rule we need an analytical solution, which may complicate the calculation. In addition, the integration path has a singularity at the origin of the real axis ($\bar{\xi}=0$), as $\xi$ appears in the denominator of the ordinary differential equations~(\ref{macheq}) and (\ref{macheqpol}). This singularity vanishes if the integration proceeds on a complex contour parallel to the real axis, at a distance equal to the imaginary part $\breve{\xi}$. 
Furthermore, when we follow the complete CPS and transform the real function $M(\xi)$ to a complex one, we are able to integrate the ordinary differential equation~(\ref{macheq}) on the real axis, avoiding the singularity at the origin, but we do not obtain the physical solutions; see figure~(\ref{fig4uni}). By using the CPS as described in \S~\ref{integration}, we smoothly avoid all the mathematical pathologies of the problem and verify the topology of the solutions, both for isothermal and for polytropic solar wind models. Also, we check the Bernoulli integral, both in isothermal and in polytropic models, by calculating $B$ in equation~(\ref{bereq}) and $E$ in equation~(\ref{bereqpol}) using the solutions obtained by CPS. Indeed, the numerical experiments show that the deviation of these values remains within the numerical error of the method. In conclusion, the implementation of CPS in the problem of the solar wind velocity yields the physical solution of the problem directly, through a numerical integration, in all the cases of our study. This allows one, on the one hand, to test other promising models for describing solar wind flows and, on the other, to investigate the existence of other solutions in the topology of the existing ones, or further interesting physical characteristics. Overall, CPS is a promising numerical method for studying other flow problems in astrophysics as well. From the physical point of view, we verify the displacement of the sonic point in polytropic models as the polytropic index increases. More specifically, as the polytropic index increases, the sonic point $\xi_c$ (where $M_c=1$) shifts to a larger distance from the corona of the star, as shown in figures~(\ref{figisopolgam}), (\ref{T15vargamma}) and (\ref{T4vargamma}). This is a consequence of the fact that a higher polytropic index leads to a less efficiently heated atmosphere around the star, and at the value $\gamma = 5/3$ the flow becomes adiabatic. 
Consequently, an isothermal atmosphere gives a critical point closer to the star and an adiabatic environment gives a critical point further out, keeping all other parameters the same. This behaviour is physically expected, because an intensively heated atmosphere provides more energy to the wind, which attains supersonic velocities earlier than in the adiabatic case. However, in the polytropic model we are not allowed to examine cases up to $\gamma = 5/3$, because the solar parameters restrict the polytropic index to the range $[1, 1.183]$ \citep{Keppens1999, Wat2012}. The observations of the acceleration region of the fast solar wind made by UVCS showed that the wind could not be accelerated by thermodynamic expansion in the case of the Sun \citep{Kohl:1998}. In addition, although the Parker solar wind model predicts that the wind should make the transition to supersonic flow at about 5 solar radii from the surface, the sonic point appears to be located at only 1 solar radius above the photosphere. This fact suggests that there is some other mechanism, yet to be found, which accelerates the solar wind and is not predicted by the theory of Parker \citep{Priest1987, Fitz2014}. As a consequence, it is necessary either to improve the existing theory or to check other, alternative theories of the solar wind. In this direction, CPS seems to be a useful technique for solving the problem of solar or stellar outflows and carrying out these tests. Of course, CPS does not provide a different physical explanation for the solar wind, but only an alternative numerical method which can be applied to the existing solar wind theory. Concerning the applicability of the CPS method to systems of ordinary differential equations, we need to mention that it has been tested in similar problems. 
A typical case is the study of the magnetic field of a neutron star \citep{GK2011}, where the relativistic Grad--Shafranov equation is solved as a nonhomogeneous Sturm--Liouville problem with nonstandard boundary conditions. All implementations of CPS so far have concerned systems of ordinary differential equations. Therefore CPS constitutes a suitable method for studying several models of self--similar wind outflows or similar astrophysical problems. \section{Conclusion} \label{conclusion} In the present investigation we have examined the one-dimensional steady problem of the solar wind using the CPS, demonstrating that this numerical method is appropriate for solving initial value problems of astrophysical flows, especially simulations that assume a polytropic model in their framework. The ease of obtaining solutions, which are computationally less expensive than corresponding simulations of the time--dependent problem, has allowed us to explore the parameter space focusing on the role of temperature, adiabatic index and mass loss rate in the solution. Some other challenging issues worth investigating are whether CPS can be generalised to relativistic Parker winds by using the Taub equation of state \citep{Taub1948}, to magnetized astrophysical flows by using the Weber--Davis model \citep{Weber1967} containing more than one critical point, to rotating winds \citep{Keppens1999}, to disc winds \citep{Wat2012}, or to self-similar solutions which deal with nonlinear ODEs. All these subjects are beyond the purpose of the present paper and we intend to study them in subsequent publications. \section{Acknowledgements} VK and KNG thank Nektarios Vlahakis and Kanaris Tsinganos for insightful discussions on the solutions of polytropic winds. The simulations were performed on the RIGIL Computing System of the University of Patras, funded by the research programme of the University of Patras ELKE FK-80951. \bibliographystyle{unsrtnat}
\section{Introduction} \label{Intro} Count data are classically observed in many applied fields such as in actuarial science when evaluating risk and pricing insurance contracts \citep[e.g.,][]{Claims}, in genetics to model the number of genes involved in phenotype variability \citep[e.g.,][]{Genes} or in ecology to model species abundance \citep[e.g.,][]{Abundance}. While Poisson models and regression are well-established choices for these types of data, they are not suitable for overdispersed data, which typically occur with an excess of zeroes or extremely large values. To overcome such limitations the use of Poisson mixture models has been proposed. This assumes that the Poisson intensity is no longer an unknown fixed value, but a positive random variable. The mixture approach induces overdispersed distributions with more zeroes and high values compared to the classical Poisson model \citep{Shaked}. A variety of mixture distributions has already been proposed \citep{Karlis}. Classical examples include the gamma distribution \citep{Greenwood}, the lognormal \citep{Bulmer} or the Bernoulli \citep{Lambert}. From a general point of view, any distribution with a non-negative support, finite or not, is a potential candidate as a mixing distribution. Some tests exist to verify whether data are overdispersed \citep{YHA} or whether they come from a mixed Poisson distribution \citep{Carriere}. These tests justify the use of Poisson mixtures, but do not make claims on what type of mixing distribution should be selected. To our knowledge, there are no studies that propose a solution to this problem. This paper aims to propose a new strategy to select an appropriate and efficient mixing distribution family. \medskip Usually one may choose to fit some predetermined Poisson mixtures and keep the best model based on some criteria. However, the following example shows the difficulties in the choice of the mixing distribution.
A sample of size $n=500$ has been simulated from a Poisson-Beta type II distribution with parameters $a = 1$ and $b = 2.2$. This choice ensures finite expectation and variance for the mixed Poisson distribution, equal to $0.83$ and $8.47$ respectively. Such a distribution has been used to model accident proneness \citep{Holla}. First, and as expected, a simple Poisson model fails to properly fit the data (see Figure \ref{fig:example}). In particular, it does not capture well the high frequency of zeroes and the large values once its parameter is estimated ($\hat{\lambda}= 1.02$). \begin{figure}[H] \centering \includegraphics[width=\textwidth]{Graphe_Intro.png} \caption{Fitted Poisson, Poisson-Gamma, Poisson-Inverse-Gamma and Poisson-Lognormal models to data simulated from a Poisson-Beta type II distribution with $a = 1$ and $b = 2.2$ compared to its empirical distribution (Dark Gray).} \label{fig:example} \end{figure} \pagebreak Alternative latent distributions on $\lambda$ have been used for this simulation: either the gamma or the lognormal, as classically proposed in ecology \citep{JSDM}, or the inverse-gamma, applied in actuarial science for liability insurance claims \citep{Tzougas}. Inference is performed in a Bayesian framework (details in Section \ref{Section_Mixture}.2). While the Poisson-gamma, \textit{i.e.} negative binomial, and the Poisson-lognormal are popular choices, the Poisson-inverse-gamma is privileged for this example. Indeed, the \textit{posterior} model probabilities of the negative binomial and Poisson-lognormal are respectively $0$\% and $38.8$\% compared to $61.2$\% for the Poisson-inverse-gamma. The latter mixture distinguishes itself even though the three models behave similarly with regard to Figure \ref{fig:example}.\medskip The reason the inverse-gamma is privileged in this example is its tail behavior, which is similar to that of the beta type II distribution.
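The simulation setup above is easy to reproduce: a Beta type II (beta prime) variate can be drawn as a ratio of independent gamma variates, and the resulting mixed Poisson sample exhibits the stated overdispersion (mean $\approx 0.83$, variance $\approx 8.47$). A sketch using only the Python standard library (the Poisson sampler is ours, not a library call):

```python
import math
import random

def rpois(lam, rng):
    """Poisson sampler: Knuth's method for small intensities,
    a normal approximation for large ones (adequate here)."""
    if lam > 50:
        return max(0, round(rng.gauss(lam, math.sqrt(lam))))
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(1)
n = 50000
sample = []
for _ in range(n):
    # Beta type II (beta prime) intensity via a ratio of gammas:
    # lambda ~ Gamma(a=1) / Gamma(b=2.2)
    lam = rng.gammavariate(1.0, 1.0) / rng.gammavariate(2.2, 1.0)
    sample.append(rpois(lam, rng))

mean = sum(sample) / n
var = sum((y - mean) ** 2 for y in sample) / (n - 1)
print(mean, var)  # roughly the theoretical values 0.83 and 8.47
```

The sample variance is several times the sample mean, which is exactly the overdispersion that a plain Poisson fit cannot reproduce.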
We establish in this paper that an adequate choice for the distribution on $\lambda$ can be made by analysing such a property. This paper is organized as follows. Section \ref{Section_Mixture} presents a classification of various Poisson mixtures based on their tail behavior using extreme value theory. Using these new classifications, we construct in Section \ref{Section_Strategy} a strategy to choose a family of distributions for $\lambda$. This strategy is presented in the form of a decision tree where each step leads to an adequate category of Poisson mixtures. Simulations for each section are also presented to attest to the relevance and usefulness of such an approach. \section{Poisson mixture tail behavior}\label{Section_Mixture} In this section, we present the fundamental results in extreme value theory and the restrictions that arise for discrete distributions. Following this, we present three categories of Poisson mixtures that characterise different tail behaviors and conclude with simulations to assess the interest of selecting such a category. \subsection{Theoretical foundations and Poisson mixtures categories} The tail behavior of a distribution can be studied using extreme value theory. Such a statistical approach analyses how the maximum of a distribution $F$ stabilizes. The theory says that $F$ belongs to a max domain of attraction if there exist two normalizing sequences $a_n > 0$ and $b_n$ such that $F^n(a_n x + b_n)$ converges to a non-degenerate distribution when $n \to \infty$ \citep{Resnick}. There are three possible domains of attraction named Weibull, Gumbel and Fréchet. These domains describe the asymptotic tail behavior of $F$ and correspond respectively to finite, exponential and heavy tailed distributions. In the sequel, they will be denoted by $\mathcal{D}_-$, $\mathcal{D}_0$ and $\mathcal{D}_+$, and we will write this property as $F \in \mathcal{D}$ where $\mathcal{D}$ is one of the three domains.
In the following, we assume that the mixing distribution of $\lambda$ is supported on $\mathbb{R}_+$; then only $\mathcal{D}_0$ and $\mathcal{D}_+$ are considered. While usual continuous distributions belong to a domain of attraction, this is not always the case for discrete random variables. Indeed, a necessary condition for a discrete distribution $F$ to be in a domain of attraction is the long-tailed property \citep{Anderson} defined by: \begin{equation} \lim_{n \to \infty} \frac{1-F(n+1)}{1-F(n)} = 1. \end{equation} In particular, well-known discrete distributions such as, among others, the Poisson, negative binomial or geometric do not satisfy this property. Even though the latter two are Poisson mixtures with a gamma distributed mixing parameter, which belongs to $\mathcal{D}_0$, the domain of attraction is not carried over to the mixture distribution. However, \citet{Anderson} and \citet{Shimura} showed that if a discrete distribution verifies \begin{equation} \lim_{n \to \infty} \frac{1-F(n+1)}{1-F(n)} = L \in (0,1) \end{equation} then $F$ is, in a sense, `close' to the Gumbel domain. More precisely, \citet{Shimura} showed that property (2) implies that $F$ is the discretization of a unique continuous distribution belonging to $\mathcal{D}_0$. On the other hand, \cite{Anderson} showed that there is a sequence $b_n$ such that $F^n(x+b_n)$ has infimum and supremum limits bounded by two different Gumbel distributions, implying that $F$ is not far from this domain.\bigskip Because Poisson mixture distributions are discrete, they must satisfy the long-tailed property in order to belong to a domain. Otherwise, they may still be close to the Gumbel domain. But Poisson mixtures are uniquely identified by the distribution on $\lambda$ \citep{Feller}, which means that their tail behavior depends on the latter. Therefore, we need to understand conditions on $\lambda$ that allow the Poisson mixture distribution to inherit a domain or to be close to the Gumbel one.
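The survival ratios in (1) and (2) are easy to probe numerically. The sketch below contrasts a Poisson law, whose ratio tends to $0$ (so it is neither long tailed nor of type (2)), with a zeta (discrete power-law) distribution, whose ratio tends to $1$ as (1) requires. The truncation bounds are ours and chosen only for illustration:

```python
import math

def poisson_sf(n, lam):
    # P(X > n) = sum_{k > n} e^{-lam} lam^k / k!
    term = math.exp(-lam) * lam ** (n + 1) / math.factorial(n + 1)
    total, k = 0.0, n + 1
    while term > 1e-300:
        total += term
        k += 1
        term *= lam / k
    return total

def zeta_sf(n, s, kmax=200_000):
    # P(X > n) for the zeta distribution, up to the normalizing constant
    # (which cancels in the ratio); truncated sum is accurate enough here
    return sum(k ** -s for k in range(n + 1, kmax))

lam, s = 3.0, 2.5
pois_ratios = [poisson_sf(n + 1, lam) / poisson_sf(n, lam) for n in (20, 40)]
zeta_ratios = [zeta_sf(n + 1, s) / zeta_sf(n, s) for n in (100, 1000)]
print(pois_ratios)  # shrinking toward 0: not long tailed
print(zeta_ratios)  # approaching 1: long tailed
```

The Poisson ratio behaves like $\lambda/(n+2)$ and vanishes, while the zeta ratio climbs toward $1$, illustrating the dichotomy discussed above.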
\cite{Perline} established some conditions for such preservation. From this point forward, we denote by $F$ and $f$ the cumulative distribution function (cdf) and the probability density function (pdf) of $\lambda$, and by $F_M$ the cdf of the resulting Poisson mixture. Firstly, if $F \in \mathcal{D}_+$ and is such that $\lim_{x \to \infty} \frac{x f(x)}{1-F(x)} = \alpha$ for some $\alpha > 0$ (1\textsuperscript{st} Von Mises condition), then $F_M \in \mathcal{D}_+$. Secondly, if $F\in \mathcal{D}_0$, $\lim_{x \to \infty} \frac{d}{dx}\left[ \frac{1-F(x)}{f(x)} \right] = 0$ (3\textsuperscript{rd} Von Mises condition) and $\frac{f(x)}{1-F(x)} = o(x^{-\delta})$ as $x \to \infty$ for some $\delta \geq \frac{1}{2}$, then $F_M \in \mathcal{D}_0$.\bigskip These results clarify the conditions under which the domain of attraction of the mixing distribution is propagated to the associated Poisson mixture distribution. Very naturally, we denote these situations by two categories of Poisson mixtures: \textbf{Fréchet} and \textbf{Gumbel}. A broad set of distributions satisfies the 1\textsuperscript{st} Von Mises condition, for example the Fréchet, folded-Cauchy, beta type II, the inverse-gamma or the gamma/beta type II mixture \citep{Irwin}. Unfortunately, examples are scarcer for the Gumbel domain. Indeed, the extra condition on the hazard function is quite restrictive. Some examples are the lognormal, the Benktander type I and II \citep{Benktander} or the Weibull distribution, with further restrictions on the parameters for the latter two cases. The Gumbel category does not encompass cases like the Poisson-gamma: while the mixing distribution belongs to $\mathcal{D}_0$, it does not satisfy the additional condition on the hazard function. In order to categorise such Poisson mixtures, we study distributions on $\lambda$ that behave like the gamma.
\begin{definition}[\citet{Willmot}] A density $f$ is of \textbf{gamma type} if \begin{equation}\label{gamma_type} \lim_{x \to \infty} \frac{f(x)}{C(x)x^{\alpha} e^{-\beta x}} = 1 \end{equation} where $C(x)$ is a locally bounded function on $\mathbb{R}_+$ which is slowly varying, i.e. $\lim_{t \to \infty} \frac{C(tx)}{C(t)} = 1$ for every $x \in \mathbb{R}_+$ (see \cite{Bingham}), $\alpha \in \mathbb{R}$ and $\beta > 0$. \end{definition} \noindent Using gamma type distributions in the Poisson mixture context allows us to extend the categorisation to cases where $F \in \mathcal{D}_0$ but $F_M \not\in \mathcal{D}_0$. First, we prove that such a mixing distribution belongs to $\mathcal{D}_0$ (Proposition \ref{prop1}). Then, Theorem \ref{thm::valiquette} establishes that $F_M \not\in \mathcal{D}_0$ and quantifies the closeness to the Gumbel domain.\bigskip \begin{proposition}\label{prop1} If $F$ is a gamma type distribution, then $F \in \mathcal{D}_0$. \end{proposition} \begin{proof} Let $\overline{F}$ be the survival function of a gamma type distribution. A sufficient condition for $F \in \mathcal{D}_0$ is that $F$ has an exponential tail, i.e. $\lim_{x\to \infty} \overline{F}(x+k)/\overline{F}(x) = e^{-\beta k}$ where $k \in \mathbb{R}$ \citep{Shimura}. Using L'Hôpital's rule and equation (\ref{gamma_type}), the limit becomes $$\lim_{x\to \infty} \frac{\overline{F}(x+k)}{\overline{F}(x)} = e^{-\beta k} \lim_{x \to \infty} \frac{C(x+k)}{C(x)}.$$ It remains to show that the latter limit is equal to $1$. Because $C(\cdot)$ is slowly varying, we can use the Karamata representation $$C(x) = c(x) \exp\left( \int_1^x t^{-1} \eta(t) dt \right),$$ where $c(\cdot)$ and $\eta(\cdot)$ are both functions from $\mathbb{R}_+$ to $\mathbb{R}_+$, $\lim_{x \to \infty} c(x) = c > 0$ and $\lim_{x \to \infty} \eta(x) = 0$ \citep{Bingham}.
Then the limit equals $$\lim_{x \to \infty} \frac{C(x+k)}{C(x)} = \lim_{x \to \infty} \exp \left( \int_x^{x+k} t^{-1} \eta(t) dt \right).$$ Because $\eta(x) \to 0$, for any $\varepsilon > 0$ we have $0 < \eta(x) < \varepsilon$ for $x$ large enough. Then $$0 < \int_x^{x+k} t^{-1} \eta(t) dt < \varepsilon \int_x^{x+k} t^{-1} dt = \varepsilon \log\left( \frac{x+k}{x} \right) < \varepsilon \log(1 + k),$$ which implies that the limit is equal to $1$ and establishes the sufficient condition. \end{proof}\bigskip \begin{theorem} Let $F_M$ be a Poisson mixture with $\lambda$ distributed according to a gamma type distribution $F$. Then for any integer $k \geq 1$, $\lim_{n \to \infty} \frac{1-F_M(n+k)}{1-F_M(n)} = \left(1+\beta \right)^{-k} \in (0,1)$. In particular, $F_M$ is not long tailed ($k=1$). \label{thm::valiquette} \end{theorem} \begin{proof} Let $P_M$ and $\overline{F}_M$ be the probability mass function and survival function of a Poisson mixture with a gamma type mixing distribution; \cite{Willmot} showed that $$\lim_{n \to \infty} \frac{P_M(n)}{C(n) n^\alpha (1+\beta)^{-(n+\alpha +1)}} = 1.$$ Using this result for integer $k$, we obtain \begin{align*} \lim_{n\to \infty} \frac{\overline{F}_M(n+k+1) - \overline{F}_M(n+k)}{\overline{F}_M(n+1) - \overline{F}_M(n)} &= \lim_{n \to \infty} \frac{P_M(n+k+1)}{P_M(n+1)}\\ &= \left(\frac{1}{1+\beta}\right)^k \lim_{n \to \infty} \frac{C(n+k+1)}{C(n+1)}, \end{align*} where the last limit converges to $1$ by the same argument as in Proposition \ref{prop1}. Because $\overline{F}_M$ is monotonically decreasing, the proof can be concluded by applying the Stolz-Cesàro theorem. \end{proof}\bigskip \noindent This result allows us to characterize a third category: the \textbf{pseudo-Gumbel}. It includes a broad class of mixing distributions, among others the gamma, gamma/Gompertz, exponential, exponential-logarithmic, inverse-Gaussian and its generalization.
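Theorem \ref{thm::valiquette} can be checked exactly in the textbook gamma case: a gamma density with shape $a$ and rate $\beta$ is of gamma type with that $\beta$, and the resulting negative binomial survival ratios should approach $(1+\beta)^{-k}$. A numerical sketch (parameter values are ours):

```python
import math

def nbinom_pmf(n, a, beta):
    # Poisson mixed over gamma(shape a, rate beta) is negative binomial:
    # P(N = n) = Gamma(n+a)/(Gamma(a) n!) * p^a * (1-p)^n, p = beta/(1+beta)
    p = beta / (1.0 + beta)
    return math.exp(
        math.lgamma(n + a) - math.lgamma(a) - math.lgamma(n + 1)
        + a * math.log(p) + n * math.log(1.0 - p)
    )

def sf(n, a, beta, nmax=5000):
    # survival function P(N > n); the truncated sum has converged by nmax
    return sum(nbinom_pmf(k, a, beta) for k in range(n + 1, nmax))

a, beta = 2.0, 0.5
errors = []
for k in (1, 2):
    ratio = sf(500 + k, a, beta) / sf(500, a, beta)
    errors.append(abs(ratio - (1 + beta) ** -k))
    print(k, ratio, (1 + beta) ** -k)
```

Already at $n = 500$ the empirical ratios agree with the limits $(1+\beta)^{-1}$ and $(1+\beta)^{-2}$ to about three decimals, so this mixture sits strictly between long tailed (limit $1$) and light tailed (limit $0$), as the theorem states.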
\citeauthor{Perline}'s result and Theorem \ref{thm::valiquette} lead us to consider three categories for Poisson mixtures, allowing the clarification of the mixing distribution choice. For examples, see Table \ref{tab:categories}; details are given in the supplementary materials. Additionally, Theorem \ref{thm::valiquette} also quantifies how `close' those Poisson mixtures are through the quantity $(1 + \beta)^{-1}$ involved in the limit. Indeed, if $\beta \to 0$, then $\frac{1-F_M(n+1)}{1-F_M(n)} \to 1$, i.e. it approaches a long-tailed distribution. Such a property can blur the distinction between Gumbel and pseudo-Gumbel for some Poisson mixtures. \begin{table}[!ht] \resizebox{\textwidth}{!}{ \begin{tabular}{ |c|c|c| } \hline Mixing ($\lambda$) & Poisson mixture ($P_M)$ & Category \\ \hline Fréchet($a$, $\sigma$) & Poisson-Fréchet & Fréchet\\ Folded-Cauchy($\mu$, $\sigma$) & Poisson-folded-Cauchy & Fréchet\\ Inverse-gamma($a$, $b$) & Poisson-inverse-gamma & Fréchet\\ Beta-II($a$, $b$) & Poisson-beta-II & Fréchet \\ Gamma/Beta-II-mixture($r$, $a$, $b$) & Generalized Waring & Fréchet \\ \hline Lognormal($\mu$, $\sigma$) & Poisson-lognormal & Gumbel\\ Weibull($a$, $b$) & Poisson-Weibull & Gumbel (if $a < 0.5$)\\ Benktander-I($a$, $b$) & Poisson-Benktander-I & Gumbel\\ Benktander-II($a$, $b$) & Poisson-Benktander-II & Gumbel (if $b < 0.5$)\\ \hline Exponential($a$) & Geometric & Pseudo-Gumbel\\ Gamma($a$, $b$) & Negative binomial & Pseudo-Gumbel\\ Inverse-Gaussian($\mu$, $\sigma$) & Sichel & Pseudo-Gumbel\\ Generalized inverse-Gaussian($a$, $b$, $p$) & PGIG & Pseudo-Gumbel\\ \hline \end{tabular}} \caption{Examples of Poisson mixtures and associated categories} \label{tab:categories} \end{table}\bigskip \subsection{Impact of mixing distribution choice on goodness of fit} Are the categories previously defined useful to distinguish how Poisson mixtures behave? How can these categories be efficiently used when it comes to model selection?
To answer these questions, we simulated, for each of several Poisson mixtures, 100 samples of size $n=250$ using the gamma, Fréchet and lognormal distributions on $\lambda$, each one being a representative of one of the three categories. For each sample, the Poisson mixture is fitted with the same three distributions and the best model is kept using a Bayesian framework. This can be done with the \texttt{rstan} package for the language \texttt{R} \citep{Stan} to estimate the parameters by MCMC. The best model is then selected as the one with the highest \textit{posterior} model probability. These probabilities are approximated using the bridge sampling computational technique \citep{Meng} and the dedicated \texttt{R} package \texttt{Bridgesampling} \citep{Bridge}. All results are based on the following priors: a $\mathrm{gamma}(1,1)$ distribution for positive parameters and a $\mathrm{Normal}(0,1)$ for real parameters. Moreover, for each sample we ran 4 MCMC chains with 10000 iterations each in order to ensure reasonable convergence for the parameter estimates and for the \textit{posterior} model probabilities. Results are presented in Table \ref{tab:selection} and, in general, the most selected model stood out. The only exception is the gamma(2,2), for which the lognormal and gamma Poisson mixtures are evenly selected throughout the simulations. \begin{table}[!ht] \centering \begin{tabular}{ |c||c|c|c| } \hline Poisson mixture & Fréchet & Lognormal & Gamma \\ \hline Fréchet(1,1) & \bf{95} & 5 & 0\\ Fréchet(2,1) & \bf{77} & 17 & 6\\ \hline Lognormal(0,1) & 16 & \bf{63} & 21\\ Lognormal(1,1) & 6 & \bf{86} & 8\\ \hline Gamma(2,1) & 0 & 23 & \bf{77}\\ Gamma(2,2) & 1 & 47 & \bf{52}\\ \hline \end{tabular} \caption{Selected model frequencies for each Poisson mixture simulation with the highest frequency in bold.} \label{tab:selection} \end{table}\bigskip \pagebreak This example shows the importance of comparing various mixing distributions.
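The last step of the comparison above, converting (bridge-sampling) estimates of the log marginal likelihoods into posterior model probabilities under equal prior model weights, is a log-sum-exp normalization. A sketch with hypothetical values (the numbers below are illustrative, not the paper's):

```python
import math

def posterior_model_probs(log_ml):
    """Posterior model probabilities under equal prior model weights,
    computed stably from log marginal likelihoods."""
    m = max(log_ml)                      # subtract the max before exponentiating
    w = [math.exp(l - m) for l in log_ml]
    s = sum(w)
    return [wi / s for wi in w]

# hypothetical log marginal likelihoods for three candidate mixtures
log_ml = {"Frechet": -612.4, "Lognormal": -604.9, "Gamma": -605.8}
probs = posterior_model_probs(list(log_ml.values()))
print({name: round(p, 3) for name, p in zip(log_ml, probs)})
```

Even modest differences on the log scale translate into decisive posterior probabilities, which is why a few thousand MCMC iterations per model can already separate the candidates clearly.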
However, this approach based on systematic comparisons may suffer from computational limitations and is difficult to use in practice when the selection involves too many possibilities for the mixing distribution. Indeed, such an approach requires an appropriate choice of priors, a high number of MCMC iterations, and a study of convergence for each mixing distribution. Here, we assumed the same priors and the same number of iterations for each mixing distribution in order to systematically simulate all our categories. In reality, each case must be studied with care and such an approach depends too much on the latent variables of the mixture model. As an alternative, we propose a simple strategy that uses the data directly and allows the user to focus on a specific family of mixing distributions. In doing so, the estimation of the latent variable can be done as a last step. The proposed alternative (see Section \ref{Section_Strategy}) relies on a sequential approach: first select the most appropriate category (Gumbel, Fréchet or pseudo-Gumbel) and then compare only a few representative distributions belonging to the selected domain. For instance, such representative distributions are the lognormal/Benktander-I for the Gumbel category, the inverse-gamma/folded-Cauchy for the Fréchet category or the gamma/inverse-Gaussian for the pseudo-Gumbel case. \section{Strategy}\label{Section_Strategy} This section proposes a strategy to choose a mixing distribution on $\lambda$ using the categories defined in Section \ref{Section_Mixture}. As previously mentioned, an excess of zeroes and extreme values create overdispersion in count data, which induces a particular tail behavior. The main idea is to choose mixing distributions among the three categories according to which one best fits the empirical tail behavior. The peaks-over-threshold (POT) method \citep{Coles} is well adapted for this purpose.
This technique analyses the distribution of the excesses defined by $Y - u\,|\,Y > u$. \cite{Pickands} and \cite{Balkema} showed that $Y$ belongs to a domain of attraction if and only if the distribution of the excesses can be uniformly approximated by a generalized Pareto distribution (GPD) as $u$ tends to the right endpoint of the distribution of $Y$. The corresponding cdf is given by \begin{equation} {H}_{\gamma, \sigma}(y) = \begin{cases} 1- \left(1 + \gamma \frac{y}{\sigma} \right)^{-1/\gamma} & \text{if $\gamma \neq 0$}\\ 1- \exp\left( -\frac{y}{\sigma} \right) & \text{otherwise} \end{cases} \end{equation} with support $\mathbb{R}_+$ if $\gamma \geq 0$ or $\left[0; -\frac{\sigma}{\gamma} \right]$ if $\gamma < 0$, and where $\gamma \in \mathbb{R}$ and $\sigma > 0$ are respectively shape and scale parameters. The sign of the $\gamma$ parameter is intrinsically related to the domain of attraction. Indeed, the distribution of $Y$ belongs to $\mathcal{D}_-$, $\mathcal{D}_0$ or $\mathcal{D}_+$ if $\gamma < 0$, $\gamma = 0$ or $\gamma > 0$ respectively. Therefore, fitting a GPD to the excesses of the count data can inform whether or not the Poisson mixture distribution belongs to a known domain of attraction and, if so, which one. \subsection{Decision tree} Suppose we have overdispersed count data for which we need to fit a Poisson mixture. Our strategy to select appropriate mixing distributions is based on a decision tree (Figure \ref{fig:tree}) leading to the three categories defined in Section \ref{Section_Mixture}. The first step consists in selecting a threshold $u$ large enough for the data and fitting a GPD to the excesses.
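For reference, the GPD cdf defined above is a few lines of code, and two of its properties used by the decision tree are easy to sanity-check: the $\gamma \neq 0$ branch tends continuously to the exponential branch as $\gamma \to 0$, and a positive $\gamma$ yields a heavier (polynomial) tail than $\gamma = 0$. A sketch (the function name is ours):

```python
import math

def gpd_cdf(y, gamma, sigma):
    """Generalized Pareto cdf H_{gamma, sigma} as defined in the text."""
    if y < 0:
        return 0.0
    if gamma == 0.0:
        return 1.0 - math.exp(-y / sigma)
    z = 1.0 + gamma * y / sigma
    if z <= 0.0:          # beyond the upper endpoint when gamma < 0
        return 1.0
    return 1.0 - z ** (-1.0 / gamma)

# small gamma recovers the exponential branch
gaps = [abs(gpd_cdf(y, 1e-8, 1.0) - gpd_cdf(y, 0.0, 1.0))
        for y in (0.5, 2.0, 5.0)]

# gamma > 0 gives a much heavier tail than gamma = 0 at the same scale
sf_pos = 1.0 - gpd_cdf(50.0, 0.5, 1.0)   # polynomial decay
sf_exp = 1.0 - gpd_cdf(50.0, 0.0, 1.0)   # exponential decay
print(gaps, sf_pos, sf_exp)
```

The survival probability at $y = 50$ differs by many orders of magnitude between the two branches, which is exactly the tail contrast the sign of $\hat{\gamma}$ is meant to detect.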
The choice of $u$ can either be based on an empirical quantile or on studying the mean residual life plot \citep{Threshold}, and the GPD parameters can be efficiently estimated using maximum likelihood \citep{Coles}.\bigskip \begin{figure}[H] \centering \includegraphics[width=\textwidth]{Arbre.png} \caption{Decision tree for Poisson mixtures} \label{fig:tree} \end{figure} Two situations arise. First, if the excesses are correctly fitted by the GPD model (left side in Figure~\ref{fig:tree}), then we propose to use a distribution belonging to the Gumbel or Fréchet domain such that the resulting Poisson mixture distribution remains in these domains. Indeed, if the excesses can be approximated by a GPD, then the Poisson mixture must be in a domain of attraction (see \cite{Pickands}, \cite{Balkema}). Therefore, the categories defined by \cite{Perline} are an adequate set of mixing distributions to choose from. The choice of the specific category is directly based on the estimate of the shape parameter obtained at the first step. Testing whether $\gamma = 0$ or not can be done with the deviance statistic \citep{Coles}. If it is the case, we should use a Poisson mixture in the Gumbel category. Else, if $\gamma$ is positive, the Fréchet category should be prioritized. Otherwise, if $\gamma$ is negative, this strategy cannot assess which mixing distribution should be used. However, no such case turned out to be relevant in our study. \bigskip The second situation (right side in Figure~\ref{fig:tree}) corresponds to the case where the GPD model is not well adapted. In this case, any mixing distribution such that the Poisson mixture belongs to the Gumbel or Fréchet domains of attraction should be avoided. A distribution from the pseudo-Gumbel category should potentially be favoured. As demonstrated by \citet{Shimura}, such discrete random variables originate from a unique continuous distribution in $\mathcal{D}_0$ that has been discretized.
That is why we transform the excesses to continuous values thanks to a \textit{jittering} technique consisting in the addition of a continuous random noise \citep{Nagler}. As an application, \cite{Coeurjolly-Rousseau} used this technique to study the Poisson median and to construct an estimator for $\lambda$. In our case, the excesses $Y-u\,|\,Y > u$ are discrete and greater than or equal to $1$. However, the GPD with $\gamma \geq 0$ is defined on $(0, \infty)$. To bring the excesses onto the same support, we jitter them by subtracting a $\mathrm{Uniform}(0,1)$ variable. With these ``jittered'' data points, we fit a GPD again, fixing $\gamma = 0$, and test again whether it is adequate. If the fit is adequate in this case, we consider that the data are pseudo-Gumbel. Otherwise, one should resort to another approach in order to choose a mixing distribution. However, we rarely encountered this situation in our simulations. \subsection{Evaluation of a sequential approach for mixing distribution selection} In order to study the performance of the proposed strategy, various Poisson mixture samples have been simulated. The decision tree is then systematically applied using the \texttt{evd} package \citep{EVD} for maximum likelihood estimation of the GPD parameters, the modified Anderson-Darling test for goodness-of-fit and the deviance statistic for testing the nullity of the shape parameter. Let $X_1, \dots, X_m$ denote $m$ i.i.d.\ random variables ordered as $X_{(1)} \leq \dots \leq X_{(m)}$. The modified Anderson-Darling test statistic for a distribution $H$ is defined by $$T(X_1, \dots, X_m) = \frac{m}{2} - 2\sum_{i=1}^m H(X_{(i)}) - \sum_{i=1}^m\left[ 2 - \frac{2i - 1}{m} \right] \log(1-H(X_{(i)})). $$ As presented in \cite{GPD_Review}, this statistic has an asymptotic distribution given by a weighted sum of $\chi_1^2$ variables. For this simulation study, the $m$ random variables are the excesses and $H$ is the GPD. We point out that such a test works for any distribution $H$.
However, some tests exist specifically for the case where $H$ is a GPD; see \cite{Toulemonde} or \cite{Villasenor-Gonzalez} for examples. \pagebreak Finally, to test $H_0:\gamma = 0$ versus $H_1:\gamma\neq 0$, we fit the two models, that is the complete one and the restricted one, evaluate the corresponding log-likelihoods, namely $\mathcal{L}_1$ and $\mathcal{L}_0$, and conclude with the deviance statistic $D = 2\left(\mathcal{L}_1 - \mathcal{L}_0 \right)$, which follows a $\chi_1^2$ under suitable conditions \citep{Coles}. \\ The following simulation scheme is then applied in the language \texttt{R}: \begin{enumerate} \item For a fixed sample size $n$ and a Poisson mixture $F_M$ with fixed parameters, simulate the mixed Poisson observations $\mathbf{Y} :=\left(Y_1, \dots, Y_n\right)$. \item For a threshold $u$ based on the sample $\mathbf{Y}$ (for example the $95\mathrm{th}$ quantile), get the excesses $\mathbf{X} := \mathbf{Y} - u \,|\, \mathbf{Y} > u$. \item Calculate the MLE of $\gamma$ and $\sigma$ of the GPD for $\mathbf{X}$ using the \texttt{evd::fpot} function with the Nelder-Mead optimization method. \item Test the GPD fit for $\mathbf{X}$ with the modified Anderson-Darling test ($\alpha = 0.05$). The p-values are calculated with a bootstrap approach using $250$ iterations (see \cite{GPD_Review}). \item Evaluate which category the sample is classified into using the decision tree, with the following outcomes: \begin{enumerate} \item If the test at step 4 for $\mathbf{X}$ is not rejected, use the deviance statistic to find which domain of attraction $\mathbf{X}$ belongs to. If $\gamma < 0$ is significant, the sample fails to have a category. \item Else, repeat steps 3 and 4 for the jittered excesses $\mathbf{X}^c := \mathbf{X} - \mathrm{Unif}(0,1)$ with $\gamma$ fixed at $0$. If the GPD is not rejected, the sample belongs to the pseudo-Gumbel category. Otherwise the sample fails to have a category. \end{enumerate} \item Repeat steps 1 to 5 a thousand times.
\end{enumerate} Distributions from the three categories are tested using these steps with $n$ equal to $1000$ or $2000$, the threshold being fixed at the $95\mathrm{th}$ and $97.5\mathrm{th}$ empirical quantiles for $n = 1000$, with the $98.5\mathrm{th}$ empirical quantile added for $n = 2000$. For the Fréchet category, Fréchet and folded-Cauchy mixing distributions are simulated. For the Gumbel category, lognormal and Weibull mixing distributions are simulated. Finally, gamma and inverse-Gaussian mixing distributions are simulated for the pseudo-Gumbel category. Results are presented in Table \ref{tab:sim} and in the Supplementary Materials section (other sets of parameters, different sample sizes and different thresholds). \begin{table}[H] \centering \resizebox{0.8\textwidth}{!}{ \begin{tabular}{|c|c|c|c|} \hline Mixing distribution & Average excesses & GPD Rejection & Category \\ \hline Fréchet(1,1) & 48.727 & 0.069 & 0.917 (Fréchet)\\ Folded-Cauchy(0,1) & 48.243 & 0.078 & 0.896 (Fréchet)\\ \hline Lognormal(1,1) & 46.750 & 0.126 & 0.720 (Gumbel)\\ Weibull(0.5, 1) & 46.246 & 0.133 & 0.754 (Gumbel)\\ \hline Gamma(2,1) & 36.200 & 0.704 & 0.635 (Pseudo-Gumbel)\\ Inverse-Gaussian(1,2) & 38.977 & 0.856 & 0.709 (Pseudo-Gumbel)\\ \hline \end{tabular} } \caption{Average number of excesses, sample proportion of GPD rejection and proportion of the most frequent category for the simulations with $n = 1000$ and $u$ the $95\mathrm{th}$ empirical quantile} \label{tab:sim} \end{table} These simulations aim to assess whether the decision tree adequately identifies the Poisson mixture categories. To do so, we calculate the proportion of samples for which the GPD is rejected in the first branch and the proportion of the most frequent category to which the Poisson mixture is identified. For the Gumbel and Fréchet categories, we should see a low GPD rejection frequency. Conversely, the pseudo-Gumbel category should have a high GPD rejection rate.
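Step 5(b) above has a convenient special case worth spelling out: with $\gamma$ fixed at $0$ the GPD reduces to an exponential distribution, whose maximum likelihood estimate of $\sigma$ is simply the sample mean of the jittered excesses. A sketch on synthetic geometric excesses (hypothetical data and parameters, not the paper's):

```python
import math
import random

rng = random.Random(7)
p = 0.2
n = 20000

# discrete excesses >= 1, geometric: P(X = k) = p (1-p)^(k-1),
# sampled by inversion (1 - random() lies in (0, 1], so log is safe)
excesses = [1 + int(math.log(1.0 - rng.random()) / math.log(1.0 - p))
            for _ in range(n)]

# jitter onto (0, inf) by subtracting Uniform(0,1) noise
jittered = [x - rng.random() for x in excesses]

# GPD with gamma = 0 is exponential(sigma); the MLE is the sample mean
sigma_hat = sum(jittered) / len(jittered)
print(sigma_hat)  # near 1/(-log(1-p)) ~ 4.48, the matching exponential scale
```

The estimate lands close to the scale of the continuous exponential that the jittered geometric approximates, which is the mechanism the pseudo-Gumbel branch of the tree exploits.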
For all cases, we should see a high proportion of samples adequately identified with their category.\bigskip As presented in Table \ref{tab:sim}, most of the simulated Poisson mixtures can be adequately identified with the appropriate category. For the mixing distributions Fréchet(1,1), folded-Cauchy(0,1), lognormal(1,1) and Weibull(0.5,1), the GPD is mostly adequate for the excesses, as shown by its low rejection rate. For the pseudo-Gumbel mixtures gamma(2,1) and inverse-Gaussian(1,2), the GPD is mostly rejected and, once the excesses are jittered, the Gumbel domain is found. Moreover, the simulation results for a Weibull($a$,$b$) mixing distribution are consistent with the theory. Indeed, as mentioned in Table \ref{tab:categories}, the Poisson-Weibull is in the Gumbel category if its parameter $a$ is smaller than $0.5$. In comparison to the limit case Weibull(0.5, 1) in Table \ref{tab:sim}, we also tested the Weibull(1,1) and found that this mixture does not belong to the Gumbel category (Table 2 Supp. Material).\bigskip Some interesting factors affecting the categorization have been identified. First, as noted by \cite{Hitz}, the discrete excesses need a certain amount of variability in order to have a smooth adjustment to the GPD. If the variance is not high enough, it can be difficult in practice to adequately identify the mixture category. For example, based on the simulations, the Fréchet(2,1) Poisson mixture is attributed to the Fréchet domain only after the excesses are jittered (Table 1 Supp. Material). This distribution has a finite expectation, which results in a more stable sample compared to the Fréchet(1,1). Here, both distributions do not have a finite variance because their parameter $a \leq 2$. Still, because the Fréchet(2,1) is the limit case, it leads to a less volatile sample of excesses. This situation is identified in the lognormal case as well. Indeed, the Poisson mixture using the lognormal(0,1) has a smaller variance compared to the mixture with a lognormal(1,1).
In our simulations, the former case is harder to identify adequately with the Gumbel category than the latter (Table 2 Supp. Material).\bigskip Another important aspect is the choice of the threshold $u$. Indeed, the threshold affects the variance and the domain of attraction inferred from the data. For example, the lognormal(0,1) may be difficult to identify due to its variance, but once $u$ is large enough we do find the Gumbel category. For instance, when $n = 1000$ and $u$ is the $95\mathrm{th}$ quantile, 257 out of 1000 samples are classified in the Gumbel category compared to 476 out of 1000 samples in the pseudo-Gumbel category. However, for the same $n$ and $u$ at the $97.5\mathrm{th}$ quantile, 790 out of 1000 samples are classified in the Gumbel category compared to 64 out of 1000 samples in the pseudo-Gumbel category. A larger threshold $u$ is therefore necessary in this case. However, the threshold can also be too large for some distributions, like the gamma. In particular, when $n = 1000$ and $u$ is the $95\mathrm{th}$ quantile, there should be on average $50$ excesses, but Table 3 indicates that the gamma(2,1) has $36.2$ excesses. This can be explained by the lack of distinct values in the right tail of the Poisson mixture, which leads to an underrepresented sample of excesses. Such a discrepancy greatly affects the categorization. For example, with $n = 2000$ and $u$ at the $98.5\mathrm{th}$ quantile, the gamma(2,1) has on average $21.927$ excesses and is mostly classified in the Gumbel domain (Table 3 Supp. Material). This inappropriate classification is clearly due to the lack of excesses.\bigskip Finally, some pseudo-Gumbel Poisson mixtures can be very close to the Gumbel domain. For instance, the inverse-Gaussian$(1,2)$ is adequately classified, but the inverse-Gaussian$(2,1)$ is mostly classified in the Gumbel category (Table 3 Supp. Material). This can be explained by the 'closeness' property described in Theorem 1.
Indeed, the density of an inverse-Gaussian$(\mu,\sigma)$ can be represented by $$f(x) = C(x) x^{-3/2} \exp\left(-\frac{\sigma}{2\mu^2} x\right)$$ where $C(x)= C \exp\left(-\frac{\sigma}{2x}\right)$ with $C$ the normalizing constant. By equation (3), $\beta = \sigma/(2\mu^2)$, and substituting the values $\mu = 2$ and $\sigma = 1$ gives $\beta = 1/8$. Therefore the limit defined in Theorem 1 indicates that the resulting Poisson mixture satisfies $$\lim_{n\to \infty} \frac{1-F_M(n+1)}{1-F_M(n)} = \frac{8}{9}.$$ Because this limit is close to that of a long-tailed distribution, i.e. near $1$, the distinction between pseudo-Gumbel and Gumbel gets blurred. To visualize how this 'closeness' affects the fit, let the parameter $\mu = 2$ be fixed and let $\sigma$ vary from $0.1$ to $8$ for the inverse-Gaussian. For each value of $\sigma$, we simulate 500 samples of size $n=2000$ from the Poisson mixture, fix the threshold $u$ at the $97.5 \mathrm{th}$ empirical quantile, and calculate the proportion of samples where the GPD is rejected at level $\alpha = 0.05$. In theory, the limit from Theorem 1 should approach $0$ as $\sigma$ gets larger, which means moving further from the Gumbel domain and results in more frequent rejections of the GPD. This behavior is reflected in Figure \ref{fig:reject} as $\sigma$ approaches $8$. \begin{figure}[H] \centering \includegraphics[width=0.65\textwidth]{Rejet_IG.png} \caption{Proportion of inverse-Gaussian($2$, $\sigma$) Poisson mixture samples (size $n=2000$) for which the GPD has been rejected ($\alpha = 0.05$) for the excesses ($u = 97.5 \mathrm{th}$ quantile), as a function of $\sigma$.} \label{fig:reject} \end{figure} \section{Conclusion and perspectives} Overdispersed count data are commonly observed in many applied fields, and Poisson mixtures are appealing models for such data \citep{Karlis}.
However, the choice of the appropriate mixing distribution is a difficult task relying mainly on empirical approaches subject to the modeler's judgment, or on intensive computational techniques combined with goodness-of-fit tests or information criteria. In this paper, we proposed a new strategy based on the analysis of the tail behavior of the data. We extend the usual Gumbel and Fréchet domains of attraction by introducing the pseudo-Gumbel category for Poisson count data. In particular, we show how tail behavior can provide a great deal of information to evaluate the mixing distributions. Based on a sequential strategy and a decision tree, we proposed a useful and efficient approach to select the most appropriate category, allowing one to focus on a more restricted set of potential candidates. The choice of the most appropriate distribution within a given category is not dealt with in this paper. Strategies to guide this choice can be proposed, based for instance on simplicity, either for the inferential step, for the inclusion of covariates, or for biological interpretations. Moreover, considerable research effort has recently been devoted to jointly modeling count data. For instance, joint species distribution models have been proposed, extending classical species distribution models in ecology \citep{JSDM}, and are often based on the multivariate lognormal distribution \citep{Aitchison,Robin}. Based on our approach, various and flexible models could be developed, combining different mixing distributions belonging to different categories (Gumbel, Fréchet or pseudo-Gumbel) with copulas to model dependence structures between continuous mixing distributions \citep{Nelsen}. \section*{Acknowledgments} This research was supported by the GAMBAS project funded by the French National Research Agency (ANR-18-CE02-0025) and the French national programme LEFE/INSU.
We also thank Éric Marchand for the helpful comments and fruitful discussions.
\section{Introduction} \label{s:intro} The identification of models with guaranteed simulation accuracy is of great importance in all applications where long range predictions and the related error bounds are used for a robust decision-making task. Examples include resource planning, operations scheduling, and predictive control. In this paper, we address this problem for the case of discrete-time, linear time-invariant systems. Our aim is to obtain, from a finite data set, a one-step-ahead model of the system and a measure of its accuracy, in terms of bounds on the simulation error. We want to derive such bounds point-wise in time, for a long, possibly infinite, future simulation horizon, under the action of known future input signals. The most popular identification procedures are studied in a stochastic framework, see e.g. \cite{ljung1999system}, where theoretical guarantees have been derived assuming that the noise signals are ruled by a probability distribution function. However, many applications feature unknown stochastic properties of the noise, or no sensible statistical hypotheses can be made at all \cite{fogel1982value}. Motivated by these difficulties, Set Membership identification approaches have been developed under different hypotheses, such as bounded noise and uncertainties, pioneered by \cite{schweppe1968recursive} and \cite{witsenhausen1968sets}. The Set Membership approach provides a way to identify models of systems and to measure their quality without any probabilistic assumptions, referring only to the given data set and noise bounds \cite{kurzhanski1994modeling}, \cite{milanese2013bounding}, \cite{milanese2005model}, \cite{milanese1991optimal}, \cite{walter1992recursive}. In most of the existing works, the noise bound is assumed to be known a priori, which can be a limiting assumption as well. 
One of the few exceptions is \cite{bai1998convergence}, where the authors propose a way to estimate the noise bound using probabilistic reasoning.\\ \noindent Another relevant aspect is the purpose of the identification process. Models tuned for multi-step prediction give better performance when used for simulation, e.g. in Model Predictive Control (MPC) schemes, see \cite{farina2011simulation}, \cite{lauri2010pls}. Several approaches address the multi-step-ahead identification problem, see e.g. \cite{lauri2010pls}, \cite{potts2014improving}, \cite{shook1991identification}, \cite{shook1992control}, mainly in a stochastic framework. These approaches do not provide a way to quantify the model quality in terms of bounds on the simulation error, which could be directly exploited in robust decision making. In this paper, we resort to the Set Membership framework and consider linear systems with bounded noise where, contrary to most existing works, the bound is a priori unknown. This setting is representative of most real-world applications, where only a rough idea of the noise intensity might be available. We present new theoretical results that allow one to estimate the noise bound from data. A preliminary version of these results has been published in \cite{LF_CDC_18}. Here, we extend the findings to the multiple-input, multiple-output case, and to the case of a predictor structure derived from a state-space representation. Moreover, we introduce a new result to estimate the worst-case simulation error bounds for any simulation horizon, up to infinity. We derive a clear link between the obtained infinite-horizon bound and the estimated noise bounds, model order, system decay rate, and horizon used in the model identification routine.
The identification procedure stemming from such theoretical results is composed of four steps: 1) estimation of the noise bound; 2) estimation of the system order; 3) estimation of the impulse responses' decay rates; 4) identification of the model parameters. In this process, the concept of Feasible Parameter Set (FPS) is exploited to define the guaranteed simulation error bounds for a given model, and to constrain the parameters to be identified. We finally prove that the models derived with our procedure are guaranteed to be asymptotically stable, a property that is non-trivial to enforce during the identification phase, see \cite{cerone2011enforcing}. The estimation of the noise bound, of the model order and decay rate, and the analysis of the properties of the finite-horizon and infinite-horizon error bounds, together with the results on the asymptotic stability of the identified models, are the main novelties of our work with respect to the Set Membership literature. We test the proposed procedure both in a numerical example, where the true quantities are known and the method can be evaluated in full, and in a real-world experimental application, pertaining to the roll rate dynamics of an autonomous glider. The paper is organized as follows. Section \ref{s:probl_form} contains assumptions and problem formulation. In Section \ref{s:ms_sm_appr} the new theoretical results are presented. Section \ref{s:pred_ident} deals with the identification of the predictor parameters. Section \ref{s:ss_form} extends the obtained results to the state-space model structure with measured state. Section \ref{s:results} presents the numerical and experimental results, and Section \ref{s:conclusions} concludes the paper. 
\section{Working assumptions\\ and problem formulation} \label{s:probl_form} \subsection{Assumptions on the system, model structure and order} We consider a discrete time, linear time invariant (LTI) system in the form: \begin{equation} \label{eq:sist_desc} \begin{aligned} x(k+1)&=Ax(k)+Bu(k) \\ z(k)&=Cx(k), \end{aligned} \end{equation} with state $x(k)\in\mathbb{R}^n$, input $u(k) \in \mathbb{R}^m$ and output $z(k)\in\mathbb{R}^q$. Here $k\in\mathbb{Z}$ denotes the discrete time variable. The output measurement $y(k) \in \mathbb{R}^q$ is affected by an additive noise $d(k)\in\mathbb{R}^q$, leading to: \begin{equation} \label{eq:disturbed_output} y(k)=z(k)+d(k). \end{equation} We denote with $z_i(k),\,y_i(k),\,d_i(k)$, the $i$-th component of vectors $z(k),\,y(k),\,d(k)$, respectively, where $i=1,\hdots,q$. \begin{remark} \label{rm:index_range_dropping} All of the theoretical developments and practical algorithms have to be applied to each output component individually. Therefore, for the sake of notational simplicity, the notation $i=1,\ldots,q$ will be omitted. \end{remark} \begin{assumption} \label{as:asympt_stable} The system \eqref{eq:sist_desc} is asymptotically stable. \end{assumption} \begin{assumption} \label{as:bounded_dist} The measurement noise and the system input are bounded. In particular: \begin{itemize} \item $|d_i(k)| \leq \bar{d}_{0_i},\; \forall k\in\mathbb{Z}, \; \bar{d}_0 \in \mathbb{R}^{q}.$ \item $u(k)\in\mathbb{U}\subset \mathbb{R}^m, \; \forall k \in \mathbb{Z}, \; \mathbb{U} \; \text{compact}.$ \end{itemize} \end{assumption} \begin{assumption} \label{as:obs_and_reach} The system \eqref{eq:sist_desc} is fully observable and reachable. \end{assumption} \noindent Assumptions \ref{as:asympt_stable} and \ref{as:bounded_dist} are common in system identification problems in real-world applications. 
Assumption \ref{as:obs_and_reach} is made for simplicity, as it can be relaxed by considering only the observable and controllable sub-space of the system state. Under Assumption \ref{as:obs_and_reach}, for any given $p \in \mathbb{N}$, the output equations can be written in auto-regressive form with exogenous input (ARX): \begin{equation} \label{eq:output_gen_pred_form} z_i(k+p)=\psi_{i_p}(k)^T\theta_{i_p}^0, \end{equation} where $^T$ denotes the matrix transpose operation, and the regressor $\psi_{i_p}(k)$ is given by: \begin{equation}\label{eq:regressor_n} \begin{array}{rcl} \psi_{i_p}(k)&=&\left[ Z_{i_n}^T(k) \; U_{p,n}^T(k) \right]^T\in \mathbb{R}^{n+m(n+p-1)}\\ Z_{i_n}(k)&=& \left[ z_i(k) \; z_i(k-1) \; \hdots \; z_i(k-n+1) \right]^T\in \mathbb{R}^{n}\\ U_{p,n}(k)&=&\left[u(k+p-1)^T \; \hdots\right.\\ & & \left. u(k)^T \; \hdots \; u(k-n+1)^T \right]^T\in \mathbb{R}^{m(n+p-1)}. \end{array} \end{equation} In addition, $\theta_{i_p}^0 \in \mathbb{R}^{n+m(n+p-1)}$ is the vector of the true system parameters, which is given by $\theta_{i_p}^0=\left[\theta_{i_{p,z}}^{0^T} \; \theta_{i_{p,u}}^{0^T} \right]^T$, where $\theta_{i_{p,z}}^0$ consists of the parameters related to past values of the output $z_i$, and the entries of $\theta_{i_{p,u}}^0$ are the parameters related to past and future input values. 
For a discrete time LTI system of the form \eqref{eq:sist_desc}, if all the eigenvalues of $A$ have magnitude strictly smaller than 1 (Assumption \ref{as:asympt_stable}), then, for any initial condition $x_0$ and for any bounded input $u$ such that $\Vert u_i(k)\Vert <M, \, \forall k$, $i=1,\hdots,m$, the system outputs are bounded by \[ \left\Vert z_i(k) \right\Vert_2 \leq \left\Vert C_i\right\Vert_2\cdot\left\Vert A^k \right\Vert_2\cdot\left\Vert x_0\right\Vert_2 + M \left\Vert C_i\right\Vert_2\cdot \sum_{j=0}^{k-1}\left\Vert A^j\right\Vert_2\cdot \left\Vert B \right\Vert_2, \] with $i=1,\hdots,q$, $k>0$, and $\Vert A^k \Vert < L \rho^k$, where $0<\rho<1$, $L>0$, see e.g. \cite{zadeh2008linear}. Thus, under Assumption \ref{as:asympt_stable}, the system parameters are bounded by exponentially decaying trends: \begin{equation} \label{eq:sists_dec_bound} \begin{aligned} &\left\vert \theta_{i_{p,u}}^{0,(l)} \right\vert \leq L_i\rho_i^{\lceil \frac{l}{m} \rceil} \, , \; l=1,\hdots,m(n+p-1) \\ &\left\vert \theta_{i_{p,z}}^{0,(l)} \right\vert \leq L_i\rho_i^{p+l}, \; l=1,\hdots,n \end{aligned} \end{equation} where $^{(l)}$ denotes the $l$-th entry of a vector, $\lceil \; \rceil$ denotes the ceiling function, and $L_i$, $\rho_i$ are scalars that depend on the system matrices in \eqref{eq:sist_desc}.\\ The one-step-ahead dynamics of the system output are then given by \eqref{eq:output_gen_pred_form} with $p=1$. For any $p>1$, the elements of the parameter vector $\theta_{i_p}^0$ are polynomial functions of the entries of $\theta_{i_1}^0$, i.e.: \begin{equation} \label{eq:polynomial_functions} \theta_{i_p}^0=h_{p,n}(\theta_{i_1}^0). 
\end{equation} \noindent The explicit expressions of the polynomial functions $h_{p,n}:\mathbb{R}^{n(m+1)}\rightarrow\mathbb{R}^{n+m(n+p-1)}$ can be readily obtained by recursion of \eqref{eq:output_gen_pred_form} with $p=1$ and are omitted here for simplicity.\\ We consider a model structure given by $q$ one-step-ahead predictors, one for each output signal, written in the ARX form as: \begin{equation} \label{eq:1s_pred_classic_form} \hat{z}_i(k+1)=\varphi_{i_1}(k)^T\theta_{i_1}, \end{equation} where the regressor $\varphi_{i_p}(k)$ is given by: \begin{equation}\label{eq:regressor_o} \begin{array}{rcl} \varphi_{i_p}(k)&=&\left[ Y_{i_o}^T(k) \; U_{p,o}^T(k) \right]^T\in \mathbb{R}^{o+m(o+p-1)}\\ Y_{i_o}(k)&=& \left[ y_i(k) \; y_i(k-1) \; \hdots \; y_i(k-o+1) \right]^T\in \mathbb{R}^{o}\\ U_{p,o}(k)&=&\left[u(k+p-1)^T \; \hdots\right.\\ & & \left. u(k)^T \; \hdots \; u(k-o+1)^T \right]^T\in \mathbb{R}^{m(o+p-1)}. \end{array} \end{equation} In practice, $\varphi_{i_1}(k)$ is the counterpart of $\psi_{i_1}(k)$ with order $o$ (model order) instead of $n$ (system order), and corrupted by noise (compare \eqref{eq:regressor_n} and \eqref{eq:regressor_o}), while $\theta_{i_1} \in \mathbb{R}^{o(m+1)}$ denotes the vector of model parameters to be identified from data. \begin{assumption} \label{as:model_order} The user-selected model order $o$ is such that $o\geq n$. \end{assumption} \noindent This assumption is needed to derive part of our theoretical results. In practice, one can initially choose a very large order to make sure that Assumption \ref{as:model_order} is satisfied, and then use our Theorem \ref{th:conv_lambda_diff_d} and the related Procedure \ref{p:o_est_procedure} (both presented in the next section) to obtain a tighter upper-estimate of $n$. \subsection{Multi-step predictors and assumption on data} \label{s:multitep_pred} In our method, we resort to the concept of multi-step predictors. 
For an LTI system, the multi-step predictor of the $i$-th system output, pertaining to a given horizon $p>1$, has the following general form: \begin{equation} \label{eq:p_pred_classic} \hat{z}_i(k+p)=\varphi_{i_p}(k)^T\theta_{i_p}. \end{equation} We refer to $p$ equivalently as the \emph{prediction horizon} or \emph{simulation horizon} in this paper. If the multi-step predictor is obtained by iteration of the one-step-ahead model \eqref{eq:1s_pred_classic_form}, then, similarly to \eqref{eq:polynomial_functions}, the elements of the parameter vector $\theta_{i_p}$ are polynomial functions of the entries of $\theta_{i_1}$, denoted as: \begin{equation} \label{eq:polynomial_model} h_{p,o}:\mathbb{R}^{o(m+1)}\rightarrow\mathbb{R}^{o+m(o+p-1)},\,p\geq 1 \end{equation} and obtained by recursion of \eqref{eq:p_pred_classic} with $p=1$. Let us now denote by $\psi_{i_{p_o}}(k)$ the noise-free version of $\varphi_{i_p}(k)$ \eqref{eq:regressor_o}, i.e. using variable $z_i$ instead of $y_i$. Under Assumptions \ref{as:asympt_stable}-\ref{as:bounded_dist}, it follows that: \begin{equation}\nonumber \label{eq:psi_set_arx_case} \psi_{i_{p_o}}(k)\in \Psi_{i_{p_o}}, \, \Psi_{i_{p_o}} \, \text{compact}, \, \forall p \in \mathbb{N}, \, \forall k \in \mathbb{Z}. \end{equation} Moreover, the regressor $\varphi_{i_p}(k)$ also belongs to a compact set, indicated as $\Phi_{i_p}$: \begin{equation} \label{eq:phi_set_arx_case} \varphi_{i_p}(k) \in \Phi_{i_p}=\Psi_{i_{p_o}} \oplus \mathbb{D}_{i_p}, \; \forall p \in \mathbb{N}, \; \forall k \in \mathbb{Z}, \end{equation} where $F\oplus M=\{f+m:f\in F, \, m \in M \}$ is the Minkowski sum of sets $F$, $M$, and $\mathbb{D}_{i_p}\subset \mathbb{R}^{o+m(o+p-1)}$,\small \begin{equation} \label{eq:D_arx_def} \mathbb{D}_{i_p} \doteq\left\{ \left[ d_i^{(1)}, \hdots, d_i^{(o)}, 0, \hdots, 0 \right]^T: \left\vert d_i^{(l)} \right\vert \leq \bar{d}_{0_i},\,l=1,\ldots,o \right\}.
\end{equation}\normalsize Namely, $\mathbb{D}_{i_p}$ is the set of all possible noise realizations that can affect the system output values in $\varphi_{i_p}$. We assume that a finite number of measured pairs $(\tilde{y}(k),\tilde{u}(k))$ is available for the model identification task, where $\tilde{\cdot}$ is used to denote a sample of a given variable. For each simulation horizon $p$, these data form the following set of sampled regressors and corresponding output values: \begin{equation} \label{eq:samp_dataset_arx}\nonumber \tilde{\mathscr{V}}_{i_p}^N \doteq \left\{ \tilde{v}_{i_p}(k)=\begin{bmatrix} \tilde{\varphi}_{i_p}(k) \\ \tilde{y}_{i_p}(k) \end{bmatrix}, \, k=1,\hdots,N \right\}, \end{equation} where $\tilde{\mathscr{V}}_{i_p}^N \subset \mathbb{R}^{1+o+m(o+p-1)}$ and $\tilde{y}_{i_p}(k) \doteq \tilde{y}_i(k+p)$. Here, for simplicity and without loss of generality, we consider that the number of sampled regressors $N$ is the same for any considered value of $p$. The set $\tilde{\mathscr{V}}_{i_p}^N$ can be seen as a countable, sampled version of its continuous counterpart, $\mathscr{V}_{i_p}$: \begin{equation} \label{eq:cont_dataset_arx}\nonumber \mathscr{V}_{i_p} \doteq \left\{ v_{i_p}=\begin{bmatrix} \varphi_{i_p} \\ y_{i_p} \end{bmatrix}, \, y_{i_p} \in Y_{i_p}(\varphi_{i_p}), \, \forall \varphi_{i_p} \in \Phi_{i_p} \right\}, \end{equation} where $\mathscr{V}_{i_p} \subset \mathbb{R}^{1+o+m(o+p-1)}$, and $Y_{i_p}(\varphi_{i_p}) \subset \mathbb{R}$ represents the compact set of all the possible output values corresponding to each regressor $\varphi_{i_p} \in \Phi_{i_p}$ and to every possible noise realization $d_i:|d_i|\leq \bar{d}_{0_i}$. 
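In practice, the sampled regressors $\tilde{\varphi}_{i_p}(k)$ can be assembled directly from logged input-output data following the structure of \eqref{eq:regressor_o}; a minimal sketch (0-based time indexing and NumPy arrays are our assumptions):

```python
import numpy as np

def build_regressor(y_i, u, k, p, o):
    """phi_ip(k) as in eq. (regressor_o): past outputs y_i(k),...,y_i(k-o+1)
    stacked over inputs u(k+p-1),...,u(k-o+1). y_i is 1-D, u is (T, m);
    0-based indices, so k-o+1 >= 0 and k+p-1 <= T-1 are required."""
    Y = y_i[k - o + 1 : k + 1][::-1]            # y_i(k), ..., y_i(k-o+1)
    U = u[k - o + 1 : k + p][::-1].ravel()      # u(k+p-1), ..., u(k-o+1)
    return np.concatenate([Y, U])               # length o + m*(o+p-1)
```

For the one-step-ahead case $p=1$, this reduces to the regressor of \eqref{eq:1s_pred_classic_form}.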
Let us define the distance between $\tilde{\mathscr{V}}_{i_p}^N$ and $\mathscr{V}_{i_p}$ as: \[ d_2 \left( \mathscr{V}_{i_p},\tilde{\mathscr{V}}_{i_p}^N \right)\doteq \underset{v_1 \in \mathscr{V}_{i_p}^{\phantom{0}}}{\textrm{max}} \underset{v_2 \in \tilde{\mathscr{V}}_{i_p}^{N}}{\textrm{min}} \left\Vert v_2-v_1 \right\Vert_2. \] We consider the following assumption on the data set: \begin{assumption} \label{as:info_data} For any $\beta>0$, there exists a value of $N<\infty$ such that $d_2 \left( \mathscr{V}_{i_p},\tilde{\mathscr{V}}_{i_p}^N \right) \leq \beta$. \end{assumption} \noindent Assumption \ref{as:info_data} pertains to the informative content of the sampled data set. It means that, by adding more points to $\tilde{\mathscr{V}}_{i_p}^N$, the set of all the system trajectories of interest is densely covered. This can be seen as a persistence of excitation condition combined with a bound-exploring property of the noise signal $d$. \subsection{Problem formulation} We are now in a position to formalize the problem addressed in this paper. \\$\,$\\ \fbox{\parbox{0.98\columnwidth}{ \textbf{Problem 1}. Under Assumptions \ref{as:asympt_stable}-\ref{as:info_data}, use the available data sets $\tilde{\mathscr{V}}_{i_p}^N$ to: \begin{enumerate} \item Estimate the noise bounds $\bar{d}_{0_i}$; \item Select the model order $o\approx n$; \item Estimate the parameters $L_i$, $\rho_i$ defining the system's decaying trend \eqref{eq:sists_dec_bound}; \item Identify the model parameters $\theta_{i_1}$ exploiting the knowledge of the estimated quantities; \item For the model parameters $\theta_{i_1}$ obtained from the previous step, estimate worst-case bounds on the simulation error $z_i(k+p)-\hat{z}_i(k+p)$ for $p\in[0,\infty)$.
\end{enumerate}}} \section{Estimation of the noise bound, model order, decay trend, and simulation error bounds} \label{s:ms_sm_appr} The key to addressing points 1)-4) of \textbf{Problem 1} is the analysis of the multi-step predictors of the form \eqref{eq:p_pred_classic}. At first, we will consider the multi-step predictor for each simulation horizon $p$ as an independent function, neglecting the fact that the true system \eqref{eq:output_gen_pred_form} (and the model to be identified \eqref{eq:1s_pred_classic_form}) implicitly define multi-step predictors, whose parameters are linked by polynomial functions $h_{p,n}(\cdot)$ \eqref{eq:polynomial_functions} (and $h_{p,o}(\cdot)$ \eqref{eq:polynomial_model}). We will introduce such links later on, as constraints in the identification procedures of Section \ref{s:pred_ident}.\\ The starting point for our new results is the set of findings described in \cite{TFFS}, briefly recalled next. \subsection{Preliminary results} \label{ss:preliminary_res} Under Assumptions \ref{as:asympt_stable}-\ref{as:bounded_dist}, the error between the true $p$-steps-ahead system output and its prediction \eqref{eq:p_pred_classic} is bounded for any finite $p$: \begin{equation}\nonumber \label{eq:global_eps_def} \left\vert y_i(k+p)-\varphi_{i_p}^T \theta_{i_p} \right\vert \leq \bar{\varepsilon}_{i_p}(\theta_{i_p})+\bar{d}_i, \end{equation} where $\bar{d}_i\geq 0$ denotes an estimate of the true noise bound $\bar{d}_{0_i}$, and $\bar{\varepsilon}_{i_p}(\theta_{i_p})$ represents the global error bound related to given multi-step model parameters $\theta_{i_p}$, i.e. it holds for all the possible values of $\varphi_{i_p}$ in $\Phi_{i_p}$.
\noindent Theoretically, the global error bound $\bar{\varepsilon}_{i_p}(\theta_{i_p})$ is the solution to the following optimization problem: \begin{equation} \label{eq:global_eps_calc} \begin{array}{c} \bar{\varepsilon}_{i_p}(\theta_{i_p}) = \min\limits_{\varepsilon\in \mathbb{R}^+} \varepsilon \\ \text{subject to} \\ \left\vert y_{i_p}-\varphi_{i_p}^T \theta_{i_p} \right\vert \leq \varepsilon +\bar{d}_i, \; \forall \left(\varphi_{i_p},y_{i_p} \right): \begin{bmatrix} \varphi_{i_p} \\ y_{i_p} \end{bmatrix} \in \mathscr{V}_{i_p}. \end{array} \end{equation} Moreover, among all possible parameter values, one is interested in those that minimize the corresponding global error bound: \begin{equation} \label{eq:opt_multistep} \bar{\varepsilon}_{i_p}^0= \min\limits_{\theta_{i_p}\in \Omega_p} \bar{\varepsilon}_{i_p}(\theta_{i_p}), \end{equation} where the set $\Omega_p$ is a compact approximation of $\mathbb{R}^{o+m(o+p-1)}$ (e.g. a hypercube defined by $\|\theta_{i_p}\|_\infty\leq10^{100}$) introduced to technically replace $\inf$ and $\sup$ operators with $\min$ and $\max$, respectively.\\ \noindent Problems \eqref{eq:global_eps_calc}-\eqref{eq:opt_multistep} are intractable. Using the available finite set of data points, one can however compute an estimate $\underline{\lambda}_{i_p}\approx\bar{\varepsilon}_{i_p}^0$ by solving the following Linear Program (LP): \begin{equation} \label{eq:lambda_p_calc} \begin{array}{c} \underline{\lambda}_{i_p} = \min \limits_{\theta_{i_p}\in\Omega_p,\,\lambda\in\mathbb{R}^+} \lambda \\ \text{subject to} \\ \left\vert \tilde{y}_{i_p}-\tilde{\varphi}_{i_p}^T \theta_{i_p} \right\vert \leq \lambda +\bar{d}_i, \; \forall \left(\tilde{\varphi}_{i_p},\tilde{y}_{i_p} \right) : \begin{bmatrix} \tilde{\varphi}_{i_p} \\ \tilde{y}_{i_p} \end{bmatrix} \in \tilde{\mathscr{V}}_{i_p}^N.
\end{array} \end{equation} Under Assumptions \ref{as:bounded_dist}-\ref{as:info_data}, the following properties hold (see \cite{TFFS} for the derivation): \begin{subequations}\label{eq:properties_preliminary} \begin{equation} \underline{\lambda}_{i_p} \leq \bar{\varepsilon}_{i_p}^0\label{eq:properties_preliminary_a} \end{equation} \begin{equation} \forall \eta \in (0,\bar{\varepsilon}_{i_p}^0], \; \exists N < \infty: \; \underline{\lambda}_{i_p} \geq \bar{\varepsilon}_{i_p}^0-\eta\label{eq:properties_preliminary_b} \end{equation} \end{subequations} i.e. the estimated bound $\underline{\lambda}_{i_p}$ converges to $\bar{\varepsilon}_{i_p}^0$ from below. \subsection{Theoretical properties of the multi-step error bound} \label{ss:prop_lambda} In \cite{TFFS}, the results \eqref{eq:properties_preliminary} are exploited to build a FPS for any finite value of $p$ and to estimate the worst-case error of a given multi-step predictor, again for a finite simulation horizon. However, no result and/or systematic procedure to fulfill the assumptions on the noise bound (supposed to be known in \cite{TFFS}) and the model order were provided. These aspects limit the applicability of the approach, since in practice the true values of $\bar{d}_{0_i}$ and $n$ are often unknown and one has to resort to heuristics to choose $\bar{d}_i$ and $o$. We now introduce two new results that solve this issue, allowing one to derive a convergent estimate $\bar{d}_{i}\approx\bar{d}_{0_i}$, as well as estimates of the system order and, additionally, of the impulse response decay trend. The main conceptual innovation with respect to the preliminary results of \cite{TFFS} is to analyze not only each value of $\underline{\lambda}_{i_p}$ separately, but also the course of this quantity as a function of the horizon $p$. 
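The LP \eqref{eq:lambda_p_calc} is small and can be solved with any off-the-shelf solver; a minimal sketch, where the choice of \texttt{scipy.optimize.linprog} is our assumption, is:

```python
import numpy as np
from scipy.optimize import linprog

def lambda_p(Phi, y, d_bar):
    """Estimate underline-lambda_ip from N sampled regressors Phi (N x r)
    and outputs y (N,), given a noise-bound estimate d_bar (scalar)."""
    N, r = Phi.shape
    c = np.zeros(r + 1)
    c[-1] = 1.0                                   # minimize lambda only
    # |y - Phi @ theta| <= lambda + d_bar, split into two one-sided constraints
    A_ub = np.vstack([np.hstack([-Phi, -np.ones((N, 1))]),
                      np.hstack([ Phi, -np.ones((N, 1))])])
    b_ub = np.concatenate([d_bar - y, d_bar + y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * r + [(0, None)])
    return res.x[-1], res.x[:-1]                  # (lambda, theta)
```

When the data are generated by a linear model with noise bounded by $\bar{d}_i$, the returned $\underline{\lambda}_{i_p}$ is (numerically) zero, consistently with \eqref{eq:properties_preliminary}.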
\begin{theorem} \label{th:conv_lambda_diff_d} If Assumptions \ref{as:bounded_dist}-\ref{as:info_data} hold, then, for any arbitrarily small $\eta>0$, $\exists N < \infty$ such that \begin{equation}\label{eq:thm1_convergence} \left( \bar{d}_{0_i} - \bar{d}_i \right)-\eta \leq \lim_{p \to \infty}{\underline{\lambda}_{i_p}} \leq \left( \bar{d}_{0_i} - \bar{d}_i \right). \end{equation} \end{theorem} \begin{proof} See the Appendix. \end{proof} \begin{corollary} \label{c:lambda_rate_conv} If Assumptions \ref{as:bounded_dist}-\ref{as:info_data} hold, and if the estimated noise bound is correctly chosen as $\bar{d}_i=\bar{d}_{0_i}$, then, for any arbitrarily small $\eta>0$, $\exists N < \infty$ such that \begin{equation}\label{eq:corollary1_bound}\nonumber \underline{\lambda}_{i_p} = \left\Vert \theta^0_{i_p} \right\Vert_1 \bar{d}_{0_i}-\eta \leq n \bar{d}_{0_i} L_i \rho_i^{p+1}. \end{equation} \end{corollary} \begin{proof} See the Appendix. \end{proof} \begin{remark} \label{rm:consequence_of_th_lambda} Theorem \ref{th:conv_lambda_diff_d} and Corollary \ref{c:lambda_rate_conv} imply three consequences useful to estimate the noise bound and system order: \begin{enumerate} \item With a large enough data set, the estimated bound $\underline{\lambda}_{i_p}$ \eqref{eq:lambda_p_calc} converges, as $p$ increases, to the difference between the true noise bound, $\bar{d}_{0_i}$, and the estimated one, $\bar{d}_i$. We will use this result to estimate $\bar{d}_{0_i}$; \item When $\bar{d}_i=\bar{d}_{0_i}$ and $o<n$ (i.e. Assumption \ref{as:model_order} is not met), then $\underline{\lambda}_{i_p}$ converges (besides a quantity $\eta$ that can be made arbitrarily small with a larger data set) to a non-zero value as $p \to \infty$, due to model order mismatch (see the proof of Theorem \ref{th:conv_lambda_diff_d} for more details). 
We will exploit this property to estimate the system order; \item Assuming the noise bound is chosen as $\bar{d}_i\simeq\bar{d}_{0_i}$, then the estimated bound $\underline{\lambda}_{i_p}$ converges to zero as $p\to \infty$ with the same decay trend as that of the true system parameters, dictated by the system dominant eigenvalues. We will exploit this property to estimate the system decay rate. \end{enumerate} \end{remark} \subsection{Estimation of noise bound, system order and decay trend} \label{ss:estim_d_o_rho} We propose three procedures to estimate the noise bound, system order and the decay trend, respectively. This information will be used in Section \ref{ss:FPS_and_tau_def} to define the FPS for any finite $p$ and the guaranteed simulation error bound of any predictor up to a finite $p$.\\ We start by estimating $\bar{d}_{0_i}$ resorting to Theorem \ref{th:conv_lambda_diff_d} (see also point 1) of Remark \ref{rm:consequence_of_th_lambda}): \begin{procedure} \caption{Estimation of $\bar{d}_0$} \label{p:d_bar_est_procedure} Choose a large value as initial guess of $o$. Then, for all $i=1,\ldots,q$, carry out the following steps: \begin{enumerate} \item Initialize $\bar{d}_i$ with a value small enough to ensure $\bar{d}_i<\bar{d}_{0_i}$ (e.g. $\bar{d}_i\simeq0$); \item Compute $\underline{\lambda}_{i_p}$ \eqref{eq:lambda_p_calc} for increasing $p$ values, until it converges to a constant quantity $e_{d_i}\simeq(\bar{d}_{0_i}-\bar{d}_i)$ as $p \to \infty$; \item Correct the initial guess of $\bar{d}_i$ by adding $e_{d_i}$; \item Verify that $\underline{\lambda}_{i_p} \xrightarrow{p \to \infty}0$ with the new value of $\bar{d}_i$; \end{enumerate} Take the resulting vector $\bar{d}=[\bar{d}_1,\hdots,\bar{d}_q]^T$ as estimate of the true one, $\bar{d}_0$. \end{procedure} \\ \noindent This addresses point 1) of \textbf{Problem 1}. 
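The plateau-detection logic of steps 2-3 of Procedure \ref{p:d_bar_est_procedure} can be sketched as follows; here the sequence of $\underline{\lambda}_{i_p}$ values is assumed to have been computed beforehand by solving \eqref{eq:lambda_p_calc} for increasing $p$ with the initial guess $\bar{d}_i\simeq 0$, and the convergence tolerance is our assumption:

```python
def estimate_noise_bound(lambdas, d_init=0.0, tol=1e-3):
    """Procedure 1, steps 2-3: scan underline-lambda_ip over increasing p,
    detect the plateau e_d ~ (d0 - d_init), and return the corrected bound."""
    for k in range(1, len(lambdas)):
        if abs(lambdas[k] - lambdas[k - 1]) < tol:   # converged to a plateau
            return d_init + lambdas[k]               # step 3: correct the guess
    raise ValueError("no plateau found: extend the range of horizons p")
```

Step 4 of the procedure then amounts to recomputing the $\underline{\lambda}_{i_p}$ with the corrected $\bar{d}_i$ and checking that they decay to zero.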
After completing Procedure \ref{p:d_bar_est_procedure}, we can compute a finite simulation horizon value $\bar{p}_i$ such that: \begin{equation} \label{eq:bar_p} \begin{array}{c} \bar{p}_i=\min\limits_{\bar{p}\in\mathbb{N}}\bar{p}\\ \text{subject to}\\ \underline{\lambda}_{i_p}<\delta,\, \forall p \geq \bar{p} \end{array} \end{equation} where $\delta\approx 0$ is a suitable tolerance, e.g. $10^{-8}$, to account for the asymptotic behavior of $\underline{\lambda}_{i_p}$ (see Theorem \ref{th:conv_lambda_diff_d}). This tolerance can be used to check the convergence of $\underline{\lambda}_{i_p}$ in step 4 of Procedure \ref{p:d_bar_est_procedure}, i.e. to verify that $\exists \tilde{p}: \underline{\lambda}_{i_p}<\delta \; \forall p>\tilde{p}$.\\ Exploiting the values of $\bar{p}_i$, we can then estimate the system order by resorting to point 2) of Remark \ref{rm:consequence_of_th_lambda}: \begin{procedure} \caption{Estimation of $n$} \label{p:o_est_procedure} \begin{enumerate} \item Set $\bar{d}_i$ to the values resulting from Procedure \ref{p:d_bar_est_procedure}, and compute $\bar{p}_i$ as in \eqref{eq:bar_p}; \item Set a large starting value of $o$; \item Gradually decrease $o$, recalculating all the $\underline{\lambda}_{i_p}$, until a value of $o$ is found such that $\exists p>\bar{p}_i : \underline{\lambda}_{i_p}>\delta$, with $\delta$ used in \eqref{eq:bar_p}. Denote this value as $\underline{o}$. \item Set the model order as $o=\underline{o}+1$. \end{enumerate} \end{procedure} \\ \noindent This addresses point 2) of \textbf{Problem 1}. At the end of Procedure \ref{p:o_est_procedure}, one shall choose the largest value of $o$ among all $i=1,\ldots,q$. Finally, we estimate the system decay trend from that of $\underline{\lambda}_{i_p}$, exploiting observation 3) of Remark \ref{rm:consequence_of_th_lambda}, thus also addressing point 3) of \textbf{Problem 1}.
\begin{procedure} \caption{Estimation of $L_i$ and $\rho_i$} \label{p:decay_est_procedure} \begin{enumerate} \item Take $\bar{d}_i$, $\bar{p}_i$, and $o$ resulting from Procedures \ref{p:d_bar_est_procedure}-\ref{p:o_est_procedure}; \item Compute the two scalars $L_i',\,\hat{\rho}_i$ as: \begin{equation} \label{eq:est_L_rho_optprob} \begin{aligned} \left[L_i', \hat{\rho}_i \right] = &\text{arg} \min_{L_i',\hat{\rho}_i} \left\Vert \boldsymbol{f}_{i_{\lambda}} - \boldsymbol{g}_{i_{L\rho}} \right\Vert_2^2 \\ &\text{subject to} \\ &\boldsymbol{g}_{i_{L\rho}} \succeq \boldsymbol{f}_{i_{\lambda}} \\ &L_i'>0 \\ &0< \hat{\rho}_i <1 \end{aligned} \end{equation} where $\boldsymbol{f}_{i_{\lambda}} \doteq [\underline{\lambda}_{i_1} \; \cdots \; \underline{\lambda}_{i_{\bar{p}_i}}]^T$, $\boldsymbol{g}_{i_{L\rho}}\doteq [g_{i_{L\rho}}(1) \; \cdots \; g_{i_{L\rho}}(\bar{p}_i) ]^T$, $g_{i_{L\rho}}(p)=L_i' \hat{\rho}_i^{p+1}$, and $\succeq$ denotes element-wise inequalities. \item Compute $\hat{L}_i$ as (from Corollary \ref{c:lambda_rate_conv}): \begin{equation} \label{eq:L_eff_arx} \hat{L}_i=\dfrac{L_i'}{o\bar{d}_i}, \end{equation} \item Set $\hat{L}_i,\,\hat{\rho}_i$ as estimates of $L_i$ and $\rho_i$, respectively. \end{enumerate} \end{procedure} \\ \noindent Problem \eqref{eq:est_L_rho_optprob} is always feasible, since one can always choose large-enough values of $L_i'$ to satisfy its constraints. Moreover, the cost function turns out to be convex on the feasible set, as can be shown by computing its curvature and checking that it is positive for feasible $(L_i',\hat{\rho}_i)$ pairs. \subsection{Feasible Parameter Sets and finite-horizon simulation error bound} \label{ss:FPS_and_tau_def} The quantities estimated so far are instrumental in building the Feasible Parameter Set (FPS) for any finite simulation horizon $p$.
Namely, such sets contain all possible multi-step predictor parameters $\theta_{i_p}$ that are consistent with the available data set, up to the tolerance given by the global error bound $\bar{\varepsilon}_{i_p}^0$ and noise bound $\bar{d}_{0_i}$, and with the other available information on the system at hand. Since the computed bound $\underline{\lambda}_{i_p}$ is lower than $\bar{\varepsilon}_{i_p}^0$, due to the use of a finite data set (property \eqref{eq:properties_preliminary_a}), it is customary to employ a scaling factor $\alpha>1$ to estimate the global error bound: \begin{equation} \label{eq:eps_hat_def} \hat{\bar{\varepsilon}}_{i_p} = \alpha \underline{\lambda}_{i_p}, \; \alpha>1. \end{equation} We can now define, for the $p$-steps-ahead predictor of the $i$-th system output, the set $\Theta_{i_p}$ of parameter values that are consistent with the measured data, and with the estimated noise bound and global error bound. Several works in the Set Membership literature lower the computational effort by resorting to outer approximations of the FPS, e.g. via intervals \cite{sun2003recursive}, ellipsoids \cite{bertsekas1971recursive,durieu2001multi,filippova1996ellipsoidal}, parallelotopes \cite{chisci1996recursive,vicino1996sequential}, zonotopes \cite{alamo2005guaranteed,bravo2006bounded,combastel2003state,wang2018zonotope}, or constrained zonotopes \cite{scott2016constrained}. In \cite{walter1989exact}, a recursive exact polytopic representation is proposed, which can also cope with time-varying systems.
Here, we decided to adopt an exact description of the FPS by defining it as a polytope using an inequality description (H-representation): \begin{equation} \label{eq:FPS_gen_def}\nonumber \begin{aligned} \Theta_{i_p} \doteq \bigg\{ \theta_{i_p} &: \left\vert \tilde{y}_{i_p} - \tilde{\varphi}_{i_p}^T \theta_{i_p} \right\vert \leq \hat{\bar{\varepsilon}}_{i_p}+\bar{d}_i, \\ &\quad \forall \left(\tilde{\varphi}_{i_p},\tilde{y}_{i_p} \right) : \begin{bmatrix} \tilde{\varphi}_{i_p} \\ \tilde{y}_{i_p} \end{bmatrix} \in \tilde{\mathscr{V}}_{i_p}^N \bigg\}. \end{aligned} \end{equation} The set $\Theta_{i_p}$, if bounded, is a polytope with at most $2N$ facets. If $\Theta_{i_p}$ is unbounded, then this indicates that the data collected from the system are not informative enough, and new data should be acquired. In \cite{TFFS}, the set $\Theta_{i_p}$ was taken as FPS for the predictors pertaining to the horizon $p$. Here, we provide a further refinement by adding the constraints on the estimated decay trend obtained in Section \ref{ss:estim_d_o_rho}. Let us define the polytope: \begin{equation} \label{eq:gamma_set_def_arx}\nonumber \begin{aligned} \Gamma_{i_p}\doteq \bigg\{ \theta_{i_p}: &\left\vert \theta_{i_{p,z}}^{(l)} \right\vert \leq \hat{L}_i \hat{\rho}_i^{p+l}, \; l\in [1,o], \\ &\wedge \left\vert \theta_{i_{p,u}}^{(l)} \right\vert \leq \hat{L}_i \hat{\rho}_i^{\lceil \frac{l}{m} \rceil}, \; l\in [1,m(o+p-1)] \bigg\}. \end{aligned} \end{equation} Then, we define the Feasible Parameter Sets as: \begin{equation} \label{eq:FPS_with_decay} \Theta_{i_p}^{L\rho}\doteq\Theta_{i_p} \cap \Gamma_{i_p}. \end{equation} Note that this new FPS is always compact, since $\Gamma_{i_p}$ is. 
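As an illustration of this H-representation, the following sketch (our own notation, with synthetic data) assembles matrices $A$ and $b$ such that the data-consistency part of the FPS reads $\{\theta : A\theta \le b\}$, starting from $N$ regressor/output pairs:

```python
import numpy as np

# Assemble A*theta <= b encoding |y_k - phi_k^T theta| <= eps + d for all
# N samples (Phi is N x n_theta). Names and dimensions are illustrative.
def fps_h_rep(Phi, y, eps, d):
    tol = eps + d
    A = np.vstack([Phi, -Phi])                 # split the absolute value
    b = np.concatenate([tol + y, tol - y])     # phi^T th <= y + tol, -phi^T th <= tol - y
    return A, b

rng = np.random.default_rng(1)
theta0 = np.array([0.5, -0.3])                 # hypothetical true parameters
Phi = rng.uniform(-1, 1, (50, 2))
y = Phi @ theta0 + rng.uniform(-0.05, 0.05, 50)   # bounded noise, d = 0.05
A, b = fps_h_rep(Phi, y, eps=0.0, d=0.05)
print(np.all(A @ theta0 <= b + 1e-12))         # True: true parameters are feasible
```

The resulting polytope has $2N$ facets, matching the count stated above; the decay constraints of $\Gamma_{i_p}$ would simply append further rows to $A$ and $b$.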
The FPS is used to derive the worst-case simulation error bound obtained by a given predictor with parameters $\theta_{i_p}$: \begin{equation} \label{eq:tau_global_def} \tau_{i_p}(\theta_{i_p})= \max_{\varphi_{i_p} \in \Phi_{i_p}^{\phantom{0}}} \max_{\theta \in \Theta_{i_p}^{L\rho}} \left\vert \varphi_{i_p}^T \left(\theta - \theta_{i_p} \right) \right\vert + \hat{\bar{\varepsilon}}_{i_p}. \end{equation} Namely, this bound is the worst-case absolute difference between the output $\hat{z}_i(k+p)=\varphi_{i_p}^T(k) \theta_{i_p}$, predicted using the parameters $\theta_{i_p}$, and the one predicted by any other parameter vector in the FPS, plus the worst-case prediction error $\hat{\bar{\varepsilon}}_{i_p}$ related to all $\theta_{i_p}\in\Theta_{i_p}^{L\rho}$. As with $\bar{\varepsilon}_{i_p}^0$, the bound \eqref{eq:tau_global_def} cannot be computed exactly using a finite data set. Thus, we introduce an estimate $\hat{\tau}_{i_p}(\theta_{i_p})$, which, under Assumption \ref{as:info_data}, converges to $\tau_{i_p}(\theta_{i_p})$ from below as the number of data points increases, see \cite{TFFS}. Such an estimate is then inflated by a scalar $\gamma>1$ to account for the uncertainty due to the use of a finite data set: \begin{equation} \label{eq:tau_hat_global_def} \hat{\tau}_{i_p}(\theta_{i_p})=\gamma \left( \max_{\tilde{\varphi}_{i_p}\in \tilde{\mathscr{V}}_{i_p}^N} \max_{\theta \in \Theta_{i_p}^{L\rho}} \left\vert \tilde{\varphi}_{i_p}^T \left(\theta - \theta_{i_p} \right) \right\vert \right) + \hat{\bar{\varepsilon}}_{i_p}, \; \gamma>1. \end{equation} The estimation of the bound defined by \eqref{eq:tau_hat_global_def}, corresponding to point 5) of \textbf{Problem 1}, will then be performed on the models identified using the approaches proposed in Section \ref{s:pred_ident}. Note that \eqref{eq:tau_hat_global_def} can be recast as an LP, after the preliminary solution of $2N$ LPs, which can be parallelized.
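A hedged sketch of this computation, on a deliberately tiny FPS (a unit box) and with SciPy's \texttt{linprog} standing in for a generic LP solver; all numerical values are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# Estimate of tau_hat for a given predictor theta_p: for every sampled
# regressor phi, max_{A theta <= b} |phi^T (theta - theta_p)| is solved
# as a pair of LPs (max phi^T theta and max -phi^T theta).
def tau_hat(A, b, Phi, theta_p, eps_hat, gamma=1.1):
    worst = 0.0
    for phi in Phi:
        for c in (phi, -phi):
            # linprog minimizes, so max c^T theta == -min(-c^T theta)
            res = linprog(-c, A_ub=A, b_ub=b,
                          bounds=[(None, None)] * len(c))
            worst = max(worst, -res.fun - c @ theta_p)
    return gamma * worst + eps_hat

# Toy FPS: the box |theta_j| <= 1, and a single regressor phi = (1, 1)
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.ones(4)
val = tau_hat(A, b, Phi=[np.array([1., 1.])],
              theta_p=np.zeros(2), eps_hat=0.05)
print(val)   # 1.1 * 2 + 0.05 = 2.25
```

The inner loop over regressors is the part that parallelizes trivially, since the LPs are independent of one another.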
If the estimated error bounds $\hat{\bar{\varepsilon}}_{i_p}$ and $\hat{\tau}_{i_p}(\theta_{i_p})$ are larger than the corresponding theoretical values $\bar{\varepsilon}_{i_p}^0$ and $\tau_{i_p}(\theta_{i_p})$, respectively, and the estimated decay rate parameters are such that $\hat{\rho}_i \in [\rho_i,1)$ and $\hat{L}_i \geq L_i$, then it is easy to show that the multi-step predictor $\theta_{i_p}^0$, obtained from the true system and possibly appropriately padded with zero entries if $o>n$, belongs to the FPS $\Theta_{i_p}^{L\rho}$. In this case, by construction, the bound $\hat{\tau}_{i_p}(\theta_{i_p})$ is such that: \begin{subequations}\label{eq:z_under_bound_tau} \begin{eqnarray} |z_i(k+p)-\hat{z}_i(k+p) | &\leq& \hat{\tau}_{i_p}(\theta_{i_p})\label{eq:sim_bound_z}\\ |y_i(k+p)-\hat{z}_i(k+p) | &\leq& \hat{\tau}_{i_p}(\theta_{i_p})+\bar{d}_i\label{eq:sim_bound_y} \end{eqnarray} \end{subequations} i.e. it is the desired simulation error bound for the considered finite horizon $p$. The parameters $\alpha$ in \eqref{eq:eps_hat_def} and $\gamma$ in \eqref{eq:tau_hat_global_def} essentially express how confident we are in the informative content of the data set. A ``large'' value of $\alpha$ might produce an overly conservative error bound $\hat{\bar{\varepsilon}}_{i_p}$ and, consequently, larger FPSs, while a choice of $\alpha$ close to 1 might produce an error bound that could be invalidated by future data, if the available data set has poor informative content. Similarly, a ``large'' $\gamma$ might give a conservative bound $\hat{\tau}_{i_p}$.\\ \begin{remark}\label{rm:bounds_conservativeness} In a real application, one will never know whether the scaling factors $\alpha,\gamma$ are too conservative. Conversely, it is easy to understand when these factors are too small, by checking whether the FPS is empty for any $p$.
If this happens, for example because one chooses too small a value of $\alpha$, then the prior assumptions and/or estimated bounds are invalidated by data. Thus, verifying that all the FPSs are non-empty (which is an easy task since they are all polytopes) is a way to check the informative content of our data set and the choice of parameter $\alpha$. This check can be carried out using new data collected in a validation experiment, or in real time if the FPSs and system model are to be updated on-line. A similar reasoning applies to the bound $\hat{\tau}_{i_p}(\theta_{i_p})$ and scalar $\gamma$: conservativeness can be evaluated by checking the bound against new measured data and evaluating whether the simulation error magnitude ever violates $\hat{\tau}_{i_p}(\theta_{i_p})$ by more than $\bar{d}_i$ (see \eqref{eq:sim_bound_y}). \end{remark} \subsection{Infinite-horizon simulation error bound} \label{s:tau_inf_intro} The error bound \eqref{eq:tau_hat_global_def} requires the computation of the FPSs for each horizon $p$ of interest, potentially up to a very large value. Since each FPS is a polytope whose complexity generally grows with the number of available data, the computation of a large number of bounds $\hat{\tau}_{i_p}$ can become impractical. To solve this problem, in this section we present new results that allow one to estimate the simulation error bound for any future horizon, beyond a (sufficiently large) finite value $\bar{p}$. In particular, we propose an iterative expression to compute the simulation error bound for $p>\bar{p}$, based on the previous computation of the bounds $\hat{\tau}_{i_p}$ for $p=1,\ldots,\bar{p}$. Furthermore, we provide results indicating how the value of $\bar{p}$ should be chosen in order to keep the computational effort at a minimum, and obtain a bound which is non-divergent with $p$ and not excessively conservative. Before proceeding further, the following remark is in order.
\begin{remark} \label{rm:model_inf_tau} The results presented in the remainder of this section are derived considering model parameters that satisfy the conditions $h_{p,o}(\theta_{i_1}) \in \Gamma_{i_p}, \; \forall p \in [2,\bar{p}]$. Later on, in Section \ref{s:pred_ident}, we will include explicitly such conditions in the identification procedure, so that the computed models will always enjoy this property. This establishes a connection between the derived theoretical results and the proposed computational methods to identify a model. \end{remark} Given the multi-step predictors described by \eqref{eq:p_pred_classic}, and having computed the error bounds defined by \eqref{eq:tau_hat_global_def} up to $\bar{p}$, if $h_{p,o}(\theta_{i_1}) \in \Gamma_{i_p}, \; \forall p \in [2,\bar{p}]$, the simulation error at horizon $\bar{p}+j$, $j>1$, is such that: \begin{equation} \label{eq:err_bound_arx_pgrande} \begin{array}{l} \left\vert z_i(k+\bar{p}+j) - \hat{z}_i(k+\bar{p}+j) \right\vert \leq \\ \hat{\tau}_{i_{\bar{p}}}(\theta_{i_{\bar{p}}})+ \sum\limits_{m=1}^{\min\{j,o\}} \left( \hat{\tau}_{i_{j-m+1}}(\theta_{i_{j-m+1}}) +\bar{d}_i \right)\hat{L}_i\hat{\rho}_i^{\bar{p}+m}. \end{array} \end{equation} See the Appendix for a derivation. 
Then, considering that $\hat{L}_i\hat{\rho}_i^{\bar{p}+1}>\hat{L}_i\hat{\rho}_i^{\bar{p}+2}>\hdots>\hat{L}_i\hat{\rho}_i^{\bar{p}+o}$, and that $ \sum\limits_{m=1}^{\min\{j,o\}}\hat{L}_i\hat{\rho}_i^{\bar{p}+m} \leq o\hat{L}_i\hat{\rho}_i^{\bar{p}+1}, $ we can derive an over-estimate of the simulation error bound $\hat{\tau}_{i_{\bar{p}+j}}$ as: \begin{equation} \label{eq:tau_arx_pgrande_upbound} \begin{array}{l} \left\vert z_i(k+\bar{p}+j) - \hat{z}_i(k+\bar{p}+j) \right\vert \leq \hat{\tau}_{i_{\bar{p}+j}}(\theta_{i_{\bar{p}+j}})\\ \leq \hat{\tau}_{i_{\bar{p}}}(\theta_{i_{\bar{p}}})+ \left( \hat{\tau}_{i_{max_{\{j,o\}}}}+\bar{d}_i \right)o\hat{L}_i\hat{\rho}_i^{\bar{p}+1}, \end{array} \end{equation} where $\hat{\tau}_{i_{max_{\{j,o\}}}}=\max\{\hat{\tau}_{i_{j-o}}(\theta_{i_{j-o}}), \hdots, \hat{\tau}_{i_j}(\theta_{i_j}) \}$. \\ Note that, if $j\geq \bar{p}+1$, the term $\hat{\tau}_{i_j}(\theta_{i_j})$ is not computed using \eqref{eq:tau_hat_global_def}, but resorting to \eqref{eq:tau_arx_pgrande_upbound}. For example, when $j \in (\bar{p}, 2\bar{p}]$, the simulation error bound $\hat{\tau}_{i_{\bar{p}+j}}(\theta_{i_{\bar{p}+j}})$ is bounded as: \begin{equation} \label{eq:tau_arx_pgrande_iter1} \resizebox{1\columnwidth}{!}{ $ \begin{aligned} \hat{\tau}&_{i_{\bar{p}+j}}(\theta_{i_{\bar{p}+j}}) \leq \hat{\tau}_{i_{\bar{p}}}(\theta_{i_{\bar{p}}})+ \left( \hat{\tau}_{i_{max_{\{j,o\}}}}+\bar{d}_i \right) o\hat{L}_i\hat{\rho}_i^{\bar{p}+1} \leq \hat{\tau}_{i_{\bar{p}}}(\theta_{i_{\bar{p}}}) \\ &+ \left( \hat{\tau}_{i_{\bar{p}}}(\theta_{i_{\bar{p}}})+ \left( \hat{\tau}_{i_{max_{\{l,2o\}}}}+\bar{d}_i \right)o\hat{L}_i\hat{\rho}_i^{\bar{p}+1} +\bar{d}_i \right) o\hat{L}_i\hat{\rho}_i^{\bar{p}+1}, \end{aligned} $} \end{equation} where $l=j-\bar{p}$. 
\\ Thus, we can derive the following iterative expression to compute an over-estimate of the simulation error bound, where the considered horizon is denoted by $p=\ell\bar{p}+j$, with $\ell,j\in\mathbb{N}$ and $j\in[1,\bar{p})$: \begin{equation} \label{eq:tau_arx_pgrande_iter} \begin{aligned} \hat{\tau}_{i_{\ell\bar{p}+j}}&(\theta_{i_{\ell\bar{p}+j}}) \leq \hat{\tau}_{i_{\bar{p}}}(\theta_{i_{\bar{p}}}) \left( 1+\chi_{i,\bar{p}}+\chi_{i,\bar{p}}^2+\hdots+\chi_{i,\bar{p}}^{\ell-1} \right) +\\ &+ \bar{d}_i \left( \chi_{i,\bar{p}}+\chi_{i,\bar{p}}^2+\hdots+\chi_{i,\bar{p}}^{\ell} \right) + \tau_{i_{max_{\{j,\ell o\}}}}\chi_{i,\bar{p}}^\ell \end{aligned} \end{equation} where $\chi_{i,\bar{p}}=o\hat{L}_i\hat{\rho}_i^{\bar{p}+1},$ and $\tau_{i_{max_{\{j,\ell o\}}}}=\max\{\hat{\tau}_{i_{j-\ell o}}(\theta_{i_{j-\ell o}}), \hdots, \hat{\tau}_{i_j}(\theta_{i_j}) \}.$ In general, the over-estimate \eqref{eq:tau_arx_pgrande_iter} may diverge as $\ell$ increases. The next result provides a condition on $o,\,\hat{L}_i,\,\hat{\rho}_i,$ and $\bar{p}_i$ to guarantee convergence: \begin{theorem} \label{th:conv_tau_inf_ARX} Consider any $\theta_{i_1}$ such that $h_{p,o}(\theta_{i_1}) \in \Gamma_{i_p}, \; \forall p \in [2,\bar{p}]$. Define $\hat{\tau}_{i_{\infty}}$ as: \begin{equation} \label{eq:tau_inf_ARX_def} \hat{\tau}_{i_{\infty}}(\theta_{i_{\bar{p}}}) \doteq \hat{\tau}_{i_{\bar{p}}}(\theta_{i_{\bar{p}}}) \left( \frac{1}{1-\chi_{i,\bar{p}}} \right) + \bar{d}_i \left( \frac{\chi_{i,\bar{p}}}{1-\chi_{i,\bar{p}}} \right). \end{equation} Then, \begin{equation} \label{eq:thm2_convergence} \hat{\tau}_{i_p}\xrightarrow{p \to \infty}\hat{\tau}_{i_{\infty}}\iff o\hat{L}_i\hat{\rho}_i^{\bar{p}+1} <1 \end{equation} \end{theorem} \begin{proof} See the Appendix. 
\end{proof} \begin{remark} The convergence condition of Theorem \ref{th:conv_tau_inf_ARX} depends on $o$, $\hat{L}_i$ and $\hat{\rho}_i$, obtained using Procedures \ref{p:d_bar_est_procedure}-\ref{p:decay_est_procedure}, which in turn depend on the system at hand, on the collected data, and on $\bar{p}$, which is chosen by the user during the identification procedure. Therefore, for given values of $o$, $\hat{L}_i, \, \hat{\rho}_i$, the value of $\bar{p}$ should be chosen large enough to satisfy \eqref{eq:thm2_convergence}. \end{remark} If condition \eqref{eq:thm2_convergence} is met, the quantity $\hat{\tau}_{i_{\infty}}$ \eqref{eq:tau_inf_ARX_def} is the desired infinite-horizon simulation error bound. The next results provide further insight into the bound \eqref{eq:tau_inf_ARX_def} and, in particular, into whether convergence of $\hat{\tau}_{i_p}$ to $\hat{\tau}_{i_{\infty}}$ is from above or below. If condition \eqref{eq:thm2_convergence} holds, we can compute the difference $\hat{\tau}_{i_{\infty}}(\theta_{i_{\bar{p}}})-\hat{\tau}_{i_{\ell\bar{p}+j}}(\theta_{i_{\ell\bar{p}+j}})$ by means of a truncated geometric series, leading to: \begin{equation} \label{eq:dist_tau_inf_ARX} \begin{aligned} &\hat{\tau}_{i_{\infty}}(\theta_{i_{\bar{p}}})-\hat{\tau}_{i_{\ell\bar{p}+j}}(\theta_{i_{\ell\bar{p}+j}}) = \\ &\; \; =\hat{\tau}_{i_{\bar{p}}}(\theta_{i_{\bar{p}}}) \left( \frac{\chi_{i,\bar{p}}^\ell}{1-\chi_{i,\bar{p}}} \right) + \bar{d}_i \left( \frac{\chi_{i,\bar{p}}^{\ell+1}}{1-\chi_{i,\bar{p}}} \right) -\tau_{i_{max_{\{ j, \ell o \} }}} \chi_{i,\bar{p}}^\ell. \end{aligned} \end{equation} The terms multiplying $\hat{\tau}_{i_{\bar{p}}}$ and $\bar{d}_i$ converge to their limit (see \eqref{eq:tau_inf_ARX_def}) from below, while the term $\tau_{i_{max_{\{ j, \ell o \} }}} \chi_{i,\bar{p}}^\ell$ converges to zero from above as $\ell\to \infty$.
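For completeness, a minimal sketch of how $\hat{\tau}_{i_{\infty}}$ in \eqref{eq:tau_inf_ARX_def} can be evaluated, including the check of the convergence condition \eqref{eq:thm2_convergence}; all numbers are illustrative placeholders:

```python
# Evaluation of the asymptotic bound tau_inf of eq. (tau_inf_ARX_def);
# valid only when chi = o * L_hat * rho_hat**(pbar + 1) < 1, i.e. when
# the convergence condition of Theorem th:conv_tau_inf_ARX holds.
def tau_inf(tau_pbar, d_bar, o, L_hat, rho_hat, pbar):
    chi = o * L_hat * rho_hat ** (pbar + 1)
    assert chi < 1, "increase pbar until the convergence condition holds"
    return tau_pbar / (1 - chi) + d_bar * chi / (1 - chi)

print(tau_inf(tau_pbar=0.5, d_bar=0.1, o=3, L_hat=2.0, rho_hat=0.8, pbar=20))
# ~0.535: only slightly above tau_pbar, since here chi ~ 0.055 is small
```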
Thus, in general it is possible that $\hat{\tau}_{i_{\ell\bar{p}+j}} > \hat{\tau}_{i_{\infty}}$ for some $\ell$ and $j$. The next Lemma is concerned with this aspect. \begin{lemma} \label{l:overshoot_cond} Let $\tau_{i_{max}}$ be defined as: \[ \tau_{i_{max}}=\max \{\hat{\tau}_{i_1}(\theta_{i_1}),\hdots,\hat{\tau}_{i_{\bar{p}}}(\theta_{i_{\bar{p}}})\}. \] If $h_{p,o}(\theta_{i_1}) \in \Gamma_{i_p}, \; \forall p \in [2,\bar{p}]$, and if \begin{equation} \label{eq:lemma_1_condition} \tau_{i_{max}} < \frac{\hat{\tau}_{i_{\bar{p}}}+\bar{d}_i \chi_{i,\bar{p}}}{1-\chi_{i,\bar{p}}}, \end{equation} then $\hat{\tau}_{i_{\ell\bar{p}+j}}(\theta_{i_{\ell\bar{p}+j}}) \leq \hat{\tau}_{i_{\infty}}(\theta_{i_{\bar{p}}}), \; \forall \ell,j$. Otherwise, there may exist at least one pair $(\ell,j)$ such that $\hat{\tau}_{i_{\ell\bar{p}+j}}(\theta_{i_{\ell\bar{p}+j}})>\hat{\tau}_{i_{\infty}}(\theta_{i_{\bar{p}}})$. \end{lemma} \begin{proof} See the Appendix. \end{proof} Note that the condition \eqref{eq:lemma_1_condition} of Lemma \ref{l:overshoot_cond} is, in a sense, in tension with the convergence condition \eqref{eq:thm2_convergence} of Theorem \ref{th:conv_tau_inf_ARX}. Since $\hat{\rho}_i<1$ by definition, there always exists a value of $\bar{p}$ large enough to satisfy \eqref{eq:thm2_convergence}. On the other hand, the right-hand side of \eqref{eq:lemma_1_condition} decreases as $\bar{p}$ increases, while $\tau_{i_{max}}$ is only weakly dependent on $\bar{p}$. Thus, Lemma \ref{l:overshoot_cond} suggests picking a ``small'' value of $\bar{p}$, while Theorem \ref{th:conv_tau_inf_ARX} is generally satisfied with ``large'' $\bar{p}$. If one is interested in finding a finite simulation time such that, for any larger horizon, the simulation error bound converges from below to the infinite-horizon value, then the following result can be exploited.
\begin{remark} \label{rm:end_of_overshoot} Assume condition \eqref{eq:lemma_1_condition} is not satisfied, and allow for a small increase of the asymptotic error bound, e.g. $\delta_i=10^{-2}\cdot \hat{\tau}_{i_{\infty}}(\theta_{i_{\bar{p}}})$. Define $\bar{\ell}$ as: \[ \begin{array}{c} \bar{\ell}=\min\limits_{\ell}\ell\\ \text{subject to}\\ \left\vert \frac{\hat{\tau}_{i_{\bar{p}}}+\bar{d}_i \chi_{i,\bar{p}}}{1-\chi_{i,\bar{p}}} - \tau_{i_{max}} \right\vert \chi_{i,\bar{p}}^{\ell} < \delta_i \end{array} \] Then, as a straightforward consequence of Lemma \ref{l:overshoot_cond}, the following result holds: \begin{equation} \label{eq:l2_cond}\nonumber \hat{\tau}_{i_{\ell\bar{p}+j}}(\theta_{i_{\ell\bar{p}+j}})\leq \hat{\tau}_{i_{\infty}}(\theta_{i_{\bar{p}}})+\delta_i, \; \forall \ell \geq \bar{\ell}, \; \forall j \end{equation} \end{remark} Finally, we show that Theorem \ref{th:conv_tau_inf_ARX} is also instrumental in deriving a sufficient condition for the parameter vector $\theta_{i_1}$ to yield an asymptotically stable model. \begin{theorem} \label{th:asymptotic_stability} Let Assumptions \ref{as:bounded_dist}-\ref{as:info_data} hold, and further assume that the chosen value of $\bar{p}$ satisfies \eqref{eq:thm2_convergence}. Consider a generic parameter vector $\theta_{i_1}\in\mathbb{R}^{o(m+1)}$. If \[ h_{p,o}(\theta_{i_1}) \in \Gamma_{i_p}, \; \forall p \in [2,\bar{p}], \] then the corresponding ARX model \eqref{eq:1s_pred_classic_form} is asymptotically stable. \end{theorem} \begin{proof} See the Appendix. \end{proof} Summing up, the findings and procedures described so far address points 1)-3) and 5) of \textbf{Problem 1}. In the next section, we present two approaches to identify the one-step-ahead model \eqref{eq:1s_pred_classic_form} exploiting these results, thus dealing also with point 4) of \textbf{Problem 1}.
\section{Predictor identification} \label{s:pred_ident} \subsection{Method I} \label{ss:pred_id_method_1} In this first approach, the parameters are estimated as: \begin{subequations}\label{eq:opt_prob_method_1} \begin{eqnarray} &\hat{\theta}_{i_1}= \text{arg} \min\limits_{\theta_{i_1}}{\left\Vert \boldsymbol{\tau}_i(\theta) \right\Vert_{\infty}} \label{eq:opt_prob_method_1_a}\\ &\text{subject to} \nonumber\\ &h_{p,o}(\theta_{i_1}) \in \Theta_{i_p}^{L \rho}, \; \forall p \in [1,\bar{p}] \label{eq:opt_prob_method_1_b} \end{eqnarray} \end{subequations} where $\boldsymbol{\tau}_i=\left[ \hat{\tau}_{i_1}(\theta) \; \hat{\tau}_{i_2}(\theta) \; \hdots \; \hat{\tau}_{i_{\bar{p}}}(\theta) \right]^T$ and $\hat{\tau}_{i_p}(\theta)$ is defined as in \eqref{eq:tau_hat_global_def}. That is, we aim to minimize the worst global error bound among all the simulation steps of interest, while ensuring that the resulting multi-step predictors comply with the derived FPSs. Problem \eqref{eq:opt_prob_method_1} is equivalent to: \begin{equation} \label{eq:opt_prob_method_1_large} \resizebox{1\columnwidth}{!}{ $ \begin{aligned} \hat{\theta}_{i_1}=&\text{arg} \min\limits_{\theta_{i_1}}{\left( \max_{p \in [1,\bar{p}]^{\phantom{0}}}{\max_{k=1,\hdots,N^{\phantom{0}}}{\max_{\theta \in \Theta_{i_p}^{L\rho}}{ \left\vert \tilde{\varphi}_{i_p}(k)^T(\theta-\theta_{i_p}) \right\vert + \hat{\bar{\varepsilon}}_{i_p} }}} \right)} \\ &\text{subject to} \\ &h_{p,o}(\theta_{i_1}) \in \Theta_{i_p}^{L\rho}, \; \forall p \in [1,\bar{p}] \end{aligned} $} \end{equation} This can be reformulated into a simpler optimization problem.
The first step is to split the absolute value in the cost function of \eqref{eq:opt_prob_method_1_large} into two terms, by introducing the following quantities: \begin{equation*} \check{\varphi}_{i_p}(k)= \begin{cases} \tilde{\varphi}_{i_p}(k) \qquad \qquad \text{if} \quad k\leq N \\ -\tilde{\varphi}_{i_p}(k-N) \quad \text{if} \quad k>N \end{cases} \, \text{for} \; \, k=1,\hdots,2N. \end{equation*} Then, let us define: \begin{equation}\label{eq:preliminary_LP} c_{i_{k_p}} \doteq \max_{\theta \in \Theta_{i_p}^{L\rho}}{\check{\varphi}_{i_p}(k)^T \theta}, \quad k=1,\hdots,2N, \quad p=1,\hdots,\bar{p}. \end{equation} The values of $c_{i_{k_p}}$ are computed by solving $2N\bar{p}$ linear programs (LPs). Then, \eqref{eq:opt_prob_method_1_large} can be reformulated as:\small \begin{subequations} \label{eq:opt_prob_method_1_small} \begin{eqnarray} &\hat{\theta}_{i_1} = \text{arg} \min\limits_{\theta_{i_1},\zeta}{\zeta} \label{eq:opt_prob_method_1_small_a}&\\ &\text{subject to} \nonumber&\\ &c_{i_{k_p}}- \check{\varphi}_{i_p}(k)^T h_{p,o}(\theta_{i_1}) \leq \zeta, \;\forall k \in [1,2N],\; \forall p \in [1,\bar{p}] \label{eq:opt_prob_method_1_small_b}&\\ &\theta_{i_1} \in \Theta_{i_1}^{L\rho} \label{eq:opt_prob_method_1_small_c}&\\ &h_{p,o}(\theta_{i_1}) \in \Theta_{i_p}^{L\rho}, \; \forall p \in [2,\bar{p}]& \label{eq:opt_prob_method_1_small_d} \end{eqnarray} \end{subequations}\normalsize Problem \eqref{eq:opt_prob_method_1_small} is a nonlinear optimization program (NLP) with linear cost \eqref{eq:opt_prob_method_1_small_a}, $2N\bar{p}$ nonlinear constraints \eqref{eq:opt_prob_method_1_small_b} (which require the preliminary solution of the $2N\bar{p}$ LPs \eqref{eq:preliminary_LP}), $2N$ linear constraints \eqref{eq:opt_prob_method_1_small_c}, and, finally, $2N(\bar{p}-1)$ nonlinear constraints \eqref{eq:opt_prob_method_1_small_d}. All nonlinear constraints are polynomial, so Jacobians and Hessians can be efficiently computed analytically and exploited in the numerical solver.
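The preliminary LPs \eqref{eq:preliminary_LP} can be sketched as follows, for one horizon $p$ (hypothetical small instance; SciPy's \texttt{linprog} as the LP solver; the loop over $k$ is embarrassingly parallel):

```python
import numpy as np
from scipy.optimize import linprog

# Precompute c_{k,p} of eq. (preliminary_LP) for one horizon p:
# 2N LPs, one per sign-split regressor, over the FPS {A theta <= b}.
def precompute_c(A, b, Phi):
    check_phi = np.vstack([Phi, -Phi])         # split the absolute value
    c = np.empty(len(check_phi))
    for k, phi in enumerate(check_phi):
        res = linprog(-phi, A_ub=A, b_ub=b,    # max phi^T theta as an LP
                      bounds=[(None, None)] * A.shape[1])
        c[k] = -res.fun
    return c

A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])  # box |theta| <= 1
b = np.ones(4)
Phi = np.array([[1., 2.], [0.5, -1.]])
print(precompute_c(A, b, Phi))   # [3., 1.5, 3., 1.5]
```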
A possible alternative is to use a quadratic cost in \eqref{eq:opt_prob_method_1}, e.g. $\boldsymbol{\tau}_i(\theta)^TQ\boldsymbol{\tau}_i(\theta)$, where $Q$ is a symmetric positive definite weighting matrix. This would penalize a weighted average of the simulation error bounds over the considered horizon $\bar{p}$, instead of the worst case, as done in \eqref{eq:opt_prob_method_1}. In this case, a similar reformulation can be carried out, resulting in an NLP with quadratic cost and linear and polynomial constraints. \subsection{Method II} \label{ss:pred_id_method_2} In the second approach, we search for the one-step-ahead model that minimizes a standard simulation error criterion, while enforcing membership in the FPS $\Theta_{i_1}^{L\rho}$ and the exponentially decaying behavior of the iterated predictors' parameters for $p>1$ up to $\bar{p}$. The corresponding NLP is: \begin{subequations} \label{eq:opt_prob_method_2} \begin{eqnarray} \hat{\theta}_{i_1}=&\text{arg} \min\limits_{\theta_{i_1} \in \Theta_{i_1}^{L\rho}}{\left\Vert \tilde{\boldsymbol{Y}}_i-\hat{\boldsymbol{Z}}_i(\theta_{i_1}) \right\Vert_2^2} \\ &\text{subject to} \nonumber\\ &h_{p,o}(\theta_{i_1}) \in \Gamma_{i_p}, \; \forall p \in [2,\bar{p}]\label{eq:opt_prob_method_2b} \end{eqnarray} \end{subequations} where \[ \resizebox{1\columnwidth}{!}{ $\begin{aligned} &\tilde{\boldsymbol{Y}}_i=\left[ \tilde{y}_i(1) \; \tilde{y}_i(2) \; \hdots \; \tilde{y}_i(N) \right]^T\\ &\hat{\boldsymbol{Z}}_i(\theta_{i_1})=\left[\tilde{\varphi}_{i_1}(0)^T \theta_{i_1} \; \tilde{\varphi}_{i_2}(0)^T h_{2,o}(\theta_{i_1}) \; \hdots \; \tilde{\varphi}_{i_N}(0)^T h_{N,o}(\theta_{i_1}) \right]^T. \end{aligned}$} \] Problem \eqref{eq:opt_prob_method_2} is an NLP with polynomial cost function, $2N$ linear constraints and $2(\bar{p}-1)$ polynomial constraints. In this method, too, Jacobians and Hessians can be efficiently computed analytically.
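To illustrate Method II, the following sketch (our own toy construction, not the code used in Section \ref{s:results}) fits a scalar first-order ARX model by minimizing the simulated-output error with an SQP-type solver, while imposing decay constraints $|a|^p \le \hat{L}\hat{\rho}^p$ that play the role of \eqref{eq:opt_prob_method_2b}; all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Toy first-order system z(k+1) = a0*z(k) + b0*u(k) with bounded noise
rng = np.random.default_rng(0)
a0, b0, N = 0.7, 1.0, 200
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a0 * y[k - 1] + b0 * u[k - 1]
y += rng.uniform(-0.01, 0.01, N)              # bounded output noise

def sim(theta):
    a, b = theta                               # free-run (iterated) predictor
    z = np.zeros(N)
    for k in range(1, N):
        z[k] = a * z[k - 1] + b * u[k - 1]
    return z

cost = lambda th: np.sum((y - sim(th)) ** 2)   # simulation error criterion
L_hat, rho_hat, pbar = 1.2, 0.8, 20            # assumed decay estimates
cons = [{'type': 'ineq',                       # L_hat*rho_hat**p - |a|**p >= 0
         'fun': lambda th, p=p: L_hat * rho_hat ** p - abs(th[0]) ** p}
        for p in range(2, pbar + 1)]
res = minimize(cost, x0=[0.5, 0.5], constraints=cons)   # SLSQP under the hood
print(res.x)    # close to (0.7, 1.0)
```

The true parameters satisfy the decay constraints here, so the constrained fit recovers them; on real data, the full linear FPS constraints of \eqref{eq:opt_prob_method_2} would be added in the same way.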
\begin{remark} \label{rm:id_algo} The optimization problems of Methods I and II are always feasible by construction. The inclusion of constraints \eqref{eq:opt_prob_method_1_b} and \eqref{eq:opt_prob_method_2b} guarantees consistency with the results of Section \ref{s:ms_sm_appr}, including asymptotic stability of the derived models, as shown by Theorem \ref{th:asymptotic_stability}. It is also possible to adopt variations, e.g. by adding more constraints to Method II, like $\theta_{i_p}\in \Theta_{i_p}^{L\rho}$ for some selected $p\in[1,p_{max}]$. \end{remark} \subsection{Computational aspects}\label{ss:comput_aspects} Computational effort is often the main drawback in Set Membership identification. The optimization problems \eqref{eq:opt_prob_method_1_small} and \eqref{eq:opt_prob_method_2} are constrained Nonlinear Programs (NLP), thus they are not convex in general. Finding a feasible point for this class of problems can be computationally hard, even when such a point is guaranteed to exist, as in our case. In our tests in Section \ref{s:results}, the NLPs are solved resorting to Sequential Quadratic Programming (SQP) algorithms (MATLAB's \verb|fmincon|). In the literature (e.g., \cite{nocedal2006numerical}), global convergence of SQP to a local minimizer has been proven under rather mild assumptions. Yet, in applications these assumptions are not easy to verify. What we can observe, however, is the practical performance obtained with such a well-established numerical approach. Given the non-convex nature of the NLPs, for each problem instance we ran the solver several times, each one with a different initialization value, to evaluate whether it gave consistent results and to choose the best local optimum among the resulting ones.
In particular, in all the runs for either NLPs \eqref{eq:opt_prob_method_1_small} or \eqref{eq:opt_prob_method_2} (around 200 for each problem and for both the numerical example and the real-world application in Section \ref{s:results}), the SQP algorithm was always able to converge to a feasible local minimizer.\\ The complexity of the NLP mainly depends on the FPSs, which are used to define the constraints and to compute the simulation error bounds. The FPSs are polytopes whose number of facets generally grows linearly with the number of data points, and whose dimensionality grows linearly with the horizon $p$ in the multi-step approach adopted here. To reduce complexity, several contributions in the literature propose to outer-approximate the FPSs; see the references provided in Section \ref{ss:FPS_and_tau_def}. These approaches present different trade-offs between complexity reduction and conservativeness.\\ In our context, where computational time is not critical since the identification is carried out off-line, we prefer a different approach: a redundant constraint removal procedure. In Methods I and II, the set membership constraints are nonlinear in the optimization variable $\theta_{i_1}$. However, for each $p$ the corresponding FPS features $2N$ inequalities that are linear in the entries of $\theta_{i_p}$. Therefore, we can split \[ h_{p,o}(\theta_{i_1}) \in \Theta_{i_p}, \; \forall p, \] into \[\theta_{i_p} \in \Theta_{i_p} \; \wedge \theta_{i_p}=h_{p,o}(\theta_{i_1}), \; \forall p, \] and then carry out a redundant constraint removal routine on each set of linear constraints $\theta_{i_p} \in \Theta_{i_p}$ for $p\in[1,\bar{p}]$. In \cite{paulraj_comparative}, a comparative analysis of different redundant-constraint identification approaches is presented. In our tests, we used the one described in \cite{caron_mcdonald_ponic}, which is based on minimizing each linear constraint function subject to the remaining constraints.
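A minimal sketch of this per-constraint test (hypothetical instance, SciPy's \texttt{linprog}): each row of $\{Ax \le b\}$ is checked against the remaining ones, and rows whose minimum slack is provably positive are dropped.

```python
import numpy as np
from scipy.optimize import linprog

# Redundancy test in the spirit of Caron-McDonald-Ponic: row j of
# {A x <= b} is redundant if min_x (b_j - a_j^T x), subject to the
# remaining rows, is positive (the row can then never become active).
def redundant_rows(A, b):
    keep = []
    for j in range(len(b)):
        mask = np.arange(len(b)) != j
        # min (b_j - a_j^T x) == b_j - max a_j^T x; solve the max as an LP
        res = linprog(-A[j], A_ub=A[mask], b_ub=b[mask],
                      bounds=[(None, None)] * A.shape[1])
        if res.status == 0 and b[j] + res.fun > 0:   # res.fun = -max a_j^T x
            continue                                  # provably redundant
        keep.append(j)                # kept also when the LP is unbounded
    return keep

# Unit box plus a redundant half-space x0 <= 2
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.], [1., 0.]])
b = np.array([1., 1., 1., 1., 2.])
print(redundant_rows(A, b))    # the last row is dropped
```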
Then, if the obtained optimal value is positive, the constraint is marked as redundant and can be removed from the set. This method requires the solution of as many LPs as the number of original constraints, for each FPS. In our tests, however, it consistently reduced the total number of constraints to be dealt with. \section{State-space predictor form} \label{s:ss_form} When the state is measurable, we can identify a predictor model of the form \eqref{eq:sist_desc}, where $C$ is replaced by the identity matrix. Therefore, the $p$-steps-ahead $i$-th output is: \begin{equation} \label{eq:output_ss_form} z_i(k+p)=x_i(k+p)=C_i A^px(k)+C_i \sum_{j=1}^p A^{j-1}Bu(k+p-j), \end{equation} where $C_i$ is the $i$-th row of the identity matrix. We form the regressor $\psi_p \in \mathbb{R}^{n+mp}$ as: \begin{equation} \label{eq:regr_ss_def}\nonumber \psi_p(k)= \left[ x(k)^T \; u(k)^T \; u(k+1)^T \; \cdots \; u(k+p-1)^T \right]^T, \end{equation} and the parameter vector $\theta^0_{i_p} \in \mathbb{R}^{n+mp}$ is: \begin{equation} \label{eq:param_vect_ss_form}\nonumber \theta^0_{i_p}= \begin{bmatrix} C_iA^p & C_iA^{p-1}B & C_iA^{p-2}B & \hdots & C_iAB & C_iB \end{bmatrix}^T. \end{equation} Then, \eqref{eq:output_ss_form} can be written as $z_i(k+p)=\psi_p(k)^T \theta^0_{i_p}$. \\ Note that, unlike the ARX form considered in the previous sections, the regressor is now the same for all the $n$ output equations. The noise-corrupted measurement of the system state is $y(k)=z(k)+d(k)$. We define the one-step-ahead model as: \begin{equation} \label{eq:pred_1s_ss_form} \hat{z}_i(k+1)=\varphi_1(k)^T \theta_{i_1}, \end{equation} where $\varphi_1(k)=\left[y(k)^T \; u(k)^T \right]^T \in \mathbb{R}^{n+m}$, and $\theta_{i_1}=\left[ C_i A \; C_i B \right]^T \in \mathbb{R}^{n+m}$.
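As an illustration, the parameter vector $\theta^0_{i_p}$ defined above can be assembled from given matrices $A$ and $B$ in a few lines of Python (a direct transcription of \eqref{eq:output_ss_form}; NumPy is our choice of tooling):

```python
import numpy as np

def multistep_params(A, B, i, p):
    """p-step parameter vector theta_ip = [C_i A^p, C_i A^(p-1) B, ..., C_i B]^T,
    with C_i the i-th row of the identity matrix."""
    Ci = np.eye(A.shape[0])[i]
    blocks = [Ci @ np.linalg.matrix_power(A, p)]
    for j in range(p, 0, -1):  # j = p, ..., 1: coefficient of u(k+p-j)
        blocks.append(Ci @ np.linalg.matrix_power(A, j - 1) @ B)
    return np.concatenate(blocks)
```

By construction, $\psi_p(k)^T \theta^0_{i_p}$ reproduces $x_i(k+p)$ exactly when $\psi_p$ stacks the true state and the applied inputs.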
Then, the multi-step predictors are obtained by iteration of \eqref{eq:pred_1s_ss_form}, and their parameters are polynomial functions of the parameters of the one-step-ahead predictor, denoted as $\theta_{i_p}=h_{p,n}(\theta_{i_1})\in \mathbb{R}^{n+mp}$. \\ Under Assumptions \ref{as:asympt_stable}-\ref{as:bounded_dist}, the regressor $\psi_p$ belongs to a compact set $\Psi_p$: \begin{equation*} \psi_p(k)\in \Psi_p \subset \mathbb{R}^{n+mp}, \; \Psi_p \text{ compact, } \forall p \in \mathbb{N}, \; \forall k \in \mathbb{Z}, \end{equation*} and $\varphi_p$ belongs to a compact set $\Phi_p$: \begin{equation*} \varphi_p(k) \in \Phi_p = \Psi_p \oplus \mathbb{D}_p, \; \forall p \in \mathbb{N}, \; \forall k \in \mathbb{Z}, \end{equation*} where $\mathbb{D}_p \doteq \left\{ \left[ d^T,0,\hdots,0 \right]^T : |d| \leq \bar{d}_0 \right\}$. \\ The sampled data set is defined as: \begin{equation*} \tilde{\mathscr{V}}_{i_p}^N \doteq \left\{ \tilde{v}_{i_p}(k)= \begin{bmatrix} \tilde{\varphi}_p(k) \\ \tilde{y}_{i_p}(k) \end{bmatrix}, \; k=1,\hdots,N \right\} \subset \mathbb{R}^{1+n+mp}, \end{equation*} with $\tilde{y}_{i_p}(k) \doteq \tilde{y}_i(k+p)$, and its continuous counterpart is: \begin{equation*} \resizebox{1\columnwidth}{!}{ $ \mathscr{V}_{i_p} \doteq \left\{ v_{i_p}= \begin{bmatrix} \varphi_p \\ y_{i_p} \end{bmatrix}: \, y_{i_p} \in Y_{i_p}(\varphi_p), \, \forall \varphi_p \in \Phi_p \right\} \subset \mathbb{R}^{1+n+mp}, $} \end{equation*} where $Y_{i_p}(\varphi_p) \subset \mathbb{R}$ is the compact set of all the possible $i$-th output values corresponding to each regressor $\varphi_p \in \Phi_p$, and to every possible noise realization $d_i: |d_i|\leq \bar{d}_{0_i}$. Assumption \ref{as:info_data} and its consequences apply also here, as in Section \ref{s:multitep_pred}. Moreover, all the results presented in Section \ref{s:ms_sm_appr} can be straightforwardly extended to the case of the predictor defined in \eqref{eq:pred_1s_ss_form}. 
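The sampled data set defined above can be assembled directly from the recorded signals; a Python sketch, where the arrays $y$ and $u$ have shapes $(T,n)$ and $(T,m)$ (these shapes, and the function name, are our assumptions):

```python
import numpy as np

def build_dataset(y, u, i, p):
    """Pairs (phi_p(k), y_i(k+p)): regressors
    phi_p(k) = [y(k)^T, u(k)^T, ..., u(k+p-1)^T]^T and p-step targets."""
    T = len(y)
    Phi, targets = [], []
    for k in range(T - p):
        Phi.append(np.concatenate([y[k]] + [u[k + j] for j in range(p)]))
        targets.append(y[k + p, i])
    return np.asarray(Phi), np.asarray(targets)
```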
The main difference is that here the statement of Corollary \ref{c:lambda_rate_conv} becomes $\lambda_{i_p} = \bar{d}_0^T \left\vert (C_iA^p)^T \right\vert \leq \left\Vert \bar{d}_0 \right\Vert_1 L_i \rho_i^{p+1}$, and thus \eqref{eq:L_eff_arx} becomes $\hat{L}_i=\nicefrac{L_i'}{\left\Vert \bar{d} \right\Vert_1}$. Furthermore, the results presented in Section \ref{s:tau_inf_intro} can also be extended to the predictor model defined by \eqref{eq:pred_1s_ss_form}. Here, going through the same reasoning as in \eqref{eq:err_bound_arx_pgrande}-\eqref{eq:tau_arx_pgrande_iter1} leads to: \begin{equation} \label{eq:err_bound_ss_pgrande}\nonumber \begin{aligned} &\left\vert z_i(k+\ell\bar{p}+j) - \hat{z}_i(k+\ell\bar{p}+j) \right\vert \leq \hat{\tau}_{i_{\ell\bar{p}+j}}(\theta_{i_{\ell\bar{p}+j}}) \leq \\ &\; \; \leq \hat{\tau}_{i_{\bar{p}}}(\theta_{i_{\bar{p}}}) \sum_{m=0}^{\ell-1} \chi_{i,\bar{p}}^m + \left\Vert \bar{d} \right\Vert_1 \sum_{m=1}^\ell \chi_{i,\bar{p}}^m + \left\Vert \hat{\tau}_j \right\Vert_1 \chi_{i,\bar{p}}^\ell, \end{aligned} \end{equation} where $\hat{\tau}_j=\left[\hat{\tau}_{1_j}(\theta_{1_j}), \, \hdots, \, \hat{\tau}_{n_j}(\theta_{n_j}) \right]^T$ and $\chi_{i,\bar{p}}=\hat{L}_i \hat{\rho}_i^{\bar{p}+1}$. \\ Theorem \ref{th:conv_tau_inf_ARX} and the related Remarks and Lemmas apply straightforwardly to \eqref{eq:pred_1s_ss_form}, with minor modifications: the convergence condition of Theorem \ref{th:conv_tau_inf_ARX} is here given by $\left\vert \chi_{i,\bar{p}} \right\vert = \left\vert \hat{L}_i \hat{\rho}_i^{\bar{p}+1} \right\vert < 1$, and \eqref{eq:tau_inf_ARX_def} becomes \[ \hat{\tau}_{i_{\infty}}(\theta_{i_{\bar{p}}})=\hat{\tau}_{i_{\bar{p}}}(\theta_{i_{\bar{p}}}) \left( \frac{1}{1-\chi_{i,\bar{p}}} \right) + \left\Vert \bar{d} \right\Vert_1 \left( \frac{\chi_{i,\bar{p}}}{1-\chi_{i,\bar{p}}} \right).
\] Lemma \ref{l:overshoot_cond} and Remark \ref{rm:end_of_overshoot} still apply, but here $\tau_{i_{max}}$ has to be replaced with $\left\Vert \hat{\tau}_j \right\Vert_1$. \\ Finally, $\hat{\theta}_{i_1}$ is identified resorting to the methods presented in Section \ref{s:pred_ident}, and the estimated system matrices $\hat{A}\approx A$ and $\hat{B}\approx B$ are built as: \[ \begin{array}{l} \hat{A}= \left[\begin{array}{c} \hat{\theta}_{1_1}^{(1:n)}\\ \vdots\\ \hat{\theta}_{n_1}^{(1:n)}\\ \end{array}\right] ,\; \hat{B}= \left[\begin{array}{c} \hat{\theta}_{1_1}^{(n+1:n+m)}\\ \vdots\\ \hat{\theta}_{n_1}^{(n+1:n+m)}\\ \end{array}\right], \end{array} \] where $\hat{\theta}_{i_1}^{(j:l)}$ denotes the elements of vector $\hat{\theta}_{i_1}$ from the $j$-th entry to the $l$-th entry. \section{Simulation and experimental results}\label{s:results} \subsection{Simulation results} \label{s:sim_res} We first assess the performance of the proposed identification procedure in a numerical example, and compare the results with those of established identification approaches: the prediction error method (PEM) and the simulation error method (SEM). The PEM approach identifies the model parameters by minimizing the squared $\ell_2$-norm of the one-step-ahead prediction error. The SEM approach is based on the minimization of the squared $\ell_2$-norm of the simulation error, where the simulation of the system output is obtained by iteration of the prediction model, and corresponds to the unconstrained version of \eqref{eq:opt_prob_method_2}. More details can be found, e.g., in \cite{soderstrom1989system} or \cite{tomita1992equation}. The numerical example analyzed here also gives insight into the procedures proposed in Section \ref{ss:estim_d_o_rho}.
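As a minimal illustration of the SEM benchmark, the following Python sketch fits a scalar one-step model $z(k+1)=a\,z(k)+b\,u(k)$ by minimizing the squared $\ell_2$-norm of the simulation error with SciPy's \verb|least_squares| on noiseless synthetic data; the scalar model, the data, and the solver choice are ours, not the exact setup of the cited references:

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(theta, z0, u):
    """Iterate the one-step model z(k+1) = a z(k) + b u(k)."""
    a, b = theta
    z = np.empty(len(u) + 1)
    z[0] = z0
    for k in range(len(u)):
        z[k + 1] = a * z[k] + b * u[k]
    return z

def sem_fit(y, u, theta0=(0.0, 0.0)):
    """SEM: minimize the l2-norm of the simulation error (unconstrained)."""
    res = least_squares(lambda th: simulate(th, y[0], u) - y, theta0)
    return res.x

# Synthetic data from a known stable system (noiseless for this sketch).
rng = np.random.default_rng(1)
a_true, b_true = 0.9, 0.5
u = rng.uniform(-1.0, 1.0, 200)
y = simulate((a_true, b_true), 0.0, u)
theta_hat = sem_fit(y, u)
```

Because the residual is the full simulated trajectory, the cost is non-convex in $a$; the constrained version in \eqref{eq:opt_prob_method_2} pairs this same cost with the FPS membership constraints.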
We consider the following one-input, three-output, underdamped, asymptotically stable system in continuous time $t$: \begin{equation} \label{eq:underdamp_sys_ss} \begin{aligned} \dot{x}(t) &= \begin{bmatrix} 0 & 0 & -160 \\ 1 & 0 & -24 \\ 0 & 1 & -10.8 \end{bmatrix} x(t)+\begin{bmatrix} 160 \\ 0 \\ 0 \end{bmatrix} u(t) \\ y(t)&=x(t)+d(t) \end{aligned} \end{equation} \begin{figure}[thpb] \centering \includegraphics[width=1.0\columnwidth]{lambda_ARX_d07007} \caption{Numerical example: estimated values of $\underline{\lambda}_{i_p}$ with $\bar{d}=\left[0.7 \; 0.7 \; 0.07 \right]^T$ for the ARX predictor case. Solid line: $\underline{\lambda}_{1_p}$; dashed line: $\underline{\lambda}_{2_p}$; dotted line: $\underline{\lambda}_{3_p}$.} \label{f:lambda_for_est_d_ARX} \end{figure} The system eigenvalues are $s_1=-10$ and $s_{2,3}=-0.4 \pm i 3.98$, and the output measurements are affected by uniformly distributed random noise, with $\bar{d}_{0}=[1\;1\;0.1]^T$. The input takes values in the set $\{-1; \; 0; \; 1 \}$ randomly every 4 time units. The considered data set is composed of 10000 input and output data points collected with a sampling frequency of 10 samples per time unit. The first half of the data set is used for the identification phase, while the second half is used for validation. \begin{figure}[thpb] \centering \begin{tabular}{c} \includegraphics[width=0.94\columnwidth]{lambda_ARX_o_4_mod} \\ \includegraphics[width=0.94\columnwidth]{lambda_ARX_o_3_mod} \\ \includegraphics[width=0.94\columnwidth]{lambda_ARX_o_2_mod} \end{tabular} \begin{tikzpicture}[baseline,overlay] \node[font=\color{black}] at (-0.2,2.79) {\small (a)}; \node[font=\color{black}] at (-0.2,0.365) {\small (b)}; \node[font=\color{black}] at (-0.2,-2.05) {\small (c)}; \end{tikzpicture} \caption{Numerical example: estimated values of $\underline{\lambda}_{i_p}$ with $\bar{d}=\left[1 \; 1 \; 0.1 \right]^T$ for the ARX predictor case.
Solid line: $\underline{\lambda}_{1_p}$; dashed line: $\underline{\lambda}_{2_p}$; dotted line: $\underline{\lambda}_{3_p}$. Fig. (a): $o=4$; Fig. (b): $o=3$; Fig. (c): $o=2$. The dashed vertical lines indicate the value of $\bar{p}=\max\limits_i{\bar{p}_i}$ obtained for the chosen $\bar{d}$, and are used to set $o$ at the lowest possible value such that all $\underline{\lambda}_{i_p}=0, \, \forall p>\bar{p}$.} \label{f:lambda_ARX_fun_o} \end{figure} To carry out a complete analysis, we consider both the ARX model formulation and the state-space one. In Procedure \ref{p:d_bar_est_procedure}, we start with a guess of the noise bound $\bar{d}=\left[0.7 \; 0.7 \; 0.07 \right]^T$, and compute the corresponding values of $\underline{\lambda}_{i_p}$, for $p \in [1,150]$, resorting to \eqref{eq:lambda_p_calc}. The results are depicted in Fig. \ref{f:lambda_for_est_d_ARX} for the ARX case; a similar behavior is obtained for the state-space model. As predicted by Theorem \ref{th:conv_lambda_diff_d}, $\underline{\lambda}_p$ converges to $\left[ 0.3 \; 0.3 \; 0.03 \right]^T$, which corresponds to $\bar{d}_{0_i}-\bar{d}_i$. Then, we set the noise bound to $\bar{d}=\left[1 \; 1 \; 0.1 \right]^T$, which is indeed consistent with the real one. Fig. \ref{f:lambda_ARX_fun_o} depicts the results of Procedure \ref{p:o_est_procedure}. It correctly indicates $o=3$ as the minimum model order of the ARX predictors. Then, we carry out Procedure \ref{p:decay_est_procedure} to estimate the parameters $\hat{L}_i$ and $\hat{\rho}_i$ of the exponentially decaying trend, see \eqref{eq:est_L_rho_optprob} and \eqref{eq:L_eff_arx}. For the ARX predictor, the resulting parameters are $\hat{L}=\left[3.094 \; 2.162 \; 0.259 \right]^T$ and $\hat{\rho}=\left[0.959 \; 0.959 \; 0.959 \right]^T$, while for the state-space case we obtain $\hat{L}=\left[3.982 \; 0.956 \; 0.092 \right]^T$ and $\hat{\rho}=\left[0.961 \; 0.965 \; 0.961 \right]^T$. Fig.
\ref{f:Lr_ARX} shows the estimated decay bounds over the corresponding values of $\underline{\lambda}_{i_p}$ for the ARX model structure. Similar results are obtained for the state-space structure. \begin{figure}[thpb] \centering \includegraphics[width=1.0\columnwidth]{Lr_ARX} \caption{Numerical example: estimated values of $\underline{\lambda}_{i_p}$ and of the corresponding bound $\hat{L}_i\hat{\rho}_i^p$ for the ARX predictor case. Solid line: $\underline{\lambda}_{1_p}$; dashed line: $\underline{\lambda}_{2_p}$; dotted line: $\underline{\lambda}_{3_p}$. The exponentially decaying bounds are represented with thin continuous lines that lie over the corresponding $\underline{\lambda}_{i_p}$.} \label{f:Lr_ARX} \end{figure} \begin{figure}[thpb] \centering \includegraphics[width=1.0\columnwidth]{Tau_G_ARX_y2} \caption{Numerical example: guaranteed simulation error bound $\hat{\tau}_{2_p}$ on $\hat{z}_2$ for the ARX predictor. Dotted line with `$+$': Method I; solid line with `$\diamond$': Method II; dashed line with `$\square$': SEM approach; dash-dot line with `$\circ$': PEM approach.} \label{f:tau_G_ARX} \end{figure} \begin{table}[ht] \caption{\label{t:G_tau_err_ARX}Numerical example: guaranteed simulation error bound and worst-case prediction error on validation data for the ARX predictor.} \centering \setlength\tabcolsep{3.5pt} \small \begin{tabular}{c c c c c c c c c c} \toprule & & \multicolumn{4}{c}{i=1} & \multicolumn{4}{c}{i=3} \\ \cmidrule(lr){3-6}\cmidrule(lr){7-10} & $p$: & $1$ & $8$ & $19$ & $27$ &$1$ & $12$ & $35$ & $50$ \\ \midrule \multirow{2}{*}{SEM} & $\tau_{i_p}$ & 8.11 & \textbf{2.72} & 8.13 & 6.10 & 0.76 & 1.20 & 0.45 & \textbf{0.22} \\ & $e_{i_p}$ & 4.11 & \textbf{1.90} & \textbf{4.06} & \textbf{3.27} & 0.46 & 0.61 & 0.30 & 0.19 \\ \midrule \multirow{2}{*}{Method II} & $\tau_{i_p}$ & \textbf{6.26} & 5.03 & \textbf{7.36} & \textbf{5.92} & \textbf{0.79} & \textbf{0.91} & \textbf{0.40} & 0.24 \\ & $e_{i_p}$ & \textbf{3.15} & 4.01 & 4.17 & 3.40 &
\textbf{0.36} & \textbf{0.39} & \textbf{0.24} & \textbf{0.18} \\ \bottomrule \end{tabular} \normalsize \end{table} \begin{table}[ht] \caption{\label{t:G_tau_err_SS}Numerical example: guaranteed simulation error bound and worst-case prediction error on validation data for the state-space predictor.} \centering \setlength\tabcolsep{3.5pt} \small \begin{tabular}{c c c c c c c c c c} \toprule & & \multicolumn{4}{c}{i=1} & \multicolumn{4}{c}{i=3} \\ \cmidrule(lr){3-6}\cmidrule(lr){7-10} & $p$: & $1$ & $12$ & $35$ & $50$ &$1$ & $8$ & $19$ & $27$ \\ \midrule \multirow{2}{*}{SEM} & $\tau_{i_p}$ & 9.97 & 13.9 & 6.34 & 3.45 & 0.27 & 0.64 & 0.29 & 0.31 \\ & $e_{i_p}$ & 4.58 & 8.85 & 3.21 & 2.47 & 0.21 & 0.38 & 0.25 & 0.27 \\ \midrule \multirow{2}{*}{Method II} & $\tau_{i_p}$ & \textbf{6.45} & \textbf{7.55} & \textbf{3.41} & \textbf{2.21} & \textbf{0.27} & \textbf{0.31} & \textbf{0.16} & \textbf{0.13} \\ & $e_{i_p}$ & \textbf{3.03} & \textbf{3.54} & \textbf{2.04} & \textbf{1.76} & \textbf{0.18} & \textbf{0.19} & \textbf{0.15} & \textbf{0.13} \\ \bottomrule \end{tabular} \normalsize \end{table} \begin{figure}[thpb] \centering \includegraphics[width=0.99\columnwidth]{ErrV_G_ARX_y2} \caption{Numerical example: worst-case validation error $e_{2_p}$ on $\hat{z}_2$ for the ARX predictor. Dotted line with `$+$': Method I; solid line with `$\diamond$': Method II; dashed line with `$\square$': SEM approach; dash-dot line with `$\circ$': PEM approach.} \label{f:err_G_ARX} \end{figure} The parameters of the predictors are eventually identified using Methods I and II, and the FPSs are defined as in \eqref{eq:FPS_with_decay}, where $\hat{\bar{\varepsilon}}_{i_p}$ is obtained from $\underline{\lambda}_{i_p}$ with $\alpha=1.2$, see \eqref{eq:eps_hat_def}. As benchmarks, we employ predictors identified using the PEM and SEM approaches.
We compare the performance of the identified models in terms of guaranteed simulation error bounds $\hat{\tau}_{i_p}(\theta_{i_p})$, computed over the identification data set with $\gamma=1.1$, and of worst-case validation error, defined as: \[ e_{i_p}=\max_{k=1,\hdots,N} \left\vert \tilde{y}_i(k+p)-\hat{z}_i(k+p) \right\vert \] and calculated over the validation data set. Fig. \ref{f:tau_G_ARX} depicts the obtained guaranteed error bounds related to the output $z_2$ for the identified ARX models, while Fig. \ref{f:err_G_ARX} presents the corresponding observed worst-case validation error. \begin{figure}[thpb] \centering \begin{tabular}{c} \small (a) \normalsize \\ \includegraphics[width=0.92\columnwidth]{Y2_sim_G_ARX_small} \\ \small (b) \normalsize \\ \includegraphics[width=0.92\columnwidth]{Y2_sim_G_ARX_zoom_small} \end{tabular} \caption{Numerical example, Fig. (a): simulated output $\hat{z}_2$ with the ARX predictor; Fig. (b): detailed view. Black solid line: measured output $\tilde{y}_2$; red solid line: real system output $z_2$; dashed line: simulated output with SEM predictor; dash-dotted line: simulated output with PEM predictor; dotted line: simulated output with Method II predictor.} \label{f:Y_sim_G_ARX} \end{figure} It can be noted that the model identified with Method I achieves (as expected from the employed cost criterion) the smallest worst-case (over $p$) guaranteed error bound, at the cost, however, of higher guaranteed bounds for longer horizons, as compared with Method II and SEM. Qualitatively similar outcomes are obtained for the other outputs and for the state-space model structure. More values of $\hat{\tau}_{i_p}$ and $e_{i_p}$ are reported in Tables \ref{t:G_tau_err_ARX} and \ref{t:G_tau_err_SS}.
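Both validation metrics, the worst-case error $e_{i_p}$ just defined and the RMSE reported in the tables, reduce to a few lines of code; a Python sketch, assuming arrays of $p$-step-ahead targets and predictions:

```python
import numpy as np

def worst_case_error(y_true, z_hat):
    """e_ip = max_k |y_i(k+p) - z_hat_i(k+p)| over the data set."""
    return np.max(np.abs(y_true - z_hat))

def rmse(y_true, z_hat):
    """Root mean squared p-step-ahead error over the data set."""
    return np.sqrt(np.mean((y_true - z_hat) ** 2))
```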
These results indicate that the proposed identification Method II has comparable, and often better, performance than the SEM approach, in terms of both error bound and observed validation error, and overall better performance than the other two approaches. In particular, the predictor identified using Method II performs well in long-range simulation, like the SEM approach, while also outperforming the SEM for short horizon values. Moreover, Fig. \ref{f:err_G_ARX} and Tables \ref{t:G_tau_err_ARX} and \ref{t:G_tau_err_SS} show how the predictor identified using Method II is able to provide a small one-step-ahead prediction error, like the PEM approach, and a small simulation error, like the SEM approach, combining the advantages of the two identification approaches. This is possible thanks to the constraints $\theta_{i_1}\in \Theta_{i_1}^{L\rho}$ in \eqref{eq:opt_prob_method_2}, which improve the performance over the SEM approach in terms of one-step-ahead prediction error. \begin{figure}[thpb] \centering \begin{tabular}{c} \small (a) \normalsize \\ \includegraphics[width=0.98\columnwidth]{Y_pred_p1_and_bound_G_ARX_y2} \\ \small (b) \normalsize \\ \includegraphics[width=0.98\columnwidth]{Y_sim_and_bound_G_ARX_y2} \end{tabular} \caption{Numerical example, Fig. (a): one-step prediction of output $\hat{z}_2$ with the ARX predictor ($p=1$); Fig. (b): simulated output $\hat{z}_2$ with the ARX predictor. Black solid line: measured output $\tilde{y}_2$; red solid line: real system output $z_2$; dashed line: predicted/simulated output with Method II predictor; thin black lines: Method II predictor error bounds.} \label{f:Y_bound_G_ARX} \end{figure} The model identified using Method I, on the other hand, achieves a lower simulation error for short horizons than the other approaches, at the cost of a higher simulation error for longer horizons.
This stems from the fact that we are minimizing the worst-case error over the whole horizon. Using a quadratic cost in \eqref{eq:opt_prob_method_1_large} in order to minimize the average error, as commented in Remark \ref{rm:id_algo}, could partly mitigate this issue. Besides the worst-case performance, Tables \ref{t:G_ARX_RMSE} and \ref{t:G_SS_RMSE} present exemplifying values of the root mean squared error (RMSE) for the predictors obtained using different identification methods, with an ARX and a state-space formulation, respectively. The RMSE is calculated over the validation data set as: \[ \text{RMSE}=\sqrt{\frac{\sum_{k=1}^N \Big( \tilde{y}_i(k+p)-\hat{z}_{i}(k+p) \Big)^2}{N}}, \] i.e., it considers the $p$-step-ahead simulation error. The results in the tables confirm the good performance of Method II, since the obtained predictor yields better (for short horizons) or similar RMSE as compared with the SEM. The predictor identified with Method I has good performance for short simulation horizons, but its error increases for longer ones. \begin{figure}[thpb] \centering \includegraphics[width=0.99\columnwidth]{Tau_iter_G_ARX_y2} \caption{Numerical example: infinite-horizon error bound $\hat{\tau}_{2_p}$ for the ARX predictor identified using Method II.
Solid line: bound calculated using \eqref{eq:tau_hat_global_def} for $p \in [1,120]$; dotted line with `$\circ$': iterative bound \eqref{eq:tau_arx_pgrande_iter} with $\bar{p}=80$; dashed line with `$\circ$': infinite-horizon bound \eqref{eq:tau_inf_ARX_def} with $\bar{p}=80$; dotted line with `$\square$': iterative bound with $\bar{p}=100$; dashed line with `$\square$': infinite-horizon bound with $\bar{p}=100$.} \label{f:tau_iter_G_ARX} \end{figure} \begin{table}[ht] \caption{\label{t:G_ARX_RMSE}Numerical example, validation data: Root Mean Square Error for $p$-step-ahead prediction and simulation for ARX models.} \centering \setlength\tabcolsep{3.5pt} \small \begin{tabular}{c c c c c c c c} \toprule \multicolumn{2}{c}{RMSE} & $p=1$ & $p=10$ & $p=20$ & $p=30$ & $p=60$ & sim \\ \midrule & $y_1$ & 5.539 & 21.42 & 26.32 & 27.77 & 30.27 & 30.56 \\ PEM & $y_2$ & \textbf{0.930} & 1.287 & 1.636 & 1.775 & 1.923 & 1.937 \\ & $y_3$ & 0.097 & 0.179 & 0.222 & 0.234 & 0.246 & 0.248 \\ \midrule & $y_1$ & 1.523 & 1.651 & 1.366 & \textbf{0.728} & \textbf{0.620} & \textbf{0.580} \\ SEM & $y_2$ & 1.018 & \textbf{0.935} & 0.787 & 0.667 & 0.661 & 0.577 \\ & $y_3$ & 0.159 & 0.185 & 0.163 & 0.082 & 0.065 & \textbf{0.059} \\ \midrule & $y_1$ & \textbf{0.979} & 1.661 & 1.431 & 1.344 & 1.403 & 1.411 \\ Method I & $y_2$ & 0.946 & 0.987 & 0.983 & 1.016 & 1.148 & 1.176 \\ & $y_3$ & \textbf{0.095} & \textbf{0.101} & \textbf{0.100} & 0.102 & 0.109 & 0.119 \\ \midrule & $y_1$ & 1.178 & \textbf{1.278} & \textbf{1.082} & 0.894 & 0.898 & 0.897 \\ Method II & $y_2$ & 0.978 & 0.941 & \textbf{0.750} & \textbf{0.589} & \textbf{0.577} & \textbf{0.573} \\ & $y_3$ & 0.130 & 0.134 & 0.106 & \textbf{0.064} & \textbf{0.060} & \textbf{0.059} \\ \bottomrule \end{tabular} \normalsize \end{table} Fig. \ref{f:Y_sim_G_ARX} presents an example of time-course of the system output $z_2$, comparing the real, measured and simulated values. In the detailed view of Fig. 
\ref{f:Y_sim_G_ARX} (b) it is possible to appreciate how the simulation obtained using the Method II predictor overlaps the true system output $z_2$. Fig. \ref{f:Y_bound_G_ARX} displays another example of the time course of the system output, comparing the real and measured values with the one-step-ahead prediction, Fig. \ref{f:Y_bound_G_ARX} (a), and with the long-range simulation, Fig. \ref{f:Y_bound_G_ARX} (b), reporting in both cases the corresponding error bounds. From Fig. \ref{f:Y_bound_G_ARX} (b) it is possible to notice that the guaranteed error bound for the long-range simulation case is smaller than the amplitude of the noise $d$. Thus, the distance of $\tilde{y}_2$ from $z_2$ is often greater than the error bound of $\hat{z}_2$. Fig. \ref{f:tau_iter_G_ARX} depicts the comparison between the simulation error bound $\hat{\tau}_{2_p}$ calculated using the definition \eqref{eq:tau_hat_global_def} for $p\in[1,120]$, the iterative error bound \eqref{eq:tau_arx_pgrande_iter} and the infinite-horizon error bound \eqref{eq:tau_inf_ARX_def}, obtained by setting $\bar{p}=80$ and $\bar{p}=100$, for the case of the predictor having an ARX structure, identified using Method II. Here, it is possible to notice that the iterative and the infinite-horizon error bounds become tighter upper bounds of the definition-based $\hat{\tau}_{2_p}$ as $\bar{p}$ increases. \\ Fig. \ref{f:Tau_var_alpha} shows the effects of the choice of $\alpha$ in \eqref{eq:eps_hat_def} on the identification performance. Here, different values of $\alpha$ are used, repeating the identification procedure using Method II, and computing the simulation error bound $\hat{\tau}_{2_p}$ for the obtained models. It is possible to see that for $\alpha=1$ the obtained FPS is too small, resulting in a validation error $e_{2_p}$ that violates the provided error bound, as motivated by Remark \ref{rm:bounds_conservativeness}.
Moreover, we can see that, with a smaller $\alpha$, the constraint $\theta_{i_1}\in \Theta_{i_1}^{L\rho}$ provides a reduced error for short prediction horizons, at the price of an increase of the error for longer horizons, whereas a larger value of $\alpha$ has the opposite effect. Table \ref{t:G_ARX_RMSE_var_alpha} presents the RMSE obtained by models identified using Method II with different values of $\alpha$. Here, it is possible to appreciate that a small increase of $\alpha$ reduces the simulation RMSE, but the improvement becomes marginal beyond a certain value (e.g., $\alpha=1.2$ for $y_1$ and $y_3$), so that choosing a larger $\alpha$ only increases the one-step-ahead error, as shown in Fig. \ref{f:Tau_var_alpha}. \begin{figure}[thpb] \centering \includegraphics[width=1.0\columnwidth]{Tau_vs_ep_alpha} \caption{Numerical example: worst-case validation error $e_{2_p}$ and guaranteed simulation error bound $\hat{\tau}_{2_p}$ on $\hat{z}_2$ for the ARX predictor identified using Method II for different values of $\alpha$.
Solid line: $e_{2_p}$ for $\alpha=1.0$; dashed line: $e_{2_p}$ for $\alpha=1.1$; dotted line: $e_{2_p}$ for $\alpha=1.2$; light gray area: $\hat{\tau}_{2_p}$ for $\alpha=1.0$; medium gray area: $\hat{\tau}_{2_p}$ for $\alpha=1.1$; dark gray area: $\hat{\tau}_{2_p}$ for $\alpha=1.2$.} \label{f:Tau_var_alpha} \end{figure} \begin{table}[ht] \caption{\label{t:G_ARX_RMSE_var_alpha}Numerical example, validation data: simulation Root Mean Square Error for ARX models for different values of $\alpha$.} \centering \setlength\tabcolsep{4.75pt} \small \begin{tabular}{c c c c c c c c c} \toprule RMSE & $\alpha$: & $1.0$ & $1.05$ & $1.1$ & $1.15$ & $1.2$ & $1.25$ & $1.3$ \\ \midrule & $y_1$ & 2.67 & 1.89 & 1.70 & 1.50 & 1.29 & 1.27 & 1.27 \\ Method II & $y_2$ & 0.70 & 0.57 & 0.57 & 0.57 & 0.57 & 0.57 & 0.57\\ & $y_3$ & 0.14 & 0.08 & 0.07 & 0.07 & 0.06 & 0.06 & 0.06\\ \bottomrule \end{tabular} \normalsize \end{table} Finally, Tables \ref{t:G_eig} and \ref{t:G_sys_matrix} report a comparison between the eigenvalues and the $A$ and $B$ matrices of the discrete-time system, obtained by applying the trapezoid approximation rule to \eqref{eq:underdamp_sys_ss}, and those of the model identified with the state-space predictors, for the various identification approaches.
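The trapezoid approximation used to build the discrete-time reference system of Table \ref{t:G_eig} corresponds to the bilinear (Tustin) transform; a Python sketch follows, with the sampling period $T=0.1$ matching the 10 samples per time unit stated earlier (the discretization of the input matrix is our reading of the trapezoid rule and may differ in minor details from the one used for Table \ref{t:G_sys_matrix}):

```python
import numpy as np

# Continuous-time system of the numerical example.
Ac = np.array([[0.0, 0.0, -160.0],
               [1.0, 0.0, -24.0],
               [0.0, 1.0, -10.8]])
Bc = np.array([[160.0], [0.0], [0.0]])
T = 0.1  # 10 samples per time unit

def tustin(Ac, Bc, T):
    """Bilinear (trapezoid) discretization:
    Ad = (I - T/2 Ac)^(-1) (I + T/2 Ac),  Bd = (I - T/2 Ac)^(-1) T Bc."""
    n = Ac.shape[0]
    M = np.linalg.inv(np.eye(n) - (T / 2.0) * Ac)
    return M @ (np.eye(n) + (T / 2.0) * Ac), M @ (T * Bc)

Ad, Bd = tustin(Ac, Bc, T)
```

The transform maps each continuous eigenvalue $s$ to $(1+sT/2)/(1-sT/2)$, so $s_1=-10$ lands exactly on $1/3\approx 0.333$ and the underdamped pair on $0.889\pm i\,0.369$, matching the first row of Table \ref{t:G_eig}.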
\begin{table}[ht] \caption{\label{t:G_SS_RMSE}Numerical example, validation data: Root Mean Square Error for $p$-step-ahead prediction and simulation for state-space models.} \centering \setlength\tabcolsep{3.5pt} \small \begin{tabular}{c c c c c c c c} \toprule \multicolumn{2}{c}{RMSE} & $p=1$ & $p=10$ & $p=20$ & $p=30$ & $p=60$ & sim \\ \midrule & $y_1$ & \textbf{1.041} & 1.437 & 1.790 & 1.947 & 2.181 & 2.214 \\ PEM & $y_2$ & \textbf{0.657} & 0.673 & 0.731 & 0.776 & 0.821 & 0.832 \\ & $y_3$ & \textbf{0.067} & 0.067 & 0.070 & 0.073 & 0.078 & 0.078 \\ \midrule & $y_1$ & 1.501 & 1.342 & 2.085 & 1.422 & 0.747 & 0.627 \\ SEM & $y_2$ & 0.824 & 0.907 & 0.612 & 0.663 & 0.604 & 0.607 \\ & $y_3$ & 0.073 & 0.099 & 0.079 & 0.079 & 0.075 & 0.075 \\ \midrule & $y_1$ & 1.067 & 1.110 & 1.272 & 1.184 & 1.242 & 1.260 \\ Method I & $y_2$ & 0.716 & 0.646 & 0.630 & 0.638 & 0.647 & 0.654 \\ & $y_3$ & \textbf{0.067} & \textbf{0.065} & 0.061 & 0.062 & 0.063 & 0.064 \\ \midrule & $y_1$ & 1.061 & \textbf{1.069} & \textbf{1.026} & \textbf{0.730} & \textbf{0.604} & \textbf{0.584} \\ Method II & $y_2$ & 0.726 & \textbf{0.620} & \textbf{0.600} & \textbf{0.600} & \textbf{0.582} & \textbf{0.584} \\ & $y_3$ & 0.069 & \textbf{0.065} & \textbf{0.060} & \textbf{0.060} & \textbf{0.059} & \textbf{0.059} \\ \bottomrule \end{tabular} \normalsize \end{table} \begin{table}[ht] \caption{\label{t:G_eig}Numerical example: real and identified system eigenvalues.} \centering \setlength\tabcolsep{3.5pt} \small \begin{tabular}{c c} \toprule & Eigenvalues \\ \midrule True system (trapezoid approximation)& $0.889\pm i 0.369 \, , \; 0.333$ \\ \midrule PEM (state-space predictor) & $0.877\pm i0.369\, , \; 0.020$ \\ \midrule SEM (state-space predictor) & $0.885\pm i0.372\, , \; 0.723$ \\ \midrule Method I (state-space predictor) & $0.884\pm i0.369\, , \; 0.349$ \\ \midrule Method II (state-space predictor) & $0.885\pm i0.373\, , \; 0.213$ \\ \bottomrule \end{tabular} \normalsize \end{table} \begin{table}[ht] 
\caption{\label{t:G_sys_matrix}Numerical example: real and identified system parameters.} \centering \setlength\tabcolsep{2.8pt} \small \begin{tabular}{c c c} \toprule & A & B \\ \midrule \begin{tabular}{@{}c@{}} True system (trapezoid\\ approximation) \end{tabular} & $\begin{bmatrix} 0.979 & -0.564 & -9.335 \\ 0.096 & 0.895 & -1.964 \\ 0.004 & 0.058 & 0.265 \end{bmatrix}$ & $\begin{bmatrix} 15.91 \\ 0.785 \\ 0.021 \end{bmatrix}$ \\ \midrule \begin{tabular}{@{}c@{}} SEM \\ (state-space predictor) \end{tabular} & $\begin{bmatrix} 1.095 & -1.882 & 3.252 \\ 0.090 & 0.976 & -2.819 \\ 0.006 & 0.039 & 0.422 \end{bmatrix}$ & $\begin{bmatrix} 15.21 \\ 0.817 \\ -0.016 \end{bmatrix}$ \\ \midrule \begin{tabular}{@{}c@{}} Method II \\ (state-space predictor) \end{tabular} & $\begin{bmatrix} 0.963 & -0.448 & -10.38 \\ 0.111 & 0.760 & -0.647 \\ 0.003 & 0.059 & 0.261 \end{bmatrix}$ & $\begin{bmatrix} 16.03 \\ 0.557 \\ 0.031 \end{bmatrix}$ \\ \bottomrule \end{tabular} \normalsize \end{table} \subsection{Experimental case study} \label{s:exp_res} \begin{figure}[thpb] \centering \includegraphics[width=1\columnwidth ,trim={0 3cm 0 0},clip]{Glider.pdf} \caption{Experimental case study: considered tethered aircraft during an autonomous take-off maneuver.} \label{f:glider} \end{figure} Here, we present the results obtained with the proposed identification approach applied to data acquired from real-world test flights of a small-scale prototype of an autonomous tethered aircraft, used for Airborne Wind Energy (AWE) generation, see Fig. \ref{f:glider} and \cite{fagiano2018glider}. We focus on the identification of a model of the roll-rate dynamics of the aircraft, resorting to a data set collected during several experiments. The data acquisition begins right after the take-off phase of each test flight, when the aircraft starts performing eight-shaped flight patterns parallel to the ground. 
The system description, along with more details about the measurements and data set acquisition, is available in \cite{fagiano2018glider}. As a first approximation, the dynamical equation for the roll angle of the aircraft is given by: \begin{equation} \label{eq:glider_roll_eq} \ddot{\sigma}(t)=a_{\sigma}\dot{\sigma}(t)+b_{\sigma}u(t), \end{equation} where $a_{\sigma}$ and $b_{\sigma}$ are parameters to be identified, and $u(t)$ is the control input for the ailerons. Equation \eqref{eq:glider_roll_eq} is a reasonable linear approximation of the nonlinear turning dynamics when the aircraft flies parallel to the ground, as in the considered experiments. The aircraft is autonomous, i.e., it features a feedback controller that manipulates the aileron, rudder, and front propeller to achieve the desired figure-of-eight patterns, which are typical of AWE applications. The data set includes measurements of the roll rate and of the aileron input signal, acquired with a sampling frequency of 50 Hz, denoted by $\tilde{y}(t)=x(t)+d(t)$, where $d(t)$ is the unknown measurement noise, and $\tilde{u}(t)$, respectively. The identification data set is composed of 11000 samples of each signal, while the validation data set features 6600 data points. Since the system state is measurable, we resort to a state-space predictor of order 1. We apply Procedures \ref{p:d_bar_est_procedure} and \ref{p:decay_est_procedure}, obtaining $\bar{d}=0.82$, $\hat{L}=1.31$, $\hat{\rho}=0.995$ and $\bar{p}=691$. Fig. \ref{f:lambda_glider_SS} depicts the behavior of the error bound $\underline{\lambda}_p$ after the estimation of the disturbance bound $\bar{d}$. Then, we resort to Method II to identify the unknown parameters of \eqref{eq:glider_roll_eq}, obtaining $\hat{a}_{\sigma}=0.959$ and $\hat{b}_{\sigma}=0.120$, and we test the predictor performance against the PEM and SEM approaches. Figs.
\ref{f:tau_glider_SS} and \ref{f:err_glider_SS} show a performance comparison in terms of the guaranteed simulation error bound $\hat{\tau}_p$ and the validation error $e_p$, while Fig. \ref{f:Y_sim_glider_SS} presents an example of the time course of the roll rate, both measured (validation data) and simulated. Table \ref{t:glider_SS_RMSE} shows the RMSE for different horizon lengths. These results confirm that the predictor identified with Method II represents a good trade-off between the PEM and the SEM approaches, combining the one-step-ahead accuracy of the former with the simulation accuracy over longer horizons of the latter. \begin{figure}[thpb] \centering \includegraphics[width=0.99\columnwidth]{Lambda_glider_SS} \caption{Experimental case study: estimated value of $\underline{\lambda}_p$ using a predictor in the state-space form.} \label{f:lambda_glider_SS} \end{figure} \begin{figure}[thpb] \centering \includegraphics[width=0.99\columnwidth]{Tau_glider_SS} \caption{Experimental case study: guaranteed simulation error $\hat{\tau}_p$. Solid line with $\diamond$: Method II; dashed line with $\square$: SEM approach; dash-dotted line with $\circ$: PEM approach.} \label{f:tau_glider_SS} \end{figure} \begin{figure}[thpb] \centering \includegraphics[width=0.99\columnwidth]{ErrV_glider_SS} \caption{Experimental case study: validation error $e_p$.
Solid line with $\diamond$: Method II; dashed line with $\square$: SEM approach; dash-dotted line with $\circ$: PEM approach.} \label{f:err_glider_SS} \end{figure} \begin{table}[ht] \caption{\label{t:glider_SS_RMSE}Experimental case study: Root Mean Square Error.} \centering \setlength\tabcolsep{3pt} \small \begin{tabular}{c c c c c c c c} \toprule RMSE & \footnotesize{$p=1$} & \footnotesize{$p=2$} & \footnotesize{$p=10$} & \footnotesize{$p=20$} & \footnotesize{$p=30$} & \footnotesize{$p=60$} & \small sim \\ \midrule PEM & 0.0477 & 0.0811 & 0.196 & 0.251 & 0.264 & 0.293 & 0.324 \\ \midrule SEM & 0.0481 & 0.0813 & 0.188 & 0.227 & 0.230 & 0.231 & 0.232 \\ \midrule Method II & 0.0478 & 0.0810 & 0.189 & 0.231 & 0.234 & 0.232 & 0.234 \\ \bottomrule \end{tabular} \normalsize \end{table} \begin{figure}[thpb] \centering \includegraphics[width=0.99\columnwidth]{Y2_sim_glider_SS_new} \caption{Experimental case study: simulated roll rate $[\nicefrac{rad}{s}]$ with the state-space predictor. Solid line: measured roll rate $\tilde{y}$; dashed line: simulated roll rate with SEM predictor; dash-dotted line: simulated roll rate with PEM predictor; dotted line: simulated roll rate with Method II predictor.} \label{f:Y_sim_glider_SS} \end{figure} \section{Conclusions} \label{s:conclusions} We presented new results pertaining to the identification of linear systems with guaranteed simulation error bounds, resorting to a Set Membership framework. The theoretical findings lead to clear procedures to estimate the noise bound, model order and system decay trend. Moreover, we derived a simulation error bound for an infinite simulation horizon, together with its properties and convergence conditions. This bound allowed us to demonstrate that it is possible to use the decay rate constraints to enforce the asymptotic stability of the identified model. Then, we presented two methods to learn one-step-ahead prediction models exploiting the estimated quantities.
Numerical simulations illustrate the validity and the performance of the proposed identification methods, which we compared to standard PEM and SEM identification approaches. Furthermore, an experimental case study illustrates the applicability to real data. Future work will be devoted to the extension of the proposed identification framework to the nonlinear case.
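For illustration, the identified roll-rate predictor can be simulated in a few lines. This is a minimal sketch (function and variable names are ours), assuming the reported $\hat{a}_{\sigma}$ and $\hat{b}_{\sigma}$ are the coefficients of the discrete-time order-1 predictor $x(k+1)=\hat{a}_{\sigma} x(k)+\hat{b}_{\sigma} u(k)$ at the 50 Hz sampling rate:

```python
import numpy as np

# Illustrative sketch (not the authors' code): simulate the order-1
# discrete-time roll-rate predictor x(k+1) = a*x(k) + b*u(k), with the
# parameter values identified with Method II.
a_sigma, b_sigma = 0.959, 0.120  # identified with Method II (50 Hz sampling)

def simulate(x0, u, a=a_sigma, b=b_sigma):
    """Open-loop simulation of the roll rate from initial state x0."""
    x = np.empty(len(u) + 1)
    x[0] = x0
    for k in range(len(u)):
        x[k + 1] = a * x[k] + b * u[k]
    return x

def rmse(y_sim, y_meas):
    """Root mean square error between simulated and measured signals."""
    return float(np.sqrt(np.mean((y_sim - y_meas) ** 2)))

# Synthetic usage example: step input on the ailerons.
u = np.ones(100)
x = simulate(0.0, u)
# Steady-state roll rate of the stable predictor: b / (1 - a).
x_ss = b_sigma / (1.0 - a_sigma)
```

Since $|\hat{a}_{\sigma}| < 1$, the simulated response converges to the steady-state value $\hat{b}_{\sigma}/(1-\hat{a}_{\sigma})$, consistent with the asymptotic stability enforced by the decay rate constraints.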
\section{Code} The whole code (image registration, experiments to test density estimators, enforcing similarity...) is available in the following GitHub repository: \url{https://github.com/Lydorn/netsimilarity}. \section{Proofs of the properties of the 1D similarity kernel} We give here the proofs of the properties of the 1-dimensional-output similarity kernel. \subsection{Proof of Theorem \ref{basicnet}} \begin{theorem} For any real-valued neural network $f_\theta$ whose last layer is a linear layer (without any parameter sharing) or a standard activation function thereof (sigmoid, tanh, ReLU...), and for any inputs $\mathbf{x}$ and $\mathbf{x}'$, $$\nabla_{\!\theta} f_\theta(\mathbf{x}) = \nabla_{\!\theta} f_\theta(\mathbf{x}') \;\;\implies\;\;f_\theta(\mathbf{x}) = f_\theta(\mathbf{x}') \,.$$ \end{theorem} \begin{proof} If the last layer is linear, the output is of the form $f_\theta(\mathbf{x}) = \sum_i w_i a_i(\mathbf{x}) + b$, where $w_i$ and $b$ are parameters in $\mathbb{R}$ and the $a_i(\mathbf{x})$ are activities from previous layers. The gradient $\nabla_{\!\theta} f_\theta(\mathbf{x})$ contains in particular as coefficients the derivatives $\frac{d f_\theta(\mathbf{x}) }{dw_i} = a_i(\mathbf{x})$. Thus $\nabla_{\!\theta} f_\theta(\mathbf{x}) = \nabla_{\!\theta} f_\theta(\mathbf{x}') \implies a_i(\mathbf{x}) = a_i(\mathbf{x}') \;\forall i$ in the last layer. The outputs can then be rebuilt: $f_\theta(\mathbf{x}) = \sum_i w_i a_i(\mathbf{x}) + b = \sum_i w_i a_i(\mathbf{x}') + b = f_\theta(\mathbf{x}')$. If the output is of the form $f_\theta(\mathbf{x}) = \sigma(c(\mathbf{x}))$ with $c(\mathbf{x}) = \sum_i w_i a_i(\mathbf{x}) + b$, then the gradient equality implies $\frac{d f_\theta(\mathbf{x}) }{db} = \frac{d f_\theta(\mathbf{x}') }{db}$, whose value is $\sigma'(c(\mathbf{x})) = \sigma'(c(\mathbf{x}'))$.
Then, as $\sigma'(c(\mathbf{x}))\, a_i(\mathbf{x}) = \frac{d f_\theta(\mathbf{x}) }{dw_i} = \frac{d f_\theta(\mathbf{x}') }{dw_i} = \sigma'(c(\mathbf{x}'))\, a_i(\mathbf{x}')$, we can deduce $a_i(\mathbf{x}) = a_i(\mathbf{x}')$ for all $i$ provided $\sigma'(c(\mathbf{x})) \neq 0$. In that case, from these identical activities one can rebuild identical outputs. Otherwise, $\sigma'(c(\mathbf{x})) = \sigma'(c(\mathbf{x}')) = 0$, which is not possible with strictly monotonic activation functions, such as tanh or sigmoid. For ReLU, $\sigma'(c(\mathbf{x})) = 0 \implies \sigma(c(\mathbf{x})) = 0$ and thus $f_\theta(\mathbf{x}) = f_\theta(\mathbf{x}') = 0$. The same reasoning holds for other activation functions with only one flat piece (such as the ReLU negative part), \ie for which the set $\sigma(\sigma'^{-1}(\{0\}))$ is a singleton. \end{proof} \subsection{Proof of Corollary \ref{alphasim}} \begin{corollary} Under the same assumptions, for any inputs $\mathbf{x}$ and $\mathbf{x}'$, $$\begin{array}{crcl} & k^C_\theta(\mathbf{x},\mathbf{x}') = 1 & \implies & \nabla_{\!\theta} f_\theta(\mathbf{x}) = \nabla_{\!\theta} f_\theta(\mathbf{x}') \,, \vspace{1mm} \\ \mathrm{hence} & k^C_\theta(\mathbf{x},\mathbf{x}') = 1 & \implies & f_\theta(\mathbf{x}) = f_\theta(\mathbf{x}') \,. \\ \end{array}$$ \end{corollary} \begin{proof}$\;\;$ $k^C_\theta(\mathbf{x},\mathbf{x}') = 1$ means $\frac{ \nabla_{\!\theta} f_\theta(\mathbf{x}) }{ \| \nabla_{\!\theta} f_\theta(\mathbf{x}) \|} \cdot \frac{ \nabla_{\!\theta} f_\theta(\mathbf{x}') }{ \| \nabla_{\!\theta} f_\theta(\mathbf{x}') \|} = 1$, which implies $\exists\, \alpha \in \mathbb{R}^*, \; \nabla_{\!\theta} f_\theta(\mathbf{x}) = \alpha\, \nabla_{\!\theta} f_\theta(\mathbf{x}')$. We need to show that $\alpha = 1$.
Under the assumptions of Theorem \ref{basicnet}, following its proof: \begin{itemize} \item either the last layer is linear, the output is of the form $f_\theta(\mathbf{x}) = \sum_i w_i a_i(\mathbf{x}) + b$, and then $\nabla_{\!b} f_\theta(\mathbf{x}) = \alpha \nabla_{\!b} f_\theta(\mathbf{x}')$ while $\frac{d f_\theta(\mathbf{x}) }{db} = 1$ and $\frac{d f_\theta(\mathbf{x}') }{db} =1$, hence $\alpha = 1$; \item or the output is of the form $f_\theta(\mathbf{x}) = \sigma(c(\mathbf{x}))$ with $c(\mathbf{x}) = \sum_i w_i a_i(\mathbf{x}) + b$, and then $\sigma'(c(\mathbf{x})) = \nabla_{\!b} f_\theta(\mathbf{x}) = \alpha \nabla_{\!b} f_\theta(\mathbf{x}') = \alpha \,\sigma'(c(\mathbf{x}'))$, while, for any $i$, $\sigma'(c(\mathbf{x}))\, a_i(\mathbf{x}) = \frac{d f_\theta(\mathbf{x}) }{dw_i} = \alpha \frac{d f_\theta(\mathbf{x}') }{dw_i} = \alpha \,\sigma'(c(\mathbf{x}'))\, a_i(\mathbf{x}')$. Thus, supposing $\sigma'(c(\mathbf{x})) \neq 0$, we obtain $a_i(\mathbf{x}) = a_i(\mathbf{x}')\; \forall i$, and thus we can rebuild from the activities $c(\mathbf{x}) = c(\mathbf{x}')$, from which $\sigma'(c(\mathbf{x})) = \sigma'(c(\mathbf{x}'))$ and thus $\alpha = 1$. Otherwise, $\sigma'(c(\mathbf{x})) = \sigma'(c(\mathbf{x}')) = 0$ and the two full gradients $\nabla_{\!\theta} f_\theta(\mathbf{x})$ and $\nabla_{\!\theta} f_\theta(\mathbf{x}')$ are 0 and thus equal. \end{itemize} The conditions for $\;k^C_\theta(\mathbf{x},\mathbf{x}') = 1 \implies \nabla_{\!\theta} f_\theta(\mathbf{x}) = \nabla_{\!\theta} f_\theta(\mathbf{x}')\;$ to hold are actually much weaker: it is sufficient that in the whole network architecture there exists \emph{one} useful neuron (in the sense of the next paragraph) of that type (so-called \emph{linear} but actually affine).
\end{proof} \subsection{Proof of Theorem \ref{basicnet2}} \begin{theorem} For any real-valued neural network $f_\theta$ without parameter sharing, if $\nabla_{\!\theta} f_\theta(\mathbf{x}) = \nabla_{\!\theta} f_\theta(\mathbf{x}')$ for two inputs $\mathbf{x}, \mathbf{x}'$, then all useful activities computed when processing $\mathbf{x}$ are equal to the ones obtained when processing $\mathbf{x}'$. \end{theorem} We name \emph{useful} activities all activities whose variation would have an impact on the output, \ie all the ones satisfying $\frac{d f_\theta(\mathbf{x}) }{da_i} \neq 0$. This condition is typically not satisfied when the activity is multiplied by 0, \ie $w_i = 0$, or when it is negative and followed by a ReLU, or when all its contributions to the output annihilate together (\eg, a sum of two neurons with opposite weights: $f_\theta(\mathbf{x}) = \sigma( a_i(\mathbf{x}) ) - \sigma( a_i(\mathbf{x}) )$). \begin{proof} Let $a_i(\mathbf{x})$ be a useful activity (for $\mathbf{x}$). It is fed to at least one useful neuron, whose pre-activation output is of the form $c(\mathbf{x}) = \sum_i w_i a_i(\mathbf{x}) + b$. Then $\frac{d f_\theta(\mathbf{x}) }{db} = \frac{d f_\theta(\mathbf{x}) }{dc} \neq 0$ (the output of the neuron is useful), and $\frac{d f_\theta(\mathbf{x}) }{dw_i} = \frac{d f_\theta(\mathbf{x}) }{db} a_i(\mathbf{x})$. From the gradient equality, $a_i(\mathbf{x}) = \frac{d f_\theta(\mathbf{x}) }{dw_i} / \frac{d f_\theta(\mathbf{x}) }{db} = \frac{d f_\theta(\mathbf{x}') }{dw_i} / \frac{d f_\theta(\mathbf{x}') }{db} = a_i(\mathbf{x}')$. \end{proof} \section{Higher output dimension} \label{sec:high2} We expand here all the mathematical aspects of the homonymous section of the article. \subsection{Derivation} Let us now study the case where $f_\theta(\mathbf{x})$ is a vector in $\mathbb{R}^d$ with $d > 1$. 
The optimal parameter change $\delta \theta$ to push $f_\theta(\mathbf{x})$ in a direction $\mathbf{v}$ (with a force $\varepsilon$) is less straightforward to obtain. First, one can define as many gradients as output coordinates: $\nabla_{\!\theta} f_\theta^i(\mathbf{x})$, for $i \in \llbracket 1, d \rrbracket$. This family of gradients can be shown to be linearly independent, unless the architecture of the network is specifically built to prevent it. If for instance each output coordinate has its own bias parameter, \ie writes in the form $f_\theta^i(\mathbf{x}) = b_i + g_\theta(\mathbf{x})$ or $\sigma( b_i + g_\theta(\mathbf{x}) )$ with a strictly monotonic activation function $\sigma$, then the derivative \wrt $b_i$ will be 1 (or $\sigma'$) only in the $i$-th gradient and 0 in the other ones. Thus the $j$-th gradient contains in particular the subvector $(\frac{df^j}{db_i})_i = (\delta_{i=j})_i$, and the gradients are consequently independent. In the case where all coordinates depend on all biases, but not identically, as with a softmax, the argument remains valid. Any parameter variation $\delta \theta \in \mathbb{R}^p$ can then be uniquely decomposed as: $$\delta\theta = \sum_{i=1}^d \alpha_i \nabla f_\theta^i(\mathbf{x}) \;+\;\gamma $$ where $\alpha_i \in \mathbb{R}$ and where $\gamma \in \mathbb{R}^p$ is orthogonal to all coordinate gradients. This parameter variation induces an output variation: $$ f_{\theta + \delta \theta} (\mathbf{x}) - f_\theta(\mathbf{x}) = \nabla_{\!\theta} f_\theta(\mathbf{x}) \; \delta \theta + O(\|\delta\theta\|^2)$$ $$= \left( \sum_i \alpha_i \nabla_{\!\theta} f^i_\theta(\mathbf{x}) \cdot \nabla f_\theta^j(\mathbf{x})\right)_j + 0 + O(\|\delta\theta\|^2)$$ $$= C \alpha + O(\|\alpha\|^2)$$ where $C$ is the correlation matrix of the gradients: $C_{ij} = \nabla_{\!\theta} f^i_\theta(\mathbf{x}) \cdot \nabla f_\theta^j(\mathbf{x})$.
It turns out that $C$ is invertible: $$C \alpha = 0 \implies \alpha^T C \alpha = 0 \implies \alpha^T\, \nabla_{\!\theta} f_\theta(\mathbf{x})\, \nabla_{\!\theta} f_\theta(\mathbf{x})^T \alpha = 0$$ $$\implies \|\nabla_{\!\theta} f_\theta(\mathbf{x})^T \alpha\|^2 = 0 \implies \sum_i \alpha_i \nabla f_\theta^i(\mathbf{x}) = 0$$ $\implies \alpha = 0$ as the $\nabla_{\!\theta} f^i_\theta(\mathbf{x})$ are linearly independent. Thus, for a desired output move in the direction $\mathbf{v}$ with amplitude $\varepsilon$, \ie $f_{\theta + \delta \theta} (\mathbf{x}) - f_\theta(\mathbf{x}) = \varepsilon \mathbf{v}$, one can compute the associated linear combination $\alpha = \varepsilon\, C^{-1}\mathbf{v}$ and thus the smallest associated parameter change $\delta\theta = \sum_i \alpha_i \nabla f_\theta^i(\mathbf{x})$. The output variation induced at any other point $\mathbf{x}'$ by this parameter change is then: $$f_{\theta + \delta \theta} (\mathbf{x}') - f_\theta(\mathbf{x}') = \left( \nabla_{\!\theta} f^i_\theta(\mathbf{x}') \cdot \delta \theta \right)_i + O(\|\delta\theta\|^2)$$ $$ = \left( \sum_j \alpha_j \nabla_{\!\theta} f^i_\theta(\mathbf{x}') \cdot \nabla_{\!\theta} f^j_\theta(\mathbf{x}) \right)_i + O(\|\delta\theta\|^2).$$ \begin{equation} \label{eq:multidim} = \varepsilon \, K_\theta(\mathbf{x}',\mathbf{x})\, C_\theta(\mathbf{x})^{-1}\, \mathbf{v} \, +\, O(\varepsilon^2) \end{equation} where the $d \times d$ kernel matrix $K_\theta(\mathbf{x},\mathbf{x}')$ is defined by $K^{ij}_\theta(\mathbf{x},\mathbf{x}') = \nabla_{\!\theta} f^i_\theta(\mathbf{x}) \cdot \nabla_{\!\theta} f^j_\theta(\mathbf{x}')$, and where the matrix $C_\theta(\mathbf{x}) = K_\theta(\mathbf{x},\mathbf{x})$ is the previously defined self-correlation matrix $C$. Its role is equivalent to that of the normalization by $\|\nabla_{\!\theta} f_\theta(\mathbf{x})\|^2$ in the 1D case, in addition to decorrelating the gradients.
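This computation can be illustrated numerically. In the sketch below (all names are ours, and random matrices stand in for the $d \times p$ per-coordinate gradient matrices), we form $C$, solve for $\alpha$, and check the induced first-order output variations at $\mathbf{x}$ and $\mathbf{x}'$:

```python
import numpy as np

# Minimal numerical sketch of the derivation above (names are ours).
# G_x and G_xp play the role of the d x p gradient matrices at two
# inputs x and x' (rows = per-coordinate gradients grad_theta f^i).
rng = np.random.default_rng(0)
d, p = 2, 50
G_x = rng.standard_normal((d, p))   # rows: grad_theta f^i(x)
G_xp = rng.standard_normal((d, p))  # rows: grad_theta f^i(x')

C = G_x @ G_x.T    # self-correlation matrix C_ij = grad f^i . grad f^j
K = G_xp @ G_x.T   # kernel matrix K(x', x)

v = np.array([1.0, 0.0])  # desired output direction at x
eps = 1e-3                # desired amplitude
alpha = eps * np.linalg.solve(C, v)
delta_theta = alpha @ G_x  # smallest parameter change moving f(x) by eps*v

# First-order output variation induced at x and at x':
move_x = G_x @ delta_theta    # equals eps * v by construction
move_xp = G_xp @ delta_theta  # equals eps * K(x',x) C^{-1} v
```

In a real network the rows of $G$ would be obtained by one backward pass per output coordinate; here the point is only to verify the linear algebra of the derivation.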
The interpretation of (\ref{eq:multidim}) is that if one moves the output for point $\mathbf{x}$ by $\mathbf{v}$, then the output for point $\mathbf{x}'$ will also be moved, by $M \mathbf{v}$, with $M = K_\theta(\mathbf{x}',\mathbf{x})\, K_\theta(\mathbf{x},\mathbf{x})^{-1}$. Note that these matrices $M$ or $K$ are only $d \times d$ where $d$ is the output dimension. They are thus generally small and easy to manipulate or invert. \subsection{Normalized cross-correlation matrix} The normalized version of the kernel (\ref{eq:multidim}) is: \begin{equation} \label{eq:multidimkern} K_\theta^C(\mathbf{x},\mathbf{x}') \;=\; C_\theta(\mathbf{x})^{-1/2}\; K_\theta(\mathbf{x},\mathbf{x}')\; C_\theta(\mathbf{x}')^{-1/2} \end{equation} which is symmetric in the sense that $K_\theta^C(\mathbf{x}',\mathbf{x}) = K_\theta^C(\mathbf{x},\mathbf{x}')^T$. A matrix $K_\theta^C(\mathbf{x},\mathbf{x}')$ with small coefficients means that $\mathbf{x}$ and $\mathbf{x}'$ are relatively independent, from a neural network point of view (moves at $\mathbf{x}$ won't be transferred to $\mathbf{x}'$). Conversely, the highest possible dependency is $K_\theta^C(\mathbf{x},\mathbf{x}) = \mathrm{Id}$. To study properties of this similarity measure, note that $K_\theta^C(\mathbf{x},\mathbf{x}') = (G^N_\mathbf{x})^T\, G^N_{\mathbf{x}'}$ with $G^N_\mathbf{x} = G_\mathbf{x} (G_\mathbf{x}^T G_\mathbf{x})^{-1/2}$, where $G_\mathbf{x} = \nabla_{\!\theta} f(\mathbf{x})$ : it is the product of normalized, decorrelated versions of the gradient. Indeed, at any point $\mathbf{x}$, the normalized gradient matrix $G^N_\mathbf{x}$ satisfies: $(G^N_\mathbf{x})^T\, G^N_{\mathbf{x}} = K_\theta^C(\mathbf{x},\mathbf{x}) = K_\theta(\mathbf{x},\mathbf{x})^{-1/2} K_\theta(\mathbf{x},\mathbf{x}) K_\theta(\mathbf{x},\mathbf{x})^{-1/2} = \mathrm{Id}$ and consequently $G^N_\mathbf{x}$ can be seen as an orthonormal family of vectors $G^{N,i}_\mathbf{x}$.
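The normalized kernel and its self-similarity and symmetry properties can be checked numerically with the following sketch (our names; random matrices stand in for the $d \times p$ gradient matrices):

```python
import numpy as np

def inv_sqrt(C):
    """Inverse matrix square root of a symmetric positive-definite
    matrix, via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def K_C(G_a, G_b):
    """Normalized similarity matrix K^C between two points, given the
    d x p gradient matrices at those points (illustrative sketch)."""
    K = G_a @ G_b.T  # K^{ij} = grad f^i(a) . grad f^j(b)
    return inv_sqrt(G_a @ G_a.T) @ K @ inv_sqrt(G_b @ G_b.T)

rng = np.random.default_rng(1)
d, p = 2, 40
G_x = rng.standard_normal((d, p))
G_xp = rng.standard_normal((d, p))

M = K_C(G_x, G_xp)
```

The checks below confirm the self-similarity $K^C(\mathbf{x},\mathbf{x}) = \mathrm{Id}$, the transpose symmetry, and the Frobenius norm bound $\sqrt{d}$.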
The $L^2$ (Frobenius) norm of the ortho-normalized gradient $G^N_\mathbf{x}$ is thus: $$\big\|G^N_\mathbf{x}\big\|^2_F = \mathrm{Tr}((G^N_\mathbf{x})^T\, G^N_{\mathbf{x}}) = \mathrm{Tr}(\mathrm{Id}) = d \;.$$ At point $\mathbf{x}'$, $G^N_{\mathbf{x}'}$ is also an orthonormal family, but possibly arranged differently or generating a different subspace of $\mathbb{R}^p$. If $G^N_{\mathbf{x}}$ and $G^N_{\mathbf{x}'}$ generate the same subspace, then their product $(G^N_\mathbf{x})^T\, G^N_{\mathbf{x}'}$ is an orthogonal matrix $Q$ (change of basis) and its $L^2$ (Frobenius) norm is then $\big\|Q\big\|^2_F = \mathrm{Tr}(Q^T Q) = \mathrm{Tr}(\mathrm{Id}) = d$. Otherwise, $(G^N_\mathbf{x})^T\, G^N_{\mathbf{x}'}$ can be seen as a projection from one subspace to another one, each vector $G^{N,j}_{\mathbf{x}'}$ is projected onto the ortho-normal family $(G^{N,i}_\mathbf{x})_i$, and as a projection decreases the Euclidean norm, $\sum_i \left( G^{N,i}_\mathbf{x} \cdot G^{N,j}_{\mathbf{x}'} \right)^2 \leqslant \big\|G^{N,j}_{\mathbf{x}'}\big\|^2 = 1$. Thus: $$\big\|K_\theta^C(\mathbf{x},\mathbf{x}')\big\|_F = \sqrt{\sum_{ij} \left( G^{N,i}_\mathbf{x} \cdot G^{N,j}_{\mathbf{x}'} \right)^2} \leqslant \sqrt{d} \, .$$ Moreover, any coefficient of the kernel matrix satisfies: $$\left| K_\theta^{C, ij}(\mathbf{x},\mathbf{x}') \right| = \left| G^{N,i}_\mathbf{x} \cdot G^{N,j}_{\mathbf{x}'} \right| \leqslant \big\|G^{N,i}_\mathbf{x}\big\|_2 \, \big\|G^{N,j}_{\mathbf{x}'}\big\|_2 = 1$$ as each vector $G^{N,i}_\mathbf{x}$ is unit-norm. 
This implies in particular that the trace is bounded: $$-d \;\leqslant\; \mathrm{Tr}(K_\theta^C(\mathbf{x},\mathbf{x}')) \;\leqslant d.$$ To sum up, the similarity matrix $K_\theta^C(\mathbf{x},\mathbf{x}')$ satisfies the following properties: \begin{itemize} \setlength\itemsep{0em} \setlength{\parskip}{1pt} \item its coefficients are bounded, in $[-1,1]$ \item its trace is at most $d$ \item its (Frobenius) norm is at most $\sqrt{d}$ \item self-similarity is identity: $\forall \mathbf{x}, \,\;K_\theta^C(\mathbf{x},\mathbf{x}) = \mathrm{Id}$ \item the kernel is symmetric, in the sense that $K_\theta^C(\mathbf{x}',\mathbf{x}) = K_\theta^C(\mathbf{x},\mathbf{x}')^T$. \end{itemize} \subsection{Similarity in a single value} Note that when the trace is close to its maximal value $d$, the diagonal coefficients are close to 1, and their contribution to the Frobenius norm squared is close to $d$. Therefore, all non-diagonal coefficients are close to 0, and the matrix is close to $\mathrm{Id}$. And reciprocally, a matrix close to $\mathrm{Id}$ has a trace close to $d$. Thus, two related ways to quantify similarity in a single real value in $[-1,1]$ appear: \begin{itemize} \setlength\itemsep{1pt} \setlength{\parskip}{1pt} \item the distance to the identity $D = \big\| K_\theta^C(\mathbf{x},\mathbf{x}') - \mathrm{Id}\big\|_F$, which can be turned into a similarity as $1 - \frac{1}{\sqrt{d}} D$ or $1 - \frac{1}{2d} D^2$, since $D \in [0, 2\sqrt{d}]$ \item the normalized trace: $\frac{1}{d} \,\mathrm{Tr}\, K^C_\theta(\mathbf{x},\mathbf{x}')$, which is also the alignment with the identity: $\frac{1}{d} K_\theta^C(\mathbf{x},\mathbf{x}') \cdot_F \mathrm{Id}$, where $\cdot_F$ denotes the Frobenius inner product (\ie coefficient by coefficient). 
\end{itemize} The link between these two quantities can be made explicit by developing: $$\big\| K_\theta^C(\mathbf{x},\mathbf{x}') - \mathrm{Id} \big\|^2_F = \big\| K_\theta^C(\mathbf{x},\mathbf{x}') \big\|^2_F - 2 \mathrm{Tr}(K_\theta^C(\mathbf{x},\mathbf{x}')) + d$$ which rewrites as: $$\left(1 - \frac{D^2}{2d} \right) = \frac{\mathrm{Tr}(K_\theta^C(\mathbf{x},\mathbf{x}')) }{d} + \frac{1}{2}\left( 1 - \frac{\big\| K_\theta^C(\mathbf{x},\mathbf{x}') \big\|^2_F}{d} \right).$$ The factor $\left( 1 - \big\| K_\theta^C(\mathbf{x},\mathbf{x}') \big\|^2_F / d \right)$ in the last term lies in $[0,1]$ and measures the mismatch between the vector subspaces generated by the two families of gradients $\left(\nabla_{\!\theta} f^i(\mathbf{x})\right)_i$ and $\left(\nabla_{\!\theta} f^i(\mathbf{x}')\right)_i$. It is 1 when $f_\theta(\mathbf{x})$ and $f_\theta(\mathbf{x}')$ can be moved independently, and 0 when they move jointly (though not necessarily in the same direction). As our two similarity measures $1 - \frac{D^2}{2d}$ and $\frac{1}{d}\mathrm{Tr}(K_\theta^C(\mathbf{x},\mathbf{x}'))$ have the same optimum ($\mathrm{Id}$) and are closely related, in the sequel we will focus on the second one and define: \begin{equation} \label{eq:multidimkernsum} k_\theta^C(\mathbf{x},\mathbf{x}') \;=\; \frac{1}{d} \,\mathrm{Tr}\, K^C_\theta(\mathbf{x},\mathbf{x}') \;. \end{equation} \subsection{Metrics on output: rotation-invariance} Similarity in $\mathbb{R}^d$, to compare $\mathbf{v}$ and $\mathbf{v}' = M\mathbf{v}$, might be richer than just checking whether the vectors are equal or close in $L^2$ norm. For instance, one could quotient the output space by the group of rotations, in order to express a known or desired equivariance of the network to rotations. If the output is the predicted motion of some object described in the input, one could wish indeed that if the input object is rotated by an angle $\phi$, then the output should be rotated as well with the same angle.
In that case, given two inputs $\mathbf{x}$ and $\mathbf{x}'$ and associated output variations $\mathbf{v}$ and $\mathbf{v}'$, without knowing the rotation angle if applicable, one could consider all possible rotated versions $R_\phi \mathbf{v}' = R_\phi M \mathbf{v}$, where $R_\phi$ is the rotation matrix with angle $\phi$, and pick the best angle $\phi$ that maximizes the alignment $\mathbf{v} \cdot R_\phi M \mathbf{v}$, \ie such that $R_\phi M$ is the closest to the $d \times d$ identity matrix. This can be computed easily in closed form, for instance in the 2-dimensional case as follows. The $2 \times 2$ matrix of interest (Eq.~\ref{eq:multidimkern}) can be written as the product of two $p \times 2$ matrices of the form $G (G^T G)^{-1/2}$, where $G$ is the matrix containing the gradient of all coordinates. Rotating the coordinates of $G$ amounts to considering $G R_\phi (R_\phi^TG^T GR_\phi)^{-1/2} = G (G^T G)^{-1/2} R_\phi$ instead. Thus the effect of rotation is just right-multiplying our $2 \times 2$ matrix $M$ of interest (Eq.~\ref{eq:multidimkern}) by $R_\phi$. We are thus interested in getting $M R_\phi$ as close as possible to the $2 \times 2$ identity. For our trace-based similarity kernel (Eq.~\ref{eq:multidimkernsum}), this amounts to maximizing $\mathrm{Tr}(MR_\phi) = \cos(\phi)(M_{11}+M_{22}) + \sin(\phi)(M_{12}-M_{21})$ \wrt $\phi$, whose optimal value is: \begin{align*} k^{C, \mathrm{rot}}_\theta(\mathbf{x},\mathbf{x}') & = \frac{1}{2} \sqrt{(M_{11}+M_{22})^2 + (M_{12}-M_{21})^2}\\ & = \frac{1}{2} \sqrt{ \big\|M\big\|_F^2 + 2 \det M} \end{align*} where $M = K_\theta^C(\mathbf{x},\mathbf{x}')$. This quantity is indeed rotation-invariant, as the Frobenius norm and the determinant do not change upon rotations. Note that one could also consider instead the subspace match $\frac{1}{d}\big\|M\big\|_F^2 $. The main difference between the two is that the first one penalizes mirror symmetries (through $\det M$) while the second one does not.
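The rotation-invariance of this quantity can be verified numerically (a minimal sketch with an arbitrary $2 \times 2$ matrix $M$; names are ours):

```python
import numpy as np

def k_rot(M):
    """Rotation-invariant similarity for a 2x2 matrix M = K^C(x, x'),
    i.e. (1/2) * sqrt(||M||_F^2 + 2 det M)."""
    return 0.5 * np.sqrt(np.linalg.norm(M, "fro") ** 2
                         + 2.0 * np.linalg.det(M))

def rot(phi):
    """2x2 rotation matrix with angle phi."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

M = np.array([[0.9, -0.2],
              [0.3, 0.8]])
```

The assertions below check the closed form $\frac{1}{2}\sqrt{(M_{11}+M_{22})^2 + (M_{12}-M_{21})^2}$, the maximal value 1 at $M = \mathrm{Id}$, and the invariance of $k^{C,\mathrm{rot}}$ under right-multiplication of $M$ by a rotation.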
Note that other metrics are possible in the output space. For instance, the loss metric quantifies the norm of a move $\mathbf{v}$ by its impact on the loss $\left.\frac{dL(y)}{dy}\right|_{f_\theta(\mathbf{x})}(\mathbf{v})$. It has a particular meaning though, and is relevant only if well designed and not noisy, as seen in the remote sensing image registration example. Note also that in such a case the associated similarity would no longer be intrinsic to the neural network, as it depends on the loss. \section{Estimating density} \label{sec:estim2} \subsection{Toy problem} The toy problem used in the paper to test the various estimators for neighbor count estimation consists of predicting a one-dimensional function, namely a sinusoid (such as in Fig.\ref{fig:toy_problem_2d} (a)). We can easily change the difficulty of the problem by using different values of the frequency. The neural network would perform this mapping: $y = \sin(2 \pi f x), x \in [0, 1]$. A problem arises however when estimating the number of neighbors because the input space has 2 boundaries at $x=0$ and $x=1$, leading to fewer neighbors when $x$ approaches either of those boundaries. To avoid this problem, we transform the input space to a 2D circle. Namely, the task is now $y = \sin(2\pi f \alpha(x)), x \in \{(\cos(2\pi\alpha), \sin(2\pi\alpha)), \alpha \in [0, 1]\}$, with the input space having no boundaries. The dataset is generated with $n=2048$ input points. The network used is fully-connected and has 5 hidden layers of 64 neurons trained with the Adam optimizer for 80 epochs with a base learning rate of $10^{-4}$. An experiment consists of training the network on a dataset generated with a specific frequency $f$. Each experiment was repeated 5 times, in order to take the median of every result to limit the variance due to the stochasticity of neural network training. We can see in Fig.\ref{fig:toy_problem_2d} (b) the proposed soft estimate $k_\theta^C$ for each input point (projected to 1D).
As expected we observe that the number of neighbors drops when the curvature is high: the objective changes quickly and the network adjusts to better distinguish inputs in places of higher curvature. \begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\linewidth]{toy_sinusoid-eps-converted-to.pdf} \caption{Function to predict.} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\linewidth]{toy_neighbors_soft-eps-converted-to.pdf} \caption{Neighbors soft estimate.} \end{subfigure} \caption{Toy problem with the frequency f = 2.} \label{fig:toy_problem_2d} \end{figure} \begin{figure} \centering \includegraphics[width=0.4\linewidth,trim={100 25 40 60},clip]{toy_neighbors_soft_vs_curvature_vs_alpha_0-eps-converted-to.pdf} \includegraphics[width=0.5\linewidth,trim={50 25 0 0},clip]{toy_neighbors_soft_vs_curvature_vs_alpha_2-eps-converted-to.pdf} \includegraphics[width=0.5\linewidth,trim={50 0 0 60},clip]{toy_neighbors_soft_vs_curvature_vs_alpha_1-eps-converted-to.pdf} \caption{3D plot of neighbors soft with varying frequency. Script and data to plot interactively in attached files. Run the bash script "main\_plot\_exps.paper.sh" to reproduce this exact figure. Alternatively use "main\_plot\_exps.py" with arguments of your choosing to plot different values (run "python main\_plot\_exps.py -h" to see possible arguments).} \label{fig:toy_problem_3d} \end{figure} \subsection{Other possible uses} \paragraph{Density homogeneity as an optimization criterion} The estimations above are meant to be done post-training. This said, one could control density explicitly, by computing the number of neighbors for all points, and asking it to be in a reasonable range, or in a reasonable proportion $q$ of the dataset size $\mathcal{D}$, by adding \eg to the loss $\sum_i \left( \frac{ N_S(\mathbf{x}_i) }{ \mathcal{D} } - q \right)^2$. 
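Such a penalty can be sketched as follows. This is an illustrative instantiation (our names), assuming the soft neighbor count $N_S(\mathbf{x}_i)$ is obtained by summing the non-negative kernel values over the dataset, which is one possible choice rather than necessarily the paper's exact estimator:

```python
import numpy as np

def soft_neighbor_counts(S):
    """Soft neighbor count per point, from a matrix S of pairwise
    similarities k^C(x_i, x_j): row-sum of non-negative values
    (illustrative choice)."""
    return np.maximum(S, 0.0).sum(axis=1)

def homogeneity_penalty(S, q):
    """Penalize deviation of the neighbor proportion from a target q,
    i.e. sum_i (N_S(x_i)/n - q)^2 for a dataset of size n."""
    n = S.shape[0]
    return float(np.sum((soft_neighbor_counts(S) / n - q) ** 2))

# Usage example: with no off-diagonal similarity, each point has
# exactly one neighbor (itself).
counts = soft_neighbor_counts(np.eye(4))
```

In practice $S$ would be recomputed (or approximated on mini-batches) during training, since the kernel depends on the current parameters $\theta$.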
Online learning could also make use of such tools, to sample sparsely-populated areas first, where uncertainty is higher. \section{Enforcing similarity} We give here a few more details on the homonymous section of the paper. \subsection{Complexity} A gradient descent step on this quantity for a given pair $(\mathbf{x},\mathbf{x}')$ (in a mini-batch approach, \eg) requires the computation of the gradient $\nabla_\theta k^C_\theta(\mathbf{x},\mathbf{x}') = \nabla_\theta \left( \nabla_\theta f_\theta(\mathbf{x}) \cdot \nabla_\theta f_\theta(\mathbf{x}') \right)$. While a naive approach would require the computation of a second derivative, \ie a matrix of size $p \times p$ where $p$ is the number of parameters, it is actually possible to compute $\nabla_\theta k^C_\theta(\mathbf{x},\mathbf{x}') = \nabla_\theta \sum_i \frac{d f_\theta(\mathbf{x})}{d\theta_i} \frac{d f_\theta(\mathbf{x}')}{d\theta_i}$ in linear time $O(p)$, taking advantage of the serial structure of the computational graph. The framework enabling such computations is already available on common deep learning platforms, initially intended for the computation of $\nabla_\mathbf{x} \nabla_\theta f_\theta(\mathbf{x})$ for some variations on GANs. \subsection{Group invariance} \label{sec:group2} Dataset augmentation is a standard machine learning technique; when augmenting the dataset by a group transformation of the input (\eg, translation, rotation...) or by small intensity noise, new samples are artificially created to increase the dataset size, in the hope of obtaining invariance to such transformations. With the technique above, one can ask the network to consider orbits of samples as similar.
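As an illustration of this idea, the sketch below (our construction, not the repository code) computes the similarity kernel between a sample and an augmented version of it for a toy model whose parameter gradient is available in closed form, and uses $1 - k^C$ as a penalty:

```python
import numpy as np

# Sketch (our construction): for a tiny model f(x) = tanh(w . x + b),
# the parameter gradient is available in closed form, so the 1D
# similarity kernel k^C (cosine of parameter gradients) between a
# sample and an augmented version of it can be computed directly.

def grad_theta(w, b, x):
    """d f / d(w, b) for f(x) = tanh(w . x + b)."""
    s = 1.0 - np.tanh(w @ x + b) ** 2  # tanh'(w . x + b)
    return s * np.append(x, 1.0)       # [df/dw_1, ..., df/dw_n, df/db]

def k_cos(g1, g2):
    """Cosine similarity between two parameter gradients."""
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))

w, b = np.array([0.5, -0.3]), 0.1
x = np.array([1.0, 2.0])
x_aug = x + np.array([0.01, -0.01])  # e.g. a small perturbation of x

# Penalty encouraging the model to treat x and x_aug as similar:
penalty = 1.0 - k_cos(grad_theta(w, b, x), grad_theta(w, b, x_aug))
```

For a real network the two gradients would come from backpropagation, and the penalty would be differentiated through with the double-backpropagation machinery mentioned above.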
Furthermore, if the group infinitesimal elements are expressible as differential operators $e_k$, one could require directly, for all $\mathbf{x}$, invariance in the tangent plane in the directions of these differential operators: $$\| \partial_\mathbf{x} \nabla_\theta f(\mathbf{x}) \cdot e_k(\mathbf{x}) \|^2$$ which is the limit of $\frac{1}{\varepsilon^2}\| \nabla_\theta f_\theta(\mathbf{x}) - \nabla_\theta f_\theta\left(\mathbf{x} + \varepsilon e_k(\mathbf{x})\right) \|^2$ when $\varepsilon \to 0$. For instance, in the case of image translations, the operator is $e: \mathbf{x} \mapsto \nabla_x \mathbf{x}(x)$ where $x$ denotes spatial coordinates in the image $\mathbf{x}$, as $\mathbf{x}(x+\tau) = \mathbf{x}(x) + \tau \cdot \nabla_x \mathbf{x}(x) + O(\tau^2)$. This is however not recommended, as representing a translation with such a spatially-local operator does not take into account the spatially-irregular nature of image intensities. Note that, in contrast to standard robustification techniques considering regularizers such as $\sum_{\mathbf{x}} \| \nabla_\theta f_\theta(\mathbf{x})\|^2$, we do not ask gradients to be always small, but to be smooth, and in certain directions only. \subsection{Dynamics of learning: Experimental details} The results in figure 6 show the average and standard deviation over 60 runs for each curve. The x-axis is the number of batches the network is trained on (with a batch size of 16). The y-axis is the accuracy metric on the whole validation set. The network architecture is made of 2 convolutional layers (with a kernel size of 5), 2 linear layers and uses PReLU non-linearities. We used Adam with a learning rate of $10^{-3}$ and no weight decay. We tested other architectures on MNIST: one with residual blocks, one deeper (8 convolutions) and one with tanh non-linearities. Similar results were observed in all cases. Additional tests were performed on CIFAR10 with a VGG architecture and only negligible benefits were observed.
\section{Noisy Map Alignment Analysis} \label{sec:denoise2} The task here is to align maps in the form of a list of polygons with remote sensing images while using only the available noisy annotations. We analyze the model developed in a previous work \cite{anonymous}. Specifically, the model is trained in a multiple-rounds training scheme to iteratively align the available noisy annotations, which provides a better ground truth used to train a better model in the next round. An open question is why multiple rounds are needed in this noisy supervision setting, and why all the noise cannot be removed in a single training step. More specifically, the model is made out of 4 neural networks. Each is trained on a different resolution (in terms of ground pixel size), and they are applied in a multi-resolution pyramidal manner. In all our experiments we only analyzed the networks trained for a ground pixel size 4 times smaller than the reference ground pixel size, which is $0.3\,\mathrm{m}$. We used the already-trained networks for each round, of which there are 3. The network was trained with small patches of (image, misaligned map) pairs from images of the Inria dataset \cite{maggiori2017dataset} and the Bradbury dataset \cite{bradbury_buildings_roads_height_dataset}. Ideally we would want to compute the similarities of every possible pair of inputs, with a small patch size of $124$ px. However, given that a typical image of the training dataset is $1250 \times 1250$ px (after rescaling) and there are a few hundred of them (328 from the Inria dataset, only counting images where OSM annotations \cite{osm} are available), this would result in 32800 patches. The resulting number of similarities to compute would be around half a billion. As the network has a few million parameters and the output is 2D, each computation of similarity takes around $0.5$s. To make any computation feasible, we first sample 10 patches per image from the 328 of the Inria dataset.
Those patches are chosen at random, as long as at least one building lies fully within the patch. As some images have rather sparse buildings, some images yield fewer than 10 patches. We thus obtain 3045 patches representing the dataset. The number of similarities to compute is then close to 5 million. To study all patches globally, we can use the soft neighbors estimator $k_\theta^C$, which has linear complexity and allows us to compute the number of neighbors for all 3045 patches in under an hour. However, it is also interesting to go into more detail and compute similarities for some input pairs. We therefore further reduce the number of pairs by estimating all similarities only for a very small number of patches, for example 10. This results in a $10\times3045$ similarity matrix. \subsection{Soft estimate on a sampling of the training dataset} In this section we present the results of computing the soft neighbors estimator $k_\theta^C$ on the 3045 sampled input patches. We obtain results for the 3 networks of the 3 rounds of the noisy-supervision multi-rounds training scheme. Fig.\ref{fig:overall_hist} shows a histogram of the soft neighbors estimations. It additionally shows representative input patches for each bin of the histogram. Those representative patches are chosen so that their neighbor count is closest to the right edge of that bin. We especially observe that inputs in round 2 have more neighbors than in the other 2 rounds. This particularity of round 2 will be seen throughout the remaining results. It is the round that aligns the annotations the most (see Fig.2 on accuracy cumulative distributions in the paper). Round 3 does not perform any more alignment, which might be the reason why its results differ from those of round 2. \subsection{Similarities on pairs of input patches} This section presents the results of computing similarities between pairs of input patches.
In a first experiment, for every round we chose the 10 patches shown in Fig.\ref{fig:overall_hist} and computed their similarities with all the other 3045 patches. In order to visualize this data, we computed the 10-nearest neighbors in terms of similarity for each of those patches, see Fig.\ref{fig:round_0_overall_hist_k_nearest}, \ref{fig:round_1_overall_hist_k_nearest}, \ref{fig:round_2_overall_hist_k_nearest}. We computed the histogram of similarities as well, see Fig.\ref{fig:overall_hist_individual_hist}. In a second experiment, to better compare between rounds, we used another set of 10 patches, this time the same set for each round. Specifically, we sampled 10 patches from the bloomington22 image of the Inria dataset. As before, we computed the 10-nearest neighbors (Fig.\ref{fig:round_0_bloomington22_k_nearest}, \ref{fig:round_1_bloomington22_k_nearest}, \ref{fig:round_2_bloomington22_k_nearest}) and the histogram of similarities (Fig.\ref{fig:bloomington22_individual_hist}) for a visualization of those measures. Generally speaking, inputs in round 2 have more neighbors and the 10-nearest ones are closer than in other rounds (see Fig.\ref{fig:round_0_overall_hist_k_nearest}, \ref{fig:round_1_overall_hist_k_nearest}, \ref{fig:round_2_overall_hist_k_nearest} and Fig.\ref{fig:round_0_bloomington22_k_nearest}, \ref{fig:round_1_bloomington22_k_nearest}, \ref{fig:round_2_bloomington22_k_nearest}). For each patch, its closest neighbors (those with similarity > 0.8) generally look similar from a human point of view. For example, patches with sparse houses and trees have the same kind of neighbors. The same can be said for patches with parking lots and big roads. Another group consists of patches almost empty of buildings, with a lot of low vegetation. The nearest neighbors of other patches are more difficult to interpret.
In Fig.\ref{fig:overall_hist_individual_hist} and Fig.\ref{fig:bloomington22_individual_hist} we can see that for round 2, the spread of the similarities of the selected patches is smaller and the peaks of the histograms are closer to the right, meaning all patches are closer than in other rounds. Additionally, in Fig.\ref{fig:overall_hist_individual_hist} we can observe that the bottom patch has closer neighbors than the top patch; this is because the top patch corresponds to the left patch in Fig.\ref{fig:overall_hist} and the bottom one corresponds to the right patch in Fig.\ref{fig:overall_hist}. \input{preuves_denoising.tex} \subsection{Data augmentation as a label denoising technique} Data augmentation can be seen as label denoising, as it multiplies the number of neighbors. Indeed, in the infinite sampling limit, where the dataset becomes a probability distribution over all possible images, adding a transformed copy $\mathbf{x}' = T_\phi\, \mathbf{x}$ of a given point $\mathbf{x}$ (\eg rotating it with an angle $\alpha_\phi$ and adding small noise $\varepsilon_\phi$) means adding $(\mathbf{x}', l(\mathbf{x}))$ to the dataset, where $l(\mathbf{x})$ is the desired label for $\mathbf{x}$. But if $(\mathbf{x}', l(\mathbf{x}'))$ was already in the dataset, this amounts to enriching the possible labels for $\mathbf{x}'$. Supposing $T_\phi$ is an invertible transformation parameterized by $\phi$, full data augmentation (\ie for all possible $\phi$, applied on all points $\mathbf{x}$) enriches $\mathbf{x}'$ with all labels $l( T_\phi^{-1}(\mathbf{x}') )$. In the case of i.i.d.~label noise, data augmentation will thus reduce this noise by a factor $\sqrt{\text{number of copies}}$.
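This $\sqrt{\text{number of copies}}$ reduction can be illustrated with a minimal Python simulation sketch; the noise level, the number of copies and the number of trials below are hypothetical values chosen for the demonstration, not taken from our experiments.

```python
import random
import statistics

random.seed(0)
SIGMA = 1.0        # std of the i.i.d. label noise (hypothetical value)
N_COPIES = 25      # augmented copies sharing the same underlying true label
N_TRIALS = 20000

# Each trial: average the noisy labels of N_COPIES augmented copies.
# The residual noise std should drop from SIGMA to ~ SIGMA / sqrt(N_COPIES).
averaged = [
    statistics.fmean(random.gauss(0.0, SIGMA) for _ in range(N_COPIES))
    for _ in range(N_TRIALS)
]
print(statistics.stdev(averaged))  # close to SIGMA / sqrt(N_COPIES) = 0.2
```

With 25 copies, the measured standard deviation of the averaged noise is close to $1/\sqrt{25} = 0.2$, a fivefold reduction.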
\begin{figure} \centering \begin{subfigure}[b]{\textwidth} \includegraphics[width=\linewidth]{netsimilarity_ds_fac_4_round_0_overall_hist-eps-converted-to.pdf} \caption{Round 1} \end{subfigure} \begin{subfigure}[b]{\textwidth} \includegraphics[width=\linewidth]{netsimilarity_ds_fac_4_round_1_overall_hist-eps-converted-to.pdf} \caption{Round 2} \end{subfigure} \begin{subfigure}[b]{\textwidth} \includegraphics[width=\linewidth]{netsimilarity_ds_fac_4_round_2_overall_hist-eps-converted-to.pdf} \caption{Round 3} \end{subfigure} \caption{Histogram of the soft estimate of neighbors on 3045 patches. Horizontal scale is different for each.} \label{fig:overall_hist} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{netsimilarity_ds_fac_4_round_0_from_overall_hist_k_nearest-eps-converted-to.pdf} \caption{\textbf{Round 1}: k-nearest neighbors with k=10. The 10 patches selected correspond to the 10 patches of Fig.\ref{fig:overall_hist} for that round.} \label{fig:round_0_overall_hist_k_nearest} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{netsimilarity_ds_fac_4_round_1_from_overall_hist_k_nearest-eps-converted-to.pdf} \caption{\textbf{Round 2}: k-nearest neighbors with k=10. The 10 patches selected correspond to the 10 patches of Fig.\ref{fig:overall_hist} for that round.} \label{fig:round_1_overall_hist_k_nearest} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{netsimilarity_ds_fac_4_round_2_from_overall_hist_k_nearest-eps-converted-to.pdf} \caption{\textbf{Round 3}: k-nearest neighbors with k=10. 
The 10 patches selected correspond to the 10 patches of Fig.\ref{fig:overall_hist} for that round.} \label{fig:round_2_overall_hist_k_nearest} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \centering \caption{Round 1} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_from_overall_hist_individual_hist_00-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_from_overall_hist_individual_hist_01-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_from_overall_hist_individual_hist_02-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_from_overall_hist_individual_hist_03-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_from_overall_hist_individual_hist_04-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_from_overall_hist_individual_hist_05-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_from_overall_hist_individual_hist_06-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_from_overall_hist_individual_hist_07-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_from_overall_hist_individual_hist_08-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_from_overall_hist_individual_hist_09-eps-converted-to.pdf} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \caption{Round 2} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_from_overall_hist_individual_hist_00-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_from_overall_hist_individual_hist_01-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_from_overall_hist_individual_hist_02-eps-converted-to.pdf} 
\includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_from_overall_hist_individual_hist_03-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_from_overall_hist_individual_hist_04-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_from_overall_hist_individual_hist_05-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_from_overall_hist_individual_hist_06-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_from_overall_hist_individual_hist_07-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_from_overall_hist_individual_hist_08-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_from_overall_hist_individual_hist_09-eps-converted-to.pdf} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \caption{Round 3} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_from_overall_hist_individual_hist_00-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_from_overall_hist_individual_hist_01-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_from_overall_hist_individual_hist_02-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_from_overall_hist_individual_hist_03-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_from_overall_hist_individual_hist_04-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_from_overall_hist_individual_hist_05-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_from_overall_hist_individual_hist_06-eps-converted-to.pdf} 
\includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_from_overall_hist_individual_hist_07-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_from_overall_hist_individual_hist_08-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_from_overall_hist_individual_hist_09-eps-converted-to.pdf} \end{subfigure} \caption{Histograms of similarities shown for the same 10 patches as in Fig.\ref{fig:overall_hist} and Fig.\ref{fig:round_0_overall_hist_k_nearest}, \ref{fig:round_1_overall_hist_k_nearest}, \ref{fig:round_2_overall_hist_k_nearest}.} \label{fig:overall_hist_individual_hist} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{netsimilarity_ds_fac_4_round_0_bloomington22_k_nearest-eps-converted-to.pdf} \caption{\textbf{Round 1}: k-nearest neighbors with k=10. The 10 patches are from the bloomington22 image. Same patch selection across rounds.} \label{fig:round_0_bloomington22_k_nearest} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{netsimilarity_ds_fac_4_round_1_bloomington22_k_nearest-eps-converted-to.pdf} \caption{\textbf{Round 2}: k-nearest neighbors with k=10. The 10 patches are from the bloomington22 image. Same patch selection across rounds.} \label{fig:round_1_bloomington22_k_nearest} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{netsimilarity_ds_fac_4_round_2_bloomington22_k_nearest-eps-converted-to.pdf} \caption{\textbf{Round 3}: k-nearest neighbors with k=10. The 10 patches are from the bloomington22 image.
Same patch selection across rounds.} \label{fig:round_2_bloomington22_k_nearest} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \centering \caption{Round 1} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_bloomington22_individual_hist_00-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_bloomington22_individual_hist_01-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_bloomington22_individual_hist_02-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_bloomington22_individual_hist_03-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_bloomington22_individual_hist_04-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_bloomington22_individual_hist_05-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_bloomington22_individual_hist_06-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_bloomington22_individual_hist_07-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_bloomington22_individual_hist_08-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_0_bloomington22_individual_hist_09-eps-converted-to.pdf} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \caption{Round 2} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_bloomington22_individual_hist_00-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_bloomington22_individual_hist_01-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_bloomington22_individual_hist_02-eps-converted-to.pdf} 
\includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_bloomington22_individual_hist_03-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_bloomington22_individual_hist_04-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_bloomington22_individual_hist_05-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_bloomington22_individual_hist_06-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_bloomington22_individual_hist_07-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_bloomington22_individual_hist_08-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_1_bloomington22_individual_hist_09-eps-converted-to.pdf} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \caption{Round 3} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_bloomington22_individual_hist_00-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_bloomington22_individual_hist_01-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_bloomington22_individual_hist_02-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_bloomington22_individual_hist_03-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_bloomington22_individual_hist_04-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_bloomington22_individual_hist_05-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_bloomington22_individual_hist_06-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_bloomington22_individual_hist_07-eps-converted-to.pdf} 
\includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_bloomington22_individual_hist_08-eps-converted-to.pdf} \includegraphics[width=0.7\linewidth]{netsimilarity_ds_fac_4_round_2_bloomington22_individual_hist_09-eps-converted-to.pdf} \end{subfigure} \caption{Histograms of similarities shown for the same 10 patches as in Fig.\ref{fig:round_0_bloomington22_k_nearest}, \ref{fig:round_1_bloomington22_k_nearest}, \ref{fig:round_2_bloomington22_k_nearest}. Same patch selection across rounds.} \label{fig:bloomington22_individual_hist} \end{figure} \section{Dataset self-denoising} \label{backtodenoise} We now go back to the task described in Section~\ref{sec:denoising} and show how input similarity can be used to analyse experimental results and bring theoretical guarantees about robustness to label noise. \subsection{Similarity experimentally observed between patches} We studied the multi-round training scheme of \cite{anonymous} by applying our similarity measure to a sampling of input patches of the training dataset, for one network per round. The principle of the multi-round training scheme is to reduce the noise of the annotations, obtaining aligned annotations in the end (more details in Appendix~\ref{sec:denoise2}). For a given input patch, we computed its similarity with all the other patches for each of the 3 networks. With those similarities we can compute the nearest neighbors of that patch, see Fig.~\ref{fig:k_nearest}. The input patch shows a suburban area with sparse houses and individual trees. The closest neighbors look similar, as they usually feature the same types of buildings, building arrangement and vegetation. However, the network sometimes considers a patch similar when the resemblance is not clear from our point of view (for example, patches with large buildings). For more in-depth results, we computed the histogram of similarities for the same patch, see Fig.~\ref{fig:bloomington22_individual_hist_02}.
We observe that round 2 shows different neighborhood statistics, in that the patch is closer to all other patches than in other rounds. We observe the same behavior in 19 other input patches (see Appendix~\ref{sec:denoise2}). A hypothesis for this phenomenon is that the average gradient was not 0 at the end of that training round (e.g.\ due to optimization convergence issues), which would shift all similarity histograms by the same value. Qualitatively, for randomly sampled patches, their similarity histograms tend to be approximately symmetric in round 2, but with a longer left tail in round 1 and a longer right tail in round 3. Neighborhoods thus seem to change across the rounds, with fewer and fewer close points (after removing the global histogram shift in round 2). A possible interpretation is that this would reflect an increasing ability of the network to distinguish between different patches, with finer features in later training rounds. \begin{figure} \centering \includegraphics[width=\linewidth]{{netsimilarity_ds_fac_4_round_0_bloomington22_k_nearest_crop}.jpg} \includegraphics[width=\linewidth]{{netsimilarity_ds_fac_4_round_1_bloomington22_k_nearest_crop}.jpg} \includegraphics[width=\linewidth]{{netsimilarity_ds_fac_4_round_2_bloomington22_k_nearest_crop}.jpg} \caption{Example of nearest neighbors for a patch. Each line corresponds to a round.
Each patch has its similarity written under it.} \label{fig:k_nearest} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \centering \caption{Round 1} \includegraphics[width=\linewidth]{netsimilarity_ds_fac_4_round_0_bloomington22_individual_hist_02-eps-converted-to.pdf} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \caption{Round 2} \includegraphics[width=\linewidth]{netsimilarity_ds_fac_4_round_1_bloomington22_individual_hist_02-eps-converted-to.pdf} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \caption{Round 3} \includegraphics[width=\linewidth]{netsimilarity_ds_fac_4_round_2_bloomington22_individual_hist_02-eps-converted-to.pdf} \end{subfigure} \caption{Histograms of similarities for one patch across rounds.} \label{fig:bloomington22_individual_hist_02} \end{figure} \subsection{Comparison to the \emph{perceptual loss}} We compare our approach to the \emph{perceptual loss} on a nearest-neighbor retrieval task. We notice that the \emph{perceptual loss} sometimes performs reasonably well, but often does not. For instance, we show in Fig.~\ref{fig:comparison_percept} the closest neighbors to a structured residential area image, for the \emph{perceptual loss} (first row: no apparent relation) and for our similarity measure (second row: similar areas).
\begin{figure} \rotatebox[origin=l]{90}{$\!$Perceptual}\includegraphics[width=0.98\linewidth]{round_0_perceptual_loss.jpg} \rotatebox[origin=l]{90}{$\!\!$Similarity}\includegraphics[width=0.98\linewidth]{round_0_notre_similarite.jpg} \hspace*{5mm}Source\hspace*{2.2mm} | \hspace*{3.2mm}Closest neighbor patches \caption{Closest neighbors to the leftmost patch, using the \emph{perceptual loss} (first row) and our similarity definition (second row).} \label{fig:comparison_percept} \end{figure} \subsection{From similarity statistics to self-denoising effect estimation} We now show how such experimental similarity computations can be used to solve the initial problem of Section~\ref{sec:denoising}, by explicitly turning similarity statistics into a quantification of the self-denoising effect. Let us denote by $y_i$ the true (unknown) label for input $\mathbf{x}_i$, by $\widetilde{y}_i$ the noisy label given in the dataset, and by $\widehat{y}_i = f_\theta(\mathbf{x}_i)$ the label predicted by the network. We will denote the (unknown) noise by $\varepsilon_i = \widetilde{y}_i - y_i$ and assume it is centered and i.i.d., with finite variance $\sigma_\varepsilon^2$. The training criterion is $ E(\theta) = \sum_j || \widehat{y}_j - \widetilde{y}_j ||^2 $. At convergence, the training leads to a local optimum of the energy landscape: $ \nabla_{\!\theta} E = 0 $, that is, $ \sum_j (\widehat{y}_j - \widetilde{y}_j) \nabla_{\!\theta} \widehat{y}_j = 0 $. Let us choose any sample $i$ and multiply by $\nabla_{\!\theta} \widehat{y}_i$: using $\;k^I_\theta(\mathbf{x}_i,\mathbf{x}_j) = \nabla_{\!\theta} \widehat{y}_i \cdot
\nabla_{\!\theta} \widehat{y}_j\,$, we get: $$\;\; \sum_j (\widehat{y}_j - \widetilde{y}_j) \, k^I_\theta(\mathbf{x}_j, \mathbf{x}_i) = 0.$$ Let us denote by $ k^{IN}_\theta(\mathbf{x}_j,\mathbf{x}_i) = k^{I}_\theta(\mathbf{x}_j,\mathbf{x}_i) \big(\sum_j k^I_\theta(\mathbf{x}_j,\mathbf{x}_i)\big)^{-1} $ the column-normalized kernel, and by $ \E_k [ a ] =\, \sum_j\, a_j\, k^{IN}_\theta(\mathbf{x}_j,\mathbf{x}_i)$ the mean value of $a$ in the neighborhood of $i$, that is, the weighted average of the $a_j$ with weights $k^I_\theta(\mathbf{x}_j,\mathbf{x}_i)$ normalized to sum up to 1. This is actually a kernel regression, in the spirit of Parzen-Rosenblatt window estimators. Then the previous property can be rewritten as $ \,\E_k[ \widehat{y} ] = \E_k[ \widetilde{y} ]\, $. As $\, \E_k[ \widetilde{y} ] = \E_k[ y ] + \E_k[ \varepsilon ] \,$, this yields: $$\;\;\; \widehat{y}_i - \E_k[ y ] = \E_k[ \varepsilon ] + ( \widehat{y}_i - \E_k[ \widehat{y} ] ) $$ \ie the difference between the predicted $\widehat{y}_i$ and the average of the true labels in the neighborhood of $i$ is equal to the average of the noise in the neighborhood of $i$, up to the deviation of the prediction $\widehat{y}_i$ from the average prediction in its neighborhood. We want to bound the error $\| \widehat{y}_i - \E_k[ y ]\|$ without knowing either the true labels $y$ or the noise~$\varepsilon$. One can show that $\E_k[ \varepsilon ] \propto \var_\varepsilon(\E_k[ \varepsilon ])^{1/2} = \sigma_\varepsilon \, \| k^{IN}_\theta (\cdot,\mathbf{x}_i) \|_{L2}$. The denoising factor is thus the similarity kernel norm $\| k^{IN}_\theta (\cdot,\mathbf{x}_i)\|_{L2}$, which is between $1/\sqrt{N}$ and 1, depending on the neighborhood quality. It is $1/\sqrt{N}$ when all $N$ data points are identical, i.e. all satisfying $k^C_\theta(\mathbf{x}_i,\mathbf{x}_j) = 1$. On the other extreme, this factor is 1 when all points are independent: $k^I_\theta(\mathbf{x}_i,\mathbf{x}_j) =~0 \;\;$ $\forall i \neq j$.
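To illustrate how the denoising factor $\| k^{IN}_\theta(\cdot,\mathbf{x}_i)\|_{L2}$ interpolates between these two extremes, here is a minimal numeric sketch; the kernel column values are hypothetical and only meant to reproduce the identical, independent, and intermediate neighborhood cases.

```python
import math

def denoising_factor(column):
    """L2 norm of the column-normalized kernel k^IN(., x_i)."""
    total = sum(column)
    return math.sqrt(sum((k / total) ** 2 for k in column))

N = 100
identical = [1.0] * N                    # all N points identical: factor = 1/sqrt(N)
independent = [1.0] + [0.0] * (N - 1)    # all points independent: factor = 1
partial = [1.0] * 10 + [0.1] * (N - 10)  # hypothetical intermediate neighborhood

print(denoising_factor(identical))    # 0.1 = 1/sqrt(100)
print(denoising_factor(independent))  # 1.0
print(denoising_factor(partial))      # strictly between the two extremes
```

Any intermediate neighborhood structure yields a factor strictly between $1/\sqrt{N}$ and 1, which is how neighborhood quality translates into label denoising.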
This way we extend \emph{noise2noise} \cite{noise2noise} to real datasets with non-identical inputs. In our remote sensing experiment, we estimate in this way a denoising factor of 0.02, consistent across all training rounds and inputs ($\pm 10\%$), implying that each training round contributed equally to denoising the labels. This is confirmed by Fig.~\ref{fig:accuracies}, which shows the error steadily decreasing, on a control test where true labels are known. The shift $( \widehat{y}_i - \E_k[ \widehat{y} ] )$ on the other hand can be directly estimated given the network prediction. In our case, it is $4.4$px on average, which is close to the observed median error for the last round in Fig.~\ref{fig:accuracies}. It is largely input-dependent, with variance $3.2$px, which is reflected in the spread of the error distribution in Fig.~\ref{fig:accuracies}. This input-dependent shift thus provides a hint about prediction reliability. It is also possible to bound $( \widehat{y}_i - \E_k[ \widehat{y} ] ) = \E_k[ \widehat{y}_i - \widehat{y} ]$ using only similarity information (without predictions $\widehat{y}$). Theorem \ref{basicnet} implies that the map $\frac{\nabla_{\!\theta} f_\theta(\mathbf{x})}{\|\nabla_{\!\theta} f_\theta(\mathbf{x})\|} \mapsto f_\theta(\mathbf{x})$ is well-defined, and it can actually be shown to be Lipschitz with a network-dependent constant (under mild hypotheses).
Thus $$\| f_\theta(\mathbf{x}) - f_\theta(\mathbf{x}')\| \leqslant C \left\| \frac{\nabla_{\!\theta} f_\theta(\mathbf{x})}{\|\nabla_{\!\theta} f_\theta(\mathbf{x})\|} - \frac{\nabla_{\!\theta} f_\theta(\mathbf{x}')}{\|\nabla_{\!\theta} f_\theta(\mathbf{x}')\|} \right\| = \sqrt{2} C \sqrt{1 - k_\theta^C(\mathbf{x},\mathbf{x}')}\;,$$ yielding $\| \widehat{y}_i - \widehat{y}_j \| \leqslant \sqrt{2} C \sqrt{1 - k_\theta^C(\mathbf{x}_i,\mathbf{x}_j)} $ and thus $\big|\E_k[ \widehat{y}_i - \widehat{y} ]\,\big| \leqslant \sqrt{2} C \E_k\!\left[ \sqrt{1 - k_\theta^C(\mathbf{x}_i,\cdot)}\,\right]$. \section{Conclusion} We defined a proper notion of input similarity as perceived by the neural network, based on the ability of the network to distinguish the inputs. This brings a new tool to analyze trained networks, in addition to visualization tools such as grad-CAM \cite{gradcam}. We showed how to turn it into a density estimator, which was validated on a controlled experiment, and which can be used to perform fast statistics on large datasets. It opens the door to underfit/overfit/uncertainty analyses or even control during training, as it is differentiable and computable at low cost. We also showed that any desired similarity could be enforced during training, at reasonable cost, and noticed a dataset-dependent boosting effect that should be further studied along with robustness to adversarial attacks, as such training differs significantly from usual methods. Finally, we extended \emph{noise2noise} \cite{noise2noise} to the case of non-identical inputs, thus expressing self-denoising effects as a function of inputs' similarities. The code is available at \url{https://github.com/Lydorn/netsimilarity}~. \section*{Acknowledgments} We thank Victor Berger and Adrien Bousseau for useful discussions. This work benefited from the support of the project EPITOME ANR-17-CE23-0009 of the French National Research Agency~(ANR).
\section{Enforcing similarity} \label{sec:enforce} The similarity criterion we defined can be used not only to estimate how similar two samples are perceived to be, after training, but also to incite the network, during training, to evolve so as to consider these samples as similar. \paragraph{Asking two samples to be treated as similar} If two inputs $\mathbf{x}$ and $\mathbf{x}'$ are known to be similar (from a human point of view), one can enforce their similarity from the network perspective by adding to the loss the term:\vspace{-3mm} $$- k^C_\theta(\mathbf{x},\mathbf{x}') \; .$$ \paragraph{Asking a distribution of samples to be treated as similar} By extension, to enforce the similarity of a subset $\mathcal{S}$ of training samples, of size $n = |\mathcal{S}|$, one might consider the average pairwise similarity $k_\theta^C$ over all pairs, or the standard deviation of the gradients. Both turn out to be equivalent to maximizing the norm of the gradient mean $\mu = \frac{1}{n} \sum_{i\in\mathcal{S}} \frac{ \nabla_{\!\theta} f_\theta(\mathbf{x}_i) }{ \| \nabla_{\!\theta} f_\theta(\mathbf{x}_i) \|}$: $$\frac{1}{n (n-1)} \sum_{i,j\in\mathcal{S}, i\neq j} \!\!\!k^C_\theta(\mathbf{x}_i,\mathbf{x}_j) \;=\; \frac{n}{n-1} \|\mu\|^2 - \frac{1}{n-1} \;\;\;\;\;\;\mathrm{and}\;\;\;\;\;\; \var_{i\in\mathcal{S}}\, \frac{ \nabla_{\!\theta} f_\theta(\mathbf{x}_i) }{ \| \nabla_{\!\theta} f_\theta(\mathbf{x}_i) \|} = 1 - \|\mu\|^2\,.$$ In practice, common deep learning platforms are much faster when using mini-batches, but then return only the gradient sum $\sum_{i\in\mathcal{B}} \nabla_{\!\theta} f_\theta(\mathbf{x}_i)$ over a mini-batch $\mathcal{B}$, not individual gradients, preventing the normalization of each of them to compute $k_\theta^C$ or $\mu$.
So instead we compare means of un-normalized gradients, over two mini-batches $\mathcal{B}_1$ and $\mathcal{B}_2$ each comprising $n_B$ samples from $\mathcal{S}$, which yields the criterion: $$n_B \, \frac{ \| \mu_1 - \mu_2 \|^2 }{ \| \mu_1 \| \| \mu_2 \| } \;\;\;\;\;\;\;\;\mathrm{where}\;\;\;\;\mu_k = \frac{1}{n_B} \sum_{i\in\mathcal{B}_k} \nabla_{\!\theta} f_\theta(\mathbf{x}_i) \, .$$ The factor $n_B$ counterbalances the $\frac{1}{\sqrt{n_B}}$ variance reduction effect due to averaging over $n_B$ samples. \paragraph{Group invariance} The distributions of samples asked to be seen as similar could be group orbits~\cite{cohen2016group}. A differential formulation of group invariance enforcement is also proposed in Appendix~\ref{sec:group2}. \paragraph{Complexity} The \emph{double-backpropagation} routine, available on common deep learning platforms, allows the optimization of such criteria~\cite{drucker1991double,hochreiter1995simplifying,rifai2011contractive,gulrajani2017improved}, roughly doubling the computational time of a gradient step. \paragraph{Dynamics of learning} Our approach enforces similarity not just at the output level, but within the whole internal computational process. Therefore, during training, information is provided directly to each parameter instead of being back-propagated through possibly many layers. Thus the dynamics of learning are expected to be different, especially for deep networks. To test this hypothesis, we train a small network on MNIST with and without the similarity criterion acting as an auxiliary loss (see Fig.~\ref{fig:dyn}). As a result, we observe an acceleration of the convergence very early in the learning process. It is worth noting that this effect can be observed across a wide range of different neural architectures. We performed additional experiments on toy datasets as well as on CIFAR10, with no or only negligible improvements.
Altogether, this suggests that using the similarity criterion during training may be beneficial for specific datasets as opposed to specific architectures; and indeed, as the intra-class variability in CIFAR10 is known to be high, considering all examples of a CIFAR10 class as similar is less relevant. \section{Higher output dimension} \label{sec:higher} Let us now study the more complex case where $f_\theta(\mathbf{x})$ is a vector $\left( f^i_\theta(\mathbf{x}) \right)_{i \in [1,d]}$ in $\mathbb{R}^d$ with $d > 1$. Under a mild hypothesis on the network (output expressivity), which is always satisfied unless the network is specially designed not to satisfy it: \begin{theorem} \label{th:multidim} The optimal parameter change $\delta \theta$ to push $f_\theta(\mathbf{x})$ in a direction $\mathbf{v} \in \mathbb{R}^d$ (with a force $\varepsilon \in \mathbb{R}$), \ie such that $f_{\theta + \delta \theta} (\mathbf{x}) - f_\theta(\mathbf{x}) = \varepsilon \mathbf{v}$, induces at any other point $\mathbf{x}'$ the following output variation: \begin{equation} \label{eq:multidim} f_{\theta + \delta \theta} (\mathbf{x}') - f_\theta(\mathbf{x}') = \varepsilon \, K_\theta(\mathbf{x}',\mathbf{x})\, K_\theta(\mathbf{x},\mathbf{x})^{-1}\, \mathbf{v} \, +\, O(\varepsilon^2) \end{equation} where the $d \times d$ kernel matrix $K_\theta(\mathbf{x}',\mathbf{x})$ is defined by $K^{ij}_\theta(\mathbf{x}',\mathbf{x}) = \nabla_{\!\theta} f^i_\theta(\mathbf{x}') \cdot \nabla_{\!\theta} f^j_\theta(\mathbf{x})$. \end{theorem} The similarity kernel is now a matrix and not just a single value, as it describes the relation between moves $\mathbf{v} \in \mathbb{R}^d$. Note that these matrices $K_\theta$ are only $d \times d$ where $d$ is the output dimension. They are thus generally small and easy to manipulate or invert.
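For a linear model $f_W(\mathbf{x}) = W\mathbf{x}$, Theorem~\ref{th:multidim} can be checked by hand: each $\nabla_W f^i$ is $e_i \mathbf{x}^T$, so $K_\theta(\mathbf{x}',\mathbf{x}) = (\mathbf{x}'\cdot\mathbf{x})\,\mathrm{Id}$, and the minimal-norm update realizing $f(\mathbf{x}) \to f(\mathbf{x}) + \varepsilon\mathbf{v}$ is $\delta W = \varepsilon\, \mathbf{v}\mathbf{x}^T / \|\mathbf{x}\|^2$. The sketch below verifies Equation~(\ref{eq:multidim}) numerically; the linear model and the particular $\mathbf{x}$, $\mathbf{x}'$, $\mathbf{v}$ are illustrative assumptions, not our trained network.

```python
def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def minimal_norm_update(x, v, eps):
    """Smallest Frobenius-norm dW such that dW @ x = eps * v:
    dW = eps * v x^T / (x . x)."""
    s = dot(x, x)
    return [[eps * vi * xj / s for xj in x] for vi in v]

def apply_matrix(W, x):
    return [dot(row, x) for row in W]

# Hypothetical toy vectors: input in R^3, requested output move v in R^2.
x = [1.0, 2.0, -1.0]
xp = [0.5, -1.0, 2.0]
v = [3.0, -2.0]
eps = 0.01

dW = minimal_norm_update(x, v, eps)
change_at_x = apply_matrix(dW, x)    # realizes the requested move eps * v at x
change_at_xp = apply_matrix(dW, xp)  # induced change at another input x'
# Theorem prediction: eps * K(x',x) K(x,x)^{-1} v = eps * (x'.x)/(x.x) * v
predicted = [eps * dot(xp, x) / dot(x, x) * vi for vi in v]
print(change_at_x)   # the requested move eps * v (up to rounding)
print(change_at_xp)
print(predicted)     # matches change_at_xp
```

The induced change at $\mathbf{x}'$ matches $\varepsilon\,(\mathbf{x}'\cdot\mathbf{x})/(\mathbf{x}\cdot\mathbf{x})\,\mathbf{v}$ exactly here, since for a linear model the $O(\varepsilon^2)$ term vanishes.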
\paragraph{Normalized similarity matrix} The unitless symmetrized, normalized version of the kernel (\ref{eq:multidim}) is: \begin{equation} \label{eq:multidimkern} K_\theta^C(\mathbf{x},\mathbf{x}') \;=\; K_\theta(\mathbf{x},\mathbf{x})^{-1/2}\; K_\theta(\mathbf{x},\mathbf{x}')\; K_\theta(\mathbf{x}',\mathbf{x}')^{-1/2} \;. \end{equation} It has the following properties: its coefficients are bounded, in $[-1,1]$; its trace is at most $d$; its (Frobenius) norm is at most $\sqrt{d}$; self-similarity is identity: $\forall \mathbf{x}, \,\;K_\theta^C(\mathbf{x},\mathbf{x}) = \mathrm{Id}$; the kernel is symmetric, in the sense that $K_\theta^C(\mathbf{x}',\mathbf{x}) = K_\theta^C(\mathbf{x},\mathbf{x}')^T$. \paragraph{Similarity in a single value} \label{sec:singlevalue} To summarize the similarity matrix $K_\theta^C(\mathbf{x},\mathbf{x}')$ into a single real value in $[-1,1]$, we consider:\vspace{-2mm} \begin{equation} \label{eq:multidimkernsum} k_\theta^C(\mathbf{x},\mathbf{x}') \;=\; \frac{1}{d} \,\mathrm{Tr}\, K^C_\theta(\mathbf{x},\mathbf{x}') \;. \end{equation} It can be shown indeed that if $k_\theta^C(\mathbf{x},\mathbf{x}')$ is close to 1, then $K_\theta^C(\mathbf{x},\mathbf{x}')$ is close to $\mathrm{Id}$, and reciprocally. See Appendix~\ref{sec:high2} for more details and a discussion about the links between $\frac{1}{d} \,\mathrm{Tr}\, K^C_\theta(\mathbf{x},\mathbf{x}')$ and $\big\| K_\theta^C(\mathbf{x},\mathbf{x}') - \mathrm{Id}\big\|_F$. \paragraph{Metrics on output: rotation invariance} Similarity in $\mathbb{R}^d$ might be richer than just estimating distances in $L^2$ norm. For instance, for our 2D image registration task, the network could be known (or desired) to be equivariant to rotations. The similarity between two output variations $\mathbf{v}$ and $\mathbf{v}'$ can be made rotation-invariant by applying the rotation that best aligns $\mathbf{v}$ and $\mathbf{v}'$ beforehand. 
This can actually be easily computed in closed form and yields: $$ k^{C, \mathrm{rot}}_\theta(\mathbf{x},\mathbf{x}') \;=\; \frac{1}{2} \sqrt{ \big\| K_\theta^C(\mathbf{x},\mathbf{x}') \big\|_F^2 + 2 \det K_\theta^C(\mathbf{x},\mathbf{x}')} \; . $$ Note that other metrics are possible in the output space. For instance, the loss metric quantifies the norm of a move $\mathbf{v}$ by its impact on the loss $\frac{dL(y)}{dy}\big|_{f_\theta(\mathbf{x})}(\mathbf{v})$. It has a particular meaning though, is not intrinsic, and is not always relevant, \eg in the noisy label case seen in Section~\ref{sec:denoising}. \paragraph{The case of classification tasks} When the output of the network is a probability distribution $p_{\theta,\mathbf{x}}(c)$, over a finite number of given classes $c$ for example, it is natural from an information theoretic point of view to rather consider $f^c_\theta(\mathbf{x}) = - \log p_{\theta,\mathbf{x}}(c)$. These are actually the quantities computed in the pre-softmax layer from which common practice directly computes the cross-entropy loss. It turns out that the $L^2$ norm of variations $\delta \!f$ in this space naturally corresponds to the Fisher information metric, which quantifies the impact of parameter variations $\delta \theta$ on the output probability $p_{\theta,\mathbf{x}}$, as $\mathrm{KL}(p_{\theta,\mathbf{x}}||p_{\theta+\delta\theta,\mathbf{x}})$. The matrices $K_{\theta}(\mathbf{x},\mathbf{x}) = \big(\, \nabla_{\theta} f_\theta^c(\mathbf{x}) \cdot \nabla_{\theta} f_\theta^{c'}(\mathbf{x}) \,\big)_{c,c'}$ and $F_{\theta,\mathbf{x}} = \E_c \left[ \nabla_{\theta} f_\theta^c(\mathbf{x})\; \nabla_{\theta} f_\theta^c(\mathbf{x})^T \right]$ are indeed to each other what correlation is to covariance. Thus the quantities defined in Equation (\ref{eq:multidimkernsum}) already take into account information geometry when applied to the pre-softmax layer, and do not need supplementary metric adjustment.
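The normalized kernel (Eq.~\ref{eq:multidimkern}), its trace summary (Eq.~\ref{eq:multidimkernsum}) and the rotation-invariant variant above can be sketched as follows (NumPy, ours; per-output gradients are again assumed to be given as $d \times p$ matrices, and the inverse square roots use an eigendecomposition since $K_\theta(\mathbf{x},\mathbf{x})$ is symmetric positive definite):

```python
import numpy as np

def _inv_sqrt(M):
    # inverse symmetric square root of a symmetric positive-definite matrix
    w, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def normalized_kernel(G_x, G_xp):
    """K^C(x, x') = K(x,x)^{-1/2} K(x,x') K(x',x')^{-1/2}."""
    return _inv_sqrt(G_x @ G_x.T) @ (G_x @ G_xp.T) @ _inv_sqrt(G_xp @ G_xp.T)

def trace_similarity(G_x, G_xp):
    """k^C(x, x') = Tr K^C(x, x') / d, a single value in [-1, 1]."""
    return np.trace(normalized_kernel(G_x, G_xp)) / G_x.shape[0]

def rot_invariant_similarity(Kc):
    """2D rotation-invariant summary: 0.5 * sqrt(||Kc||_F^2 + 2 det Kc)."""
    return 0.5 * np.sqrt(np.sum(Kc ** 2) + 2.0 * np.linalg.det(Kc))
```

One can check that self-similarity gives the identity matrix (hence a trace summary of 1), and that composing $K^C$ with a rotation of the output plane leaves the rotation-invariant value unchanged, since a rotation preserves both the Frobenius norm and the determinant.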
\paragraph{Faster setup for classification tasks with many classes} \label{Lorisproj} In a classification task with $d$ classes, when $d$ is large, the computation of $d \times d$ matrices may be prohibitive. As a workaround, for a given input training sample $\mathbf{x}$, the classification task can be seen as a binary one (the right label $c_R$ \vs the other ones), in which case the $d$ outputs of the neural network can be accordingly combined into a single real value. The 1D similarity measure can then be used to compare any training samples of the same class. When computing statistics on similarity values $\E_{\mathbf{x}'}\big[ k^C_\theta(\mathbf{x},\mathbf{x}') \big]$, another possible task binarization approach is to sample an adversary class $c_A$ along with $\mathbf{x}'$, and hence consider $\nabla_{\!\theta} f^{c_R}_\theta(\mathbf{x}) - \nabla_{\!\theta} f^{c_A}_\theta(\mathbf{x})$. Both approaches will lead to similar results in Section~\ref{sec:enforce}. \section{Estimating density} \label{sec:density} In this section, we use similarity to estimate input neighborhoods and perform statistics on them. \subsection{Estimating the number of neighbors} Given a point $\mathbf{x}$, how many samples $\mathbf{x}'$ are similar to $\mathbf{x}$ according to the network? This can be measured by computing $k^C_\theta(\mathbf{x},\mathbf{x}')$ for all $\mathbf{x}'$ and picking the closest ones, \eg the $\mathbf{x}'$ such that $k^C_\theta(\mathbf{x},\mathbf{x}') \geqslant 0.9$. More generally, for any data point $\mathbf{x}$, the histogram of the similarity $k^C_\theta(\mathbf{x},\mathbf{x}')$ over all $\mathbf{x}'$ in the dataset (or a representative subset thereof) can be drawn, and turned into an estimate of the number of neighbors of $\mathbf{x}$.
To do this, several types of estimates are possible: \begin{itemize} \setlength\itemsep{0em} \setlength{\parskip}{1pt} \item hard-thresholding, for a given threshold $\tau \in [0,1]$: \hfill $N_\tau(\mathbf{x}) = \sum_{\mathbf{x}'} \mathbb{1}_{k^C_\theta(\mathbf{x},\mathbf{x}') \geqslant \tau}$ \vspace{0.5mm} \item soft estimate: \hfill $N_S(\mathbf{x}) \; = \; \sum_{\mathbf{x}'} k_\theta^C(\mathbf{x},\mathbf{x}')$ \vspace{0.5mm} \item less-soft positive-only estimate ($\alpha > 0$): \hfill $N^+_\alpha(\mathbf{x}) \; = \; \sum_{\mathbf{x}'} \mathbb{1}_{k_\theta^C(\mathbf{x},\mathbf{x}') > 0}\; k_\theta^C(\mathbf{x},\mathbf{x}')^\alpha$ \end{itemize} In practice we observe that $k_\theta^C$ is very rarely negative, and thus the soft estimate $N_S$ can be justified as an average of the hard-thresholding estimate $N_\tau$ over all possible thresholds $\tau$: $$\int_{\tau = 0}^1 \!\!\!N_\tau(\mathbf{x}) d\tau \;=\; \sum_{\mathbf{x}'} \int_{\tau = 0}^1 \!\!\mathbb{1}_{k^C_\theta(\mathbf{x},\mathbf{x}') \geqslant \tau} \, d\tau \;=\; \sum_{\mathbf{x}'} k^C_\theta(\mathbf{x},\mathbf{x}')\, \mathbb{1}_{k^C_\theta(\mathbf{x},\mathbf{x}') \geqslant 0} \;=\; N^+_1(\mathbf{x}) \;\simeq\; N_S(\mathbf{x})$$ \subsection{Low complexity of the soft estimate $N_S(\mathbf{x})$} \label{sec:NNcomplexity} The soft estimate $N_S(\mathbf{x})$ is rewritable as: $$\sum_{\mathbf{x}'} k^C_\theta(\mathbf{x},\mathbf{x}') = \sum_{\mathbf{x}'} \frac{ \nabla_\theta f_\theta(\mathbf{x}) }{\| \nabla_\theta f_\theta(\mathbf{x}) \|} \cdot \frac{ \nabla_\theta f_\theta(\mathbf{x}') }{\|\nabla_\theta f_\theta(\mathbf{x}')\|} = \frac{ \nabla_\theta f_\theta(\mathbf{x}) }{\| \nabla_\theta f_\theta(\mathbf{x}) \|} \cdot \mathbf{g} \;\;\;\;\mathrm{with}\;\;\;\mathbf{g}=\sum_{\mathbf{x}'} \frac{ \nabla_\theta f_\theta(\mathbf{x}') }{\|\nabla_\theta f_\theta(\mathbf{x}')\|}$$ and consequently $N_S(\mathbf{x})$ can be computed jointly for all $\mathbf{x}$ in linear time $O(|\mathcal{D}|p)$ in the dataset size 
$|\mathcal{D}|$ and in the number of parameters $p$, in just two passes over the dataset, when the output dimension is 1. For higher output dimensions $d$, a similar trick can be used and the complexity becomes $O(|\mathcal{D}|d^2p)$. For classification tasks with a large number $d$ of classes, the complexity can be reduced to $O(|\mathcal{D}|p)$ through an approximation consisting in binarizing the task (\cf end of Section~\ref{Lorisproj}). \subsection{Test of the various estimators} In order to rapidly test the behavior of all possible estimators, we applied them to a toy problem where the network's goal is to predict a sinusoid. To change the difficulty of the problem, we vary its frequency, while keeping the number of samples constant. Appendix~\ref{sec:estim2} gives more details and results for the toy problem. Fig.~\ref{fig:toy_avg_all_measures} shows, for each estimator (with different parameters when relevant), the result of its neighbor-count estimation. When the frequency $f$ of the sinusoid to predict increases, the number of neighbors decreases as $\frac{1}{f}$ for every estimator. This aligns with our intuition that as the problem gets harder, the network needs to distinguish input samples more finely to achieve good performance, so the number of neighbors is lower. In particular we observe that the proposed $N_S(\mathbf{x})$ estimator behaves well, so we will use it in larger studies requiring an efficient estimator. \subsection{Further potential uses for fitness estimation} When the number of neighbors of a training point $\mathbf{x}$ is very low, the network is able to set any label to $\mathbf{x}$, as this will not interfere with other points, by definition of our similarity criterion $k_\theta(\mathbf{x},\mathbf{x}')$. This is thus a typical overfit case, where the network can learn by heart a label associated with a particular, isolated point.
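The neighbor-count estimators and the linear-time computation of $N_S$ described above can be sketched as follows (a NumPy illustration of ours, on precomputed per-sample gradients, one flattened gradient per row):

```python
import numpy as np

def neighbor_estimates(sims, tau=0.9, alpha=2.0):
    """Hard, soft and positive-only neighbor-count estimates, from a
    vector of similarities k^C(x, x') over the dataset."""
    sims = np.asarray(sims)
    N_tau = np.sum(sims >= tau)
    N_S = np.sum(sims)
    N_alpha = np.sum(np.clip(sims, 0.0, None) ** alpha)
    return N_tau, N_S, N_alpha

def soft_counts_all(grads):
    """N_S(x) jointly for all samples in O(|D| p): one pass to accumulate
    the sum g of normalized gradients, then one dot product per sample."""
    U = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    g = U.sum(axis=0)
    return U @ g
```

`soft_counts_all` matches the quadratic-time computation $\sum_{\mathbf{x}'} k^C_\theta(\mathbf{x},\mathbf{x}')$ row by row, while never forming the full similarity matrix.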
Conversely, when the set of neighbors of $\mathbf{x}$ is a large fraction of the dataset, comprising varied elements, by definition of $k_\theta(\mathbf{x},\mathbf{x}')$ the network is not able to distinguish them, and consequently it can only provide a common output for all of them. Therefore it might not be able to express enough variety, which would be a typical underfit case. The quality of fit can thus be observed by monitoring the number of neighbors together with the variance of the desired labels in the neighborhoods (to distinguish underfit from just high density). \begin{figure} \begin{minipage}[c]{0.6\linewidth} \includegraphics[width=0.99\linewidth]{toy_avg_all_measures.pdf} \caption{Density estimation using the various approaches (log scale). All approaches behave similarly and show good results, except the ones with extreme thresholds.} \label{fig:toy_avg_all_measures} \end{minipage} \hfill \begin{minipage}[c]{0.35\linewidth} \includegraphics[width=\linewidth]{mnist_convergence_12_8.png} \caption{Validation accuracy of a neural network trained on MNIST with and without the similarity criterion (note that the x-axis is the number of minibatches presented to the network, not of epochs).} \label{fig:dyn} \end{minipage} \end{figure} \paragraph{Prediction uncertainty} A measure of the uncertainty of a prediction $f_\theta(\mathbf{x})$ could be to check how easy it would have been to obtain another value during training, without disturbing the training of other points. A given change $\mathbf{v}$ of $f_\theta(\mathbf{x})$ induces changes $\frac{k^I_\theta(\mathbf{x},\mathbf{x}')}{\| \nabla_\theta f_\theta(\mathbf{x}) \|^2} \mathbf{v}$ over other points $\mathbf{x}'$ of the dataset, creating a total $L^1$ disturbance $\sum_{\mathbf{x}'} \|\frac{k^I_\theta(\mathbf{x},\mathbf{x}')}{\| \nabla_\theta f_\theta(\mathbf{x}) \|^2} \mathbf{v}\|$.
The uncertainty factor would then be the norm of $\mathbf{v}$ affordable within a disturbance level, and quickly approximable as $\frac{ \| \nabla_\theta f_\theta(\mathbf{x}) \|^2 }{ \sum_{\mathbf{x}'} k^I_\theta(\mathbf{x},\mathbf{x}')}$. \section{Proof details of the self-denoising effect quantification} \subsection{Magnitude of kernel-smoothed i.i.d. noise} We show here that $\E_k[ \varepsilon ] \propto \var_\varepsilon(\E_k[ \varepsilon ])^{1/2} = \sigma_\varepsilon \, \| k^{IN}_\theta \|_{L2}$. Let us denote by $\E_\varepsilon[\,]$ and $\var_\varepsilon(\,)$ the expectation and variance with respect to the random variable $\varepsilon$. As a reminder, by assumptions in the noise definition, $\varepsilon = (\varepsilon_i)_i$ is a random, i.i.d.~noise, centered and of standard deviation $\sigma_\varepsilon$. This is not to be confused with the symbol $\E_k[\,]$, which was defined as, for any vector field $a$: $$\E_k [ a ] =\, \sum_j\, a_j\, k^{IN}_\theta(\mathbf{x}_j,\mathbf{x}_i) \; ,$$ \ie as the mean value of $a$ in the neighborhood of $i$, that is, the weighted average of the $a_j$ with weights $k^{IN}_\theta(\mathbf{x}_j,\mathbf{x}_i)$, which are positive and sum up to 1. Given a network and its associated kernel $k^{IN}_\theta$, we are interested in knowing the typical values of $\E_k[ \varepsilon ]$ for random $\varepsilon$. First, the expectation over the noise of $\E_k[ \varepsilon ]$ is: $$ \E_\varepsilon\left[ \E_k[ \varepsilon ] \right] \;=\; \E_\varepsilon\left[ \sum_j\, \varepsilon_j\, k^{IN}_\theta(\mathbf{x}_j,\mathbf{x}_i) \right] \; = \; \sum_j \E_\varepsilon[\varepsilon_j]\, k^{IN}_\theta(\mathbf{x}_j,\mathbf{x}_i) \;=\; 0$$ as $\varepsilon$ is a centered noise.
Thus the random variable $\E_k[ \varepsilon ]$ is also centered, and therefore its typical values are described by its standard deviation, which is the square root of its variance: $$\E_k[ \varepsilon ] \;\propto\; \var_\varepsilon\left(\E_k[ \varepsilon ]\right)^{1/2} \; .$$ The variance can be computed as follows: \begin{align*} \var_\varepsilon\left( \E_k[ \varepsilon ] \right) & \;=\; \E_\varepsilon\left[ \left( \sum_j\, \varepsilon_j\, k^{IN}_\theta(\mathbf{x}_j,\mathbf{x}_i) \right)^2 \right] \\ & \;=\; \E_\varepsilon\left[ \sum_j\, \varepsilon_j^2\, \left( k^{IN}_\theta(\mathbf{x}_j,\mathbf{x}_i) \right)^2 \right] \;\;\;\;\;\text{as } \varepsilon \text{ is i.i.d.} \\ & \;=\; \sigma^2_\varepsilon \, \sum_j\, \left( k^{IN}_\theta(\mathbf{x}_j,\mathbf{x}_i) \right)^2 \\ & \;=\; \sigma^2_\varepsilon\, \left\| k^{IN}_\theta(\cdot,\mathbf{x}_i) \right\|^2_{L2} \; . \\ \end{align*} As the weights $p_j = k^{IN}_\theta(\mathbf{x}_j,\mathbf{x}_i)$, for given $i$ and varying $j$, are positive and sum up to 1, they form a probability distribution. Hence the value of $\left\| k^{IN}_\theta(\cdot,\mathbf{x}_i) \right\|^2_{L2} = \|p\|^2_{L2}$ satisfies: \begin{itemize} \item $\|p\|_{L2} \leqslant 1$, $\;\;\;\;$ as $\sum_j p_j^2 \leqslant \sum_j p_j = 1$, with equality only when $p_j = p_j^2\; \forall j$, that is, all $p_j = 0$ except for one $p_{j^*} = 1$, which means $k^I_\theta(\mathbf{x}_j,\mathbf{x}_i) = 0 \;\;$ $\forall j \neq i$, which means that all data samples are fully independent from the network's point of view. \item $\|p\|_{L2} \geqslant \frac{1}{\sqrt{N}}$ $\;\;\;\;$ as $1 = \sum_j 1 \times p_j \leqslant \| 1 \|_{L2}\; \|p\|_{L2} = \sqrt{N} \, \|p\|_{L2} $ (Cauchy-Bunyakovsky-Schwarz), with equality reached for the uniform distribution: $p_j = \frac{1}{N} \, \forall j$, where $N$ is the number of data samples. 
This implies that all $k^C_\theta(\mathbf{x}_j,\mathbf{x}_i)$ are equal, for all $i,j$, hence they are all equal to $k^C_\theta(\mathbf{x}_i,\mathbf{x}_i) = 1$. This is the case studied in~\cite{noise2noise}: all input points are identical. \end{itemize} The denoising factor $\| k^{IN}_\theta (\cdot,\mathbf{x}_i) \|_{L2}$, which depends on the data point $\mathbf{x}_i$ considered, thus expresses where the neighborhood of $\mathbf{x}_i$ lies, between these two extremes (all $\mathbf{x}_j$ very different from $\mathbf{x}_i$, or all identical). Note: the results above remain valid when the output is higher-dimensional, under the supplementary assumption that the covariance matrix of the noise is proportional to the Identity matrix (\ie, the noises on the various coefficients of the label vector are independent from each other, and follow the same law, with standard deviation $\sigma_\varepsilon$). If not, the expression for $\mathrm{co}\!\var_\varepsilon\left( \E_k[ \varepsilon ] \right)$ is more complex, as $\Sigma_\varepsilon$ and $k^{IN}_\theta$ interact. Note that when the output is of dimension $d$, the kernel $k^{IN}_\theta(\mathbf{x}_j,\mathbf{x}_i)$ is a $d \times d$ matrix, thus the denoising factor $\left\| k^{IN}_\theta(\cdot,\mathbf{x}_i) \right\|^2_{L2}$ has to be replaced with the matrix $\sum_j k^{IN}_\theta(\mathbf{x}_j,\mathbf{x}_i)\, k^{IN}_\theta(\mathbf{x}_j,\mathbf{x}_i)^T$, which can be summarized by its trace, which is the $L^2$ norm of the Frobenius norms: $\Big\| \left\| k^{IN}_\theta(\cdot,\mathbf{x}_i) \right\|_F \Big\|^2_{L2}$. \subsection{The function: gradient $\mapsto$ output is Lipschitz} Theorem \ref{basicnet} implies that the application: $\frac{\nabla_{\!\theta} f_\theta(\mathbf{x})}{\|\nabla_{\!\theta} f_\theta(\mathbf{x})\|} \mapsto f_\theta(\mathbf{x})$ is well-defined. We show here that this application is also Lipschitz, with a network-dependent constant, under mild hypotheses. 
We consider the same assumptions as in Theorem~\ref{basicnet}~: $f_\theta$ is a real-valued network, whose last layer is a linear layer or a standard activation function thereof (such as sigmoid, tanh, ReLU...), without parameter sharing (in that last layer). We will also require that the derivative of the activation function is bounded, which is a safe assumption for all networks meant to be trained by gradient descent. Another, technical property (bounded input space) will be assumed in order to imply bounded gradients. A side note indicates how to rewrite the desired property if the input space is not bounded. \newcommand{\mathbf{u}}{\mathbf{u}} Let $\mathbf{x}$ and $\mathbf{x}'$ be any two inputs. We want to bound $\left|f_\theta(\mathbf{x}) - f_\theta(\mathbf{x}')\right|$ by $\|\mathbf{u} - \mathbf{u}'\|_2$ times some constant, where $\mathbf{u} = \frac{\nabla_{\!\theta} f_\theta(\mathbf{x})}{\|\nabla_{\!\theta} f_\theta(\mathbf{x})\|}$ and $\mathbf{u}' = \frac{\nabla_{\!\theta} f_\theta(\mathbf{x}')}{\|\nabla_{\!\theta} f_\theta(\mathbf{x}')\|}$. Let us denote the non-normalized gradients by $\mathbf{v} = \nabla_{\!\theta} f_\theta(\mathbf{x})$ and $\mathbf{v}' = \nabla_{\!\theta} f_\theta(\mathbf{x}')$. We have $\mathbf{u} = \frac{\mathbf{v}}{\|\mathbf{v}\|}$ and $\mathbf{u}' = \frac{\mathbf{v}'}{\|\mathbf{v}'\|}$. We will proceed in two steps: bounding $\left|f_\theta(\mathbf{x}) - f_\theta(\mathbf{x}')\right|$ by $\|\mathbf{v} - \mathbf{v}'\|_2$, and then $\|\mathbf{v} - \mathbf{v}'\|_2$ by $\|\mathbf{u} - \mathbf{u}'\|_2$. The first step is easy and actually sufficient to bound with a non-normalized similarity kernel $k_\theta = \mathbf{v} \cdot \mathbf{v}'$ the shift from the average prediction in the neighborhood. The second step provides a more elegant bound, in that it makes use of the normalized similarity kernel $k^C_\theta = \mathbf{u} \cdot \mathbf{u}'$, but that bound is a priori not as tight and requires more assumptions. 
\medskip \textbf{Case where the last layer is linear} The output of the network is of the form $$f_\theta(\mathbf{x}) = \sum_i w_i a_i(\mathbf{x}) + b \;, $$ where $w_i$ and $b$ are parameters in $\mathbb{R}$ and $a_i(\mathbf{x})$ activities from previous layers. Thus: \begin{align*} \left|f_\theta(\mathbf{x}) - f_\theta(\mathbf{x}')\right| & = \; \left| \sum_i w_i \, (a_i(\mathbf{x}) - a_i(\mathbf{x}'))\right| \\ & \leqslant \; \| \mathbf{w} \|_2 \, \| \mathbf{a}(\mathbf{x}) - \mathbf{a}(\mathbf{x}') \|_2\\ & \leqslant \; \| \mathbf{w} \|_2 \, \sqrt{ \sum_i ( v_i - v'_i )^2 } \end{align*} where the sum is taken over parameters $i$ in the last layer only, using the fact that activities $a_i$ in the last layer are equal to some of the coefficients of the gradient: $v_i := \frac{\partial f_\theta(\mathbf{x})}{\partial w_i} = a_i(\mathbf{x})$. Note that the derivative with respect to the shift $b$ is $v_b := \frac{\partial f_\theta(\mathbf{x})}{\partial b} = 1$, which ensures that the norm of $\mathbf{v}$ is at least 1. 
This implies: $$ \| \mathbf{u} - \mathbf{u}' \|_2 \; \geqslant \; \left| u_b - u'_b \right| \; = \; \left| \frac{1}{\|\mathbf{v}\|} - \frac{1}{\|\mathbf{v}'\|} \right| $$ which, combined with: $$ \left|v_i - v'_i\right| \;\;=\;\; \|\mathbf{v}'\|\; \left|\frac{1}{\|\mathbf{v}'\|} v_i - \frac{v'_i}{\|\mathbf{v}'\|}\right| \;\;=\;\; \|\mathbf{v}'\|\; \left| \, u_i - u'_i + \left( \frac{1}{\|\mathbf{v}'\|} - \frac{1}{\|\mathbf{v}\|} \right) v_i \, \right| $$ yields: $$ \left|v_i - v'_i\right| \;\; \leqslant \;\; \|\mathbf{v}'\| \,\Big( \left| \, u_i - u'_i \,\right| \,+\, \| \mathbf{u} - \mathbf{u}' \|_2 \,|v_i| \Big) \;\; \leqslant \;\; \|\mathbf{v}'\| \, \| \mathbf{u} - \mathbf{u}' \|_2 \, (1+|v_i|) $$ from which we finally obtain: $$ \left|f_\theta(\mathbf{x}) - f_\theta(\mathbf{x}')\right| \; \leqslant \; \left[ \| \mathbf{w} \|_2 \, \|\mathbf{v}'\| \, \sqrt{ \sum_i (1+|v_i|)^2 } \right] \, \| \mathbf{u} - \mathbf{u}' \|_2 $$ which is the bound we were searching for. For the term between brackets to be bounded by a network-dependent constant, one can suppose for instance that the derivative of the activation functions is bounded (which is usually the case for networks meant to be trained by gradient descent), and that the input space is bounded as well; in such cases indeed all coefficients of the gradient vector $\mathbf{v}$ or $\mathbf{v}'$ are bounded, as derivatives of a function composed of constant linear applications (except for the first layer which is a linear application whose factors are bounded inputs, when seen as an application defined on parameters) and of bounded-derivatives activation functions. \textbf{Note for unbounded input spaces: } If the input space is not bounded, the gradients are not bounded absolutely, as for instance the gradient with respect to a weight in the first layer is the input itself (times a chain product). 
In that case the application $\mathbf{x} \mapsto \mathbf{v}$ still satisfies a bound of the form $\|\mathbf{v}\| \leqslant (1+\|\mathbf{x}\|)\,A$, with $A$ a network-dependent constant (product of the norms of the layer weight matrices and of the bound on activation-function derivatives, raised to the power of the network depth), and thus the application $\mathbf{u} \mapsto f_\theta(\mathbf{x})$ still satisfies a bound of the form, for any $\mathbf{x}$, $\mathbf{x}'$: $$ \left|f_\theta(\mathbf{x}) - f_\theta(\mathbf{x}')\right| \;\; \leqslant \;\; B \;(1+\|\mathbf{x}\|)\; (1+\|\mathbf{x}'\|) \; \| \mathbf{u} - \mathbf{u}' \|_2 \; .$$ The last statement in the paper then becomes $$\big|\,\E_k[ \widehat{y}_i - \widehat{y} ]\,\big| \;\;\leqslant\;\; \sqrt{2}\, B \;(1+\|\mathbf{x}_i\|)\; \max_j (1+\|\mathbf{x}_j\|) \; \E_k\!\left[ \sqrt{1 - k_\theta^C(\mathbf{x}_i,\cdot)}\,\right] $$ which in practice rewrites as the original formulation: $$\big|\,\E_k[ \widehat{y}_i - \widehat{y} ]\,\big| \;\;\leqslant\;\; \sqrt{2}\, C\, \E_k\!\left[ \sqrt{1 - k_\theta^C(\mathbf{x}_i,\cdot)}\,\right] $$ by taking $C = B \max_j \left (1+\|\mathbf{x}_j\| \right)^2$, considering the actual diameter of the given dataset. \medskip \textbf{Case where the last layer is an activation function of a linear layer} The output of the network is of the form $$f_\theta(\mathbf{x}) = \sigma\left( \sum_i w_i a_i(\mathbf{x}) + b \right)\;, $$ and, as the derivative of $\sigma$ is assumed to be bounded, and as the weights $w_i$ are fixed, $f_\theta(\mathbf{x})$ is a Lipschitz function of the last layer activities $a_i(\mathbf{x})$. Therefore: \begin{align*} \left|f_\theta(\mathbf{x}) - f_\theta(\mathbf{x}')\right| & \leqslant \; K \, \| \mathbf{a}(\mathbf{x}) - \mathbf{a}(\mathbf{x}') \|_2 \; \end{align*} We will denote by $\alpha$ and $\alpha'$ the derivatives with respect to the shift $b$, which are this time: $$\left.
\alpha := \mathbf{v}_b := \frac{\partial f_\theta(\mathbf{x})}{\partial b} = \sigma'\right|_{\sum_i w_i a_i(\mathbf{x}) + b} \;\;\;\;\;\;\;\; \text{and} \;\;\;\;\;\;\;\; \left. \alpha' := \mathbf{v}'_b := \frac{\partial f_\theta(\mathbf{x}')}{\partial b} = \sigma'\right|_{\sum_i w_i a_i(\mathbf{x}') + b} \; .$$ We proceed as previously: $$ \| \mathbf{u} - \mathbf{u}' \|_2 \; \geqslant \; \left| u_b - u'_b \right| \; = \; \left| \frac{\alpha}{\|\mathbf{v}\|} - \frac{\alpha'}{\|\mathbf{v}'\|} \right| $$ which, combined with: $$ \left|a_i - a'_i\right| \;\;=\;\; \left|\frac{v_i}{\alpha} - \frac{v'_i}{\alpha'}\right| \;\;=\;\; \frac{\|\mathbf{v}'\|}{\alpha'}\; \left|\frac{\alpha'}{\alpha \|\mathbf{v}'\|} v_i - \frac{v'_i}{\|\mathbf{v}'\|}\right| \;\;=\;\; \frac{\|\mathbf{v}'\|}{\alpha'}\; \left| \, u_i - u'_i + \frac{v_i}{\alpha}\left( \frac{\alpha'}{\|\mathbf{v}'\|} - \frac{\alpha}{\|\mathbf{v}\|} \right) \, \right| $$ yields: $$ \left|a_i - a'_i\right| \;\; \leqslant \;\; \frac{\|\mathbf{v}'\|}{\alpha'} \,\Big( \left| \, u_i - u'_i \,\right| \,+\, \| \mathbf{u} - \mathbf{u}' \|_2 \,|a_i| \Big) \;\; \leqslant \;\; \frac{\|\mathbf{v}'\|}{\alpha'} \, (1+|a_i|) \, \| \mathbf{u} - \mathbf{u}' \|_2 $$ from which we finally obtain: $$ \left|f_\theta(\mathbf{x}) - f_\theta(\mathbf{x}')\right| \; \leqslant \; \left[ K \, \frac{\|\mathbf{v}'\|}{\alpha'} \, \sqrt{ \sum_i (1+|a_i|)^2 } \right] \, \| \mathbf{u} - \mathbf{u}' \|_2 \; . $$ Note that $\alpha'$ is actually a factor of each coefficient of $\mathbf{v}'$, as the derivative of $f_\theta(\mathbf{x}')$ with respect to any parameter is a chain rule starting with $\left.\frac{\partial f_\theta(\mathbf{x}')}{\partial b} = \sigma'\right|_{\sum_i w_i a_i(\mathbf{x}') + b} = \alpha'$. To bound the term between brackets, the same assumptions as previously are sufficient. One can assume that $\alpha$ and $\alpha'$ are not 0, as, if they are, the problem is of little interest ($\mathbf{u}$ or $\mathbf{u}'$ being then not defined). 
\subsection{Additional proof detail} The kernel $k_\theta^C(\mathbf{x},\mathbf{x}')$, by definition, is the $L^2$ inner product between two unit vectors: $$k_\theta^C(\mathbf{x},\mathbf{x}') = \frac{\nabla_{\!\theta} f_\theta(\mathbf{x})}{\|\nabla_{\!\theta} f_\theta(\mathbf{x})\|} \cdot \frac{\nabla_{\!\theta} f_\theta(\mathbf{x}')}{\|\nabla_{\!\theta} f_\theta(\mathbf{x}')\|} \, .$$ As, for any two unit vectors $a$ and $b$: $$\| a-b \|^2 \;=\; \|a\|^2 + \|b\|^2 - 2\, a\cdot b \;=\; 2 \,(1 - a \cdot b) \; ,$$ we get: $$ \left\| \frac{\nabla_{\!\theta} f_\theta(\mathbf{x})}{\|\nabla_{\!\theta} f_\theta(\mathbf{x})\|} - \frac{\nabla_{\!\theta} f_\theta(\mathbf{x}')}{\|\nabla_{\!\theta} f_\theta(\mathbf{x}')\|} \right\| = \sqrt{2} \sqrt{1 - k_\theta^C(\mathbf{x},\mathbf{x}')} \; .$$ \section{Motivation: Dataset self-denoising} \label{sec:denoising} In remote sensing imagery, data is abundant but noisy \cite{mnih2012learning}. For instance, RGB satellite images and binary cadaster maps (delineating buildings) are numerous but badly aligned for various reasons (annotation mistakes, atmosphere disturbance, elevation variations...). In a recent preliminary work~\cite{anonymous}, we tackled the task of automatically registering these two types of images together with neural networks, considering as ground truth a dataset of hand-picked relatively-well-aligned areas \cite{maggiori2017dataset}, and hoping the network would be able to learn from such a dataset of imperfect alignments. Learning with noisy labels is indeed an active topic of research \cite{sukhbaatar2014training,natarajan2013learning,li2017learning}. For this, we designed an iterative approach: train, then test on the training set and re-align it accordingly; repeat (for 3 iterations).
The results were surprisingly good, yielding far better alignments than the ground truth it learned from, both qualitatively (Figure~\ref{fig:qualitative_results}) and quantitatively (Figure~\ref{fig:accuracies}, obtained on manually-aligned data): the median registration error dropped from 18 pixels to 3.5 pixels, which is the best score one could hope for, given intrinsic ambiguities in such registration task. To check that this performance was not due to a subset of the training data that would be perfectly aligned, we added noise to the ground truth and re-trained from it: the new results were about as good again (dashed lines). Thus the network did learn almost perfectly just from noisy labels. \begin{figure} \begin{minipage}[c]{0.42\linewidth} \includegraphics[width=0.99\linewidth]{qualitative_results.jpg} \smallskip\vspace{1mm} \caption{Qualitative alignment results \cite{anonymous} on a crop of bloomington22 from the Inria dataset \cite{maggiori2017dataset}. \textcolor{red}{Red: initial dataset annotations}; \textcolor{blue}{blue: aligned annotations round 1}; \g{green: aligned annotations round 2}.} \label{fig:qualitative_results} \end{minipage} \hfill \begin{minipage}[c]{0.55\linewidth} \vspace{-3mm} \hspace{-2.5mm}\includegraphics[width=1.05\linewidth]{accuracies.png} \vspace{-4mm} \caption{Accuracy cumulative distributions \cite{anonymous} measured with the manually-aligned annotations of bloomington22 \cite{maggiori2017dataset}. Read as: fraction of image pixels whose registration error is less than threshold $\tau$.} \label{fig:accuracies} \end{minipage} \end{figure} An explanation for this self-denoising phenomenon is proposed in \cite{noise2noise} as follows. Let us consider a regression task, with a $L^2$ loss, and where true labels $y$ were altered with i.i.d.~noise $\varepsilon$ of variance $v$. Suppose a same input $\mathbf{x}$ appears $n$ times in the training set, thus with $n$ different labels $y_i = y + \varepsilon_i$. 
The network can only output the same prediction for all these $n$ cases (since the input is the same), and the best option, considering the $L^2$ loss, is to predict the average $\frac{1}{n} \sum_i y_i$, whose distance to the true label $y$ is $O(\sqrt{v/n})$. Thus a denoising effect by a factor $\sqrt{n}$ can be observed. However, the exact same point $\mathbf{x}$ is not likely to appear several times in a dataset (with different labels). Rather, relatively \emph{similar} points may appear, and the amplitude of the self-denoising effect will be a function of their number. Here, the similarity should reflect the neural network's perception (similar inputs yield the same output) and not an \emph{a priori} norm chosen on the input space. The purpose of this article is to express the notion of similarity from the network's point of view. We first define it, and study it mathematically, in Section~\ref{sec:sim}, in the one-dimensional output case for the sake of simplicity. Higher-dimensional outputs are dealt with in Section~\ref{sec:higher}. We then compute, in Section~\ref{sec:density}, the number of neighbors (\ie, of similar samples), and propose for this a very fast estimator. This brings new tools to analyze already-trained networks. As they are differentiable and fast to compute, they can be used during training as well, \eg, to enforce that given examples should be perceived as similar by the network (\cf Section~\ref{sec:enforce}). Finally, in Section~\ref{backtodenoise}, we apply the proposed tools to analyze a network trained with noisy labels for a remote sensing image alignment task, and formalize the self-denoising phenomenon, quantifying its effect, extending~\cite{noise2noise} to real datasets.
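The basic $\sqrt{n}$ averaging argument recalled above can be illustrated numerically (a toy sketch of ours, unrelated to the actual remote sensing data): when $n$ noisy labels share the same input, the $L^2$-optimal constant prediction is their average, whose error shrinks like $1/\sqrt{n}$.

```python
import numpy as np

rng = np.random.RandomState(0)
y_true, sigma, trials = 2.0, 1.0, 20000

def avg_label_error(n):
    # distance between the mean of n noisy labels and the true label,
    # averaged over many independent draws
    noisy = y_true + sigma * rng.randn(trials, n)
    return np.abs(noisy.mean(axis=1) - y_true).mean()

# Error for n = 1 vs n = 100: the ratio should be close to sqrt(100) = 10.
e1, e100 = avg_label_error(1), avg_label_error(100)
```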
\section{Similarity} \label{sec:sim} \subsection{Notions of similarities} The notion of similarity between data points is an important topic in the machine learning literature, obviously in domains such as image retrieval, where images similar to a query have to be found; but not only. For instance, when training auto-encoders, the quality of the reconstruction is usually quantified as the $L^2$ norm between the input and output images. Such a similarity measure is however questionable, as color comparison, performed pixel by pixel, is a poor estimate of human perception: the $L^2$ norm can vary a lot with transformations barely noticeable to the human eye such as small translations or rotations (for instance on textures), and does not carry semantic information, \ie whether the same kinds of objects are present in the image. Therefore, so-called \emph{perceptual losses} \cite{johnson2016perceptual} were introduced to quantify image similarity: each image is fed to a standard pre-trained network such as VGG, and the activations in a particular intermediate layer are used as descriptors of the image \cite{gatys2015texture,gatys2015neural}. The distance between two images is then set as the $L^2$ norm between these activations. Such a distance implicitly carries semantic information, as the VGG network was trained for image classification. However, the choice of the layer to consider is arbitrary. In the ideal case, one would wish to combine the information from all layers, as some are more abstract and some more detail-specific. But then the particular weights chosen to combine the different layers would also be arbitrary. Would it be possible to get a canonical similarity measure that is theoretically well-posed? More importantly, the previous literature does not consider the notion of input similarity from the point of view of the neural network that is being used, but from the point of view of another one (typically, VGG) which aims at imitating human perception.
A notable exception~\cite{koh2017understanding} transposes to machine learning the concept of influence functions in statistics~\cite{Hampel1974}. The differences with our definition of similarity might seem slight at first glance but they have important consequences: first, making use of the loss (and of its gradient and its Hessian) in the similarity measure has the issue that the expressed quantities are not intrinsic to the neural network but also depend on the optimization criterion used during training, which is problematic in the case of noisy labels as, at training convergence, the gradient of the loss with respect to the output points in random directions (remaining label noise that the network is not able to overfit). Second, the inverse of the Hessian appears in influence functions, while our definition makes use of gradients only. Another interesting related work~\cite{NNKernel} expresses neural networks as a kernel between test point and training points. Once again however the kernel definition relies on the training criterion. As a supplementary motivation for this study, neural networks are black boxes difficult to interpret, and showing which samples a network considers as similar would help to explain its decisions. Also, the number of such similar examples would be a key element for confidence estimation at test time. In this section we define a proper, intrinsic notion of similarity as seen by the network, relying on how easily it can distinguish different inputs. \subsection{Similarity from the point of view of the parameterized family of functions} Let $f_\theta$ be a parameterized function, typically a neural network already trained for some task, and $\mathbf{x}, \mathbf{x}'$ possible inputs, for instance from the training or test set. For the sake of simplicity, let us suppose in a first step that $f_\theta$ is real valued. 
To express the similarity between $\mathbf{x}$ and $\mathbf{x}'$, as seen by the network, one could compare the output values $f_\theta(\mathbf{x})$ and $f_\theta(\mathbf{x}')$. This is however not very informative, as the same output might be obtained for different reasons. Instead, we define similarity as the influence of $\mathbf{x}$ over $\mathbf{x}'$, by quantifying how much an additional training step for $\mathbf{x}$ would change the output for $\mathbf{x}'$ as well. If $\mathbf{x}$ and $\mathbf{x}'$ are very different from the point of view of the neural network, changing $f_\theta(\mathbf{x})$ will have little consequence on $f_\theta(\mathbf{x}')$. Vice versa, if they are very similar, changing $f_\theta(\mathbf{x})$ will greatly affect $f_\theta(\mathbf{x}')$ as well. \begin{figure}[ht] \hfill \begin{minipage}[c]{0.4\linewidth} \includegraphics[width=4.2cm]{figs/kernel2.pdf} \end{minipage} \begin{minipage}[r]{0.38\linewidth} \caption{\label{fig:kernel} Moves in the space of outputs. We quantify the influence of a data point $\mathbf{x}$ over another one $\mathbf{x}'$ by how much the tuning of parameters $\theta$ to obtain a desired output change $\mathbf{v}$ for $f_\theta(\mathbf{x})$ will affect $f_\theta(\mathbf{x}')$ as well.} \end{minipage} \end{figure} Formally, if one wants to change the value of $f_\theta(\mathbf{x})$ by a small quantity $\varepsilon$, one needs to update $\theta$ by $\delta\theta = \varepsilon\, \frac{ \nabla_{\!\theta} f_\theta(\mathbf{x}) }{ \| \nabla_{\!\theta} f_\theta(\mathbf{x}) \|^2} $.
Indeed, after the parameter update, the new value at $\mathbf{x}$ will be: $$f_{\theta + \delta \theta} (\mathbf{x}) \;\;=\;\; f_\theta(\mathbf{x}) + \nabla_{\!\theta} f_\theta(\mathbf{x}) \cdot \delta \theta + O(\|\delta\theta\|^2) \;\;=\;\; f_\theta(\mathbf{x}) + \varepsilon + O(\varepsilon^2).$$ This parameter change induces a value change at any other point $\mathbf{x}'$ : $$f_{\theta + \delta \theta} (\mathbf{x}') \;=\; f_\theta(\mathbf{x}') + \nabla_{\!\theta} f_\theta(\mathbf{x}') \cdot \delta \theta + O(\|\delta\theta\|^2) \;=\; f_\theta(\mathbf{x}') + \varepsilon \frac{ \nabla_{\!\theta} f_\theta(\mathbf{x}') \cdot \nabla_{\!\theta} f_\theta(\mathbf{x}) }{ \| \nabla_{\!\theta} f_\theta(\mathbf{x}) \|^2} + O(\varepsilon^2).$$ Therefore the kernel $\displaystyle k^N_\theta(\mathbf{x},\mathbf{x}') = \frac{ \nabla_{\!\theta} f_\theta(\mathbf{x}) \cdot \nabla_{\!\theta} f_\theta(\mathbf{x}') }{ \| \nabla_{\!\theta} f_\theta(\mathbf{x}) \|^2}$ represents the influence of $\mathbf{x}$ over $\mathbf{x}'$: if one wishes to change the output value $f_\theta(\mathbf{x})$ by $\varepsilon$, then $f_\theta(\mathbf{x}')$ will change by $\varepsilon\, k^N_\theta(\mathbf{x},\mathbf{x}')$. In particular, if $k^N_\theta(\mathbf{x},\mathbf{x}')$ is high, then $\mathbf{x}$ and $\mathbf{x}'$ are not distinguishable from the point of view of the network, as any attempt to move $f_\theta(\mathbf{x})$ will move $f_\theta(\mathbf{x}')$ as well (see Fig.~\ref{fig:kernel}). We thus see $k^N_\theta(\mathbf{x},\mathbf{x}')$ as a measure of similarity. Note however that $k^N_\theta(\mathbf{x},\mathbf{x}')$ is not symmetric. 
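As a minimal illustration of $k^N_\theta$, the sketch below computes parameter gradients by hand for a toy 2-2-1 tanh network (all weights and both inputs are arbitrary choices, not taken from the paper), and verifies with an explicit small parameter update that moving $f_\theta(\mathbf{x})$ by $\varepsilon$ moves $f_\theta(\mathbf{x}')$ by approximately $\varepsilon\, k^N_\theta(\mathbf{x},\mathbf{x}')$:

```python
import math

# Toy 2-2-1 tanh network; theta = [W1_00, W1_01, b1_0, W1_10, W1_11, b1_1, W2_0, W2_1, b2]
theta = [0.5, -0.3, 0.1, 0.8, 0.1, -0.2, 0.7, -0.4, 0.05]

def f(th, x):
    """f_theta(x) for the 2-2-1 network."""
    out = th[8]
    for j in range(2):
        out += th[6 + j] * math.tanh(th[3*j]*x[0] + th[3*j + 1]*x[1] + th[3*j + 2])
    return out

def grad(th, x):
    """Gradient of f_theta(x) with respect to all 9 parameters."""
    g = [0.0] * 9
    for j in range(2):
        h = math.tanh(th[3*j]*x[0] + th[3*j + 1]*x[1] + th[3*j + 2])
        s = th[6 + j] * (1.0 - h * h)              # d f / d b1[j]
        g[3*j], g[3*j + 1], g[3*j + 2] = s*x[0], s*x[1], s
        g[6 + j] = h                               # d f / d W2[j]
    g[8] = 1.0                                     # d f / d b2
    return g

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, xp = [1.0, 2.0], [0.9, 2.1]
gx, gxp = grad(theta, x), grad(theta, xp)
kN = dot(gx, gxp) / dot(gx, gx)                             # influence of x over x'
kC = dot(gx, gxp) / math.sqrt(dot(gx, gx) * dot(gxp, gxp))  # cosine, in [-1, 1]

# First-order check: delta_theta = eps * grad(x) / ||grad(x)||^2
# moves f(x) by ~eps and f(x') by ~eps * kN.
eps = 1e-4
new_theta = [t + eps * gi / dot(gx, gx) for t, gi in zip(theta, gx)]
df_x = f(new_theta, x) - f(theta, x)    # ~ eps
df_xp = f(new_theta, xp) - f(theta, xp)  # ~ eps * kN
print(kN, kC, df_x, df_xp)
```

For these two nearby inputs the cosine similarity of the gradients is close to $1$, and the finite update matches the first-order prediction to high accuracy.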
\newpage \paragraph{Symmetric similarity: correlation} Two symmetric kernels naturally arise: the inner product: \begin{equation} \label{eq:innerproductkern} k^I_\theta(\mathbf{x},\mathbf{x}') \;=\; \nabla_{\!\theta} f_\theta(\mathbf{x}) \cdot \nabla_{\!\theta} f_\theta(\mathbf{x}') \end{equation} and its normalized version, the correlation: \begin{equation} \label{eq:symkern} k^C_\theta(\mathbf{x},\mathbf{x}') = \frac{ \nabla_{\!\theta} f_\theta(\mathbf{x}) }{ \| \nabla_{\!\theta} f_\theta(\mathbf{x}) \|} \cdot \frac{ \nabla_{\!\theta} f_\theta(\mathbf{x}') }{ \| \nabla_{\!\theta} f_\theta(\mathbf{x}') \|} \end{equation} which has the advantage of being bounded (in $[-1,1]$), thus expressing similarity in the usual sense. \subsection{Properties for vanilla neural networks} Intuitively, inputs that are similar from the network perspective should produce similar outputs; we can check that $k^C_\theta$ is a good similarity measure in this respect (all proofs are deferred to the Appendix): \begin{theorem} \label{basicnet} For any real-valued neural network $f_\theta$ whose last layer is a linear layer (without any parameter sharing) or a standard activation function thereof (sigmoid, tanh, ReLU...), and for any inputs $\mathbf{x}$ and $\mathbf{x}'$, $$\nabla_{\!\theta} f_\theta(\mathbf{x}) = \nabla_{\!\theta} f_\theta(\mathbf{x}') \;\;\implies\;\;f_\theta(\mathbf{x}) = f_\theta(\mathbf{x}') \,.$$ \end{theorem} \begin{corollary} \label{alphasim} Under the same assumptions, for any inputs $\mathbf{x}$ and $\mathbf{x}'$, $$\begin{array}{crcl} & k^C_\theta(\mathbf{x}, \mathbf{x}') = 1 & \implies & \nabla_{\!\theta} f_\theta(\mathbf{x}) = \nabla_{\!\theta} f_\theta(\mathbf{x}') \,, \vspace{1mm} \\ \mathrm{hence} & k^C_\theta(\mathbf{x}, \mathbf{x}') = 1 & \implies & f_\theta(\mathbf{x}) = f_\theta(\mathbf{x}') \,.
\\ \end{array}$$ \end{corollary} Furthermore, \begin{theorem} \label{basicnet2} For any real-valued neural network $f_\theta$ without parameter sharing, if $\nabla_{\!\theta} f_\theta(\mathbf{x}) = \nabla_{\!\theta} f_\theta(\mathbf{x}')$ for two inputs $\mathbf{x}, \mathbf{x}'$, then all useful activities computed when processing $\mathbf{x}$ are equal to the ones obtained when processing $\mathbf{x}'$. \end{theorem} We name \emph{useful} activities all activities $a_i(\mathbf{x})$ whose variation would have an impact on the output, \ie all the ones satisfying $\frac{d f_\theta(\mathbf{x}) }{da_i} \neq 0$. This condition is typically not satisfied when the activity is negative and followed by a ReLU, or when it is multiplied by a 0 weight, or when all its contributions to the output cancel one another (\eg, a sum of two neurons with opposite weights: $f_\theta(\mathbf{x}) = \sigma( a_i(\mathbf{x}) ) - \sigma( a_i(\mathbf{x}) )$). \paragraph{Link with the \emph{perceptual loss}} For a vanilla network without parameter sharing, the gradient $\nabla_{\!\theta} f_\theta(\mathbf{x})$ is a list of coefficients $\nabla_{\!w_i^j} f_\theta(\mathbf{x}) = \frac{d f_\theta(\mathbf{x}) }{db_j}\, a_i(\mathbf{x})$, where $w_i^j$ is the parameter-factor that multiplies the input activation $a_i(\mathbf{x})$ in neuron $j$, and of coefficients $\nabla_{\!b_j} f_\theta(\mathbf{x}) = \frac{d f_\theta(\mathbf{x}) }{db_j}$ for neuron biases, which we will consider as standard parameters $b_j = w_0^j$ that act on a constant activation $a_0(\mathbf{x}) = 1$, yielding $\nabla_{\!w_0^j} f_\theta(\mathbf{x}) = \frac{d f_\theta(\mathbf{x}) }{db_j}\, a_0(\mathbf{x})$. Thus the gradient $\nabla_{\!\theta} f_\theta(\mathbf{x})$ can be seen as a list of all activation values $a_i(\mathbf{x})$ multiplied by the potential impact on the output $f_\theta(\mathbf{x})$ of the neurons $j$ using them, \ie $\frac{d f_\theta(\mathbf{x}) }{db_j}$. 
Each activation appears in this list as many times as it is fed to different neurons. The similarity between two inputs then rewrites: $$k^I_\theta(\mathbf{x},\mathbf{x}') = \!\!\!\sum_{\mathrm{activities\;} i}\!\!\! \lambda_i(\mathbf{x},\mathbf{x}')\; a_i(\mathbf{x}) \, a_i(\mathbf{x}') \;\;\;\;\;\; \text{where} \;\;\;\;\;\; \lambda_i(\mathbf{x},\mathbf{x}') = \!\!\!\!\!\!\sum_{\mathrm{neuron\;} j \mathrm{\;using\;} a_i} \frac{d f_\theta(\mathbf{x}) }{db_j} \frac{d f_\theta(\mathbf{x}') }{db_j}$$ are data-dependent importance weights. Such weighting schemes on activation units naturally arise when expressing intrinsic quantities; the use of natural gradients would bring invariance to re-parameterization \cite{RiemanNN_I, RiemanNN_II}. On the other hand, the inner product related to the perceptual loss would be $$\sum_{\mathrm{activities\;} i \neq 0}\;\lambda_{\mathrm{layer}(i)} \;a_i(\mathbf{x}) \, a_i(\mathbf{x}')$$ for some arbitrary fixed layer-dependent weights $\lambda_{\mathrm{layer}(i)}$. \subsection{Properties for parameter-sharing networks} When sharing weights, as in convolutional networks, the gradient $\nabla_{\!\theta} f_\theta(\mathbf{x})$ is made of the same coefficients (impact-weighted activations) but summed over shared parameters. Denoting by $\mathcal{S}(i)$ the set of (neuron, input activity) pairs where the parameter $w_i$ is involved, $$k^I_\theta(\mathbf{x},\mathbf{x}') \;\;=\;\; \sum_{\text{params}\;i} \left( \sum_{(j,k)\in\mathcal{S}_i} a_{k}(\mathbf{x}) \frac{d f_\theta(\mathbf{x}) }{db_{j}} \right) \left( \sum_{(j,k)\in\mathcal{S}_i} a_{k}(\mathbf{x}') \frac{d f_\theta(\mathbf{x}') }{db_{j}} \right)$$ Thus, in convolutional networks, $k^I_\theta$ similarity does not imply similarity of first layer activations anymore, but only of their (impact-weighted) spatial average. 
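For a network without parameter sharing, the decomposition of $k^I_\theta$ into impact-weighted products of activities, $\sum_i \lambda_i(\mathbf{x},\mathbf{x}')\, a_i(\mathbf{x})\, a_i(\mathbf{x}')$, can be checked numerically. The sketch below does so on a toy 2-2-1 tanh network (arbitrary weights and inputs, not taken from the paper), comparing the direct inner product of parameter gradients with the weighted-activity sum:

```python
import math

# Toy 2-2-1 tanh network (arbitrary weights, no parameter sharing).
W1 = [[0.5, -0.3], [0.8, 0.1]]; b1 = [0.1, -0.2]; W2 = [0.7, -0.4]; b2 = 0.05

def activities_and_impacts(x):
    """Hidden activations h_j and bias impacts s_j = df/db1_j (df/db2 = 1)."""
    h, s = [], []
    for j in range(2):
        hj = math.tanh(W1[j][0]*x[0] + W1[j][1]*x[1] + b1[j])
        h.append(hj)
        s.append(W2[j] * (1.0 - hj * hj))
    return h, s

def grad_inner(x, xp):
    """k^I as the direct inner product of parameter gradients."""
    h, s = activities_and_impacts(x)
    hp, sp = activities_and_impacts(xp)
    total = 0.0
    for j in range(2):
        total += s[j]*sp[j]*(x[0]*xp[0] + x[1]*xp[1])  # W1 row j
        total += s[j]*sp[j]                            # b1[j]
        total += h[j]*hp[j]                            # W2[j]
    return total + 1.0                                 # b2

def weighted_activities(x, xp):
    """k^I as sum_i lambda_i(x, x') a_i(x) a_i(x')."""
    h, s = activities_and_impacts(x)
    hp, sp = activities_and_impacts(xp)
    lam_in = sum(s[j]*sp[j] for j in range(2))   # lambda for each input activity x_i
    k = sum(x[i]*xp[i] for i in range(2)) * lam_in
    k += (lam_in + 1.0) * 1.0 * 1.0              # constant activity a_0 = 1 (biases)
    k += sum(h[j]*hp[j] for j in range(2))       # hidden activities, lambda = 1
    return k

x, xp = [1.0, 2.0], [0.9, 2.1]
kI_direct, kI_weighted = grad_inner(x, xp), weighted_activities(x, xp)
print(kI_direct, kI_weighted)  # identical up to floating-point rounding
```

The two expressions agree exactly, since without parameter sharing each gradient coefficient is precisely an activity multiplied by the corresponding neuron's bias impact.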
More generally, any invariance introduced by a weight sharing scheme in an architecture will be reflected in the similarity measure $k^I_\theta$, which is expected as $k^I_\theta$ was defined as the input similarity \emph{from the neural network perspective}. Note that this type of object was recently studied from an optimization viewpoint under the name of Neural Tangent Kernel~\cite{NTK,LazyTraining} in the infinite layer-width limit.
\section{Introduction} When we study the $Q \bar{Q}$ interaction, we should consider the effect of the medium on the motion of the $Q \bar{Q}$ pair, because the pair is not produced at rest in the quark-gluon plasma (QGP). The velocity of the pair through the plasma therefore affects its interactions and should be taken into account. The interaction energy acquires a finite imaginary part at finite temperature, which can be used to estimate the thermal width of the quarkonia \cite{nbma,ybm}. Calculations of Im$V_{Q \bar{Q}}$ relevant to QCD and heavy-ion collisions were performed for static $Q \bar{Q}$ pairs using pQCD \cite{mlop} and lattice QCD \cite{arth,gaca,gcs} before AdS/CFT.\\ AdS/CFT is a correspondence \cite{jmm,ssg,ew,oas} between a string theory in AdS space and a conformal field theory in physical space-time. It leads to an analytic semi-classical model for strongly coupled QCD, exhibiting scale invariance, dimensional counting at short distances and color confinement at large distances. This framework describes the phenomenology of hadronic properties and demonstrates the ability to incorporate such essential properties of QCD as confinement and chiral symmetry breaking. From the AdS/CFT point of view, $AdS_{5}$ plays an important role in describing QCD phenomena; thus, in order to describe a confining theory, the conformal invariance of $AdS_{5}$ must be broken somehow. Two AdS/QCD background strategies have been suggested in the literature: the hard-wall model \cite{jee,hrg,hr,eka,jp,ldr} and the soft-wall model \cite{ake,sjb,gfde,wdp,hfm,wde,bge,jni,hrgr,hfo,hjk,pcf,ave,aeg,aga,gfd,zab,tbt,avi}. In the hard-wall model, confinement and discrete normalizable modes are imposed by truncating the regime where string modes can propagate, introducing an IR cutoff in the fifth dimension at a finite value $z_{0}\sim\frac{1}{\Lambda_{QCD}}$.
Thus, the hard wall at $z_0$ breaks conformal invariance and allows the introduction of the QCD scale and a spectrum of particle states. However, hard-wall models have phenomenological problems, since the obtained spectra do not exhibit Regge behavior. To remedy this, it is necessary to introduce a soft cutoff, using a dilaton field or a warp factor in the metric \cite{jee,wde}; these models are called soft-wall models. The soft-wall and hard-wall approaches have been successfully applied to the description of the mass spectrum of mesons and baryons, the pion leptonic constant, the electromagnetic form factors of the pion and nucleons, etc. On the other hand, the study of moving heavy quarkonia in space-time with the AdS/QCD approach plays an important role in the interaction energy \cite{mst,msd,mmk,gac}. Different metric backgrounds lead to different effects on the interaction energy. \\Evaluating Im$V_{Q\bar{Q}}$ allows one to determine the suppression of ${Q\bar{Q}}$ in heavy-ion collisions \cite{sif}. The main idea is to use a boosted frame to obtain Re $V_{Q\bar{Q}}$ and Im $V_{Q\bar{Q}}$ \cite{fn} for a ${Q\bar{Q}}$ pair in a plasma.\\ From the viewpoint of holography, the AdS/CFT correspondence can describe a ``broken conformal symmetry'' when one adds a properly deformed warp factor in front of the $AdS_5$ metric structure \cite{jer,gfdt,jba,mkr,tsss,tsa,shm,akek,oan,fzu,gfdet,jpsh,kghm,kghn,ccm,ugek,uek,dfze,hjp,shem,dlis}. Here $e^{cz^2}$ is a positive quadratic correction in $z$, the fifth dimension.\\ One natural question concerns the connection between the warp factor and the potential $V_{Q\bar{Q}}$. In this work, the procedure of \cite{sif} is followed to evaluate the imaginary part of the potential for an AdS metric background with a deformation parameter in the warp factor.
It is interesting to see what happens when the meson is placed in a deformed AdS background.\\ We examine the effects of the deformation parameter on Re$V_{Q\bar{Q}}$ and Im$V_{Q\bar{Q}}$, which indicate whether the meson behaves ``usually'' or ``unusually'' compared with the $c=0$ case. As expected, in the limit $c\rightarrow0$ all results reduce to those of the $AdS_5$ case. This motivates us to study the effect of the deformation parameter in the $AdS$ metric background on the real and imaginary parts of the potential. The paper is organized as follows. In section 2, we discuss the case where the pair moves perpendicularly to the joining axis of the dipole in deformed AdS; we assume this metric background for the ${Q\bar{Q}}$ pair, derive relations for the real and imaginary parts of the potential, and present numerical results for different values of the deformation parameter. We then consider a general orientation of the ${Q\bar{Q}}$ pair in section 3, following the same procedure. Section 4 contains our conclusions and suggestions for future work.\\ \section{$ {Q \bar{Q}} $ in a deformed AdS, perpendicular case} In this section we consider a soft-wall metric background at finite temperature with a deformation parameter in the warp factor. We present general relations for the real and imaginary parts of the potential when the dipole moves with rapidity $\eta$ and is oriented perpendicularly to the wind \cite{sif}. \\ In our case we apply the general result to deformed AdS; the dual gravity metric reads \begin{equation} ds^2=e^{2A(z)} [-f(z) dt^2+\sum_{i=1}^{3} dx_i^2+\frac{1}{f(z)} dz^2], \end{equation} where $A (z)=-\ln \frac{z}{R}+\frac{1}{4}cz^2$ and $f(z)=1-(\frac{z}{z_{h}})^4$. As mentioned before, $c$ is the deformation parameter and $R$ is the AdS curvature radius; also $0 \leq z\leq z_{h}$, where $z_{h}=\frac{1}{\pi T}$ and $T$ is the temperature of the boundary field theory.
The background has a dynamical dilaton in the action, and we perform our calculations in the string frame. The dilaton enters the worldsheet action directly in the form $ \phi R$, so one may worry about the effect of a nontrivial dilaton profile on the string action. This term is, however, often neglected at first \cite{ugkm} and left for future study: one can check that the corresponding contribution to the action comes from worldsheets of higher genus, i.e., from string interactions at higher order in string perturbation theory. At leading order in the genus expansion we therefore need not consider this term, even though the geometry has a dynamical dilaton. One trace of the dynamical dilaton can nevertheless appear through the temperature if one computes it with the approach of \cite{GSH}, giving the exact temperature. We refer the reader to \cite{dlis} for the reasons why, in a deformed AdS model with a quadratic correction in the warp factor, the ``temperature'' takes the form of the AdS-Schwarzschild black-hole temperature. We thus have a deformed AdS background which, in the limit $c\rightarrow0$, reduces to $AdS_{5}$. Comparing the results helps us understand the effects of the deformation parameter on physical quantities such as the interaction energy.
Our calculations of $LT$, $Re V_{Q\bar{Q}} $ and $Im V_{Q\bar{Q}}$ allow us to compare results for different values of the deformation parameter.\\ From the metric background (2.1) one obtains: \begin{equation} G_{00}=\frac{R^2}{z^2}[1-(\frac{z}{z_{h}})^4]e^{\frac{cz^2}{2}} \end{equation} \begin{equation} G_{xx}=\frac{R^2}{z^2}e^{\frac{cz^2}{2}} \end{equation} \begin{equation} G_{zz}=\frac{R^2}{z^2}[1-(\frac{z}{z_{h}})^4]^{-1}e^{\frac{cz^2}{2}}, \end{equation} with the definitions \begin{eqnarray} \tilde{M}(z)\equiv M(z)\cosh ^2 \eta -N(z)\sinh ^2 \eta\ \end{eqnarray} \begin{eqnarray} \tilde{V}(z)\equiv V(z)\cosh ^2 \eta -P(z)\sinh ^2 \eta\ \end{eqnarray} \begin{equation} M(z)\equiv G_{00}G_{zz} \end{equation} \begin{equation} V(z)\equiv G_{00}G_{xx} \end{equation} \begin{equation} P(z)\equiv {G_{xx}}^2 \end{equation} \begin{equation} N(z)\equiv G_{xx}G_{zz}. \end{equation} We continue with the Hamiltonian, \begin{equation} H(z)\equiv\sqrt{\frac{\tilde{V}(z)}{\tilde{V}_{\ast}}\frac{\tilde{V} (z)-\tilde{V}_{\ast}}{\tilde{M}(z)}}, \end{equation} where $\tilde{V}_{\ast}$ means $\tilde{V}(z_{\ast})$ and $z_{\ast}$ is the deepest position of the string in the bulk.\\ The equation of motion and the boundary conditions of the string relate $L$ (the length of the line joining the two quarks) to $z_{\ast}$ as follows, \begin{equation} \frac{L}{2}=\int_{r_{\ast}}^{\Lambda}\frac{dr}{H(r)}. \end{equation} So, for the corresponding case we have,\\ \begin{equation} \frac{L}{2}=-\int_{0}^{z_{\ast}}\frac{dz}{H(z)}.
\end{equation} To relate $S_{str}$ and $z_{\ast}$, we compute the regularized integral \cite{fn}, \begin{eqnarray} S_{str}^{reg}&=& \frac{T}{\pi \alpha'} \int_{r_*}^{\infty} dr \,\left[\sqrt{\tilde M(r)} \sqrt{\frac{\tilde V(r)}{\tilde V(r_*)}} \left(\frac{\tilde V(r)}{\tilde V(r_*)}-1 \right)^{-1/2}-\sqrt{M_0(r)}\right]\nonumber\\ &-&\frac{T}{\pi \alpha'}\int_{r_{h}}^{r_*}dr\,\sqrt{M_0(r)}, \end{eqnarray} and we obtain the following results \begin{equation} LT=\frac{2}{\pi}y_{h}\sqrt{1-y_{h}^{4}\cosh ^2 \eta} \int_{1}^{\infty} \frac{dy}{\sqrt{(y^4-y_{h}^4)[e^{\frac{cy_{h}^2}{\pi^{2} T^{2}}(\frac{1}{y^2}-1)}(y^4-y_{h}^4\cosh ^2 \eta)-(1-y_{h}^{4}\cosh ^2 \eta)]}} \end{equation} where $y=\frac{z_{\ast}}{z}$ and $y_{h}=\frac{z_{\ast}}{z_{h}}$, \\ \begin{eqnarray} S_{str}^{reg}&=&T^2\frac{\sqrt{\lambda}}{y_{h}} \lbrace\int_{1}^{\infty}dy[\frac{e^{\frac{cy_{h}^2}{\pi^{2} T^{2}}(\frac{1}{y^2}-\frac{1}{2})}(y^4-y_{h}^4\cosh ^2 \eta)}{\sqrt{(y^4-y_{h}^4)[e^{\frac{cy_{h}^2}{\pi^{2} T^{2}}(\frac{1}{y^2}-1)}(y^4-y_{h}^4\cosh ^2 \eta)-(1-y_{h}^{4}\cosh ^2 \eta)]}}\nonumber\\&-&e^{\frac{cy_{h}^2}{\pi^{2} T^{2}y^2}}]-\int_{0}^{1} dy\quad e^{\frac{cy_{h}^2}{\pi^{2} T^{2}y^2}}\rbrace , \end{eqnarray} where $\lambda=\frac{R^4}{\alpha'^{2}}$ is the 't Hooft coupling of the gauge theory. Finally, we find the real part of the potential as\\ $Re V_{Q\bar{Q}}=\frac{S_{str}^{reg}}{T}$.\\ Now we outline the derivation of the imaginary part of the potential following \cite{fn}; the reader can find more details in that reference. One should consider the effect of worldsheet fluctuations around the classical configuration $z_c(x)$, \begin{equation} z(x) = z_c(x) \rightarrow z(x) = z_c(x) + \delta z (x).
\end{equation} Taking the fluctuations into account in the partition function, one arrives at \begin{equation} Z_{str} \sim \int \mathcal{D} \delta z(x) e^{i S_{NG} (z_c(x) + \delta z (x))}. \end{equation} An imaginary part of the potential can then arise from the action. Dividing the interval of $x$ into $2N$ points, with the limit $N\longrightarrow\infty$ taken at the end of the calculation, we arrive at \begin{equation} Z_{str} \sim \lim_{N\to \infty}\int d [\delta z(x_{-N})] \ldots d[ \delta z(x_{N})] \exp{\left[ i \frac{\mathcal{T} \Delta x}{2 \pi \alpha'} \sum_j \sqrt{M(z_j) (z'_j)^2 + V(z_j)}\right]}. \end{equation} Notice that we should expand $z_c(x_j)$ around $x=0$ and keep terms only up to second order, because thermal fluctuations are important around $z_\ast$, i.e., around $x=0$, \begin{equation} z_c(x_j) \approx z_\ast + \frac{x_j^2}{2} z_c''(0). \end{equation} Considering small fluctuations, we finally have \begin{equation} V(z_j) \approx V_* + \delta z V'_* + z_c''(0) V'_* \frac{x_j^2}{2} + \frac{\delta z^2}{2} V''_*, \end{equation} where $V_\ast\equiv V(z_\ast)$ and $V'_\ast\equiv V'(z_\ast)$. With (2.20), (2.21) and (2.19) one can derive (2.22), (2.23) and (2.24), \begin{equation} S^{NG}_j = \frac{\mathcal{T} \Delta x}{2 \pi \alpha'} \sqrt{C_1 x_j^2 + C_2} \end{equation} \begin{equation} C_1 = \frac{z_c''(0)}{2} \left[ 2 M_* z_c''(0) + V_*' \right] \end{equation} \begin{equation} C_2 = V_* + \delta z V'_* + \frac{\delta z^2}{2} V''_*. \end{equation} For $ Im V_{Q\bar{Q}}$ to be nonzero, the function under the square root of (2.22) should be negative.
We then consider the $j$-th contribution to $Z_{str}$, \begin{equation} I_j \equiv \int\limits_{\delta z_{j min}}^{\delta z_{j max}} d(\delta z_j) \, \exp{\left[ i \frac{\mathcal{T} \Delta x}{2 \pi \alpha'} \sqrt{C_1 x_j^2 + C_2} \right]}. \end{equation} For every $\delta z $ between the minimum and maximum of its values, which are the roots of $C_1 x_j^2 + C_2$ in $\delta z $, one has $C_1 x_j^2 + C_2 <0$. The extremal value of the function \begin{equation} D(\delta z_j) \equiv C_1 x_j^2 + C_2(\delta z_j) \end{equation} is attained at \begin{equation} \delta z = - \frac{V'_*}{V''_*}. \end{equation} So, $ D(\delta z_j)<0 \longrightarrow -x_c<x_j<x_c$ leads us to an imaginary part of the square root, where \begin{equation} x_c = \sqrt{\frac{1}{C_1}\left[\frac{V'^2_*}{2V''_*} - V_* \right]}. \end{equation} \begin{figure} \centerline{\includegraphics[width=12cm]{LT-Yh,eta00}} \caption{$LT$ as a function of $y_{h}$ at $\eta=0$. The $Q\bar{Q}$ pair is oriented perpendicularly to the hot wind and different values of the deformation parameter are shown. The solid blue curve corresponds to $\frac{c}{T^2}=50$, the dotted green curve to $\frac{c}{T^2}=25$ and the dashed red curve to $\frac{c}{T^2}=0$.} \end{figure} \begin{figure} \centerline{\includegraphics[width=12cm]{LT-Yh,eta8}} \caption{$LT$ as a function of $y_{h}$ at $\eta=0.8$. The $Q\bar{Q}$ pair is oriented perpendicularly to the hot wind and different values of the deformation parameter are shown. The solid blue curve corresponds to $\frac{c}{T^2}=50$, the dotted green curve to $\frac{c}{T^2}=25$ and the dashed red curve to $\frac{c}{T^2}=0$.} \end{figure} \begin{figure} \centerline{\includegraphics[width=12cm]{LTMAX11}} \caption{$LT_{max}$ as a function of $\eta$. The $Q\bar{Q}$ pair is oriented perpendicularly to the hot wind and different values of the deformation parameter are shown.
The dashed blue curve corresponds to $\frac{c}{T^2}=50$ and the solid red curve to $\frac{c}{T^2}=25$.} \end{figure} \begin{figure} \centerline{\includegraphics[width=12cm]{ReV1}} \caption{$ReV_{Q\bar{Q}}$ as a function of $LT$ at a fixed rapidity $\eta=0.8$. The pair is oriented perpendicularly to the hot wind and the deformation parameter is zero.} \end{figure} \begin{figure} \centerline{\includegraphics[width=12cm]{ReV2}} \caption{$ReV_{Q\bar{Q}}$ as a function of $LT$ at a fixed rapidity $\eta=0.8$. The pair is oriented perpendicularly to the hot wind and the scaled deformation parameter is 25.} \end{figure} \begin{figure} \centerline{\includegraphics[width=12cm]{ReV3}} \caption{$ReV_{Q\bar{Q}}$ as a function of $LT$ at a fixed rapidity $\eta=0.8$. The pair is oriented perpendicularly to the hot wind and the scaled deformation parameter is 50.} \end{figure} \begin{figure} \centerline{\includegraphics[width=12cm]{IMV}} \caption{Imaginary part of the potential as a function of $LT$, at a fixed rapidity $\eta=0.4$. The pair is oriented perpendicularly to the hot wind and different values of the scaled deformation parameter are shown. The solid blue curve corresponds to $\frac{c}{T^2}=0$, the dashed red curve to $\frac{c}{T^2}=25$ and the dotted green curve to $\frac{c}{T^2}=50$.} \end{figure} If the square root in (2.28) is not real, we should take $ x_c=0$. Under these conditions we can approximate $D(\delta z) $ by $ D(-\frac{V'_{\ast}}{V''_{\ast}})$ in $ I_j$, \begin{equation} I_j \sim \exp \left[ i \frac{\mathcal{T} \Delta x}{2 \pi \alpha'} \sqrt{C_1 x_j^2 + V_* - \frac{V'^2_*}{2V''_*}} \right]. \end{equation} The total contribution to the imaginary part is obtained in the continuum limit. So, \begin{equation} \mathrm{Im} \, V_{Q\bar{Q}} = -\frac{1}{2\pi \alpha'} \int\limits_{|x|<x_c} dx \sqrt{-x^2 C_1 - V_* + \frac{V'^2_*}{2V''_*}}\,.
\end{equation} Finally, after evaluating the integral, one arrives at the expression for the imaginary part of the potential, \begin{equation} \mathrm{Im} \, V_{Q\bar{Q}} = -\frac{1}{2 \sqrt{2} \alpha'} \sqrt{M_*} \left[\frac{V'_*}{2V''_*}-\frac{V_*}{V'_*} \right]. \end{equation} Now we are ready to calculate the imaginary part of the potential in the case $\tilde{M}_{\ast} >0$; according to (2.31) and with our deformed AdS metric (2.1) we have the following relation, \begin{eqnarray} \frac{Im V_{Q\bar{Q}}}{\sqrt{\lambda}}&=&-\frac{\pi T}{4\sqrt{2} y_{h}} e^{\frac{cy_{h}^2}{2\pi ^2 T^2}} \sqrt{\frac{1-y_{h}^4\cosh ^2 \eta}{1-y_{h}^4}}\nonumber\\ &\times &[\frac{\frac{2cy_{h}^2}{\pi ^2 T^2}(1-y_{h}^4\cosh ^2 \eta)+4}{\frac{cy_{h}^2}{\pi ^2 T^2}(2-10y_{h}^4\cosh ^2 \eta) +12}-\frac{(1-y_{h}^4\cosh ^2 \eta)}{2+\frac{cy_{h}^2}{2\pi ^2 T^2}(1-y_{h}^4\cosh ^2 \eta)}]. \end{eqnarray} In Figs.~1 and 2 we show the behavior of $ LT$ as a function of $ y_h$ for different values of the deformation parameter in this perpendicular case. The maximum of $LT(y_h)$, which is indicative of the limit of validity of the classical gravity calculation, increases with increasing deformation parameter.\\ On the contrary, increasing the rapidity reduces $LT_{max}$, as shown in Fig.~3. Furthermore, increasing the deformation parameter increases $LT_{max}$, which has been used to define a dissociation length for the moving $Q\bar{Q}$ pair.\\ Figs.~4, 5 and 6 show the behavior of $ ReV_{Q\bar{Q}} $ as a function of $LT$. As is known, for $c=0$ the pair does not feel the moving plasma at short distances, and the upper branch corresponds to a saddle point of the string action. In contrast, when $c$ is nonzero the pair feels the moving plasma at all distances.
In addition, we see that with increasing deformation parameter the real part of the potential increases; the unphysical curve corresponds to $ q < q_{max}$, while the lower branch, corresponding to $ q > q_{max}$, gives the dominant contribution to the action.\\ The imaginary part of the potential is related to the dissociation properties of heavy quarkonia. In Fig.~7 our results indicate that the thermal width of the pair increases with increasing deformation parameter at fixed rapidity. \section{${Q \bar{Q}}$ in a deformed AdS at arbitrary angles} In this section we extend our calculations to arbitrary angles, i.e., the dipole may be oriented at any angle with respect to the velocity vector. As before, we extract the real and imaginary parts of the potential with the method of \cite{sif}; $ \theta$ is the angle of the dipole with respect to $ X_{d-1}$, and the dipole lies in the $ (X_1,X_{d-1})$ plane. The boundary conditions are \begin{align} z\left(\pm \frac{L}{2} \sin \theta \right) & = 0 \nonumber \\ X_d \left(\pm \frac{L}{2} \sin \theta \right) & = \pm \frac{L}{2} \cos \theta \end{align} and the action is \begin{equation} S_{str} = -\frac{\mathcal{T}}{2\pi \alpha'} \int d \sigma \mathcal{L}, \end{equation} where the Lagrangian is defined as \begin{equation} \mathcal{L} \equiv \sqrt{\left[ M(z) \cosh^2 \eta - N(z) \sinh^2 \eta \right] z'(\sigma)^2 + V(z) X_d'(\sigma)^2 + \left[ V(z) \cosh^2 \eta - P(z) \sinh^2 \eta \right] }.
\end{equation} There are two constants of motion, \begin{equation} \mathcal{H} \equiv Q \equiv \mathcal{L} - \frac{dz}{d\sigma} \frac{\partial \mathcal{L}}{\partial z'} - \frac{dX_d}{d\sigma} \frac{\partial \mathcal{L}}{\partial X_d'} \end{equation} \begin{equation} K \equiv \frac{\partial \mathcal{L}}{\partial X_d'}. \end{equation} With (3.3), (3.4) and (3.5), after some algebra one arrives at \begin{align} Q^2 \left[ M(z) \cosh^2 \eta - N(z) \sinh^2 \eta \right] z'(\sigma)^2 + Q^2 V(z) X_{d-1}'(\sigma)^2 + \nonumber \\ + \left[ V(z) \cosh^2 \eta - P(z) \sinh^2 \eta \right] \left\{Q^2- \left[ V(z) \cosh^2 \eta - P(z) \sinh^2 \eta \right] \right\} = 0 \end{align} \begin{align} K^2 \left[ M(z) \cosh^2 \eta - N(z) \sinh^2 \eta \right] z'(\sigma)^2 + V(z) (K^2-V(z)) \, X_{d-1}'^2(\sigma) + \nonumber \\ + K^2 \left[ V(z) \cosh^2 \eta - P(z) \sinh^2 \eta \right]=0\,. \end{align} Inserting ${ X'_d}^2$ from (3.6) into (3.7) and performing some manipulations, the result is \begin{align} Q^2 V(z) \left[ M(z) \cosh^2 \eta - N(z) \sinh^2 \eta \right] z'(\sigma)^2 = \nonumber \\ = (V(z)-K^2) \left[ V(z) \cosh^2 \eta - P(z) \sinh^2 \eta \right]^2 - V(z) \left[ V(z) \cosh^2 \eta - P(z) \sinh^2 \eta \right] Q^2 , \end{align} and \begin{equation} \label{eq:eqmotionBfin} Q^2 V^2 (X_{d-1}')^2 = K^2 \left[ V(z) \cosh^2 \eta - P(z) \sinh^2 \eta \right]^2. \end{equation} It is clear that we must have $z(\sigma=0)=z_\ast$, $z'(\sigma=0)=0$ and $ X_d(\sigma=0)=0$, so \begin{equation} \label{eq:relationUc} (V_\ast -K^2) (V_\ast \cosh^2 \eta - P_\ast \sinh^2 \eta) - V_\ast Q^2 = 0. \end{equation} Using the boundary conditions (3.1) and the equations of motion (3.8) and (3.9), we arrive at the two relations \begin{align} \frac{L}{2} \sin \theta = &- Q \int_{0}^{z_\ast}\, dz \left\{ \frac{V(z)}{V(z) \cosh^2 \eta - P(z) \sinh^2 \eta} \right. \times \nonumber \\ & \times \left.
\frac{ M(z) \cosh^2 \eta - N(z) \sinh^2 \eta}{\left[(V(z)-K^2)\left[V(z) \cosh^2 \eta - P(z) \sinh^2 \eta\right] - V(z) Q^2 \right]} \right\}^{-1/2}, \end{align} \begin{align} \frac{L}{2} \cos \theta =- K \int_{0}^{z_\ast}\, dz \, \sqrt{\frac{\left[M(z) \cosh^2 \eta - N(z) \sinh^2 \eta \right] \left[V(z) \cosh^2 \eta - P(z) \sinh^2 \eta\right]}{V(z)\left\{(V(z)-K^2)\left[V(z) \cosh^2 \eta - P(z) \sinh^2 \eta\right] - V(z) Q^2 \right\}}}. \end{align} Finally, the action is \begin{equation} S = -\frac{\mathcal{T}}{\pi \alpha'} \int_{0}^{z_\ast} \, dz \, \sqrt{\frac{ V(z) \left[M(z) \cosh^2 \eta - N(z) \sinh^2 \eta \right] \left[V(z) \cosh^2 \eta - P(z) \sinh^2 \eta\right]}{ \left\{(V(z)-K^2)\left[V(z) \cosh^2 \eta - P(z) \sinh^2 \eta\right] - V(z) Q^2 \right\}}}\,. \end{equation} After regularizing it we have \begin{align} S_{reg} & =- \frac{\mathcal{T}}{\pi \alpha'} \int_{0}^{z_\ast} \, dz \, \left\{ \sqrt{\frac{ V(z) \left[M(z) \cosh^2 \eta - N(z) \sinh^2 \eta \right] \left[V(z) \cosh^2 \eta - P(z) \sinh^2 \eta\right]}{ \left\{(V(z)-K^2)\left[V(z) \cosh^2 \eta - P(z) \sinh^2 \eta\right] - V(z) Q^2 \right\}}} \right. \nonumber \\ & \left. - \sqrt{M_0 (z)} \right\} -\frac{\mathcal{T}}{\pi \alpha'} \int_{z_h}^{z_\ast} dz \sqrt{M_0 (z)}. \end{align} As before, in the absence of the black brane (i.e., at $T=0$), $ M(z)$ reduces to $M_0$, and $ReV_{Q\bar{Q}}=\frac{S_{str}^{reg}}{T}$. For the imaginary part we have two degrees of freedom, $ z(\sigma)$ and $ X_{d-1}(\sigma)$.
The string partition function is \begin{equation} \label{eq:thermalflucpartang} Z_{str} \sim \int D(\delta z) \, D(\delta X_{d-1}) e^{i S_{str} (\bar{z}+\delta z, \bar{X}_{d-1} + \delta X_{d-1})}, \end{equation} where the fluctuations $\delta z(\sigma)$ and $ \delta X_{d-1}(\sigma)$ are considered with $ \frac{\partial z}{\partial\sigma}\longrightarrow 0$ and $ \frac{\partial X_{d-1}}{\partial\sigma}\longrightarrow 0$.\\ As before, with the action (3.2) and partitioning the interval into $ 2N$ subintervals, we arrive at \begin{equation} Z_{str} \sim \left( \int_{-\infty}^{\infty} d(\delta z_{-N}) \, d(\delta X_{{d-1},-N}) \right) \cdots \left( \int_{-\infty}^{\infty} d(\delta z_{N}) \, d(\delta X_{{d-1},N}) \right) e^{i \frac{\mathcal{T} \Delta x}{2\pi \alpha'} \mathcal{L}_j}, \end{equation} and \begin{equation} \mathcal{L}_j = \sqrt{\tilde{M}(z(x_j)) (z'(x_j))^2 + V(z(x_j)) (X_{d-1}'(x_j))^2 + \tilde{V} (x_j)}. \end{equation} We expand the classical solution $ \bar{z}(\sigma)$ around $\sigma=0$ to quadratic order in $\sigma$. If the string did not sag, we would have $ X_{d-1}(\sigma)=\frac{\sigma}{\tan \tilde {\theta}}$, so around $\sigma=0$ we write \begin{equation} X_{d-1}(\sigma) = \frac{\sigma}{\tan \tilde{\theta}} + b \sigma^3 + O (\sigma^5), \end{equation} where $\tilde{\theta}$ is equal to $\theta$ and $b$ is a constant. Because of the symmetry of the problem under reflections with respect to the origin of the $(X_1,X_d)$ plane, $X_d(\sigma)$ must be an odd function of $ \sigma$, so \begin{equation} X_{d-1}'(\sigma)^2 = \frac{1}{\tan^2 \tilde{\theta}} + \frac{6 b}{\tan \tilde{\theta}} \sigma^2.
\end{equation} Inserting (3.19) into (3.17), one arrives at \begin{equation} \mathcal{L}_j = \sqrt{\tilde{C}_1 x_j^2 + \tilde{C}_2}, \end{equation} with the definitions \begin{equation} \tilde{C}_1 \equiv \tilde{M}_\ast\bar{z}''(0)^2+ \frac{1}{2} \left(\frac{V'_\ast}{\tan^2 \tilde{\theta}}+\tilde{V}'_\ast\right)\bar{z}''(0)+\frac{6 b}{\tan \tilde{\theta}} V_\ast \end{equation} \begin{equation} \tilde{C}_2 \equiv \left(\frac{V_\ast}{\tan^2 \tilde{\theta}}+\tilde{V}_\ast\right) + \left(\frac{V'_\ast}{\tan^2 \tilde{\theta}}+\tilde{V'}_\ast\right) \delta z + \left(\frac{V''_\ast}{\tan^2 \tilde{\theta}}+\tilde{V''}_\ast\right) \frac{(\delta z)^2}{2}. \end{equation} As in the previous section, after some algebraic manipulation the explicit analytical expression for $\mathrm{Im}\,V_{Q\bar{Q}}$ is \begin{equation} \label{eq:ImFQQang} \mathrm{Im}\,V_{Q\bar{Q}} = -\frac{1}{4\alpha'}\frac{1}{\sqrt{\tilde{C}_1}} \left[ \frac{\left(\frac{V_\ast'}{\tan^2\tilde{\theta}}+ \tilde{V}_\ast'\right)^2}{2 \left(\frac{V_\ast''}{\tan^2\tilde{\theta}}+ \tilde{V}_\ast''\right)} - \left(\frac{V_\ast}{\tan^2\tilde{\theta}}+ \tilde{V}_\ast\right) \right]\,. \end{equation} We emphasize again that all of the above derivations of the imaginary and real parts of the potential appear in the references mentioned before; we reproduce them here for the reader's convenience. We can now come back to our main case and follow it with the metric (2.1).
Using (3.8) and (3.9) we obtain \begin{equation} q^2 \left(\frac{dy}{d\tilde{\sigma}}\right)^2 = (y^4 - \cosh^2\eta)((e^{\frac{c}{\pi^{2}T^2y^2}})(y^4-1)-p^2) - q^2(y^4-1) \quad \quad \mathrm{and} \end{equation} \begin{equation} \left(\frac{d\chi}{d\tilde{\sigma}}\right)^2 = \frac{p^2}{q^2} \left(\frac{y^4 - \cosh^2\eta}{y^4-1} \right)^2, \end{equation} where we defined the dimensionless variables $y\equiv z_h/z$, $\chi\equiv X_d /z_h$ and $\tilde{\sigma} \equiv \sigma /z_h$ as well as the dimensionless integration constants $q^2 \equiv Q^2z_h^4/R^4$ and $p^2 \equiv K^2z_h^4/R^4$. The boundary conditions become \begin{align} y \left(\pm \pi \frac{LT}{2} \sin \theta \right) & =0 \nonumber \\ \chi \left(\pm \pi \frac{LT}{2} \sin \theta \right) & = \pm \pi \frac{LT}{2} \cos \theta. \end{align} Then (3.11), (3.12) and (3.10) lead to (3.27), (3.28) and (3.29): \begin{equation} \frac{LT}{2} \pi \sin \theta = q \int_{y_\ast}^{\tilde{\Lambda}} \, \frac{dy}{\sqrt{((e^{\frac{c}{\pi^{2}T^2y^2}})(y^4-1)-p^2)(y^4-\cosh^2\eta)-q^2(y^4-1)}} \quad \quad \mathrm{and} \end{equation} \begin{equation} \frac{LT}{2} \pi \cos \theta = p \int_{y_\ast}^{\tilde{\Lambda}} \, dy \, \frac{y^4-\cosh^2\eta}{y^4-1} \frac{1}{\sqrt{((e^{\frac{c}{\pi^{2}T^2y^2}})(y^4-1)-p^2)(y^4-\cosh^2\eta)-q^2(y^4-1)}}\,. \end{equation} \begin{equation} ((e^{\frac{c}{\pi^{2}T^2y_\ast ^2}})(y_\ast ^4-1)-p^2)(y_\ast ^4-\cosh^2\eta)-q^2(y_\ast ^4-1) = 0\,. \end{equation} Therefore the real part of the potential is \begin{eqnarray} \frac{\mathrm{Re} \, V_{Q\bar{Q}}}{T\sqrt{\lambda}}& =& \int_{y_\ast}^{\infty} dy \left[ \frac{e^{\frac{c}{\pi^{2}T^2y^2}}(y^4-\cosh^2\eta)}{\sqrt{(y^4-\cosh^2\eta)(e^{\frac{c}{\pi^{2}T^2y^2}}(y^4-1)-p^2)-q^2(y^4-1)}}-e^{\frac{c}{2\pi^{2}T^2y^2}}\right]\nonumber\\ &-&\int_0^{y_\ast} dy e^{\frac{c}{2\pi^{2}T^2y^2}}\,.
\end{eqnarray} From (3.23) we arrive at the imaginary part of the potential, \begin{eqnarray} &\frac{\mathrm{Im} \, V_{Q\bar{Q}}}{T\sqrt{\lambda}}&=-\frac{\pi}{4} e^{\frac{c}{2\pi^{2}T^2y_{\ast}^2}}\nonumber\\ &\times&\frac{{\frac{[\frac{2cy_{\ast}}{\pi T}-4\pi Ty_{\ast}^3-\frac{2c}{\pi Ty_{\ast}^3}( \cos^2 \tilde{\theta} + \cosh^2 \eta \sin^2 \tilde{\theta})]^2}{2[20y_{\ast}^2\pi^2 T^2-14c-\frac{4c^2}{\pi ^2T^2y_{\ast}^2}-(\frac{4c^2}{\pi ^2T^2y_{\ast}^6}+\frac{2c}{y_{\ast}^4})( \cos^2 \tilde{\theta} + \cosh^2 \eta \sin^2 \tilde{\theta})]}}-(y_{\ast}^4-( \cos^2 \tilde{\theta} + \cosh^2 \eta \sin^2 \tilde{\theta}))} {\sqrt{y''(0)^2 (\frac{y_{\ast}^4-\cosh^2 \eta}{y_\ast^4-1})+\frac { [\frac{2cy_{\ast}}{\pi^2 T^2} -4y_{\ast} ^3-\frac{2c} {y_{\ast} ^3\pi^2 T^2} ( \cos^2 \tilde{\theta} + \cosh^2 \eta \sin^2 \tilde{\theta})]} {2\sin \tilde{\theta}}y''(0)+ \frac{6 \tilde{b}} {\tan \tilde{\theta}} (y_\ast ^4-1)}}.\nonumber\\ \end{eqnarray} \begin{figure} \centerline{\includegraphics[width=12cm]{LTq}} \caption{$LT$ as a function of $q$ at $\eta=1$ and $\theta=\frac{\pi}{3}$. Different values of the deformation parameter are shown.
The solid green curve corresponds to $\frac{c}{T^2}=0$, the dotted red curve to $\frac{c}{T^2}=25$ and the dashed blue curve to $\frac{c}{T^2}=50$.} \end{figure} \begin{figure} \centerline{\includegraphics[width=12cm]{ReV10}} \caption{$ReV_{Q\bar{Q}}$ as a function of $LT$ at $\eta=1$ and $\theta=\frac{\pi}{3}$; the deformation parameter is zero.} \end{figure} \begin{figure} \centerline{\includegraphics[width=12cm]{ReV20}} \caption{$ReV_{Q\bar{Q}}$ as a function of $LT$ at $\eta=1$ and $\theta=\frac{\pi}{3}$; the scaled deformation parameter is $\frac{c}{T^2}=25$.} \end{figure} \begin{figure} \centerline{\includegraphics[width=12cm]{ReV30}} \caption{$ReV_{Q\bar{Q}}$ as a function of $LT$ at $\eta=1$ and $\theta=\frac{\pi}{3}$; the scaled deformation parameter is $\frac{c}{T^2}=50$.} \end{figure} \begin{figure} \centerline{\includegraphics[width=12cm]{IM}} \caption{Imaginary part of the potential as a function of $LT$, at fixed rapidity $\eta=1$ and $\theta=\frac{\pi}{3}$. Different values of the scaled deformation parameter are shown. The solid green curve corresponds to $\frac{c}{T^2}=0$, the dotted red curve to $\frac{c}{T^2}=25$ and the dashed blue curve to $\frac{c}{T^2}=50$.} \end{figure} We proceed by solving (3.29) numerically to obtain $ y_{\ast}$ as a function of $q$ and $p$; then (3.27), (3.28), (3.30) and (3.31) become functions of $p$ and $q$. To find $p$ as a function of $q$, one can solve (3.27) and (3.28) for fixed $\theta$; once this is done, $LT$ is known as a function of $q$. Before calculating $Im V_{Q\bar{Q}}$ we must obtain $\tilde{\theta} $, $ y''(0)$ and $ \tilde{b}$. The angle $\tilde{\theta} $ is obtained from (3.25) at $\tilde{\sigma}=0 $ and $ y=y_{\ast}$, while $ y''(0)$ and $ \tilde{b}$ are evaluated by solving (3.24) and (3.25) with the boundary conditions (3.26). After carrying out these calculations numerically, with $ y''(0)$ and $ \tilde{b}$ known, we can calculate $Im V_{Q\bar{Q}}$ as a function of $q$.
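The first steps of this procedure can be sketched in a few lines of code. The following is a minimal numerical illustration in the dimensionless variables of (3.24)-(3.29): it finds the turning point $y_{\ast}$ from (3.29) and evaluates the integrals (3.27) and (3.28) for given $(q,p)$. The parameter values, function names, the root bracket and the simple treatment of the endpoint singularity are our own choices; the deformation parameter enters only through the scaled combination $\hat{c}=c/T^2$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def denominator(y, q, p, eta, chat):
    """Expression under the square root in (3.27)-(3.28); chat = c/T^2."""
    w = np.exp(chat / (np.pi**2 * y**2))
    return (w*(y**4 - 1.0) - p**2)*(y**4 - np.cosh(eta)**2) - q**2*(y**4 - 1.0)

def turning_point(q, p, eta, chat):
    """Root y_* of (3.29); we bracket from y^4 = cosh^2(eta), where the
    expression is negative, and assume a single root above that point."""
    lo = np.sqrt(np.cosh(eta)) + 1e-9
    return brentq(lambda y: denominator(y, q, p, eta, chat), lo, 50.0)

def lt_sin_cos(q, p, eta, chat):
    """Return (LT sin(theta), LT cos(theta)) from (3.27) and (3.28)."""
    ys = turning_point(q, p, eta, chat)
    ch2 = np.cosh(eta)**2

    def g1(y):
        return 1.0/np.sqrt(denominator(y, q, p, eta, chat))

    def g2(y):
        return (y**4 - ch2)/(y**4 - 1.0)*g1(y)

    def integral(g):
        # substitution y = y_* + u^2 tames the 1/sqrt endpoint singularity
        near = quad(lambda u: 2.0*u*g(ys + u**2), 0.0, 1.0)[0]
        far = quad(g, ys + 1.0, np.inf)[0]
        return near + far

    return (2.0/np.pi)*q*integral(g1), (2.0/np.pi)*p*integral(g2)
```

For instance, $q=p=0.5$, $\eta=1$ and $\hat{c}=0$ give a turning point slightly above $\sqrt{\cosh\eta}$ and a dipole angle between $0$ and $\pi/2$; scanning $q$ at fixed $\theta$ (a root-find in $p$) then produces curves of the type shown in Fig. 8.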
So, we will survey $LT(q)$, as well as the real and imaginary parts of the potential as functions of $LT$.\\ The cases of a fixed $\eta$ with different choices of $\theta$, and of a fixed $\theta$ with different choices of $\eta$, have been studied in \cite{sif}, so we proceed by fixing both of them and choosing different values of the deformation parameter.\\ In Fig. 8 we show $LT$ as a function of $q$ for a fixed orientation of the dipole, fixed $\eta$ and different values of the deformation parameter. We know that $LT_{max}$ depends strongly on the rapidity $\eta$ and decreases with increasing $\eta$ \cite{sif}. In our plots, we can see that $LT_{max}$, which indicates the limit of validity of the classical gravity calculation, increases with increasing deformation parameter.\\ In Figs. 9, 10 and 11 we present $ ReV_{Q\bar{Q}}$ as a function of $LT$. We can see that for small values of $LT$, which means short distances or small temperatures, there is a difference between the $c=0$ and $c\neq 0$ cases. As we expected, when the deformation parameter contributes to the calculation, the interaction of the pair with the plasma remains relevant, similarly to the result of the perpendicular case. The other point is that the real part of the potential shows no significant change with varying angle for any value of the deformation parameter.\\ In Fig. 12 we show $ImV_{Q\bar{Q}}$ as a function of $LT$. It shows that for angles $\theta < \frac{\pi}{2} $, the imaginary part of the potential becomes smaller with decreasing angle for any value of the deformation parameter. \section{Conclusion} In this article, we have used the method of \cite{sif} to investigate the real and imaginary parts of the potential for moving heavy quarkonia in a plasma with a gravity dual which has a deformation parameter in the warp factor. In the first step we considered a $Q\bar{Q}$ pair oriented perpendicularly to the hot wind, and after that we extended all calculations to arbitrary angles.
We saw that for both the perpendicular and arbitrary-angle cases, the limit of validity of the classical gravity calculation increases with increasing deformation parameter. Also, for nonzero values of $c$ the pair feels the moving plasma even at short distances, whereas for the $c=0$ case the pair does not feel the moving plasma at small values of $LT$, as we expected. We showed that when nonzero values of the deformation parameter contribute to the imaginary part of the potential, the thermal width of the quarkonia increases with increasing deformation parameter. Comparing the perpendicular case with arbitrary angles $\theta < \frac{\pi}{2} $, we found that the imaginary part of the potential becomes smaller with decreasing angle for any value of the deformation parameter, while the real part of the potential shows no significant change with varying angle. \\ Another interesting problem would be to use a hyperscaling-violating metric background instead of the soft-wall model, and to investigate the real and imaginary parts of the potential for moving mesons. Work on this problem, with the corresponding metric background for a moving meson in the plasma medium, is in progress. \newpage \textbf{Acknowledgement}\\ The authors are very grateful to S. M. Rezaei for support and valuable help with the numerical calculations.
\section{Introduction} The evolution of a radiating star undergoing gravitational collapse, in the context of general relativity, has occupied the attention of researchers in astrophysics in recent times. The derivation of the junction conditions by Santos \cite{santos} has made it possible to obtain exact models of an interior spacetime with heat flux to match with the exterior Vaidya spacetime; at the boundary of the star the radial pressure is nonzero. A variety of exact solutions has been generated over the years to study the cosmic censorship hypothesis, gravitational collapse with dissipation, end state of superdense matter, dynamical stability of radiating matter and temperature profiles in the context of irreversible thermodynamics. De Oliviera {\it{et al.}} \cite{deOliviera} proposed a radiating model in which an initial static configuration leads to collapse. This approach may be adapted to study the end state of collapse as shown by Govender {\it{et al.}} \cite{govender1}. Kolassis {\it{et al.}} \cite{kolassis} assumed that the fluid trajectories are geodesic and generated exact solutions. These assumptions lead to particular solutions which may be used to study the physical features of the model such as the relaxational effects of the collapsing fluid on the temperature profile in theories of causal thermodynamics \cite{DiPrisco1}-\cite{govender4}. In a recent treatment Herrera {\it{et al.}} \cite{herrera2} proposed a model in which the form of the Weyl tensor was highlighted when studying radiative collapse. This approach has the advantage of simplifying the Einstein field equations. However, Herrera {\it{et al.}} were not able to solve the junction conditions; only an approximate solution was found. Maharaj and Govender \cite{maharaj} showed that it is possible to solve the field equations and the junction conditions exactly. Their solution is expressible in terms of elementary functions and contains the Friedmann dust solution as a special case. 
It is interesting to note that Herrera {\it{et al.}} \cite{herrera3} showed that other classes of solutions in terms of the elementary functions are possible. The exact solutions in both \cite{maharaj} and \cite{herrera3} depend upon the introduction of a transformation that linearises the boundary condition. The purpose of this paper is to demonstrate that it is possible to obtain other models by transforming the boundary condition to an Abel's equation which is necessarily nonlinear. We explicitly find exact solutions to the Abel equation under particular assumptions and thereby demonstrate that conformally flat radiating stars contain a richer structure than previously suspected. The main objective of this paper is to show that we can generate radiating relativistic stellar models without having to eliminate the nonlinearity at the boundary. In Section 2, we describe the basic features of the model for a radiating star and present the relevant differential equations. Results generated in previous investigations are briefly discussed in Section 3. These have been obtained by introducing a transformation that leads to a linear equation at the boundary. In Section 4, we introduce a new transformation at the boundary that leads to an Abel's equation. We show explicitly that a variety of exact solutions can be generated from the Abel equation. Consequently a variety of new models for radiating relativistic stars, with vanishing Weyl stresses, are possible. The physical features of the solutions are briefly considered in Section 5. \section{The Model} We consider a spherically symmetric radiating star undergoing shear-free gravitational collapse. The line element for shear-free matter interior to the boundary of the radiating star is given by \begin{equation} ds^2=-A^2dt^2+B^2[dr^2+r^2(d\theta^2+\sin^2\theta d\phi^2)]\label{line1} \end{equation} where $A=A(t,r)$ and $B=B(t,r)$ are the metric functions. 
The energy momentum tensor including radiation for the interior spacetime is\\ \begin{equation} T_{ab}=(\rho+p)u_au_b+pg_{ab}+q_au_b+q_bu_a \label{E-P} \end{equation} where the energy density $\rho$, the pressure $p$ and the heat flow vector $q_a$ are measured relative to the timelike fluid 4-velocity $u^a=\frac{1}{A}\delta^{a}_{0}.$ The heat flow vector assumes the form $q^a=(0,q,0,0)$ since $q^a u_a=0$ for radially directed heat flow. The nonzero components of the Einstein field equations, for the line element $(\ref{line1})$ and the energy momentum (\ref{E-P}), can be written as \begin{subequations} \label{EFI1} \begin{eqnarray} \rho&=&\frac{3}{A^2}\frac{\dot{B}^2}{B^2}-\frac{1}{B^2}\left(2\frac{B''}{B}-\frac{B'^2}{B^2}+\frac{4}{r}\frac{B'}{B}\right)\label{EFI1a}\\ p&=&\frac{1}{A^2}\left(-2\frac{\ddot{B}}{B}-\frac{\dot{B}^2}{B^2}+2\frac{\dot{A}}{A}\frac{\dot{B}}{B}\right) \nonumber \\ & & +\frac{1}{B^2}\left(\frac{B'^2}{B^2}+2\frac{A'}{A}\frac{B'}{B}+\frac{2}{r}\frac{A'}{A}+\frac{2}{r}\frac{B'}{B}\right)\label{EFI1b}\\ p&=&-2\frac{1}{A^2}\frac{\ddot{B}}{B}+2\frac{\dot{A}}{A^3}\frac{\dot{B}}{B}-\frac{1}{A^2}\frac{\dot{B}^2}{B^2}+\frac{1}{r}\frac{A'}{A}\frac{1}{B^2}\nonumber\\ & & +\frac{1}{r}\frac{B'}{B^3}+\frac{A''}{A}\frac{1}{B^2}-\frac{B'^2}{B^4}+\frac{B''}{B^3} \label{EFI1c} \\ q&=&-\frac{2}{AB^2}\left(-\frac{\dot{B'}}{B}+\frac{B'\dot{B}}{B^2}+\frac{A'}{A}\frac{\dot{B}}{B}\right)\label{EFI1d} \end{eqnarray} \end{subequations} \\ The Weyl tensor has all components proportional to \begin{equation} C_{2323}=\frac{r^4}{3}B^2\sin^2\theta \left[\left(\frac{A'}{A}-\frac{B'}{B}\right)\left(\frac{1}{r}+2\frac{B'}{B}\right)-\left(\frac{A''}{A}-\frac{B''}{B}\right)\right]\nonumber \end{equation} according to \begin{eqnarray} C_{2323}&=&-r^4\left(\frac{B}{A}\right)^2\sin^2\theta C_{0101}=2r^2\left(\frac{B}{A}\right)^2 \sin^2 \theta C_{0202} \nonumber \\ &=&2r^2\left(\frac{B}{A}\right)^2C_{0303}=-2r^2\sin^2 \theta C_{1212}=-2r^2C_{1313}\nonumber \end{eqnarray} \\ which 
represent the tidal forces. For conformal flatness these components must all vanish so that $C_{2323}=0$. This leads to a nonlinear partial differential equation which is easily solved so that \begin{equation} A=(C_1(t)r^2+1)B. \label{A} \end{equation} Now from (\ref{EFI1b}) and (\ref{EFI1c}), and using (\ref{A}), we obtain \begin{equation} \frac{B''}{B'}-2\frac{B'}{B}-\frac{1}{r}=0 \label{pr-isot} \end{equation} which is the condition of pressure isotropy. Equation (\ref{pr-isot}) is integrable and we get \begin{equation} B=\frac{1}{C_2(t)r^2+C_3(t)}\label{B} \end{equation} where $C_1(t), C_2(t)$ and $C_3(t)$ are functions of time. The forms for the metric functions $A$ and $B$ given above generate an exact solution to the Einstein field equations (\ref{EFI1}). The interior spacetime $(\ref{line1})$ has to be matched across the boundary $r=b$ to the exterior Vaidya spacetime \begin{equation} ds^2=-\left(1-\frac{2m(v)}{R}\right)dv^2-2dv dR + R^2(d\theta^2+\sin^2\theta d\phi^2) \label{vaidya} \end{equation} The hypersurface at the boundary is denoted by $\Sigma$. The junction conditions at $\Sigma$ have the form \begin{subequations}\label{junct} \begin{eqnarray} (Adt)_{\Sigma} &=& \left[\left(1-\frac{2m}{R}+2\frac{dR}{dv}\right)^{1/2}dv\right]_{\Sigma} \label{juncta}\\ (rB)_{\Sigma}&=&R_{\Sigma} \label{junctb}\\ p_{\Sigma}&=&(qB)_{\Sigma}\label{junctc}\\ \left[m(v)\right]_{\Sigma}&=& \left[\frac{r^3}{2}\left(\frac{\dot{B}^{2}B}{A^2}-\frac{B'^2}{B}\right)-r^2B'\right]_{\Sigma} \label{junctd} \end{eqnarray} \end{subequations} on matching (\ref{line1}) and (\ref{vaidya}).
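As a quick symbolic check, one can verify that (\ref{B}) indeed solves the isotropy condition (\ref{pr-isot}). The sketch below (using sympy, with $C_2$ and $C_3$ treated as constants, since (\ref{pr-isot}) involves only radial derivatives) is our own and merely confirms the integration:

```python
import sympy as sp

r, C2, C3 = sp.symbols('r C_2 C_3', positive=True)

# candidate solution of the pressure isotropy condition: B = 1/(C2 r^2 + C3)
B = 1/(C2*r**2 + C3)

# pressure isotropy: B''/B' - 2 B'/B - 1/r must vanish identically
isotropy = sp.diff(B, r, 2)/sp.diff(B, r) - 2*sp.diff(B, r)/B - 1/r
assert sp.simplify(isotropy) == 0
```

Replacing the exponent $r^2$ by, say, $r^3$ makes the residual nonzero, so the check is not vacuous.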
For our model the junction conditions (\ref{junct}) reduce to the following nonlinear ordinary differential equation \begin{eqnarray} \ddot{C_2}b^2+\ddot{C_3}-\frac{3}{2}\frac{(\dot{C_2}b^2+\dot{C_3})^2}{C_2b^2+C_3}-\frac{\dot{C_1}b^2(\dot{C_2}b^2+\dot{C_3})}{C_1 b^2+1}-2(\dot{C_3}C_1-\dot{C_2})b\nonumber\\ +2\frac{C_1b^2+1}{C_2b^2+C_3}\left[C_2(C_2-2C_1C_3)b^2+C_3(C_1C_3-2C_2)\right]=0 \label{geqn} \end{eqnarray} \\ resulting from the (nonvanishing) pressure gradient across the hypersurface $\Sigma$. Equation (\ref{geqn}) governs the evolution of a radiating star with vanishing Weyl stresses. To complete the description of this radiating model we need to solve the remaining junction condition (\ref{geqn}). \section {Elementary solutions} The governing equation (\ref{geqn}) is a highly nonlinear equation presenting a formidable mathematical task to solve exactly in general. Note that in previous attempts to integrate (\ref{geqn}) assumptions were made that effectively linearised this boundary condition. We briefly summarise the known results. Herrera {\it{et al.}} \cite{herrera2} assumed the following approximate forms for the temporal functions \begin{equation} C_1=\epsilon c_1(t),\,\,\,\,\, C_2=0,\,\,\,\,\, C_3=\frac{a}{t^2},\label{assum1} \end{equation} where $ 0 < \epsilon << 1$ and $a > 0$, is a constant. With the assumptions contained in (\ref{assum1}), (\ref{geqn}) yields the approximate solution \begin{equation} C_1\approx C_1(0)\exp \left({\frac{-t^2}{2b^2}-\frac{2t}{b}}\right)\nonumber \end{equation} Note that on setting $C_1 = 0$ the solution reduces to a collapsing Friedmann dust sphere. Maharaj and Govender \cite{maharaj} were the first to determine a closed form solution for (\ref{geqn}). They assumed that $C_1 = C$ (a constant), $C_2 = 0$ and introduced the transformation $C_3\equiv u^{-2}$ so that (\ref{geqn}) takes the linear form \begin{equation} \ddot{u}-2Cb\dot{u}-(Cb^2+1)Cu=0 \nonumber \end{equation} in the new variable {\it{u}}. 
Three categories of closed-form solutions in terms of elementary functions were obtained depending on the nature of the roots of the characteristic equation. Herrera {\it{et al.}} \cite{herrera3} extended this treatment to obtain a wider class of solutions. They set $C_1 = C$ (a constant), $C_2=\alpha C_3$ and introduced the transformation $C_3(t)=u^{-2}(t)$ so that (\ref{geqn}) can be written as \\ \begin{equation} \ddot{u}-\frac{2(C-\alpha)b}{\alpha b^2+1}\dot{u}-\frac{(Cb^2+1)}{(\alpha b^2+1)^2}[\alpha(\alpha-2C)b^2 +(C-2\alpha)]u=0 \label{herrerasol} \end{equation} which is linear in $u$. Then equation (\ref{herrerasol}) admits three classes of solution given by\\ \\ {\it{Case 1}}:\,\,$(C-\alpha)^2b^2+(Cb^2+1)[\alpha(\alpha-2C)b^2+(C-2\alpha)]>0$\\ \begin{eqnarray} C_3(t)=\left[\beta_1\exp\left(\frac{(C-\alpha)b +\sqrt{(C-\alpha)^2b^2+(Cb^2+1)[\alpha(\alpha-2C)b^2 +(C-2\alpha)]}}{\alpha b^2+1}\,t\right) \right.\nonumber \\ \left.+\beta_2\exp\left(\frac{(C-\alpha)b -\sqrt{(C-\alpha)^2b^2+(Cb^2+1)[\alpha(\alpha-2C)b^2 +(C-2\alpha)]}}{\alpha b^2+1}\,t\right)\right]^{-2} \nonumber \end{eqnarray}\\ {\it{Case 2}}:\,\,$(C-\alpha)^2b^2+(Cb^2+1)[\alpha(\alpha-2C)b^2+(C-2\alpha)]<0$\\ \begin{eqnarray} C_3(t)=\left[\exp\left(\frac{(C-\alpha)b}{\alpha b^2+1}\,t\right)\left(\beta_1\cos\left(\frac{\sqrt{-(C-\alpha)^2b^2-(Cb^2+1)[\alpha(\alpha-2C)b^2 +(C-2\alpha)]}}{\alpha b^2+1}\,t\right) \right.\right.\nonumber \\ \left.\left.+\beta_2\sin\left(\frac{\sqrt{-(C-\alpha)^2b^2-(Cb^2+1)[\alpha(\alpha-2C)b^2 +(C-2\alpha)]}}{\alpha b^2+1}\,t\right)\right)\right]^{-2}\nonumber \end{eqnarray}\\ {\it{Case 3}}:\,\,$(C-\alpha)^2b^2+(Cb^2+1)[\alpha(\alpha-2C)b^2+(C-2\alpha)]=0$\\ \begin{eqnarray} C_3(t)=(\beta_1+\beta_2t)^{-2}\exp\left({\frac{-2(C-\alpha)b}{\alpha b^2+1}t}\right)\nonumber \end{eqnarray}\\ where $ \beta_1 $ and $ \beta_2$ are constants of integration. These solutions reduce to the Maharaj and Govender \cite{maharaj} model when $\alpha = 0$.
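The three cases are distinguished by the sign of the discriminant of the characteristic equation of (\ref{herrerasol}). As a sanity check, the exponents quoted in Case 1 can be verified to solve the characteristic equation numerically (a sketch; the parameter values $C=1$, $\alpha=0.2$, $b=1$ are illustrative choices of ours):

```python
import numpy as np

C, alpha, b = 1.0, 0.2, 1.0   # illustrative values (our choice)

# coefficients of u'' - mu*u' - nu*u = 0 from the linearised boundary condition
mu = 2*(C - alpha)*b / (alpha*b**2 + 1)
nu = (C*b**2 + 1) / (alpha*b**2 + 1)**2 * (alpha*(alpha - 2*C)*b**2 + (C - 2*alpha))

# discriminant separating Cases 1-3
disc = (C - alpha)**2*b**2 + (C*b**2 + 1)*(alpha*(alpha - 2*C)*b**2 + (C - 2*alpha))

# exponents quoted in Case 1 (valid since disc > 0 for these values)
lam_plus  = ((C - alpha)*b + np.sqrt(disc)) / (alpha*b**2 + 1)
lam_minus = ((C - alpha)*b - np.sqrt(disc)) / (alpha*b**2 + 1)

# both must solve the characteristic equation lambda^2 - mu*lambda - nu = 0
for lam in (lam_plus, lam_minus):
    assert abs(lam**2 - mu*lam - nu) < 1e-12
```

The same identity holds for any parameter values with a positive discriminant, since $\lambda_\pm = \mu/2 \pm \sqrt{\mu^2/4+\nu}$ algebraically.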
Note that other transformations that linearise (\ref{geqn}) are possible as indicated in \cite{herrera3}. \section{Abel Equation} The nonlinearity and complexity in the boundary condition (\ref{geqn}) is clearly evident. It is therefore remarkable that closed form solutions in terms of elementary functions have been shown to exist as shown in Section 3. These particular closed form solutions have been generated from linearised forms of the governing equation (\ref{geqn}). A natural extension would be a study of the existence of nonlinear solutions to the differential equation (\ref{geqn}). Such classes of solutions, if they exist, are important in the study of nonlinear behaviour of the shear-free, conformally flat model. Consequently we seek classes of solutions which retain the inherent nonlinear structure of (\ref{geqn}). These have not been found in the past due to the inherent difficulties of coping with nonlinearity. Here we consider a particular nonlinear transformation which leads to exact solutions. 
It is convenient to replace the function $C_1(t)$ with \begin{equation} U = C_1b^2+1\label{Atrans} \end{equation} Then the governing equation (\ref{geqn}) may be written with some rearrangement as \begin{eqnarray} \dot{U}(\dot{C_2}b^2+\dot{C_3}) + U\left[\frac{3}{2}\frac{(\dot{C_2}b^2+\dot{C_3})^2}{C_2b^2+C_3}-\frac{2}{b}(\dot{C_2}b^2+\dot{C_3})-(\ddot{C_2}b^2+\ddot{C_3})\right]\nonumber \\+2U^2\left[\frac{\dot{C_3}}{b}-\frac{1}{C_2b^2+C_3}(C_2^2b^2-\frac{C_3^2}{b^2})\right]+2U^3\frac{2C_2b^2-C_3}{C_2b^2+C_3}\cdot\frac{C_3}{b^2}=0 \label{Aeqn} \end{eqnarray} Equation (\ref{Aeqn}) is complicated, but has the generic structure \\ \begin{equation} {\cal A}\dot{U}+{\cal B}U+{\cal C}U^2+{\cal D}U^3=0 \label{calAeqn} \end{equation} where we have set \begin{eqnarray} {\cal A}&=&\dot{C_2}b^2+\dot{C_3}\nonumber\\ {\cal B}&=&\frac{3}{2}\frac{(\dot{C_2}b^2+\dot{C_3})^2}{C_2b^2+C_3}-\frac{2}{b}(\dot{C_2}b^2+\dot{C_3})-(\ddot{C_2}b^2+\ddot{C_3})\nonumber\\ {\cal C}&=&2\left(\frac{\dot{C_3}}{b}-\frac{1}{C_2b^2+C_3}(C_2^2b^2-\frac{C_3^2}{b^2})\right)\nonumber\\ {\cal D}&=&2\left(\frac{2C_2b^2-C_3}{C_2b^2+C_3}\cdot\frac{C_3}{b^2}\right)\nonumber \end{eqnarray} \\The transformed equation (\ref{calAeqn}) is an Abel's equation of the first kind in the variable $U$. Abelian equations are difficult to solve in general. However, the advantage of utilising the transformation (\ref{Atrans}) is that (\ref{calAeqn}) is a first order differential equation in $U$. In the following we present a comprehensive mathematical treatment of (\ref{calAeqn}) and derive several classes of solutions. \subsection{Case 1: ${ \cal A}=0$} The restriction ${\cal A}=0$ immediately gives \begin{equation} C_2b^2+C_3=\alpha \label{cons1} \end{equation} where $\alpha$ is a constant of integration.
Then $(\ref{Aeqn})$ becomes \begin{equation} 2U^2\left[\frac{\dot{C_3}}{b}-\frac{1}{\alpha}\left(C_2^2b^2-\frac{C_3^2}{b^2}\right)\right]+2U^3\frac{2C_2b^2-C_3}{\alpha}.\frac{C_3}{b^2}=0 \label{algeqn} \end{equation} which is an algebraic equation in $U$. \\ Two cases arise: $U=0$ or $U\neq0$ in (\ref{algeqn}). We easily find: \begin{subequations}\label{solcase1} \begin{eqnarray} C_1&=&\left\{ \begin{array}{lll} -\frac{1}{b^2}& &,\,\,\, U=0\\ \frac{\alpha}{C_3(2\alpha-3C_3)}\left(\frac{\alpha}{b^2}-\frac{4C_3}{b^2}+\frac{3C_3^2}{\alpha b^2}-\frac{\dot{C_3}}{b}\right)& &,\,\,\, U\neq0\\ \end{array}\right.\\ C_2&=&\frac{\alpha-C_3}{b^2} \\ C_3&=& \text{arbitrary function of time} \end{eqnarray} \end{subequations} \\ This solution is particularly attractive since we have an infinite choice of $C_3$ and no integration is required. \\ \subsection{Case 2: ${ \cal D}=0$} With ${\cal{D}}=0$ we have two possibilities: either $2C_2b^2-C_3=0$ or $C_3=0$.\\ We firstly consider $2C_2b^2-C_3=0$. Then (\ref{Aeqn}) becomes \begin{equation} \dot{U}+U\left[\frac{3}{2}\frac{\dot{C_3}}{C_3}-\frac{2}{b}-\frac{\ddot{C_3}}{\dot{C_3}}\right]=-U^2\left[\frac{4}{3b}+\frac{2}{3}\frac{C_3}{b^2\dot{C_3}}\right]\nonumber \end{equation} This is a Bernoulli equation with solution \begin{equation} U=\frac{\dot{C_3}C_3^{-3/2}e^{2t/b}}{K-\frac{8}{3b}e^{2t/b}C_3^{-1/2}+\frac{6}{b^2}\int C_3^{-1/2}e^{2t/b}dt}\label{ucase2a}\nonumber \end{equation} where $K$ is a constant of integration. Hence for this first case we have the solution\\ \begin{subequations}\label{solcase2a} \begin{eqnarray} C_1&=&\frac{1}{b^2}\left(\frac{\dot{C_3}C_3^{-3/2}e^{2t/b}}{K-\frac{8}{3b}e^{2t/b}C_3^{-1/2}+\frac{6}{b^2}\int C_3^{-1/2}e^{2t/b}dt}-1\right)\\ C_2&=&\frac{C_3}{2b^2} \\ C_3&=& \text{arbitrary function of time} \end{eqnarray} \end{subequations} \\ This is an infinite class of solutions depending on $C_3$. Now we consider $C_3=0$. 
The Abel equation $(\ref{Aeqn})$ becomes \begin{equation} \dot{U}+U\left(\frac{3}{2}\frac{\dot{C_2}}{C_2} -\frac{2}{b}-\frac{\ddot{C_2}}{\dot{C_2}}\right)=2U^2\frac{C_2}{\dot{C_2}b^2}\nonumber \end{equation} This is again a Bernoulli equation with solution \begin{equation} U=\frac{\dot{C_2}C_2^{-3/2}e^{2t/b}}{K'-\frac{2}{b^2}\int e^{2t/b}C_2^{-1/2}dt}\label{ucase2b}\nonumber \end{equation} where $K'$ is a constant of integration. Therefore for the second case we have the solution \begin{subequations}\label{solcase2b} \begin{eqnarray} C_1&=&\frac{1}{b^2}\left(\frac{\dot{C_2}C_2^{-3/2}e^{2t/b}}{K'-\frac{2}{b^2}\int e^{2t/b}C_2^{-1/2}dt}-1\right)\\ C_2&=& \text{arbitrary function of time}\\ C_3&=&0 \end{eqnarray} \end{subequations} \\ Again we have generated an infinite class of solutions depending on $C_2$. \subsection{Case 3: ${ \cal C}=0$} Upon setting ${\cal{C}}=0$ we obtain the equation \begin{equation} \frac{\dot{C_3}}{b}-\frac{1}{C_2b^2+C_3}\left(C_2^2b^2-\frac{C_3^2}{b^2}\right)=0\nonumber \end{equation} \\ This equation is quadratic in $C_2$ which implies \begin{equation} C_2=\frac{\dot{C_3}b\pm\sqrt{\dot{C_3}^2b^2-4C_3(C_3+\dot{C_3}b)}}{2b^2}\label{cons2}\nonumber \end{equation} \\ Hence $C_2$ is a known quantity if the function $C_3$ is specified. 
The Abelian equation (\ref{Aeqn}) has the form \begin{eqnarray} & &\dot{U}(\dot{C_2}b^2+\dot{C_3}) +U\left[\frac{3}{2}\frac{(\dot{C_2}b^2+\dot{C_3})^2}{C_2b^2+C_3}-\frac{2}{b}(\dot{C_2}b^2+\dot{C_3})\right.\nonumber\\ & & \left.-(\ddot{C_2}b^2+\ddot{C_3})\right] =-2U^3\left[\frac{2C_2b^2-C_3}{C_2b^2+C_3}\cdot\frac{C_3}{b^2}\right]\nonumber \end{eqnarray} The equation is complicated, but may be written concisely as \begin{equation} \alpha \dot{U}+\beta U = -\gamma U^3 \label{case3}\end{equation} where \begin{eqnarray} \alpha &=& \dot{C_2}b^2+\dot{C_3}\nonumber\\ \beta &=& \frac{3}{2}\frac{(\dot{C_2}b^2+\dot{C_3})^2}{C_2b^2+C_3}-\frac{2}{b}(\dot{C_2}b^2+\dot{C_3}) -(\ddot{C_2}b^2+\ddot{C_3})\nonumber\\ \gamma &=&2\frac{2C_2b^2-C_3}{C_2b^2+C_3}\cdot\frac{C_3}{b^2}\nonumber \end{eqnarray} The simpler equation (\ref{case3}) has the form of a Bernoulli equation with solution \begin{eqnarray} U&=&\frac{1}{e^{\int (\beta/\alpha) dt}\left(\int \frac{2\gamma}{\alpha}e^{-\int (2\beta/\alpha)dt}dt\right)^{1/2}}\nonumber \\ &=&\frac{e^{(2t/b)}(\dot{C_2}b^2+\dot{C_3})}{(C_2b^2+C_3)^{3/2} \left[K^{''}+\frac{4}{b^2}\int {\frac{e^{(4t/b)}C_3(2C_2b^2-C_3) (\dot{C_2}b^2+\dot{C_3})}{(C_2b^2+C_3)^4}}dt\right]^{1/2}} \end{eqnarray} \\ where $K^{''}$ is a constant of integration. Consequently for this case we have the solution \begin{subequations}\label{solcase3} \begin{eqnarray} C_1&=&\frac{1}{b^2}\left(\frac{e^{2t/b}(\dot{C_2}b^2+\dot{C_3})}{(C_2b^2+C_3)^{3/2} \left[K^{''}+\frac{4}{b^2}\int {\frac{e^{(4t/b)}C_3(2C_2b^2-C_3) (\dot{C_2}b^2+\dot{C_3})}{(C_2b^2+C_3)^4}}dt\right]^{1/2}}-1\right)\\ C_2&=&\frac{\dot{C_3}b\pm\sqrt{\dot{C_3}^2b^2-4C_3(C_3+\dot{C_3}b)}}{2b^2}\\ C_3&=&\text{arbitrary function of time} \end{eqnarray} \end{subequations}\\ Again an infinite class of solutions is possible. \subsection{Case 4} This is the most general case and corresponds to the situation for which all of the coefficients ${\cal A}, {\cal B}, {\cal C}$ and ${\cal D}$ are nonzero.
Equation $(\ref{calAeqn})$ can be written as \begin{equation} \dot{U}=-\frac{\cal B}{\cal A}U-\frac{\cal C}{\cal A}U^2-\frac{\cal D}{\cal A}U^3 \nonumber \end{equation} \\ so that a separable equation is obtained if $ \frac{\cal B}{\cal A}, \frac{\cal C}{\cal A}$ and $ \frac{\cal D}{\cal A}$ are constants. Then the solution may be written as the quadrature \begin{equation} t-t_0=\int{\frac{dU}{\frac{\cal B}{\cal A}U+\frac{\cal C}{\cal A}U^2+\frac{\cal D}{\cal A}U^3}} \label{quadeqn}\end{equation} \\ It is important to emphasize that the additional constraints generated by $ \frac{\cal B}{\cal A}, \frac{\cal C}{\cal A}$ and $ \frac{\cal D}{\cal A}$ being constant simultaneously are not easy to simplify and in fact may not be consistent. Note that when ${\cal{B}}=0$ and $C_2b^2+C_3=\alpha$ we regain Case 1 with ${\cal{A}}=0$ and (\ref{Aeqn}) becomes a Bernoulli equation. Then solution (\ref{solcase1}) is applicable. However, in general, when ${\cal{B}}=0$, the quadrature (\ref{quadeqn}) is applicable. \section{Discussion} Herrera {\it{et al.}} \cite{herrera2} obtained the equation (\ref{geqn}) governing the gravitational behaviour of a radiating spherical star undergoing shear-free gravitational collapse by imposing conformal flatness on the model. Investigations of this model have thus far been confined to exact solutions of linearised forms of this equation. The nonlinear behaviour of relativistic stellar models is an inherent part of realistic stars undergoing radiative gravitational collapse. A study of the physical features of these models hinges on the solution of the governing nonlinear equations. In this paper we have presented exact solutions of the governing equation in which the nonlinearity has been preserved. This has been effected by transforming the equation (\ref{geqn}) into an Abel equation (\ref{calAeqn}).
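The nonlinear solutions of Section 4 can also be spot-checked numerically. For example, specialising the first Case 2 solution (\ref{solcase2a}) to $C_3=e^{2t}$ and $b=1$ (an illustrative choice of ours) reduces the Bernoulli equation to $\dot{U}=U-\frac{5}{3}U^2$, and the closed form becomes $U(t)=2e^{t}/(K+\frac{10}{3}e^{t})$ with the integration constants absorbed into $K$; direct integration confirms this (a sketch):

```python
import numpy as np
from scipy.integrate import solve_ivp

K = 1.0  # integration constant (illustrative choice)

def u_closed(t):
    # closed-form Case 2 solution specialised to C3 = exp(2t), b = 1
    return 2.0*np.exp(t)/(K + (10.0/3.0)*np.exp(t))

def bernoulli_rhs(t, u):
    # the Bernoulli equation for this special case: dU/dt = U - (5/3) U^2
    return [u[0] - (5.0/3.0)*u[0]**2]

sol = solve_ivp(bernoulli_rhs, (0.0, 2.0), [u_closed(0.0)],
                rtol=1e-10, atol=1e-12, dense_output=True)

# the numerical integration reproduces the closed form
assert abs(sol.sol(2.0)[0] - u_closed(2.0)) < 1e-6
```

Analogous spot checks can be run for the other cases once an arbitrary $C_2$ or $C_3$ is specified.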
We have found several classes of exact solutions given in (\ref{solcase1}), (\ref{solcase2a}), (\ref{solcase2b}) and (\ref{solcase3}) retaining the nonlinearity of the model. Note that these generate an infinite family of solutions which allow for a systematic study of radiating relativistic spheres in different scenarios. It is important to observe that simple particular cases can be generated from our nonlinear models in Section 4. For example with ${\cal{A}}=0$ and $U\neq0$ we may obtain the line element \begin{equation} ds^2=B^2[-dt^2+dr^2+r^2(d\theta^2+\sin^2\theta d\phi^2)]\label{linecase1} \end{equation} where (\ref{linecase1}) is in conformally flat form. Here for the simple case $C_3=\alpha$ in (\ref{solcase1}) we obtain $C_1=0,\,\,C_2=0\,$ and then $B^2 $ is a constant; the Minkowski spacetime is regained. It is interesting to observe that the case $C_1=0$ in (\ref{solcase1}) also arises when $C_3$ takes the value\\ \begin{equation} C_3=\frac{2\alpha^2\beta e^{-2t/b}-\alpha}{2\alpha\beta e^{-2t/b}-3}\nonumber \end{equation} where $\beta$ is a constant of integration and $C_2=(\alpha-C_3)/b^2$. Then we can write \begin{equation} B^2=\left[\frac{b^2(2\alpha\beta e^{-2t/b}-3)}{\alpha b^2-2\alpha r^2-2b^2\alpha^2\beta e^{-2t/b}}\right]^2\nonumber \end{equation} This simple analytic form facilitates the analysis of the physical features of the model. We consider now some physical features which may be investigated in future work. With suitable choices of the arbitrary time functions, the luminosity radius \begin{equation} L=(rB)_{\Sigma}\nonumber\end{equation} may be easily found. The quantity \begin{equation} \Gamma=\frac{d \ln p}{d \ln \rho}\nonumber\end{equation} gives a measure of the dynamical instability of the stellar configuration at any given instant in time. We can use this result to confirm that the centre of the star is more unstable than the outer regions. Of particular importance is the thermal evolution of the fluid. 
The causal transport equation in the absence of rotation and viscous stress is \begin{equation} \tau h_a^{\,b}\dot{q}_b+q_a=-\kappa(h_a^{\,b}\nabla_bT+T\dot{u}_a) \label{transport1}\end{equation} where $h_{ab}=g_{ab}+u_au_b$ projects into the comoving rest space, {\it{T}} is the local equilibrium temperature, $\kappa (\geq0)$ is the thermal conductivity, and $\tau (\geq0)$ is the relaxation time-scale which gives rise to the causal and stable behaviour of the theory. As shown in Maharaj and Govender \cite{maharaj} for a physically reasonable radiative stellar model (\ref{transport1}) becomes \begin{equation} \beta(qB)^{\dot{}}T^{-\sigma}+A(qB)=-\alpha\frac{T^{3-\sigma}(AT)^{'}}{B}\label{transport2} \end{equation} where $A$ and $B$ are the metric functions. Both the causal and noncausal solutions of (\ref{transport2}) may be investigated in a simple model. \section*{Acknowledgements} SSM thanks the National Research Foundation and the Durban University of Technology for financial support. SDM acknowledges that this work is based upon research supported by the South African Research Chair Initiative of the Department of Science and Technology and National Research Foundation.
\subsubsection*{$^\ast$ Corresponding author} Ellen Baake, University of Bielefeld, Faculty of Technology, Universit\"{a}tsstra{\ss}e 25, 33615 Bielefeld, Germany, +49-521-106-4896, [email protected] \\[2mm] \normalsize\noindent \textbf{\textit{Keywords --- }} Lenski's long-term evolution experiment, epistasis, clonal interference, runtime effect, Cannings model, offspring variance \vspace*{1.5em} \\ \noindent \textbf{\textit{Declaration of interest:}} none \vspace*{1.5em} \\ \noindent \textbf{\textit{Role of funding source:}} Deutsche Forschungsgemeinschaft (German Research Foundation, DFG) provided financial support via Priority Programme SPP 1590 (Probabilistic Structures in Evolution, grants no. BA 2469/5-2 and WA 967/4-2), but was not involved in the study design; in the collection, analysis and interpretation of data; in the writing of the report; or in the decision to submit the article for publication. \begin{abstract} \thispagestyle{plain}\setcounter{page}{2} We revisit the model by Wiser, Ribeck, and Lenski (Science \textbf{342} (2013), 1364--1367), which describes how the mean fitness increases over time due to beneficial mutations in Lenski's long-term evolution experiment. We develop the model further both conceptually and mathematically. Conceptually, we describe the experiment with the help of a Cannings model with mutation and selection, where the latter includes diminishing returns epistasis. The analysis sheds light on the growth dynamics within every single day and reveals a runtime effect, that is, the shortening of the daily growth period with increasing fitness; and it allows us to clarify the contribution of epistasis to the mean fitness curve. Mathematically, we explain rigorous results in terms of a law of large numbers (in the limit of infinite population size and for a certain asymptotic parameter regime), and present approximations based on heuristics and supported by simulations for finite populations. 
\end{abstract} \pagenumbering{arabic}\setcounter{page}{1} \section{Introduction} \label{sec:LTEEWRL} One of the most famous instances in experimental evolution is Lenski's long-term evolution experiment or LTEE \citep{Lenski91,WRL13,Tenaillon16,Good17}. Over a period of 30 years, populations of \emph{Escherichia coli} maintained by daily serial transfer have accumulated mutations, resulting in a steady increase in fitness. The mean fitness is observed to be a concave function of time, that is, fitness increases more slowly as time goes by. \citet{WRL13} formulated a theoretical model that builds on the underlying processes, namely mutation, selection, and genetic drift, and obtained good agreement with the data. However, the model describes the underlying population processes in a heuristic way. As a consequence, one works with effective parameters that are hard to interpret, and it is difficult to disentangle the contributions of the various model components to the resulting fitness curve. \citet{GKWY16} recently formulated an individual-based model for a special case (namely, for the case of deterministic fitness increments) and made explicit that the specific design of the LTEE lends itself ideally to a description via a \emph{Cannings model} \citep[Ch.~3.3]{Ewens04}. In a neutral setting, this classical model of population genetics works by assigning in each time step to each of $N$ (potential) mothers indexed $j=1,\ldots, N$ a random number $\nu_j$ of daughters such that the $\nu_j$ add up to $N$ and are {\em exchangeable}, that is, they have a joint distribution that is invariant under permutations of the mothers' indices. In \citet{GKWY16}, this was extended to include mutation and selection. 
While \citet{WRL13} work close to the data and perform an approximate analysis in the spirit of theoretical biology, \citet{GKWY16} focus on a precise definition of the model and on mathematical rigour (including in particular the proof of a law of large numbers in the infinite population size limit and for a suitable parameter regime). The goal of this paper is to build a bridge between the two approaches, to generalise the model of \citet{GKWY16} to random fitness increments, and to also consider it in the finite-population regime. A~thorough mathematical analysis will reveal the many connections between this model and the one of \citet{WRL13}; in particular, this will make the meaning of its parameters transparent and will allow us to separate the effects of the various model ingredients. Parameter identification and stochastic simulations of a suitable extension of the model will make the connection to the experimental data. Let us briefly describe the LTEE and the outline of this paper. \paragraph{Lenski's LTEE.} Every morning, Lenski's LTEE starts with a sample of $\approx~5 \cdot 10^6$ \emph{Escherichia~coli} bacteria in a defined amount of fresh minimal glucose medium. During the day (possibly after a lag phase), the bacteria divide until the nutrients are used up; this is the case when the population has reached $\approx 100$ times its original size. The cells then stop dividing and enter a stationary phase. At the end of the growth period, there are therefore $\approx 5 \cdot 10^8$ bacteria, namely, $\approx 5 \cdot 10^6$ clones each of average size $\approx 100$, see Fig.~\ref{fig:forest}. The next morning, one takes a random sample of $\approx 5 \cdot 10^6$ out of the $\approx 5 \cdot 10^8$ cells, puts them into fresh medium, and the process is repeated; the sampled individuals are the roots of the new offspring trees. 
Note that the number of offspring a founder individual contributes to the next day is random; it is 1 on average, but can also be $0$ or greater than one. \begin{figure} \begin{center} \resizebox{.9\columnwidth}{!}{\input{forest.tex}} \end{center} \caption{Illustration of some day $i - 1$ (and the beginning of day $i$) of Lenski's LTEE with $4$ founder individuals (bullets), their offspring trees within day $i-1$, and the sampling from day $i-1$ to $i$ (dotted), for an average clone size of $5$. The second founder from the left at day \mbox{$i - 1$} (and its offspring) is lost due to the sampling, and the second founder from the right at day $i$ carries a new beneficial mutation (indicated by the square). } \label{fig:forest} \end{figure} Lenski started 12 replicates of the experiment in 1988, and since then it has been running without interruption. The goal of the experiment is to observe evolution in real time. Indeed, the bacteria evolve via beneficial mutations, which allow them to adapt to the environment and thus to reproduce faster. One special feature of the LTEE is that samples are frozen at regular intervals. They can be brought back to life at any time for the purpose of comparison and thus form a living fossil record. In particular, one can, at any day~$i$, compare the current population with the initial (day $0$) population via the following \emph{competition experiment} \citep{LT94,WRL13}. A sample from the day-0 population and one from the day-$i$ population, each of the same size, are grown together; we define $T_i$ as the time at which the nutrients are used up. One then defines the \emph{empirical relative fitness at day} $i$ as \begin{equation}\label{emp_rel_fitness} \widetilde F_i = \frac{\log \big ( Y_i(T_i) / Y_i(0) \big )}{\log \big ( Y_0(T_i) / Y_0(0) \big )}, \end{equation} where, for $T=0$ and $T=T_i$, $Y_i(T)$ and $Y_0(T)$ are the sizes at time $T$ of the populations grown from the day-$i$ sample and the day-$0$ sample, respectively. 
Note that the empirical relative fitness is a random quantity, whose outcome will vary from replicate to replicate. Fig.~\ref{fig:WRL_data} shows the time course over 21 years of the empirical relative fitness averaged over the replicate populations, as reported by \cite{WRL13}. Obviously, the \emph{mean relative fitness} has a tendency to increase, but the increase levels off, which leads to a conspicuous concave shape. \begin{figure*}[!ht]\centering \resizebox{.63\textwidth}{!}{ \input{LTEE_measurings_powerlaw.tex} } \caption{Empirical relative fitness averaged over all 12 populations (red bullets) with error bars (95\% confidence limits based on the 12 populations) from \cite{WRL13}; and corresponding power law \eqref{powerlaw} with $\widehat{g} = 5.2$ and $\widehat{\beta}=5.1 \cdot 10^{-3}$ (red solid line). Data and parameters according to Fig.~2A and Table~S4 of \citet{WRL13DATA}. The best fit of a square root (black, dotted) and a linear (black, dashed) fitness trajectory is also shown; these correspond to a scenario without epistasis with and without runtime effect, respectively, as explained in Sec.~ \ref{sec:GKWYLLN}.} \label{fig:WRL_data} \end{figure*} As noted by \citet{WRL13}, the mean relative fitness may be described by the power law \begin{equation}\label{powerlaw} {\widetilde f} \big( k \,\big) = \big(1+\beta k \,\big)^{\frac{1}{2 g}} \end{equation} with parameters $\beta> 0$ and $g >0$. Here $\beta$ is a time-scaling constant, and the exponent $g$ determines the shape of the curve. Furthermore, $k$ is time with one generation (which here is the mean doubling time) as unit, so \begin{equation}\label{t_gamma} i = \Big \lfloor \frac{k}{\log_2 100} \Big \rfloor \approx \frac{k}{6.6}\,. \end{equation} The red solid line in Fig.~\ref{fig:WRL_data} shows the best fit of this curve to the data of all 12 replicate populations, as obtained by \cite{WRL13}, with parameter estimates $\widehat g = 5.2$ and $\widehat \beta = 5.1 \cdot 10^{-3}$. 
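For concreteness, the power law \eqref{powerlaw} with the published estimates and the generation-to-day conversion \eqref{t_gamma} can be evaluated directly; a minimal Python sketch (the numbers are the estimates $\widehat{g}$, $\widehat{\beta}$ quoted above):

```python
import math

g_hat, beta_hat = 5.2, 5.1e-3       # estimates from Wiser et al. (2013)

def fitness_powerlaw(k):
    """Mean relative fitness after k generations, Eq. (powerlaw)."""
    return (1.0 + beta_hat * k) ** (1.0 / (2.0 * g_hat))

def generations_to_days(k):
    """Eq. (t_gamma): one day corresponds to log2(100) ~ 6.6 generations."""
    return math.floor(k / math.log2(100))

print(generations_to_days(50_000))  # 7525 days for 50000 generations
print(fitness_powerlaw(50_000))     # fitted mean relative fitness
```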
(Here and in what follows, parameter values estimated from the data are indicated by a hat, and numbers are rounded to 2 digits. Our parameters obtained via \texttt{NonlinearModelFit} of \texttt{Wolfram Mathematica 11} differ only in the third digit.) In line with \eqref{emp_rel_fitness} and \eqref{t_gamma}, we take \emph{days} as our discrete time units, rather than doubling times (this will pay off in Secs.~\ref{sec:GKWYLLN} and~\ref{sec:ci}); so $\log_2 100 \approx 6.6$ generations in Fig.~\ref{fig:WRL_data} correspond to one day, and the total of 50000 generations corresponds to around 7525 days. The two models mentioned above aim to explain the power law \eqref{powerlaw}. The one by \cite{WRL13}, which we will refer to as the WRL model, uses an approach of \emph{diminishing returns epistasis}, which means that the beneficial effect of mutations decreases with increasing fitness (cf.\ \citet[p.~74]{Bu00} or \cite{Petal00}). \cite{WRL13} derive, by partly heuristic methods, a differential equation for the mean relative fitness whose solution is given by \eqref{powerlaw}. The time-scaling parameter $\beta$ is determined by the interplay of the rate and the effect of beneficial mutations, with the heuristics of \cite{GL98} for the description of clonal interference playing an important role.\footnote{\label{foot_CI}Clonal interference \citep{GL98,Ge01,PK07} refers to the situation of two (or more) beneficial mutations present in the population at the same time. They then compete with each other and, in the end, only one of them will be established in the population; an effect that slows down adaptation (when measured against the stream of incoming mutations), and biases the distribution of beneficial effects.} The second approach is the individual-based model of \cite{GKWY16} and makes full use of ideas, concepts, and techniques from mathematical population genetics, which seem to be ideally tailored for the LTEE setup. 
We will refer to this as the GKWY model; since it has been published in a mathematical journal, we will review it in more detail in Sec.~\ref{sec:GKWYLLN} with an emphasis on the biological content. For a certain parameter regime that excludes clonal interference, and using a similar approach to diminishing returns as in the WRL model, \citet{GKWY16} prove a law of large numbers as $N\to \infty$, thereby rigorously deriving a version of the power law~\eqref{powerlaw}. \paragraph{Goal and outline of this paper.} A major goal of this paper is to provide a thoroughly founded mathematical model of the LTEE, and to relate it to the observed fitness curve via parameter estimation and stochastic simulations. This approach will provide additional connections between the ideas contained in the WRL and the GKWY models addressed in the previous paragraph. The design of the LTEE, with the daily growth cycles and the sampling scheme, results in an (approximately) constant population size at the beginning of each day. As made explicit by \citet{GKWY16}, this lends itself in a prominent way to a description through a Cannings model (including mutation and selection), where the mothers are identified with the founders in a given day and the daughters with the founders in the next day. The crucial parameter of the Cannings model, namely, the \emph{variance of the number of offspring} of a founder individual that make it into the next day, is obtained in the context of the LTEE from an explicit stochastic model of population growth during each day. This offspring variance enters Haldane's formula for the fixation probability, see \eqref{Haldane} below. In fact, \citet{WRL13} also use a formula for the fixation probability (see Eq. (S1) in their Supplementary Text). In this context they refer to \cite{GL98}, who assume a deterministic population growth (and clones of equal size) resulting from synchronous divisions. 
Indeed, the Cannings model thus hidden within the WRL model turns out to work with a different offspring variance; we will come back to this in Sec.~\ref{sec:discussion}. In addition to the specification of the offspring variance, our model for the daily population growth in continuous time allows us to quantify selection (including diminishing returns epistasis) at the level of the individual reproduction rates within a day. The effect of diminishing returns seems to be obvious from Fig.~\ref{fig:WRL_data}; however, epistasis is not the only contribution to the fitness curve. Rather, the design of the experiment also contributes via what we call the \emph{runtime effect}, namely, the shortening of the daily growth phase with increasing fitness. In fact, the runtime effect \emph{alone} results in a concave fitness curve, but is not strong enough to explain the observed data in the absence of epistasis. The analysis of our model will allow a clear separation of the respective contributions. Likewise, the population-genetic notions that also appear in the WRL model (namely, the mutation rate, the selective advantage, the effective population size, the fixation probability, and the strength of epistasis) will be made precise in terms of the underlying microscopic model. Throughout, we aim at a rigorous mathematical treatment where possible. The paper is organised as follows. In Sec.~\ref{sec:GKWYLLN}, we will recapitulate the GKWY model and explain its law of large numbers (that is, a deterministic limit in a suitable parameter regime as the population size goes to infinity) for a more biological readership. At the end of Sec.~\ref{sec:GKWYLLN}, we will consider the resulting stochastic effects in a system whose parameters are obtained from a fit to the data observed in the LTEE (and which thus naturally differs from its infinite population limit). 
In Sec.~\ref{sec:ci}, this will lead us to consider clonal interference, which we will investigate both for deterministic and for random fitness increments. Here we do not prove a law of large numbers, but derive approximations with the help of moment closure and a refined version of the Gerrish-Lenski heuristics. In Sec.~\ref{sec:discussion}, we will thoroughly discuss the crucial differences between the WRL and the GKWY models, together with the key notions of fitness increment, selective advantage, and epistasis, as well as the mutually equivalent concepts of offspring variance, pair coalescence probability, and effective population size. \section{A probabilistic model for the LTEE and a law of large numbers} \label{sec:GKWYLLN} The GKWY model takes into account two different dynamics, namely, the dynamics \emph{within each individual day}, and the dynamics \emph{from day to day}, together with a suitable \emph{scaling regime}. The resulting \emph{relative fitness process} is proved to converge, in the $N \to \infty$ limit, to a power law equivalent to \eqref{powerlaw}; that is, the power law arises as a \emph{law of large numbers}. We explain this here with the help of an appropriate \emph{heuristic}. In what follows, we present these building blocks and perform a first \emph{reality check}. \paragraph{Intraday dynamics.} Let $T$ be (continuous) physical time within a day, with $T=0$ corresponding to the beginning of the growth phase (that is, we discount the lag phase). Day $i$ starts with $N$ founder individuals ($N \approx 5 \cdot 10^6$ in the experiment). 
The reproduction rate or \emph{Malthusian fitness}\footnote{As in the WRL model, the simplifying assumption here is that fitness is equivalent to reproduction rate, whereas other phenomena may also influence the composition of the final population from which one samples to seed the next day's culture, such as the duration of the lag phase or the ability to sustain some growth even when nutrients have become scarce.} of founder individual~$j$ at day $i$ is $R_{ij}^{}$, where $i \geqslant 0$ and $1 \leqslant j \leqslant N$. It is assumed that at day 0 all individuals have identical rates, $R_{0j}^{} \equiv R_0^{}$, so the population is \emph{homogeneous}. Offspring inherit the reproduction rates from their parents. We use dimensionless variables right away. Therefore we denote by \begin{linenomath}\postdisplaypenalty=0 \begin{equation} t = R_0 T \quad \text{and } r^{}_{ij} = \frac{R_{ij}}{R_0} \label{t} \end{equation} \end{linenomath} dimensionless time and rates, so that on the time scale $t$ there is, on average, one split per time unit at the beginning of the experiment (this unit is $\approx 55$ minutes, cf. \citet{Barrick09}) and $r^{}_{0j} \equiv 1$. In this paragraph, we consider the $r_{ij}^{}$ as given (non-random) numbers. We thus have $N$ independent \emph{Yule processes} at day $i$: all descendants of founder individual $j$ (the members of the $j$-clone) branch at rate $r^{}_{ij}$, independently of each other. They do so until $t=\sigma_i$, where $\sigma_i$ is the duration of the growth phase on day $i$. We define $\sigma_i$ as the value of $t$ that satisfies \begin{equation}\label{def_sigma} \begin{split} & \mathbb{E}(\text{population size at time } t) \\ & = \sum_{j=1}^N \mathrm {e}^{r^{}_{ij}t} =\gamma N, \end{split} \end{equation} where $\gamma$ is, equivalently, the multiplication factor of the population within a day, the average clone size, and the dilution factor from day to day in the experiment ($\gamma \approx 100$ in the LTEE). 
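The defining relation \eqref{def_sigma} has no closed form for heterogeneous rates, but is easily solved numerically. A Python sketch (with made-up rate configurations) that also confirms the homogeneous special case $\sigma = (\log\gamma)/r$:

```python
import math

def growth_duration(rates, gamma=100.0, tol=1e-12):
    """Solve sum_j exp(r_j * t) = gamma * N for t by bisection, cf. (def_sigma)."""
    N = len(rates)
    target = gamma * N
    lo, hi = 0.0, 1.0
    while sum(math.exp(r * hi) for r in rates) < target:
        hi *= 2.0                       # bracket the root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(math.exp(r * mid) for r in rates) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Homogeneous population: sigma = log(gamma) / r exactly.
print(growth_duration([1.0] * 10), math.log(100.0))   # both ~ 4.605

# A slightly heterogeneous (hypothetical) population reaches the
# nutrient limit earlier, i.e. has a shorter growth phase.
print(growth_duration([1.0] * 9 + [1.05]))
```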
Note that the Yule processes are stochastic, so the population size at time $t$ is, in fact, random; in the definition of $\sigma_i$, we have idealised by replacing this random quantity by its expectation. Since $N$ is very large, this is well justified, because the fluctuations of the random time needed to grow by a factor of 100 in size are small relative to its expectation. Note also that the assumption of a fixed dilution factor presupposes a one-to-one correspondence between the amount of nutrient and the population size at the end of the day; this may be violated if the cell size evolves over time, as was observed in some lines of the experiment \citep{LT94}. \paragraph{Interday dynamics.} At the beginning of day $i > 0$, one samples $N$ new founder individuals out of the $\gamma N$ cells from the population at the end of day $i-1$. We assume that one of these new founders carries a \emph{beneficial mutation} with probability $\mu$; otherwise (with probability $1 - \mu$), there is no beneficial mutation. We think of $\mu$ as the probability that a beneficial mutation occurs in the course of day $i-1$ and is sampled for day $i$. Given the constant number of cell divisions per day (regardless of the current fitness value), $\mu$ is independent of $i$. Assume that the new beneficial mutation at day $i$ appears in individual $m$, and that the reproduction rate of the corresponding founder individual $k$ in the morning of day $i-1$ was $r_{i-1, k}$. The new mutant's reproduction rate is then assumed to be \begin{equation}\label{r_increase} r^{}_{im} = r^{}_{i-1,k} + \delta(r_{i-1,k}) \text{ with } \delta(r) := \frac{\varphi}{ r^{q}}. \end{equation} Here, $\varphi$ is the beneficial effect due to the first mutation (that is $\delta(1)$, which applies while \mbox{$r=1$}), and $q$ determines the strength of epistasis. 
In particular, $q=0$ implies constant increments (that is, additive fitness), whereas $q>0$ means that the increment decreases with $r$, that is, we have diminishing returns epistasis. Let us, at this point, include some comments on the modelling of both fitness increments and mutation. As to fitness, note first that we only take into account beneficial mutations. While neutral and deleterious mutations are, in general, considered more frequent than beneficial ones \citep{EWK07}, we will follow \cite{WRL13} and work in a parameter regime of weak mutation and moderate selection, where beneficial mutations originate and go to fixation one by one, while neutral and deleterious mutations do not contribute to the fitness trajectory \citep{McCS14}; in contrast to strong-mutation regimes that may lead to fitness waves including all kinds of mutations (as discussed, for example, by \cite{MR16}). We also adhere to the simplistic assumption that the fitness landscape is \emph{permutation invariant}, that is, every beneficial mutation on the same background conveys the same deterministic fitness increment, no matter where it appears in the genome. This simplification is common both in the classical literature (Fisher's (1918) \emph{staircase model}) and in the modern literature \citep{DeFi07}. In particular, this entails that the effect of neutral networks is neglected. In fact, neutral mutations can play important roles because they can explore distant fitness peaks via neutral networks (see \cite{HSF96} for early work in the context of RNA structures, \cite{KCGP06} for an application to the evolution of influenza viruses, and \cite{MC15} for a recent study of the effects on the molecular clock). The assumption will be relaxed in Sec.~\ref{sec:stoch}, where we turn to stochastic increments. As to the mutation model, let $M_i^{}$ be the number of new mutants in the sample of size $N$ at the beginning of day $i$. So far we have assumed that $M_i$ can only take the values 1 or 0. 
More generally, for describing the random number of individuals that are offspring of new mutants from day $i-1$ {\em and} make it into the $N$-sample at the beginning of day $i$, we might consider integer-valued random variables $M_i$ with small expectation $\mu$. The above definition of the mutation mechanism means in particular that the mutation probability does not depend on the current fitness value. We keep this assumption also for the distribution of $M_{i}$, and, as in \eqref{r_increase}, suppose that any mutation adds $\delta(r)= \varphi /r^q$ to the pre-mutant reproduction rate. Unless $\mu$ is very small, realism may be added by using Poisson random variables, which is what we do in the simulations, see Appendix~C. One might also think of a finer \emph{intraday modelling} of the mutation mechanism, cf. \cite{WGSV02}, \cite{WZ15}, or \cite{LCW18}. Although the limit theorem in \cite{GKWY16} is proved only for binary random variables $M_i^{}$, we conjecture that its assertion also holds for non-binary $M_i$ in the scaling regime \eqref{scaling} discussed below, at least as long as the variances of the $M_i$ remain bounded as $N\to \infty$. We will adhere to the binary assumption in our analysis, and it will lead to very satisfactory approximations, see Sec.~\ref{sec:ci}. Note also that we have idealised by not taking into account the change in fitness due to mutation during the day; this is because a mutant appearing during the day will not rise to appreciable frequency in the course of this first day of its existence, and thus will not change the overall growth rate of the population in any meaningful way. 
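As a minimal sketch of the Poisson variant of the mutation step (pure Python; the per-day probability $\mu$ below is illustrative, not a fitted value):

```python
import math
import random

def poisson_knuth(lam, rng):
    """Sample a Poisson(lam) variate (Knuth's method; fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

mu = 0.05                    # illustrative per-day expected number of mutants
rng = random.Random(42)
days = 100_000
mutant_days = sum(1 for _ in range(days) if poisson_knuth(mu, rng) >= 1)
# For small mu, P(at least one new mutant on a given day) ~ 1 - exp(-mu) ~ mu.
print(mutant_days / days)
```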
\paragraph{Mean relative fitness.} With a view towards~\eqref{emp_rel_fitness} we define the {\em mean relative fitness}, depending on the configuration of reproduction rates $r_{ij}$ of the $N$ individuals in the sample at the beginning of day $i$, as \begin{equation}\label{rel_fitness} F_i := \frac{1}{\sigma_i} \log \Big ( \frac{1}{N} \sum_{j=1}^{N} \mathrm {e}^{r^{}_{ij} \sigma_i^{}} \Big ). \end{equation} Here, $\sigma_i$ is as defined in~\eqref{def_sigma}, so that $F_i = (\log \gamma)/\sigma_i.$ Comparing \eqref{emp_rel_fitness} and \eqref{rel_fitness} we see that the former contains additional sources of randomness: on the one hand, the numerator of \eqref{emp_rel_fitness} may be viewed as stemming from a sample that was drawn from the population at the end of day $i-1$ (and which consists of individuals different from those present at the beginning of day $i$), on the other hand, the duration of the growth phase leading to \eqref{emp_rel_fitness} is not a predicted time as in \eqref{rel_fitness} but an empirical time coming out of the competition experiment between the samples from day $i$ and day~$0$. However, since the samples consist of a large number of individuals, the random variables occurring in \eqref{emp_rel_fitness} will, with high probability, come out close to their expectations, thus making even a single copy of the random variable \eqref{emp_rel_fitness} a reasonably good approximation of~\eqref{rel_fitness}, at least if the population at day $i$ is sufficiently homogeneous. To see this, we account for the new time scale and write $\widetilde \sigma_i = R_{0} \, T_i$ together with $y_i(t) = Y_i(T)$ for $T=0$ and $T=T_i$. Assume that the competition experiment starts with a sample of size $y_0=n$ from the ancestral population and a sample of size $y_i=n$ from the day-$i$ population. 
In the `deterministic approximation' mentioned above, the duration of the experiment, $\widetilde \sigma_i$, then is the solution of \[ n\, \mathrm {e}^t + \sum_{j=1}^n \mathrm {e}^{r^{}_{ij} t} = 2\, n\, \gamma, \] so \[ y^{}_0(\widetilde \sigma^{}_i) \approx n \, \mathrm {e}^{\widetilde \sigma^{}_i} \quad \text{and } y^{}_i(\widetilde \sigma^{}_i) \approx \sum_{j=1}^n \mathrm {e}^{r^{}_{ij} \widetilde \sigma^{}_i}. \] Consequently, \begin{equation}\label{hatFi} \widetilde{F}_i = \frac{\log \big ( y^{}_i(\widetilde \sigma^{}_i) / y^{}_i(0) \big ) }{\log \big ( y^{}_0(\widetilde \sigma^{}_i) / y^{}_{0}(0) \big )} \approx \frac{1}{\widetilde \sigma^{}_i} \log \bigg ( \frac{1}{n} \sum_{j=1}^n \mathrm {e}^{r^{}_{ij} \widetilde \sigma^{}_i} \bigg ). \end{equation} Due to the enhanced reproduction rates at day $i$ compared to day $0$, $\widetilde \sigma^{}_i$ will generically be larger than $\sigma_i$. This is because $\widetilde \sigma^{}_i$ refers to a mixture of day-$i$ and day-0 populations, whereas $\sigma_i$ relates to a `pure' day-$i$ population. But if the day-$i$ population is homogeneous, that is, $r_{ij} \equiv r_i$, one has $y^{}_i(\widetilde \sigma^{}_i) = n \, \mathrm {e}^{r_i \widetilde \sigma_i}$, so $\widetilde \sigma_i$ cancels out in \eqref{hatFi}, and $\widetilde F_i \approx F_i$. If the population is inhomogeneous, however, $\widetilde F_i $ will be systematically larger than $F_i$, because then the individuals with a larger reproduction rate will get more weight in \eqref{emp_rel_fitness} than in~\eqref{rel_fitness}.\label{fn:compare:emp_rel_fit:rel_fitness} Fortunately, it will turn out in Sec.~\ref{sec:stoch} that polymorphism is low in our populations, so $F_i$ may be taken as a valid approximation to $\widetilde F_i$. Note that \eqref{rel_fitness} implies that \begin{equation}\label{exp_rel_fitness} \mathrm {e}^{F_i^{} \sigma_i^{}} = \frac{1}{N} \sum_{j=1}^{N} \mathrm {e}^{r^{}_{ij} \sigma_i^{}}. 
\end{equation} Thus, $F_i$ may be understood as the \emph{effective reproduction rate} of the population at day $i$, which is different from the mean Malthusian fitness $\frac{1}{N} \sum_j r^{}_{ij}$ unless the population is homogeneous. \paragraph{Heuristics leading to the limit law.} Assume a new mutation arrives in a \emph{homogeneous} population of relative fitness $F$. It conveys to the mutant individual a relative \emph{fitness increment} \begin{equation}\label{delta_N} \delta (F) = \frac{\varphi}{F^q}, \end{equation} that is, the mutant has relative Malthusian fitness $F+\delta (F)$. The length of the growth period then is \begin{equation}\label{sigma} \sigma(F) =\frac{\log \gamma}{F} \end{equation} (since this solves $\mathrm {e}^{Ft}=\gamma$, cf. \eqref{def_sigma}). We now define the \emph{selective advantage} of the mutant as \begin{equation}\label{s_N} s(F) = \delta(F) \, \sigma(F). \end{equation} Obviously, \emph{the length $\sigma$ of the growth period decreases with increasing} $F$ and, since $s$ in \eqref{s_N} decreases with decreasing $\sigma$, $s$ would decrease with increasing $F$ even if $\delta(F)$ were constant. This is what we call the \emph{runtime effect:} adding a constant to an interest rate $F$ of a savings account becomes less efficient when the runtime decreases. Let us explain the reasoning behind \eqref{s_N}. In population genetics, the selective advantage (of a mutant over a wildtype) per generation is \begin{equation}\label{s} s = \frac{a_1^{}-a_0^{}}{a_0^{}}, \end{equation} where $a_0^{}$ ($a_1^{}$) is the expected number of descendants of a wildtype (mutant) individual in one generation; Eq.~\eqref{s} has the form of a \emph{return} (of a savings account, say). 
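The runtime effect can be made concrete numerically: even with constant increments ($q=0$, no epistasis), the selective advantage \eqref{s_N} decreases with $F$ solely because the growth period \eqref{sigma} shortens. A Python sketch with illustrative parameter values:

```python
import math

gamma, phi, q = 100.0, 0.02, 0.0   # q = 0: constant increments, no epistasis

def delta(F):
    return phi / F**q              # fitness increment, Eq. (delta_N)

def sigma(F):
    return math.log(gamma) / F     # length of the growth period, Eq. (sigma)

def s(F):
    return delta(F) * sigma(F)     # selective advantage, Eq. (s_N)

for F in (1.0, 1.5, 2.0):
    print(F, s(F))                 # s decreases although delta is constant
```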
If growth is in continuous time with Malthusian parameters $r_0^{}$ and $r_1^{}=r_0^{} + \delta$, respectively, and a generation takes time $\sigma$, then $a_0^{}=e^{r_0^{}\sigma}$ and $a_1^{}=e^{r_1^{}\sigma} \approx a_0^{}\, (1 + \delta\,\sigma)$ if $\delta$ is small, which turns \eqref{s} into \eqref{s_N}. Often, the appropriate notion of a generation is the time until the population has doubled in size, see e.g. Eq.~(3.2) in \citet{Chevin11}, which provides an analogue to \eqref{s_N}. In our setting, the corresponding quantity is the time required for the population to grow to $\gamma$ times its original size, which is the length $\sigma(F)$ of the growth period in \eqref{sigma}.\footnote{In line with this, we choose days as our discrete time units, as already mentioned in Sec.~\ref{sec:LTEEWRL}.} Together with the above expression for $s$, this explains \eqref{s_N}. Notably, a formula that is perfectly analogous to \eqref{s_N} also appears in \citet[p.~1977, last line]{Sanjuan10}; there, the concept of a viral generation is associated with the cell infection cycle, and the number $K$ (which corresponds to our~$\gamma$) is the burst size or viral yield per cell. Furthermore, it is precisely this notion of selection advantage conveyed by \eqref{s_N} and \eqref{s} that governs the \emph{fixation probability}. Namely, the fixation probability of the mutant turns out to be \begin{equation}\label{pi_N} \pi(F) \sim C\, s(F). 
\end{equation} Here, $\sim$ means asymptotic equality in the limit $N \to \infty$, \footnote{That is, $\pi(F) / (C \, s(F)) = \pi^{}_N(F) / (C \, s^{}_N(F)) \to 1$ as \mbox{$N \to \infty$}, in the next paragraph’s setting; see \cite{GKWY16}.} and $C := \gamma/(\gamma - 1)$ is asymptotically twice the reciprocal offspring variance in one Cannings generation of the GKWY model\footnote{Let us emphasise once again that one generation of this Cannings model corresponds to one day in the LTEE.}; that is, with the notation introduced in the first paragraph of the Introduction, the offspring variance $v$ in one Cannings generation satisfies \begin{equation} \label{ourv} v = \mathbb{V}(\nu_1) \sim 2 \, \frac{\gamma-1}{\gamma} = \frac{2}{C}. \end{equation} Hence \eqref{pi_N} is in line with Haldane's formula \begin{equation}\label{Haldane} \pi \sim \frac s{v/2}, \end{equation} which says that the fixation probability $\pi$ is (asymptotically) the selective advantage $s$ divided by half the offspring variance $v$ in one generation. Haldane's formula relies on a branching process approximation of the initial phase of the mutant growth; see \citet{PW08} for an account of this method, including a historic overview. We will also encounter the branching approximation in the argument around \eqref{branching} below. For the sake of completeness, let us give the following intuitive explanation for \eqref{ourv}. In every Cannings model, one has the relation \begin{equation} \label{vc} v = (N-1) \, p_{\rm coal} \end{equation} between $v$ and the pair coalescence probability $p_{\rm coal}$, that is, the probability that two randomly sampled daughters have the same mother, cf.~\citet[Ch.~4.1]{Dur08}. Eq.~\eqref{vc} then follows readily from the elementary relation $p_{\rm coal}= \mathbb E[\frac{1}{N \, (N-1)} \sum_j \nu_j \, (\nu_j-1)]$, which, in turn, equals $\frac {1}{N-1} (\mathbb E[\nu_1^2] -1)= \frac{1}{N-1} \, v$, because the $\nu_j$ are exchangeable and sum to $N$ by assumption. 
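The identity \eqref{vc} can be checked numerically for any concrete Cannings model. The following Python sketch (an illustration of ours, not part of the original analysis; it uses multinomial resampling, i.e.\ the neutral Wright-Fisher model, as the simplest Cannings instance rather than the GKWY model) estimates the offspring variance $v$ and the pair coalescence probability $p_{\rm coal}$ by Monte Carlo:

```python
import random

# Monte Carlo check (ours) of Eq. (vc) in the simplest Cannings instance:
# multinomial resampling, i.e. each of N daughters picks her mother
# uniformly at random (the neutral Wright-Fisher model).
rng = random.Random(1)
N, reps = 50, 20000
v_hat = 0.0       # offspring variance of individual 1 (E[nu_1] = 1)
coal_hat = 0.0    # pair coalescence probability
for _ in range(reps):
    nu = [0] * N
    for _ in range(N):
        nu[rng.randrange(N)] += 1
    v_hat += (nu[0] - 1) ** 2 / reps
    # P(two sampled daughters share a mother), computed exactly given nu:
    coal_hat += sum(n * (n - 1) for n in nu) / (N * (N - 1)) / reps

# both v_hat and (N - 1) * coal_hat should be close to (N - 1)/N = 0.98
print(v_hat, (N - 1) * coal_hat)
```

For multinomial offspring numbers both sides of \eqref{vc} equal $(N-1)/N$, which the simulation reproduces up to Monte Carlo error.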
In our specific Cannings model, the family size of a randomly sampled daughter individual at the end of the day is, on average, asymptotically twice as large as a typical family size\footnote{The size of the clone to which a sampled individual belongs has a {\em size-biased} distribution; this is in line with the classical {\em waiting time paradox} (cf.~ \citet[Example~4.16]{HOG13}). In our model, the size distribution of a typical clone at the end of the day is approximately geometric with parameter $1/\gamma$, and the size-biasing of this distribution results (approximately) in a negative binomial with parameters $2$ and $1/\gamma$. Consequently, the expected size of the clone to which a sampled individual belongs is approximately $2 \, \gamma$, that is twice the expected size of a typical clone. This proportion carries over from the clones to the families of sampled individuals. Let us emphasise once again that a \emph{family} consists of the founders at the beginning of the next day that go back to the same founder in the current day; whereas a \emph{clone} consists of all descendants of a founder at the end of a day, regardless of whether they are sampled for the next day or not.}. Since we have $N$ clones of average size~$\gamma$, and the sampling is without replacement, we have \begin{equation}\label{ourPCP} p_{\rm coal} \sim \frac {2}{N} \, \frac{\gamma-1}{\gamma}. \end{equation} Together with \eqref{vc} this implies \eqref{ourv}. Note that \eqref{ourPCP}, at the same time, defines the (coalescence) effective population size via $N_{\text e}=1/p_{\rm coal}$, cf.~\citet[Ch.~3.7]{Ewens04} or \citet[Ch.~4.4]{Dur08}. 
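To connect \eqref{ourPCP} with a simulation, the following Python sketch (ours; it implements only the end-of-day sampling step, drawing clone sizes from the approximately geometric distribution mentioned in the footnote instead of simulating the intraday growth) estimates $p_{\rm coal}$ and compares it with the asymptotic value $2(\gamma-1)/(N\gamma)$:

```python
import math
import random

def estimate_p_coal(N, gamma, reps, rng):
    """One day (sketch): N founders grow into clones of Geometric(1/gamma)
    size (support {1, 2, ...}, mean gamma); two daughters drawn without
    replacement from the pool coalesce iff they lie in the same clone."""
    p = 1.0 / gamma
    acc = 0.0
    for _ in range(reps):
        # inverse-transform sampling of the geometric clone sizes
        z = [1 + int(math.log(1.0 - rng.random()) / math.log(1.0 - p))
             for _ in range(N)]
        m = sum(z)
        # exact pair-coalescence probability given the clone sizes
        acc += sum(zi * (zi - 1) for zi in z) / (m * (m - 1))
    return acc / reps

rng = random.Random(7)
N, gamma = 500, 20          # illustrative values, much smaller than the LTEE
p_hat = estimate_p_coal(N, gamma, 2000, rng)
p_asym = 2 * (gamma - 1) / (N * gamma)     # Eq. (ourPCP)
print(p_hat, p_asym)
```

Already for these moderate values of $N$ and $\gamma$, the estimate agrees with \eqref{ourPCP} to within a few percent.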
Another crucial ingredient of the heuristics is the time window of length \begin{equation}\label{u_N} u(F) \sim \frac{\log \big (N \, s(F) \big )}{s(F)} \end{equation} after the appearance of a beneficial mutation that will survive drift (a so-called \emph{contending mutation}); this approximates the expected time it takes for the mutation to become dominant in the population \citep{MS76,DeFi07}. To see this, let us again resort to the branching process approach that led to \eqref{Haldane}. Namely, we approximate the expected offspring size of a mutant after $i$ days by $Z_i$, where $(Z_i)_{i \in \mathbb{N}}$ is a discrete-time branching process with offspring expectation $1+s$ per day, condition on non-extinction, and obtain \begin{equation}\label{branching} \begin{split} \mathbb E[Z_i \mid Z_i > 0] &= \frac{\mathbb E[Z_i \, \mathbbm{1}_{Z_i>0}]}{\mathbb P(Z_i>0)} \\ & \sim_i \frac{\mathbb E[Z_i]}{\pi} \sim \frac{v}{2} \, \frac{(1 + s)^i}{s}, \end{split} \end{equation} where $\mathbbm{1}$ is the indicator function, $\sim_i$ means asymptotic equivalence as $i \to \infty$ (while $\sim$ continues to refer to $N \to \infty$), and \mbox{$\mathbb P (Z_i>0) \sim_i \pi$} because extinction typically happens early; whereas for large $i$, the process has grown large with high probability and then only runs a tiny extinction risk. Since $\log (1+s) \sim s$, the quantity $u$ of \eqref{u_N} is then (as $N \to \infty$) asymptotically equivalent to the solution of \[ \mathbb E[Z_i \mid Z_i > 0] \sim \varepsilon \, N \] for any positive constant $\varepsilon$ (so the right-hand side is a sizeable proportion of the population). All this now leads us to the dynamics of the relative fitness process. As illustrated in Fig.~\ref{loss_and_fix}, most mutants only grow to small frequencies and are then lost again (due to the sampling step). 
But if it does happen that a mutation survives the initial fluctuations and gains appreciable frequency, then the dynamics turns into an asymptotically deterministic one and takes the mutation to fixation quickly, cf.~\cite{GL98}, \cite{DeFi07}, or \citet[Ch.~6.1.3]{Dur08}. Indeed, within time $u(F)$, the mutation has either disappeared or gone close to fixation. Moreover, in the scaling regime \eqref{scaling} specified in the next paragraph, this time is much shorter than the mean interarrival time $1/\mu$ between successive beneficial mutations (recall that $\mu$ is the \emph{sample-wide} mutation probability). As a consequence, there are, with high probability, at most two types present in the population at any given time (namely, the \emph{resident} and the \emph{mutant}), and \emph{clonal interference is absent}. Therefore, in the scenario considered, survival of drift is equivalent to fixation. In the literature, the parameter regime $u \ll 1/\mu$ is known as the \emph{periodic selection} or \emph{sequential fixation} regime, and the resulting class of \emph{origin-fixation} models is reviewed in \cite{McCS14}. \begin{figure}\centering \input{heuristicLLN.tex} \caption{ Schematic drawing of the relative fitness process (black) and the approximating jump process (grey). } \label{loss_and_fix} \end{figure} Next, we consider the expected per-day increase in relative fitness, given the current value~$F$. This is \begin{equation}\label{E_F} \begin{split} \mathbb{E}(\Delta F \mid F) \, & \approx \, \mu \, \pi(F) \, \delta (F) \\ & \sim \, \frac{\Gamma}{F^{2q+1}}. \end{split} \end{equation} Here, the asymptotic equality is due to \eqref{delta_N}--\eqref{s_N} and \eqref{pi_N}, and the compound parameter \begin{equation}\label{compound} \Gamma := C \, \mu \, \varphi^2 \log \gamma \end{equation} is the rate of fitness increase per day at day~0 (where $r_{0j}^{} \equiv F_0^{}= 1$).
Note that $\varphi/F^q$ appears squared in the asymptotic equality in \eqref{E_F} since it enters both $\pi$ and $\delta$. Note also that the additional $+1$ in the exponent of $F$ comes from the factor of $1/F$ in the length of the growth period \eqref{sigma}, and thus reflects the runtime effect. As was explained in the context of \eqref{s} and \eqref{Haldane}, this crucial difference is caused by the decrease of the selective advantage with decreasing length of the growth period. We will analyse this difference in some depth in the Discussion. Let us only mention here that the effect would be absent if, instead of our Cannings model, a discrete-generation scheme were used, as by \citet{WRL13}; or a standard Wright-Fisher model, for which \citet{KTP09} calculated the expected fitness increase and the fitness trajectory for various fitness landscapes, including the one given by \eqref{delta_N}. Eq. \eqref{E_F} now leads us to define a new time variable $\tau$ related to $i$ of \eqref{t_gamma} via \begin{equation}\label{tau_Gamma} i = \Big \lfloor \frac{\tau}{\Gamma} \Big \rfloor \end{equation} with $\Gamma$ of \eqref{compound}, which means that one unit of time $\tau$ corresponds to $\Gamma$ days. With this rescaling of time, Eq. \eqref{E_F} corresponds to the differential equation \begin{equation}\label{ODE} \frac{\D{}}{\D{\tau}} f(\tau) = \frac{1}{f^{2q+1}(\tau)}, \quad f(0)=1, \end{equation} with solution \begin{equation}\label{LLN} f(\tau) = \big ( 1+2\, (1+q) \, \tau \big )^{\frac{1}{2(1+q)}}. \end{equation} This is the desired power law for the fitness trajectory. Note that \eqref{ODE} is just a scaling limit of \eqref{E_F}, where the expectation was omitted due to a dynamical law of large numbers, as will be explained next. 
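As a quick check of the preceding step, one can integrate \eqref{ODE} numerically and compare with the closed form \eqref{LLN}; the following Python sketch (ours) does this with a simple Euler scheme, using the point estimate $q=4.2$ discussed in the text:

```python
# Euler integration (ours) of the scaled dynamics (ODE),
#   f'(tau) = f(tau)**(-(2*q + 1)),  f(0) = 1,
# compared with the closed form (LLN); q = 4.2 is the point estimate
# discussed in the text.
q = 4.2
dtau = 1e-4
f_num, tau = 1.0, 0.0
while tau < 10.0:
    f_num += dtau * f_num ** (-(2 * q + 1))
    tau += dtau
f_exact = (1 + 2 * (1 + q) * tau) ** (1 / (2 * (1 + q)))
print(f_num, f_exact)
```

The two values agree to well within the discretisation error of the Euler scheme.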
\paragraph{Scaling regime and law of large numbers.} We now think of $\mu = \mu_N$ and $\varphi= \varphi_N$ as being indexed with population size because the law of large numbers requires considering a sequence of processes indexed with $N$. Thus, other quantities now also depend on $N$ (so $\delta=\delta_N^{}, s=s^{}_N$, $\pi=\pi^{}_N$, $\Gamma=\Gamma_N$ etc.), and so does the \emph{relative fitness process} $(F_i)_{i \geqslant 0} = (F^N_i)_{i \geqslant 0}^{}$ with $F_i$ of \eqref{rel_fitness}. More precisely, we will take a \emph{weak mutation--moderate selection limit}, which requires that $\mu_N$ and $\varphi_N$ become small in some controlled way as $N$ goes to infinity. Specifically, \citet{GKWY16} assume \begin{equation}\label{scaling} \begin{split} & \mu^{}_N \sim \frac{1}{N^{a}}, \; \varphi^{}_N \sim \frac{1}{N^{b}} \quad \text{as} \quad N \to \infty, \\ & 0 < b < \frac{1}{2}, \; a > 3\,b. \end{split} \end{equation} Due to the assumption $a> 3\,b$, $\mu_N$ is of much lower order than $\varphi_N$. This is used by \citet{GKWY16} to prove that, as $N\to \infty$, with high probability no more than two fitness classes are simultaneously present in the population over a long time span. Note that $\mu_N$ is the per-day \emph{mutation probability per population} (but see the discussion at the end of the paragraph on interday dynamics at the beginning of this section). Furthermore, the scaling of $\varphi_N$ implies that selection is stronger than genetic drift as soon as the mutant has reached an appreciable frequency. The method of proof applied by \citet{GKWY16} requires the assumptions \eqref{scaling} in order to guarantee a coupling between the new mutant's offspring and two nearly critical Galton-Watson processes between which the mutant offspring's size is `sandwiched' for sufficiently many days.
Specifically, under the assumption $0 < b < \frac{1}{2}$, the coupling works until the mutant offspring in our Cannings model has reached a small (but strictly positive) proportion of the population, or has disappeared. A careful inspection of the arguments shows that, under the weaker condition $0 < b < \frac{2}{3}$, this coupling works at least until the mutant offspring has (either disappeared or) reached size $N^b$, from which it then goes to fixation by a law of large numbers argument. This makes the limit result of \citet{GKWY16} valid for $0 < b < \frac{2}{3}$; we conjecture that it even holds for $0 < b < 1$. In the case where selection is much stronger than mutation, the classical models of population genetics, such as the Wright-Fisher or Moran model, display the well-known dynamics of sequential fixation. Two distinct scenarios can happen (see the review by \citet{McCS14}, or \citet[Ch.~2 and Fig.~2.7]{GL00}): either a fast loss of a new beneficial mutation, or its fixation. Qualitatively, our Cannings model displays a similar behaviour. Furthermore, as already indicated, with the chosen scaling the population turns out to be homogeneous on generic days~$i$ as $N\to \infty$. This has the following practical consequences for the relative fitness process $(F_i^N)_{i \geqslant 0}^{}$. First, on a time scale with a unit of $1/(\mu_N \, \varphi_N)$ days, $(F^N_i)_{i \geqslant 0}^{}$ turns into a jump process as $N \to \infty$, cf. Fig.~\ref{loss_and_fix}. Second, on the (generic) days $i$ at which the populations are nearly homogeneous, the subtle systematic difference between \eqref{emp_rel_fitness} and \eqref{rel_fitness}, as described in Footnote~\ref{fn:compare:emp_rel_fit:rel_fitness}, will disappear. 
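The jump-process picture can be made concrete with a small simulation. The following Python sketch (ours; it simulates the heuristic jump dynamics rather than the full Cannings model, with illustrative parameter values that are not the fitted LTEE values) lets the relative fitness jump by $\delta(F)$ with probability $\mu\,\pi(F)$ per day and compares the empirical mean with $f$ of \eqref{LLN}:

```python
import math
import random

# Minimal simulation (ours) of the heuristic jump process: in background F,
# a contending mutation appears with probability mu * pi(F) per day, where
# pi(F) = C * s(F), and the relative fitness then jumps by delta(F) = phi/F**q.
random.seed(2)
gamma, q = 100.0, 4.2
mu, phi = 0.05, 0.1            # illustrative, not the fitted LTEE values
C = gamma / (gamma - 1)
Gamma = C * mu * phi**2 * math.log(gamma)     # Eq. (compound)
days, runs = 20000, 100
finals = []
for _ in range(runs):
    F = 1.0
    for _ in range(days):
        s = phi * math.log(gamma) / F ** (q + 1)    # selective advantage
        if random.random() < mu * C * s:            # survives drift
            F += phi / F ** q                       # fitness jump
    finals.append(F)
mean_F = sum(finals) / runs
# law-of-large-numbers prediction f(Gamma * days) from Eq. (LLN)
f_lln = (1 + 2 * (1 + q) * Gamma * days) ** (1 / (2 * (1 + q)))
print(mean_F, f_lln)
```

Averaged over the runs, the simulated trajectories track the deterministic limit \eqref{LLN} closely, in line with the law of large numbers described above.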
The precise formulation of the limit law \citep{GKWY16} reads as follows. \smallskip \noindent {\bf Theorem} {\it For $N \to \infty$ and under the scaling \eqref{scaling}, the sequence of processes $\big (F^N_{\lfloor \tau /\Gamma_N \rfloor} \big )_{\tau \geqslant 0}$ converges, in distribution and locally uniformly, to the deterministic function $\big ( f(\tau) \big )_{\tau \geqslant 0}$ in \eqref{LLN}.} \smallskip \noindent The theorem was proved along the heuristics outlined above\footnote{Note that \cite{GKWY16} partly work with dimensioned variables, which is why the notation and the result look somewhat different.} with the help of advanced tools from probability theory. It is a law of large numbers reasoning, which allows one to go from \eqref{E_F} to \eqref{ODE} (and thus to `sweep the expectation under the carpet'), in the following sense: For large $N$ and under the scaling assumption \eqref{scaling}, fitness is the sum of a large number of small per-day increments accumulated over many days, and may be approximated by its expectation. Since time has been rescaled via \eqref{tau_Gamma}, Eq.~\eqref{LLN} has $q$ as its single parameter. Note that $1/(2\,(1+q))<1$ (leading to a concave $f$) whenever $q \geqslant 0$; in particular, \emph{the fitness curve is concave even for} $q=0$, \emph{that is, in the absence of epistasis}. In contrast, the fitness trajectory obtained by \cite{KTP09} for the Wright-Fisher model under $q = 0$ is linear. The difference is due to the runtime effect, which is present in our Cannings model even for $q=0$ because of the parametrisation of the intraday dynamics with the individual reproduction rate $r$: If the population as a whole already reproduces faster, then the end of the growth phase is reached sooner and thus leaves less time for a mutant to play out its advantage $\delta(r) = \varphi/r^0 = \varphi$ of \eqref{r_increase}; see also the discussion in Sec.~\ref{sec:discussion}.
The Wright-Fisher model of \cite{KTP09} does not display the runtime effect because it does not contain the individual (intraday) reproduction rate as a parameter. The second parameter, namely $\Gamma_N$, reappears when $\tau$ is translated back into days; that is, $F^N_i \approx f(\Gamma_N \, i)$. Note that $R_0$, as used in the first nondimensionalisation step \eqref{t}, is not an additional parameter because it is already absorbed in $\varphi^2_N$. \paragraph{A first reality check.} The limit law \eqref{LLN} is identical with the power law \eqref{powerlaw} of \cite{WRL13} up to a transformation of the parameters that relies on relevant details in the modelling (see also the discussion in Sec.~\ref{sec:discussion}). We have $q=g-1$, so $\widehat g = 5.2$ of Sec.~\ref{sec:LTEEWRL} translates into $\widehat q=4.2$.\footnote{Recall that we denote parameter estimates by a hat to distinguish them from the corresponding theoretical quantities.} Furthermore, $\Gamma = (\beta \, \log_2 \gamma) /(2\, (1+ q))$ due to \eqref{powerlaw} and \eqref{LLN} together with the fact that $k = (\tau \log_2 \gamma)/\Gamma$ by \eqref{t_gamma} and \eqref{tau_Gamma}; given $\widehat \beta=5.1 \cdot 10^{-3}$, this results in $\widehat \Gamma = 3.2 \cdot 10^{-3}$ (here and in what follows, we again suppress the index $N$, since we will work with fixed, finite $N$ from now on). The resulting fit is reproduced in Fig.~\ref{cannings_det_init_0p11_uncorrected} (red solid line). In line with \citet[Fig.~2]{WRL13}, we average over all 12 populations, at this point neglecting a certain variability of the parameters between the populations, see their Table~S4. For comparison, we have also included in Fig.~\ref{fig:WRL_data} the fit without epistasis, that is, for $q=0$; as well as the linear one, which applies in the absence of both epistasis and runtime effect. 
In the notation corresponding to \eqref{powerlaw}, these are $\widetilde f(k)=\sqrt{1+\widehat{\beta}_{\mathrm{sqr}} \, k}$ and $\widetilde f(k)=1+ \widehat{\beta}_{\mathrm{lin}} \, k$ with suitable constants $\widehat{\beta}_{\mathrm{sqr}}$ and $\widehat{\beta}_{\mathrm{lin}}$. In the light of \eqref{compound}, of the given value $\widehat \Gamma$, and of the fact that $C \, \log \gamma \approx 4.7$, the values of $\widehat \mu$ and $\widehat \varphi$ cannot both be very small. We therefore now check the limit law against realistic parameter values. \begin{figure*}[!ht]\centering \resizebox{.63\textwidth}{!}{ \input{cannings_det_init_0p11_uncorrected.tex} } \caption{Least-squares fit of the curve \eqref{LLN} to the data in \cite{WRL13DATA}, and stochastic simulations of finite populations with deterministic beneficial effects. Red bullets: mean empirical relative fitness (averaged over all 12 populations) with error bars as in Fig.~\ref{fig:WRL_data}; solid red line: $F^{}_i \approx f(\widehat\Gamma \, i)$ with parameter values $\widehat q=4.2$ and $\widehat \Gamma=3.2 \cdot 10^{-3}$; green lines: 12 individual trajectories $F_i$ obtained via Cannings simulations with $N=5 \cdot 10^6, \gamma=100, \widehat \varphi = 0.11$, and $\widehat \mu=0.057$; light blue line: average over the 12 simulations; inset: zoom on the early phase.} \label{cannings_det_init_0p11_uncorrected} \end{figure*} We start by decomposing the compound parameter $\Gamma$. Recall from \eqref{r_increase} that the \emph{fitness increment due to the first beneficial mutation} is \begin{equation} \label{delta1} \varphi = \delta(F_0^{}) = \delta(1). \end{equation} This was estimated as $0.1$ by \citet{Lenski91}, see also \citet{GL98}, and \citet{WRL13}. For reasons to be explained in Sec.~\ref{sec:det}, however, we work with the somewhat larger value $\widehat \varphi = 0.11$. 
The mutation probability may then be obtained from \eqref{compound} as \begin{equation}\label{est_mutrate} \widehat \mu=\frac{\widehat\Gamma}{C \, \widehat \varphi^{\,2} \log \gamma} = 0.057. \end{equation} Stochastic simulations of the GKWY model, performed with Algorithm~\ref{alg:cannings} described in Appendix~C and using the above parameters\footnote{The table in Appendix~C contains the precise values used in the simulations, whereas numbers are rounded to two decimals throughout the text.} together with {$N= 5 \cdot 10^6$}, are also shown in Fig.~\ref{cannings_det_init_0p11_uncorrected}. Their mean (over 12 runs) recovers the basic shape of the fitness curve, but systematically underestimates both the limit law and the data. A natural explanation for this is clonal interference, which is absent in the limit under the scaling \eqref{scaling}, but leads to loss of mutations for finite $N$. This will be taken into account in Sec.~\ref{sec:ci}. But let us note here that the fluctuations in the data are rather larger than those of the simulations; this may well go along with a variability of the parameters between the 12 replicates of the LTEE, which is present in the data, but not in our simulations. \section{Including clonal interference} \label{sec:ci} As discussed in Sec.~\ref{sec:GKWYLLN}, the scaling regime in the GKWY model was such that, with high probability, no new beneficial mutation arrived while the previous one was on its way either to extinction or fixation. As indicated by the simulation results in Fig.~\ref{cannings_det_init_0p11_uncorrected}, clonal interference should also be taken into account. Briefly stated, clonal interference refers to the situation where a second contending mutation appears while the previous one is still on its way to fixation (recall also Footnote~\ref{foot_CI}).
It is crucial to keep in mind that, unlike the case without clonal interference considered in Sec.~\ref{sec:GKWYLLN}, survival of drift may then no longer be identified with fixation; rather, there may be an additional loss of contending mutations due to clonal interference. In particular, the quantity $\pi$ of \eqref{pi_N} must now be addressed as the \emph{probability to survive drift} rather than the fixation probability. A full analytic treatment of clonal interference is beyond the scope of this paper; in particular, we will not prove a law of large numbers here. Rather, we refine and adapt the heuristics of \cite{GL98}, see also \cite{WRL13}. We will first consider the deterministic effects as assumed in the GKWY model in Sec.~\ref{sec:det} and then proceed to random effects from a very general class of probability distributions in Sec.~\ref{sec:stoch}. \subsection{Deterministic beneficial effects} \label{sec:det} The heuristics of \citet{GL98} was originally formulated for fitness effects that follow an exponential distribution; if applied to the degenerate case of deterministic effects, it leads to certain artifacts. We will therefore sketch and apply a {\em thinning heuristics} as a counterpart to the Gerrish-Lenski heuristics. Consider the situation that a second mutation surviving drift appears within the time window $u(F)$ of \eqref{u_N} after the appearance of a first mutation (that is, roughly, before the first mutation has become dominant). Then, with high probability, the second mutation occurs in an individual of relative fitness $F$ (rather than in an individual of relative fitness $F+\delta(F)$), and therefore belongs to the same fitness class as the first mutant and its offspring. Thus, as far as fitness is concerned, the two mutants (and their offspring) can be considered equivalent.
In our heuristics, the occurrence of a second (and also a third, fourth, $\ldots$) mutation within the given time window neither speeds up nor decelerates the (order of magnitude of) the time until the new fitness class is established in the population. So $u(F)$ plays the role of a \emph{dead time}, in the sense that the fitness increments carried by contending mutations arriving within this period are lost. We now determine the probability that a given increment is \emph{not} lost by comparing the intensities of two point processes. The first is the process of contending mutations arriving at rate (or intensity) $I_1(F) := \mu \, \pi(F)$. The second is the process of contending mutations arriving outside the dead time of the preceding one. This is a renewal process, where the next point appears after a waiting time of $u(F)$ plus an $\Exp(I_1(F))$-distributed random variable\footnote{Note that both processes are in \emph{continuous time} and approximate what happens in the original discrete-time model. In this sense, $\mu$ is to be understood as a mutation \emph{rate} here.}. The intensity of this process is then the inverse of the expected waiting time, namely \[ I_2 (F) := \frac{1}{u(F)+1/I_1(F)}. \] By a simple argument from renewal theory (see e.g. \cite[Ch.~3.4, in particular Ex.~4.3]{Dur05}), the fraction of contending mutations that are not lost due to clonal interference at fitness level $F$ is thus approximately given by the {\em retainment factor} \begin{equation}\label{survive_ci_det} \frac{I_2(F)}{I_1(F)} = \frac{1}{1 + I_1(F) \, u(F)}=:\vartheta(F). \end{equation} Under this approximation, the expected per-day increase of the relative fitness, given its current value $F$, turns into \begin{equation}\label{withtheta} \mathbb{E}(\Delta F \mid F) \, \approx \, \mu \, \pi(F) \, \delta (F)\,\vartheta(F). \end{equation} We recall from \eqref{delta_N}--\eqref{s_N} that \[ s(F) = \frac{\varphi\log\gamma}{F^{q+1}}. 
\] Hence \eqref{u_N} becomes \begin{equation}\label{ufull} u(F)\sim \frac {\log \big ((N \, \varphi \log \gamma)/F^{q+1} \big )}{s(F)}; \end{equation} and, due to \eqref{pi_N}, the effect of $F$ cancels out in the leading term in the product of $\pi(F)$ and $u(F)$ in \eqref{survive_ci_det}. Put differently, the dead time and the expected interarrival time of contending mutations increase with $F$ in the same way, up to logarithmic corrections. In order to dispose of the remaining dependence on $F$, namely the one in the numerator of \eqref{ufull}, we replace the factor $(\log \gamma) / F^{q + 1}$ by 1. This somewhat crude-looking approximation seems justified because the term appears under the logarithm in \eqref{ufull}, and $\log(\log(\gamma) / \widetilde{F}^{\widehat q + 1})$ is between $-1$ and $+1$ for $\gamma=100$, our estimate $\widehat q=4.2$, and $\widetilde F$ between $1.1$ and $1.6$ (recall from Fig.~\ref{cannings_det_init_0p11_uncorrected} that $\widetilde F$ is between 1 and 1.7). On the other hand, the estimated value of $\log (N \, \varphi)$ is $ \log (N \, \widehat \varphi) = 13.22$ for $N = 5 \cdot 10^6$ and $\widehat \varphi=0.11$. With this approximation, \eqref{ufull} turns into \begin{equation}\label{ulight} u(F)\approx \frac {\log (N \, \varphi)}{s(F)} \end{equation} and \eqref{survive_ci_det} becomes \begin{equation}\label{constant_theta} \vartheta(F) \equiv \vartheta = \frac{1}{1+C \, \mu \, \log(N \, \varphi)}. \end{equation} Moreover, \eqref{withtheta} becomes \begin{equation}\label{Exp_incr_F_i} \mathbb{E}(\Delta F \mid F) \, \approx \frac{\Gamma}{F^{2q+1}}, \end{equation} where now \begin{equation}\label{compound2} \Gamma = \frac{C \,\mu \, \varphi^2 \log \gamma}{1+ C \, \mu \log (N \, \varphi)} , \end{equation} that is, the factor $\mu$ in \eqref{compound} is replaced by $\mu/(1+C \, \mu \log (N \, \varphi))$.
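As a numerical cross-check of \eqref{constant_theta} and \eqref{compound2} (a sketch of ours, using the rounded values $\widehat\Gamma = 3.2\cdot 10^{-3}$, $\widehat\varphi = 0.11$, $N = 5\cdot 10^6$ and $\gamma = 100$ that appear in the fit): solving \eqref{compound2} for $\mu$ and substituting back recovers $\widehat\Gamma$, and yields $\mu \approx 0.24$ and $\vartheta \approx 0.24$:

```python
import math

# Numerical cross-check (ours) of Eqs. (constant_theta) and (compound2),
# using the rounded values quoted in the text.
N, gamma = 5 * 10**6, 100.0
Gamma_hat, phi_hat = 3.2e-3, 0.11
C = gamma / (gamma - 1)

def denom(p):
    # denominator (up to the factor C) obtained when solving (compound2)
    # for mu; it vanishes at the pole discussed in the text
    return p**2 * math.log(gamma) - Gamma_hat * math.log(N * p)

mu_hat = Gamma_hat / (C * denom(phi_hat))
theta_hat = 1 / (1 + C * mu_hat * math.log(N * phi_hat))
Gamma_back = C * mu_hat * phi_hat**2 * math.log(gamma) * theta_hat
print(mu_hat, theta_hat, Gamma_back)   # ~0.24, ~0.24, 3.2e-3
print(denom(0.09), denom(0.11))        # sign change brackets the pole
```

Note that the sign change of the denominator between $\varphi = 0.09$ and $\varphi = 0.11$ already indicates the pole in the relation between $\mu$ and $\varphi$.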
Now, taking the expectation over $F$ in \eqref{Exp_incr_F_i} yields \[ \mathbb{E}(\Delta F ) \, \approx \Gamma \, \mathbb{E} \Big ( \frac{1}{F^{2q+1}} \Big ) . \] Assuming a suitable concentration of the random variables in question around their expectations (which in theory would be justified by a dynamical law of large numbers result such as the one discussed in Sec.~\ref{sec:GKWYLLN}, and in practice is a crude way of moment closure also implied by \citet{WRL13} and \cite{KTP09}), we interchange the expectation with the nonlinearity and arrive at the approximation \[ F_{\lfloor \tau / \Gamma \rfloor} \approx \mathbb{E} \big ( F_{\lfloor \tau / \Gamma \rfloor} \big ) \approx f(\tau) \; \text{for large } N \] with $f$ as in \eqref{LLN}. We may, therefore, approximate (as in Fig. \ref{cannings_det_init_0p11_uncorrected}) the data by the function $f$, with the same values $\widehat q$ and $\widehat \Gamma$ as before. The compound parameter $\Gamma$, however, has an internal structure different from the previous one (compare \eqref{compound2} with \eqref{compound}). Solving \eqref{compound2} for $\mu$ now yields the mutation rate \begin{equation}\label{find_mu} \mu = \frac{\Gamma}{C \, \big ( \varphi^2 \log \gamma - \Gamma \, \log (N \, \varphi)\big )} . \end{equation} However, the denominator has a pole at \mbox{$\varphi \approx 0.096$} (and is negative for smaller values of $\varphi$), see Fig.~\ref{fig:pole}. The existence of the pole, and the resulting explosion of $\mu$ in its neighbourhood, have the following meaning. According to \eqref{ulight}, the window length $u(F)$ depends on $N, \varphi$, and $s (F)$. Each window goes along with an increment of $F$ by $\delta(F)$, and a spacing on the time axis until the next window begins. 
For smaller $\varphi$, the increments $\delta(F)$ become smaller and the windows get wider, which inevitably means that the gaps between the windows have to be shorter (in order to obtain the observed total increase of $F$ of $\approx 0.7$ within the given time). Shorter gaps between the windows, however, mean larger mutation rates and, in the limit of vanishing gaps, even an infinite mutation rate (and a vanishing retainment factor), which is, of course, not realistic. For an asymptotic analysis as $N\to \infty$, this suggests one has to assume that $\vartheta$ is bounded away from 0. For substantially higher mutation probabilities, the heuristics would break down \citep{FND08} and a different asymptotic regime would apply \citep{DeFi07,DM11}. Our choice of $\widehat \varphi \approx 0.11$ for the simulations in Section~\ref{sec:GKWYLLN} was intended to avoid the numerical instabilities close to the pole of \eqref{find_mu}. For this value, Eq.~\eqref{find_mu} gives $\widehat \mu=0.24$ and, via \eqref{constant_theta}, a retainment factor of $\widehat \vartheta=0.24$. \begin{figure} \resizebox{\columnwidth}{!}{\input{parameterestimation_det.tex}} \caption{\label{fig:pole} $\mu$ (green) and $\vartheta$ (orange) as functions of $\varphi$ according to \eqref{find_mu} and \eqref{constant_theta}; the former has a pole at $\approx 0.096$. Values of $\varphi \lessapprox 0.096$ are forbidden because there $\mu$ (and $\vartheta$) would become negative.} \end{figure} As was to be expected, this now gives a better agreement between the simulated mean fitness and the approximating power law (and hence with the data), see Fig.~\ref{cannings_det_init_0p11}. \begin{figure*}[!ht]\centering \resizebox{.63\textwidth}{!}{\input{cannings_det_init_0p11.tex}} \caption{Cannings simulation as in Fig.~\ref{cannings_det_init_0p11_uncorrected}, but with mutation probability $\widehat \mu=0.24$.
} \label{cannings_det_init_0p11} \end{figure*} Recall that the parameters $\widehat \mu$ and $\widehat \varphi$ have been obtained by fitting a first order (ODE) approximation (of the above described {\em thinning heuristics}) to the empirical data, taking into account some information on the effect of the first successful beneficial mutation. As a consistency check, it is interesting to also simulate the thinning heuristics with these parameters (see Algorithm~\ref{alg:approximation} in Appendix~C) and compare the result with the simulations based on the Cannings model. As shown in Fig.~\ref{approx_det_init_0p11}, the fit of the mean is better for the Cannings simulations than for the simulation of the heuristics in the early phase of the LTEE, and vice versa in the late phase. Note that the simulation of the heuristics yields smaller fluctuations than that of the Cannings model; this goes along with the fact that the model based on the heuristics contains fewer random elements than the Cannings model. With the parameter values $\widehat \varphi = 0.11$ and $\widehat \mu = 0.24$, the number of fixed beneficial mutations in the simulation in Fig.~\ref{approx_det_init_0p11}, averaged over the 12 runs, is 27; this is to be compared with the estimate of 60--110 fixed mutations observed in 50000 generations by \citet{Tenaillon16}, and of 100 fixed mutations observed in 60000 generations by \citet{Good17}, which both include neutral mutations and mildly deleterious hitchhiking `passenger' mutations. We will come back to this in the discussion. \begin{figure*}[!ht]\centering \resizebox{.63\textwidth}{!}{\input{approx_det_init_0p11.tex}} \caption{Simulation using heuristics for deterministic increments. Parameters as in Fig.~\ref{cannings_det_init_0p11}. Mean number of clonal interference events: 84; mean number of established beneficial mutations: 27.
} \label{approx_det_init_0p11} \end{figure*} \subsection{Random beneficial effects} \label{sec:stoch} Let us now turn to random beneficial effects. To this end, we scale the fitness increments with a positive random variable $X$ with density $h$ and expectation $\mathbb{E}(X)=1$. We assume throughout that $\mathbb{E}(X^2) < \infty$ to ensure that all quantities required in what follows are well-defined. Taking into account the dependence on $X$, the quantities in \eqref{delta_N}--\eqref{s_N}, \eqref{pi_N} and \eqref{u_N} turn into \begin{subequations}\label{X} \begin{linenomath}\postdisplaypenalty=0 \begin{align} \delta (F,X) & = X \, \frac{\varphi}{F^q}, \label{X:delta}\\ \sigma(F) & = \, \frac{\log \gamma}{F} \text{ (as before)}, \label{X:sigma}\\ s(F,X) \, & = \, \delta(F,X) \, \sigma(F), \label{X:s}\\ \pi(F,X) \, & \approx \, C \, s(F,X), \label{X:pi}\\ \begin{split} u (F,X) \, & = \, \frac{\log (N\,s(F,X))}{s(F,X)} \\ &\approx \frac{\log (N\, \varphi\, X)}{s(F,X)}. \label{X:u} \end{split} \end{align} \end{linenomath} \end{subequations} In \eqref{X:u} we apply the same reasoning that led to the approximation~\eqref{ulight} for $u$. Note that large $X$ implies large $s$ and hence small $u$, and vice versa. The following Poisson picture will be central to our heuristics: % The process of \emph{beneficial mutations}, whose points $(\tau, x) \in \mathbb R_+\times \mathbb R_+$ represent a mutation of scaled effect $x$ arriving at time $\tau$, has intensity $\mu \D{\tau} h(x) \D{x}$. % In fitness background $\approx F$, we denote by $\Pi$ the Poisson process of \emph{contending mutations}, i.e.\ those beneficial mutations that survive drift (but do not necessarily go to fixation), which has intensity $\mu \D{\tau} h(x) \, \pi(F,x) \D{x}$ on $\mathbb R_+\times \mathbb R_+$. % \\ We now develop a refined version of the {\em Gerrish-Lenski heuristics for clonal interference} and adapt it to the context of our model. 
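As a concrete illustration, the quantities in \eqref{X} can be evaluated numerically. The following Python sketch implements \eqref{X:delta}--\eqref{X:u}; the parameter values for $C$, $q$, $\gamma$, $N$ and $\varphi$ are purely illustrative assumptions, not the fitted values of this paper.

```python
import math

# Illustrative parameter values (assumptions for demonstration only)
C, q, log_gamma, N, phi = 2.5, 4.2, math.log(100), 5 * 10**8, 0.0375

def delta(F, x):   # scaled fitness increment, Eq. (X:delta)
    return x * phi / F**q

def sigma(F):      # number of doublings per day, Eq. (X:sigma)
    return log_gamma / F

def s(F, x):       # selective advantage per day, Eq. (X:s)
    return delta(F, x) * sigma(F)

def pi(F, x):      # probability to survive drift, Eq. (X:pi)
    return C * s(F, x)

def u(F, x):       # approximate time to dominance, Eq. (X:u)
    return math.log(N * phi * x) / s(F, x)
```

In particular, one can check numerically that a larger effect $x$ yields a larger $s$ and hence a smaller time to dominance $u$, as noted above.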
If, in the fitness background $\approx F$, two contending mutations $(\tau, x)$ and $(\tau', x')$ appear at $\tau < \tau' < \tau + u(F,x)$, then the first one outcompetes (`kills') the second one if $x' \leqslant x$, and the second one kills the first one if $x' > x$. Thus, neglecting interactions of higher order, given that a contending mutation arrives at $(\tau, x)$ in the fitness background $\approx F$, the probability that it does not encounter a killer in its past is % \begin{linenomath} \begin{equation} \begin{split} \back{\chi}(& F, x) := \\ & \exp \Big ( - \int_x^\infty \!\!\! \mu \, \pi (F,y)\, u(F,y) \, h(y) \D{y} \Big ), \label{past} \end{split} \end{equation} \end{linenomath} whereas the probability that it does not encounter a killer in its future is \begin{equation} \begin{split} & \forw{\chi}(F, x) := \\ & \exp \Big ( - u (F, x) \int_x^\infty \!\!\! \mu \, \pi (F,x') \, h(x') \D{x'} \Big ) \end{split} \label{future} \end{equation} (note that only the term corresponding to $\forw{\chi}$ is considered by \citet{GL98}). Using \eqref{X}, $\back{\chi}(F, x)$ is approximated by \begin{linenomath}\postdisplaypenalty=0 \begin{align} \back{\psi}(x) & := \notag \\ \exp & \Big ( -\mu\, C \int_x^\infty \log (N \, \varphi \, y) h(y) \D{y} \Big ), \label{psipast} \intertext{whereas $\forw{\chi}(F, x)$ is approximated by} \begin{split} \forw{\psi}(x) & := \\ \exp & \Big ( -\mu \, \frac{C \log (N \, \varphi \, x)}{x} \int_x^\infty x' \, h(x') \D{x'} \Big ). \end{split} \label{psifuture} \end{align} \end{linenomath} In analogy with the approximation of the retainment factor \eqref{survive_ci_det} that uses \eqref{ulight}, neither $\back{\psi}$ nor $\forw{\psi}$ depend on $F$. 
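For a concrete density $h$, the factors \eqref{psipast} and \eqref{psifuture} are one-dimensional integrals and straightforward to evaluate. The following Python sketch does this for Exp(1)-distributed effects, $h(y)=\mathrm e^{-y}$; the parameter values are again illustrative assumptions.

```python
import math

# Illustrative parameter values (assumptions, not fitted estimates)
mu, C, N, phi = 0.73, 2.5, 5 * 10**8, 0.0375

def h(y):                                  # Exp(1) density of the scaled effect X
    return math.exp(-y)

def integral(f, a, b=40.0, n=4000):
    # Composite trapezoidal rule on [a, b]; b is chosen so large that the
    # Exp(1) tail beyond it is negligible.
    step = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        total += f(a + k * step)
    return total * step

def psi_back(x):                           # Eq. (psipast): no killer in the past
    val = integral(lambda y: math.log(N * phi * y) * h(y), x)
    return math.exp(-mu * C * val)

def psi_forw(x):                           # Eq. (psifuture): no killer in the future
    tail_mean = integral(lambda y: y * h(y), x)   # equals (1 + x) e^{-x}
    return math.exp(-mu * C * math.log(N * phi * x) / x * tail_mean)
```

Both factors increase with $x$: mutations of larger effect are less likely to be killed, by past as well as by future contenders.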
% Thus, setting $\bafo{\chi} := \back{\chi} \, \forw{\chi}$ and analogously $\bafo{\psi}:= \back{\psi} \, \forw{\psi}$, we obtain, as an analogue of~\eqref{Exp_incr_F_i}, the expected (per-day) increase of $F$, given the current value of $F$, as \begin{linenomath}\postdisplaypenalty=0 \begin{align} & \mathbb{E} (\Delta F \mid F) \notag \\ &\approx \, \mu \int_0^\infty \!\!\! \delta (F,x) \, \pi(F,x) \, \bafo{\chi}(F, x) \, h(x) \D{x} \notag \\ & \approx \, \frac{C \, \mu\, \varphi^2 \log \gamma }{F^{2q+1}} \int_0^\infty x^2 \, \bafo{\psi}(x) \, h(x) \D{x} \label{exp_incr_random} \\ & = \, \frac{\Gamma}{F^{2q+1}}, \notag \end{align} \end{linenomath} where \begin{equation} \Gamma := C \, \mu \, \varphi^2 \log (\gamma) \, I(\mu, \varphi) \label{compound_stoch} \end{equation} and $I(\mu, \varphi) := \mathbb{E} \big (\bafo{\psi}(X) \, X^2 \big)$ is the integral in~\eqref{exp_incr_random}. % Similarly to Sec.~\ref{sec:det}, the assumption of a suitable concentration of the random variable $\Delta F$ around its conditional expectation allows us to turn \eqref{exp_incr_random} into \[ F_{\lfloor \tau / \Gamma \rfloor} \approx \mathbb{E} \big (F_{\lfloor \tau / \Gamma \rfloor} \big ) \approx f(\tau) \] with $f$ as in \eqref{LLN}. % As in Section~\ref{sec:det}, we will refer to this approximation step as `moment closure'. The analysis so far allows us to conclude that, as long as the above-described approximation may be relied on, the \emph{power law of the mean fitness curve} observed by \citet{WRL13} \emph{is obtained under any suitable distribution of fitness effects}; in particular, the \emph{epistasis parameter} $q$ is \emph{not affected by the distribution} of $X$. 
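Putting the pieces together, the compound parameter $\Gamma$ of \eqref{compound_stoch} and the conditional expectation \eqref{exp_incr_random} can be evaluated by numerical quadrature. A self-contained Python sketch for Exp(1)-distributed effects follows; the parameter values are illustrative assumptions, not fitted estimates.

```python
import math

# Illustrative parameter values (assumptions for demonstration only)
mu, C, N, phi, log_gamma, q = 0.73, 2.5, 5 * 10**8, 0.0375, math.log(100), 4.2

def trap(f, a, b, n=2000):
    # Composite trapezoidal rule on [a, b]
    step = (b - a) / n
    return step * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + k * step) for k in range(1, n)))

def psi_bar(x):
    # psi_back(x) * psi_forw(x) for Exp(1)-distributed effects,
    # cf. Eqs. (psipast) and (psifuture); the tail mean of Exp(1) beyond x
    # is (1 + x) e^{-x}, so only one integral needs quadrature.
    past = trap(lambda y: math.log(N * phi * y) * math.exp(-y), x, 40.0)
    futu = math.log(N * phi * x) / x * (1.0 + x) * math.exp(-x)
    return math.exp(-mu * C * (past + futu))

# I(mu, phi) = E[psi_bar(X) X^2] and the compound parameter of Eq. (compound_stoch)
I = trap(lambda x: psi_bar(x) * x * x * math.exp(-x), 0.01, 40.0, n=400)
Gamma = C * mu * phi**2 * log_gamma * I

def expected_increment(F):
    # Expected per-day fitness increase at current fitness F, Eq. (exp_incr_random)
    return Gamma / F**(2 * q + 1)
```

Since $\bafo\psi \le 1$, the value of $I$ is bounded by $\mathbb E(X^2)=2$; the expected increment decreases in $F$, reflecting diminishing returns.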
% \paragraph{More general forms of epistasis.} If \eqref{X:delta} is replaced by the more general condition \begin{equation} \delta (F,X) = X \, \varphi\, \eta(F)\label{X:delta_general} \end{equation} for some (continuously differentiable, decreasing) function $\eta$ with $\eta(1)=1$ such that again the approximation \eqref{ulight} makes sense, then all the arguments in the previous paragraph go through, and we obtain \[ \mathbb{E} (\Delta F \mid F) = \, \Gamma\, \frac{ \big (\eta(F) \big )^2}{F}\] with $\Gamma$ as in \eqref{compound_stoch}. Again, under a suitable concentration assumption, $F_{\lfloor \tau / \Gamma \rfloor}$ is approximated by $f(\tau)$, where now $f$ solves the initial value problem \begin{equation}\label{GODE} \frac{\D{}}{\D{\tau}} f(\tau) = \frac{\big( \eta \big (f(\tau) \big )\big )^2}{f(\tau)}, \quad f(0)=1. \end{equation} For the scaling regime \eqref{scaling}, which excludes clonal interference in the limit $N\to \infty$, a corresponding dynamical law of large numbers leading to the limiting ordinary differential equation \eqref{GODE} was proved in \citet[Cor. 2.15]{GKWY16} for $F_{\lfloor \tau / \Gamma \rfloor}$ with $\Gamma$ as in \eqref{compound}. \paragraph{Estimation of parameters.} Our next goal is to estimate the parameters $\mu$ and $\varphi$ that will then be used in simulations to check consistency as in Section~\ref{sec:det}, but now for stochastic increments. Again one starts with the composite parameter~$\Gamma$, which can be estimated from the empirical data in the same way as described at the end of Sec.~\ref{sec:GKWYLLN}. 
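Returning to \eqref{GODE}: for the special case $\eta(F)=F^{-q}$, the initial value problem can be solved in closed form, $f(\tau) = \big(1+(2q+2)\tau\big)^{1/(2q+2)}$, which has the same power-law form as \eqref{LLN} in rescaled time. A short Python check of this claim (the value of $q$ is an illustrative assumption):

```python
import math

def solve_f(eta, t_end, n=200_000):
    # Explicit Euler integration of Eq. (GODE): f'(t) = eta(f)^2 / f, f(0) = 1
    dt = t_end / n
    f = 1.0
    for _ in range(n):
        f += dt * eta(f)**2 / f
    return f

q = 4.2                          # illustrative epistasis exponent
eta = lambda F: F**(-q)          # the special case delta = X * phi / F^q

f_num = solve_f(eta, 100.0)
f_exact = (1 + (2 * q + 2) * 100.0)**(1 / (2 * q + 2))   # closed-form power law
```

The numerical solution agrees with the closed form, and the trajectory is concave, as expected for diminishing returns combined with the runtime effect.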
% In Appendix~A, we derive an approximation for $\mathfrak d_1$, the \emph{expected effect of the first among the contending mutations} (in fitness background $F=1$) \emph{that is not killed}.\footnote{Note that in our counterpart of this heuristic for deterministic beneficial effects, $\mathfrak{d}_1$ would coincide with $\varphi$, since the first contending mutation is never killed.} In order to estimate $\mu$ and $\varphi$ % from $\widehat \Gamma$ and $\widehat{\mathfrak{d}}_1$, the observed mean fitness increment of the first fixed beneficial mutation (in analogy with~\eqref{est_mutrate}), we combine \eqref{X:delta1} with \eqref{compound_stoch} to obtain the system of equations \begin{subequations}\label{system_est} \begin{linenomath}\postdisplaypenalty=0 \begin{align} \mathfrak{d}_1 & \, \approx \, \varphi \, \zeta_\ell^{}(\mu,\varphi) \label{mu_hat}, \\ \mathfrak{d}_1 & \, \approx \, \sqrt{\frac{\Gamma}{C \log \gamma}} \frac{\zeta_\ell^{}(\mu,\varphi)}{\sqrt{\mu \, I(\mu,\varphi)}}, \label{varphi_hat} \end{align} \end{linenomath} \end{subequations} where $\zeta_\ell(\mu,\varphi)$ is an approximation of the expectation of the first \emph{scaled} beneficial effect that goes to fixation. The subscript $1 \leqslant \ell < \infty$ indicates the maximum number of contending mutations taken into account before the first fixation; the approximation becomes more precise with increasing $\ell$. We will work with $\ell = 3$ since more than three contenders turn out to be rarely present at the same time (see the polymorphism statistics in the next paragraph). Plugging the value $\widehat \Gamma=3.2 \cdot 10^{-3}$ (from Fig. \ref{cannings_det_init_0p11_uncorrected}) into \eqref{varphi_hat}, we will solve \eqref{mu_hat} and \eqref{varphi_hat} for the parameter estimates $\widehat \mu$ and $\widehat \varphi$ for exponentially distributed $X$ in the remainder of this section. 
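The system \eqref{system_est} can be solved numerically, e.g.\ by a simple fixed-point iteration. In the following Python sketch, \texttt{zeta} and \texttt{I} are \emph{hypothetical placeholder stubs} (constant functions) standing in for the Appendix-B quantities $\zeta_3(\mu,\varphi)$ and $I(\mu,\varphi)$; with the true functions substituted, the same iteration produces the estimates $(\widehat\mu, \widehat\varphi)$.

```python
import math

# Empirical inputs quoted in the text; the value of C is an assumption
d1, Gamma, C, log_gamma = 0.15, 3.2e-3, 2.5, math.log(100)

def zeta(mu_, phi_):
    return 1.0        # PLACEHOLDER for zeta_3(mu, phi) from Appendix B

def I(mu_, phi_):
    return 1.0        # PLACEHOLDER for I(mu, phi) = E[psi_bar(X) X^2]

# Fixed-point iteration on the system (mu_hat)/(varphi_hat):
#   d1 = phi * zeta(mu, phi)
#   d1 = sqrt(Gamma / (C log gamma)) * zeta(mu, phi) / sqrt(mu * I(mu, phi))
mu, phi = 0.5, 0.1    # initial guesses
for _ in range(200):
    phi = d1 / zeta(mu, phi)                                             # from (mu_hat)
    mu = Gamma * zeta(mu, phi)**2 / (C * log_gamma * d1**2 * I(mu, phi)) # from (varphi_hat)
```

With the constant stubs the iteration converges immediately; the resulting pair satisfies both equations of \eqref{system_est} to machine precision, which is the consistency property the true estimation procedure relies on.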
% Let us anticipate that, as in the case of deterministic increments, one has to cope with the fact that such solutions do not exist for all values of $\widehat{\mathfrak{d}}_1$. \paragraph{Exponentially distributed beneficial effects.} % For definiteness, we now turn to random beneficial effects where $X$ follows $\Exp(1)$, the exponential distribution with parameter~1. % This was the canonical choice also in previous investigations (cf.\ \citet{GL98,WRL13}) and is in line with experimental evidence (reviewed by \citet{EWK07}) and theoretical predictions \citep{Gill84,Orr03}. % Some crucial quantities related to the heuristics can be calculated explicitly in the exponential case, see Appendix~B. Numerical evaluation of \eqref{mu_hat} and \eqref{varphi_hat} shows that the threshold for $\widehat{\mathfrak{d}}_1$ below which there are no solutions $(\widehat \mu,\widehat \varphi)$ is between 0.14 and 0.15 (recall the reported value is $\widehat{\mathfrak{d}}_1=0.1$). We therefore work with $\widehat{\mathfrak{d}}_1=0.15$, which gives $\widehat \mu=0.73$ and hence $\widehat \varphi=0.0375$ for this choice of the distribution of $X$. % Fig.~\ref{cannings_exp_init_0p15} shows the corresponding Cannings simulations, and Fig.~\ref{approx_exp_init_0p15} displays the simulations according to the heuristics. % The agreement of the simulation mean with the approximating power law is now nearly perfect. The fluctuations, however, are smaller in the simulations than in the experiment. As argued in Sec.~\ref{sec:GKWYLLN} in the context of the first reality check, this may be explained by the constant parameters assumed by the model, whereas parameters do vary across replicate populations in the experiment. % \\ Let us also mention the degree of polymorphism observed in the Cannings simulations of Fig.~\ref{cannings_exp_init_0p15}. 
% Counting a type as `present' if its frequency is at least 20\%, it turns out that, on average, the population is monomorphic on 79.0\% of the days; it contains two types on 19.6\% of the days, three types on 1.39\% of the days, and four or more types on 0.01\% of the days. Thus, in the finite system, some polymorphism is present, but it is not abundant. Recall that our model does not consider neutral mutations, and thus the low level of (fitness) polymorphism observed in the simulations does not contradict the high level of genetic diversity observed in experiments \citep{Tenaillon16}. \begin{figure*}[!ht]\centering \resizebox{.63\textwidth}{!}{\input{cannings_exp_init_0p15.tex}} \caption{Simulations of the Cannings model with $X$ following $\Exp(1)$ and parameters $\widehat \varphi=0.0375$, and mutation probability $\widehat \mu=0.73$. } \label{cannings_exp_init_0p15} \end{figure*} \begin{figure*}[!ht]\centering \resizebox{.63\textwidth}{!}{\input{approx_exp_init_0p15.tex}} \caption{Simulations using the refined Gerrish-Lenski heuristics with $X$ following $\Exp(1)$ and parameters as in Fig.~\ref{cannings_exp_init_0p15}. Mean number of clonal interference events with $x'\leqslant x$: 63; mean number of clonal interference events with $x'>x$: 29; mean number of established beneficial mutations: 19. } \label{approx_exp_init_0p15} \end{figure*} \paragraph{Beneficial effects with a Pareto distribution.} As argued already, the exponential distribution seems to be the most realistic choice for beneficial mutation effects. % The theory developed above, however, holds for arbitrary probability distributions on the positive half axis that have expectation 1 and a finite second moment. % Furthermore, the analysis of the heuristics indicates that the results are, in fact, independent of the distribution, provided the compound parameter $\Gamma$ is interpreted in the appropriate way. 
% It is therefore interesting to explore whether this conclusion may be verified by simulations. % In order to push our conjecture to the limits, we have also carried out the program with $X$ distributed according to a \emph{(shifted) Pareto distribution} (see \citet[Ch.~II.4]{Feller_71} or \citet[Ex.~2.19]{SO94}), which again has expectation 1 but a variance that is much larger than that of Exp(1). However, the numerical evaluations involved in the parameter estimation are substantially more difficult. We have therefore simplified the approximation for $u$ by working with $u(F)=(\log N)/s(F)$ in place of \eqref{u_N}. Our simulations (not shown here) reveal that the mean, according to the predictions in Sec.~\ref{sec:ci}, is still well described by the approximating power law, but the fluctuations are enhanced relative to the case of the exponential distribution and seem to be unrealistically large compared to the experiment. This is compatible with the statement at the beginning of this paragraph. \section{Discussion} \label{sec:discussion} We have, so far, postponed a detailed comparison with the model and the results of \cite{WRL13}. We now have everything at hand to do so. \paragraph{Modelling aspects.} Let us recall our starting point by summarising the key common points and differences between the WRL and the GKWY models. Namely, the common modelling assumptions are: \begin{enumerate} \item The population dynamics features periodic growth and dilutions, without deaths. \item Neutral and deleterious mutations are ignored. \item Beneficial mutations occur at constant rate, independently of the current fitness. \item The current fitness affects the fitness increments epistatically, in a way that leads to power laws of the same form for the mean relative fitness curve, namely \eqref{powerlaw} and \eqref{LLN}. \end{enumerate} The key differences are: \begin{enumerate} \item The GKWY model explicitly includes the runtime effect. 
\item The GKWY model assumes deterministic fitness increments and ignores clonal interference, while the WRL model assumes exponentially distributed fitness increments and accounts for clonal interference. \item The intraday dynamics of the GKWY model is a stochastic Yule process of growth, while deterministic synchronous divisions take place in the WRL model. \item The fitness curve of the GKWY model results from a law of large numbers obtained via rigorous analysis, while the derivation is heuristic in the case of the WRL model. \end{enumerate} In this article, we have developed the GKWY model further by introducing arbitrary distributions of fitness effects and taking clonal interference into account, while still obtaining a power law fitness curve. Let us now discuss the differences in detail, along with the consequences for the interpretation of the parameters. Here and below we use a tilde to distinguish the quantities belonging to the WRL model from our corresponding quantities. The main difference is that \cite{WRL13} describe the experiment with a discrete generation scheme given by $\log_2 \gamma \ (\approx 6.6)$ doublings during one daily growth phase, see Fig.~\ref{fig:forest_deterministic}. This neglects the variability that comes from a continuous-time intraday reproduction mechanism, and affects the WRL analogue to our formula \eqref{pi_N} for the probability to survive drift. The latter is stated in (S1) of their Supplementary Text, reads \begin{equation}\label{WRLs} \widetilde \pi = \widetilde \pi(\widetilde s) = 4 \, \widetilde s, \end{equation} and relies on \cite{GL98}, Appendix 1. In line with the generation scheme of Fig.~\ref{fig:forest_deterministic}, $\widetilde s$ is the selective advantage in each of the $\log_2 \gamma$ generations per day. At the end of the day, the population has increased from size $N$ to size $\gamma \, N$ and consists of $N$ clones, each of (deterministic) size $\gamma$. 
A sampling of $N$ individuals without replacement thus leads to a pair coalescence probability of $(\gamma-1) / (\gamma \, N)$, and hence to an offspring variance per day of \begin{equation}\label{WRLvariance} \widetilde v \sim \frac {\gamma-1}{\gamma}; \end{equation} note the factor of 2 between $\widetilde v$ and our $v$ in \eqref{ourv}, which comes from a size-biasing effect due to the sampling from clones of random size. \begin{figure} \centering \resizebox{\columnwidth}{!}{\input{forest_WRL.tex}} \caption{Synchronous growth model as used in \cite{GL98}, with equally-sized clones at the end of the day (here, $\gamma=8$); compare Fig.~\ref{fig:forest}.} \label{fig:forest_deterministic} \end{figure} Since $\widetilde s$ is related to one `doubling generation', the selective advantage \emph{per day} is \begin{equation}\label{sday} \widetilde s_{\text{d}} \approx \widetilde s \, \log_2 \gamma. \end{equation} Now, Haldane's formula \eqref{Haldane} related to the daily rhythm gives \[ \widetilde \pi \approx \frac{\widetilde s_{\text{d}}}{\widetilde v_{\text{d}}/2}, \] and, with \eqref{WRLs}, this yields a per-day offspring variance $\widetilde v_{\text{d}} \approx \log_2 \gamma$, which differs significantly from $\widetilde v$ in~\eqref{WRLvariance} for $\gamma=100$ (to be precise, we then have $\widetilde v \approx 99/100=0.99$, whereas $\widetilde v_{\text{d}} \approx 6.6$). Thus, we see that the ansatz of \cite{WRL13} combined with \cite{GL98} leads to an ambiguously defined offspring variance per day. Moreover, at the end of the Materials and Methods section in the Supplement, \cite{WRL13} relate the difference between the new and the old relative fitness to the (per generation) selective advantage of a mutant as follows: \begin{equation} \label{diffw} w_{\text{new}} = w(1+\widetilde s) \end{equation} with $\widetilde s$ from \eqref{WRLs}. 
Here \begin{equation}\label{wrecursion} w=w_i= \frac {\log \widetilde a}{\log \widetilde b}, \end{equation} with the growth factors $\widetilde a = y_i(\widetilde \sigma_i)/y_i(0)$ and $\widetilde b= y_0(\widetilde \sigma_i)/y_0(0)$ as in \eqref{hatFi}. They are not explicit about an intraday growth model, so one should think of $y_i(0), y_0(0), y_i(\widetilde \sigma_i) $ and $y_0(\widetilde \sigma_i)$ as the numbers of individuals at the beginning and the end of the competition experiment. For a consistent definition of the selective advantage per day, it is inevitable to use the growth factors $a_{\text{new}}$ and $a$ related to one day; then, according to \eqref{s}, one has \begin{equation} \label{sda} s_{\text{d}} = \frac{a_{\text{new}}-a}a \sim \log \frac{a_{\text{new}}}a. \end{equation} In principle, $a$ may (and will) differ from the $\widetilde a$ in the definition of $w$. At least in a model with intraday exponential growth, however, the definition of $w$ in \eqref{wrecursion} becomes independent of $\widetilde \sigma_i$ because $\widetilde \sigma_i$ cancels out (see the explanation below \eqref{hatFi}); we may (and will) therefore use the growth factors $a=y_i(\sigma_i)/y_i(0)$ and $ b= y_0(\sigma_i)/y_0(0)$ instead of $\widetilde a$ and $\widetilde b$ in \eqref{wrecursion}. Then \eqref{wrecursion} implies \begin{equation} \frac{w_{\text{new}}} w = \frac{1}{\log a} \Big (\log\Big (\frac {a_{\text{new}}}{a}\Big )+\log a\Big ) , \end{equation} which by \eqref{sda} yields \begin{equation} \label{wneww} w_{\text{new}} = w \Big (1 + \frac {s_{\text{d}}}{ \log a}\Big ), \end{equation} or equivalently, using \eqref{wrecursion} again, \begin{equation}\label{wnewww} w_{\text{new}} - w = \frac {s_{\text{d}}}{ \log b}. 
\end{equation} Under the assumption of intraday exponential growth we have (as long as the populations are nearly homogeneous): \begin{equation}\label{expon} a \approx \mathrm {e}^{r\sigma},\quad b \approx \mathrm {e}^\sigma, \quad w\approx r, \quad r \, \sigma \approx \log \gamma. \end{equation} Thus \eqref{wnewww} translates into \begin{equation}\label{ourr} s_{\text{d}} \approx \frac{1}{r} \, (r_{\text{new}}-r) \log \gamma, \end{equation} which also results from combining \eqref{sigma} and \eqref{s_N} and equating $F$ and $r$. This shows that the runtime effect discussed in Sec.~\ref{sec:GKWYLLN} is already implicit in the definition \eqref{wrecursion} of $w$ as the ratio of logarithms of growth factors, as soon as one uses a model with intraday exponential growth. Let us emphasise again that this runtime effect is a consequence of the design of Lenski's experiment; it would be absent in a variant of the experiment in which sampling occurs at a given fixed time before the onset of the stationary phase. Let us also note that the runtime effect appears as soon as individuals consume the resources faster, regardless of how they developed the ability to reproduce faster. For this reason the runtime effect may play a role (and should be taken into account) in sequential dilution experiments, regardless of whether the population is monomorphic or polymorphic. Furthermore, comparing \eqref{wneww} with \eqref{diffw} and using \eqref{expon} gives \[ s_{\text{d}} = \widetilde s \, \log a \approx \widetilde s \, \log \gamma. \] Comparing with \eqref{sday}, this shows that \[ s_{\text{d}} = \frac{\log \gamma}{\log_2 \gamma} \, \widetilde s_{\text{d}}, \] which points to a certain inconsistency inherent in $\widetilde s_{\text{d}}$. Another issue worth comparing is the interpretation of {\em diminishing returns epistasis}, and the corresponding translation between the exponent $g$ in the WRL model and the exponent $q$ in ours. 
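The chain of identities \eqref{wrecursion}--\eqref{ourr} is easy to verify numerically for intraday exponential growth. A short Python check (the growth parameters are arbitrary illustrative choices):

```python
import math

gamma = 100.0
r, r_new = 1.0, 1.05                     # old and new Malthusian parameters (illustrative)
sigma = math.log(gamma) / r              # daily growth time: r * sigma = log(gamma)

a, a_new = math.exp(r * sigma), math.exp(r_new * sigma)   # mutant growth factors
b = math.exp(sigma)                                       # reference growth factor

w = math.log(a) / math.log(b)            # relative fitness, Eq. (wrecursion)
w_new = math.log(a_new) / math.log(b)

s_d = math.log(a_new / a)                # selective advantage per day, Eq. (sda)

# runtime-effect identity (wnewww): w_new - w = s_d / log(b)
lhs, rhs = w_new - w, s_d / math.log(b)
```

All three relations hold exactly here: the fitness difference $w_{\text{new}}-w$ equals $s_{\text{d}}/\log b$, equals $r_{\text{new}}-r$, and $s_{\text{d}}$ matches \eqref{ourr}.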
Formula (S1) of \cite{WRL13} says that the multiplicative effect on $r$ has expected size $1/\alpha$; this corresponds to an additive effect on $r$ of expected size $ \delta:= r/\alpha$. Thus, the ansatz \eqref{delta_N} translates into \begin{equation*}\frac 1\alpha = \frac \varphi {r^{q+1}}. \end{equation*} On the other hand, formula (S9) in \cite{WRL13} says that \begin{equation*}\alpha = c \, \mathrm {e}^{g\log r}, \end{equation*} which implies that $g=q+1$. The choice $g=1$ in the WRL model (or equivalently, $q=0$ in ours, cf.\ \eqref {powerlaw} and \eqref{LLN}) corresponds to {\em additive} increments on the Malthusian fitness that do not depend on the current value of the latter, see \eqref{delta_N}. It is this case of constant additive increments which may be appropriately addressed as the {\em absence of epistasis}. More precisely, in \emph{continuous time} (as considered here for the intraday dynamics), additive fitness increments correspond to independent action of mutations and hence to absence of epistasis (\citet{Fisher18}; \citet[pp.~48 and 74]{Bu00}); in \emph{discrete time}, the same would be true of multiplicative increments. Consequently, $q= g-1$ can be seen as an exponent describing the effect of epistasis. With this interpretation, a (slight) concavity of the mean fitness curve is caused by the runtime effect (and hence by the design of the experiment) even in the absence of epistasis, cf.~Fig.~\ref{fig:WRL_data}. This fact is sometimes overlooked when interpreting the mean fitness curve; see, for example, \cite{KTP09,GoDe15}. In contrast, a runtime effect closely related to ours was described by \citet{YD13} in the context of selection in fluctuating environments. Here it was observed that selection is biased in favor of the rare competitor, because ``more time is spent growing in environments favorable to it (because the common competitor grows more slowly, taking longer to exhaust the limiting resource)''. 
A substantial part of the derivations of \cite{WRL13} deals with incorporating the Gerrish-Lenski heuristics for {\em clonal interference} into their model. The fact that they work with multiplicative fitness increments and various approximations complicates the translation between the time-scaling constant in their power law (S16) (that we subsume as $\beta$ in \eqref{powerlaw}) and our time-scaling constant $\Gamma$ (see \eqref{LLN} and \eqref{compound_stoch}). We refrain from pursuing the details here; but let us emphasise that \eqref{X} together with the calibrations discussed in Sec. \ref{sec:stoch} applies to arbitrary random (additive) fitness effects with finite second moments. \paragraph{Analytic and simulation results.} We have presented three lines of results. First, rigorous results for the relative mean fitness in terms of a law of large numbers in the limit $N \to \infty$ for deterministic beneficial effects in a regime of weak mutation and moderately strong selection. Second, we have derived transparent analytic expressions for the \emph{expected} mean fitness in a finite-$N$ system by means of heuristics of Gerrish-Lenski type and a moment closure approximation (which is also used by \cite{WRL13}). The beneficial effects may be either deterministic (and then require a specific thinning heuristics), or random with an arbitrary density. In the latter case we have developed a refinement of the original Gerrish-Lenski heuristics. Briefly stated, this refinement does not only consider the retainment factor \eqref{future} coming from {\em future} interfering mutations, but also the retainment factor \eqref{past} coming from {\em past} ones. This makes the heuristics consistent with its verbal description, which says that `if two contending mutations appear within the time required to become dominant in the population, then the fitter one wins.' 
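The refined kill rule can be made concrete by a small simulation of the contending-mutation process at fitness background $F\approx 1$. In the following Python sketch the parameter values are illustrative assumptions, and, in line with the independence approximation of the heuristics, contenders that have themselves been killed are still allowed to kill others.

```python
import math
import random

random.seed(7)
N, phi, C, mu, log_gamma = 5 * 10**8, 0.0375, 2.5, 0.73, math.log(100)
T = 500.0                                    # observation window in days

def u(x):                                    # time to dominance at F = 1, cf. Eq. (X:u)
    return math.log(N * phi * x) / (phi * x * log_gamma)

# Draw candidate mutations at rate mu and thin each one to a contender
# with probability pi(1, x) = C * phi * x * log(gamma)
contenders, t = [], 0.0
while True:
    t += random.expovariate(mu)
    if t > T:
        break
    x = random.expovariate(1.0)
    if random.random() < min(1.0, C * phi * x * log_gamma):
        contenders.append((t, x))

# Refined kill rule: if (tj, xj) falls into the window of (ti, xi), the
# earlier mutation kills the later one when xj <= xi, and vice versa when xj > xi
alive = [True] * len(contenders)
for i, (ti, xi) in enumerate(contenders):
    for j, (tj, xj) in enumerate(contenders):
        if ti < tj < ti + u(xi):
            if xj <= xi:
                alive[j] = False
            else:
                alive[i] = False

survivors = [c for ok, c in zip(alive, contenders) if ok]
```

With these parameters the windows $u(x)$ span many days, so interference is strong and only a small fraction of contenders survives; the contender of largest effect can never be killed.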
A refinement that also includes thinning due to past competitors was suggested by \cite{Ge01}; this focuses on the \emph{time} at which a `winning' mutation appears, whereas our analysis is mainly concerned with the distribution of the \emph{effects} of these mutations. For reasons of calibration, we have established an approximate analytic expression \eqref{X:delta1} for the expected scaled effect of the first beneficial mutation that goes to fixation. This introduces a \emph{size bias} into the distribution of beneficial effects (see \eqref{scaledeffectfix}), similar to the descriptions by \citet{RVG02} and \citet{WRL13} in the case of the exponential distribution. As it turned out, the analytic expressions are \emph{robust}. In particular, the estimate of $q$ is not affected by the choice of the distribution of beneficial effects, and it is also, at least approximately, independent of clonal interference, as obvious from the independence of $F$ of the factors $\vartheta$, $\back{\psi}$, and $\forw{\psi}$ in \eqref{constant_theta}, \eqref{psipast}, and \eqref{psifuture}. What changes is the internal structure of the compound parameter $\Gamma$, but for any given estimate $\widehat \Gamma$, the mutation probability and scaling of beneficial effects may be arranged appropriately (provided $X$ has second moments). The deviations from $q=0$ are a signal of diminishing returns epistasis; at this point, let us emphasise again that the approximating curve of the mean relative fitness is (slightly) concave even for $q=0$ (due to the runtime effect). By any means, the pronounced concavity in the curve approximating the LTEE data (with its estimated $\widehat q = 4.2$) gives strong evidence for diminishing returns epistasis, in line with the conclusions of previous investigations \citep{WRL13,GoDe15,Wuetal17}. 
We would like to emphasise, however, that our goal here was not to find the `best' (or even the `true') increment function; rather, the choice \eqref{delta_N} was made for the sake of comparison with \cite{WRL13}, while we have seen that the GKWY model in fact allows for arbitrary increment functions \eqref{X:delta_general}. Our third line of investigations is a simulation study both of the Cannings model and the approximating heuristics (described in Section \ref{sec:det} for deterministic effects and in Section \ref{sec:stoch} for stochastic effects). It turned out that the heuristics (which might be improved even further by taking into account the refined heuristics of \citet{Ge01} and \citet{RVG02}) approximates the Cannings model quite well. The simulations show that the deviation of the mean fitness of the latter from the power law fitted to empirical data is moderate for deterministic increments and minute for exponential increments. \paragraph{Validity and limits of the sequential fixation model.} Following \cite{WRL13}, we have worked with a sequential fixation model, which led to the dynamical law of large numbers of \cite{GKWY16} in the limit $N \to \infty$ under scaling assumptions on $\mu$ and $\varphi$ that, in particular, require $\mu \ll \varphi$ and $\mu, \varphi \to 0$ as $N \to \infty$. It then turned out that, provided appropriate corrections for clonal interference are made, the power law still describes the mean of the simulations very well for finite $N$ and moderately-large $\mu$ and $\varphi$, even when $\mu$ is substantially larger than $\varphi$. In this sense, the result once more appears to be quite robust. A question that still remains concerns the `true' beneficial mutation probability and the `true' distribution of the beneficial effects. 
In particular, it has not been conclusively decided (by either experiment or theory) whether the fitness trajectory increasing from 1 to $\approx 1.7$ is dominated by a small number of mutations of large effect or by a larger number of mutations of small effect. On the one hand, the reported mean fitness increment of the first fixed beneficial mutation, $\widehat{\mathfrak{d}}_1=0.1$, is quite large, and if this is taken as typical, it is hard to reconcile with a plethora of small effects. On the other hand, \cite{Tenaillon16} infer that most of their 60--110 fixed mutations are beneficial in those populations that keep the original low mutation rate, whereas the adaptive proportion is harder to quantify in those strains that evolved into hypermutators. Our 27 or 19 fixed beneficial mutations (for deterministic and exponential effects, respectively), as estimated from the trajectory simulated according to the heuristics and averaged over \emph{all} populations, may seem somewhat on the low side, but this may also reflect the fact that the parameters are close to the limit of validity of a sequential fixation model with clonal interference heuristics, as discussed in Secs.~\ref{sec:ci} and \ref{sec:stoch}. \section*{Appendix~A: Derivation of effect of first fixed mutation} \input{appendix_firsteffect.tex} \section*{Appendix~B: Application to Exp(1)-distributed $X$} \input{appendix_expeffect.tex} \section*{Appendix~C: Simulation algorithms} \input{algorithms.tex} \clearpage \subsection*{Acknowledgements} It is our pleasure to thank Phil Gerrish for valuable hints and comments on the manuscript. The paper also profited from discussions with Jason Schweinsberg about the WRL model and its analysis; he further pointed us to a strategy to relax the lower bound on the order of the selection strength in \citet{GKWY16}, see the discussion in the paragraph {\em Scaling regime and law of large numbers} in Sec. 2. 
Furthermore, we thank Richard Lenski for stimulating discussions and Nick Barton for sharing with us his thoughts about~\eqref{u_N}. Last but not least, we are indebted to four anonymous referees, who provided numerous valuable hints to improve the manuscript. This project received financial support from Deutsche Forschungsgemeinschaft (German Research Foundation, DFG) via Priority Programme SPP 1590 \emph{Probabilistic Structures in Evolution}, grants no. BA 2469/5-2 and WA 967/4-2. \bibliographystyle{plainnat}
\section{General Packing Constraints} \label{sec:main} In this section, we study approximation to the core under general packing constraints of the form $A \vec{x} \le \vec{b}$. Recall that there are $m$ elements, $V_i$ is the maximum possible utility that agent $i$ can receive from a feasible outcome, and $V_{\max} = \max_{i \in N} V_i$. We prove a statement slightly more general than Theorem~\ref{thm:packing}. We first need the following concept. \subsection{Maximal Proportionally Fair Outcome} \label{sec:mmf} Given an instance of public goods allocation subject to packing constraints, we define the notions of an $r$-proportionally fair ($r$-PF) outcome, a maximally proportionally fair (MPF) outcome, and the MPF value of the instance. \begin{definition}[MPF Outcome] \label{def:rmmf} For $r > 0$, we say that a fractional outcome $\vec{w}$ is $r$-\emph{proportionally fair} ($r$-PF) if it satisfies: $$ u_i(\vec{w}) \ge \frac{V_i}{r} - 1, \ \ \forall i \in N. $$ The {\em maximally proportionally fair} (MPF) value $R$ of an instance is the least value $r$ such that there exists an $r$-PF outcome. For simplicity, we say that an $R$-PF outcome is a {\em maximally proportionally fair} (MPF) outcome. \end{definition} This concept is crucial to stating and deriving our approximation results. In words, an $r$-PF outcome gives each agent a $1/r$ fraction of its maximum possible utility $V_i$ (which can be thought of as the agent's fair share guarantee), provided the agent is given $1$ unit of utility for free. Thus, a smaller value of $r$ indicates a better solution. The MPF value $R$ denotes the best possible guarantee. The additive $1$ in Def.~\ref{def:rmmf} can be replaced by any positive constant; we choose $1$ for simplicity. We now show an upper bound for $R$ that holds for all instances. Recall from Equation~(\ref{eq:width}) that $\rho$ is the {\em width} of the instance. 
\begin{lemma} $R \le \min(V_{\max},n,\rho)$, and an MPF outcome is computable in polynomial time. \label{lem:R-MMF} \end{lemma} \begin{proof} To show that $R$ is well-defined, note that for $r = V_{\max}$, an $r$-PF outcome $\vec{w}$ simply requires $u_i(\vec{w}) \ge 0$, which is trivially achieved by every outcome. Therefore, $R$ is well-defined, and $R \le V_{\max}$. Next, $R \le n$ follows from the fact that there exist fractional outcomes satisfying proportionality (e.g., the outcome $\vec{w}$ obtained by taking the uniform convex combination of the $n$ outcomes that are optimal for each individual agent). Finally, to show $R \le \rho$, consider the outcome $\vec{w}$ in which $w_j = \frac{1}{\rho}$ for each element $j$. Clearly, $u_i(\vec{w}) \ge \frac{V_i}{\rho}$ for all $i$. Further, $A \vec{w} \le \vec{b}$ is satisfied trivially due to the fact that $\rho$ is the width of the packing constraints. To compute the value of $R$ as well as an MPF outcome, we first note that the value of $V_i$ for each agent $i$ can be computed by solving a separate LP. Then, we consider the following LP: \begin{equation} \label{eq:compute-R} \mbox{Maximize} \ \ \hat{r} \end{equation} \[ \begin{array}{rcll} \sum_{j \in W} u_{ij} w_j & \ge & V_i \cdot \hat{r} - 1 & \forall i \in [n] \\ A \vec{w} & \leq & \vec{b} & \\ w_j & \in & [0,1] & \forall j \in W \end{array}\] Here, $A \vec{w} \le \vec{b}$ are the packing constraints of the instance, and $\hat{r}$ is a variable representing $1/r$. Thus, maximizing $\hat{r}$ minimizes $r$, which yields an MPF outcome. This can be accomplished by solving $n+1$ linear programs, which can be done in polynomial time. \end{proof} Our main result in this section uses any $r$-PF outcome, and provides a guarantee in terms of $\log r$. Thus, we do not necessarily need to compute an exact MPF outcome. We note that an MPF outcome can be very different from a core outcome. 
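The width-based bound $R \le \rho$ in the proof can be checked numerically on a toy instance. The sketch below assumes a single knapsack constraint $\sum_j x_j \le B$ with unit sizes, so that $V_i$ is the sum of agent $i$'s $B$ largest utilities and the width is $\rho = m/B$; all instance data are hypothetical.

```python
# Toy check of the bound R <= rho: the uniform outcome w_j = 1/rho is
# feasible and rho-PF. Assumption: one knapsack constraint sum_j x_j <= B
# with unit sizes, so V_i is the sum of the B largest utilities of agent i
# and the width is rho = m / B. All numbers are hypothetical.
m, B = 8, 2
u = [
    [0.9, 0.4, 0.1, 0.0, 0.7, 0.2, 0.0, 0.3],  # agent 1's utilities u_{1j}
    [0.1, 0.8, 0.6, 0.5, 0.0, 0.9, 0.2, 0.1],  # agent 2's utilities u_{2j}
]
rho = m / B
w = [1.0 / rho] * m                     # uniform fractional outcome

assert sum(w) <= B + 1e-9               # feasibility: A w <= b
for u_i in u:
    V_i = sum(sorted(u_i, reverse=True)[:B])           # fractional optimum
    util = sum(uij * wj for uij, wj in zip(u_i, w))    # u_i(w)
    assert util >= V_i / rho - 1e-9     # rho-PF (here even without the -1)
print("uniform outcome is rho-PF")
```

In general one would instead solve LP~\eqref{eq:compute-R}, since the exact MPF value $R$ can be much smaller than $\rho$.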
Yet, an MPF outcome gives each agent a large fraction of its maximum possible utility, subject to a small additive relaxation. As we show below, this helps us find integral outcomes that provide good approximations of the core. \subsection{Result and Proof Idea} \label{sec:idea} Our main result for this section (Theorem~\ref{thm:packing}) can be stated in a refined way as follows. Recall that $\log^*$ is the iterated logarithm, which is the number of times the logarithm function must be iteratively applied before the result becomes less than or equal to $1$. \begin{theorem} \label{thm:main} Fix constant $\delta \in (0,1)$. Suppose we are given a set of $K$ packing constraints $A \vec{x} \le \vec{b}$ such that $b_k = \omega\left(\frac{\log K}{\delta^2} \right)$ for all $k \in [K]$. Let $R$ be the MPF value of this instance. Then there exists a polynomial time computable $(\delta,\alpha)$-core outcome, where $$ \alpha = O\left( \frac{1}{\delta^4} \cdot \log\left(\frac{R \cdot \log^* V_{\max}}{\delta} \right) \right). $$ \end{theorem} We first note that the above result cannot be obtained by maximizing the smooth Nash welfare objective; we present Example~\ref{eg:knapsack} in Appendix~\ref{sec:Examples}, which demonstrates this using only one packing constraint. To be precise, the example shows that no single value of parameter $\ell$ in the smooth Nash welfare objective can provide a polylog additive guarantee for all instances. While it may be possible to choose the value of $\ell$ based on the instance, it does not seem trivial. We take a different approach. Our idea is to start with a fractional core solution $\vec{x}$. Suppose it assigns utility $U^*_i$ to agent $i$. Fix $\delta > 0$, and consider the following program. 
\begin{equation} \label{eq:round} \mbox{Minimize} \ \ \alpha \end{equation} \[ \begin{array}{rcll} \alpha + (1+\delta) \cdot \sum_{j \in W} u_{ij} w_j & \ge & U^*_i & \forall i \in [n] \\ A \vec{w} & \leq & \vec{b} & \\ w_j & \in & \{0,1\} & \forall j \in W \\ \alpha & \ge & 0 \end{array}\] For the optimum value $\alpha^*$, we obtain an outcome that is $(\delta,\alpha')$-core for every $\alpha' > \alpha^*$. To see this, take a subset of agents $S$ and a feasible utility vector $\vec{U'}$ under any other (even fractional) outcome. Because $\vec{x}$ is a core outcome, there exists $i \in S$ such that $U^*_i \ge (|S|/n) \cdot U'_i$. For $\alpha' > \alpha^*$, the ILP solution implies $$ \alpha' + (1+\delta) \cdot \sum_{j \in W} u_{ij} w_j > U^*_i \ge \frac{|S|}{n} \cdot U'_i, $$ which implies that the solution is $(\delta,\alpha')$-core according to Definition~\ref{def:approx}. However, $\alpha^*$ obtained from this program can be rather large, as illustrated in the following example. Consider the {\sc Knapsack} setting with $m$ unit-size projects. There is an overall budget $B = m/2$. For every integral outcome $\bc$ consisting of exactly $m/2$ projects, let there be an agent with utility $1$ for every project in $\bc$ and $0$ for all other projects. Thus, there are $\binom{m}{m/2}$ agents. The fractional core outcome gives weight $1/2$ to each project, thus giving utility $V_i / 2 = m / 4$ to each agent $i$. However, every integral outcome gives utility $0$ to at least one agent, which implies $\alpha^* = \Omega(m)$. This example shows that when there are a large number of agents, we cannot achieve Theorem~\ref{thm:main} by hoping to approximately preserve the utilities of {\em all} agents with respect to the fractional core solution. However, note that in the above example, though there is one agent who gets very little utility, this agent has no incentive to deviate if she is given one unit of utility for free. 
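For intuition, the knapsack lower-bound example above can be verified exhaustively on a small instance; the sketch below uses the hypothetical values $m=6$ and $B=3$ (so there are $\binom{6}{3}=20$ agents).

```python
from itertools import combinations

# Exhaustive check of the knapsack lower-bound example for m = 6, B = m/2 = 3.
# One agent per size-B subset of projects; agent S has utility 1 for each
# project in S and 0 otherwise.
m, B = 6, 3
agents = list(combinations(range(m), B))        # 20 agents

# The fractional core outcome w_j = 1/2 gives every agent utility m/4.
for S in agents:
    assert sum(0.5 for _ in S) == m / 4

# Every integral outcome of size B gives utility 0 to some agent (the agent
# corresponding to the complementary size-B subset), hence alpha* = Omega(m).
for c in combinations(range(m), B):
    assert min(len(set(S) & set(c)) for S in agents) == 0
print("each integral outcome zeroes out at least one agent")
```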
This insight leads us to our analysis below, which is based on rounding the fractional core solution $\vec{x}$. Let us apply randomized rounding to $\vec{x}$. Instead of using Chernoff bounds to ensure that there are no ``violations'' (i.e., that no agent receives utility that is too far from its utility under the core outcome $\vec{x}$), we hope to bound the expected number of such violations. If there are few such agents, we still have an approximate core outcome because if this small coalition of agents deviates, its utility under a new outcome will be scaled down by a large factor. Unfortunately, it can be shown that bounding the expected number of deviations by a sufficiently small number forces $\alpha = \Omega(\log V_{\max})$. This is better than $\alpha = \Omega(m)$ from our previous approach, but still {\em much} larger than the bound we want to achieve in Theorem~\ref{thm:main} when the width $\rho$ is small. This brings up the main technical idea. We observe that an MPF outcome, though not in the core, provides a reasonably large utility to each agent. We add a small amount of this outcome to the fractional core before applying randomized rounding. We are now ready to present our algorithm. \subsection{Algorithm} Fix $\delta \in \left(0,1\right)$, and let $\gamma = \frac{\delta}{8}$. \begin{enumerate} \item Compute the (approximate) fractional core solution $\vec{x}$ as in Theorem~\ref{thm:fractional}, where $x_j$ is the fraction of element $j$ chosen. \item Let $\vec{y}$ be an MPF outcome as in Definition~\ref{def:rmmf}. \item Let $\vec{z} = (1-\gamma) \vec{x} + \gamma \vec{y}$. \item For each $j \in W$, choose $j$ to be in the outcome $\bc$ independently with probability $\hat{z}_j = (1-\gamma) z_j$. \end{enumerate} \subsection{Analysis} We show that this algorithm yields, with at least a constant probability, a feasible outcome that satisfies the guarantee in Theorem~\ref{thm:main}. 
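The mixing and rounding steps (Steps 3 and 4 above) can be sketched as follows; here `x` and `y` stand in for the fractional core solution of Theorem~\ref{thm:fractional} and the MPF outcome of Lemma~\ref{lem:R-MMF}, which we assume have been precomputed.

```python
import random

# Steps 3-4 of the algorithm: mix the fractional core solution x with the MPF
# outcome y, scale down by (1 - gamma), and round each element independently.
# x and y are assumed given; this is an illustrative sketch, not the full
# algorithm (which also computes x and y via linear programming).
def round_outcome(x, y, delta, rng=None):
    rng = rng or random.Random(0)
    gamma = delta / 8
    z = [(1 - gamma) * xj + gamma * yj for xj, yj in zip(x, y)]
    zhat = [(1 - gamma) * zj for zj in z]    # extra scaling preserves feasibility
    return [1 if rng.random() < p else 0 for p in zhat]

# Hypothetical 4-element instance.
c = round_outcome(x=[0.5, 0.9, 0.1, 0.4], y=[0.25] * 4, delta=0.5)
assert len(c) == 4 and all(v in (0, 1) for v in c)
```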
Success with a constant probability already establishes the existence of such an outcome. Note that the fractional Max Nash Welfare solution $\vec{x}$ can be irrational, but we can compute an approximation in polynomial time (see Theorem~\ref{thm:fractional} for details), which does not change our guarantee asymptotically. Further, $\vec{y}$ can be computed in polynomial time (Lemma~\ref{lem:R-MMF}). Hence, the algorithm runs in expected polynomial time. We first show that the packing constraints are satisfied. Since we scale down $\vec{z}$ by a factor $(1-\gamma)$ before rounding, we have $A \hat{\vec{z}} \le (1-\gamma) \vec{b}$. Since $b_k = \omega\left(\frac{\log K}{\delta^2}\right)$ for all $k$, a simple application of Chernoff bounds shows that with probability at least $0.99$, the rounded solution $\bc$ satisfies $A \bc \le \vec{b}$. Therefore, if we show that the algorithm also yields the desired approximation of the core with at least a constant probability ($1/6$ to be precise), we will be done by applying the union bound to the two events, feasibility and good approximation to the core. For the ease of presentation, we suppress constants throughout the proof and use the asymptotic notation liberally. We also assume that $V_{\max} = \omega(1)$ since otherwise there is a trivial $(0,O(1))$-core outcome that chooses a null outcome, giving zero utility to each agent. \subsubsection{Grouping Agents} In order to analyze our algorithm, we partition the agents into groups with exponentially decreasing values of $V_i$. Recall that $V_i$ is the maximum utility that agent $i$ can get from any outcome. Set $Q_0 = \log V_{\max}$, and for $\ell = 0,1,\ldots,L-1$, define group $G_{\ell}$ as: $$ G_{\ell} = \left\{ i \in N \ | \ Q_{\ell} \ge \log V_i \ge Q_{\ell+1} \right\}. $$ Here, for $\ell = 0,1,\ldots,L-1$, we define: $ Q_{\ell+1} = 2 \log Q_{\ell}.$ We call $G_0,\ldots,G_{L-1}$ the {\em heavy groups}. We choose $L$ so that $Q_{L} = \Theta\left( \log \frac{R \log^* V_{\max}}{\gamma^3} \right)$. 
This implies $L = \Omega(\log^* V_{\max}) = \omega(1)$, since $V_{\max} = \omega(1)$. For agent $i$ in a heavy group, $V_i \geq e^{Q_{L}} \ge \frac{2 R L}{\gamma^3} > 2 R$. Thus, the utility that the MPF solution provides to agent $i$ is at least $\frac{V_i}{R}-1 \ge \frac{V_i}{2 R}$. Finally, we put the remaining agents (with a small $V_i$) in a {\em light group} defined as follows: $$ G_{L} = \left\{ i \in N \ | \ \log V_i \le Q_{L} \right\}. $$ The MPF solution may not provide any guarantee for the utility of agents in this group. \subsubsection{Bounding Violations of Utility Preservation} \label{sec:light} We want to bound the number of agents whose utilities are far from those under the core outcome. First, we need a specialized Chernoff bound. \begin{lemma} (Proved in Appendix~\ref{proof:main}) \label{lem:main} Let $X_1, X_2, \ldots, X_q$ be independent random variables in $[0,1]$, and let $X = \sum_{j=1}^q X_j$. For $\gamma \in (0,1/2)$, suppose $\mathbf{E}[X] = (1-\gamma) \cdot A + \gamma \cdot B$ for $A, B \geq 0 $. Then $$ \Pr[X < (1-2\gamma) \cdot A] \le e^{- \frac{\gamma^3}{2} \max (B,A/2)}$$ \end{lemma} Recall that $\vec{x}$ is the fractional MNW solution, $\vec{y}$ is the fractional MPF solution, and our algorithm applies randomized rounding to their scaled down mixture $(1-\gamma) \vec{z} = (1-\gamma)^2 \vec{x} + \gamma(1-\gamma) \vec{y}$. Let $\widehat{U_i}$ denote the utility of agent $i$ under the final integral outcome obtained by randomly rounding $(1-\gamma) \vec{z}$. Recall that $U_i^*$ is the utility of agent $i$ under the core outcome $\vec{x}$. We want to show that $\widehat{U_i}$ is either multiplicatively or additively close to $U_i^*$ for most agents. For a heavy group $G_{\ell}$, where $\ell \in \set{0,1,\ldots,L-1}$, define $$ F_{\ell} = \set{i \in G_{\ell}\ \left|\ \widehat{U_i} < (1-3 \gamma) U^*_i \right.}. 
$$ Similarly, for the light group $G_{L}$, define $$ F_{L} = \set{i \in G_{L} \ \left| \ \widehat{U_i} < \min\left((1-3\gamma) U^*_i, U_i^* - \frac{4 Q_{L}}{\gamma^4} \right)\right.}. $$ We will use Lemma~\ref{lem:main} to bound the sizes of $F_{\ell}$ for $\ell \in \set{0,1,\ldots,L}$ as follows. \begin{theorem} \label{thm:main2} We have that: \begin{enumerate} \item With probability at least $2/3$, we have $|F_{\ell}| \le \frac{1}{2 L e^{Q_{\ell}}} \cdot |G_{\ell}|, \quad \forall \ell \in \set{0,1,\ldots,L-1}$. \item With probability at least $1/2$, we have $|F_{L}| \le \frac{1}{2 e^{Q_{L}}} \cdot |G_{L}|$. \end{enumerate} Thus, with probability at least $1/6$, both the above inequalities hold simultaneously. \end{theorem} \begin{proof} We prove the first and the second part of Theorem~\ref{thm:main2} separately by considering the heavy groups and the light group in turn. The combined result follows from the union bound. \paragraph{Case 1: Heavy Groups} Consider a heavy group $G_{\ell}$ for $0 \le \ell < L$. Recall that the MPF solution provides utility at least $V_i/(2R)$ to each agent in a heavy group. Hence, we have: \begin{equation} \mathbf{E} \left[ \widehat{U_i} / (1-\gamma) \right] = u_i(\vec{z}) \ge (1-\gamma) \cdot U^*_i + \gamma \cdot \frac{V_i}{2 R}. \label{eqn:heavy-utility} \end{equation} The key point is that even if $U^*_i$ is small, the expected utility is at least a term that is proportional to $V_i$. This will strengthen our application of Chernoff bounds. 
Using Lemma~\ref{lem:main} with $A = U_i^*$ and $B = V_i/(2R)$, we have: \begin{align} \Pr\left[ \widehat{U_i} < (1- 3 \gamma) \cdot U^*_i \right] & \leq \Pr\left[ \frac{\widehat{U_i}}{1-\gamma} < (1- 2 \gamma) \cdot U^*_i \right] \nonumber\\ & \le e^{- \frac{\gamma^3}{4} \frac{V_i}{2R}} \le e^{- \frac{\gamma^3}{8R} Q_{\ell}^2}\nonumber\\ & \le e^{- Q_{\ell} \cdot \log L} \le e^{ - \left( Q_{\ell} + 2 \log L + \log 6 \right)} \leq \frac{1}{6 L^2 e^{Q_{\ell}}},\label{eqn:heavy-prob} \end{align} where the second inequality holds because $\log V_i \ge 2 \log Q_{\ell} $, the third holds because $Q_{\ell} = \Omega \left(\frac{R L}{\gamma^3} \right)$, and the fourth holds because $L = \omega(1)$. We are now ready to prove the first part of Theorem~\ref{thm:main2}. Let $\eta_{\ell} = \frac{1}{6 L^2 e^{Q_{\ell}}}$. Recall that $F_{\ell}$ consists of agents in $G_{\ell}$ for which $\widehat{U_i}< (1-3\gamma) \cdot U^*_i$. Using linearity of expectation together with Equation~\eqref{eqn:heavy-prob}, we have $\mathbf{E}[|F_{\ell}|] \le \eta_{\ell} \cdot |G_{\ell}|$. By Markov's inequality, $ \Pr\left[ |F_{\ell}| > 3 L \cdot \eta_{\ell} \cdot |G_{\ell}| \right] \le \frac{1}{3L}.$ Applying the union bound over the $L$ heavy groups, we have that with probability at least $2/3$, $$ |F_{\ell}| \le 3 L \cdot \eta_{\ell} \cdot |G_{\ell}| = \frac{1}{ 2 L e^{Q_{\ell}}} \cdot |G_{\ell}|,\quad \forall \ell \in \{0,1,\ldots,L-1\}, $$ which proves the first part of Theorem~\ref{thm:main2}. \paragraph{Case 2: Light Group} For the light group, note that $\log V_i \le Q_{L}$. For this group, the MPF solution may not provide any non-trivial guarantee on the utility to the agents. Since the expected utility can now be small, we have to allow additive approximation as well. Recall that $F_{L}$ consists of agents in $G_{L}$ for whom $\widehat{U_i} < (1-3\gamma) \cdot U^*_i$ {\em as well as} $\widehat{U_i} < U_i^* - 4Q_{L}/\gamma^4$. We again consider two cases. 
\medskip \noindent {\bf Case 1.} If $U^*_i \le \frac{4}{\gamma^4} Q_{L}$, then $\widehat{U}_i \ge U^*_i - \frac{4 Q_{L}}{\gamma^4}$ trivially. \medskip \noindent {\bf Case 2.} Otherwise, $U^*_i \ge \frac{4 Q_{L}}{\gamma^4} $, and using Lemma~\ref{lem:main}, we have: $$ \Pr\left[ \widehat{U}_i < (1- 3 \gamma) U^*_i \right] \le \Pr\left[ \frac{\widehat{U}_i}{1-\gamma} < (1- 2 \gamma) U^*_i \right] \le e^{- \frac{\gamma^3}{4} U^*_i} \le e^{- \frac{\gamma^3}{4} \cdot \frac{4 Q_{L}}{\gamma^4} } \le \frac{1}{4 e^{Q_{L}}}. $$ It is easy to check that the final transition holds because $\gamma < 1$ is a constant and $Q_L = \omega(1)$. Note that none of the agents in $F_{L}$ are in Case 1. Hence, by Markov's inequality, we again have: $$ \Pr\left[|F_{L}| \ge \frac{1}{2e^{Q_{L}}} \cdot |G_{L}| \right] \le \frac{1}{2}, $$ which proves the second part of Theorem~\ref{thm:main2}. \end{proof} \subsubsection{Approximate Core} We showed that with probability at least $1/6$, our algorithm returns a solution that satisfies conditions in both parts of Theorem~\ref{thm:main2}. We now show that such a solution is the desired approximate core solution. The main idea is that when a set of agents deviate, the fraction of agents in a group $G_{\ell}$ that are in $F_{\ell}$ is small enough such that even if they receive their maximum possible utility, which is $e^{Q_{\ell}}$, their scaled down utility is at most a constant. \begin{theorem} \label{thm:main3} For every coalition $S$ and every possible outcome $\vec{h}$, there exists an agent $i \in S$ s.t. $$ \frac{|S|}{n} \cdot u_i(\vec{h}) \le (1+8\gamma) \cdot \widehat{U_i} + \frac{5 Q_{L}}{\gamma^4}. $$ \end{theorem} \begin{proof} Let $W = N \setminus \cup_{\ell = 0}^{L} F_{\ell}$. In other words, $W$ is the set of agents who either receive a good multiplicative approximation to their expected utility in the core (for the heavy groups), or a good additive approximation to their expected utility in the core (for the light group). 
In particular, for every $i \in W$, we have $\widehat{U_i} \ge \min\left((1-3 \gamma) \cdot U^*_i, U_i^* - \frac{4 Q_{L}}{\gamma^4} \right)$, which implies \begin{equation} U^*_i \le \frac{1}{1-3 \gamma} \cdot \widehat{U_i} + \frac{4 Q_{L}}{\gamma^4}.\label{eqn:Ustar-U} \end{equation} Consider a set of agents $S$ that may want to deviate, and let $\vec{h}$ be any (even fractional) outcome. There are two cases: \paragraph{Case 1.} Suppose $|S \cap W| \ge (1-\gamma) \cdot |S|$. Then, due to the fractional core optimality condition (see Section~\ref{sec:fractional}), we have: $$\sum_{i \in S \cap W} \frac{u_i(\vec{h})}{U^*_i} \le n.$$ Note that in polynomial time, Theorem~\ref{thm:fractional} only finds an approximate solution, with utilities $\{\tilde{U}_i\}$ satisfying $\sum_{i \in S \cap W} \frac{u_i(\vec{h})}{\tilde{U}_i} \le n (1+\eta)$ for small $\eta > 0$. It is easy to check that this does not alter the rest of the proof and adds a small multiplicative factor of $(1+\eta)$ to the final approximation bound. We ignore this factor for simplicity and simply assume $\{U_i^*\}$ are the optimal MNW utilities. The above implies $ \frac{|S|}{n} \cdot \sum_{i \in S \cap W} \frac{u_i(\vec{h})}{U^*_i} \le |S| \le \frac{1}{1-\gamma} \cdot |S \cap W|$. Therefore, there exists an agent $i \in S \cap W$ such that $$ \frac{|S|}{n} \cdot u_i(\vec{h}) \le \frac{1}{1-\gamma} \cdot U^*_i \le \frac{1}{1-\gamma} \cdot \left( \frac{1}{1-3\gamma} \cdot \widehat{U_i} + \frac{4 Q_{L}}{\gamma^4} \right), $$ where the last transition is due to Equation~\eqref{eqn:Ustar-U} and the fact that $i \in W$. Finally, it is easy to check that for $\gamma = \delta/8 \le 1/8$, we have $\frac{1}{(1-\gamma)\cdot (1-3\gamma)} \le 1+8\gamma$ and $4/(1-\gamma) \le 5$, which yields: \begin{equation} \frac{|S|}{n} \cdot u_i(\vec{h}) \le (1 + 8 \gamma) \cdot \widehat{U_i} + \frac{5 Q_{L}}{\gamma^4}. \label{eqn:main2-bound} \end{equation} \paragraph{Case 2.} Otherwise, $|S \setminus W| \ge \gamma |S|$. 
In this case, we want to show that there exists an agent $i \in S\setminus W$ such that $(|S|/n) \cdot u_i(\vec{h}) \le 1/\gamma$. Because $\widehat{U_i} \ge 0$ and $Q_{L} = \omega(1)$, such an agent will also satisfy Equation~\eqref{eqn:main2-bound}. We show this by taking two sub-cases. First, suppose the light group satisfies $|S \cap F_{L}| \ge \frac{\gamma}{2} |S|$. Then: $ |S| \le \frac{2}{\gamma} \cdot |S \cap F_{L}| \le \frac{2}{\gamma} \cdot |F_{L}|.$ Thus, for any agent $i \in S \cap F_L$, we have $$ \frac{|S|}{n} \cdot u_i(\vec{h}) \le \frac{2}{\gamma n} \cdot |F_{L}| \cdot V_i \le \frac{2}{\gamma n} \cdot \frac{|G_{L}|}{2 e^{Q_{L}}} \cdot V_i \le \frac{1}{\gamma}. $$ Here, the second transition follows from Theorem~\ref{thm:main2}. To see why the third transition holds, note that $|G_{L}| \le n$, and that $\log V_i \le Q_{L}$ because $i \in G_{L}$. Similarly, in the other sub-case, suppose $|S \cap F_{L}| \le \frac{\gamma}{2} |S|$. Then, there exists a heavy group $\ell \in \{0,1,\ldots,L-1\}$ such that $|S \cap F_{\ell}| \ge \frac{\gamma}{2L} |S|$. This means $ |S| \le \frac{2L}{\gamma} \cdot |S \cap F_{\ell}| \le \frac{2L}{\gamma} \cdot |F_{\ell}|.$ Again, for an arbitrary agent $i \in S \cap F_{\ell}$, we have: $$ \frac{|S|}{n} \cdot u_i(\vec{h}) \le \frac{2 L}{\gamma n} \cdot |F_{\ell}| \cdot V_i \le \frac{2L}{\gamma n} \cdot \frac{|G_{\ell}|}{2 L e^{Q_{\ell}}} \cdot V_i \le \frac{1}{\gamma}. $$ Once again, the second transition follows from Theorem~\ref{thm:main2}, and the third transition holds because $|G_{\ell}| \le n$ and $\log V_i \le Q_{\ell}$ as $i \in G_{\ell}$. Putting everything together, the theorem follows. 
\end{proof} Since $\gamma = \frac{\delta}{8}$ and $Q_{L} = \Theta\left( \log \frac{R \cdot \log^* V_{\max}}{\gamma^3} \right)$, Theorem~\ref{thm:main3} implies $ \frac{|S|}{n} \cdot u_i(\vec{h}) \le (1+\delta) \cdot \widehat{U_i} + \alpha^*$, where $ \alpha^* = O\left(\frac{1}{\delta^4} \cdot \log\left(\frac{R \cdot \log^*V_{\max}}{\delta} \right) \right)$. The existence of such an agent implies that a solution satisfying Theorem~\ref{thm:main2} is a $(\delta,\alpha)$-core solution for every $\alpha > \alpha^*$, which completes the proof of Theorem~\ref{thm:main}. \section*{Appendix} \section{Impossibility Examples} \label{sec:Examples} \begin{example} \label{eg:IS} \normalfont{ The following example shows the necessity of assuming a large $\vec{b}$ in Theorem~\ref{thm:packing} for general packing constraints. Specifically, the example uses packing constraints $A \vec{x} \le \vec{b}$ where $\vec{b} = \vec{1}$ (and the width is $\rho = 2$), and does not admit a $(\delta, m/4)$-core outcome for any $\delta > 0$. Consider a complete bipartite graph $G(L,R,E)$, where $|L| = |R| = m/2$. The vertices are the elements of the ground set $W$, and the constraints ensure that feasible outcomes are independent sets. There are two agents. Agent $1$ has unit utility for each vertex in $L$, and zero utility for each vertex in $R$, while agent $2$ has unit utility for each vertex in $R$. A feasible outcome is forced to choose either vertices from $L$ or vertices from $R$, and hence gives zero utility to at least one agent. But this agent can deviate and choose an outcome with utility $m/2$, which is then scaled down to $m/4$. Hence, no feasible outcome is $(\delta,m/4)$-core for any $\delta > 0$. 
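The example can be checked exhaustively for a small graph; the sketch below uses $m=6$ (hypothetical vertex labels, with $L = \{0,1,2\}$ and $R = \{3,4,5\}$).

```python
from itertools import combinations

# Exhaustive check of the example: in the complete bipartite graph K_{3,3},
# feasible outcomes are independent sets; agent 1 values side L, agent 2
# values side R. Every feasible outcome zeroes out one of the two agents.
L_side, R_side = {0, 1, 2}, {3, 4, 5}
edges = [(u, v) for u in L_side for v in R_side]
vertices = L_side | R_side

def independent(S):
    return not any(u in S and v in S for u, v in edges)

best = 0  # max over feasible outcomes of the minimum agent utility
for k in range(len(vertices) + 1):
    for S in combinations(vertices, k):
        if independent(set(S)):
            u1, u2 = len(set(S) & L_side), len(set(S) & R_side)
            best = max(best, min(u1, u2))
assert best == 0
print("every feasible outcome gives zero utility to some agent")
```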
Note that in this example, the welfare-maximizing outcome for a {\em single} agent is simple to compute, which shows that the non-existence of a good approximation to the core is orthogonal to the computational difficulty of the single-agent welfare maximization problem.} \end{example} \begin{comment} \begin{example} \label{eg:knapsack} \km{This needs to be verified and written better to make the connection between $\ell$ and the approximation more explicit.} Consider the smooth Nash Welfare objective in Equation~\eqref{eq:FK} for constant $\ell \ge 1$. We consider the {\sc Knapsack} setting. There is a knapsack of integer capacity $B$. Let $c \ge \ell$ be a parameter capturing the ratio of capacity to the largest item size. There are $n$ agents, of which the first $\alpha n$ agents are {\em special}. We assume $\alpha \le 1$ is a parameter that we will choose later. There are $c$ {\em large} items of size $B/c$, and all $n$ agents have unit utility for all these items. There are $B$ {\em small} items of unit size. The special agents have unit utility for all these items. Note that $m = \Theta(B)$ on this instance. We will determine the condition under which the smooth NW objective only chooses large items. Consider comparing two solutions: The first chooses only large items and yields utility $c$ for all agents, while the second chooses $(c-r)$ large items and $r B/c$ small items. If the former has larger objective value, then: $$ \log (c + \ell) > \alpha \log \left(\frac{rB}{c} + (c-r) + \ell \right) + (1-\alpha) \log(c - r + \ell)$$ Since $c \ge \ell$, the above is true for any $1 \le r \le c$ when: $$ \log \left(1 + \frac{1}{2c}\right) \ge \alpha \log (2B)$$ Since $\log(1+x) \ge x/2$ for $x \in [0,1]$, the above holds when: $$ \alpha \le \frac{1}{2c \log (2B)}$$ Consider any $c = B^{1/2 - \epsilon}$ for constant $\epsilon > 0$, and $\alpha = \frac{1}{2c \log (2B)}$. 
The above derivation shows that the optimal smooth Nash Welfare solution for any $\ell \le c$ chooses only the large items. The special agents get utility $c$ in this solution. If they deviate and choose all small items, they get utility $$\alpha B = \Omega \left(\frac{B}{c \log B}\right) = \Omega\left(\frac{B^{1/2 + \epsilon}}{\log B} \right) = c \times \Omega\left(\frac{B^{2 \epsilon}}{\log B} \right)$$ Therefore, even when all items have small sizes relative to the knapsack capacity (which corresponds to the large $\vec{b}$ case for packing constraints), there is no reasonable setting of $\ell$ that yields a polylog$(m)$ approximation to the core. \end{example} \end{comment} \iffalse \begin{example} \label{eg:knapsack} Consider the {\sc Knapsack} setting. We will show that no single smoothing parameter yields a smooth Nash Welfare objective that can give a polylog approximate core guarantee on all instances, as we gave in Theorem~\ref{thm:main}. In particular, we will show that for any smoothing $\ell > 0$, there exists a family instances in the Knapsack setting, each with a single global budget $B$ and $\Theta(B)$ elements, for which the integer outcome maximizing $\sum_{i \in N} \ln{(\ell + u_i(\bc)}$ is not an $(o(m), o(m))$-core outcome. This does not rule out the possibility of an algorithm based on smooth Nash Welfare, but it suggests that adapting such an algorithm from our work is not obvious. \medskip Let $\ell > 0$. There is a global budget $B \geq l^4$. That is, every element will have a size, and the sum of the sizes of elements must be no more than $B$ in a feasible outcome. The family of instances will be parameterized on $B$. There are $n$ agents, of which the first $\alpha n$ agents are {\em special} (we will determine $\alpha$ later). Assume that $n$ is very large, in particular $n \geq 2 B^{1/4} \log(2B)$. There are $B^{1/4}$ {\em large} elements of size $B^{3/4}$, and all $n$ agents have unit utility for all these elements. 
There are $B$ {\em small} elements of unit size. The special agents have unit utility for all these elements, and all others agents have 0 utility for all these elements. Note that $m = \Theta(B)$. The idea of the construction is that no amount of smoothing is enough: For any smoothing, the smooth NW objective on the constructed instance only chooses large items provided that $\alpha$ is sufficiently small. However, we can still choose $\alpha$ to be large enough that the special agents can deviate and get a large amount of utility. Consider comparing the two types of Pareto Optimal solutions: The first chooses only large items and yields utility $B^{1/4}$ for all agents, while the second chooses $(B^{1/4}-r)$ large items and $r B^{3/4}$ small items. If the former has larger objective value, then: $$ \log (B^{1/4} + \ell) > \alpha \log \left(r B^{3/4} + (B^{1/4}-r) + \ell \right) + (1-\alpha) \log(B^{1/4} - r + \ell)$$ Since $\ell \leq B^{1/4}$, the above is true for any $1 \le r \le B^{1/4}$ when: $$ \log (2 B^{1/4}) > \alpha \log \left(B^{1/4} B^{3/4} + 2 B^{1/4} \right) + \log(2 B^{1/4} - 1)$$ so that: $$ \log \left(1 + \frac{1}{2B^{1/4}}\right) \ge \alpha \log (2B)$$ Since $\log(1+x) \ge x/2$ for $x \in [0,1]$, the above holds when: $$ \alpha \le \frac{1}{2B^{1/4} \log (2B)}$$ So if we set $\alpha = \frac{1}{2B^{1/4} \log (2B)}$, the outcome maximizing smooth Nash Welfare with smoothing parameter $\ell$ choses only the large items (that that $n \geq 1/\alpha$, so there is at least one special agent). The special agents get utility $B^{1/4}$ in this solution. 
If they deviate and choose all small items, they get utility $$\alpha B = \left(\frac{B}{2B^{1/4}\log (2B)}\right) = B^{1/4} \left(\frac{B^{1/2}}{2\log(2B)} \right)$$ Since we only needed $B \geq l^4$ for this argument, and $m = \Theta(B)$, this shows that on a family of instances with increasing $B$, the outcome maximizing smooth Nash Welfare with smoothing parameter $\ell$ is not an $(o(m), o(m))$-core solution. \end{example} \fi \begin{example} \label{eg:knapsack} \normalfont{ Recall that in the {\sc Knapsack} setting, we are given a set of elements of different sizes, and our goal is to select a subset of elements with total size at most a given budget $B$. We show that for any $\ell > 0$, there exists a {\sc Knapsack} instance in which maximizing the smooth Nash welfare objective $F(\bc) = \sum_{i \in N} \ln{(\ell + u_i(\bc))}$ returns an outcome that is not a $(O(m^{1/2-\epsilon}),O(m^{3/4-\epsilon}))$-core outcome. This is in contrast to Theorem~\ref{thm:main}, which provides a $(\delta,\alpha)$-core guarantee where $\delta$ is constant and $\alpha$ is logarithmic in the number of elements. Fix $\ell > 0$. Set a large budget $B \geq \ell^4$. There are $m = B^{1/4}+B$ elements, of which $B^{1/4}$ are {\em large} elements of size $B^{3/4}$ and the remaining $B$ are {\em small} elements of unit size. There are $n \ge 4 B^{1/4} \log(2B)$ agents. Each agent has unit utility for each large element. A subset of $\alpha n$ agents are {\em special} (we determine $\alpha$ later). These special agents have unit utility for each small element, while the remaining agents have zero utility for the small elements. The idea is to show that when $\alpha$ is sufficiently small, the smooth Nash welfare objective will choose only the large elements. However, $\alpha$ can still be large enough so that the special agents can deviate, and get a large amount of utility. 
Note that maximizing the smooth Nash welfare objective returns a Pareto efficient solution, and hence can be one of two types: it either chooses all large elements (which gives utility $B^{1/4}$ to each agent), or it chooses $B^{1/4}-r$ large elements and $r B^{3/4}$ small elements. For the former to have a larger smooth Nash welfare objective value, we need that for each $1 \le r \le B^{1/4}$, $$ \ln (B^{1/4} + \ell) > \alpha \ln \left(r B^{3/4} + (B^{1/4}-r) + \ell \right) + (1-\alpha) \ln(B^{1/4} - r + \ell). $$ This holds true if $$ \ln\left(\frac{B^{1/4}+\ell}{B^{1/4}+\ell-r}\right) > \alpha \ln \left(r B^{3/4} + (B^{1/4}-r) + \ell \right). $$ Since $0 < \ell \leq B^{1/4}$, the above is true for each $1 \le r \le B^{1/4}$ if $$ \ln \left(\frac{2B^{1/4}}{2B^{1/4}-1}\right) > \alpha \ln \left(B^{1/4} B^{3/4} + B^{1/4} \right). $$ This is true when $$ \ln \left(1 + \frac{1}{2B^{1/4}}\right) \ge \alpha \ln (2B). $$ Since $\ln(1+x) \ge x/2$ for $x \in [0,1]$, the above holds when: $$ \alpha \le \frac{1}{4B^{1/4} \ln (2B)}. $$ Let us set $\alpha = \frac{1}{4B^{1/4} \ln (2B)}$. Choosing all large elements maximizes the smooth Nash welfare objective. Since $n \ge 1/\alpha$, there is at least one special agent. The special agents get utility $B^{1/4}$ each. If they deviate and choose all the small elements, they get (scaled down) utility $$ \alpha B = \frac{B}{4B^{1/4}\ln (2B)} = \frac{B^{3/4}}{4\ln(2B)}. $$ Hence, for the solution to be a $(\delta,\alpha)$-core outcome, we need $(1+\delta) \cdot B^{1/4} + \alpha \ge B^{3/4}/(4\ln (2B))$. Since $m = \Theta(B)$, this shows that the outcome is not a $(O(m^{1/2-\epsilon}),O(m^{3/4-\epsilon}))$-core outcome for any constant $\epsilon > 0$, as required. } \end{example} \begin{comment} \begin{example} \label{eg:matroid} This example shows that for matroid constraints, a $(0, 1-\epsilon)$-core outcome is not guaranteed to exist for any $\epsilon > 0$. 
Consider the following instance of public decision making, which is a special case of matroid constraints. There are $n$ agents, where $n$ is even. There are $m = (n-2)+n/2$ issues. The first $n-2$ issues correspond to unit-value private goods, \textit{i.e.}, each such issue has $n$ alternatives, and each alternative gives utility $1$ to a unique agent and utility $0$ to others. The remaining $n/2$ issues are ``pair issues''; each such issue has $\binom{n}{2}$ alternatives, one corresponding to every pair of agents that gives both agents in the pair utility $1$ and all other agents utility $0$. It is easy to see that every integer allocation gives utility at most $1$ to at least two agents. Consider the deviating coalition consisting of these two agents. They can choose the alternative that gives them each utility $1$ on every pair issue, and split the $n-2$ private good equally. Thus, they each get utility $n/2 + (n-2)/2 = n-1$. For the outcome to be a $(0,\alpha)$-core outcome, we need $1 + \alpha \ge (2/n) \cdot (n-1)$. As $n \to \infty$, this requires $\alpha \rightarrow 1$. Hence, for any $\epsilon > 0$, a $(0,1-\epsilon)$-core outcome is not guaranteed to exist. Note that Theorem~\ref{thm:matroid} shows existence of a $(0,2)$-core outcome, which is therefore tight up to a unit additive relaxation. \end{example} \begin{example} \label{eg:matching} This example shows that a $(\delta, \alpha)$-core outcome is not guaranteed to exist for matching constraints, for any $\delta \ge 0$ and $\alpha < 1$. Consider the graph $K_{2,2}$ (the complete bipartite graph with two vertices on each side). This graph has four edges, and two disjoint perfect matchings. Let there be two agents. Agent $1$ has unit utility for the edges of one matching, while agent $2$ has unit utility for the edges of the other matching. Any integer outcome gives zero utility to one of these agents. This agent can deviate and obtain utility $2$. 
Hence, for an outcome to be a $(\delta,\alpha)$-core outcome, we need $(1+\delta)\cdot 0 + \alpha \ge (1/2) \cdot 2$, which is impossible for any $\delta \ge 0$ and $\alpha < 1$. \end{example} \end{comment} \section{Omitted Proofs} \label{sec:omittedProof} \iffalse \section{The Core for Private Goods} \label{app:private} In the private goods setting, there are $m$ goods that must be allocated among $n$ agents, each good going to at most one agent. Agent $i$ has utility $u_{ij}$ for good $j$, so that if she receives a subset $S$ of goods, then her utility is $\sum_{j \in S} u_{ij}$. Assume by scaling that $u_{ij} \in [0,1]$ for all $i,j$. This is a special case of the public goods setting with a partition matroid. For each good $j$, create $n$ copies, $(j,i)$, one for each agent. The matroid constraint is that only one copy for any $j$ can be chosen. Agent $i$ only has utility $u_i(j,i')$ which is $u_{ij}$ if $i' = i$ and zero otherwise. Then, Theorem~\ref{thm:matroid} shows the existence of a $(0,2)$-core. The lemma below shows that in this setting, we can preserve the fractional Nash welfare allocation of {\em all} agents. This is in contrast to the public goods setting, where such a guarantee is not possible (refer Example~\ref{eg:one} in Section~\ref{sec:main}). \begin{lemma} For a private goods problem, let $U^*_i$ denote the utility of agent $i$ in the fractional Nash welfare allocation. Then there is an integer allocation that gives each agent $i$ utility $\hat{U}_i \ge U^*_i - 1$, yielding a $(0,1)$-core solution. \end{lemma} \begin{proof} Let $x_{ij}$ denote the allocation of good $j$ to agent $i$. Consider the following set of constraints: \[ \begin{array}{rcll} \sum_j u_{ij} x_{ij} & \ge & U^*_i & \forall i \\ \sum_i x_{ij} & \le & 1 & \forall j \\ x_{ij} & \ge & 0 & \forall i,j \end{array} \] The fractional MNW solution is feasible for the above program. 
By the Generalized Assignment rounding scheme~\cite{ShmoysT93}, there is an integer allocation that reduces the utility of each agent by at most one item. We omit the simple details. \end{proof} \fi \input{fractional.tex} \begin{comment} \subsection{Proof of Lemma~\ref{lem:privatecore}} \label{proof:privatecore} We now show that for the allocation of private goods, maximizing the smooth Nash welfare objective with $\ell = 1$ yields a $(0,1)$-core outcome. Let us first formally define the setting. There is a set of agents $N$ and a set of private goods $M$. Each agent $i \in N$ has a utility function $u_i : 2^M \to \mathbb{R}_{\ge 0}$. Utilities are additive, so $u_i(S) = \sum_{g \in S} u_i(\set{g})$ for all $S \subseteq M$. For simplicity, we denote $u_{ig} \triangleq u_i(\set{g})$. Without loss of generality, we normalize the utility of each agent such that $\max_{g \in M} u_{ig} = 1$ for each $i$. An allocation $A$ is a {\em partition} of the set of goods among the agents; let $A_i$ denote the bundle of goods received by agent $i$. We want to show that an allocation maximizing the objective $\prod_{i \in N} (1+u_i(A_i))$ is a $(0,1)$-core outcome. \begin{proof}[Proof of Lemma~\ref{lem:privatecore}] Let $A$ denote an allocation maximizing the smooth Nash welfare objective with $\ell = 1$. We assume without loss of generality that every good is positively valued by at least one agent. Hence, $u_j(A_j) = 0$ must imply $A_j = \emptyset$. For agents $i,j \in N$ with $A_j \neq \emptyset$ (hence $u_j(A_j) > 0$), and good $g \in A_j$, moving $g$ to $A_i$ should not increase the objective function. Hence, for each $g \in A_j$, we have $$ \big(1+u_i(A_i \cup \set{g})\big) \cdot \big(1+u_j(A_j \setminus \set{g})\big) \le \big(1+u_i(A_i)\big) \cdot \big(1+u_j(A_j)\big). $$ Using additivity of utilities, this simplifies to \begin{equation} \frac{u_{ig}}{1 + u_i(A_i)} \le \frac{u_{jg}}{1+u_j(A_j)-u_{jg}} \le \frac{u_{jg}}{u_j(A_j)}. 
\label{eqn:privatecore1} \end{equation} For every agent $j \in N$ with $A_j \neq \emptyset$ and good $g \in A_j$, define $p_g = u_{jg}/u_j(A_j)$. Abusing the notation a little, for a set $T \subseteq M$ define $p_T = \sum_{g \in T} p_g$. Then, from Equation~\eqref{eqn:privatecore1}, we have that for all players $i \in N$ and goods $g \in M$, \begin{equation} (1+u_i(A_i)) \cdot p_g \ge u_{ig}. \label{eqn:privatecore2} \end{equation} Suppose for contradiction that $A$ is not a $(0,1)$-core outcome. Then, there exists a set of agents $S \subseteq N$ and an allocation $B$ of the set of all goods to agents in $S$ such that $(|S|/n) \cdot u_i(B_i) \ge 1+u_i(A_i)$ for every agent $i \in S$, and at least one inequality is strict. Rearranging the terms and summing over $i \in S$, we have \begin{equation} \sum_{i \in S} \frac{u_i(B_i)}{1+u_i(A_i)} > \sum_{i \in S} \frac{n}{|S|} = n. \label{eqn:privatecore3} \end{equation} We now derive a contradiction. For agent $i \in S$, summing Equation~\eqref{eqn:privatecore2} over $g \in B_i$, we get $$ (1+u_i(A_i)) \cdot p_{B_i} \ge u_i(B_i) \Rightarrow \frac{u_i(B_i)}{1+u_i(A_i)} \le p_{B_i}. $$ Summing this over $i \in S$, we get $$ \sum_{i \in S} \frac{u_i(B_i)}{1+u_i(A_i)} \le \sum_{i \in S} p_{B_i} = \sum_{g \in M} p_g = \sum_{\substack{j \in N \text{ s.t.}\\A_j \neq \emptyset}} \sum_{g \in A_j} \frac{u_{jg}}{u_j(A_j)} = \sum_{\substack{j \in N \text{ s.t.}\\A_j \neq \emptyset}} \frac{u_j(A_j)}{u_j(A_j)} \le n. $$ However, this contradicts Equation~\eqref{eqn:privatecore3}. \end{proof} \subsection{Analysis in Section~\ref{sec:matching}: Proof of Theorem~\ref{thm:matching}} \label{app:matching} First, we show that the algorithm in Section~\ref{sec:matching} runs in polynomial time. Again, recall that each agent has utility at most $m$. Thus, $F(\bc) = O(n \cdot \ln m)$. Because each improvement increases the objective value by at least $n/(\kappa r)$, the number of iterations is $O(\kappa r \ln m) = O(m^2/\delta)$. 
Each iteration can be implemented by na\"{\i}vely going over all $O(m^{\kappa})$ subsets of edges of size at most $\kappa$, checking if they are valid augmentations with respect to $\bc$, and whether they improve the objective function by more than $n/(\kappa r)$. The local search therefore runs in polynomial time for constant $\delta > 0$. Let $\bc$ denote the outcome returned by the algorithm. We next show that $\bc$ is indeed a $(\delta,8+3\kappa)$-core outcome. Suppose for contradiction that this is not true. Then, there exist a subset of agents $S$ and a matching ${\mathbf c'}$ such that for all $i \in S$, $$ \frac{|S|}{n} \cdot u_i({\mathbf c'}) \ge (1+\delta) \cdot u_i(\bc) + 8 + 3 \kappa \ge (1+ \delta) \cdot \left(u_i(\bc) + 3 \kappa + 1 \right), $$ and at least one inequality is strict (the last inequality is because $\delta \in (0,1]$). Rearranging and summing over all $i \in S$, we obtain \begin{equation} \sum_{i \in S} \frac{u_i({\mathbf c'})}{u_i(\bc) + 3 \kappa + 1} > (1+\delta) \cdot \sum_{i \in S} \frac{n}{|S|} = n \cdot (1+\delta). \label{eqn:matching-inequality1} \end{equation} For $j \in E$, define $w_j = \sum_{i \in N} \frac{u_{ij}}{u_i(\bc) + 1}$ and $w'_j = \sum_{i \in N} \frac{u_{ij}}{u_i(\bc) + 3 \kappa + 1}$. Let $W = \sum_{j \in \bc} w_j$, and $W' = \sum_{j \in {\mathbf c'}} w'_j$. It is easy to check that \begin{equation} W \le n \quad \text{and} \quad W' \ge n \cdot (1+\delta), \label{eqn:matching-inequality2} \end{equation} where the latter follows from Equation~\eqref{eqn:matching-inequality1}. Further note that $w_j \ge w'_j$ for all $j$. For an augmentation $T$ with respect to $\bc$, define $\mathrm{gain}(T) = \sum_{j \in T} w'_j - \sum_{j \in M(T)} w_j$. The next lemma is a simple generalization of the analysis in~\cite{HV}; we give the adaptation here for completeness. 
\begin{lemma} Assuming weights $w_j \ge w'_j$ for all edges $j$, for any integer $\kappa \ge 1$ and matchings $\bc$ and ${\mathbf c'}$, there exists a multiset $OPT$ of augmentations with respect to $\bc$ such that: \begin{itemize} \item For each $T \in OPT$, $T \subseteq {\mathbf c'}$ and $|T| \le \kappa$; \item $|OPT| \le \kappa r$; and \item $\sum_{T \in OPT} \mathrm{gain}(T) \ge \kappa \cdot W' - (\kappa + 1) \cdot W$. \end{itemize} \label{lem:hv} \end{lemma} \begin{proof} We follow~\cite{HV} in the construction of the multiset $OPT$ of augmentations with respect to $\bc$ out of edges in ${\mathbf c'}$. Let $\bc \triangle {\mathbf c'}$ be the symmetric difference of matchings $\bc$ and ${\mathbf c'}$, consisting of alternating paths and cycles. For every cycle or path ${\mathbf d} \in \bc \triangle {\mathbf c'}$, let $T_{{\mathbf d}}$ be the set of edges ${\mathbf d} \cap {\mathbf c'}$. For all $T_{{\mathbf d}}$ with $|T_{{\mathbf d}}| \leq \kappa$, add $T_{{\mathbf d}}$ to $OPT$ $\kappa$ times (note that $OPT$ is a multiset, not a set). For $T_{{\mathbf d}}$ with $|T_{{\mathbf d}}| > \kappa$, we break up $T_{{\mathbf d}}$ into multiple smaller augmentations. To do so, index the edges in $T_{{\mathbf d}}$ from $1$ to $|T_{{\mathbf d}}|$ and add $|T_{{\mathbf d}}|$ different augmentations to $OPT$ by starting at every index in $T_{{\mathbf d}}$ and including the next $\kappa$ edges in $T_{{\mathbf d}}$, with wrap-around from $|T_{{\mathbf d}}|$ to $1$. Now we must argue that $OPT$ as constructed satisfies the conditions of the lemma. The first point, that $\forall T \in OPT$, $T \subseteq {\mathbf c'}$ and $|T| \leq \kappa$, follows directly from the construction. The second point follows from observing that we add at most $\max(\kappa, |T_{{\mathbf d}}|)$ augmentations to $OPT$ for every ${\mathbf d} \in \bc \triangle {\mathbf c'}$, these components are edge-disjoint, and graph $G$ has $r$ vertices.
To see the third point, note that every edge in ${\mathbf c'} \backslash \bc$ is contained in at least $\kappa$ augmentations in $OPT$. On the other hand, for every edge $e \in \bc \backslash {\mathbf c'}$, there are no more than $\kappa + 1$ augmentations $T \in OPT$ such that $e \in M(T)$ (recall $M(T)$ are the edges of $\bc$ with a vertex matched under $T$). The bound of $\kappa+1$ can be attained, for example, if $T_{{\mathbf d}}$ happens to be a path of length $\kappa + 1$. Finally, for the edges $j \in {\mathbf c'} \cap \bc$, the weight $w'_j \le w_j$. Putting these facts together, the third point of the lemma follows. \end{proof} Consider the set of augmentations $OPT$ from Lemma~\ref{lem:hv}. For augmentation $T \in OPT$, we have: \begin{align*} F(\left(\bc\setminus M(T)\right) \cup T) - F(\bc) & = \Big( F(\left(\bc\setminus M(T)\right) \cup T) - F(\bc \setminus M(T)) \Big) - \Big(F(\bc) - F(\bc \setminus M(T)) \Big)\\ & \ge \sum_{i \in N} \left( \frac{ \sum_{j \in T} u_{ij}}{u_i(\bc) + 2 \kappa + 1 + \sum_{j \in T} u_{ij}} - \frac{ \sum_{j \in M(T)} u_{ij}}{u_i(\bc) + 2 \kappa + 1 - \sum_{j \in M(T)} u_{ij}} \right) \\ & \ge \sum_{i\in N} \left( \frac{ \sum_{j \in T} u_{ij}}{u_i(\bc) + 3 \kappa + 1} - \frac{ \sum_{j \in M(T)} u_{ij}}{u_i(\bc)+ 1} \right) \\ & = \sum_{j \in T} w'_j - \sum_{j \in M(T)} w_j = \mathrm{gain}(T). \end{align*} Here, the second transition holds because $h/(x+h) \le \ln(x+h)-\ln x \le h/x$ for all $x \ge 1$ and $h \ge 0$, and the third transition holds due to $|T| \le \kappa$ and $|M(T)| \le 2|T| \le 2\kappa$. Therefore, we have: \begin{align*} \sum_{T \in OPT} \Big( F(\left(\bc\setminus M(T)\right) \cup T) - F(\bc) \Big) \ge \sum_{T \in OPT} \mathrm{gain}(T) &\ge \kappa \cdot W' - (\kappa + 1) \cdot W\\ &\ge \kappa \cdot n \cdot (1+\delta) - (\kappa + 1) \cdot n = n, \end{align*} where the second transition follows from Lemma~\ref{lem:hv}, and the third transition follows from Equation~\eqref{eqn:matching-inequality2}.
Since $|OPT| \le \kappa r$, there exists an augmentation $T \in OPT$ with $F(\left(\bc\setminus M(T)\right) \cup T) - F(\bc) \ge \sfrac{n}{\kappa r}$, which violates local optimality of $\bc$. This completes the proof of Theorem~\ref{thm:matching}. \end{comment} \subsection{Proof of Lemma~\ref{lem:main}} \label{proof:main} We first state the standard theorem for Chernoff bounds. \begin{theorem} \label{thm:chernoff} Let $X_1, X_2, \ldots, X_q$ be independent random variables in $[0,1]$, and let $X = \sum_{j=1}^q X_j$. For any $\epsilon \in (0,1)$, we have: $$ \Pr\left[ X < (1-\epsilon) \mathbf{E}[X] \right] \le e^{-\frac{\epsilon^2}{2} \mathbf{E}[X]}$$ Equivalently, for any $\eta < \mathbf{E}[X]$, $$ \Pr\left[ X < \mathbf{E}[X] - \eta \right] \le e^{-\frac{\eta^2}{2 \mathbf{E}[X]}}$$ \end{theorem} Lemma~\ref{lem:main} follows from considering two cases. First, suppose $(1-\gamma) A \ge B$. \begin{align*} \Pr[ X < (1-\gamma)^2 A ] & \le \Pr[ X < (1-\gamma) \mathbf{E}[X] ] \le e^{- \frac{\gamma^2}{2} \mathbf{E}[X] } \\ & \le e^{- \frac{\gamma^2}{2} \max(\gamma B, (1-\gamma)A)} \le e^{- \frac{\gamma^3}{2} \max(B,A)} \end{align*} In the other case, if $(1-\gamma) A < B$, then $\gamma B \le \mathbf{E}[X] \le (1+\gamma)B$. Then \begin{eqnarray*} \Pr[X < (1-\gamma) A ] & \le & \Pr[ X \le \mathbf{E}[X] - \gamma B ] \\ & \le & e^{- \gamma^2 \frac{ B ^2}{2 \mathbf{E}[X]}} \le e^{- \gamma^2 \frac{ B ^2}{2(1+\gamma) B} }\\ & \le & e^{- \frac{\gamma^3}{2} B} \le e^{- \frac{\gamma^3}{2} (1-\gamma)A} \le e^{- \frac{\gamma^3}{4} A} \\ \end{eqnarray*} \begin{comment} \subsection{Proof of Theorem~\ref{thm:main2}} \label{proof:main2} We prove the first and the second part of Theorem~\ref{thm:main2} separately by considering the heavy groups and the light group in turn. The combined result follows from the union bound. \paragraph{Case 1: Heavy Groups} Consider a heavy group $G_{\ell}$ for $0 \le \ell < L$. 
Recall that the MPF solution provides utility at least $V_i/(2R)$ to each agent in a heavy group. Hence, we have: \begin{equation} \mathbf{E} \left[ \widehat{U_i} / (1-\gamma) \right] = u_i(\vec{z}) \ge (1-\gamma) \cdot U^*_i + \gamma \cdot \frac{V_i}{2 R}. \label{eqn:heavy-utility} \end{equation} The key point is that even if $U^*_i$ is small, the expected utility is at least a term that is proportional to $V_i$. This will strengthen our application of Chernoff bounds. Using Lemma~\ref{lem:main} with $A = U_i^*$ and $B = V_i/(2R)$, we have: \begin{align} \Pr\left[ \widehat{U_i} < (1- 3 \gamma) \cdot U^*_i \right] & \leq \Pr\left[ \frac{\widehat{U_i}}{1-\gamma} < (1- 2 \gamma) \cdot U^*_i \right] \nonumber\\ & \le e^{- \frac{\gamma^3}{4} \frac{V_i}{2R}} \le e^{- \frac{\gamma^3}{8R} Q_{\ell}^2}\nonumber\\ & \le e^{- Q_{\ell} \cdot \log L} \le e^{ - \left( Q_{\ell} + 2 \log L + \log 6 \right)} \leq \frac{1}{6 L^2 e^{Q_{\ell}}},\label{eqn:heavy-prob} \end{align} where the second inequality holds because $\log V_i \ge 2 \log Q_{\ell} $, the third holds because $Q_{\ell} = \Omega \left(\frac{R L}{\gamma^3} \right)$, and the fourth holds because $L = \omega(1)$. We are now ready to prove the first part of Theorem~\ref{thm:main2}. Let $\eta_{\ell} = \frac{1}{6 L^2 e^{Q_{\ell}}}$. Recall that $F_{\ell}$ consists of agents in $G_{\ell}$ for which $\widehat{U_i}< (1-3\gamma) \cdot U^*_i$. Using the linearity of expectation in Equation~\eqref{eqn:heavy-prob}, we have $\mathbf{E}[F_{\ell}] \le \eta_{\ell} \cdot |G_{\ell}|$. By Markov's inequality, $ \Pr\left[ |F_{\ell}| > 3 L \cdot \eta_{\ell} \cdot |G_{\ell}| \right] \le \frac{1}{3L}.$ Applying the union bound over the $L$ heavy groups, we have that with probability at least $2/3$, $$ |F_{\ell}| \le 3 L \cdot \eta_{\ell} \cdot |G_{\ell}| = \frac{1}{ 2 L e^{Q_{\ell}}} \cdot |G_{\ell}|,\quad \forall \ell \in \{0,1,\ldots,L-1\}, $$ which proves the first part of Theorem~\ref{thm:main2}. 
\paragraph{Case 2: Light Group} For the light group, note that $\log V_i \le Q_{L}$. For this group, the MPF solution may not provide any non-trivial guarantee on the utility to the agents. Since the expected utility can now be small, we have to allow additive approximation as well. Recall that $F_{L}$ consists of agents in $G_{L}$ for whom $\widehat{U_i} < (1-3\gamma) \cdot U^*_i$ {\em as well as} $\widehat{U_i} < U_i^* - 4Q_{L}/\gamma^4$. We again consider two cases. \medskip \noindent {\bf Case 1.} If $U^*_i \le \frac{4}{\gamma^4} Q_{L}$, then $\widehat{U}_i \ge U^*_i - \frac{4 Q_{L}}{\gamma^4}$ trivially. \medskip \noindent {\bf Case 2.} Otherwise, $U^*_i \ge \frac{4 Q_{L}}{\gamma^4} $, and using Lemma~\ref{lem:main}, we have: $$ \Pr\left[ \widehat{U}_i < (1- 3 \gamma) U^*_i \right] \le \Pr\left[ \frac{\widehat{U}_i}{1-\gamma} < (1- 2 \gamma) U^*_i \right] \le e^{- \frac{\gamma^3}{4} U^*_i} \le e^{- \frac{\gamma^3}{4} \cdot \frac{4 Q_{L}}{\gamma^4} } \le \frac{1}{4 e^{Q_{L}}}. $$ It is easy to check that the final transition holds because $\gamma < 1$ is a constant and $Q_L = \omega(1)$. Note that none of the agents in $F_{L}$ are in Case 1. Hence, by Markov's inequality, we again have: $$ \Pr\left[|F_{L}| \ge \frac{1}{2e^{Q_{L}}} \cdot |G_{L}| \right] \le \frac{1}{2}, $$ which proves the second part of Theorem~\ref{thm:main2}. \end{comment} \section{Conclusion} We considered the problem of fairly allocating public goods. We argued that the {\em core}, which is a generalization of proportionality and Pareto efficiency, and approximations of the core provide reasonable fairness guarantees in this context. Given that no integral outcome may be in the core, we presented efficient algorithms to produce integral outcomes that are constant or near-constant approximations of the core, thereby also establishing the non-trivial existence of such outcomes. 
Note that our algorithms for matroid and matching constraints that globally optimize the smooth Nash welfare objective achieve {\em exact} rather than approximate Pareto efficiency, in addition to an approximation of the core. An interesting question is whether the same guarantee can be provided (regardless of computation time) for general packing constraints. Another natural question following our work is to tighten our upper bounds, or to establish matching lower bounds. For instance, we show the existence of a $(0,2)$-core outcome for matroid constraints (Theorem~\ref{thm:matroid}), but our lower bound only shows that a $(0,1-\epsilon)$-core outcome may not exist. This leaves open the question of whether a $(0,1)$-core outcome always exists. The existence of a $(0,1)$-core outcome is also an open question for matching constraints. For packing constraints, it is unknown if even a $(\delta,\alpha)$-core outcome exists for constant $\delta > 0$ and $\alpha = O(1)$. This also remains an open question for the endowment-based notion of core we consider in Appendix~\ref{sec:prop}. At a higher level, we established connections between approximating the core in our multi-agent environment and the problem of finding the optimal (i.e., utility-maximizing) outcome for a {\em single} agent. For instance, given matching constraints, our algorithm uses the idea of {\em short} augmenting paths from fast PRAM algorithms. This hints at the possibility of a deeper connection between efficiency results and existence results.
For simplicity of presentation, we focus on the {\sc Approval Voting} setting, where $n$ agents have binary additive utilities over $m$ unit-size elements, and feasible outcomes are subsets of elements of size at most $B$. So far, we considered a notion of core in which a subset $S$ of agents can deviate and choose a feasible outcome using the entire budget $B$; however, their utility is scaled down by $|S|/n$. A different notion of core is based on scaling the {\em endowment}. Under this notion, when $S$ deviates, it can choose an outcome with a scaled down budget of $B \cdot |S|/n$, but then its utility is not scaled down. This notion of core has been considered in the context of participatory budgeting~\cite{Fain2016} and logically implies {\em proportional representation} of voters in multi-winner elections with approval voting. This notion builds on the seminal work of \cite{lindahlCore} on Lindahl equilibrium and its connection to the core. For $P \le B$, let $\O(P)$ denote the set of outcomes consisting of at most $P$ elements. \begin{definition} \label{def:core2} We say that an outcome $\bc$ is a $(\delta,\alpha)$-core outcome if for every subset $S$ of agents and every outcome ${\mathbf c'} \in \O(B \cdot (1-\delta) \cdot \frac{|S|}{n})$, it is {\em not} the case that $u_i({\mathbf c'}) \geq (1+\delta) \cdot u_i(\bc) + \alpha$ for all $i \in S$ and at least one inequality is strict. We refer to a $(0,0)$-core outcome simply as a core outcome. \end{definition} As shown by~\cite{Fain2016}, it follows directly from the work of \cite{lindahlCore} that there always exists a fractional core outcome in this setting due to a fixed point argument.\footnote{Note that Theorem~\ref{thm:fractional} does not apply in this setting, since it requires scaling the utility by the size of the coalition.} However, it is not known if a fractional core or approximate core outcome can be computed in polynomial time. 
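To build intuition for Definition~\ref{def:core2}, the $(0,0)$ endowment-based core condition can be verified by brute force on toy {\sc Approval Voting} instances. The sketch below (the instance, function names, and parameters are our own illustrative choices, not part of the paper) enumerates every coalition $S$ together with every deviation that fits the scaled-down budget $B \cdot |S|/n$:

```python
from itertools import combinations

def is_core(outcome, utils, B, n, m):
    """Check the (0,0) endowment-based core condition by enumerating
    all coalitions S and all deviations within the scaled-down budget."""
    def u(i, bundle):
        return sum(utils[i][j] for j in bundle)
    agents, elements = range(n), range(m)
    for s in range(1, n + 1):
        budget = int(B * s / n)  # coalition's scaled-down endowment
        for S in combinations(agents, s):
            for k in range(budget + 1):
                for c_prime in combinations(elements, k):
                    gains = [u(i, c_prime) - u(i, outcome) for i in S]
                    if all(g >= 0 for g in gains) and any(g > 0 for g in gains):
                        return False  # S blocks outcome via c_prime
    return True

# Illustrative instance (our own): 4 agents, 4 unit-size elements, budget 2.
# Agents 0,1 approve elements {0,1}; agents 2,3 approve elements {2,3}.
utils = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
print(is_core((0, 2), utils, B=2, n=4, m=4))  # True: both halves represented
print(is_core((0, 1), utils, B=2, n=4, m=4))  # False: agents 2,3 can deviate
```

In this toy instance the outcome $\{0,2\}$ represents both halves of the electorate and passes the check, while $\{0,1\}$ is blocked by the coalition $\{2,3\}$, whose scaled endowment $2 \cdot 2/4 = 1$ suffices to buy an element both members approve.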
More interestingly, it is also an open question whether an integral core outcome always exists for {\sc Approval Voting}. It is easy to show that an integral core outcome does not always exist in a slightly more general setting of participatory budgeting, in which non-binary utilities and different sized elements are allowed. Consider an example with three elements $\{a,b,c\}$ of size $2$ each, a budget of $B = 3$, and three agents with {\em cyclic} preferences over the elements as follows. \begin{center} \begin{tabular}{c | c c c} & $a$ & $b$ & $c$\\ \hline $u_1$ & 1 & 0.5 & 0\\ $u_2$ & 0 & 1 & 0.5\\ $u_3$ & 0.5 & 0 & 1 \end{tabular} \end{center} An integral outcome $\bc$ can only choose a single element. Without loss of generality, suppose $\bc = \set{a}$. Then, the set of agents $S = \set{2,3}$ and outcome ${\mathbf c'} = \set{c}$ show a violation of the core. We now show the existence of an approximate core solution. We begin with the {\em fractional core} outcome $\vec{x}$ that can be computed using fixed point methods~\cite{lindahlCore,Fain2016}. We use dependent rounding~\cite{GandhiKS01} to round $\vec{x}$ to $\vec{X}$ so that (i) $x_j = \mathbf{E}[X_j]$ for each element $j$; (ii) the constraint $\sum_j X_j \le B$ is preserved; and (iii) $\{X_j\}$ are negatively correlated. Since we do not know if the fractional core outcome can be computed in polynomial time, this algorithm is not necessarily polynomial time, but it yields the following approximation result. \begin{theorem} \label{thm:main4} For $\delta \in (0,1]$, there is a $(\delta,\alpha)$-core for {\sc Approval Voting}, where $ \alpha = O\left( \frac{1}{\delta^4} \log \frac{B}{\delta} \right)$. \end{theorem} \begin{proof} We only sketch the proof of the upper bound, since it is similar to the argument in Section~\ref{sec:light}. Let $\gamma = \frac{\delta}{5}$. Let $U^*_i$ denote the utility agent $i$ receives in the fractional core outcome. We have $U^*_i \in [0,B]$. 
Let $\hat{U}_i$ be the random variable denoting the utility agent $i$ obtains in the rounded allocation. Let $L = \frac{2}{\gamma^4} \log \frac{4 B}{\gamma}$. First, if $U^*_i \le L$, then $\hat{U}_i \ge U^*_i - L$ trivially. Otherwise, $U^*_i \ge L$, and using Lemma~\ref{lem:main}, we have: $$ \Pr\left[ \hat{U}_i < (1- 2 \gamma) U^*_i \right] \le e^{- \frac{\gamma^3}{4} U^*_i} \le e^{- \frac{\gamma^3}{4} L} \le \frac{1}{2 B}.$$ Let $F$ denote the subset of agents with $\hat{U}_i < \min\left((1-2 \gamma) U^*_i, U_i^* - L \right)$. By Markov's inequality: $$ \Pr\left[|F| \ge \frac{n}{B} \right] \le \frac{1}{2}.$$ Let $W = N \setminus F$. Suppose a set $S$ of agents deviates. We consider two cases: \paragraph{Case 1.} Suppose $|W \cap S| \ge (1-\gamma) |S|$. Consider the coalition $W \cap S$ with scaled budget $P = \frac{|W \cap S|}{n} B \ge (1-\gamma) \frac{|S|}{n} B \ge (1-\delta) \frac{|S|}{n} B$, and any allocation $\vec{h} \in \O(P)$. By the fractional core condition, there exists $i \in W \cap S$ with $U^*_i \ge U_i(\vec{h})$. Since $i \in W$, we have $$\hat{U}_i \ge \min\left((1-2 \gamma) U^*_i, U_i^* - L \right).$$ This implies $ U_i(\vec{h}) \le \hat{U}_i (1 + 5 \gamma) + L$. \paragraph{Case 2.} Otherwise, $|S \setminus W| \ge \gamma |S|$. Then $$|S| \le \frac{1}{\gamma} |S \cap F| \le \frac{1}{\gamma} |F| \le \frac{1}{\gamma} \frac{ n}{B}.$$ Thus, if $S$ deviates, its scaled-down budget is at most $1/\gamma$. Using this budget, an agent in $S$ can derive utility at most $1/\gamma < \alpha$. Since we give $\alpha$ utility for free to each agent in $S$ under our additive approximation, the approximate core condition is satisfied. \end{proof} The above proof generalizes to arbitrary packing constraints $A \vec{x} \le \vec{b}$. In this case, let $\Delta = \max_k \max_{j : a_{kj} > 0} \frac{b_k}{a_{kj}}$. For $P \le 1$, let $\O(P)$ denote the set of outcomes satisfying $A \vec{x} \le P \vec{b}$.
Then, for $\delta \ge 0$ and $\alpha \ge 0$, we say that an outcome $\bc$ is a $(\delta, \alpha)$-core outcome if for any $S \subseteq N$ and outcome ${\mathbf c'} \in \O(\frac{|S| (1 - \delta)}{n})$, it is not the case that $u_i({\mathbf c'}) \ge (1+\delta) \cdot u_i(\bc) + \alpha$ for all $i \in S$, with at least one inequality strict. Generalizing the above proof, it is easy to show the following theorem. \begin{theorem} For $\delta \in (0,1]$, there is a $(\delta,\alpha)$-core outcome for general packing problems, where $ \alpha = O\left( \frac{1}{\delta^4} \log \frac{\Delta}{\delta} \right)$. \end{theorem} \subsection{Proof of Theorem~\ref{thm:fractional}} \label{app:fractional} The fractional outcome maximizing the Nash welfare objective is the solution of the following program. For simplicity of presentation, we absorb the constraint that $w_j \le 1$ for each $j \in W$ into the packing constraints. \begin{equation} \label{prog:2} \mbox{Maximize } \ \sum_{i \in N} \ln U_i \end{equation} \[ \begin{array}{rcll} \sum_{j=1}^{m} a_{kj} w_j & \leq & b_k & \forall k \in [K] \\ U_i & = & \sum_{j =1}^m w_j u_{ij} & \forall i \in N \\ w_j & \ge & 0 & \forall j \in W \end{array}\] \renewcommand{\P}{\mathcal{U}} Denote a vector of utilities by $\vec{U} = \langle U_1,U_2, \ldots, U_n \rangle$, and the polytope of feasible utility vectors by $\P$. Then, the fractional MNW outcome is obtained by the following maximization. \begin{equation} \label{prog:3} \max_{\vec{U} \in \P} \ \sum_{i \in N} \ln U_i \end{equation} We want to compute a fractional $(\delta,\epsilon)$-approximate core outcome in time polynomial in $n, V_{\max},$ and $\log \frac{1}{\delta\epsilon}$. Assume that $\P$ is a convex polytope of feasible utility vectors. For any $\delta \geq 0, \epsilon > 0$, let $\epsilon' = \epsilon/(1+\delta)$. Define the following objective function. Note that in the absence of the $\epsilon'$ term, it would mimic the derivative of the Nash social welfare objective from Program~\eqref{prog:3}.
\begin{equation} \label{prog:4} \min_{\vec{U} \in \P} Q(\vec{U}), \text{ where } Q(\vec{U}) = \max_{\vec{U'} \in \P} \ \sum_{i \in N} \frac{U'_i + \epsilon'}{U_i + \epsilon'}. \end{equation} Clearly, $Q(\vec{U}) \ge n$ for every $\vec{U}$. Thus, the objective value in Program~\eqref{prog:4} is at least $n$. In Section~\ref{sec:fractional}, we presented an argument showing that the fractional MNW outcome is in the core. A similar argument using the first order optimality condition shows that if $\vec{U^*} \in \argmax_{\vec{U} \in \P} \sum_{i \in N} \ln (U_i+\epsilon')$, then for every $\vec{U} \in \P$, $$ \sum_{i \in N} \frac{U_i + \epsilon'}{U^*_i + \epsilon'} \leq n. $$ This implies the optimum of Program~\eqref{prog:4} is achieved at the fractional outcome maximizing the smooth Nash welfare objective $\sum_{i \in N} \ln (U_i+\epsilon')$, and this optimal value is exactly $n$. Next, we turn to efficiently approximating Program~\eqref{prog:4}, and show that if $Q(\vec{U}) \le n(1+\delta)$, then $\vec{U}$ is a $(\delta,\epsilon)$-core outcome. We want to use the Ellipsoid algorithm to approximately minimize the objective function $Q(\vec{U})$ over $\vec{U} \in \P$ in polynomial time. For this, all we need is that $Q$ is a convex function, its subgradient is efficiently computable, the range of $Q$ and the diameter of $\P$ are at most exponentially bounded, and polytope $\P$ is efficiently separable~\cite{Bubeck}. First, we claim that $Q(\vec{U})$ is a convex function of $\vec{U}$. To see this, note that for any fixed $\vec{U'}$, the ratio $\frac{U'_i + \epsilon'}{U_i + \epsilon'}$ is convex in $U_i$ over $U_i \ge 0$. Since sums and maxima of convex functions are convex, we conclude that $Q(\vec{U})$ is also convex. Second, the subgradient of $Q(\vec{U})$ is efficiently computable for every $\vec{U} \in \P$. First, we find the $\vec{U'} \in \P$ that maximizes $\sum_{i\in N} \frac{U'_i + \epsilon'}{U_i + \epsilon'} $ by solving a linear program.
Then, we fix $\vec{U'}$ and take the gradient of $\frac{U'_i + \epsilon'}{U_i + \epsilon'}$ with respect to $U_i$ to obtain a subgradient of $Q(\vec{U})$. Third, note that $U_i \in [0,V_{\max}]$ for each $i$. Hence, $Q(\vec{U}) \le \frac{n \cdot (V_{\max}+\epsilon')}{\epsilon'}$, which is exponentially bounded in the input size. It is easy to see that the same holds for the diameter of the polytope $\P$. Finally, polytope $\P$ is efficiently separable because it is a set of polynomially many linear inequalities. Hence, we can efficiently obtain a solution $\vec{\hat{U}} \in \P$ that satisfies $$ \max_{\vec{U'} \in \P} \sum_{i \in N} \frac{U'_i + \epsilon'}{\hat{U}_i + \epsilon'} \leq n + \delta \le n (1+ \delta). $$ It remains to show that $\vec{\hat{U}}$ must be a $(\delta,\epsilon)$-core outcome. Suppose for contradiction that it is not. Then, there exists a subset $S$ of agents and an outcome $\vec{U'}$ such that $$ (1+\delta) \cdot \hat{U}_i + \epsilon \le \frac{|S|}{n} \cdot U'_i $$ for all $i \in S$, and at least one inequality is strict. Rearranging the terms and summing over $i \in S$, we obtain $$ \sum_{i \in S} \frac{U'_i}{(1+\delta) \cdot \hat{U}_i + \epsilon} > |S| \cdot \frac{n}{|S|} = n. $$ However, we also have $$ \sum_{i \in S} \frac{U'_i}{(1+\delta) \cdot \hat{U}_i + \epsilon} \le \sum_{i \in S} \frac{U'_i+\epsilon'}{(1+\delta) \cdot (\hat{U}_i + \epsilon')} = \frac{1}{1+\delta} \sum_{i \in S} \frac{U'_i+\epsilon'}{\hat{U}_i + \epsilon'} \le n, $$ where the last inequality is due to approximate optimality of $\vec{\hat{U}}$. This is a contradiction. Hence, $\vec{\hat{U}}$ is a $(\delta,\epsilon)$-core outcome. \begin{comment} \medskip \noindent {\bf Second Approach.} The above approach needs solving a convex program to compute the subgradient. We now present a simpler approach when $\P$ is a set of packing constraints.
We start with the following Nash welfare program: \begin{equation} \label{prog:5} \mbox{Maximize } \ \sum_{i \in N} \ln \left( \sum_{j =1}^m w_j u_{ij} + \epsilon \right) \end{equation} \[ \begin{array}{rcll} \sum_{j=1}^{m} a_{kj} w_j & \leq & 1 & \forall k \in [K] \\ w_j & \ge & 0 & \forall j \in W \end{array}\] The KKT conditions applied to this program imply there are non-negative Lagrange multipliers $\lambda_1, \lambda_2, \ldots, \lambda_K$ and an optimal solution $\vec{w}$ such that: $$ \sum_i \frac{u_{ij}}{\sum_{r =1}^m w_r u_{ir} + \epsilon} \le \sum_k \lambda_k a_{kj}$$ with the inequality being tight when $w_j > 0$. Furthermore, $$ \lambda_k > 0 \qquad \Rightarrow \qquad \sum_{j=1}^{m} a_{kj} w_j = 1$$ Multiplying the first set of equalities by $w_j$ and summing up, we obtain: $$ \sum_k \lambda_k \le n$$ Consider now the following convex program: \begin{equation} \label{prog:6} \mbox{Minimize } \max_j \left( \ \sum_{i \in N} \frac{u_{ij}}{\sum_{r =1}^m w_r u_{ir} + \epsilon} - \sum_k \lambda_k a_{kj} \right) \end{equation} \[ \begin{array}{rcll} \sum_{j=1}^{m} a_{kj} w_j & \leq & 1 & \forall k \in [K] \\ \sum_k \lambda_k & \le & n & \\ w_j & \ge & 0 & \forall j \in W \\ \lambda_k & \ge & 0 & \forall k \in [K] \end{array}\] The above discussion implies that there exists a setting of $\vec{w^*}$ and $\vec{\lambda^*}$ that makes the objective of this new program at most $0$. Since the objective is convex and bounded, while the constraints are linear, there is a polynomial time algorithm to compute a $\delta/m$ approximation to this objective for any $\delta > 0$. 
This solution $\vec{w}$ and $\vec{\lambda}$ satisfies for all $j$ the condition: $$ \sum_{i \in N} \frac{u_{ij}}{\sum_{r =1}^m w_r u_{ir} + \epsilon} \le \sum_k \lambda_k a_{kj} + \frac{\delta}{m}$$ For any feasible $\vec{w'}$, multiplying the above inequalities by $w'_j$ and summing, this implies: $$ \sum_{i \in N} \frac{\sum_{r =1}^m w'_r u_{ir}}{\sum_{r =1}^m w_r u_{ir} + \epsilon} \le \sum_k \lambda_k + \frac{\delta}{m} \sum_j w'_j \le n (1+\delta)$$ where we note that $\sum_k \lambda_k \le n$ and we have assumed $w'_j \le 1$ for all $j$. Using the same argument as before, this implies $\vec{w}$ is a $(\delta,\epsilon)$-core. \renewcommand{\P}{\mathcal{P}} \iffalse We note that our algorithm uses the fractional Max Nash Welfare (MNW) solution $\vec{x}$. For the allocation of private goods, given rational inputs this solution is guaranteed to use rational probabilities, and can be computed exactly in polynomial time through to the Gale-Eisenberg program~\cite{EG59,eisenbergGaleMarkets}. However, for the allocation of public goods, it can be shown that the fractional MNW solution can have irrational probabilities despite rational inputs~\cite{AACK+17}, preventing an exact polynomial time algorithm. For our approximation results, a fractional solution that approximately preserves the utility to each agent would suffice. However, to the best of our knowledge, interior point methods can only find a solution that approximates {\em the objective value} in polynomial time. We can still circumvent this issue by observing that we only need Equation~\eqref{eqn:mnw-need} to hold approximately. Formally, for a fractional outcome $\vec{c}$, define $Q(\vec{c}) = \max_{\vec{d}} \sum_{i \in N} u_i(\vec{d})/u_i(\vec{c})$. The fractional MNW solution $\vec{x}$ satisfies $Q(\vec{x}) \le n$. 
If we can find a solution $\hat{\vec{x}}$ for which $Q(\hat{\vec{x}}) \le n+\epsilon$, it is easy to check that our multiplicative and additive approximations increase only by a factor of $1+\epsilon/n$, preserving our asymptotic guarantees. Finally, we use the fact that $Q(\vec{c})$ is concave in $\vec{c}$~\cite{Fain2016}, and can be evaluated in polynomial time as it amounts to solving a linear program. Thus, using interior point methods to optimize $Q$ provides the desired solution $\hat{\vec{x}}$ in polynomial time. \fi \end{comment} \section{Introduction} \label{sec:intro} In fair resource allocation, most work considers \textit{private goods}; each good must be assigned to a particular agent (and no other). However, not all goods are private. \textit{Public goods} are those which can be enjoyed by multiple agents simultaneously, like a public road. Allocation of public goods generalizes the problem of allocation of private goods, and, as we will see, can provide new difficulties from both a normative and an algorithmic perspective. Consider an example to highlight what a public resource allocation problem might look like, and why fairness might be a concern. Suppose that the next time you vote, you see that there are four referendums for your consideration on the ballot, all of which concern the allocation of various public goods in your city: A = a new school, B = enlarging the public library, C = renovating the community college, and D = improving a museum. In 2016, residents of Durham, North Carolina faced precisely these options~\cite{DurhamElection}. Suppose the government has resources to fund only two of the four projects, and the (hypothetical) results were as follows: a little more than half of the population voted for $(A, B)$, a little less than half voted for $(C,D)$, and every other combination received a small number of votes. Which projects should be funded? 
If we na\"{\i}vely tally the votes, we would fund A and B, and ignore the preferences of a very large minority. In contrast, funding A and C seems like a reasonable compromise. Of course, it is impossible to satisfy \textit{all} voters, but given a wide enough range of possible outcomes, perhaps we can find one that fairly reflects the preferences of large subsets of the population. This idea is not captured by fairness axioms like proportionality or their approximations~\cite{FPDM}, which view fairness from the perspectives of {\em individual} agents. Indeed, in the aforementioned example, {\em every} allocation gives zero utility to {\em some} agent, and would be deemed equally good according to such fairness criteria. \subsection{Public Goods Model} We consider a fairly broad model for public goods allocation that generalizes much of previous work~\cite{LMMS04,FPDM,Fain2016,Brill,FairKnapsack,envyFreeUpTo1,envyFreeUpToAny}. There is a set of voters (or agents) $N = [n]$. Public goods are modeled as elements of a ground set $W$. We denote $m = |W|$. An {\em outcome} $\bc$ is a subset of $W$. Let $\mathcal{F} \subseteq 2^W$ denote the set of feasible outcomes. The utility of agent $i$ for element $j \in W$ is denoted $u_{ij} \in \mathbb{R}_{\ge 0}$. We assume that agents have additive utilities, i.e., the utility of agent $i$ under outcome $\bc \in \mathcal{F}$ is $u_i(\bc) = \sum_{j \in \bc} u_{ij}$. Since we are interested in scale-invariant guarantees, we assume without loss of generality that $\max_{j \in W} u_{ij} = 1$ for each agent $i$, so that $u_{ij} \in [0,1]$ for all $i,j$. Crucially, this does not restrict the utility of an agent for an outcome to be $1$: $u_i(\bc)$ can be as large as $m$. Specifically, let $V_i = \max_{\bc \in \mathcal{F}} u_i(\bc)$, and $V_{\max} = \max_{i\in N} V_i$. Our results differ by the feasibility constraints imposed on the outcome. 
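For concreteness, the model just described can be sketched in a few lines of code. The agent names, numbers, and feasible set below are hypothetical (loosely echoing the four-project ballot above); the sketch shows the additive utilities, the $\max_{j} u_{ij} = 1$ normalization, and the quantities $V_i$ and $V_{\max}$.

```python
# Minimal sketch of the utility model: additive utilities over a ground
# set, normalized so max_j u_ij = 1, with V_i the best feasible utility
# of agent i. All names and numbers are hypothetical.
raw = {            # raw[i][j]: agent i's (unnormalized) value for j
    "alice": {"A": 4.0, "B": 2.0, "C": 0.0, "D": 0.0},
    "bob":   {"A": 0.0, "B": 0.0, "C": 3.0, "D": 3.0},
}
feasible = [{"A", "B"}, {"A", "C"}, {"C", "D"}]   # F: fund two projects

# scale-invariant normalization: divide each agent's row by its maximum
u = {i: {j: v / max(r.values()) for j, v in r.items()}
     for i, r in raw.items()}

def utility(i, c):
    """Additive utility of agent i for outcome c (a set of elements)."""
    return sum(u[i][j] for j in c)

V = {i: max(utility(i, c) for c in feasible) for i in u}   # V_i
V_max = max(V.values())
print(V, V_max)
```

Note that, as in the text, normalization bounds each $u_{ij}$ by $1$ but not $u_i(\bc)$: here bob's best outcome is worth $2$.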
We consider three types of constraints, special cases of which have been studied previously in the literature. \paragraph{Matroid Constraints.} In this setting, we are given a matroid $\mathcal{M}$ over the ground set $W$, and the feasibility constraint is that the chosen elements must form a basis of $\mathcal{M}$ (see~\cite{Kung} for a formal introduction to matroids). This generalizes the {\em public decision making} setting introduced by~\cite{FPDM}. In this setting, there is a set of issues $T$, and each issue $t \in T$ has an associated set of alternatives $A^t = \{a_1^t, \hdots, a_{k_t}^t\}$, exactly one of which must be chosen. Agent $i$ has utility $u_i^t(a_j^t)$ if alternative $a_j^t$ is chosen for issue $t$, and utilities are additive across issues. An outcome $\bc$ chooses one alternative for every issue. It is easy to see that if the ground set is $\cup_t A^t$, the feasibility constraints correspond to a partition matroid. We note that public decision making in turn generalizes the classical setting of {\em private goods allocation}~\cite{LMMS04,envyFreeUpTo1,envyFreeUpToAny}, in which private goods must be divided among agents with additive utilities, with each good allocated to exactly one agent. Matroid constraints also capture multi-winner elections in the voting literature (see, e.g.,~\cite{Brill}), in which voters have additive utilities over candidates, and a committee of at most $k$ candidates must be chosen. This is captured by a uniform matroid over the set of candidates. \paragraph{Matching Constraints.} In this setting, the elements are edges of an undirected graph $G(V,E)$, and the feasibility constraint is that the subset of edges chosen must form a {\em matching}. Matching constraints in a bipartite graph can be seen as the intersection of two matroid constraints. Matching constraints are a special case of the more general packing constraints we consider below. 
\paragraph{Packing Constraints.} In this setting, we impose a set of packing constraints $A \vec{x} \le \vec{b}$, where $x_j \in \{0,1\}$ is the indicator denoting whether element $j$ is chosen in the outcome. Suppose $A$ is a $K \times m$ matrix, so that there are $K$ packing constraints. By scaling, we can assume $a_{kj} \in [0,1]$ for all $k,j$. Note that even for one agent, packing constraints encode the maximum independent set problem. Thus, to make the problem tractable, we assume $\vec{b}$ is sufficiently large; in particular, $b_k = \omega\left( \log K \right)$ for all $k \in \{1,2,\ldots,K\}$. This is in contrast to matroid and matching constraints, for which single-agent problems are polynomial time solvable. A classic measure of how easy it is to satisfy the packing constraints is the {\em width} $\rho$~\cite{PST}: \begin{equation} \label{eq:width} \rho = \max_{k \in [K]} \frac{\sum_{j \in [m]} a_{kj}}{b_k}. \end{equation} Packing constraints capture the general {\sc Knapsack} setting, in which there is a set of $m$ items, each item $j$ has an associated size $s_j$, and a set of items of total size at most $B$ must be selected. This setting is motivated by participatory budgeting applications~\cite{PBP,knapsack1,knapsack2,Fain2016,GargKGMM17,FairKnapsack,BNPS17}, in which the items are public projects, and the sizes represent the costs of the projects. {\sc Knapsack} uses a single packing constraint. Multiple packing constraints can arise if the projects consume several types of resources, and there is a budget constraint for each resource type. For example, consider a statewide participatory budgeting scenario where each county has a budget that can only be spent on projects affecting that county, the state has some budget that can be spent in any county, and projects might affect multiple counties. In such settings, it is natural to assume a small width, i.e., that the budget for each resource is such that a large fraction (but not all) of the projects can be funded. 
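As a quick illustration of Equation~\eqref{eq:width}, the sketch below computes $\rho$ for a hypothetical two-constraint instance (all matrix entries are made up, with $a_{kj} \in [0,1]$ as assumed after scaling).

```python
# Width rho of a packing system A x <= b: the largest row sum of A
# relative to its budget b_k. Instance values are hypothetical.
A = [[1.0, 0.5, 0.25, 0.25],   # e.g., costs against two budget types
     [0.0, 1.0, 1.00, 0.00]]
b = [2.0, 1.0]

def width(A, b):
    return max(sum(row) / bk for row, bk in zip(A, b))

print(width(A, b))  # max(2.0/2.0, 2.0/1.0) = 2.0
```

A small width says each budget could pay for a constant fraction of all projects at once, which is the easy regime the text describes.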
We note that the aforementioned multi-winner election problem is a special case of the {\sc Knapsack} problem with unit sizes. \subsection{Prior Work: Fairness Properties} We define fairness desiderata for the public goods setting by generalizing appropriate desiderata from the private goods setting such as Pareto optimality, which is a weak notion of efficiency, and proportionality, which is a per-agent fair share guarantee.\footnote{Those familiar with the literature on fair division of private goods will note the conspicuous absence of the \textit{envy freeness} property: that no agent should (strongly) prefer the allocation of another agent. Because we are considering public goods, envy freeness is only vacuously defined: the outcome in our setting is common to all agents.} \begin{definition} An outcome $\bc$ satisfies \textbf{Pareto optimality} if there is no other outcome ${\mathbf c'}$ such that $u_i({\mathbf c'}) \geq u_i(\bc)$ for all agents $i \in N$, and at least one inequality is strict. \end{definition} Recall that $V_i$ is the maximum possible utility agent $i$ can derive from a feasible outcome. \begin{definition} The \textbf{proportional share} of an agent $i \in N$ is $Prop_i := \frac{V_i}{n}$. For $\beta \in (0,1]$, we say that an outcome $\bc$ satisfies $\beta$-proportionality if $u_i(\bc) \geq \beta \cdot Prop_i$ for all agents $i \in N$. If $\beta=1$, we simply say that $\bc$ satisfies proportionality. \end{definition} The difficulty in our setting stems from requiring integral outcomes, and not allowing randomization. In the absence of randomization, it is reasonably straightforward to show that we cannot guarantee $\beta$-proportionality for any $\beta \in (0,1]$. Consider a problem instance with two agents and two feasible outcomes, where each outcome gives a positive utility to a unique agent. In any feasible outcome, one agent has zero utility, which violates $\beta$-proportionality for every $\beta > 0$. 
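The two-agent impossibility just described can be written out directly. In the minimal sketch below, the two feasible outcomes are abstracted as labels, and each gives positive utility to exactly one agent; the numbers are the ones from the text.

```python
# The two-agent instance from the text: every feasible outcome gives
# one agent zero utility, so beta-proportionality fails for all beta > 0.
n = 2
u = [{"X": 1.0, "Y": 0.0},    # agent 0 only values outcome X
     {"X": 0.0, "Y": 1.0}]    # agent 1 only values outcome Y
feasible = ["X", "Y"]

prop = [max(u[i][c] for c in feasible) / n for i in range(n)]  # V_i / n

def satisfies_beta_prop(c, beta):
    return all(u[i][c] >= beta * prop[i] for i in range(n))

beta = 0.01  # even a tiny beta fails
print([satisfies_beta_prop(c, beta) for c in feasible])  # [False, False]
```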
To address this issue, \cite{FPDM} introduced the novel relaxation of proportionality up to one issue in their public decision making framework, inspired by a similar relaxation called envy-freeness up to one good in the private goods setting~\cite{LMMS04,envyFreeUpTo1}. They say that an outcome $\bc$ of a public decision making problem satisfies \emph{proportionality up to one issue} if for all agents $i \in N$, there exists an outcome ${\mathbf c'}$ that differs from $\bc$ only on a single issue and satisfies $u_i({\mathbf c'}) \geq Prop_i$. Proportionality up to one issue is a reasonable fairness guarantee only when the number of issues is larger than the number of agents; otherwise, it is vacuous and is satisfied by all outcomes. Thus, it is perfectly reasonable for some applications (e.g., three friends choosing a movie list to watch together over the course of a year), but not for others (e.g., when thousands of residents choose a handful of public projects to finance). In fact, it may produce an outcome that is construed as unfair if it does not reflect the wishes of large groups of voters. In this work, we therefore address the following question posed by~\cite{FPDM}: \begin{quote}{\em Is there a stronger fairness notion than proportionality in the public decision making framework...? Although such a notion would not be satisfiable by deterministic mechanisms, it may be satisfied by randomized mechanisms, or it could have novel relaxations that may be of independent interest.} \end{quote} \subsection{Summary of Contributions} Our primary contributions are twofold. \begin{itemize} \item We define a fairness notion for public goods allocation that is stronger than proportionality, ensures fair representation of groups of agents, and in particular, provides a meaningful fairness guarantee even when there are fewer goods than agents. 
\item We provide polynomial time algorithms for computing integer allocations that approximately satisfy this fairness guarantee for a variety of settings generalizing the public decision making framework and participatory budgeting. \end{itemize} \subsection{Core and Approximate Core Outcomes} Below, we define the notion of \emph{core outcomes}, which has been extensively studied (in similar forms) as a notion of stability in economics~\cite{lindahlCore,scarfCore,coreConjectureCounter} and computer science~\cite{ROBUS,Fain2016} in the context of {\em randomized} or {\em fractional} allocations. Our main contribution is to study it in the context of {\em integer} allocations. \begin{definition} \label{def:core} Given an outcome $\bc$, we say that a set of agents $S \subseteq N$ forms a blocking coalition if there exists an outcome ${\mathbf c'}$ such that $(|S|/n) \cdot u_i({\mathbf c'}) \geq u_i(\bc)$ for all $i \in S$ and at least one inequality is strict. We say that an outcome $\bc$ is a {\bf core outcome} if it admits no blocking coalitions. \end{definition} Note that non-existence of blocking coalitions of size $1$ is equivalent to proportionality, and non-existence of blocking coalitions of size $n$ is equivalent to Pareto optimality. Hence, a core outcome is both proportional and Pareto optimal. However, the core satisfies a stronger property of being, in a sense, Pareto optimal for coalitions of {\em any size}, provided we scale utilities based on the size of the coalition. Another way of thinking about the core is to view it as a fairness property that enforces a proportionality-like guarantee for coalitions: \textit{e.g.}, if half of all agents have identical preferences, they should be able to get at least half of their maximum possible utility. It is important to note that the core provides a guarantee for every possible coalition. 
Hence, in satisfying the guarantee for a coalition $S$, a solution cannot simply make a single member $i \in S$ happy and ignore the rest as this would likely violate the guarantee for the coalition $S \setminus \{i\}$. \paragraph{Approximate Core.} Since a proportional outcome is not guaranteed to exist (even allowing for multiplicative approximations), the same is true for the core. However, an additive approximation to the core still provides a meaningful guarantee, even when there are fewer elements than agents because it provides a non-trivial guarantee to large coalitions of like-minded agents. \begin{definition} \label{def:approx} For $\delta,\alpha \ge 0$, an outcome $\bc$ is a {\bf $\boldsymbol{(\delta, \alpha)}$-core outcome} if there exists no set of agents $S \subseteq N$ and outcome ${\mathbf c'}$ such that $$ \frac{|S|}{n}\cdot u_i({\mathbf c'}) \ge (1+\delta) \cdot u_i(\bc) + \alpha $$ for all $i \in S$, and at least one inequality is strict. \end{definition} A $(0,0)$-core outcome is simply a core outcome. A $(\delta,0)$-core outcome satisfies $\delta$-proportionality. Similarly, a $(0,1)$-core outcome $\bc$ satisfies the following relaxation of proportionality that is slightly weaker than proportionality up to one issue: for every agent $i \in N$, $u_i(\bc)+1 \ge Prop_i$. We note that this definition, and by extension, our algorithms satisfy scale invariance, i.e., they are invariant to scaling the utilities of any individual agent. Because we normalize utilities of the agents, the true additive guarantee is $\alpha$ times the maximum utility an agent can derive from a single element. Since an outcome can have many elements, an approximation with small $\alpha$ remains meaningful. The advantage of an approximate core outcome is that it fairly reflects the will of a like-minded subpopulation relative to its size. 
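On small instances, Definition~\ref{def:approx} can be checked by brute force: for a fixed deviating outcome ${\mathbf c'}$ and coalition size $s$, whether agent $i$ can join a blocking coalition depends only on whether $(s/n) \cdot u_i({\mathbf c'}) \ge (1+\delta) \cdot u_i(\bc) + \alpha$. The sketch below uses this observation, so it enumerates only outcomes and coalition sizes, never subsets of agents; the toy utilities at the bottom are hypothetical.

```python
# Brute-force check for a blocking coalition under the (delta, alpha)-
# core definition. Outcomes are sets of elements; u[i][j] are additive
# element utilities.
def is_blocking_possible(u, c, feasible, delta, alpha):
    n = len(u)
    def val(i, out):
        return sum(u[i][j] for j in out)
    for cp in feasible:                 # candidate deviation c'
        for s in range(1, n + 1):       # candidate coalition size |S|
            lhs = [s / n * val(i, cp) for i in range(n)]
            rhs = [(1 + delta) * val(i, c) + alpha for i in range(n)]
            weak = [i for i in range(n) if lhs[i] >= rhs[i]]
            strict = [i for i in weak if lhs[i] > rhs[i]]
            # a size-s blocking coalition exists iff s weakly-satisfied
            # agents are available and at least one can be strict
            if len(weak) >= s and strict:
                return True
    return False

# toy instance: 2 agents, two singleton outcomes (hypothetical numbers)
u = [{"A": 1.0, "B": 0.0}, {"A": 0.0, "B": 1.0}]
feasible = [{"A"}, {"B"}]
print(is_blocking_possible(u, {"A"}, feasible, delta=0.0, alpha=0.0))  # True
print(is_blocking_possible(u, {"A"}, feasible, delta=0.0, alpha=1.0))  # False
```

In the toy instance no exact core outcome exists (agent $1$ alone blocks $\{A\}$ via $\{B\}$), yet $\{A\}$ is a $(0,1)$-core outcome, matching the discussion of the additive relaxation above.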
An outcome satisfying approximate proportionality only looks at what {\em individual} agents prefer, and may or may not respect the collective preferences of sub-populations. We present such an instance in Example~\ref{eg:prop1} (Section~\ref{sec:fnw}), in effect showing that an approximate core outcome is arguably more fair. \iffalse \bdf{I added this example to give some intuition about the approximate core. I'm not beholden to it if we feel that returning to a very simple example is too much fluff.} Recall the opening example: There are four public goods A, B, C, and D. Suppose we interpret the votes as follows: slightly less than half of agents get utility 1 for A and 1 for B, slightly less than half get utility 1 for C and 1 for D, and there are some small number of agents who get utility from the other combinations of projects. Then if we choose the outcome \{A, B\}, consider the agents who like C and D. They are getting utility 0, and there exists a feasible outcome \{C, D\} so that their utility scaled down by slightly less than half is slightly less than 1. So this is roughly a $(0,1)$-core solution. But it is not too hard to see that the outcome \{A, C\}, which gives utility 1 to all of the agents in the two large subsets, is very close to an exact core solution. This corresponds with the intuition we had at the outset that \{A, C\} is more fair to the large subsets of agents. Note, however, that if we only consider additive approximations of proportionality, \{A, B\} and \{A, C\} are indistinguishable, assuming there is at least one agent who gets utility only from \{B, D\}. \fi In our results, we will assume $\delta < 1$ to be a small constant, and focus on making $\alpha$ as small as possible. In particular, we desire guarantees on $\alpha$ that exhibit sub-linear or no dependence on $n$, $m$, or any other parameters. Deriving such bounds is the main technical focus of our work. 
\subsection{Our Results} We present algorithms to find approximate core outcomes under matroid, matching, and general packing constraints. Our first result (Section~\ref{sec:mnw}) is the following: \begin{theorem} \label{thm:matroid} If feasible outcomes are constrained to be bases of a matroid, then a $(0,2)$-core outcome is guaranteed to exist, and for any $\epsilon > 0$, a $(0,2+\epsilon)$-core outcome can be computed in time polynomial in $n, m,$ and $\sfrac{1}{\epsilon}$. \end{theorem} In particular, for the public decision making framework, the private goods setting, and multi-winner elections (a.k.a. {\sc Knapsack} with unit sizes), there is an outcome whose guarantee for {\em every coalition} is close to the guarantee that~Conitzer et al. provide to individual agents~\cite{FPDM}. In Section~\ref{sec:matching}, we consider matching constraints. Our result now involves a tradeoff between the multiplicative and additive guarantees. \begin{theorem} \label{thm:matching} If feasible outcomes are constrained to be matchings in an undirected graph, then for constant $\delta \in (0,1]$, a $\left(\delta,8+\sfrac{6}{\delta}\right)$-core outcome can be computed in time polynomial in $n$ and $m$. \end{theorem} \medskip Our results in Section~\ref{sec:main} are for general packing constraints. Here, our guarantee depends on the width $\rho$ from Equation~(\ref{eq:width}), which captures the difficulty of satisfying the constraints. In particular, the guarantee improves if the constraints are easier to satisfy. This is the most technical result of the paper, and involves different techniques than those used in proving Theorems~\ref{thm:matroid} and~\ref{thm:matching}; we present an outline of the techniques in Section~\ref{sec:idea}. 
\begin{theorem} \label{thm:packing} For constant $\delta \in (0,1)$, given $K$ packing constraints $A \vec{x} \le \vec{b}$ with width $\rho$ and $b_k = \omega \left(\frac{\log K}{\delta^2}\right)$ for all $k \in [K]$, there exists a polynomial time computable $(\delta,\alpha)$-core solution, where $$ \alpha = O\left( \frac{1}{\delta^4} \cdot \log\left(\frac{\min(V_{\max},n,\rho) \cdot \log^* V_{\max}}{\delta} \right)\right). $$ \end{theorem} Here, $\log^*$ is the iterated logarithm, which is the number of times the logarithm function must be iteratively applied before the result becomes less than or equal to $1$. Recall that $V_{\max}$ is the maximum utility an agent can have for an outcome (thus $V_{\max} \le m$); our additive error bound is a vanishing fraction of this quantity. Our bound is also small if the number of agents $n$ is small. Finally, the guarantee improves for small $\rho$, i.e., as the packing constraints become easier to satisfy. For instance, in participatory budgeting, if the total cost of all projects is only a constant times more than the budget, then our additive guarantee is close to a constant. Note that $V_{\max}$ (which is bounded by $m$), $n$, and $\rho$ are all unrelated quantities --- either could be large with the other two being small. In fact, in Section~\ref{sec:main}, we state the bound more generally in terms of what we call the {\em maximally proportionally fair value} $R$, which informally captures the (existential) difficulty of finding a proportionally fair allocation. The quantity $\min(V_{\max},n,\rho)$ stems from three different bounds on the value of $R$. In Example~\ref{eg:IS} (Appendix~\ref{sec:Examples}), we show that the lower bound on $\vec{b}$ in the above theorem is necessary: if $\vec{b} = O(1)$, then no non-trivial approximation to the core can be guaranteed, even when $\rho$ is a constant. 
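For reference, $\log^*$ is a tiny function to compute; the sketch below uses the natural logarithm, which is a convention here since changing the base shifts the value only by a small constant.

```python
# Iterated logarithm log*(x): how many times log must be applied
# before the value drops to <= 1. Grows extremely slowly.
import math

def log_star(x):
    count = 0
    while x > 1:
        x = math.log(x)
        count += 1
    return count

print([log_star(x) for x in (1, 2, 15, 10**6)])  # [0, 1, 2, 3]
```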
\medskip Finally, in Appendix~\ref{sec:prop}, we consider a different (and more classical) version of the core for general packing constraints, in which a deviating coalition gets a proportional share of resources rather than a proportional share of utility. We show that our techniques provide a similar approximation to this version of the core, although we do not provide an efficient algorithm in this model. \subsection{Related Work} \paragraph{Core for Public Goods.} The notion of core is borrowed from cooperative game theory and was first phrased in game theoretic terms by~\cite{scarfCore}. It has been extensively studied in public goods settings~\cite{lindahlCore,coreConjectureCounter,Fain2016}. Most literature so far has considered the core with {\em fractional allocations}. Our definition of core (Definition~\ref{def:core}) assumes the utility of a deviating coalition is scaled by the size of the coalition. For fractional allocations, one such core allocation coincides with the well-known notion of {\em proportional fairness}, the extension of the Nash bargaining solution~\cite{nashBargaining}. This solution maximizes the product of the utilities of the agents, and we present the folklore proof in Section~\ref{sec:fractional}. Our main focus is on finding {\em integer} allocations that approximate the core, and to the best of our knowledge, this has not been studied previously. A simpler property than the core is proportionality, which like the core, is impossible to satisfy to any multiplicative approximation using integral allocations. To address this problem, \cite{FPDM} defined proportionality up to one issue in the public decision making framework, inspired by related notions for private goods. This guarantee is satisfied by the integral outcome maximizing the {\em Nash welfare} objective, which is the {\em geometric} mean of the utilities to the agents. 
For public goods, this objective is not only {\sc NP-hard} to approximate to any multiplicative factor, but approximations to the objective also do not retain the individual fairness guarantees. We extend the notion of additive approximate proportionality to additive approximate core outcomes, which provide meaningful guarantees even when there are fewer goods than agents. Unlike for proportionality, we show in Section~\ref{sec:fnw} that the approach of computing the optimal integral solution to the Nash welfare objective fails to provide a reasonable approximation to the core. Therefore, for our results on matroid constraints (Theorem~\ref{thm:matroid}) and matching constraints (Theorem~\ref{thm:matching}), we slightly modify the integer Nash welfare objective and add a suitable constant term to the utility of each agent. We show that maximizing this smooth objective function achieves a good approximation to the core. However, maximizing this objective is still {\sc NP-hard}~\cite{FairKnapsack}, so we devise local search procedures that run in polynomial time and still give good approximations of the core. In effect, we make a novel connection between appropriate {\em local optima} of smooth Nash welfare objectives and the core. \paragraph{Fairness on Endowments.} Classically, the core is defined in terms of agent endowments, not scaled utilities. In more detail, in Definition~\ref{def:core}, we assumed that when a subset $S$ of agents deviates, they can choose any feasible outcome; however, their utility is reduced by a factor that depends on $|S|$. A different notion of core is based on endowments~\cite{scarfCore,lindahlCore} and has been considered in the context of participatory budgeting~\cite{Fain2016} and in {\em proportional representation} of voters in multi-winner elections with approval voting. In this notion, a deviating coalition gets a proportional share of resources rather than a proportional share of utility. 
For example, if the elements have different sizes, and we need to select a subset of them with total size at most $B$, then a deviating coalition $S$ would get to choose an outcome with total size at most $B|S|/n$ instead of $B$, but would not have its utility scaled down. This notion builds on the seminal work of Foley on the Lindahl equilibrium~\cite{lindahlCore}, from which it can be shown that such a core outcome always exists when fractional allocations are allowed. However, it is not known how to compute such a core outcome efficiently, and further, it is difficult to define such a notion of endowments in settings such as matroid or matching constraints. In the context of integer allocations with packing constraints, we extend our techniques to provide approximations to the notion of core with endowments in Appendix~\ref{sec:prop}, though this is not the main focus of our paper. The notion of core with endowments logically implies a number of fairness notions considered in multi-winner election literature, such as justified representation, extended justified representation~\cite{Brill}, and proportional justified representation~\cite{Sanchez}. Approval-based multi-winner elections are a special case of packing constraints, in which voters (agents) have binary utilities over a pool of candidates (elements), and we must select a set of at most $B$ candidates. The idea behind proportional representation is to define a notion of large cohesive groups of agents with similar preferences, and ensure that such coalitions are proportionally represented. The core on endowments represents a more general condition that holds for all coalitions of agents, not just those that are large and cohesive. Nevertheless, our local search algorithms for Theorems~\ref{thm:matroid} and~\ref{thm:matching} are similar to local search algorithms for proportional approval voting (PAV)~\cite{Thiele,PJR2018} that achieve proportional representation. 
It would be interesting to explore the connection between these various notions in greater depth. \paragraph{Private Goods and Envy-freeness.} Private goods are a special case of public goods with matroid constraints. Fair allocation of private goods is a widely studied topic~\cite{varian,drf,Parkes:2012:BDR:2229012.2229075,kellyTCP}. A common fairness criterion for private goods is envy-freeness: that no agent should (strongly) prefer the allocation of another agent. For fractional allocations, the classic context for envy-free allocation is cake cutting~\cite{cakeCutting, cakeCuttingProtocol}. For integral allocations, envy-free allocations or multiplicative approximations thereof may not exist in general. Recent work has introduced envy-freeness up to one good~\cite{LMMS04,budish2,envyFreeUpTo1,envyFreeUpToAny}, an additive approximation of envy-freeness. The notion of envy does not extend directly to public goods, and the core can be thought of as enforcing envy-freeness across demographics. We note that in addition to resource allocation, group-based fairness is also appearing as a desideratum in machine learning. Specifically, related notions may provide a tool against gerrymandered classifiers that appear fair on small samples, but not on structured subsets~\cite{MLFairness}. \paragraph{Strategyproofness.} In this work, we will not consider game-theoretic incentives for manipulation, for two reasons. First, even for the restricted case of private goods allocation, preventing manipulation leads to severely restricted mechanisms. For instance, \cite{Schum97} shows that the only strategyproof and Pareto efficient mechanisms are dictatorial, and thus highly unfair, even when there are only two agents with additive utilities over divisible goods. Second, our work is motivated by public goods settings with a large number of agents, such as participatory budgeting, wherein individual agents often have limited influence over the final outcome. 
It would be interesting to establish this formally, using notions like {\em strategyproofness in the large}~\cite{AB17}. \section{Prelude: Nash Social Welfare} Our approach to computing approximate core solutions revolves around the Nash social welfare, which is the product (or equivalently, the sum of logarithms) of agent utilities. This objective is commonly considered to be a natural tradeoff between the fairness-blind utilitarian social welfare objective (maximizing the sum of agent utilities) and the efficiency-blind egalitarian social welfare objective (maximizing the minimum agent utility). This function also has the advantage of being \textit{scale invariant} with respect to the utility function of each agent, and in general, preferring more equal distributions of utility. \subsection{Integer Nash Welfare and Smooth Variants} \label{sec:fnw} The integer {\em Max Nash Welfare} (MNW) solution~\cite{envyFreeUpTo1,FPDM} is an outcome $\bc$ that maximizes $\sum_{i \in N} \ln u_i(\bc) $. More technically, if every integer allocation gives zero utility to at least one agent, the MNW solution first chooses a largest set $S$ of agents that can be given non-zero utility simultaneously, and maximizes the product of utilities to agents in $S$. \cite{FPDM} argue that this allocation is reasonable by showing that it satisfies proportionality up to one issue for public decision making. A natural question is whether it also provides an approximation of the core. {\em We answer this question in the negative.} The example below shows that even for public decision making (a special case of matroid constraints), the integer MNW solution may fail to return a $(\delta,\alpha)$-core outcome, for any $\delta = o(m)$ and $\alpha = o(m)$. \begin{example} \label{eg:prop1} \label{eg:mnw} \normalfont{ Consider an instance of public decision making~\cite{FPDM} with $m$ issues and two alternatives per issue. 
Specifically, each issue $t$ has two alternatives $\{a_1^t, a_2^t\}$, and exactly one of them needs to be chosen. There are two sets of agents $X = \set{1,\ldots,m}$ and $Y = \set{m+1,\ldots,2m}$. Every agent $i \in X$ has $u_i^i(a_1^i) = 1$, and utility 0 for all other alternatives. Every agent $i \in Y$ has $u_i^t(a_2^t) = 1$ and $u_i^t(a_1^t) = 1/m$ for all issues $t \in \{1,2,\ldots,m\}$. Visually, this is represented as follows. \begin{center} \begin{tabular}{ c | c c | c c | c | c c} & $a_1^1$ & $a_2^1$ & $a_1^2$ & $a_2^2$ & $\hdots$ & $a_1^m$ & $a_2^m$ \\ \hline $u_{1 \in X}$ & 1 & 0 & 0 & 0 & $\hdots$ & 0 & 0 \\ $u_{2 \in X}$ & 0 & 0 & 1 & 0 & $\hdots$ & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & & \vdots & \vdots \\ $u_{m \in X}$ & 0 & 0 & 0 & 0 & $\hdots$ & 1 & 0 \\ $u_{i \in Y}$ & $1/m$ & 1 & $1/m$ & 1 & $\hdots$ & $1/m$ & 1 \end{tabular} \end{center} The integer MNW outcome is $\mathbf{c} = (a_1^1, a_1^2, \hdots, a_1^{m})$ because any other outcome gives zero utility to at least one agent. However, coalition $Y$ can deviate, choose outcome $\mathbf{c'} = (a_2^1, a_2^2, \hdots, a_2^{m})$, and achieve utility $m$ for each agent in $Y$. For $\mathbf{c}$ to be a $(\delta,\alpha)$-core outcome, we need \begin{align*} \exists i \in Y : (1+\delta) \cdot u_i(\mathbf{c}) + \alpha \ge \frac{|Y|}{|Y|+|X|} \cdot u_i(\mathbf{c'}) \quad \Rightarrow \quad 1+\delta + \alpha \ge \frac{m}{2}. \end{align*} Hence, $\mathbf{c}$ is not a $(\delta,\alpha)$-core outcome for any $\delta = o(m)$ and $\alpha = o(m)$. In contrast, it is not hard to see that $\mathbf{c'}$ is a $(0,1)$-core outcome because each agent in $X$ gets utility at most one in any outcome. Further, note that outcome $\mathbf{c}$ gives every agent utility $1$. Since $Prop_i \le 1$ for each agent $i$, $\mathbf{c}$ satisfies proportionality, and yet fails to provide a reasonable approximation to the core. 
One may argue that $\mathbf{c'}$, which is a $(0,1)$-core outcome, is indeed fairer because it respects the utility-maximizing choice of half of the population; the other half of the population cannot agree on what they want, so respecting their top choice is arguably a less fair outcome. Hence, the example also shows that outcomes satisfying proportionality (or proportionality up to one issue) can be very different from, and less fair than, approximate core outcomes. } \end{example} \paragraph{Smooth Nash Welfare.} One issue with the Nash welfare objective is that it is sensitive to agents receiving zero utility. We therefore consider the following smooth Nash welfare objective: \begin{equation} \label{eq:FK}F(\bc) := \sum_{i \in N} \ln \left(\ell +u_i(\bc) \right) \end{equation} where $\ell \ge 0$ is a parameter. Note that $\ell = 0$ coincides with the Nash welfare objective. The case of $\ell = 1$ was considered by~\cite{FairKnapsack}, who showed it is {\sc NP-Hard} to optimize. Recall that we normalized agent utilities so that each agent has a maximum utility of $1$ for any element; hence, adding $\ell$ to the utility of agent $i$ is equivalent to adding $\ell \max_j u_{ij}$ when utilities are not normalized. We show that local search procedures for the smooth Nash welfare objective, for appropriate choices of $\ell$, yield a $(0,2)$-core outcome for matroid constraints (Section~\ref{sec:mnw}) and a $\left(\delta, O\left(\frac{1}{\delta}\right)\right)$-core outcome for matching constraints (Section~\ref{sec:matching}). 
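To make the example above concrete, here is a small self-contained Python sketch (illustrative only, not from the paper; the choice $m = 10$ and the function names are ours) that evaluates the plain and smooth Nash welfare objectives on the two outcomes of Example~\ref{eg:mnw} and checks the core condition for the deviating coalition $Y$:

```python
import math

def smooth_nash_welfare(utilities, ell):
    """Sum of ln(ell + u_i) over agents; ell = 0 recovers the plain Nash welfare."""
    return sum(math.log(ell + u) for u in utilities)

m = 10  # number of issues (arbitrary illustration)
# Outcome c = (a_1^1, ..., a_1^m): each X-agent gets 1; each Y-agent gets m * (1/m) = 1.
u_c = [1.0] * m + [1.0] * m
# Outcome c' = (a_2^1, ..., a_2^m): X-agents get 0; Y-agents get m.
u_cp = [0.0] * m + [float(m)] * m

# With ell = 0, outcome c' leaves an agent at zero utility (objective -infinity),
# so the integer MNW solution picks c.
assert smooth_nash_welfare(u_c, 0) == 0.0
# With ell = 1, the smooth objective prefers c' instead.
assert smooth_nash_welfare(u_cp, 1) > smooth_nash_welfare(u_c, 1)

# Core check for c against coalition Y (|Y| = m out of n = 2m agents): each i in Y
# would need (1 + delta) * u_i(c) + alpha >= (|Y|/n) * u_i(c') = m/2.
scaled_deviation = (m / (2 * m)) * m
print(scaled_deviation)  # 5.0 for m = 10, so delta + alpha must be at least m/2 - 1
```

This mirrors the calculation in the example: the integer MNW outcome $\mathbf{c}$ is blocked by $Y$ for any $\delta, \alpha = o(m)$.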
In contrast, in Example~\ref{eg:knapsack} (Appendix~\ref{sec:Examples}) we show that optimizing any fixed smooth Nash welfare objective cannot guarantee a good approximation to the core, even with {\em a single} packing constraint, motivating the need for a different algorithm. \renewcommand{\P}{\mathcal{P}} \subsection{Fractional Max Nash Welfare Solution} \label{sec:fractional} For general packing constraints, we use a fractional relaxation of the Nash welfare program. A {\em fractional outcome} consists of a vector $\mathbf{w}$ such that $w_j \in [0,1]$ measures the fraction of element $j$ chosen. The utility of agent $i$ under this outcome is $u_i(\mathbf{w}) = \sum_{j=1}^m w_j u_{ij}$. The fractional {\em Max Nash Welfare} (MNW) solution is a fractional allocation that maximizes the Nash welfare objective (without any smoothing). Define the packing polytope as: $$\P = \left\{ \mathbf{w} \in [0,1]^m \ | \ \textstyle\sum_{j=1}^{m} a_{kj} w_j \leq b_k, \forall k \in [K] \right\}$$ Then the fractional MNW solution is $\argmax_{\bc \in \P} \sum_i \ln u_i(\bc)$. It is easy to show that the fractional MNW allocation lies in the core. Let $\bc$ denote the optimal fractional allocation to the MNW program. By first order optimality, for any other allocation ${\mathbf d}$, \begin{equation} \nabla \ln \vec{u(\bc)} \cdot \left(\vec{u({\mathbf d})} - \vec{u(\bc)} \right) \le 0 \quad \Rightarrow \quad \sum_{i \in N} \frac{1}{u_i(\bc)} \left(u_i({\mathbf d}) - u_i(\bc) \right) \leq 0 \quad \Rightarrow \quad \sum_{i \in N} \frac{u_i({\mathbf d})}{u_i(\bc)} \leq n. \label{eqn:prop-fairness} \end{equation} Suppose for contradiction that $\bc$ is not a core outcome. Then there exist a set of agents $S \subseteq N$ and an outcome ${\mathbf d}$ such that $u_i({\mathbf d}) \ge (n/|S|) \cdot u_i(\bc)$ for all $i \in S$, with at least one inequality strict. This implies $\sum_{i \in S} u_i({\mathbf d})/u_i(\bc) > n$. However, this contradicts Equation~\eqref{eqn:prop-fairness}. 
Thus $\bc$, the optimal fractional solution to the MNW program, is a core solution. \medskip For the allocation of public goods, it can be shown that the fractional MNW outcome can be irrational despite rational inputs~\cite{AACK+17}, preventing an exact algorithm. For our approximation results, a fractional solution that approximately preserves the utility to each agent would suffice, and we prove the following theorem in Appendix~\ref{app:fractional}. \begin{theorem} \label{thm:fractional} For any $\epsilon, \delta > 0$, we can compute a fractional $(\delta,\epsilon)$-core outcome in time polynomial in the input size and $\log \frac{1}{\epsilon \delta}$. \end{theorem} \section{Matroid Constraints} \label{sec:mnw} We now consider public goods allocation with matroid constraints. In particular, we show that when the feasibility constraints encode independent sets of a matroid $\mathcal{M}$, maximizing the smooth Nash welfare objective in Equation~\eqref{eq:FK} with $\ell = 1$ yields a $(0,2)$-core outcome. However, optimizing this objective is known to be {\sc NP-hard}~\cite{FairKnapsack}. We also show that given $\epsilon > 0$, a local search procedure for this objective function (given below) yields a $(0,2+\epsilon)$-core outcome in polynomial time, which proves Theorem~\ref{thm:matroid}. \subsection{Algorithm} Fix $\epsilon > 0$. Let $\gamma = \frac{\epsilon}{4m}$, where $m = |W|$ is the number of elements. Recall that there are $n$ agents. \begin{enumerate} \item Start with an arbitrary basis $\bc$ of $\mathcal{M}$. \item Compute $F(\bc) = \sum_{i \in N} \ln (1+ u_i(\bc))$. \item Let a {\em swap} be a pair $(j,j')$ such that $j \in \bc$, $j' \notin \bc$, and ${\mathbf c'} = \bc \setminus \{j\} \cup \{j'\}$ is also a basis of $\mathcal{M}$. \item Find a swap such that $F({\mathbf c'})-F(\bc) \ge \frac{n \gamma}{m}$. \begin{itemize} \item If such a swap exists, then perform the swap, i.e., update $\bc \gets {\mathbf c'}$, and go to Step (2). 
\item If no such swap exists, then output $\bc$ as the final outcome. \end{itemize} \end{enumerate} \subsection{Analysis} First, we show that the local search algorithm runs in time polynomial in $n$, $m$, and $\sfrac{1}{\epsilon}$. Note that $F(\bc) = O(n \ln m)$ because in our normalization, each agent can have utility at most $m$. Thus, the number of iterations is $O\left(m^2 \log m/ \epsilon\right)$. Finally, each iteration can be implemented in $O(n \cdot m^2)$ time by iterating over all pairs and computing the change in the smooth Nash welfare objective. Next, let $\bcstar$ denote the outcome maximizing the smooth Nash welfare objective with $\ell = 1$, and $\bchat$ denote the outcome returned by the local search algorithm. We show that $\bcstar$ is a $(0,2)$-core outcome, while $\bchat$ is a $(0,2+\epsilon)$-core outcome. For outcome $\bc$, define $ F_i(\bc) = \ln (1+ u_i(\bc))$. Fix an arbitrary outcome $\bc$. For an agent $i$ with $u_i(\bc) > 0$, we have that for every element $j \in \bc$: $$ F_i(\bc) - F_i(\bc \setminus \{j\}) \le \frac{u_{ij}}{ u_i(\bc) + 1 - u_{ij}} \le \frac{u_{ij}}{ u_i(\bc)}. $$ This holds because $\ln(x + h) - \ln x \le \frac{h}{x}$ for $x > 0$ and $h \ge 0$. Summing this over all $j \in \bc$ gives $$ \sum_{j \in \bc} F_i(\bc)-F_i(\bc \setminus \{j\}) \le \sum_{j \in \bc} \frac{u_{ij}}{u_i(\bc)} = \frac{u_i(\bc)}{u_i(\bc)} = 1. $$ For an agent $i$ with $u_i(\bc) = 0$, we trivially have $\sum_{j \in \bc} F_i(\bc)-F_i(\bc \setminus \{j\}) = 0$. Summing over all agents, we have that for every outcome $\vec{c}$: \begin{equation} \label{eq1}\sum_{j \in \bc} F(\bc) - F(\bc \setminus \{j\}) = \sum_{i \in N} \sum_{j \in \bc} F_i(\bc) - F_i(\bc \setminus \{j\}) \le n. 
\end{equation} We now use the following result: \begin{lemma}[\cite{Kung}] For every pair of bases $\bc$ and ${\mathbf c'}$ of a matroid $\mathcal{M}$, there is a bijection $f: \bc \rightarrow {\mathbf c'}$ such that for every $j \in \bc$, $\bc \setminus \{j\} \cup \{f(j)\}$ is also a basis. \end{lemma} Using the above lemma, combined with the fact that $\ln(x+h) - \ln x \ge \frac{h}{x+h}$ for $x > 0$ and $h \ge 0$, we have that for all $\bc,{\mathbf c'}$ and every subset $S \subseteq N$ of agents: \begin{align} \sum_{j \in \bc} F(\bc \setminus \{j\} \cup \{f(j)\}) - F(\bc \setminus \{j\}) & \ge \sum_{i \in N} \sum_{j \in \bc} \frac{u_{if(j)}}{u_i(\bc) + 1 - u_{ij} + u_{if(j)}}\nonumber\\ & \ge \sum_{i \in S} \sum_{j' \in {\mathbf c'}} \frac{u_{ij'}}{u_i(\bc) + 2} = \sum_{i \in S} \frac{u_i({\mathbf c'})}{u_i(\bc) + 2}.\label{eq2} \end{align} We now give analogous proofs for the approximations achieved by the global optimum $\bcstar$ and the local optimum $\bchat$. {\em Global optimum.} Suppose for contradiction that $\bcstar$ is not a $(0,2)$-core outcome. Then, there exist a subset $S$ of agents and an outcome ${\mathbf c'}$ such that for all $i \in S$, $$ \frac{|S|}{n} \cdot u_i({\mathbf c'}) \ge u_i(\bcstar) + 2, $$ and at least one inequality is strict. Rearranging the terms and summing over all $i \in S$, we obtain: $$ \sum_{i \in S} \frac{u_i({\mathbf c'})}{u_i(\bcstar) + 2} > \sum_{i \in S} \frac{n}{|S|} = n. \label{eqn:global-opt} $$ Combining this with Equation~\eqref{eq2}, and subtracting Equation~\eqref{eq1} yields: $$ \sum_{j \in \bcstar} \left( F(\bcstar \setminus \{j\} \cup \{f(j)\}) - F(\bcstar) \right) >0. $$ This implies the existence of a pair $(j,f(j))$ such that $F(\bcstar \setminus \{j\} \cup \{f(j)\}) - F(\bcstar) > 0$, which contradicts the optimality of $\bcstar$ because $\bcstar \setminus \{j\} \cup \{f(j)\}$ is also a basis of $\mathcal{M}$. {\em Local optimum.} Similarly, suppose for contradiction that $\bchat$ is not a $(0,2+\epsilon)$-core outcome. 
Then, there exist a subset $S$ of agents and an outcome ${\mathbf c'}$ such that for all $i \in S$, $$ \frac{|S|}{n} \cdot u_i({\mathbf c'}) \ge u_i(\bchat) + 2 + \epsilon > (1+\gamma) \left(u_i(\bchat) + 2 \right). $$ Here, the final transition holds because $\gamma < \epsilon/(m+2) \le \epsilon/(u_i(\bchat)+2)$. Again, rearranging and summing over all $i \in S$, we obtain: $$ \sum_{i \in S} \frac{u_i({\mathbf c'})}{u_i(\bchat) + 2} > (1+\gamma) \sum_{i \in S} \frac{n}{|S|} \ge n \cdot (1+\gamma). $$ Once again, combining this with Equation~\eqref{eq2}, and subtracting Equation~\eqref{eq1} yields: $$ \sum_{j \in \bchat} \left( F(\bchat \setminus \{j\} \cup \{f(j)\}) - F(\bchat) \right)> n \gamma. $$ This implies the existence of a pair $(j,f(j))$ such that $F(\bchat \setminus \{j\} \cup \{f(j)\}) - F(\bchat) > n \gamma/m$, which violates the local optimality of $\bchat$ because $\bchat \setminus \{j\} \cup \{f(j)\}$ is also a basis of $\mathcal{M}$. \paragraph{Lower Bound.} While a $(0,2)$-core outcome always exists, we show in the following lemma that a $(0,1-\epsilon)$-core outcome is not guaranteed to exist for any $\epsilon > 0$. \begin{lemma} \label{lem:noexist} For $\epsilon > 0$ and matroid constraints, $(0, 1-\epsilon)$-core outcomes are not guaranteed to exist. \end{lemma} \begin{proof} Consider the following instance of public decision making where we have several issues and must choose a single alternative for each issue, a special case of matroid constraints. There are $n$ agents, where $n$ is even. There are $m = (n-2)+n/2$ issues. The first $n-2$ issues correspond to unit-value private goods, \textit{i.e.}, each such issue has $n$ alternatives, and each alternative gives utility $1$ to a unique agent and utility $0$ to others. The remaining $n/2$ issues are ``pair issues''; each such issue has $\binom{n}{2}$ alternatives, one corresponding to every pair of agents that gives both agents in the pair utility $1$ and all other agents utility $0$. 
Since every alternative grants each agent utility $0$ or $1$, agent utilities are integral, and every integer allocation distributes a total utility of exactly $(n-2) + 2 \cdot (n/2) = 2n-2$ among the $n$ agents. Hence, either (a) at least two agents receive utility at most $1$, or (b) at most one agent does, in which case at least $n-1$ agents receive utility at least $2$; this already accounts for $2(n-1) = 2n-2$, so the remaining agent must receive utility $0$ and the others exactly $2$. In case (b), the zero-utility agent can deviate alone, choosing its own alternative on every private-good issue and a pair containing itself on every pair issue, for utility $(n-2) + n/2$. For the outcome to be a $(0,\alpha)$-core outcome, we then need $0 + \alpha \ge (1/n) \cdot (3n/2 - 2) = 3/2 - 2/n$. In case (a), consider the deviating coalition consisting of two agents with utility at most $1$. They can choose the alternative that gives them each utility $1$ on every pair issue, and split the $n-2$ private goods equally. Thus, they each get utility $n/2 + (n-2)/2 = n-1$, and we need $1 + \alpha \ge (2/n) \cdot (n-1)$. In either case, as $n \to \infty$, this requires $\alpha \rightarrow 1$. Hence, for any $\epsilon > 0$, a $(0,1-\epsilon)$-core outcome is not guaranteed to exist. \end{proof} Note that Theorem~\ref{thm:matroid} shows the existence of a $(0,2)$-core outcome, which is therefore tight up to a unit additive relaxation. Whether a $(0,1)$-core outcome always exists under matroid constraints remains an important open question. Interestingly, we show that such an outcome always exists for the special case of private goods allocation, and, in fact, can be achieved by maximizing the smooth Nash welfare objective. \begin{lemma} \label{lem:privatecore} For private goods allocation, maximizing the smooth Nash welfare objective with $\ell = 1$ returns a $(0,1)$-core outcome. \end{lemma} \begin{proof} There is a set of agents $N$ and a set of private goods $M$. Each agent $i \in N$ has a utility function $u_i : 2^M \to \mathbb{R}_{\ge 0}$. Utilities are additive, so $u_i(S) = \sum_{g \in S} u_i(\set{g})$ for all $S \subseteq M$. For simplicity, we denote $u_{ig} \triangleq u_i(\set{g})$. Without loss of generality, we normalize the utility of each agent such that $\max_{g \in M} u_{ig} = 1$ for each $i$. An allocation $A$ is a {\em partition} of the set of goods among the agents; let $A_i$ denote the bundle of goods received by agent $i$. We want to show that an allocation maximizing the objective $\prod_{i \in N} (1+u_i(A_i))$ is a $(0,1)$-core outcome. Let $A$ denote an allocation maximizing the smooth Nash welfare objective with $\ell = 1$. 
We assume without loss of generality that every good is positively valued by at least one agent. Hence, $u_j(A_j) = 0$ must imply $A_j = \emptyset$. For agents $i,j \in N$ with $A_j \neq \emptyset$ (hence $u_j(A_j) > 0$), and good $g \in A_j$, moving $g$ to $A_i$ should not increase the objective function. Hence, for each $g \in A_j$, we have $$ \big(1+u_i(A_i \cup \set{g})\big) \cdot \big(1+u_j(A_j \setminus \set{g})\big) \le \big(1+u_i(A_i)\big) \cdot \big(1+u_j(A_j)\big). $$ Using additivity of utilities, this simplifies to \begin{equation} \frac{u_{ig}}{1 + u_i(A_i)} \le \frac{u_{jg}}{1+u_j(A_j)-u_{jg}} \le \frac{u_{jg}}{u_j(A_j)}. \label{eqn:privatecore1} \end{equation} For every agent $j \in N$ with $A_j \neq \emptyset$ and good $g \in A_j$, define $p_g = u_{jg}/u_j(A_j)$. Abusing the notation a little, for a set $T \subseteq M$ define $p_T = \sum_{g \in T} p_g$. Then, from Equation~\eqref{eqn:privatecore1}, we have that for all players $i \in N$ and goods $g \in M$, \begin{equation} (1+u_i(A_i)) \cdot p_g \ge u_{ig}. \label{eqn:privatecore2} \end{equation} Suppose for contradiction that $A$ is not a $(0,1)$-core outcome. Then, there exists a set of agents $S \subseteq N$ and an allocation $B$ of the set of all goods to agents in $S$ such that $(|S|/n) \cdot u_i(B_i) \ge 1+u_i(A_i)$ for every agent $i \in S$, and at least one inequality is strict. Rearranging the terms and summing over $i \in S$, we have \begin{equation} \sum_{i \in S} \frac{u_i(B_i)}{1+u_i(A_i)} > \sum_{i \in S} \frac{n}{|S|} = n. \label{eqn:privatecore3} \end{equation} We now derive a contradiction. For agent $i \in S$, summing Equation~\eqref{eqn:privatecore2} over $g \in B_i$, we get $$ (1+u_i(A_i)) \cdot p_{B_i} \ge u_i(B_i) \Rightarrow \frac{u_i(B_i)}{1+u_i(A_i)} \le p_{B_i}. 
$$ Summing this over $i \in S$, we get $$ \sum_{i \in S} \frac{u_i(B_i)}{1+u_i(A_i)} \le \sum_{i \in S} p_{B_i} = \sum_{g \in M} p_g = \sum_{\substack{j \in N \text{ s.t.}\\A_j \neq \emptyset}} \sum_{g \in A_j} \frac{u_{jg}}{u_j(A_j)} = \sum_{\substack{j \in N \text{ s.t.}\\A_j \neq \emptyset}} \frac{u_j(A_j)}{u_j(A_j)} \le n. $$ However, this contradicts Equation~\eqref{eqn:privatecore3}. \end{proof} \section{Matching Constraints} \label{sec:matching} We now present the algorithm proving Theorem~\ref{thm:matching}. We show that if the elements are edges of an undirected graph $G(V,E)$, and the feasibility constraints encode a matching, then for constant $\delta \in (0,1]$, a $(\delta, 8 + \frac{6}{\delta})$-core always exists and is efficiently computable. The idea is to again run a {\em local search} on the smooth Nash welfare objective in Equation~(\ref{eq:FK}), but this time with $\ell \approx 1+\frac{4}{\delta}$. \paragraph{Algorithm.} Recall that there are $n$ agents. Let $|V| = r$ and $|E| = m$. Let $\kappa = \frac{2}{\delta}$. For simplicity, assume $\kappa \in \mathbb{N}$. Our algorithm is inspired by the PRAM algorithm for approximate maximum weight matchings due to~\cite{HV}, and we follow their terminology. Given a matching $\bc$, an {\em augmentation} with respect to $\bc$ is a matching $T \subseteq E \setminus \bc$. The {\em size} of the augmentation is $|T|$. Let $M(T)$ denote the subset of edges of $\bc$ that have a vertex which is matched under $T$. Then, the matching $\left(\bc\setminus M(T)\right) \cup T$ is called the {\em augmentation} of $\bc$ using $T$. \begin{enumerate} \item Start with an arbitrary matching $\bc$ of $G$. \item Compute $F(\bc) = \sum_i \ln \left(1 + 2 \kappa + u_i(\bc) \right)$. \item Let $\mathcal{C}$ be the set of all augmentations with respect to $\bc$ of size at most $\kappa$. 
\begin{itemize} \item If there exists $T \in \mathcal{C}$ such that $F(\left(\bc\setminus M(T)\right) \cup T) - F(\bc) \ge \frac{n}{\kappa r}$, perform this augmentation (i.e., let $\bc \gets \left(\bc\setminus M(T)\right) \cup T$) and go to Step (2). \item Otherwise, output $\bc$ as the final outcome. \end{itemize} \end{enumerate} \paragraph{Analysis.} The outline of the analysis is similar to the analysis for matroid constraints. First, we show that the algorithm runs in polynomial time. Again, recall that each agent has utility at most $m$. Thus, $F(\bc) = O(n \cdot \ln m)$. Because each improvement increases the objective value by at least $n/(\kappa r)$, the number of iterations is $O(\kappa r \ln m) = O(m^2/\delta)$. Each iteration can be implemented by na\"{\i}vely going over all $O(m^{\kappa})$ subsets of edges of size at most $\kappa$, checking if they are valid augmentations with respect to $\bc$, and whether they improve the objective function by more than $n/(\kappa r)$. The local search therefore runs in polynomial time for constant $\delta > 0$. Let $\bc$ denote the outcome returned by the algorithm. We next show that $\bc$ is indeed a $(\delta,8+3\kappa)$-core outcome. Suppose for contradiction that this is not true. Then, there exist a subset of agents $S$ and a matching ${\mathbf c'}$ such that for all $i \in S$, $$ \frac{|S|}{n} \cdot u_i({\mathbf c'}) \ge (1+\delta) \cdot u_i(\bc) + 8 + 3 \kappa \ge (1+ \delta) \cdot \left(u_i(\bc) + 3 \kappa + 1 \right), $$ and at least one inequality is strict (the last inequality is because $\delta \in (0,1]$). Rearranging and summing over all $i \in S$, we obtain \begin{equation} \sum_{i \in S} \frac{u_i({\mathbf c'})}{u_i(\bc) + 3 \kappa + 1} > (1+\delta) \cdot \sum_{i \in S} \frac{n}{|S|} = n \cdot (1+\delta). \label{eqn:matching-inequality1} \end{equation} For $j \in E$, define $w_j = \sum_{i \in N} \frac{u_{ij}}{u_i(\bc) + 1}$ and $w'_j = \sum_{i \in N} \frac{u_{ij}}{u_i(\bc) + 3 \kappa + 1}$. 
Let $W = \sum_{j \in \bc} w_j$, and $W' = \sum_{j \in {\mathbf c'}} w'_j$. It is easy to check that \begin{equation} W \le n \quad \text{and} \quad W' \ge n \cdot (1+\delta), \label{eqn:matching-inequality2} \end{equation} where the latter follows from Equation~\eqref{eqn:matching-inequality1}. Further note that $w_j \ge w'_j$ for all $j$. For an augmentation $T$ with respect to $\bc$, define $\mathrm{gain}(T) = \sum_{j \in T} w'_j - \sum_{j \in M(T)} w_j$. The next lemma is a simple generalization of the analysis in~\cite{HV}; we give the adaptation here for completeness. \begin{lemma} Assuming weights $w_j \ge w'_j$ for all edges $j$, for any integer $\kappa \ge 1$ and matchings $\bc$ and ${\mathbf c'}$, there exists a multiset $OPT$ of augmentations with respect to $\bc$ such that: \begin{itemize} \item For each $T \in OPT$, $T \subseteq {\mathbf c'}$ and $|T| \le \kappa$; \item $|OPT| \le \kappa r$; and \item $\sum_{T \in OPT} \mathrm{gain}(T) \ge \kappa \cdot W' - (\kappa + 1) \cdot W$. \end{itemize} \label{lem:hv} \end{lemma} \begin{proof} We follow~\cite{HV} in constructing the multiset $OPT$ of augmentations with respect to $\bc$ out of edges in ${\mathbf c'}$. Let $\bc \triangle {\mathbf c'}$ be the symmetric difference of matchings $\bc$ and ${\mathbf c'}$, consisting of alternating paths and cycles. For every cycle or path ${\mathbf d} \in \bc \triangle {\mathbf c'}$, let $T_{{\mathbf d}}$ be the set of edges ${\mathbf d} \cap {\mathbf c'}$. For all $T_{{\mathbf d}}$ with $|T_{{\mathbf d}}| \leq \kappa$, simply add $T_{{\mathbf d}}$ to $OPT$ $\kappa$ times (note that $OPT$ is a multiset, not a set). For $T_{{\mathbf d}}$ with $|T_{{\mathbf d}}| > \kappa$, we break up $T_{{\mathbf d}}$ into multiple smaller augmentations. 
To do so, index the edges in $T_{{\mathbf d}}$ from $1$ to $|T_{{\mathbf d}}|$ and add $|T_{{\mathbf d}}|$ different augmentations to $OPT$ by starting at every index in $T_{{\mathbf d}}$ and including the next $\kappa$ edges in $T_{{\mathbf d}}$, with wrap-around from $|T_{{\mathbf d}}|$ to $1$. Now we must argue that $OPT$ as we have constructed it satisfies the conditions of the lemma. The first point, that $\forall T \in OPT, T \subseteq {\mathbf c'}$ and $|T| \leq \kappa$, follows trivially from the construction. For the second point, observe that each component ${\mathbf d}$ of $\bc \triangle {\mathbf c'}$ contributes at most $\max(\kappa, |T_{{\mathbf d}}|)$ augmentations to $OPT$; since there are at most $r/2$ vertex-disjoint components and $\sum_{{\mathbf d}} |T_{{\mathbf d}}| \le |{\mathbf c'}| \le r/2$, we have $|OPT| \le \kappa r$. To see the third point, note that every edge in ${\mathbf c'} \backslash \bc$ is contained in at least $\kappa$ augmentations in $OPT$. On the other hand, for every edge $e \in \bc \backslash {\mathbf c'}$, there are no more than $\kappa + 1$ augmentations $T \in OPT$ such that $e \in M(T)$ (recall $M(T)$ are the edges of $\bc$ with a vertex matched under $T$); this bound can be attained, for example, when ${\mathbf d}$ is a path with $|T_{{\mathbf d}}| = \kappa + 1$. Finally, for the edges $j \in {\mathbf c'} \cap \bc$, the weight $w'_j \le w_j$. Putting these facts together, the third point of the lemma follows. \end{proof} Consider the set of augmentations $OPT$ from Lemma~\ref{lem:hv}. 
For augmentation $T \in OPT$, we have: \begin{align*} F(\left(\bc\setminus M(T)\right) \cup T) - F(\bc) & = \Big( F(\left(\bc\setminus M(T)\right) \cup T) - F(\bc \setminus M(T)) \Big) - \Big(F(\bc) - F(\bc \setminus M(T)) \Big)\\ & \ge \sum_{i \in N} \left( \frac{ \sum_{j \in T} u_{ij}}{u_i(\bc) + 2 \kappa + 1 + \sum_{j \in T} u_{ij}} - \frac{ \sum_{j \in M(T)} u_{ij}}{u_i(\bc) + 2 \kappa + 1 - \sum_{j \in M(T)} u_{ij}} \right) \\ & \ge \sum_{i\in N} \left( \frac{ \sum_{j \in T} u_{ij}}{u_i(\bc) + 3 \kappa + 1} - \frac{ \sum_{j \in M(T)} u_{ij}}{u_i(\bc)+ 1} \right) \\ & = \sum_{j \in T} w'_j - \sum_{j \in M(T)} w_j = \mathrm{gain}(T). \end{align*} Here, the second transition holds because $h/(x+h) \le \ln(x+h)-\ln x \le h/x$ for all $x \ge 1$ and $h \ge 0$, and the third transition holds due to $|T| \le \kappa$ and $|M(T)| \le 2|T| \le 2\kappa$. Therefore, we have: \begin{align*} \sum_{T \in OPT} F(\left(\bc\setminus M(T)\right) \cup T) - F(\bc) \ge \sum_{T \in OPT} \mathrm{gain}(T) &\ge \kappa \cdot W' - (\kappa + 1) \cdot W\\ &\ge \kappa \cdot n \cdot (1+\delta) - (\kappa + 1) \cdot n = n, \end{align*} where the second transition follows from Lemma~\ref{lem:hv}, and the third transition follows from Equation~\eqref{eqn:matching-inequality2}. Since $|OPT| \le \kappa r$, there exists an augmentation $T \in OPT$ with $F(\left(\bc\setminus M(T)\right) \cup T) - F(\bc) \ge \sfrac{n}{\kappa r}$, which violates the local optimality of $\bc$. This completes the proof of Theorem~\ref{thm:matching}. \paragraph{Lower Bound.} We give a stronger lower bound for matchings than the lower bound for matroids in Lemma~\ref{lem:noexist}. \begin{lemma} A $(\delta, \alpha)$-core outcome is not guaranteed to exist for matching constraints, for any $\delta \ge 0$ and $\alpha < 1$. \end{lemma} \begin{proof} Consider the graph $K_{2,2}$ (the complete bipartite graph with two vertices on each side). This graph has four edges, and two disjoint perfect matchings. 
Let there be two agents. Agent $1$ has unit utility for the edges of one matching, while agent $2$ has unit utility for the edges of the other matching. Any integer outcome gives zero utility to one of these agents. This agent can deviate and obtain utility $2$. Hence, for an outcome to be a $(\delta,\alpha)$-core outcome, we need $(1+\delta)\cdot 0 + \alpha \ge (1/2) \cdot 2$, which is impossible for any $\delta \ge 0$ and $\alpha < 1$. \end{proof}
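The $K_{2,2}$ instance above is small enough to check exhaustively. The following standalone Python sketch (illustrative, not from the paper) enumerates every matching of $K_{2,2}$ and verifies that each one leaves one of the two agents at zero utility:

```python
from itertools import combinations

# K_{2,2}: left vertices {0, 1}, right vertices {2, 3}.
edges = [(0, 2), (0, 3), (1, 2), (1, 3)]
matching_1 = {(0, 2), (1, 3)}  # agent 1 has unit utility on these edges
matching_2 = {(0, 3), (1, 2)}  # agent 2 has unit utility on these edges

def is_matching(subset):
    """A set of edges is a matching iff no vertex repeats."""
    used = [v for e in subset for v in e]
    return len(used) == len(set(used))

all_matchings = [set(s) for k in range(len(edges) + 1)
                 for s in combinations(edges, k) if is_matching(s)]
print(len(all_matchings))  # 7: the empty matching, 4 single edges, 2 perfect matchings

# Every feasible integer outcome gives zero utility to at least one of the two agents.
assert all(min(len(c & matching_1), len(c & matching_2)) == 0 for c in all_matchings)

# That agent can deviate alone to its own perfect matching (utility 2); with |S|/n = 1/2,
# a (delta, alpha)-core outcome needs alpha >= (1/2) * 2 = 1, matching the lemma.
```

The brute-force enumeration is only viable because the instance has four edges; it simply certifies the case analysis in the proof.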
\section{Experiments} \label{sec:expr} \subsection{Datasets and model details} \label{ssec:expr:dataset_model} We evaluate the proposed cross-speaker reading style transfer method on the following disjoint datasets. \textbf{MST-Originbeat}: A neutral Mandarin corpus offered by the ICASSP 2021 M2VoC challenge \cite{9414001}, with one female and one male speaker, each with 5,000 utterances. \textbf{DB}: A private neutral Mandarin dataset with 10,000 utterances from another female Chinese speaker, named DB6. \textbf{Audiobook\_FM}: A private Mandarin audiobook dataset with 8 speakers and 2 genres: fairy tale and Chinese martial arts fiction. One female and one male speaker cover both genres, while each of the other 6 speakers covers only one genre. One of the 6 speakers is DB6, who reads only the fairy tale documents. This dataset contains 1,315 paragraphs in total, comprising 13,718 utterances that add up to 24.3 hours. For the proposed chunk-wise reading style model, each short paragraph in the Audiobook\_FM dataset is regarded as a chunk, whose global genre label is assigned based on the topic of the containing document: either \textbf{fairy tale} or \textbf{martial arts fiction}. For the MST-Originbeat and DB datasets, each chunk is made up of 10 randomly sampled utterances voiced by the same speaker, and all chunks share the same global genre label: \textbf{neutral}. In our experiments, the model is trained for 380k steps in the first stage and 260k steps in the second stage. All classifiers are trained with cross-entropy loss, with the loss weight set to $0.05$. 
A pre-trained HiFi-GAN \cite{DBLP:conf/nips/KongKB20} vocoder is utilized to generate audio. \subsection{Cross-speaker reading style transfer} \label{ssec:expr:style_trans} Based on the proposed model, we aim to transfer the reading styles of fairy tale and martial arts fiction to the neutral speakers in MST-Originbeat, i.e., speakers whose training data lack the target reading style. Given the target speaker identity and the audios of a reference paragraph with the target reading style, this is achieved by combining: (i) the speaker timbre GSE vector averaged over all chunks in the training data of the target speaker; (ii) the global genre GSE vector and LPE sequences extracted from the reference audio chunk. We evaluate the reading style transfer results on the reserved test set: the synthesized speech keeps the local prosody and global genre close to the reference utterance, while carrying the timbre of the target speaker. For audio samples, please check our demo website\footnote{\scriptsize https://thuhcsi.github.io/is2022-cross-speaker-reading-style-transfer}. \subsubsection{Baseline} For comparison, we establish an embedding-table-based baseline method by replacing the 2 global branches of the multi-scale style model with a speaker embedding table and a global genre embedding table. This is similar to the setting of \cite{xie2021multi}, except for the additional global genre embedding table that accommodates the audiobook dataset. \subsubsection{Evaluation} We conduct a Mean Opinion Score (MOS) test on 66 utterances from the test set, with 26 native Chinese speakers serving as subjects. Each test group is made up of a synthesized audio $\tilde{M}$, the ground truth reference audio $M_{ref}$, and the target speaker audio $M_{tgt}$. The subjects are asked to rate $\tilde{M}$ on a scale of 1 to 5 regarding 3 different aspects: (i) its style similarity to $M_{ref}$; (ii) its timbre similarity to $M_{tgt}$; (iii) its audio quality. 
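The MOS results are reported as average scores with 95\% confidence intervals. As a minimal sketch of how such intervals are commonly computed (assuming the usual normal approximation with $1.96$ standard errors; the ratings below are invented purely for illustration):

```python
import math

def mos_with_ci(scores):
    """Mean opinion score with a 95% confidence interval half-width
    (normal approximation: 1.96 times the standard error of the mean)."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    half_width = 1.96 * math.sqrt(var / n)
    return mean, half_width

ratings = [4, 3, 4, 5, 3, 4, 4, 3, 5, 4]  # hypothetical 1-5 ratings from 10 subjects
mean, ci = mos_with_ci(ratings)
print(f"{mean:.2f} +/- {ci:.2f}")
```

With the small samples typical of listening tests, a Student-$t$ multiplier instead of $1.96$ would widen the interval slightly; the paper does not specify which convention it uses.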
As shown in Table~\ref{tab:mos}, the proposed model outperforms the baseline model on all evaluation scores, which verifies the effectiveness of the proposed chunk-wise multi-scale cross-speaker style model in enhancing cross-speaker reading style transfer performance. In addition to the subjective timbre similarity test, we use a pre-trained speaker verification (SV) model \cite{zhang2022mfa} to extract speaker embedding vectors from the synthesized audios and the target speaker audios. The objective evaluation of the speaker timbre similarity of each synthesized speech is thus obtained by computing the cosine similarity between the extracted vector and the averaged vector of the corresponding target speaker. As shown in Table~\ref{tab:mos}, the overall outcomes are generally consistent with the subjective timbre similarity test results. \begin{table*}[th] \centering \caption{Similarity/Quality MOS and SV embedding similarity results (Average score and 95\% confidence interval)} \label{tab:mos} \begin{tabular}{m{7.8em}|ccc|c|c|c} \toprule \multirow{2}{7.8em}{Models} & \multicolumn{3}{c|}{Style Similarity} & Timbre & SV embedding & Audio \\ & Fairy tale & Martial arts fiction & Overall & Similarity & Cosine similarity & Quality Score \\ \midrule Baseline & $3.57\pm0.10$ & $3.99\pm0.10$ & $3.79\pm0.07$ & $2.89\pm0.08$ & $0.75\pm0.02$ & $3.08\pm0.10$ \\ Proposed & $\mathbf{3.88\pm0.08}$ & $\mathbf{4.02\pm0.0}$ & $\mathbf{3.95\pm0.06}$ & $3.22\pm0.07$ & $0.76\pm0.02$ & $3.58\pm0.06$ \\ \hspace{3mm}w/o GSE.style & $3.79\pm0.09$ & $3.80\pm0.10$ & $3.79\pm0.07$ & $\mathbf{3.49\pm0.07}$ & $\mathbf{0.81\pm0.02}$ & $3.56\pm0.06$ \\ \hspace{3mm}w/o chunk & $3.81\pm0.08$ & $3.80\pm0.10$ & $3.80\pm0.06$ & $3.44\pm0.07$ & $0.79\pm0.02$ & $\mathbf{3.60\pm0.06}$ \\ \hspace{3mm}w/o SAC & $3.67\pm0.10$ & $4.01\pm0.11$ & $3.85\pm0.07$ & $2.83\pm0.09$ & $0.48\pm0.06$ & $3.49\pm0.07$ \\ \midrule GT & - & - & - & - & - & $4.32\pm0.09$ \\ \bottomrule \end{tabular} \vspace{-0.3cm} \end{table*} 
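The objective timbre-similarity metric described above can be sketched in pure Python as follows. In practice the embeddings come from the pre-trained SV model; here they are plain float lists and all names are our own:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def sv_similarity(synth_embs, target_embs):
    """Cosine similarity between each synthesized-speech SV embedding and
    the averaged SV embedding (centroid) of the corresponding target
    speaker."""
    n = len(target_embs)
    centroid = [sum(col) / n for col in zip(*target_embs)]
    return [cosine(e, centroid) for e in synth_embs]
```

Averaging the target speaker's embeddings before comparison reduces the influence of per-utterance variation on the reported similarity.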
\begin{figure}[th] \centering \includegraphics[width=0.9\linewidth]{figs/41_80.png} \caption{t-SNE plot for speaker GSE vectors \textit{(left column)} and genre GSE vectors \textit{(right column)}. Marker colors correspond to speaker IDs; marker shapes correspond to genre labels. The proposed model reaches the best clustering on both branches.} \label{fig:t-sne} \vspace{-0.5cm} \end{figure} \subsection{Ablation study} \label{ssec:expr:abl} We conduct three ablation experiments to reveal the functionality of each component of the proposed method. (i) We remove the global genre branch in the style model, leaving the timbre branch as the only global module. As shown in the subjective test results, there is a noticeable style similarity degradation in the transferred speech of the ablated model, which indicates the necessity of explicitly modeling the global genre in reading style transfer. In the meantime, the timbre similarity scores are observed to improve, since on the global scale the ablated model focuses on speaker timbre only. (ii) We replace the chunk-wise GSE extraction method with an ordinary utterance-wise approach. The style similarity evaluation results are degraded due to the shrunken receptive field of the global-scale modules. After visualizing the extracted GSE vectors via t-SNE, the genre GSE vectors of the fairy tales are found to be confused with those of the martial arts fiction (right column of the 2nd row in Figure~\ref{fig:t-sne}). On the other hand, since there is a unique GSE vector for each utterance, more information is conveyed to help the model converge better, which explains the slight improvement in audio quality. (iii) We employ vanilla adversarial classifiers to disentangle the GSE vectors of different branches, instead of the proposed switchable adversarial classifiers (SAC). As revealed in Table~\ref{tab:mos}, the speaker timbre similarity of the transferred speeches drops significantly. 
Furthermore, in the t-SNE plot of the extracted timbre GSE vectors, the samples of $<\mathrm{SPKER1\_fiction}>$ and $<\mathrm{SPKER1\_fairy\ tale}>$ end up in different clusters (left column of the 3rd row in Fig.~\ref{fig:t-sne}), which confirms that considerable genre information is still entangled in the timbre GSE vector. These results confirm that the proposed SAC is crucial for timbre and style disentanglement in disjoint dataset scenarios. \subsection{Automatic audiobook generation} Based on the proposed cross-speaker reading style transfer model, an automatic audiobook generation system can be constructed by incorporating a text analysis model which predicts the LPE and genre from the given book content. In our practice, the prediction model is implemented with RNN and linear layers, which takes the BERT \cite{devlin2018bert} token embedding and the Tacotron-2 phoneme embedding as its inputs, similar to existing methods \cite{hodari2021camp,xie2021multi}. According to the predicted genre label and the identity of the desired speaker, the GSE vector on each branch can be obtained by choosing the averaged GSE vector over the training data of the target genre/speaker. Together with the predicted LPE and text sequence, the speech of the target speaker reading the material in the predicted style is eventually generated. The inference results are presented on our demo site. \section{Introduction} \label{sec:intro} \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{figs/chunk_revised.pdf} \caption{Chunk-wise multi-speaker multi-scale reference encoder with 2 global branches} \label{fig:chunk} \end{figure*} The past decade has witnessed flourishing development in the area of neural text-to-speech (TTS) \cite{Wang2017,shen2018natural,DBLP:conf/nips/KongKB20}. 
Recently, expressive TTS has been receiving growing attention \cite{skerry2018towards,wang2018style,lee2019robust,Klimkov2019}, since style variations are important for synthesizing more natural speech. However, in the real world, high-quality expressive datasets typically contain only a small number of speakers due to recording expenses. To generate stylized speech for arbitrary speakers, it is necessary to transfer the given speech style to target speakers whose recorded audio data exclude the target style. Existing studies on cross-speaker style transfer basically follow the practice of extracting style information from given reference speeches, which is widely adopted in expressive TTS \cite{skerry2018towards,wang2018style}. The extracted style is subsequently organized into either a global style embedding \cite{bian2019multi,DBLP:conf/interspeech/WhitehillMMS20} or a fine-grained local style embedding sequence \cite{xie2021multi}, which are supposed to be adaptable to various speakers. Specifically, to distinguish the characteristics of the target style, utterance-level style labels including prosody class and emotion category are usually utilized. Part of the existing works on this topic focus on transferring various prosody classes (e.g., poetry reading, call-center) across different speakers: the multi-reference Tacotron \cite{bian2019multi} implements a method based on multiple Global Style Tokens (GST), where speaker timbre and prosody class are embedded into different global style dimensions and disentangled via an inter-cross training strategy; \cite{xie2021multi} attaches a fine-grained prosody module to the multi-speaker Tacotron-2 \cite{shen2018natural}, which extracts a local prosody embedding as an additional input of the decoder. Hence, prosody transfer is directly realized by switching the speaker embedding while keeping the local prosody representation. Other studies intend to transfer given speech emotions (e.g. 
happy, angry) to speakers with only neutral data: \cite{DBLP:conf/interspeech/WhitehillMMS20} employs multiple reference audios to provide speaker and emotion embeddings separately, which is achieved by training on $<$text, style-matched audio, GT audio$>$ and $<$text, random audio, random audio$>$ triplets with adversarial cycle consistency; \cite{li2021controllable} introduces an emotion disentangling module (EDM) for the disentanglement of speaker timbre and emotion style attributes. Recently proposed methods further insert conditional layer normalization into the TTS decoder for better adaptation performance, to which the target speaker ID and desired emotion labels are fed as input conditions \cite{liu2021meta,wu2021cross}. Compared with prosody-class-labeled or emotional corpora, the reading style of audiobook datasets is usually observed to emerge on both local and global scales. On the one hand, the rich local prosody variation in speech is one of the crucial elements that make the content of an audiobook more attractive to listeners and easier to follow (e.g., there are often prosody changes at the boundaries between character lines and narrator lines). On the other hand, the expressiveness of audiobook datasets is also characterized by the steady global genre, which must fit the topic of the original document (e.g., fairy tales are supposed to be read in an innocent tone with patience). Since the type of global genre is determined by the book topic, the same genre label is shared across the whole book and can be obtained directly from the book content, rather than being manually annotated sentence by sentence like emotion. However, the fixed document-level genre label rarely contains the particular style information of a specific utterance, which makes it difficult to directly apply existing utterance-label-based style models to audiobook datasets. 
The multi-scale characteristics of audiobook reading style also make it challenging to utilize the aforementioned single-scale cross-speaker style transfer methods. With these concerns, this paper adopts the multi-scale style modeling scheme in \cite{li21r_interspeech} and proposes a chunk-wise multi-speaker multi-scale reference encoder to model the speaker timbre and global genre of the audiobook on the global scale, while fine-grained prosody is modeled on the local scale. Specifically, the speaker timbre and genre are extracted from a chunk of consecutive utterances, since a larger view helps the model capture these factors. In order to disentangle reading style and timbre, a novel adversarial training strategy based on switchable adversarial classifiers is devised to provide style transfer ability among disjoint datasets, which is the common training scenario of cross-speaker reading style transfer in reality. \section{Methodology} \label{sec:meth} We construct an expressive TTS system with Tacotron-2 \cite{shen2018natural} as the backbone. The integrated multi-scale cross-speaker style model consists of two components: (i) a chunk-wise multi-speaker multi-scale reference encoder; (ii) switchable adversarial classifiers for speaker timbre and style disentanglement. \subsection{Multi-scale cross-speaker expressive TTS scheme} \label{ssec:meth:ms} The proposed model adopts the idea of extracting a global-scale style embedding (GSE) vector and a local-scale prosody embedding (LPE) sequence from the given reference speech with a multi-scale reference encoder \cite{li21r_interspeech}, which represent the global genre and local prosody of the audiobook reading style, respectively. 
The LPE sequence and the repeated GSE vector are then attached to the text embedding to stylize the generated speech. For cross-speaker style modeling, a multi-branch global style module is introduced to our reference encoder, which is able to extract two disentangled GSE vectors based on two branches. One of the GSE vectors is assigned to model speaker timbre, since timbre is commonly recognized as a coarse style attribute. The other is set to model the global genre of the audiobook. As a result, cross-speaker style transfer can be achieved by switching the speaker timbre GSE vector. \subsection{Chunk-wise multi-speaker multi-scale reference encoder} \label{ssec:meth:chunk} The workflow of our reference encoder is divided into two steps. The input mel-spectrograms of the reference speeches are first fed into six 1D-convolution layers, which share the same structure as those in \cite{li21r_interspeech}, except that Hardswish \cite{howard2019searching} is adopted as the activation function instead of ReLU for better performance. The produced frame-wise feature sequence of each utterance is then regularized into a phoneme-wise sequence by averaging the frames within the same phoneme, according to the prepared forced alignment results. Eventually, the regularized sequence is concatenated with preprocessed phoneme-wise acoustic features (logF0 and energy) and fed through a linear layer with Hardswish activation to obtain the final phoneme-wise intermediate feature sequence. During the second step, the intermediate sequence is shared as input by both the global- and local-scale modules: on the local scale, the sequence is sent through a GRU layer, followed by a linear layer and tanh activation, to get the pre-aligned LPE sequence. This output sequence is limited to a small number of channels (4 in our setting) by the linear layer to form an information bottleneck, which is typically adopted to tackle content or timbre leakage problems. 
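The frame-to-phoneme regularization step above can be sketched as follows, assuming the forced alignment yields a (start, end) frame span per phoneme (end exclusive); the function name and data layout are illustrative:

```python
def regularize_to_phonemes(frame_feats, alignment):
    """Average frame-wise features within each phoneme segment.

    frame_feats: list of per-frame feature vectors
    alignment:   list of (start_frame, end_frame) pairs, one per phoneme,
                 taken from the forced-alignment results
    """
    phoneme_feats = []
    for start, end in alignment:
        seg = frame_feats[start:end]
        n = len(seg)
        # Element-wise mean over the frames belonging to this phoneme
        phoneme_feats.append([sum(col) / n for col in zip(*seg)])
    return phoneme_feats
```

The output sequence has one vector per phoneme, matching the granularity of the phoneme-wise acoustic features (logF0 and energy) it is concatenated with.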
On the global scale, the style embedding is extracted on a chunk-wise basis. A chunk is defined as a short paragraph in the audiobook made up of several consecutive utterances. Compared with single-utterance style models, the chunk-wise model has access to a larger scope and generates a smoother representation of the global genre of the audiobook, since style fluctuations among different utterances are averaged out. Specifically, every branch shares the same network structure and workflow: each intermediate feature sequence is first compressed into a global vector by passing through a GRU layer and taking the final state as the output. All global vectors within the same chunk are then consecutively stacked together as a sequence and compressed into a vector in the same way by another GRU. The final state of this GRU is processed by a linear layer and tanh activation, whose output is the eventual GSE vector on the current branch and is shared across the whole chunk. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{figs/grl.pdf} \caption{Adversarial training with switchable classifiers} \label{fig:grl} \vspace{-0.5cm} \end{figure} \subsection{Cross-speaker training strategy} \label{ssec:meth:train} \subsubsection{Two-stage training process} The multi-scale cross-speaker style model is trained in a two-stage fashion to eliminate interference among different scales. During the first stage, the global-scale module is excluded. The model focuses on learning a speaker-agnostic local-scale prosody representation by sending the LPE sequences to an adversarial speaker classifier which consists of a gradient reverse layer and 2 linear layers. Meanwhile, similar to the setting in \cite{xie2021multi}, a temporary speaker embedding table is used to generate speaker embeddings that are attached to the text embedding to provide speaker timbre information for the TTS decoder. 
During the second stage, the temporary speaker embedding table is dropped; the 1D-convolution layers, the local-scale style module, and the text encoder of Tacotron-2 are frozen; the decoder of Tacotron-2 is reinitialized. The model is committed to extracting disentangled timbre/genre features by feeding each GSE vector through the corresponding speaker/genre label classifier and a switchable adversarial genre/speaker classifier. \subsubsection{Switchable adversarial classifiers} As shown in Figure~\ref{fig:grl}, different from the vanilla gradient-reverse-layer-based adversarial classifier, the proposed switchable adversarial classifier (SAC) introduces: (i) multiple underlying classifiers, each of which is made up of 2 linear layers; (ii) an additional switch box that chooses the proper underlying classifier for each input sample according to its corresponding label (speaker ID on the speaker branch, genre label on the genre branch). Consequently, there is a unique underlying classifier for each label of the branch; for samples with only one prediction target (e.g., on the speaker branch, speakers with only one type of genre), none of the classifiers is applied (denoted as \textit{NULL} in Figure~\ref{fig:grl}). The switchable adversarial classifiers are designed to accommodate the common scenario of the cross-speaker reading style transfer task: training on disjoint datasets. More specifically, cross-speaker reading style transfer is usually conducted among several audiobooks with different narrators and topics. This indicates that each speaker may cover one or more types of global genre, whereas each genre label may correspond to various numbers of speakers. Thus, it is more proper to assign a specific adversarial classifier to each label, rather than depending on the same classifier to handle the various data circumstances of all labels. 
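The switch-box routing of the SAC might be sketched as below. This is a schematic of the control flow only, ignoring the gradient reverse layers and the classifier networks themselves; the names are our own:

```python
def route_to_classifier(samples, labels_per_sample, multi_target_labels):
    """Switch-box routing of the SAC.

    Each sample is sent to the underlying adversarial classifier indexed by
    its own label (speaker ID on the speaker branch, genre label on the
    genre branch). Samples whose label has only a single possible
    prediction target are routed to no classifier at all (NULL).
    """
    routed = {}
    for sample, label in zip(samples, labels_per_sample):
        # None stands for the NULL branch: no adversarial loss is applied
        key = label if label in multi_target_labels else None
        routed.setdefault(key, []).append(sample)
    return routed
```

On the speaker branch, for instance, `multi_target_labels` would be the set of speakers whose training data cover more than one genre; all other speakers' samples fall through to NULL.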
The PDF file should comply with the following requirements: (a) there must be no PASSWORD protection on the PDF file at all; (b) all fonts must be embedded; and (c) the file must be text searchable (do CTRL-F and try to find a common word such as ``the''). The proceedings editors (Causal Productions) will contact authors of non-complying files to obtain a replacement. In order not to endanger the preparation of the proceedings, papers for which a replacement is not provided in a timely manner will be withdrawn. \section{Conclusions} \label{sec:conclusions} In this work, a chunk-wise multi-scale cross-speaker speech style model is introduced for cross-speaker reading style transfer task on audiobook dataset. Experiments on disjoint corpus indicate improvements on both speaker and style similarities compared to the baseline model. Ablation study reveals the necessity of explicitly modeling the global genre of audiobook on a chunk basis; and the switchable adversarial classifiers are verified to be effective in style embedding disentanglement. In the future, we intend to generalize the proposed method to larger-scale multi-speaker TTS corpus, and improve the speaker disentanglement on local scale prosody representation. \textbf{Acknowledgement} This work is supported by National Key R\&D Program of China (2020AAA0104500), National Natural Science Foundation of China (62076144) and Shenzhen Key Laboratory of next generation interactive media innovative technology (ZDSYS20210623092001004).
\section{Introduction} We consider the classical Lagrange function interpolation problem in the following ``discrete'' setting. Let $\tau_1, \ldots, \tau_\nu$ be a given set of mutually distinct interpolation nodes, and let $f_1, \ldots, f_\nu$ be a given set of ``basis'' functions. For any given function $g$ we seek a linear combination of the basis functions that interpolates $g$ at all interpolation nodes, \begin{displaymath} \sum_{j = 1}^{\nu} y_j f_j(\tau_i) = g(\tau_i), \quad i = 1, \ldots, \nu. \end{displaymath} The coefficients $y_j$ can be computed by solving the linear system \begin{displaymath} A y = g, \end{displaymath} where $A$ is the so-called \textit{collocation matrix\/} containing the values of the basis functions at the interpolation nodes \begin{displaymath} a_{ij} = f_j(\tau_i). \end{displaymath} If the basis functions are linearly independent on $\{ \tau_1, \ldots, \tau_{\nu} \}$, the matrix $A$ is nonsingular, and the interpolation problem has a unique solution for all functions $g$. The sensitivity of the solution is then determined by the condition number $\kappa_p(A)$ of the collocation matrix \begin{equation} \kappa_p(A) = \| A \|_p \| A^{-1} \|_p, \label{1.1} \end{equation} where $\| \ \|_p$ denotes the standard operator $p$-norm, with $1 \leq p \leq \infty$. The polynomial B-splines of a fixed degree $d$ are frequently used as the basis functions in practice. Such a basis is uniquely determined by a given multiset of knots that defines the local smoothness of the basis functions. The corresponding collocation matrix is always totally nonnegative (see~\cite{deBoor-2001,Schumaker-2007}), regardless of the choice of the interpolation nodes. A particular choice of nodes is of special interest, both in theory and in practice, for shape-preserving approximation. When the nodes are located at the so-called Greville sites, i.e., at the \textit{knot averages\/}, the interpolant has the variation diminishing property. 
Moreover, for low order B-spline interpolation, it can be shown that $\kappa_\infty(A)$ is bounded \textit{independently\/} of the knot sequence~\cite{deBoor-2001}. On these grounds, in 1975, Carl de Boor~\cite{deBoor-75} conjectured that the condition of interpolation by B-splines of degree $d$ at knot averages is bounded by a function that depends only on $d$, regardless of the knots themselves. In our terms, the conjecture says that $\kappa_\infty(A)$ or, equivalently, $\| A^{-1} \|_\infty$ is bounded by a function of $d$ only. The conjecture was disproved by Rong-Qing Jia~\cite{Jia-88} in 1988. He proved that, for geometric meshes, the condition number $\kappa_\infty(A)$ is not bounded independently of the knot sequence, for degrees $d \geq 19$. Therefore, it is natural to ask whether there exists any class of meshes for which de Boor's conjecture is valid. Since geometric meshes are highly nonuniform, the most likely candidates for the validity are uniform meshes. Here we discuss the problem of interpolation at knot averages by B-splines with \textit{equidistant simple\/} knots. The corresponding B-splines are symmetric on the support, and have the highest possible smoothness. It is easy to see that the condition of this interpolation does not depend on the knot spacing $h$, and we can take $h = 1$. So, just for simplicity, we shall consider only the cardinal B-splines, i.e., B-splines with simple knots placed at successive integers. It should be stressed that the only free parameters in this problem are the \textit{degree\/} $d$ of the B-splines and the \textit{size\/} $\nu$ of the interpolation problem. Our aim is to prove that the condition of $A$ can be bounded independently of its order. The corresponding collocation matrices $A$ are symmetric, positive definite, and most importantly, T\"{o}plitz. But, it is not easy to compute the elements of $A^{-1}$, or even reasonably sharp estimates of their magnitudes. 
So, the natural choice of norm in (\ref{1.1}) is the spectral norm \begin{equation} \kappa_2(A) = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}, \label{1.2} \end{equation} where $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$ denote the largest and the smallest singular value of $A$, respectively. For low spline degrees $d \leq 6$, the collocation matrices are also strictly diagonally dominant, and it is easy to bound $\kappa_2(A)$ by a constant. Consequently, de Boor's conjecture is valid for $d \leq 6$. For higher degrees, the condition number can be estimated by embedding the T\"{o}plitz matrix $A$ into circulant matrices of higher orders. The main advantage of this technique, developed by Davis~\cite{DavisP-94} and Arbenz~\cite{Arbenz-91}, lies in the fact that the eigenvalues of a circulant matrix are easily computable. The final bounds for $\kappa_2(A)$ are obtained by using the Cauchy interlace theorem for singular values (see~\cite{Horn-Johnson-91} for details), to bound both singular values in (\ref{1.2}). The paper is organized as follows. In Section 2 we briefly review some basic properties of cardinal B-splines. The proof of de Boor's conjecture for low degree ($d \leq 6$) cardinal B-splines is given in Section 3. In Section 4 we describe the embedding technique and derive the estimates for $\kappa_2(A)$. Despite all efforts, we are unable to prove de Boor's conjecture in this, quite probably the easiest, case. The final section contains the results of numerical testing that strongly support the validity of the conjecture, as well as some additional conjectures based on these test results. \section{Properties of cardinal splines} Let $x_i = x_0 + ih$, for $i = 0, \ldots, n$, be a sequence of simple uniformly spaced knots. This sequence determines a unique sequence of normalized B-splines $N_0^d, \ldots, N_{n - d - 1}^d$ of degree $d$, such that the spline $N_i^d$ is non-trivial only on the interval $\langle x_i, x_{i + d + 1} \rangle$. 
Each of these B-splines can be obtained, by translation and scaling, from the basic B-spline $Q^d$ with knots $i = 0, \ldots, d + 1$, \begin{equation} Q^{d}(x) = \frac{1}{(d + 1)!} \sum_{i = 0}^{d + 1} (-1)^i \atp{d + 1}{i} (x - i)_{+}^d. \label{2.1} \end{equation} Here, $(x - i)_{+}^d$ denotes the truncated power $(x - i)_{+}^d = (x - i)^d (x - i)_{+}^0$, for $d > 0$, while \begin{displaymath} (x - i)_{+}^0 = \begin{cases} 0, & \hbox{$x < i$,} \\ 1, & \hbox{$x \geq i$.} \end{cases} \end{displaymath} The normalized version of the basic spline is defined as \begin{equation} N^d(x) = (d + 1) Q^d(x). \label{2.2} \end{equation} From (\ref{2.1}) and (\ref{2.2}), we obtain the \textit{normalized\/} B-spline basis $\{N_i^d\}$: \begin{displaymath} N_i^d(x) = N_{}^d \left( \frac{x - x_i}{h} \right). \end{displaymath} If the interpolation nodes $\tau_i$ are located at the knot averages, i.e., \begin{equation} x_i^{\ast} = \frac{x_{i + 1} + \cdots + x_{i + d}}{d} = x_i + h \frac{d + 1}{2}, \quad i = 0, \ldots, n - d - 1, \label{2.3} \end{equation} then \begin{displaymath} x_i^{} < x_i^{\ast} < x_{i + d + 1}^{}, \end{displaymath} and the Sch\"{o}nberg--Whitney theorem~\cite{deBoor-2001} guarantees that the collocation matrix is nonsingular. Moreover, this matrix is totally nonnegative~\cite{Karlin-68}, i.e., all of its minors are nonnegative. Due to the symmetry of B-splines on uniform meshes, the collocation matrices are also symmetric and T\"{o}plitz. So, we can conclude that the cardinal B-spline collocation matrix is T\"{o}plitz, symmetric and positive definite. It is easy to show that the elements of the collocation matrix do not depend on the step-size $h$ of the uniform mesh, so we take the simplest one with $h = 1$ and $x_i = i$. Such B-splines are called cardinal. The interpolation nodes (\ref{2.3}) are integers for odd degrees, while for even degrees, the interpolation nodes are in the middle of the two neighbouring knots of the cardinal B-spline. 
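As a quick numerical illustration of (\ref{2.1})--(\ref{2.2}), the normalized cardinal B-spline $N^d$ can be evaluated directly from the truncated-power form. The following Python sketch (the function name \texttt{n\_spline} is ours, not from the paper) checks two values of the cubic spline and the partition of unity:

```python
from math import comb, factorial

def n_spline(d, x):
    """Normalized cardinal B-spline N^d(x) of degree d, knots 0, ..., d+1,
    evaluated via the truncated-power formula (2.1)-(2.2)."""
    def trunc(u):
        # (u)_+^d; note 0.0**0 == 1.0 in Python, matching (x - i)_+^0 at x = i
        return u ** d if u >= 0 else 0.0
    q = sum((-1) ** i * comb(d + 1, i) * trunc(x - i)
            for i in range(d + 2)) / factorial(d + 1)
    return (d + 1) * q

# centre value t_0^3 = 2/3 and neighbour value 1/6 of the cubic spline
assert abs(n_spline(3, 2.0) - 2 / 3) < 1e-12
assert abs(n_spline(3, 1.0) - 1 / 6) < 1e-12
# partition of unity: the integer translates sum to one
assert abs(sum(n_spline(3, 2.3 - i) for i in range(-1, 3)) - 1.0) < 1e-12
```

The truncated-power form is numerically fragile for larger $d$; the recurrence of the next subsection is the preferred evaluation route.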
\begin{figure}[hbt] \begin{center} \includegraphics[width=5.75cm]{BS3B7.eps} \qquad \includegraphics[width=5.75cm]{BS4B8.eps} \end{center} \caption{The first four cardinal splines $N_i^d$ of degree $d = 3$ and $4$, respectively. Big black dots denote the spline values at the knot averages.} \label{fig2.1} \end{figure} The normalized basic cardinal spline $N^d$ suffices to determine all basis function values at the interpolation nodes \begin{displaymath} N_i^d (x_j^{\ast}) = N^d(x_{j-i}^{\ast}). \end{displaymath} The general de Boor--Cox recurrence relation~\cite{deBoor-2001}, written in terms of the degree of the spline, is \begin{equation} N^d(x) = \frac{x N^{d - 1}(x) + (d + 1 - x) N^{d - 1}(x - 1)}{d}. \label{2.4} \end{equation} Note that the elements of a collocation matrix are rational, because the interpolation nodes are rational, and the de Boor--Cox recurrence formula (\ref{2.4}) involves only basic arithmetic operations on rational coefficients. These elements are therefore exactly computable in (arbitrarily precise) rational arithmetic. \section{Low degree cardinal B-splines} Let $t_i^d = N^d(x_i^{\ast})$ denote the values of cardinal B-splines at knot averages (see (\ref{2.3})). Then, the cardinal B-spline collocation matrix $A$ with interpolation nodes $x_i^{\ast}$ is a banded T\"{o}plitz matrix of order $n - d$, to be denoted by \begin{equation} T_n^d = \begin{bmatrix} t_0^d & \cdots & t_r^d & & 0 \\ \vdots & \ddots & & \ddots & \\ t_r^d & & \ddots & & t_r^d \\ & \ddots & & \ddots & \vdots \\ 0 & & t_r^d & \cdots & t_0^d \end{bmatrix}, \quad r = \left\lfloor \frac{d}{2} \right\rfloor. \label{3.1} \end{equation} The matrix $T_n^d$ is represented by its first row, usually called the symbol, \begin{displaymath} t = (t_0^d, \ldots, t_r^d, 0, \ldots, 0), \quad t \in \mathbb{R}^{n-d}. \end{displaymath} It is useful to note that each B-spline of degree $d > 0$ is a unimodal function, i.e., it has only one local maximum on the support. 
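The remark on exact rational computability can be made concrete: with Python's \texttt{fractions} module, the recurrence (\ref{2.4}) yields the symbol entries $t_j^d$ exactly. A minimal sketch (the helper names are ours):

```python
from fractions import Fraction

def bspline(d, x):
    """Exact N^d(x) for rational x, via the de Boor-Cox recurrence (2.4)."""
    if d == 0:
        return Fraction(int(0 <= x < 1))
    return (x * bspline(d - 1, x) + (d + 1 - x) * bspline(d - 1, x - 1)) / d

def symbol(d):
    """Symbol entries t_j^d = N^d(x_j^*), j = 0, ..., floor(d/2)."""
    centre = Fraction(d + 1, 2)          # x_0^* of the basic spline
    return [bspline(d, centre + j) for j in range(d // 2 + 1)]

print(symbol(3))   # [Fraction(2, 3), Fraction(1, 6)]
```

The cost grows exponentially with $d$ in this naive form, but that is irrelevant for the moderate degrees considered here.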
In the case of cardinal B-splines, we have already concluded that the splines are symmetric, and therefore the maximum value of $N^d$ is attained at the middle of the support, for $x = (d + 1) /2$. The maximum value is \begin{displaymath} N^d\left( \frac{d + 1}{2} \right) = N^d(x_0^{\ast}) = t_0^d. \end{displaymath} Furthermore, unimodality implies that the values of the spline $N^d$ are decreasing on the interval $[ (d + 1) /2, d + 1 ]$, so \begin{equation} t_0^d > t_1^d > \cdots > t_r^d. \label{3.2} \end{equation} To estimate the condition number of a cardinal B-spline collocation matrix, we need to bound both the minimal and the maximal singular value of $T_n^d$. For a symmetric and positive definite matrix, the singular values coincide with the eigenvalues. Therefore, we seek bounds for the eigenvalues of $T_n^d$. From the Ger\v{s}gorin bound for the eigenvalues, and the partition of unity of the B-spline basis, we obtain an upper bound for $\lambda_{\max}(T_n^d)$ \begin{equation} \lambda_{\max}(T_n^d) \leq t_0^d + 2 (t_1^d + \cdots + t_r^d) = 1. \label{3.3} \end{equation} Similarly, we also obtain a lower bound for $\lambda_{\min}(T_n^d)$, \begin{equation} \lambda_{\min}(T_n^d) \geq t_0^d - 2 (t_1^d + \cdots + t_r^d) = 2t_0^d - 1, \label{3.4} \end{equation} which is sensible only if $T_n^d$ is strictly diagonally dominant. Strict diagonal dominance is achieved only for B-spline degrees $d = 1, \ldots, 6$ (easily verifiable by a computer). The corresponding Ger\v{s}gorin bounds are presented in Table \ref{tbl3.1}. This directly proves de Boor's conjecture for low order B-splines. 
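The computer verification mentioned above amounts to checking the sign of the bound (\ref{3.4}) exactly. A sketch in exact rational arithmetic, assuming only the recurrence (\ref{2.4}) (the helper names are ours):

```python
from fractions import Fraction

def bspline(d, x):
    # de Boor-Cox recurrence (2.4), exact rational arithmetic
    if d == 0:
        return Fraction(int(0 <= x < 1))
    return (x * bspline(d - 1, x) + (d + 1 - x) * bspline(d - 1, x - 1)) / d

def gershgorin_lower_bound(d):
    """The bound (3.4): 2 t_0^d - 1, positive exactly when T_n^d is
    strictly diagonally dominant."""
    t0 = bspline(d, Fraction(d + 1, 2))
    return 2 * t0 - 1

for d in range(1, 8):
    lb = gershgorin_lower_bound(d)
    print(d, lb, lb > 0)       # positive for d = 1, ..., 6; negative for d = 7
```

The reciprocals of these exact bounds are the values $\textrm{GB}(d)$ in Table \ref{tbl3.1}.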
\begin{table}[hbt] \begin{center} \newcolumntype{I}{!{\vrule width 0.6pt}} \newlength\savedwidth \newcommand\whline{\noalign{\global\savedwidth\arrayrulewidth \global\arrayrulewidth 0.6pt}\hline \noalign{\global\arrayrulewidth\savedwidth}} \renewcommand{\arraystretch}{1.25} \begin{tabular}{cIccccc} \whline $n\backslash d$ & 2 & 3 & 4 & 5 & 6 \\ \whline 64 & 1.998136 & 2.994873 & 4.785918 & 7.466648 & 11.727897 \\ 128 & 1.999541 & 2.998757 & 4.796641 & 7.492176 & 11.785901 \\ 256 & 1.999886 & 2.999694 & 4.799180 & 7.498105 & 11.799106 \\ 512 & 1.999971 & 2.999924 & 4.799797 & 7.499534 & 11.802256 \\ 1024 & 1.999993 & 2.999981 & 4.799950 & 7.499884 & 11.803026 \\ 2048 & 1.999998 & 2.999995 & 4.799987 & 7.499971 & 11.803216 \\ \whline $\textrm{GB}(d)$ & 2 & 3 & $\frac{96}{19} \approx 5.052632$ & 10 & $\frac{5760}{127} \approx 45.354331$ \\ \whline \end{tabular} \end{center} \caption{Comparison of the actual condition numbers $\kappa_2(T_n^d)$, for $d = 2, \ldots, 6$, $n = 64, \ldots, 2048$, and the bounds $\textrm{GB}(d)$ for $\kappa_2(T_n^d)$, obtained by the Ger\v{s}gorin circle theorem.} \label{tbl3.1} \end{table} Note that in the case of tridiagonal T\"{o}plitz matrices, i.e.~for $d = 2, 3$, and, thus, $r = 1$ in (\ref{3.1}), the exact eigenvalues are also known (see B\"{o}ttcher--Grudsky~\cite{Boettcher-Grudsky-2005}) \begin{displaymath} \lambda_k (T_n^d) = t_0^d + 2 t_1^d \cos \frac{\pi k}{n - d + 1}, \quad d = 2, 3, \quad k = 1, \ldots, n - d. \end{displaymath} The largest and the smallest eigenvalue can then be uniformly bounded by \begin{align*} \lambda_{\max} (T_n^d) & = t_0^d + 2 t_1^d \cos \frac{\pi}{n - d + 1} < t_0^d + 2 t_1^d, \\ \lambda_{\min} (T_n^d) & = t_0^d - 2 t_1^d \cos \frac{\pi}{n - d + 1} > t_0^d - 2 t_1^d > 0. \end{align*} These uniform bounds are somewhat better than those obtained by the Ger\v{s}gorin circles. 
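For the tridiagonal cases the closed-form eigenvalues are easy to cross-check numerically. A sketch (names ours) comparing the formula above with a dense symmetric eigensolver:

```python
import numpy as np

# symbols (t_0^d, t_1^d) of the tridiagonal cases d = 2, 3
SYMBOLS = {2: (3 / 4, 1 / 8), 3: (2 / 3, 1 / 6)}

def tridiagonal_eigs(d, n):
    """Closed-form eigenvalues of the tridiagonal Toeplitz matrix T_n^d
    of order n - d."""
    t0, t1 = SYMBOLS[d]
    k = np.arange(1, n - d + 1)
    return t0 + 2 * t1 * np.cos(np.pi * k / (n - d + 1))

d, n = 3, 20
t0, t1 = SYMBOLS[d]
T = (t0 * np.eye(n - d)
     + t1 * np.eye(n - d, k=1)
     + t1 * np.eye(n - d, k=-1))
assert np.allclose(np.sort(tridiagonal_eigs(d, n)), np.linalg.eigvalsh(T))
print(tridiagonal_eigs(d, n).min())   # approaches t0 - 2 t1 = 1/3 as n grows
```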
\section{Embeddings of T\"{o}plitz matrices into circulants} When the degree of a cardinal B-spline is at least $7$, the eigenvalue bounds for T\"{o}plitz matrices can be computed by circulant embeddings. First, we will introduce the smallest possible circulant embedding, and give its properties. Then we will present some other known embeddings, with positive semidefinite circulants. To obtain a bound for $\lambda_{\min}(T_n^d)$, the collocation matrix $T_n^d$ is to be embedded into a circulant \begin{equation} C_m^d = \left[ \begin{array}{ccccccc:ccc} \! & & & & & & & t_r^d & \cdots & t_1^d \! \\ \! & & & & & & & & \ddots & \vdots \! \\ \! & & & & & & & & & t_r^d \! \\ \! & & & \smash{T_n^d} & & & & & & \! \\ \! & & & & & & & t_r^d & & \! \\ \! & & & & & & & \vdots & \ddots & \! \\ \! & & & & & & & t_1^d & & t_r^d \! \\ \hdashline \! t_r^d & & & & t_r^d & \cdots & t_1^d & t_0^d & \ddots & \vdots \! \\ \! \vdots & \ddots & & & & \ddots & & \ddots & \ddots & t_1^d \! \\ \! t_1^d & \cdots & t_r^d & & & & t_r^d & \cdots & t_1^d & t_0^d \! \end{array} \right], \quad m = n - d + r. \label{4.1} \end{equation} It is obviously a T\"{o}plitz matrix with the following symbol \begin{displaymath} t = (t_0^d, \ldots, t_r^d, 0, \ldots, 0, t_r^d, \ldots, t_1^d), \quad t \in \mathbb{R}^{n - d + r}. \end{displaymath} This circulant $C_m^d$ is called a periodization of $T_n^d$ by B\"{o}ttcher and Grudsky~\cite{Boettcher-Grudsky-2005}. The bounds (\ref{3.3})--(\ref{3.4}) for the eigenvalues of $T_n^d$ are also valid for $C_m^d$. Moreover, $C_m^d$ is doubly stochastic, always having $\lambda_{\max}(C_m^d) = 1$ as its largest eigenvalue. Interestingly enough, the upper bound (\ref{3.3}) is attained here (the Ger\v{s}gorin bounds are rarely so sharp). The symmetry of $T_n^d$ immediately implies the symmetry of $C_m^d$, and we can conclude that the eigenvalues of $C_m^d$ are real, but not necessarily positive. 
For symmetric matrices, the singular values are, up to a sign, equal to the eigenvalues,~so \begin{equation} \sigma_i(C_m^d) = |\lambda_i(C_m^d)|. \label{4.2} \end{equation} If the eigenvalues of the circulant $C_m^d$ are known, the spectrum of the embedded $T_n^d$ can be bounded by the Cauchy interlace theorem for singular values, applied to $C_m^d$. \begin{thm}[Cauchy interlace theorem] Let $C \in \mathbb{C}^{m \times n}$ be given, and let $C_\ell$ denote a submatrix of $C$ obtained by deleting a total of $\ell$ rows and/or columns of $C$. Then \begin{displaymath} \sigma_k(C) \geq \sigma_k(C_\ell) \geq \sigma_{k+\ell}(C), \quad k = 1, \ldots, \min \{ m, n \}, \end{displaymath} where we set $\sigma_j(C) \equiv 0$ if $j > \min \{m, n \}$. \end{thm} The proof can be found, for example, in~\cite[page 149]{Horn-Johnson-91}. If we delete the last $r$ rows and columns of $C_m^d$, we obtain $T_n^d$. The Cauchy interlace theorem will then give useful bounds for $\sigma_{\min}(T_n^d) = \lambda_{\min}(T_n^d)$, provided that $C_m^d$ is nonsingular. Moreover, if we delete more than the last $r$ rows and columns of $C_m^d$, we obtain bounds for T\"{o}plitz matrices $T_k^d$, of order $k - d$, for $k \leq n$, \begin{equation} \kappa_2(T_k^d) = \frac{\sigma_{\max}(T_k^d)}{\sigma_{\min}(T_k^d)} \leq \frac{\sigma_{\max}(C_m^d)}{\sigma_{\min}(C_m^d)} = \frac{1}{\min_j|\lambda_j(C_m^d)|}. \label{4.3} \end{equation} Now we need to calculate the smallest singular value of $C_m^d$, and show that it is non-zero. The eigendecomposition of a circulant matrix is well-known (see~\cite{DavisP-94,Arbenz-91}). A circulant $C$ of order $m$, defined by the symbol $(c_0, \ldots, c_{m - 1})$, can be written as \begin{displaymath} C = \sum_{j = 0}^{m - 1} c_j \Pi^j, \end{displaymath} where \begin{displaymath} \Pi = \begin{bmatrix} \; 0 \;\; & 1 & & \\[3pt] & \smash{\ddots} & \smash{\ddots} & \\ & & \smash{\ddots} & \;\; 1 \;\; \\ \; 1 \;\; & & & \;\; 0 \;\; \\ \end{bmatrix}. 
\end{displaymath} \vspace*{3pt} \noindent The spectral decomposition of $\Pi$ is $\Pi = F \Omega F^{\ast}$, where \begin{displaymath} \Omega = \diag (1, \omega, \omega^2, \ldots, \omega^{m - 1}), \quad \omega = e^{2 \pi i / m}, \quad i = \sqrt{-1}, \end{displaymath} while \begin{displaymath} F_{j, k} = \frac{1}{\sqrt{m}} \omega^{kj}, \quad 0 \leq k, j \leq m - 1. \end{displaymath} Hence, $C$ can be decomposed as \begin{displaymath} C = F \Lambda F^{\ast}, \quad \Lambda = \diag(\lambda_0, \ldots, \lambda_{m - 1}) = \sum_{j = 0}^{m - 1} c_j \Omega^j. \end{displaymath} The eigenvalues of a real symmetric circulant $C$ are real, and given by \begin{equation} \lambda_k(C) = c_0 + \sum_{j = 1}^{m - 1} c_j \cos \frac{2\pi k j}{m}, \quad k = 0, \ldots, m - 1. \label{4.4} \end{equation} They can also be viewed as the discrete Fourier transform (DFT) of the symbol $(c_0, \ldots, c_{m - 1})$. For real and symmetric $C$, i.e., when $c_k = c_{m - k}$, for $k = 1, \ldots, m - 1$, from (\ref{4.4}) it also follows that \begin{displaymath} \lambda_k(C) = c_0 + \sum_{j = 1}^{m - 1} c_j \cos \frac{2\pi k j}{m} = c_0 + \sum_{j = 1}^{m - 1} c_{m - j} \cos \frac{2\pi k (m - j)}{m} = \lambda_{m - k}(C). \end{displaymath} So, all the eigenvalues, except $\lambda_0(C)$, and possibly $\lambda_{\frac{m}{2}}(C)$, for even $m$, are double. Therefore, the eigenvalues of the circulant $C_m^d$ from (\ref{4.1}) are \begin{equation} \lambda_k(C_m^d) = t_0^d + 2 \sum_{j = 1}^{r} t_j^d \cos \frac{2\pi k j}{m}, \quad k = 0, \ldots, m - 1. \label{4.6} \end{equation} For prime orders $m$, the nonsingularity of $C_m^d$ is a consequence of the following theorem from~\cite{Geller-Kra-Popescu-Simanca-2004}. \begin{thm}[Geller, Kra, Popescu and Simanca] Let $m$ be a prime number. Assume that the circulant $C$ of order $m$ has entries in $\mathbb{Q}$. Then $\det C = 0$ if and only if \begin{displaymath} \lambda_0 = \sum_{j = 0}^{m - 1} c_j = 0, \end{displaymath} or all the symbol entries $c_j$ are equal. 
\label{thmGKPS} \end{thm} If $m$ is prime, then we must have $\det C_m^d \neq 0$, since (\ref{3.2}) implies that the symbol entries are not all equal, and from (\ref{4.6}) we get \begin{displaymath} \lambda_0 = t_0^d + 2 \sum_{j = 1}^r t_j^d = 1 \neq 0. \end{displaymath} Theorem \ref{thmGKPS} suggests how to obtain a nonsingular embedding of $T_n^d$. First, $T_n^d$ should be embedded into the T\"{o}plitz matrix $T_p^d$, of order $p - d$, where $p \geq n$ is chosen so that $m = p - d + r$ is a prime number. Then, $T_p^d$ is embedded into the circulant $C_m^d$. The other possibility is to embed $T_n^d$ into the smallest circulant matrix $C_m^d$, as in (\ref{4.1}), and calculate its eigenvalues from (\ref{4.6}), in the hope that $C_m^d$ is nonsingular. In this case, extensive numerical testing suggests that $C_m^d$ is always positive definite, but we have not been able to prove it. There are also several other possible embeddings that guarantee the positive semidefiniteness of the circulant matrix $C$. The first one, constructed by Dembo, Mallows and Shepp in~\cite{Dembo-Mallows-Shepp-89}, ensures that the positive definite T\"{o}plitz matrix $T$, of order $n$, can be embedded in the positive semidefinite circulant $C$, of order $m$, where \begin{equation} m \geq 2 \left(n + \kappa_2(T) \frac{n^2}{\sqrt{6}} \right). \label{4.7} \end{equation} A few years later, Newsam and Dietrich~\cite{Newsam-Dietrich-94} reduced the size of the embedding to \begin{equation} m \geq 2 \sqrt{6n^2 + \kappa_2(T) \frac{3 \cdot 2^{11/2} \, n^{5/2}}{5^{5/2}}}. \label{4.8} \end{equation} Note that among all positive semidefinite matrices $C$ of order greater than or equal to $m$, we can choose one of prime order. This embedding will be positive definite according to Theorem \ref{thmGKPS}. It is obvious that the sizes of the embeddings (\ref{4.7})--(\ref{4.8}) are bounded by a function of the condition number of $T$, i.e., by the very quantity we are trying to bound. 
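The prime-order embedding strategy is easy to exercise numerically. The sketch below (helper names ours) builds the periodization $C_m^d$ for $d = 7$ and prime $m = 19$, evaluates its spectrum by formula (\ref{4.6}), and cross-checks against a dense eigensolver:

```python
import numpy as np

def bspline(d, x):
    # de Boor-Cox recurrence (2.4), floating point
    if d == 0:
        return 1.0 if 0 <= x < 1 else 0.0
    return (x * bspline(d - 1, x) + (d + 1 - x) * bspline(d - 1, x - 1)) / d

def circulant_eigs(d, m):
    """Eigenvalues of the periodization C_m^d via formula (4.6)."""
    r = d // 2
    t = [bspline(d, (d + 1) / 2 + j) for j in range(r + 1)]
    k = np.arange(m)
    return t[0] + 2 * sum(t[j] * np.cos(2 * np.pi * k * j / m)
                          for j in range(1, r + 1))

d, m = 7, 19                # m prime, so C_m^d is nonsingular by the theorem
r = d // 2
t = [bspline(d, (d + 1) / 2 + j) for j in range(r + 1)]
c = np.zeros(m)             # symbol (t_0, ..., t_r, 0, ..., 0, t_r, ..., t_1)
c[0] = t[0]
for j in range(1, r + 1):
    c[j] = c[m - j] = t[j]
C = np.array([[c[(k - i) % m] for k in range(m)] for i in range(m)])
lam = circulant_eigs(d, m)
assert np.allclose(np.sort(lam), np.linalg.eigvalsh(C))
assert lam.min() > 0        # positive definite in this instance
```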
Ferreira in~\cite{FerreiraP-94} embeds a T\"{o}plitz matrix $T$ of order $n$, defined by the symbol $t = (t_0, \ldots, t_r, 0, \ldots, 0) \in \mathbb{R}^n$, into the circulant $C$ of order $m = 2n$, \begin{equation} C = \begin{bmatrix} T & S \\ S & T \end{bmatrix}, \label{4.9} \end{equation} where the symbol of the T\"{o}plitz matrix $S$ is $s = (0, \ldots, 0, t_r, \ldots, t_1) \in \mathbb{R}^n$. If we take $T = T_n^d$ from (\ref{3.1}), the only difference between embeddings (\ref{4.1}) and (\ref{4.9}) is in exactly $n - d - r$ zero diagonals, added as the first diagonals of $S$. A sufficient condition for positive semidefiniteness of $C$ is given by the next result. \begin{thm}[Ferreira] Let $C$ be defined as in (\ref{4.9}), and let $b^T = [t_0, \ldots, t_{n - 1}]$, $c^T = [t_{n - 1}, \ldots, t_1]$. If $T$ is positive definite, and $|b^T T^{-1} c| < 1$, then $C$ is positive semidefinite. \end{thm} Once again, there is no obvious efficient way to verify whether the condition $|b^T T^{-1} c| < 1$ is fulfilled or not. \section{Conjecture about the minimal eigenvalues} Extensive numerical testing has been conducted using \textit{Mathematica\/} 7 from Wolfram Research for symbolic, arbitrary-precision rational, and machine-precision floating-point computations. Whenever feasible, the full accuracy was maintained. These elegant and accurate results provided insight into the spectral properties of the collocation matrices and the corresponding periodizations, and led to the following conjecture. 
\begin{con}[The smallest eigenvalue of a circulant] The circulant $C_m^d$ from (\ref{4.1}) is always positive definite, and the index $\mu$ of its smallest eigenvalue $\lambda_\mu(C_m^d)$ is always the integer nearest to $m / 2$, i.e., \begin{equation} \lambda_{\mu}(C_m^d) = \begin{cases} \displaystyle\lambda_{\frac{m \pm 1}{2}}(C_m^d) = t_0^d + 2 \sum_{j=1}^r(-1)^j t_j^d \cos\left(\frac{\pi j}{m}\right), & \hbox{$m$ odd,} \\ \displaystyle\lambda_{\frac{m}{2}}(C_m^d) = t_0^d + 2 \sum_{j=1}^r(-1)^j t_j^d, & \hbox{$m$ even.} \end{cases} \label{5.1} \end{equation} \label{con5.1} \end{con} Figure~\ref{fig5.1} illustrates both cases of Conjecture~\ref{con5.1}. \begin{figure}[hbt] \begin{center} \includegraphics[width=5.75cm]{CS7-23.eps} \qquad \includegraphics[width=5.75cm]{CS7-24.eps} \end{center} \caption{The eigenvalues (black dots) $\lambda_k(C_m^d)$ for the spline of degree $d = 7$ with $n = 23, 24$, respectively. The associated circulants have order $m = n - 7 + 3$, i.e., $19$ and $20$. Note that for $m = 20$ there is only one minimal eigenvalue, while for $m = 19$ we have two minimal eigenvalues.} \label{fig5.1} \end{figure} For even $m$, $\lambda_{\mu}(C_m^d)$ (and, therefore, $\kappa_2(C_m^d)$) depends solely on $d$, i.e., the order $m$ of a circulant is irrelevant here. Moreover, for $m$ odd and even alike, the limiting value of $\lambda_{\mu}(C_m^d)$ is the same: \begin{equation} \lambda_{\infty}^d := \lim_{m\to\infty} \lambda_{\mu}(C_m^d) = t_0^d + 2 \sum_{j=1}^r (-1)^j t_j^d. \label{5.2} \end{equation} Hence, the notation $\lambda_{\infty}^d$ is justified, since that value is determined uniquely by the degree $d$ of the chosen cardinal splines. This is consistent with de~Boor's conjecture. The equations (\ref{5.1}) and (\ref{5.2}) provide us with efficiently and exactly computable estimates of the spectral condition numbers of large collocation matrices $T_n^d$. 
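Conjecture~\ref{con5.1} is cheap to probe numerically, since (\ref{4.6}) gives the whole spectrum directly. A sketch (names ours) locating the minimiser and checking the even-$m$ closed form:

```python
import numpy as np

def bspline(d, x):
    # de Boor-Cox recurrence (2.4), floating point
    if d == 0:
        return 1.0 if 0 <= x < 1 else 0.0
    return (x * bspline(d - 1, x) + (d + 1 - x) * bspline(d - 1, x - 1)) / d

def circulant_spectrum(d, m):
    """Symbol and eigenvalues lambda_k(C_m^d), k = 0, ..., m - 1, via (4.6)."""
    r = d // 2
    t = [bspline(d, (d + 1) / 2 + j) for j in range(r + 1)]
    k = np.arange(m)
    lam = t[0] + 2 * sum(t[j] * np.cos(2 * np.pi * k * j / m)
                         for j in range(1, r + 1))
    return t, lam

for d in (7, 9):
    for m in (19, 20, 40):
        t, lam = circulant_spectrum(d, m)
        mu = int(np.argmin(lam))
        assert mu in (m // 2, (m + 1) // 2)     # minimiser nearest to m / 2
        if m % 2 == 0:                          # closed form for even m
            closed = t[0] + 2 * sum((-1) ** j * t[j]
                                    for j in range(1, len(t)))
            assert abs(lam[mu] - closed) < 1e-12
```

For odd $m$ the two symmetric minimisers agree only up to rounding, which is why the assertion accepts either index.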
As demonstrated in Figure~\ref{fig5.2} and Table~\ref{tbl5.1}, the smallest eigenvalues of the collocation matrices converge rapidly and monotonically to the smallest eigenvalues of the corresponding circulant periodizations $C_m^d$, as well as to the limiting value~(\ref{5.2}). \begin{figure}[hbt] \begin{center} \includegraphics[width=8cm]{k9tc.eps} \end{center} \caption{Spectral condition numbers of T\"{o}plitz matrices $T_n^9$ (lower, brighter line), and the circulant periodizations $C_m^9$ (solid black line). The constant function denotes $1/\lambda_{\infty}^9$.} \label{fig5.2} \end{figure} It is worth noting that the spectral bounds obtained in such a way for lower degrees ($d = 2, \ldots, 6$) of cardinal B-splines are considerably sharper than those established by the Ger\v{s}gorin circle theorem (cf.~Table~\ref{tbl3.1} and Table~\ref{tbl5.1}), at no additional cost. \begin{table}[hbt] \begin{center} \newcolumntype{I}{!{\vrule width 0.6pt}} \newcommand\whline{\noalign{\global\savedwidth\arrayrulewidth \global\arrayrulewidth 0.6pt}\hline \noalign{\global\arrayrulewidth\savedwidth}} \renewcommand{\arraystretch}{1.25} \begin{tabular}{cIccIccIcc} \whline $n\backslash d$ & $T_n^2$ & $C_m^2$ & $T_n^5$ & $C_m^5$ & $T_n^6$ & $C_m^6$ \\ \whline 64 & 1.998137 & 1.998758 & 7.466648 & 7.472749 & 11.72790 & 11.74214 \\ 128 & 1.999541 & 1.999694 & 7.492176 & 7.493492 & 11.78590 & 11.78866 \\ 256 & 1.999886 & 1.999924 & 7.498105 & 7.498410 & 11.79911 & 11.79971 \\ 512 & 1.999971 & 1.999981 & 7.499534 & 7.499607 & 11.80226 & 11.80240 \\ 1024 & 1.999993 & 1.999995 & 7.499884 & 7.499902 & 11.80303 & 11.80306 \\ 2048 & 1.999998 & 1.999999 & 7.499971 & 7.499975 & 11.80322 & 11.80322 \\ $1/\lambda_{\infty}^d$ & & 2.000000 & & 7.500000 & & 11.80328 \\ \whline \end{tabular} \begin{tabular}{cIccIccIcc} \whline $n\backslash d$ & $T_n^9$ & $C_m^9$ & $T_n^{21}$ & $C_m^{21}$ & $T_n^{30}$ & $C_m^{30}$ \\ \whline 64 & 45.04067 & 45.17179 & \hphantom{0}9012.21 & \hphantom{0}9543.49 & 371000.6 & 
502472.1 \\ 128 & 45.57648 & 45.59721 & 10100.96 & 10150.47 & 569223.5 & 579852.3 \\ 256 & 45.69092 & 45.69486 & 10273.67 & 10279.58 & 594976.6 & 596037.0 \\ 512 & 45.71737 & 45.71822 & 10308.14 & 10309.00 & 599497.1 & 599628.0 \\ 1024 & 45.72373 & 45.72393 & 10315.86 & 10316.01 & 600450.4 & 600469.5 \\ 2048 & 45.72529 & 45.72534 & 10317.69 & 10317.72 & 600669.7 & 600673.0 \\ $1/\lambda_{\infty}^d$ & & 45.72581 & & 10318.28 & & 600739.5 \\ \whline \end{tabular} \end{center} \caption{Comparison of the spectral condition numbers $\kappa_2(T_n^d)$ and $\kappa_2(C_m^d)$, for $d = 2, 5, 6, 9, 21, 30$, $n = 64, \ldots, 2048$, $m = n - d + r$, and $1/\lambda_{\infty}^d$.} \label{tbl5.1} \end{table} Since $t_j^d$ are rational numbers,~(\ref{5.2}) is useful for the exact computation of $\lambda_{\infty}^d$. But, in floating-point arithmetic, the direct computation of $\lambda_{\infty}^d$ from~(\ref{5.2}) is numerically unstable, as it certainly leads to severe cancellation. It can be easily shown from~(\ref{2.1}) or~(\ref{2.4}) that the smallest non-zero value of the cardinal B-spline of degree $d$ at an interpolation node is: \begin{displaymath} t_r^d = \begin{cases} N^d(1) = \frac{1}{d!}, & \hbox{for odd\ $d$,}\\[6pt] N^d\left(\frac{1}{2}\right) = \frac{1}{2^d \cdot d!}, & \hbox{for even $d$.} \end{cases} \end{displaymath} Moreover, all other values $t_j^d$ in~(\ref{5.2}) and, consequently, $\lambda_{\infty}^d$ are integer multiples of~$t_r^d$. 
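The limiting value (\ref{5.2}) and the claims just made are easy to verify exactly in rational arithmetic. A sketch (names ours) that checks the integer-multiple property and reproduces the last row of Table~\ref{tbl5.1}:

```python
from fractions import Fraction
from math import factorial

def bspline(d, x):
    # de Boor-Cox recurrence (2.4), exact rational arithmetic
    if d == 0:
        return Fraction(int(0 <= x < 1))
    return (x * bspline(d - 1, x) + (d + 1 - x) * bspline(d - 1, x - 1)) / d

def lam_inf(d):
    """The limiting value (5.2): t_0^d + 2 sum_j (-1)^j t_j^d, exactly."""
    r = d // 2
    t = [bspline(d, Fraction(d + 1, 2) + j) for j in range(r + 1)]
    return t[0] + 2 * sum((-1) ** j * t[j] for j in range(1, r + 1))

for d in (2, 5, 6, 9):
    val = lam_inf(d)
    # smallest symbol entry t_r^d, as given above
    t_r = Fraction(1, factorial(d)) if d % 2 else Fraction(1, 2 ** d * factorial(d))
    assert (val / t_r).denominator == 1      # integer multiple of t_r^d
    print(d, val, float(1 / val))            # compare with 1/lambda_inf^d in Table 5.1
```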
With that in mind, yet another, somewhat surprising conjecture emerged from the test results: \begin{equation} \lambda_{\infty}^d = \begin{cases} t_r^d \cdot T_d = \frac{1}{d!} T_d, & \hbox{$d$ odd,} \\[6pt] t_r^d \cdot 2^d E_d = \frac{1}{d!} E_d, & \hbox{$d$ even,} \end{cases} \label{5.3} \end{equation} where, as in~\cite{Knuth-Buckholtz-67}, $T_n$ are the tangent numbers, and $E_n$ are the Euler numbers, defined by the Taylor expansions of $\tan t$ and $\sec t$, respectively, \begin{displaymath} \tan t = \sum_{n = 0}^\infty T_n \frac{t^n}{n!}, \quad \sec t = \sum_{n = 0}^\infty E_n \frac{t^n}{n!}. \end{displaymath} These numbers are also related to the sequences A000182 (the tangent or ``zag'' numbers), A000364 (the Euler or ``zig'' numbers) and A002436, from~\cite{Sloane-2008}. If true,~(\ref{5.3}) would be of significant practical merit, for there exist very stable and elegant algorithms for the calculation of $T_n$ and $E_n$ by Knuth and Buckholtz~\cite{Knuth-Buckholtz-67}. So, it was worth the effort to look for a proof. A unifying framework for handling both cases is provided by the Euler polynomials $E_n(x)$, defined by the following exponential generating function (see~\cite[23.1.1, p.~804]{Abramowitz-Stegun-72}) \begin{equation} \frac{2 e^{xt}}{e^t + 1} = \sum_{n = 0}^\infty E_n(x) \frac{t^n}{n!}, \label{5.4} \end{equation} which is valid for $|t| < \pi$. First, note that $T_{2k} = E_{2k + 1} = 0$, for all $k \geq 0$. The remaining nontrivial values can be expressed in terms of special values of Euler polynomials. For the tangent numbers, we have \begin{equation} T_{2k + 1} = (-1)^k 2^{2k + 1} E_{2k + 1}(1), \quad k \geq 0. \label{5.5} \end{equation} This follows easily, by comparing the Taylor expansion of $1 + \tanh t$ \begin{displaymath} 1 + \tanh t = \frac{2 e^{2t}}{e^{2t} + 1} = 1 + \sum_{k = 0}^\infty (-1)^k T_{2k + 1} \frac{t^{2k + 1}}{(2k + 1)!} \end{displaymath} and~(\ref{5.4}), with $x = 1$ and $2t$, instead of $t$. 
Similarly, by comparing the Taylor expansion of $\sech t$ \begin{displaymath} \sech t = \frac{2 e^t}{e^{2t} + 1} = \sum_{k = 0}^\infty (-1)^k E_{2k} \frac{t^{2k}}{(2k)!} \end{displaymath} and~(\ref{5.4}), with $x = 1/2$ and $2t$, instead of $t$, we get \begin{equation} E_{2k} = (-1)^k 2^{2k} E_{2k} \left( \frac{1}{2} \right), \quad k \geq 0. \label{5.6} \end{equation} The following identities will also be needed in the proof of~(\ref{5.3}). \begin{lem} Let $d \geq 0$ be a non-negative integer. Then \begin{align} \sum_{\ell = 0}^{d + 1} (-1)^\ell \binom{d + 1}{\ell} E_n(\ell) = 0, \label{5.7} \\ \sum_{\ell = 0}^{d + 1} (-1)^\ell \binom{d + 1}{\ell} E_n(\ell + 1) = 0, \label{5.8} \end{align} for all $n = 0, \ldots, d$. \end{lem} \begin{proof} Consider the function $g_d$ defined by \begin{displaymath} g_d(t) := \frac{2 (1 - e^t)^{d + 1}}{e^t + 1} = \sum_{\ell = 0}^{d + 1} (-1)^\ell \binom{d+1}{\ell} \frac{2 e^{\ell t}}{e^t + 1}. \end{displaymath} From~(\ref{5.4}) with $x = \ell$, the Taylor expansion of $g_d$ can be written as \begin{displaymath} g_d(t) = \sum_{n = 0}^\infty \left[ \sum_{\ell = 0}^{d + 1} (-1)^\ell \binom{d + 1}{\ell} E_n(\ell) \right] \frac{t^n}{n!}, \end{displaymath} so \begin{displaymath} D^n g_d(t) \big|_{t = 0} = \sum_{\ell = 0}^{d + 1} (-1)^\ell \binom{d + 1}{\ell} E_n(\ell), \quad n \geq 0. \end{displaymath} On the other hand, the Leibniz rule gives \begin{displaymath} D^n g_d(t) = \sum_{m = 0}^{n} \binom{n}{m} D^m \left[ (1 - e^t)^{d + 1} \right] \, D^{n - m} \left[ \frac{2}{e^t + 1} \right]. \end{displaymath} If $n \leq d$, then $D^m \left[ (1 - e^t)^{d + 1} \right]$ is always divisible by $(1 - e^t)$. Hence, \begin{displaymath} D^n g_d(t) \big|_{t = 0} = 0, \quad n = 0, \ldots, d, \end{displaymath} which proves the first identity~(\ref{5.7}). 
The second one follows similarly, by considering \begin{displaymath} h_d(t) := g_d(t) - g_{d + 1}(t) = \frac{2 e^t (1 - e^t)^{d + 1}}{e^t + 1} = \sum_{\ell = 0}^{d + 1} (-1)^\ell \binom{d+1}{\ell} \frac{2 e^{(\ell + 1) t}}{e^t + 1}. \end{displaymath} The Taylor expansion of $h_d$ is then given by \begin{displaymath} h_d(t) = \sum_{n = 0}^\infty \left[ \sum_{\ell = 0}^{d + 1} (-1)^\ell \binom{d + 1}{\ell} E_n(\ell + 1) \right] \frac{t^n}{n!}. \end{displaymath} If $n \leq d$, from the first part of the proof, it follows immediately that \begin{displaymath} D^n h_d(t) \big|_{t = 0} = D^n g_d(t) \big|_{t = 0} - D^n g_{d + 1}(t) \big|_{t = 0} = 0, \end{displaymath} which proves~(\ref{5.8}). \end{proof} Finally, we are ready to prove the conjecture~(\ref{5.3}). \begin{thm}[Relation to integer sequences] The following holds for all cardinal B-spline degrees $d \geq 0$ \begin{displaymath} \lambda_{\infty}^d = \frac{1}{d!} \cdot \begin{cases} T_d, & \hbox{$d$ odd,} \\ E_d, & \hbox{$d$ even.} \end{cases} \end{displaymath} \end{thm} \begin{proof} To simplify the notation, let $L_d := d! \, \lambda_{\infty}^d$. Due to the symmetry of interpolation nodes, the sum in~(\ref{5.2}) can be written as \begin{displaymath} \lambda_{\infty}^d = \sum_{j = -r}^r (-1)^j t_j^d, \quad t_j^d = N^d \left( j + \frac{d + 1}{2} \right), \quad j = -r, \ldots, r, \end{displaymath} where $r = \lfloor d/2 \rfloor$. From~(\ref{2.1}) and~(\ref{2.2}), it follows that \begin{displaymath} t_j^d = \frac{1}{d!} \sum_{\ell = 0}^{d + 1} (-1)^\ell \binom{d + 1}{\ell} \left( j + \frac{d + 1}{2} - \ell \right)_{+}^d. \end{displaymath} Then \begin{equation} L_d = \sum_{j = -r}^r (-1)^j \sum_{\ell = 0}^{d + 1} (-1)^\ell \binom{d + 1}{\ell} \left( j - \ell + \frac{d + 1}{2} \right)_{+}^d. \label{5.9} \end{equation} Let $d$ be odd, $d = 2k + 1$, with $k \geq 0$. 
Then $r = k$ and $(d + 1)/2 = k + 1$, so~(\ref{5.9}) becomes \begin{displaymath} L_{2k + 1} = \sum_{j = -k}^k (-1)^j \sum_{\ell = 0}^{2k + 2} (-1)^\ell \binom{2k + 2}{\ell} \left( j - \ell + k + 1 \right)_{+}^{2k + 1}. \end{displaymath} From the definition of truncated powers with positive exponents, the second sum contains only the terms with $j - \ell + k + 1 > 0$, i.e., for $\ell \leq j + k$. By changing the order of summation, we get \begin{displaymath} L_{2k + 1} = \sum_{\ell = 0}^{2k} (-1)^\ell \binom{2k + 2}{\ell} \sum_{j = \ell - k}^k (-1)^j \left( j - \ell + k + 1 \right)^{2k + 1}. \end{displaymath} Then we shift $j$ by $k - \ell + 1$, so that $j$ starts at $1$, to obtain \begin{displaymath} L_{2k + 1} = (-1)^k \sum_{\ell = 0}^{2k} (-1)^\ell \binom{2k + 2}{\ell} \sum_{j = 1}^{2k + 1 - \ell} (-1)^{(2k + 1 - \ell) - j} j^{2k + 1}. \end{displaymath} The second sum can be simplified as (see~\cite[23.1.4, p.~804]{Abramowitz-Stegun-72}) \begin{displaymath} \sum_{j = 1}^{2k + 1 - \ell} (-1)^{(2k + 1 - \ell) - j} j^{2k + 1} = \frac{1}{2} \left( E_{2k + 1}(2k + 2 - \ell) + (-1)^{2k + 2 - \ell} E_{2k + 1}(1) \right). \end{displaymath} Hence \begin{displaymath} L_{2k + 1} = \frac{(-1)^k}{2} \left[ \sum_{\ell = 0}^{2k} (-1)^\ell \binom{2k + 2}{\ell} E_{2k + 1}(2k + 2 - \ell) + E_{2k + 1}(1) \sum_{\ell = 0}^{2k} \binom{2k + 2}{\ell} \right]. \end{displaymath} By reversing the summation, from~(\ref{5.7}) with $d = 2k + 1$ and $n = d$, we conclude that \begin{align*} \sum_{\ell = 0}^{2k} (-1)^\ell \binom{2k + 2}{\ell} E_{2k + 1}(2k + 2 - \ell) & = \sum_{\ell = 2}^{2k + 2} (-1)^\ell \binom{2k + 2}{\ell} E_{2k + 1}(\ell) \\ & = - \sum_{\ell = 0}^1 (-1)^\ell \binom{2k + 2}{\ell} E_{2k + 1}(\ell). \end{align*} Since $E_{2k + 1}(0) = - E_{2k + 1}(1)$, by using~(\ref{5.5}), we have \begin{displaymath} L_{2k + 1} = \frac{(-1)^k}{2} E_{2k + 1}(1) \sum_{\ell = 0}^{2k + 2} \binom{2k + 2}{\ell} = (-1)^k 2^{2k + 1} E_{2k + 1}(1) = T_{2k + 1}.
\end{displaymath} This proves the claim for odd values of $d$. Let $d$ be even, $d = 2k$, with $k \geq 0$. For $d = 0$, it is obvious that $L_0 = t_0^0 = 1 = E_0$, so we may assume that $k > 0$. Then $r = k$ and $(d + 1)/2 = k + 1/2$, so~(\ref{5.9}) becomes \begin{displaymath} L_{2k} = \sum_{j = -k}^k (-1)^j \sum_{\ell = 0}^{2k + 1} (-1)^\ell \binom{2k + 1}{\ell} \left( j - \ell + k + \frac{1}{2} \right)_{+}^{2k}. \end{displaymath} The second sum contains only the terms with $j - \ell + k + 1/2 > 0$, i.e., for $\ell \leq j + k$. By exactly the same transformation as before, we arrive at \begin{displaymath} L_{2k} = (-1)^k \sum_{\ell = 0}^{2k} (-1)^\ell \binom{2k + 1}{\ell} \sum_{j = 1}^{2k + 1 - \ell} (-1)^{(2k + 1 - \ell) - j} \left( j - \frac{1}{2} \right)^{2k}. \end{displaymath} Now we expand the last factor in terms of powers of $j$. Then $L_{2k}$ can be written as \begin{equation} L_{2k} = (-1)^k \sum_{n = 0}^{2k} \binom{2k}{n} \left( - \frac{1}{2} \right)^{2k - n} S_{2k, n}, \label{5.10} \end{equation} with \begin{displaymath} S_{2k, n} = \sum_{\ell = 0}^{2k} (-1)^\ell \binom{2k + 1}{\ell} \sum_{j = 1}^{2k + 1 - \ell} (-1)^{(2k + 1 - \ell) - j} j^n, \quad n = 0, \ldots, 2k. \end{displaymath} As before, the second sum can be simplified as \begin{displaymath} \sum_{j = 1}^{2k + 1 - \ell} (-1)^{(2k + 1 - \ell) - j} j^n = \frac{1}{2} \left( E_n(2k + 2 - \ell) + (-1)^{2k + 2 - \ell} E_n(1) \right), \end{displaymath} which gives \begin{displaymath} S_{2k, n} = \frac{1}{2} \left[ \sum_{\ell = 0}^{2k} (-1)^\ell \binom{2k + 1}{\ell} E_n(2k + 2 - \ell) + E_n(1) \sum_{\ell = 0}^{2k} \binom{2k + 1}{\ell} \right]. \end{displaymath} By reversing the summation, from~(\ref{5.8}) with $d = 2k$, for $n = 0, \ldots, d$, we see that \begin{displaymath} \sum_{\ell = 0}^{2k} (-1)^\ell \binom{2k + 1}{\ell} E_n(2k + 2 - \ell) = - \sum_{\ell = 1}^{2k + 1} (-1)^\ell \binom{2k + 1}{\ell} E_n(\ell + 1) = E_n(1).
\end{displaymath} Therefore, \begin{displaymath} S_{2k, n} = \frac{1}{2} E_n(1) \sum_{\ell = 0}^{2k + 1} \binom{2k + 1}{\ell} = 2^{2k} E_n(1). \end{displaymath} From~(\ref{5.10}) we obtain \begin{displaymath} L_{2k} = (-1)^k 2^{2k} \sum_{n = 0}^{2k} \binom{2k}{n} E_n(1) \left( -\frac{1}{2} \right)^{2k - n}. \end{displaymath} Finally, by~\cite[23.1.7, p.~804]{Abramowitz-Stegun-72}, \begin{displaymath} \sum_{n = 0}^{2k} \binom{2k}{n} E_n(1) \left( -\frac{1}{2} \right)^{2k - n} = E_{2k} \left( \frac{1}{2} \right). \end{displaymath} Together with~(\ref{5.6}), this gives \begin{displaymath} L_{2k} = (-1)^k 2^{2k} E_{2k} \left( \frac{1}{2} \right) = E_{2k}. \end{displaymath} This completes the proof for even values of $d$. \end{proof} We would like to conclude with the observation that, to the best of our knowledge, scarcely any result can be found on sufficient conditions for the non-negativity of the DFT in terms of its coefficients, apart from the classical result of Young and Kolmogorov (cited in Zygmund~\cite[page~109]{Zygmund-35}): \begin{thm} For a convex sequence $(a_n,n\in\mathbb{N})$, where $\displaystyle\lim_{n\to\infty}a_n=0$, the sum \begin{displaymath} \frac{1}{2} a_0 + \sum_{n=1}^{\infty} a_n \cos\left( nx \right) \end{displaymath} converges (save for $x = 0$), and is non-negative. \end{thm} Here, a sequence is convex if $\Delta^2 a_n \ge 0$ for all $n$, with $\Delta a_n = a_n - a_{n+1}$. Convexity is not fulfilled in the case of cardinal B-spline coefficients, since there is always one inflection point on each slope of the spline. And yet, our numerical experiments strongly suggest that the class of series with a positive DFT is worth investigating further, for theoretical and practical reasons alike.
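As a simple illustration of the Young--Kolmogorov criterion, consider the convex sequence $a_n = r^n$ with $0 < r < 1$ (here $\Delta^2 a_n = r^n (1 - r)^2 \geq 0$ and $a_n \to 0$); its cosine series sums to the Poisson kernel, $\frac{1}{2} + \sum_{n \geq 1} r^n \cos(nx) = \frac{1 - r^2}{2 \left( 1 - 2 r \cos x + r^2 \right)} > 0$. A minimal numerical check in Python (the names and sample points are our choices):

```python
import math

def partial_sum(r, x, N):
    """S_N(x) = a_0/2 + sum_{n=1}^{N} a_n cos(n x) for a_n = r**n."""
    return 0.5 + sum(r ** n * math.cos(n * x) for n in range(1, N + 1))

def poisson_kernel(r, x):
    """Closed form of the full series for a_n = r**n."""
    return (1 - r * r) / (2 * (1 - 2 * r * math.cos(x) + r * r))

r, N = 0.9, 400                      # tail beyond N is far below 1e-12 here
for k in range(1, 32):
    x = k * math.pi / 16             # sample points away from x = 0
    s = partial_sum(r, x, N)
    assert s > 0                     # non-negativity, as the theorem asserts
    assert abs(s - poisson_kernel(r, x)) < 1e-12
```

The check merely illustrates the classical result; as noted above, cardinal B-spline coefficients fail convexity, so it does not cover them.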
\section{Introduction} As is known (see an interesting review by Shirkov \cite{Sh1}), the basic idea of the renormalization group (RG) was formulated in an article by Stueckelberg and Petermann \cite{SP}. The existence of such a group of transformations was related to the arbitrariness in the procedure of subtraction of ultraviolet divergences in quantum electrodynamics. Functional equations for the propagators in the ultraviolet limit, corresponding to the regularized Dyson transformations, were basically derived by Gell-Mann and Low \cite{GML}. Bogoliubov and Shirkov \cite{BSh1} unveiled the group nature of these functional equations, established their relation to the renormalization group of Stueckelberg and Petermann and derived the functional group equations for propagators and vertices of quantum electrodynamics in the general, i.e. massive, case. For example, in a renormalizable quantum field theory model with one coupling constant $g$ and one mass $m$ (like the $g \phi^{4}$ model in four dimensions) the functional equation for the invariant charge $\bar{g}(x,y;g)$, with $x=p^{2}/\mu^{2}$ and $y=m^{2}/\mu^{2}$ being the ratios of the momentum and mass squared to the square of the normalization scale, has the following form: \begin{equation} \bar{g} (x,y;g) = \bar{g} \left(~\frac{x}{t}~, ~\frac{y}{t} ~, ~\bar{g}(t,y;g) \right)~. \label{func} \end{equation} This equation establishes the connection between the exact symmetry of Stueckelberg and Petermann and the approximate one of Gell-Mann and Low, since in the massless limit $m=0$ it reduces formally to the result by Gell-Mann and Low \cite{GML}. Eq. (\ref{func}) reflects the exact symmetry of a solution which later became known as the functional self-similarity symmetry \cite{Sh82} (as it is relevant to the notion of self-similarity known in mathematical physics).
From this equation the standard differential renormalization group equations in both Gell-Mann-Low and Callan-Symanzik form \cite{BSh,ChL} can be derived. We will call this approach just "renormalization group" (RG) in order to distinguish it from the approach originating from the works by Wilson and later by Polchinski, which is traditionally called "exact renormalization group" (ERG). The symmetry underlying the RG is the symmetry of a characteristic function $F(x; x_{0}, F_{0})$ of a physical problem ("solution of the problem") with respect to the change of the boundary condition $F(x_{0};x_{0},F_{0}) = F_{0}$ \cite{Sh1}. If we now consider $F(x;x_{1},F_{1})$, which is the {\it same} solution but with the boundary condition set at another point $x_{1}$, we arrive at the functional relation \[ F(x; x_{0}, F_{0}) = F(x; x_{1}, F(x_{1}; x_{0}, F_{0})). \] Assuming that the function possesses the homogeneity property $F(x; x_{0}, F_{0} ) = f \left( \frac{x}{x_{0}}, F_{0} \right)$ with $f(1,F_{0}) = F_{0}$ we obtain for the function $f$ an equation of the type (\ref{func}). Another approach was developed in articles by Kadanoff and Wilson \cite{Ka,W}. It employs the idea of the Wilson effective action, which is the action obtained by integrating out degrees of freedom with momenta $p^{2} \geq \Lambda^{2}$ in the defining functional integral. So, when the scale $\Lambda$ is reduced, the generating functional \begin{equation} Z_{\Lambda} (J) = \int \Pi_{|p| \leq \Lambda} D\phi(p) \exp \left\{- S(\phi; \Lambda) -\int dp \ J \phi \right\} \label{Z-def} \end{equation} includes integration over fewer and fewer modes, but the effective action $S (\phi; \Lambda)$ changes in such a way that the generating functional remains unchanged. In this sense, it describes the same physics for any $\Lambda$, that is, the $S$-matrix elements and Green functions remain unchanged.
These ideas were first developed for lattice statistical systems and were fruitfully used in condensed matter physics. Some reformulations of this approach, which made its application in quantum field theory possible, were given first by Wegner and Houghton \cite{WH} and Weinberg \cite{Weinberg} and later by Polchinski \cite{Po}. It is basically this approach which was called the exact renormalization group (ERG). In this article we discuss in detail the Polchinski version of the ERG (see a comprehensive review by Ball and Thorne \cite{BT} on this subject), though results obtained within other schemes will also be presented. The action can be characterized by a set $g = \{g_{2}, g_{4}, \ldots \}$ of coupling constants of all possible operators consistent with the symmetries of the system. The change of the scale $\Lambda \rightarrow \Lambda / t$ combined with the corresponding change of the set $g \rightarrow R_{t} g$, where $R_{t}$ is some operator, can be viewed as a group transformation. Under such a transformation the field changes as $\phi (p) \rightarrow \alpha_{t} \phi (tp)$. Then, for example, for the 2-point function, defined from (\ref{Z-def}) in a standard way, we have a relation of the type: \begin{equation} G_{2}\left(\frac{p}{\Lambda}, g(\Lambda) \right) = \alpha_{t}^{2} G_{2} \left( \frac{tp}{\Lambda}, R_{t} g(\Lambda) \right). \end{equation} One can see the similarity of this relation to the functional equation (\ref{func}). With this similarity in mind, we can view the symmetry underlying the Wilson renormalization group as the invariance of the Wilson effective action with respect to the boundary condition \begin{equation} S( \phi, \Lambda) |_{\Lambda = \Lambda_{0}} = S_{0}(\phi), \end{equation} where $S_{0}(\phi)$ is some "fundamental" action defined at the "fundamental" scale $\Lambda_{0}$. Because of this similarity one can expect that the RG and ERG approaches have much in common. This aspect was emphasized by Shirkov \cite{Sh1}.
The first aim of this contribution is to give a short review of recent applications of the ERG in quantum field theory. Here we will limit ourselves to the case of scalar and fermionic theories only. Application of the ERG in gauge theories at the moment encounters difficulties (which are not unavoidable, in our opinion) due to the absence of a gauge invariant formulation \cite{gauge}. We will concentrate on results which are non-perturbative, though obtained within some approximation. This approximation will be the derivative expansion. The second aim is to work out some details on the relation between the RG and the ERG. To a large extent this was studied in the original paper by Polchinski \cite{Po} (for a detailed discussion see also the article by Ball and Thorne \cite{BT}). Explicit calculation of the 1-loop $\beta$-function and anomalous dimensions in scalar theory with the ERG method was done by Hughes and Liu \cite{HL}. In the present contribution we will examine the $\beta$-function calculated by the derivative expansion technique within the ERG and analyze its relation to the perturbative RG calculation. The article is organized as follows. In Sect.\ 2 we review the Polchinski version of the ERG equation in the case of scalar theory and describe the derivative expansion, which is a method used for non-perturbative calculations in this approach. In Sect.\ 3 we explain how the standard perturbative results can be obtained in the framework of the ERG approach and compare them with the results calculated within the derivative expansion. We will discuss the difference and the relation between these two calculations. In Sect.\ 4 we review very briefly the main non-perturbative results obtained in scalar field theory within the ERG approach. Here calculations for various types of the ERG equations, not only for the Polchinski one, are summarized.
In the last section the ERG equation for fermions and some first results obtained in the 2-dimensional Gross-Neveu type model are presented. \section{The ERG equation} Consider a $d$-dimensional scalar theory of the field $\Phi$ which is invariant under the $Z_{2}$-symmetry transformation $\Phi \rightarrow -\Phi$. Let us introduce the regularized generating functional \cite{Po,BT} \begin{equation} e^{-W(J;\Lambda)} = \int D\Phi e^{- S_{\Lambda}(\Phi;J)}, \label{W-def} \end{equation} where the action with the source term is given by \begin{eqnarray} S_{\Lambda}(\Phi,J) & = & \int \frac{d^{d}p}{(2\pi)^{d}} \Phi(p) P_{\Lambda}^{-1}(p) \Phi(-p) + S_{int}(\Phi; \Lambda) \nonumber \\ & + & \int d^{d}p J(p) Q_{\Lambda}^{-1}(p) \Phi(p) + f_{\Lambda}. \label{action} \end{eqnarray} The regularized propagator \begin{equation} P_{\Lambda}(p) = \frac{K\left( \frac{p^{2}}{\Lambda^{2}} \right)} {p^{2}} \label{propag} \end{equation} is defined by introducing the regulating function $K(z)$ which is supposed to be decreasing fast enough when $z \rightarrow \infty$ and is normalized by $K(0) = 1$. The functions $Q_{\Lambda}(p)$ and $f_{\Lambda}$ are necessary for the consistency of the formulation and are to be determined. Note that the last term in Eq. (\ref{action}) does not depend on the field. In the functional integral above only those momentum modes with $|p|$ below or around $\Lambda$ are important; contributions of higher modes are suppressed by the regulating function. Thus $\Lambda$ plays the role of a (smooth) upper cutoff. In this setting Wilson's idea is realized as follows: the effective action $S_{\Lambda}$ is such that while the scale $\Lambda$ varies the $S$-matrix elements or even the off-shell Green functions remain unchanged. This implies that the generating functional is a constant function of $\Lambda$: \begin{equation} \Lambda \frac{d W(J,\Lambda)}{d\Lambda} = 0.
\label{W-const} \end{equation} The change of the propagator (the range of modes suppressed in the functional integral) with $\Lambda$ in the kinetic term is compensated by the change of the interaction action and of the other terms in Eq. (\ref{action}), so that the whole functional integral, defining the generating functional, describes the same physics. The next step is to use the functional integral identity \[ \int D\Phi \frac{\delta}{\delta \Phi(p)} \left( \frac{1}{2} \frac{\delta}{\delta \Phi(-p)} + P_{\Lambda}^{-1}(p) \Phi(p) + Q_{\Lambda}^{-1}(p) J(p) \right) e^{-S_{\Lambda}(\Phi,J)} = 0. \] Combining this identity with the condition (\ref{W-const}) one can derive the ERG equation for the effective action. Here we omit details of the derivation and present only the result. Before doing this let us introduce some notation. It is convenient to define the parameter \begin{equation} t = -\ln \frac{\Lambda}{\Lambda_{0}}, \end{equation} where $\Lambda_{0}$ is some fixed scale. The scale of dimensionful objects is carried by $\Lambda$. So we can define the "dimensionless momentum" $q$ and the dimensionless field variable $\phi(q;t)$ as follows: \begin{equation} q = \frac{p}{\Lambda}, \; \; \; \phi(q;t) = \Lambda^{1+ d/2} \Phi(p; \Lambda, \Lambda_{0}). \end{equation} The dependence of the field on $t$ is characterized by the anomalous dimension $\eta$: \begin{equation} \frac{\partial}{\partial t} \phi(q) = \frac{\eta}{2} \phi(q). \end{equation} Then the part of the effective action (\ref{action}), which does not include the source term and the constant term $f_{\Lambda}$, takes the form \begin{equation} S[\phi] = \int \frac{d^{d}q}{(2\pi)^{d}} \phi(q) \frac{q^{2}}{K(q^{2})} \phi(-q) + S_{int}[\phi;t].
\label{action1} \end{equation} The ERG equation for it is \begin{eqnarray} \frac{\partial S}{\partial t} & = & - \int d^{d}q (2\pi)^{d} K'(q^{2}) \left[ \frac{\delta^{2} S}{\delta\phi(q) \delta \phi(-q)} - \frac{\delta S}{\delta \phi(q)} \frac{\delta S}{\delta \phi(-q)} \right] + S d \label{ERG-eq} \\ & + & \int d^{d}q \phi(q) \frac{\delta S}{\delta \phi(q)} \left[ \frac{\eta}{2} - 2q^{2} \frac{K'(q^{2})}{K(q^{2})} + 1 - \frac{d}{2} \right] - \int d^{d}q \phi(q) q_{\mu} \frac{\partial '}{\partial q_{\mu}} \frac{\delta S}{\delta \phi(q)}. \nonumber \end{eqnarray} The prime in the last term means that the derivative does not act on the standard $\delta$-functions of the total energy-momentum conservation which appear in the action in the momentum representation. Only the first line is non-trivial; the rest of the terms just reflect the canonical dimensions of the objects of the action and the anomalous dimension of the field. For the condition (\ref{W-const}) to be fulfilled, the function $f_{\Lambda}$ has to satisfy \[ \dot{f}_{\Lambda} = \int d^{d}p \left[ \tilde{Q}^{-2}(p^2) \frac{\dot{P}_{\Lambda} (p)}{P_{\Lambda}^{2}(p)} J(p)J(-p) - \frac{\dot{P}_{\Lambda}(p)}{P_{\Lambda}(p)} \delta(0) \right] \] and $Q_{\Lambda}(p)=P_{\Lambda}(p) \tilde{Q}(p^2)$, where $\tilde{Q}(p^2)$ obeys the equation $\dot{\tilde{Q}}(p^2)=(\eta/2)\tilde{Q} (p^2)$. Here the dot means differentiation with respect to $t$. The equation (\ref{ERG-eq}) is supposed to be supplied with the initial condition set at some scale $\Lambda_{0}$ or at $t=0$: \[ S_{int}[\phi;t]|_{t=0} = \tilde{S}_{int}[\phi] = \int d^{d}x \tilde{L}_{int}(\phi), \] where $\tilde{L}_{int}$ is essentially the bare Lagrangian. Then the ERG equation defines the running Lagrangian $L_{int}(\phi,t)$ and, correspondingly, the running action $S_{int}[\phi,t]=\int d^{d}x L_{int}(\phi,t)$, i.e. a trajectory in the space of all possible Lagrangians parametrized by $t$.
Note that the equation (\ref{ERG-eq}) is exact and non-perturbative. One can observe a certain similarity between it and a functional generalization of the RG equation to the case of Lagrangians of arbitrary type (including non-renormalizable Lagrangians) \cite{Kaz}. The limit $t \rightarrow \infty$ describes either the situation with $\Lambda_{0}$ fixed and $\Lambda \rightarrow 0$, i.e. the limit of low characteristic energy, or the situation $\Lambda_{0} \rightarrow \infty$, i.e. the continuum limit of the model. In this way the ERG equation allows for non-perturbative studies of the continuum limit in quantum field theory. There are a few important issues that can be addressed by studying and solving this equation. First of all we can look for fixed point solutions $L_{int}^{*}(\phi)$ which some of the trajectories can approach as $t \rightarrow +\infty$ (here we mean fixed points for finite values of the coupling constants in the Lagrangian). The fixed points satisfy Eq. (\ref{ERG-eq}) with vanishing l.h.s., $\dot{S}_{int}^{*}=0$, and this equation defines the value of the anomalous dimension $\eta=\eta^{*}$ for which such a solution exists. The Gaussian fixed point $S_{int}^{*}=0$ gives an example of the trivial solution. Having found a fixed point solution we can study the theory in its vicinity. For this we represent the Lagrangian as an expansion in operators \begin{equation} L_{int}(\phi,t) = L_{int}^{*}(\phi) + \sum_{n} {\cal O}_{n}(\phi) e^{\lambda_{n}t}. \end{equation} The parameters $\lambda_{n}$ are called critical exponents and they are physical observables. The operators ${\cal O}_{n}(\phi)$, which correspond to $\lambda_{n}>0$, are called relevant operators and are important for the physics of the system in the vicinity of the fixed point when $t \rightarrow \infty$. The ERG equation (\ref{ERG-eq}) in principle allows one to calculate the critical exponents and to find the corresponding operators. Finally, one can try to solve the ERG equation for arbitrary $t$, i.e.
find the complete renormalization group trajectory. There are also other interesting problems which can be considered in a non-perturbative way in the framework of this approach. These include bound states, the Zamolodchikov $c$-function, etc. We would like to mention that in quantum field theory ERG equations of other types are also considered. Historically the first one was the Wegner-Houghton equation \cite{WH}. It was formulated for a sharp cutoff and was used in a number of articles for the calculation of fixed points, critical exponents, flows, etc. in the scalar theory (see the article by Hasenfratz and Hasenfratz for one of the first detailed studies \cite{HH}). An approach based on an ERG equation with a sharp cutoff for the effective action $\Gamma_{eff}$ was developed by Morris \cite{Mo1}. Another version is the equation for the average effective action \cite{Wet}. Some of these results will be discussed in Sect. 4. \section{Approximations and relation between ERG and RG} There are some special cases when the ERG equation (\ref{ERG-eq}) simplifies considerably and one can find its solutions. This happens, for example, in the theory with an $N$-component scalar field in the limit $N \rightarrow \infty$ \cite{WH,N-large}. In general it is not clear how the ERG equation (\ref{ERG-eq}) can be solved exactly. So we need to use an approximation to obtain solutions and to analyze them. For this it is useful to have an idea of a Lagrangian which captures essential features of the problem. In general, even if we start with a simple initial Lagrangian, like, for example, $\tilde{L}_{int}(\phi) = g\phi^{4}/4!$ at $t=0$, the running Lagrangian can include all possible operators constructed out of the field $\phi$ and its derivatives which are consistent with the symmetry of the problem.
In the momentum representation the action can be written as an infinite series \begin{eqnarray} S_{int}[\phi,t] & = & \frac{1}{2} \int \frac{d^{d}q}{(2\pi)^{d}} A_{2}(q,-q,t) \phi(q) \phi(-q) + \frac{1}{4!} \int \frac{d^{d}q_{1} d^{d}q_{2} d^{d}q_{3} d^{d}q_{4}} {(2\pi)^{3d}} \delta(\sum_{i=1}^{4} q_{i}) \nonumber \\ & \times & A_{4}(q_{1},q_{2},q_{3},q_{4},t) \phi(q_{1}) \phi(q_{2}) \phi(q_{3}) \phi(q_{4}) + \ldots \label{action2} \end{eqnarray} Then the ERG equation gives rise to an infinite system of coupled equations \begin{eqnarray} \dot{A}_{2}(q,-q,t) & = & (2+\eta) A_{2}(q,-q,t) - 2q_{\mu}\frac{\partial} {\partial q_{\mu}} A_{2}(q,-q,t) + 2 K'(q^{2}) A_{2}^{2}(q,-q,t) \nonumber \\ & - & \int\frac{d^{d}q_{1}}{(2\pi)^{d}} K'(q_{1}^{2})A_{4}(q,-q,q_{1},-q_{1},t), \nonumber \\ \dot{A}_{4}(q_{1},\ldots, q_{4},t) & = & (4-d+2\eta) A_{4} - \sum_{j=1}^{4} q_{j\mu}\frac{\partial} {\partial q_{j\mu}} A_{4} \nonumber \\ & + & 2 \sum_{j=1}^{4} \left[ K'(q_{j}^{2}) A_{2}(q_{j},-q_{j},t)\right] A_{4}(q_{1},\ldots q_{4},t) \nonumber \\ & - & \int\frac{d^{d}q}{(2\pi)^{d}} K'(q^{2}) A_{6}(q_{1},\ldots , q_{4},q,-q,t), \label{ERG-system} \\ \ldots & & \nonumber \end{eqnarray} Let us now consider the case $d=4$ as an example.
The initial conditions, corresponding to one of the usual renormalization prescriptions in terms of the coefficient functions in (\ref{action2}), are set at two scales \cite{HL,BT}: at $\Lambda = \Lambda_{0}$: \begin{eqnarray} A_{2}(q,-q,\Lambda_{0})& =& \rho_{1}(\Lambda_{0}) + q^{2}\rho_{2}(\Lambda_{0}); \nonumber \\ A_{4}(q_{1}, \ldots , q_{4},\Lambda_{0}) & = & g(\Lambda_{0}) \equiv g_{B}; \nonumber \\ A_{2j}(q_{1}, \ldots , q_{2j},\Lambda_{0}) & = & 0 \; \; \; \mbox{for $j \geq 3$}, \label{renorm1} \end{eqnarray} that gives the standard form for the bare Lagrangian where, of course, $g_{B}$ is the bare coupling constant; and at some physically relevant scale (renormalization point) $\Lambda = \mu_{R}$: \begin{eqnarray} A_{4}(0,0,0,0,\mu_{R}) & = & g(\mu_{R}); \nonumber \\ \rho_{1}(\mu_{R}) = 0, & & \; \; \; \rho_{2}(\mu_{R})=0. \label{renorm2} \end{eqnarray} In the formulas above, for illustrative purposes, we indicated the dependence on the scale as the dependence on $\Lambda$ and not on $t$ as before. One of the known ways to solve the ERG equation is to use the perturbative expansion. We assume that all coefficient functions can be represented as power series in $g_{R} \equiv g(\mu_{R})$. After a somewhat lengthy but straightforward calculation one can obtain solutions for $A_{2j}$. In particular, \begin{eqnarray} g(\Lambda;\mu_{R},g_{R}) & \equiv & A_{4}(0,0,0,0,\Lambda; \mu_{R}, g_{R}) \nonumber \\ & = & g_{R} - 3 g_{R}^{2} \int \frac{d^{4}p}{(2\pi)^4} \frac{1}{p^2} \frac{1}{p^2} \int_{\mu_{R}}^{\Lambda} d\Lambda' \left(\frac{d}{d \Lambda'} K \left(\frac{p^{2}}{\Lambda'^{2}} \right) \right) \left[ K \left(\frac{p^{2}}{\Lambda'^{2}} \right) - K \left(\frac{p^{2}}{\Lambda_{0}^{2}} \right) \right] \label{g-PT0} \\ & = & g_{R} - 3 g_{R}^{2} \int \frac{d^{4}p}{(2\pi)^4} \frac{1}{p^2} \frac{1}{p^2} \left\{ \frac{1}{2} \left( K^{2}\left(\frac{p^{2}}{\Lambda^2} \right) - K^{2}\left( \frac{p^{2}}{\mu_{R}^{2}}\right) \right) \right. \nonumber \\ & - & \left.
\left( K \left(\frac{p^{2}}{\Lambda^2} \right) - K\left( \frac{p^{2}}{\mu_{R}^{2}}\right) \right) K \left( \frac{p^2} {\Lambda_{0}^{2}} \right) \right\} + {\cal O}(g_{R}^{3}). \label{g-PT} \end{eqnarray} In this formula we indicated the dependence on the initial condition (i.e. the renormalization point) explicitly. Taking the limit $\Lambda_{0} \rightarrow \infty$ we get the standard $\beta$-function in the 1-loop approximation \cite{HL}: \begin{equation} \beta(g_{R}) \equiv \Lambda \frac{d}{d \Lambda} g(\Lambda; \mu_{R}, g_{R}) = \frac{3}{16 \pi^2} g_{R}^{2} + {\cal O}(g_{R}^{3}). \label{beta} \end{equation} Can we get non-perturbative solutions of the ERG equation? At the moment the available systematic techniques for this are the derivative expansion and its modifications. The idea is to represent the action (here in the coordinate representation) as \begin{eqnarray} S_{int}[\phi,t] & = & \int d^{d}x \left[ V(\phi(x),t) + (\partial_{\mu} \phi)^2 U(\phi(x),t) \right. \nonumber \\ & + & \left. (\partial_{\mu} \phi)^4 H_{1}(\phi (x),t) + (\partial_{\mu}^2 \phi)^2 H_{2}(\phi(x),t) + \ldots \right], \label{der-exp} \end{eqnarray} where $V$, $U$, $H_{1}$, $H_{2}$, $\ldots$ depend on the field but not on its derivatives. In the momentum representation Eq. (\ref{der-exp}) corresponds to the expansion in powers of momenta, so one can hope that using the first few terms of this expansion is justified if we consider effects at low momenta (there may be some complications in the case of the sharp cutoff \cite{Mo2}). The function $V(\phi,t)$ is the effective (local) potential of the theory. To solve the ERG equation (\ref{ERG-eq}) approximately we truncate the derivative expansion (\ref{der-exp}) and substitute the truncated action into the ERG equation, thus obtaining equations for the coefficient functions.
For example, if we consider just the leading and next-to-leading terms we have \cite{BHLM}: \begin{eqnarray} \dot{V} & = & -\alpha V'' - 2\beta U'' + \gamma (V')^2 + d \, V + \left(1 - \frac{\eta}{2} - \frac{d}{2} \right) \phi V', \label{ERG-d1} \\ \dot{U} & = & - \alpha U'' + \delta (V'')^{2} + 4 \gamma U V'' + 2 \gamma U' V' - \eta U + \left(1 - \frac{\eta}{2} - \frac{d}{2}\right) \phi U' - \frac{\eta}{2}, \label{ERG-d2} \end{eqnarray} where the prime means differentiation with respect to the field $\phi$. The equations depend explicitly on the parameters characterizing the regulating function: \begin{equation} \alpha = \int \frac{d^{d}q}{(2\pi)^d} K'(q^2), \; \; \; \beta = \int q^2 \frac{d^{d}q}{(2\pi)^d} K'(q^2), \; \; \; \gamma = K'(0), \; \; \; \delta = K''(0). \label{scheme} \end{equation} This is similar to the dependence on the renormalization scheme in the RG. However, there is a problem of the breaking of the reparametrization invariance by the derivative expansion, which gives rise to an additional dependence on the regulating function \cite{repinv,Mo8}. One way is to solve the system (\ref{ERG-d1}), (\ref{ERG-d2}) numerically. Similar systems for the ERG equations of other types were also studied \cite{Mo3}. These results will be discussed in the next section. Another way is to make a further approximation, namely, to expand the functions $V$, $U$, etc.
in powers of fields: \begin{eqnarray} V(\phi,t) & = & a_{1}(t) \phi^{2} + a_{2}(t) \phi^4 + a_{3}(t) \phi^6 + \ldots, \nonumber \\ U(\phi,t) & = & b_{2}(t) \phi^4 + b_{3}(t) \phi^6 + \ldots \label{VU-polynom} \end{eqnarray} Then the system (\ref{ERG-d1}), (\ref{ERG-d2}) becomes the following set of flow equations \cite{CKM1} \begin{eqnarray} \dot{a}_{1} & = & (2+\eta)a_{1} - 12 a_{2} - 6b_{2}/s_{1} + 4a_{1}^{2}, \nonumber \\ \dot{a}_{2} & = & (4-d+2\eta)a_{2} - 30 a_{3} - 10b_{3}/s_{1} + 16a_{1}a_{2}, \nonumber \\ \dot{a}_{3} & = & (6-2d+3\eta)a_{3} + 24 a_{1} a_{3} + 16a_{2}^{2}, \nonumber \\ \dot{b}_{2} & = & (2-d+2\eta)b_{2} + 16 a_{1}a_{2} + 16 a_{1} b_{2} - 20b_{3}, \nonumber \\ \dot{b}_{3} & = & (4-2d+3\eta)b_{3} + \frac{192}{5} a_{2}b_{2} + 24 a_{1}b_{3} + 24 a_{1} a_{3} + \frac{144}{5} a_{2}^{2} \label{ERG-p1} \end{eqnarray} with the relation \begin{equation} 0 = \eta - s_{2} (12 b_{2} - 8 a_{1}^2). \label{eta-eq} \end{equation} Here we introduced the combinations of the scheme parameters $s_{1} = \alpha \gamma / \beta \delta$ and $s_{2}=\delta / \gamma^2$. The last equation arises because the normalization of the kinetic term is fixed. Let us mention that Eqs. (\ref{ERG-p1}) are basically $\beta$-functions for the coupling constants of the corresponding operators. The leading order of the derivative expansion with subsequent polynomial approximation of the potential $V(\phi,t)$ in scalar theories for the Wegner-Houghton ERG equation was studied in detail \cite{MOP,HKLM,Mo2}. We summarize some of the results in the next section. Now we are going to illustrate the relation between the solutions within the derivative expansion in the ERG and the perturbative results in the RG (for $d=4$). For this purpose let us solve Eqs. (\ref{ERG-p1}), (\ref{eta-eq}) perturbatively, i.e. by presenting the coefficients $a_{i}(t)$ and $b_{i}(t)$ and $\eta$ as series in powers of $g(t) \equiv 4! a_{2}(t)$, the coupling constant of the $\phi^4$-interaction.
After a simple calculation up to the next-to-leading order ${\cal O}(\partial ^{2})$ of the derivative expansion one gets \begin{equation} \beta(g) = - \frac{d}{dt} g(t) = 6(\alpha \gamma + \frac{1}{2} \beta \delta + \ldots) g^2 + {\cal O}(g^3). \label{beta-d1} \end{equation} We see that the expression (\ref{beta}) for the $\beta$-function to the order $g^2$ is not recovered within this approximation. Moreover, the result (\ref{beta-d1}) for the first coefficient of the $\beta$-function depends on the scheme. Of course, what happens is that there are contributions with higher derivatives (or with higher powers of the momenta) in the expansion (\ref{der-exp}) which give rise to further terms in the equations (\ref{ERG-p1}), (\ref{eta-eq}). In the perturbative solution these terms provide further contributions to the expression (\ref{beta-d1}), also to the $g^2$ coefficient (the dots in that expression stand for these contributions). To see the form of these contributions it is useful to re-write Eq. (\ref{beta-d1}), using the definitions (\ref{scheme}), as follows \[ \beta(g) = 6 g^{2} \int \frac{d^{4}q}{(2 \pi)^{4}} K'(q^2) \left[ K'(0) + \frac{1}{2} q^2 K''(0) + \ldots \right] + {\cal O}(g^3). \] Analyzing the relevant contributions of higher derivatives one can show that when they are taken into account the expression above sums up to \begin{equation} \beta(g) = 6 g^{2} \int \frac{d^{4}q}{(2 \pi)^{4}} K'(q^2) \frac{1}{q^2} \left[ K(q^2) - 1 \right] + {\cal O}(g^3). \label{beta-d2} \end{equation} The integral can be calculated for an arbitrary regulating function and one obtains \[ \beta(g) = \frac{3}{16 \pi^2} g^2 + {\cal O}(g^3), \] the standard result (\ref{beta}). In this calculation we used only that $K(\infty)=0$ and the normalization $K(0)=1$. This example illustrates the relation between the derivative expansion of the ERG and the perturbation theory of the RG. 
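The scheme independence of the first coefficient is easy to check numerically. After the angular integration in $d=4$ and the substitution $u=q^2$, Eq.\ (\ref{beta-d2}) reduces to $\beta(g) = \frac{6}{16\pi^2}\, g^{2} \int_0^\infty du\, K'(u)\left[K(u)-1\right] + {\cal O}(g^3)$. The sketch below (our illustration; the two regulators are arbitrary choices obeying $K(0)=1$, $K(\infty)=0$) evaluates this integral by simple quadrature.

```python
import math

def coeff(K, Kp, umax=30.0, n=200000):
    """g^2 coefficient of the one-loop beta function,
    6/(16 pi^2) * int_0^umax du K'(u) (K(u) - 1),
    computed by trapezoidal quadrature."""
    h = umax / n
    s = 0.5 * (Kp(0.0) * (K(0.0) - 1.0) + Kp(umax) * (K(umax) - 1.0))
    for i in range(1, n):
        u = i * h
        s += Kp(u) * (K(u) - 1.0)
    return 6.0 / (16.0 * math.pi ** 2) * s * h

# two illustrative regulators satisfying K(0) = 1, K(inf) = 0
c_exp = coeff(lambda u: math.exp(-u), lambda u: -math.exp(-u))
c_gauss = coeff(lambda u: math.exp(-u * u), lambda u: -2 * u * math.exp(-u * u))
target = 3.0 / (16.0 * math.pi ** 2)
```

Both regulators reproduce $3/(16\pi^2)$ to quadrature accuracy, illustrating that the first coefficient becomes scheme independent once the higher-derivative contributions are resummed.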
On one hand the derivative expansion contains only a part of the contribution of the weak coupling perturbative expansion to a given order in $g$. It can be obtained by Taylor expanding part of the terms in the integral over internal momenta of the corresponding Feynman diagram. Indeed, consider expression (\ref{g-PT0}) or (\ref{g-PT}), which can be interpreted as a contribution of the 1-loop diagram with zero external momenta regularized according to Eq.\ (\ref{propag}). If we take the limit $\Lambda_{0} \rightarrow \infty$ and differentiate (\ref{g-PT0}) with respect to $\Lambda$, according to the definition (\ref{beta}), we obtain precisely the expression (\ref{beta-d2}). Now, if for infinite $\Lambda_{0}$ we expand the part in the square brackets in powers of $p^2$ (the integral still remains finite), we get the expression which after differentiation with respect to $\Lambda$ provides the result (\ref{beta-d1}) of the derivative expansion in the next-to-leading order. On the other hand, when it is not limited to the weak coupling expansion, the derivative expansion contains non-perturbative information, which makes it, and the ERG equation in general, a rather valuable tool. We will discuss the results of non-perturbative calculations in the next sections. \section{Scalar theory: main results} Here we discuss some of the main results of the studies within the ERG approach in the scalar theory of one field with $Z_{2}$-symmetry (symmetry under the transformation $\Phi \rightarrow - \Phi$) on $d$-dimensional Euclidean space. We collect results obtained by various authors and for different versions of the ERG equation. On general grounds different versions should give physically equivalent results, and concrete calculations confirm this (see the discussion of fixed point solutions and critical exponents below). However, no rigorous proof of the equivalence of the different approaches has been given so far. 
First, there were numerous studies of fixed point solutions for various $d$. We discuss them in turn. 1. \underline{ Fixed points: $d=4$.} Hasenfratz and Hasenfratz performed a study of the Wegner-Houghton version of the ERG equation (analog of Eq.\ (\ref{ERG-eq})) numerically in the leading approximation ${\cal O}(\partial^{0})$ of the derivative expansion (local potential approximation) and showed that it has no non-trivial fixed point solutions \cite{HH}. 2. \underline{ Fixed points: $d=3$.} In the same article it was shown, also numerically, that in order ${\cal O}(\partial ^{0})$ the Wegner-Houghton equation has one non-trivial fixed point solution, the known Wilson-Fisher fixed point \cite{HH}, which is in the universality class of the three-dimensional Ising model. This fixed point was also studied in the ${\cal O}(\partial ^{2})$ approximation, both for the Polchinski type ERG equation (\ref{ERG-eq}) \cite{BHLM} and for the Wegner-Houghton equation \cite{Mo8,Mo3}. As an example, let us review the calculation for the Polchinski type ERG equation. To the order ${\cal O}(\partial ^{0})$ of the derivative expansion the fixed point equation is Eq. (\ref{ERG-d1}) without the term $2\beta U''$ and with the l.h.s. set to zero, $\dot{V} = 0$. It was solved numerically with two boundary conditions at $\phi = 0$: \[ V'(0) = 0, \; \; \; V''(0) = \rho, \] the first one just reflecting the symmetry of the theory. It was shown that a fixed point solution $V^{*}(\phi)$ regular for all finite values of $\phi$ exists only for a special value of $\rho = \rho^{*} = -0.2286\ldots$ In this approximation the fixed point solution $V^{*}(\phi)$ does not depend on the scheme parameters and the value of the anomalous dimension at the fixed point is $\eta^{*} = 0$. In the next-to-leading approximation ${\cal O}(\partial ^{2})$ the system of fixed point equations, i.e. Eqs. 
(\ref{ERG-d1}), (\ref{ERG-d2}) with zero left hand sides, was solved numerically with the following boundary conditions for $V(\phi)$ and $U(\phi)$ at $\phi = 0$: \[ V'(0) = 0, \; \; \; V''(0) = \rho, \; \; \; U(0) = 0, \; \; \; U''(0) = 0. \] A regular non-trivial solution exists and is unique only for a special value of $\rho = \rho^{*}$ and the anomalous dimension $\eta = \eta^{*} \neq 0$, but now $\rho^{*}$ and $\eta^{*}$ depend on the scheme parameters (\ref{scheme}). 3. \underline{ Fixed points: $2 < d \leq 4$.} For this range of dimensions the Wegner-Houghton fixed point equation was analyzed in the leading order of the derivative expansion with subsequent approximation of the local potential $V^{*}(\phi)$ by a polynomial in powers of the field \cite{HKLM,HKLM1}: \begin{equation} V_{M}^{*} (\phi) = a_{1}^{*} \phi^{2} + a_{2}^{*}\phi^{4} + \ldots + c_{M}^{*} \phi^{2M}. \label{V-poly} \end{equation} In this case the fixed point ERG equation reduces to a system of $M$ algebraic equations for the coefficients $a_{i}^{*}$. As before, this approximation gives the anomalous dimension $\eta^{*}=0$; nevertheless, it captures some general features of the multicritical fixed points below $d=4$. In particular, for approximation (\ref{V-poly}) the system shows the existence of the first $(M-1)$ upper critical dimensions $d_{k} = 2k/(k-1) = 4, 3, 8/3, \ldots$ for $k=2,3,4, \ldots, M$. Then $d=d_{k}$ is a branching point: below $d_{k}$ a new non-trivial fixed point splits off from the trivial Gaussian fixed point $V_{M}^{*} = 0$ and coexists with it. A general feature of the polynomial approximation is the appearance of numerous spurious solutions. For example, for $d=3$ we know from the numerical results discussed above that there is only one non-trivial fixed point, the Wilson-Fisher fixed point. 
But if we approximate the potential $V^{*}(\phi)$ by a polynomial $V_{M}^{*}(\phi)$ the system can have up to $M$ real-valued non-trivial solutions for the coefficients $a_{i}^{*}$, all but one of them being spurious. The problem of spurious solutions can be solved by analyzing solutions of subsequent approximations with various $M$ and selecting the stable ones, which represent the true fixed points. The polynomial approximation permits an analytical study and reproduces some of the qualitative features of the structure of the fixed points. However, arguments were presented which show that the polynomial approximation is not convergent \cite{Mo6}, i.e. for a given $\phi$, $|V_{M}^{*} (\phi) - V^{*}(\phi) |$ approaches a small but non-zero value as $M \rightarrow \infty$. 4. \underline{ Fixed points: $d=2$.} The case of $d=2$ dimensions was studied by Morris \cite{Mo8,Mo7}. At ${\cal O}(\partial^{2})$ order of the derivative expansion he found the first 10 multicritical fixed points of an infinite series which corresponds to the unitary minimal models of conformal field theory and whose existence was conjectured by Zamolodchikov. 5. \underline{Critical exponents}. To calculate the critical exponents at a given fixed point we expand the potentials $V$, $U$, etc. around the fixed point solution, \begin{equation} V(\phi,t) = V^{*}(\phi) + \delta V (\phi, t), \; \; \; U(\phi,t) = U^{*}(\phi) + \delta U (\phi, t), \ldots, \end{equation} linearize the ERG equations (\ref{ERG-d1}), (\ref{ERG-d2}) with respect to the linear deviations $\delta V$, $\delta U$, etc. and represent these deviations in the vicinity of the fixed point as \begin{equation} \delta V (\phi,t) = \sum_{n} v_{n}(\phi) e^{\lambda_{n} t}. \end{equation} The critical exponents $\lambda_{n}$ can be found as eigenvalues of the linearized system of the ERG equations. Most of the study was done for the critical exponents of the Wilson-Fisher fixed point for $d=3$. 
There is one positive critical exponent $\lambda_{1}$ and, correspondingly, one relevant operator, while the rest are negative, $\lambda_{n} < 0$ $(n \geq 2)$. We give a summary of the results of various calculations of $\nu \equiv 1/\lambda_{1}$, the critical exponent characterizing the relevant operator, of $w = -\lambda_{2}$ for the first irrelevant operator, and of the anomalous dimension $\eta$ in Table 1. \begin{table}[ht] \renewcommand{\arraystretch}{1.5} \hspace*{\fill} \begin{tabular}{|l|l|l|l|} \hline Approach and approximation & $\eta$ & $\nu$ & $w$ \\ \hline Wegner-Houghton eq. \cite{HH}, ${\cal O}(\partial^{0})$, numerically & 0 & 0.687 & 0.595 \\ \hline Wegner-Houghton eq. \cite{HKLM}, ${\cal O}(\partial^{0})$, polynom., $M=7$ & 0 & 0.657 & 0.705 \\ \hline Eq. for the Legendre effective action \cite{Mo2} & & & \\ ${\cal O}(\partial^{0})$, numerically & 0 & 0.660 & 0.628 \\ ${\cal O}(\partial^{2})$, numerically & 0.054 & 0.618 & 0.897 \\ \hline Polchinski eq.\ \cite{BHLM} & & & \\ ${\cal O}(\partial^{0})$, numerically & 0 & 0.649 & 0.66 \\ ${\cal O}(\partial^{2})$, numerically & 0.019--0.056 & 0.616--0.637 & 0.70--0.85 \\ \hline World best estimates & 0.035(3) & 0.631(2) & 0.80(4) \\ \hline \end{tabular} \hspace*{\fill} \renewcommand{\arraystretch}{1} {\caption[Results of calculations] {Results of calculations of the anomalous dimension $\eta$ and the critical exponents $\nu$ and $w$ for the Wilson-Fisher fixed point at $d=3$ by various authors for different versions of the ERG equation. The entries of the last row (taken from the articles by Morris \cite{Mo2,Mo8}) were obtained by averaging the world best estimates \cite{World}.}} \label{t1} \end{table} One can see that the results of the different schemes and approximations are in reasonable agreement. 
In cases when the characteristic under consideration is scheme dependent, the intervals of values, which correspond to certain ranges of values of the scheme parameters, are indicated (the reader is referred to the original articles for details). 6. \underline{ Exact flow}. Critical exponents characterize the flow very close to a particular fixed point. Within the approximation considered here one can also study the flow globally. Thus, the Wegner-Houghton flow equation was studied in the leading order of the derivative expansion with the potential $V(\phi,t)$ being approximated by the polynomial \begin{equation} V_{M} (\phi,t) = \frac{1}{2} c_{1}(t) \phi^{2} + \frac{1}{4} c_{2}(t)\phi^{4} + \ldots + \frac{1}{2M} c_{M}(t) \phi^{2M} \label{V-poly1} \end{equation} (cf. (\ref{V-poly})) \cite{HKLM}. We refer the reader to this article for details. 7. \underline{ Zamolodchikov $c$-function}. Another interesting application of the ERG equation is the calculation of the approximate Zamolodchikov $c$-function, which characterizes the geometry of the space of interactions of a given system. The Zamolodchikov $c$-function $C(t)$ is a function which decreases monotonically along the flow (i.e. as the flow parameter $t$ grows) and which is stationary at fixed points of the theory: \[ \frac{d C (t)}{d t} |_{\mbox{fixed point}} = 0. \] Zamolodchikov proved the existence of such a function, which is unique up to a multiplicative factor, in two-dimensional unitary theories \cite{Zam}. There were some attempts to prove the existence of such a function or to construct it perturbatively in other cases \cite{c-function}. Here we will show that for the Wegner-Houghton ERG equation the flow for the potential, approximated by the polynomial (\ref{V-poly1}), is gradient and admits a $c$-function description. 
The flow being gradient means that the beta-functions of the couplings $c_{i}(t)$, parametrizing the potential, are gradients of some function $C(c_{1},c_{2},\ldots)$: \begin{equation} \frac{d}{dt} c_{i}(t) = - g_{ij} \frac{\partial C}{\partial c_{j}}, \label{grad} \end{equation} where $g_{ij}$ is a positive definite metric in the space of coupling constants. This implies that the set of renormalization flows is irreversible \cite{WZ}. Then $C$ is the $c$-function: \[ \frac{d C}{dt} = \frac{\partial C}{\partial c_{i}} \frac{ dc_{i}}{dt} \leq 0. \] The potential $V(\phi,t)$ in the polynomial approximation (\ref{V-poly1}) with $M=2$ for the Wegner-Houghton ERG equation was analyzed and the flow was shown to be gradient in this approximation \cite{HKLM}. In this case the system of flow equations consists of two equations: \begin{eqnarray} \dot{c}_{1}(t) & = & 2 c_{1} + \frac{6c_{2}}{1+c_{1}}, \nonumber \\ \dot{c}_{2}(t) & = & (4 - d) c_{2} - \frac{18 c_{2}^{2}}{(1+c_{1})^{2}} \nonumber \end{eqnarray} and can be cast into the form (\ref{grad}) with \begin{displaymath} g_{ij} = \frac{1}{(1+c_{1})^{2}} \left( \begin{array}{cc} 1 & 0 \\ 0 & \frac{(4-d)}{3} c_{2} \end{array} \right) \end{displaymath} and \begin{equation} C(c_{1},c_{2}) = - \frac{1}{2} (1+c_{1})^{4} + \frac{2}{3} (1+c_{1})^{3} - 3 c_{2}(1+c_{1})^{2} + \frac{27 c_{2}^{2}}{4-d}. \label{c-func} \end{equation} The $c$-function (\ref{c-func}) is stationary at the fixed points. For example, for $d=3$ it has a maximum at the Gaussian fixed point $(c_{1}=0,c_{2}=0)$ with $C(0,0)=1/6$ and a saddle point at the Wilson-Fisher fixed point $(c^{*}_{1}=-1/7, c_{2}^{*}=2/49)$ with $C(c_{1}^{*},c_{2}^{*})=252/2401 \approx 0.105$. \section{Fermionic theory: first attempts.} In this section we briefly discuss the results of ERG studies carried out for a two-dimensional fermionic model \cite{CKM,Co}. 
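Before entering the details of the fermionic case, let us pause to verify the scalar $c$-function statements of the previous section numerically. The sketch below (our check, using finite-difference gradients) compares the $M=2$ beta-functions with $-g_{ij}\,\partial C/\partial c_{j}$ and evaluates $C$ at the two $d=3$ fixed points.

```python
D = 3.0  # spacetime dimension of the worked example

def beta(c1, c2):
    # M = 2 flow equations for the Wegner-Houghton potential
    return (2 * c1 + 6 * c2 / (1 + c1),
            (4 - D) * c2 - 18 * c2 ** 2 / (1 + c1) ** 2)

def C(c1, c2):
    # candidate c-function, Eq. (c-func)
    return (-0.5 * (1 + c1) ** 4 + 2.0 / 3.0 * (1 + c1) ** 3
            - 3 * c2 * (1 + c1) ** 2 + 27 * c2 ** 2 / (4 - D))

def grad_flow(c1, c2, eps=1e-6):
    # -g_ij dC/dc_j with the diagonal metric g_ij quoted in the text,
    # derivatives of C taken by central finite differences
    dC1 = (C(c1 + eps, c2) - C(c1 - eps, c2)) / (2 * eps)
    dC2 = (C(c1, c2 + eps) - C(c1, c2 - eps)) / (2 * eps)
    pref = 1.0 / (1 + c1) ** 2
    return (-pref * dC1, -pref * (4 - D) / 3.0 * c2 * dC2)
```

At a generic point the two descriptions agree to finite-difference accuracy; the Wilson-Fisher point $(-1/7,2/49)$ annihilates the flow, with $C=252/2401$, while $C(0,0)=1/6$ at the Gaussian point. We now return to the fermionic theory.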
The Polchinski version of the ERG equation for pure fermionic theories can be derived in the same way as the one for the scalar theories (with another functional identity, of course). We start with a general action given by \begin{equation} S = \int d^{d}p \bar{\Psi}(p) P_{\Lambda}^{-1} (p) \Psi (-p) + S_{int}(\Psi,\bar{\Psi},\Lambda), \label{action-ferm} \end{equation} with the regularized propagator equal to \[ P_{\Lambda} (p)= i \hat{p} \frac{K\left(\frac{p^{2}}{\Lambda^{2}} \right)}{p^{2}}. \] The analog of Eq. (\ref{ERG-eq}) is: \begin{eqnarray} \frac{\partial S}{\partial t} & = & 2 \int \frac{d^{d}q}{(2\pi)^{d}} K'(q^{2}) \left[ \frac{\delta S}{\delta \psi (q)} i\hat{q} \frac{\delta S}{\delta \bar{\psi} (-q)} - \frac{\delta }{\delta \psi (q)} i\hat{q} \frac{\delta S}{\delta \bar{\psi} (-q)} \right] \nonumber \\ & + & d \cdot S + \int d^{d}q \left( \frac{1-d+\eta}{2} - 2q^{2} \frac{K'(q^{2})}{K(q^{2})} \right) \left( \bar{\psi} (q) \frac{\delta S}{\delta \bar{\psi} (q)} + \psi (q) \frac{\delta S}{\delta \psi (q)} \right) \nonumber \\ & - & \int d^{d}q \left( \bar{\psi}(q) q^{\mu} \frac{\partial '} {\partial q^{\mu}} \frac{\delta S}{\delta \bar{\psi} (q)} + \psi(q) q^{\mu} \frac{\partial '} {\partial q^{\mu}} \frac{\delta S}{\delta \psi (q)} \right). \label{ERG-ferm} \end{eqnarray} Similar to the scalar case, $\eta$ is the anomalous dimension, $\psi$ and $\bar{\psi}$ are dimensionless fermionic fields, $q$ is the dimensionless momentum and $t$ is the renormalization flow parameter $t = - \ln (\Lambda / \Lambda_{0})$. The concrete theory studied within this approach is the two-dimensional Euclidean chiral Gross-Neveu type model. This is a theory of fermions with $N$ flavours described by 2-component spinors $\psi^{a}(q)$, $\bar{\psi}^{a}(q)$, $a=1,2, \ldots, N$, with $SU(N)$ symmetry with respect to the flavour indices of the left and right fields. 
The $\gamma$-matrices are given by the Pauli matrices $\gamma_{1} = \tau_{1}$, $\gamma_{2} = \tau_{2}$, and $\tau_{3}$ plays the role of the chiral matrix $\gamma_{5}$. It can be shown that the most general action respecting such symmetries can be constructed out of the following operators: \begin{equation} S(q_{1},q_{2}) = \bar{\psi}^{a}(q_{1}) \psi^{a}(q_{2}), \; \; \; P(q_{1},q_{2}) = \bar{\psi}^{a}(q_{1}) \gamma_{5} \psi^{a}(q_{2}), \; \; \; V^{\mu}(q_{1},q_{2}) = \bar{\psi}^{a}(q_{1}) \gamma^{\mu} \psi^{a}(q_{2}). \end{equation} The ERG equation is solved for the truncated action which is represented by a finite sum $S = S^{(2,1)} + S^{(4,0)} + S^{(4,2)} + \ldots + S^{(n,m)}$ and which contains all possible operators with up to $n$ fermionic fields and up to $m$ derivatives. Such an expression can be viewed as a certain order of the derivative expansion with each term being further approximated by a polynomial in powers of fields. The general form of the first few $S^{(n,m)}$ respecting the symmetries of the theory can be shown to be \begin{eqnarray} S^{(2,1)} & = & - \int \frac{d^{2}q}{(2\pi)^{2}} \bar{\psi}(-q) i\hat{q} \psi (q), \label{action-ferm1} \\ S^{(4,0)} & = & \int \frac{d^{2}q_{1} \ldots d^{2}q_{4}}{(2\pi)^{8}} (2\pi)^2 \delta (\sum q_{i}) \left[ g_{1}(t) (S(q_{1},q_{2})S(q_{3},q_{4}) - P(q_{1},q_{2})P(q_{3},q_{4}) ) \right. \nonumber \\ & + & \left. g_{2}(t) V^{\mu}(q_{1},q_{2}) V^{\mu}(q_{3},q_{4}) \right], \nonumber \\ S^{(4,2)} & = & \int \frac{d^{2}q_{1} \ldots d^{2}q_{4}}{(2\pi)^8} (2\pi)^2 \delta (\sum q_{i}) \left[ \left(m_{1}(t) (q_{1}+q_{2})^2 + m_{2}(t) (q_{1}+q_{2}) (q_{3}-q_{4}) \right. \right. \nonumber \\ & + & \left. \left. m_{3}(t) (q_{1}-q_{2})^2 \right) \left( S(q_{1},q_{2})S(q_{3},q_{4}) - P(q_{1},q_{2})P(q_{3},q_{4}) \right) + \ldots \right]. \nonumber \end{eqnarray} The coefficients $g_{i}(t)$, $m_{i}(t)$, etc. are the running coupling constants of the various operators. 
Within such an approximation the ERG equation (\ref{ERG-ferm}) becomes a system of equations for these coupling constants. $S^{(2,1)}$ is of course the first term of the expansion of the kinetic part of (\ref{action-ferm}) in powers of momenta (as before, we use the normalization $K(0)=1$). We assume that the normalization of the kinetic term is fixed to be the canonical one, so no $t$-dependent coefficient appears here. The action with just the $S^{(2,1)}$ and $S^{(4,0)}$ terms of (\ref{action-ferm1}) is the chiral Gross-Neveu model \cite{GN}. This model was studied in a number of papers and was shown to have non-trivial fixed points which can be obtained neither by perturbative methods nor in the $1/N$ expansion \cite{DF}. One fixed point corresponds to the abelian Thirring model with $g_{1}=0$ and $g_{2}$ arbitrary. For the second one the coupling $g_{1} = 4\pi /(N+1)$ and $g_{2}$ is again arbitrary. It can be shown that the lowest order approximation which gives a fixed point solution with a non-zero value for the anomalous dimension must contain at least 6 fermions and 3 derivatives. Precisely such an action $S = S^{(2,1)} + S^{(2,3)} + S^{(4,0)} + S^{(4,2)} + S^{(6,1)} + S^{(6,3)}$ was used for the calculations in Ref. \cite{CKM}. It involves 107 operators in total: 1 with 2 fermions and 1 derivative (the kinetic term), 1 in $S^{(2,3)}$, 2 in $S^{(4,0)}$, 11 in $S^{(4,2)}$, 5 in $S^{(6,1)}$ and 87 in $S^{(6,3)}$. It turns out that the coupling constant of the operator with 2 fermions and 3 derivatives decouples from the rest of the system and does not play any role. With this action fixed point solutions were found and the corresponding critical exponents were calculated. The results are as follows. First of all, consistent limits $N \rightarrow \infty$ of the system of fixed point equations can be considered. In this limit the system simplifies dramatically. 
Two different large-$N$ regimes of the behaviour of the coupling constants were found, and one fixed point solution was obtained for each regime (let us call them fixed points I and II). Second, for finite $N > 1$, among numerous solutions (most of which are spurious) two sequences of fixed points in the space of coupling constants were identified. Namely, as $N$ grows these solutions match fixed points I and II. For each fixed point, both for finite $N$ and for $N=\infty$, the corresponding anomalous dimension $\eta$ and the critical exponent $\lambda_{1}$ of the most relevant operator were calculated. The fixed point solutions are represented by their values of $\eta$ and $\lambda_{1}$ in Fig. 1 and Fig. 2. \vskip 0.1cm \begin{figure}[ht] \epsfxsize=0.9\hsize \epsfbox{fp1.ps} \caption{The values of $ N \eta$ (solid line) and $\lambda_{1}$ (dashed line) for the sequence of fixed points approaching the type I fixed point of the $N=\infty$ case.} \end{figure} \vskip 0.1cm \begin{figure}[ht] \epsfxsize=0.9\hsize \epsfbox{fp2.ps} \caption{The values of $\eta$ (solid line) and $\lambda_{1}$ (dashed line) for the second sequence of fixed points. The upper branches approach the corresponding values of the type II fixed point of the $N=\infty$ case.} \end{figure} For the sequence of solutions which approach fixed point I, $\eta \sim 1/N$, whereas $\lambda_{1}$ approaches the finite limit $\lambda_{1}= 1.231\ldots$ (Fig. 1). The value of $\lambda_{1}$ for $N=\infty$ is scheme independent for a wide range of scheme parameters. For this fixed point the constants $g_{2}$ and $m_{1}$ remain unfixed, a feature which resembles the exact result by Dashen and Frishman \cite{DF}. For finite $N$ the value of $\lambda_{1}$ depends on the scheme, so the calculation was done for the regulating function $K(z)=\exp (-z^2)$. The second sequence of solutions, the one which matches fixed point II, corresponds to the upper branches in Fig. 2. It exists only for $N \geq 143$. 
At the value $N=142.8$ it joins another line of fixed point solutions. The values of $\eta$ and $\lambda_{1}$ are scheme dependent and were fixed in some cases by the minimal sensitivity criterion. All coupling constants are fixed for these fixed points. The case $N=1$ was considered separately. In this case, due to the Fierz relations, the number of independent operators is reduced. For example, all operators in $S^{(6,1)}$ vanish because of the Grassmann nature of the fermionic fields. The approximation under consideration then becomes the {\it complete} derivative expansion with terms up to 3 derivatives, i.e. there are no other operators with 1, 2 and 3 derivatives apart from the ones described above. The system of the ERG equations has just one fixed point solution in this case, and the values of all the constants are fixed for this solution. Let us note that in this respect the case of finite $N > 1$ is qualitatively different: there the approximation considered here is a truncation of the derivative expansion in the number of fields, and one observes a large number of fixed point solutions for each $N$. Most of them are, of course, spurious, similar to the polynomial approximation in the scalar case. Since successive approximations were not studied, it was not possible to apply the criterion of stability of solutions under various approximations, as was done in the scalar theory \cite{HKLM}. So, only those fixed points which form a sequence of solutions matching the type I or type II solutions when $N \rightarrow \infty$ were chosen. \section{Conclusions} We would like to finish with a few concluding remarks. 1. The RG functional equations of type (\ref{func}) and the ERG equation (\ref{ERG-eq}) both reflect functional self-similarity of the corresponding quantities and, thus, have very much in common at the fundamental level. The derivation of the RG equation essentially relies on the multiplicative character of finite Dyson transformations. 
They appear because of the freedom in fixing the finite arbitrariness left after the removal of ultraviolet divergences. This means, first, that some underlying perturbative expansion in a coupling constant (or analyticity in the coupling constant) is assumed and, second, that by construction the RG approach is formulated for renormalizable theories, with the upper cutoff $\Lambda_{0}$ being sent to infinity and, thus, all irrelevant operators being removed. This is not the case for the ERG approach, where the cutoff is kept finite. Thus, the built-in functional self-similarity of functional RG equations of type (\ref{func}) imposes strong restrictions and reduces the number of arguments of $\bar{g}(x,g)$, but these equations do not contain enough dynamical information to determine the function \cite{BSh}. The only regular way to proceed is to use renormalization group functions (the Gell-Mann-Low $\beta$-function, etc.) calculated within perturbation theory, where the contributions of irrelevant operators enter through loop corrections. Contrary to this, the ERG equation (\ref{ERG-eq}) basically contains information about the renormalization group evolution of the whole Wilson effective action, i.e. about all operators and, as such, does not need any additional inputs. This allows one to search for non-perturbative (i.e. non-analytical in the coupling constant) solutions \footnote{I thank J.I. Latorre for attracting my attention to these issues and clarifying some of them.}. \noindent 2. The ERG method was demonstrated to be a powerful approach for non-perturbative studies of the continuum limit in quantum field theory for scalar and fermionic theories. \noindent 3. The derivative expansion proper is an effective and reliable technique which allows one to search for fixed point solutions and to calculate critical exponents. When it is combined with a further polynomial approximation, some qualitative features can be reproduced analytically. 
However, numerous spurious solutions appear and a special procedure should be applied to identify the true fixed points. \noindent 4. For ERG equations with an arbitrary cutoff function $K(z)$, results beyond the leading ${\cal O}(\partial^{0})$ approximation depend on $K(z)$. This regulator dependence is similar to the scheme dependence of usual perturbation theory in the RG approach. It is also related to the important issue of reparametrization invariance. We did not consider it here, and the reader is referred to other articles where this problem is discussed \cite{repinv,Mo8}. \noindent 5. In spite of some progress in the ERG approach in gauge theories \cite{gauge1}, further developments are needed before we have a regular tool for obtaining non-perturbative quantitative results for this class of theories. \section*{Acknowledgements} I thank Jose Gaite, Tim Morris and Chris Stephens for many interesting and illuminating discussions. I am grateful to Jordi Comellas, Jos\'e Ignacio Latorre and Dmitri Vasilievich Shirkov for reading the paper and for their valuable and helpful remarks. The work was supported by the grants INTAS-93-1630.EXT and CERN/P/FAE/1030/95 (JNICT, Portugal) and by CIRIT (Generalitat de Catalunya). \section*{References}
\section{Introduction and motivation} It has been shown in several studies, both theoretically and empirically, that training an ensemble of models, i.e. aggregating predictions from multiple models, is superior to training a single model\cite{brown2005managing,NIPS2018_7614,Fernandez2014hundreds,NIPS2019_9097,liu1999ensemble,ren2016ensemble, NIPS1995_1044, NIPS2019_8443, Zhang2017ensemble, adaboost}. Many works point out that one of the keys for an ensemble to perform well is to encourage diversity among the models \cite{acc_and_div_ens,NIPS2016_6289,NIPS2016_6270,liu1999ensemble,shi2018crowd,NIPS2018_7831,adaboost}. This property is the main motivation for our work. \par Sigmoid and Softmax are both well-known functions which are used for classification (the former for binary and the latter for multi-label classification). Both are used to generate distribution vectors $q_Y(x)=\{q_1(x),..,q_L(x)\}$ over the labels $Y=\{1,..,L\}$, where $x$ is a given input. For Deep Neural Networks (DNNs) the framework of applying a Sigmoid/Softmax on top of the network is very popular, where the goal is to estimate the real distribution $p_Y(x)=\{p_1(x),..,p_L(x)\}$, which might be a 1-hot vector for a hard label. Henceforth, we omit $x$ unless it is crucial for some definition or proof. We denote $p=p_Y(x),q=q_Y(x)$. We optimize $q$ by minimizing the CE cost function \begin{align} H(p,q)&=E_p[-\log q] \nonumber\\ &=-\sum_{i=1}^L p_i\log q_i.\label{CE} \end{align} The optimization is usually gradient based\cite{kingma2014adam, RMSProp}. Hence, one of the main motivations for using the CE cost function over Sigmoid/Softmax outputs is the linear structure of the gradient, which is similar to that obtained by applying the Mean Squared Error (MSE) method over a linear regression estimator. Studies show that this property is important for preventing vanishing gradient phenomena \cite{Goodfellow-et-al-2016,nielsen2015neural}. \par Now let us define the setting of the ensemble problem. 
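Parenthetically, the linear gradient structure mentioned above is the standard softmax identity $\partial H(p,\mathrm{softmax}(z))/\partial z_i = q_i - p_i$; the sketch below (our illustration, with arbitrary logits) verifies it against central finite differences.

```python
import math

def softmax(z):
    m = max(z)                       # max-shift for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def cross_entropy(p, q):
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

z = [0.3, -1.2, 0.8]                 # arbitrary logits
p = [0.0, 1.0, 0.0]                  # hard label
q = softmax(z)

# analytic gradient of H(p, softmax(z)) w.r.t. the logits: q - p
analytic = [qi - pi for qi, pi in zip(q, p)]

# central finite-difference check
eps = 1e-6
numeric = []
for i in range(len(z)):
    zp = list(z); zp[i] += eps
    zm = list(z); zm[i] -= eps
    numeric.append((cross_entropy(p, softmax(zp))
                    - cross_entropy(p, softmax(zm))) / (2 * eps))
```

With this identity in mind, we turn to the ensemble setting.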
We train $K$ classifiers, with distribution functions $q^1,..,q^K$, to generate the ensemble $\overline{q}=\frac{1}{K}\sum_{k=1}^Kq^k$, which estimates the real distribution $p$. This setting is very common, and the straightforward way to tackle it is to train each model independently using the CE cost function $H(p,q^k)$. Diversity is encouraged by using different training samples or different seeds for weight initialization. However, to the best of our knowledge, there is no explicit way to control the ``amount'' of diversity between the classifiers. \par In this work we present a novel framework, called Amended Cross Entropy (ACE), which makes it possible to train each model and, simultaneously, to achieve diversity between the classifiers. Our main result in this work is the introduction of a new cost function \begin{align} H(p,q^k)-\frac{\gamma}{K-1}\sum_{j\neq k}H(q^j,q^k), \end{align} which is applied to the $k$-th classifier and depends on the other classifiers. We see that ACE is built from the vanilla CE between $p$ and $q^k$, minus the average of the CE between $q^k$ and the other estimators, scaled by $\gamma$. This result is very intuitive, since we wish to minimize the CE of the estimated distribution with the real one while enlarging the CE of the estimator with the others, i.e. encouraging diversity. The hyper-parameter $\gamma\in [0,\frac{K-1}{K}]$ explicitly controls the diversity, and is fine-tuned in order to achieve optimal results. The development of ACE starts from an assumption about the desired structure of the gradient. As we show in this paper, a similar assumption lies at the base of applying CE over Softmax. We develop a variant especially for DNNs, which can be stacked on top of the network instead of the vanilla Softmax layer, and which makes it possible to yield superior results without significantly increasing the number of parameters or the computational resources. 
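A direct transcription of the ACE cost function reads as follows (our sketch; the toy distributions are arbitrary). For $\gamma=0$ it reduces to the vanilla CE, and since each $H(q^j,q^k)$ is positive, minimizing ACE with $\gamma>0$ pushes $q^k$ away from the other estimators.

```python
import math

def cross_entropy(p, q):
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

def ace(p, qs, k, gamma):
    """Amended cross entropy for classifier k:
    H(p, q^k) - gamma/(K-1) * sum_{j != k} H(q^j, q^k)."""
    K = len(qs)
    penalty = sum(cross_entropy(qs[j], qs[k]) for j in range(K) if j != k)
    return cross_entropy(p, qs[k]) - gamma / (K - 1) * penalty

# toy example: K = 3 classifiers over L = 3 labels, hard label p;
# gamma ranges over [0, (K-1)/K] = [0, 2/3] here
p = [0.0, 1.0, 0.0]
qs = [[0.2, 0.7, 0.1], [0.1, 0.8, 0.1], [0.3, 0.5, 0.2]]
```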
\par This work has been inspired by the Negative Correlation Learning (NCL) \cite{brown2005managing,liu1999ensemble,shi2018crowd} framework, which is used for regression ensembles. In the next section we will present the NCL framework, its development and its results, in order to explain the analogous approach we used in our work. \section{Related work: Negative Correlation Learning (NCL)} \citet{liu1999ensemble} and \citet{brown2005managing} presented the NCL framework as a solution to the diversity issue for regression ensembles. Let us denote $X$ as the vector of features and $Y$ as the target. The goal is to find $F:\mathcal{X}\rightarrow \mathcal{Y}$ which yields as low an error as possible w.r.t. the MSE criterion, i.e. to minimize \begin{align} e(F) = \int (F(X, \theta)-Y)^2p(X,Y)d(X,Y).\label{expected_square_error} \end{align} Here, $\theta$ stands for the parameters of $F$. In practice, the distribution $p(X,Y)$ is unknown, so we use $N$ realizations (a training set) $\{(x_1,y_1),..,(x_N,y_N)\}$ to estimate \eqref{expected_square_error} with the empirical MSE $\hat{e}(F)=\frac{1}{N}\sum^N_{i=1}(F(x_i, \theta)-y_i)^2$. Under the assumption that the $(X_i,Y_i)$ are i.i.d., or at least stationary and ergodic, $\hat{e}(F)$ converges to $e(F)$. We use the short notation $F$ to denote $F(X,\theta)$. Instead of \eqref{expected_square_error} we can use the expectation operator $E$ and decompose the error into the known bias-variance structure \begin{align} E[(F-Y)^2] &= (E[F]-Y)^2 + E[(F-E[F])^2]\nonumber\\ &= \textit{bias(F)}^2 + \textit{variance(F)}. \end{align} A common way to apply an ensemble of models is to average multiple trained estimators $\{F^1,..,F^K\}$ \begin{align} \overline{F} = \frac{1}{K}\sum^K_{k=1}F^k. 
\end{align} By checking the decomposition of the ensemble expected error it is straightforward to show that \begin{align} E[(\overline{F}-Y)^2] = &(E[\overline{F}]-Y)^2 + E[(\overline{F}-E[\overline{F}])^2]\nonumber\\ = &\left(\frac{1}{K}\sum^K_{k=1}(E[F^k] - Y)\right)^2 + \frac{1}{K^2}\sum^K_{k=1}E[(F^k-E[F^k])^2]\nonumber \\ &+ \frac{1}{K^2}\sum^K_{k=1}\sum_{j\neq k}E[(F^k-E[F^k])(F^j-E[F^j])]\nonumber\\ = &\overline{\textit{bias}}(F)^2 + \overline{\textit{variance}}(F) + \overline{\textit{covariance}}(F). \end{align} This outcome is called the \textit{bias-variance-covariance} decomposition, and is the main motivation for NCL. We notice that by reducing the correlation between the estimators of an ensemble, the ensemble might yield a lower error. Based on this, \citet{liu1999ensemble} proposed a regularization factor that is added to the cost function of each of the single estimators during the training phase. This factor is an estimation of the sum of covariances between the trained estimator and the others. The factor is multiplied by a hyper-parameter $\gamma$, which explicitly controls the ``amount'' of diversity between the single estimator and the other estimators in the ensemble \begin{align} e^k &= \frac{1}{2}(F^k-Y)^2 + \gamma (F^k-\overline{F})(\sum_{j\neq k}(F^j-\overline{F}))\nonumber\\ &= \frac{1}{2}(F^k-Y)^2 - \gamma(F^k-\overline{F})^2.\label{distance1} \end{align} Notice that in order to avoid a factor of $2$ in the gradient analysis, we multiply the MSE by a factor of $\frac{1}{2}$. By setting $\gamma=0$ we get the conventional MSE cost function, i.e.\ each model is optimized independently. \paragraph{Gradient analysis} Gradient-based optimization~\cite{kingma2014adam, RMSProp} is by far the most common way of training such models, so analyzing the gradient behaviour of a cost function is worthwhile.
Let us check the gradient of the cost function $e^k$ with respect to $F^k$ \begin{align} \frac{\partial e^k}{\partial F^k} &= (F^k-Y) - \gamma[2(1-\frac{1}{K})(F^k-\overline{F})]. \end{align} By defining $\lambda=2\gamma(1-\frac{1}{K})$, we get \begin{align} \frac{\partial e^k}{\partial F^k}&=(F^k-Y) - \lambda(F^k-\overline{F}) \nonumber\\ &= (1-\lambda)(F^k-Y) + \lambda(\overline{F}-Y) \label{NCL_gradient}. \end{align} We notice again that by setting $\gamma=\lambda=0$ we get the same gradient as with independent training. \subsection{Usage of NCL} \citet{liu1999ensemble} and \citet{brown2005managing} suggested a vanilla approach for optimizing multiple regressors: training multiple regression models, which do not have to share the same architecture, simultaneously, in order to reduce the correlation between them. The architecture is presented in Fig.~\ref{Regular}. However, with this approach the computational cost and the number of parameters increase significantly. For example, if we use the same architecture for all of the $K$ models, we use $K$ times the number of parameters used by a single model. If we train a DNN with millions of parameters, this might result in a non-scalable training scheme. \begin{figure}[!ht] \centering \includegraphics[clip, trim=3.2cm 12.5cm 0.5cm 9.8cm,width=0.9\linewidth]{vanilla_NCL.pdf} \caption{NCL. A sketch of a training phase of the $k$-th model. First, the input is processed by $K$ models, which yields the predictions $\{F^1,..,F^K\}$. Using this, the cost function $e^k$ is calculated. Finally, the gradient of $\theta^k$ is calculated and model $k$ is updated accordingly.} \label{Regular} \end{figure} \par In order to handle this, \citet{shi2018crowd} suggested a new approach: stacking a layer consisting of an ensemble of regressors on top of a DNN instead of the vanilla regression layer.
In this way, they obtained the benefit of NCL without significantly increasing the number of parameters or the computational cost. This architecture, called \textit{D-ConvNet}, yields state-of-the-art results in a \textit{Crowd Counting} task. The work, as well as a sketch of the architecture, can be found in their paper~\cite{shi2018crowd}. \section{Amended Cross Entropy (ACE)} In this section we first show the main motivation for using the CE cost function for a Softmax classifier. Like many other cost functions (MSE, Mean Absolute Error (MAE), etc.), CE achieves its minimum when the two distribution vectors are equal. However, CE is the only cost function which yields a linear gradient for a distribution generated by Softmax, similarly to the gradient of the MSE cost function over a linear regressor. We show this for the single-classifier case first; later we use this approach analogously for multiple classifiers, where we require the same gradient structure as in NCL in order to develop the ACE framework analytically. \paragraph{CE cost function for Softmax classifier} Let us denote $L$ as the size of the set of events (labels), and $p=\{p_1,..,p_L\}$ as the real distribution vector for a given input (which is a 1-hot vector for a hard label). We wish to train an estimator $q=\{q_1,..,q_L\}$ for the real distribution. We denote the estimator parameters as $\theta$. The estimator generates a raw vector $z=\{z_1,..,z_L\}$, which is a function of the input, and applies Softmax $\sigma(z)$ over it in order to yield the estimator $q$, i.e. \begin{align} q &= \sigma(z) \nonumber\\ &= \left\{\frac{e^{z_1}}{\sum_{l=1}^Le^{z_l}}, \dots ,\frac{e^{z_L}}{\sum_{l=1}^Le^{z_l}}\right\} \nonumber\\ &=\{q_1,..,q_L\}. \end{align} A CE cost function is then applied to measure the error between the estimator and the real distribution \eqref{CE}.
In order to optimize the estimator's parameters $\theta$, gradient-based methods are applied~\cite{kingma2014adam, RMSProp}. The gradient is calculated using the chain rule \begin{align} \nabla_\theta H(p,q) &= \nabla_\theta z \nabla_z H(p,q). \end{align} Now, let us calculate $\nabla_z H(p,q)$ explicitly \begin{align} \nabla_z H(p,q) &= \nabla_z \left( -\sum_{i=1}^L p_i\log q_i\right) \nonumber\\ &= \nabla_z\left( -\sum_{i=1}^L p_i\log \frac{e^{z_i}}{\sum_{l=1}^Le^{z_l}}\right) \nonumber\\ &= \left\{ \frac{\partial}{\partial z_1}\left(-\sum_{i=1}^L p_i\log \frac{e^{z_i}}{\sum_{l=1}^Le^{z_l}}\right),..,\frac{\partial}{\partial z_L}\left(-\sum_{i=1}^L p_i\log \frac{e^{z_i}}{\sum_{l=1}^Le^{z_l}}\right) \right\}\nonumber\\ &= \left\{ \frac{e^{z_1}}{\sum_{l=1}^Le^{z_l}} - p_1, \dots ,\frac{e^{z_L}}{\sum_{l=1}^Le^{z_l}}-p_L \right\} \nonumber \\ &= \left\{q_1 - p_1, \dots, q_L-p_L \right\} \nonumber\\ &= q-p.\label{gradient1} \end{align} We see that a linear gradient structure is obtained when applying CE over a Softmax classifier. This structure is similar to that of the MSE cost function over a linear regression estimator~\cite{Goodfellow-et-al-2016,nielsen2015neural}. \subsection{ACE} Inspired by the NCL result and by our belief that an important consideration for the choice of a cost function is its gradient behaviour (as long as it is a valid cost function), we wish to find a cost function that yields the same properties. Therefore, we first assume the gradient structure, and later integrate it in order to find the appropriate cost function. Let us denote $K$ as the number of classifiers in the ensemble, $e^k$ as the $k$-th model cost function, $z^k$ as the raw output vector of the $k$-th model, $q^k=\sigma(z^k)$ as the estimated distribution of the $k$-th model, and $\theta^k$ as the parameters of the $k$-th model. We would like to train an ensemble of models $\overline{q} = \frac{1}{K}\sum_{k=1}^K q^k$ to estimate $p$.
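The linear gradient \eqref{gradient1} is easy to confirm with a finite-difference check; a minimal NumPy sketch (arbitrary test vectors, for illustration only):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

def ce_of_logits(p, z):
    """H(p, softmax(z))."""
    return -np.sum(p * np.log(softmax(z)))

rng = np.random.default_rng(0)
z = rng.normal(size=5)
p = softmax(rng.normal(size=5))   # any valid probability vector

# Analytic gradient from \eqref{gradient1}: q - p.
analytic = softmax(z) - p

# Central finite differences over each logit.
h = 1e-6
numeric = np.zeros_like(z)
for i in range(len(z)):
    dz = np.zeros_like(z)
    dz[i] = h
    numeric[i] = (ce_of_logits(p, z + dz) - ce_of_logits(p, z - dz)) / (2 * h)

assert np.allclose(analytic, numeric, atol=1e-6)
```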
Since the gradient structure might be one of the most important considerations for choosing and constructing a cost function, by combining the results of \eqref{NCL_gradient} and \eqref{gradient1} we assume a gradient \begin{align} \nabla_{z^k}\, e^k &= (1-\lambda)(q^k-p) + \lambda (\overline{q}-p)\nonumber\\ &= (q^k-p) -\frac{\lambda}{K}\sum_{j\neq k}(q^k-q^j).\label{gradient2} \end{align} This assumption is the foundation of our proposed method and the basis for developing the ACE framework. In order to find $e^k$ we need to integrate the above with respect to $z^k$ \begin{align} e^k &= \int \left( (q^k-p) -\frac{\lambda}{K}\sum_{j\neq k}(q^k-q^j) \right) dz^k \nonumber\\ &=\int \left(q^k-p \right) dz^k -\frac{\lambda}{K}\sum_{j\neq k}\int \left(q^k-q^j \right) dz^k.\label{integral1} \end{align} By reverse engineering \eqref{gradient1}, and using the fact that $p$ and $q^j,\; \forall j\neq k$ are independent of $z^k$, we get \begin{align} e^k &= H(p,q^k)-\frac{\lambda}{K}\sum_{j\neq k}H(q^j,q^k) + C, \label{result_amended} \end{align} where $C$ is a constant independent of $z^k$. We set $C=0$. We can also set $\gamma=\lambda\frac{K-1}{K}$ in order to get \begin{align} H(p,q^k)-\frac{\gamma}{K-1}\sum_{j\neq k}H(q^j,q^k), \end{align} i.e.\ the vanilla CE minus $\gamma$ times the average of the CE between the $k$-th classifier and the others. Notice that by setting $\lambda=\gamma=0$ we get the regular CE cost function. \paragraph{Alternative formulation and analogy to NCL} Using algebraic manipulations, one can show that ACE \eqref{result_amended} has a structure similar to that of NCL \eqref{distance1}. Let us check the result in \eqref{result_amended} \begin{align} e^k &= H(p,q^k)-\frac{\lambda}{K}\sum_{j\neq k}H(q^j,q^k)\nonumber\\ &= H(p,q^k) - \lambda H(\overline{q},q^k) +\frac{\lambda}{K} H(q^k,q^k).\label{nicer_ACE} \end{align} Note that $H(q^k,q^k)=H(q^k)$, i.e.\ the entropy of $q^k$.
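The rewrite in \eqref{nicer_ACE} can also be confirmed numerically with random distribution vectors; a quick sketch (illustrative values only):

```python
import numpy as np

def H(p, q):
    """Cross entropy H(p, q) = -sum_i p_i log q_i."""
    return -np.sum(p * np.log(q))

rng = np.random.default_rng(1)
K, L = 4, 6
q = [rng.dirichlet(np.ones(L)) for _ in range(K)]   # K random distributions
p = rng.dirichlet(np.ones(L))
lam, k = 0.3, 0

q_bar = sum(q) / K
# Left: ACE as derived in \eqref{result_amended} (with C = 0).
lhs = H(p, q[k]) - lam / K * sum(H(q[j], q[k]) for j in range(K) if j != k)
# Right: the equivalent form \eqref{nicer_ACE}.
rhs = H(p, q[k]) - lam * H(q_bar, q[k]) + lam / K * H(q[k], q[k])
assert np.isclose(lhs, rhs)
```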
Now let us check the result in \eqref{distance1} \begin{align} e^k &= \frac{1}{2}(F^k-Y)^2 - \gamma(F^k-\overline{F})^2\nonumber\\ & = \frac{1}{2}(F^k-Y)^2 - \gamma(F^k-\overline{F})^2 + (F^k-F^k)^2. \end{align} If we refer to the MSE and CE as divergence operators $D_{MSE}$ and $D_{CE}$, respectively, we can observe that both of the cost functions have the same structure \begin{align} e_{NCL}^k &= a_1 D_{MSE}(F^k,Y) - a_2 D_{MSE}(F^k,\overline{F}) + a_3 D_{MSE}(F^k,F^k),\label{distance3}\\ e_{ACE}^k &= b_1 D_{CE}(q^k,p) - b_2 D_{CE}(q^k,\overline{q}) + b_3 D_{CE}(q^k,q^k),\label{distance4} \end{align} where $a_i,b_i$ are constants. The first component of both expressions in \eqref{distance3} and \eqref{distance4} is the divergence between the real value and the estimator's prediction, i.e. the vanilla error. The second component is a negative divergence between estimator's prediction and the ensemble prediction. Minimizing it (maximizing the divergence) encourages diversity between the estimator and the ensemble. The last component is the minimum of the divergence, where for MSE it is zero and for CE it is the entropy. \paragraph{Non-uniform weights} Let us check the case where our ensemble is aggregated using non-uniform weights, i.e. $\overline{q} = \sum_{k=1}^K \alpha^k q^k$, where $\alpha^k\geq 0$, $\forall k$, and $\sum_{k=1}^K \alpha^k=1$. Instead of \eqref{gradient2} we get \begin{align} \nabla_{z^k}\, e^k &= (1-\lambda)(q^k-p) + \lambda (\overline{q}-p)\nonumber\\ &= (q^k-p) -\lambda\sum_{j\neq k}\alpha^j(q^k-q^j).\label{gradient3} \end{align} Hence, for weights $\alpha^1,..,\alpha^K$ which are independent of $z^k$, instead of \eqref{result_amended} we obtain \begin{align} e^k &= H(p,q^k)-\lambda\sum_{j\neq k}\alpha^j H(q^j,q^k).\label{non_uniform_ACE} \end{align} \section{Implementation} In this section we examine two alternative implementations for the result we got above. 
\subsection{ACE for multiple models} The straightforward vanilla implementation of our result is training multiple models simultaneously using ACE. In this approach we train $K$ models and fine-tune $\lambda$ to yield the best ensemble result. \begin{wrapfigure}{r}{0.57\textwidth} \begin{minipage}{0.57\textwidth} \begin{algorithm}[H] \SetAlgoLined \For{$k$ in $\{1,..,K\}$}{ calculate predictions $q^k$\; } \For{$k$ in $\{1,..,K\}$}{ calculate loss $e^k$ \eqref{nicer_ACE}\; calculate gradient $\nabla_{\theta^k}e^k$\; apply optimization step over $\theta^k$ using $\nabla_{\theta^k}e^k$\; } \caption{Training step of ACE for $K$ models with respect to a single input with probability vector $p$} \label{algo_1} \end{algorithm} \end{minipage} \end{wrapfigure} The models do not have to be of the same architecture. Let us denote $\theta^1,..,\theta^K$ as the parameters of the models $q^1,..,q^K$, respectively. The loss functions $e^k$ are calculated as in \eqref{nicer_ACE}. We calculate the gradient for each parameter set $\theta^k$ with respect to the corresponding loss function $e^k$ (Algorithm \ref{algo_1}). This can also be used over a batch of samples by averaging the gradients. A sketch of this architecture can be viewed in Fig.~\ref{vanilla}. In the inference phase, we calculate the outputs of all of the models and average them to yield a prediction. \begin{figure}[!ht] \centering \includegraphics[clip, trim=3.2cm 12.5cm 0.5cm 9.8cm,width=0.9\linewidth]{vanilla_amended_CE.pdf} \caption{ACE for multiple models. A sketch of a training phase of the $k$-th model. First, the input is processed by $K$ models, which yields the distribution vectors $\{q^1,..,q^K\}$. Later, the cost function $e^k$ is calculated.
Finally, the gradient of $\theta^k$ is calculated and model $k$ is updated accordingly.} \label{vanilla} \end{figure} \subsection{Stacked Mixture Of Classifiers} A drawback of the above usage is that it takes $K$ times the computational power and memory compared to training a single vanilla model. In order to avoid this overhead while still gaining the advantages of training multiple classifiers using ACE, we developed a new architecture called Stacked Mixture Of Classifiers (SMOC). This implementation is an ad-hoc variant for DNNs. Let us denote $L$ as the depth of a DNN, and $Z_{L-1}$ as the output vector of the first $L-1$ layers of the net. Usually, we stack a fully-connected layer and Softmax activation on top of $Z_{L-1}$ such that $q = \sigma(wZ_{L-1} + b)$, where $w$ and $b$ are the matrix and the bias of the last fully-connected layer, respectively, and $q$ is the output of the DNN. Instead, we stack a mixture of $K$ fully-connected+Softmax classifiers, and train them with respect to $K$ different loss functions. The output of each classifier is $q^k = \sigma(w^kZ_{L-1} + b^k)$, where $w^k$ and $b^k$ are the matrix and the bias of the $k$-th fully-connected final layer. For optimization we use the ACE loss \eqref{nicer_ACE}. In the inference phase we use an average of the $K$ classifiers $\overline{q} = \frac{1}{K}\sum_{k=1}^Kq^k$. A sketch of SMOC can be seen in Fig.~\ref{smoc}. The parameter vector $\theta_L^k$ is the set of parameters of the $k$-th final layer, i.e. $\theta_L^k=\{w^k,b^k\}$. As we can see, the number of parameters is increased by $|\theta_L^k|\times (K-1)$ compared to a similar DNN with a vanilla final layer. Using this approach, we can gain a highly diversified ensemble without having to train multiple full models or increase the number of parameters significantly.
Instead, we use a regular single DNN of $L-1$ layers, and create an ensemble by training multiple fully-connected+Softmax layers over its output. \paragraph{SMOC gradient calculation optimization} We can think about this architecture as training $K$ DNNs which share the parameters of the first $L-1$ layers. Let us denote the shared parameters as \begin{wrapfigure}{r}{0.6\textwidth} \begin{minipage}{0.6\textwidth} \begin{algorithm}[H] \SetAlgoLined calculate $Z_{L-1}$\; \For{$k$ in $\{1,..,K\}$}{ calculate predictions $q^k$\; } \For{$k$ in $\{1,..,K\}$}{ calculate loss $e^k$ \eqref{nicer_ACE}\; calculate gradient $\nabla_{\theta_L^k}e^k$\; calculate gradient $\nabla_{Z_{L-1}}e^k$\; } calculate $g(\theta_{L-1})$ \eqref{chain_theta}\; apply optimization step for $\{\theta_{L-1}, \theta_L^1,..,\theta_L^K\}$ using $\{g(\theta_{L-1}),\nabla_{\theta_L^1}e^1 ,.., \nabla_{\theta_L^K}e^K\}$ respectively\; \caption{Training step of SMOC with $K$ stacked classifiers w.r.t. a single input with probability vector $p$} \label{algo_2} \end{algorithm} \end{minipage} \end{wrapfigure} $\theta_{L-1}$. Similar to ACE for multiple models, we need to calculate $K$ losses and the gradients with respect to them. A naive way to do so would be to calculate the gradients separately for each cost function and to average them over the shared parameters $\theta_{L-1}$. However, this computation has the same complexity as training $K$ different models. Since the gradients are calculated using the chain rule (back-propagation), we can use it to tackle this issue. Let us denote $g(\theta_{L-1})$ as the average of the gradients over the shared parameters \begin{align}\label{average} g(\theta_{L-1}) = \frac{1}{K}\sum_{k=1}^{K}\nabla_{\theta_{L-1}}e^k. \end{align} By using the chain rule we get \begin{align}\label{chain} \nabla_{\theta_{L-1}}e^k &= \nabla_{Z_{L-1}}e^k\cdot\nabla_{\theta_{L-1}}Z_{L-1} .
\end{align} By combining \eqref{average} and \eqref{chain}, and due to the linearity of the gradient, we get \begin{align} g(\theta_{L-1}) &=\frac{1}{K}\sum_{k=1}^K\nabla_{\theta_{L-1}}e^k\nonumber\\ &= \frac{1}{K}\sum_{k=1}^K (\nabla_{Z_{L-1}}e^k\cdot\nabla_{\theta_{L-1}}Z_{L-1})\nonumber\\ &= \left(\frac{1}{K}\sum_{k=1}^K \nabla_{Z_{L-1}}e^k\right)\cdot\nabla_{\theta_{L-1}}Z_{L-1}. \label{chain_theta} \end{align} Therefore, we can average $\{\nabla_{Z_{L-1}}e^1,..,\nabla_{Z_{L-1}}e^K\}$ and calculate the gradient for $\theta_{L-1}$ once. The gradients for each $\theta_L^k$ must still be calculated separately with respect to $e^k$ (Algorithm \ref{algo_2}). \begin{figure}[!ht] \centering \includegraphics[clip, trim=0.55cm 12.2cm 0.5cm 9.8cm,width=0.99\linewidth]{SMOC.pdf} \caption{SMOC. A sketch of a training phase of the $k$-th classifier. First, the input is processed by a DNN, which generates $Z_{L-1}$. Second, $Z_{L-1}$ is processed by a pool of classifiers, which yields the distribution vectors $\{q^1,..,q^K\}$. Each classifier is optimized by its corresponding ACE cost function $e^k$. The gradient w.r.t. $\theta_L^k$ is calculated and classifier $k$ is updated accordingly. The gradient w.r.t. $Z_{L-1}$ is calculated, and the $K$ such gradients are then averaged and used to calculate the gradient w.r.t. $\theta_{L-1}$ \eqref{chain_theta}.} \label{smoc} \end{figure} \section{Experiments} \subsection{ACE for multiple models} For the vanilla version we conducted an experiment over the MNIST dataset. MNIST is a standard toy dataset, where the task is to classify images into 10 digit classes.
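The training loop follows Algorithm \ref{algo_1}; below is a minimal self-contained NumPy sketch of the procedure on synthetic 10-class data. The data, the linear softmax architecture and the hyper-parameter values here are illustrative stand-ins, not the actual experimental setup (the experiment itself used one-hidden-layer DNNs on MNIST):

```python
import numpy as np

rng = np.random.default_rng(0)
K, D, C, lam, lr = 5, 20, 10, 0.5, 0.1

# Synthetic, linearly generated 10-class data (stand-in for MNIST).
X = rng.normal(size=(500, D))
true_W = rng.normal(size=(C, D))
labels = (X @ true_W.T).argmax(axis=1)
P = np.eye(C)[labels]                      # 1-hot targets

def softmax(Z):
    e = np.exp(Z - Z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# K linear softmax classifiers with different random initializations.
Ws = [rng.normal(scale=0.01, size=(C, D)) for _ in range(K)]

for step in range(200):
    Q = [softmax(X @ W.T) for W in Ws]     # predictions of all K models
    Q_bar = sum(Q) / K
    for k in range(K):
        # ACE gradient w.r.t. the logits, \eqref{gradient2}, batch-averaged:
        # (1 - lambda)(q^k - p) + lambda (q_bar - p)
        G = (1 - lam) * (Q[k] - P) + lam * (Q_bar - P)
        Ws[k] -= lr * G.T @ X / len(X)

ensemble = sum(softmax(X @ W.T) for W in Ws) / K
print("train accuracy:", (ensemble.argmax(axis=1) == labels).mean())
```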
For the ensemble, \begin{wrapfigure}{r}{0.6\textwidth} \begin{minipage}{0.6\textwidth} \begin{table}[H] \caption{ACE for multiple models - MNIST dataset} \centering \begin{tabular}{lllll} \toprule &\multicolumn{2}{c}{Ensemble scores} & \multicolumn{2}{c}{Averaged single NN score} \\ \cmidrule(r){1-5} $\lambda$ & Accuracy & CE & Accuracy & CE \\ \midrule \textbf{0} & 0.9790 & 0.0669 & 0.9767 & 0.0810\\ \textbf{0.05} & 0.9798 & 0.0663 & \textbf{0.9770} & 0.0809\\ \textbf{0.1} & 0.9799 & 0.0664 & 0.9768 & \textbf{0.0802}\\ \textbf{0.3} & 0.9797 & 0.0658 & 0.9767 & 0.0806\\ \textbf{0.5} & \textbf{0.9802} & \textbf{0.0649} & 0.9764 & 0.0842\\ \textbf{0.7} & 0.9800 & 0.0659 & 0.9760 & 0.0866\\ \bottomrule \label{MNIST_table} \end{tabular} \end{table} \end{minipage} \end{wrapfigure} we used 5 models of the same architecture: a DNN with a single hidden layer and ReLU activation. The results include both the accuracy and the CE of the predictions over the test set. We ran over multiple values of $\lambda\in [0,1]$, where for $\lambda=0$, i.e.\ vanilla CE, we trained the models independently (with different training batches). The results in Table \ref{MNIST_table} show that by applying ACE instead of the vanilla CE (i.e.\ $\lambda>0$) we succeeded in reducing the error of the ensemble and increasing its accuracy. We also report the averaged accuracy and CE of a single DNN. Interestingly, even though the result of a single DNN deteriorates at the optimal $\lambda$, the ensemble result is superior. The reason is that the penalty added to each DNN during training causes it to perform worse individually; however, the penalty is coordinated with the other DNNs so that the ensemble performs better. The results were averaged over 5 experiments. \subsection{Stacked Mixture Of Classifiers} We conducted studies of the SMOC architecture over the CIFAR-10 dataset \cite{krizhevsky2009learning}.
We used the architecture and code of ResNet 110 \cite{he2016deep} and stacked on top of it an ensemble of 10 fully-connected+Softmax layers instead of the single one originally used. This added $5850$ parameters to a model with an original size of $1731002$ parameters, i.e.\ it enlarged the model by $0.34\%$. The results are shown in Table \ref{CIFAR-10_table}, which also includes the results for a single classifier with a vanilla single Softmax layer ($K=1$). The results have been averaged over 5 experiments with different seeds. We notice that the optimal $\lambda$ reduces the accuracy error by $\sim 7\%$ compared to $K=1$, with almost no cost in the number of parameters and computational power. The CE is also reduced significantly. \begin{table}[!ht] \caption{Stacked Mixture Of Classifiers - CIFAR-10 dataset} \label{CIFAR-10_table} \centering \begin{tabular}{lllllllll} \toprule $K$ & $1$ & $10$ & $10$ & $10$ & $10$ & $10$ & $10$ & $10$ \\ $\lambda$ & & $0$ & $0.001$ & $0.01$ & $0.05$ & $0.1$ & $0.3$ & $0.5$ \\ \midrule \textbf{error(\%)} & 6.43 & 6.2 & 6.14 & 6.12 & \textbf{5.98} & 6.09 & 6.13 & 6.31 \\ {\textbf{CE}} & 0.3056& 0.3102 & 0.3041 & 0.3048 & 0.2968 & \textbf{0.2918}& 0.3137 & 0.4957 \\ \bottomrule \end{tabular} \end{table} \section{Conclusion and future work} In this paper we developed a novel framework for explicitly encouraging diversity between ensemble members in classification tasks. First, we introduced the idea of using an amended cost function for multiple classifiers, based on the NCL results. Then we presented two implementations: a vanilla one and SMOC. We performed experiments to validate our analytical results for both architectures. For SMOC, we showed that a small change and a negligible addition of parameters achieve superior results compared to the vanilla architecture. In future work, we would like to seek a way of using ACE with non-uniform and, possibly, trainable weights \eqref{non_uniform_ACE}.
Also, when the number of labels is large, SMOC adds a large number of parameters; we would like to investigate implementations that avoid this overhead. \newpage \bibliographystyle{unsrtnat}
\section{Introduction} QCD and electroweak radiative corrections to high energy scattering amplitudes were computed recently using effective field theory methods~\cite{Chiu:2009ft,Chiu:2009mg,Chiu:2009yz,Chiu:2009yx,Chiu:2008vv,Chiu:2007dg,Chiu:2007yn}, by extending SCET~\cite{BFL,SCET1,SCET2,BPS} to broken gauge theories with massive gauge bosons. The radiative corrections (including the purely electroweak ones) are large because of Sudakov double-logarithms; for example, the electroweak corrections to transverse $W$ pair production are 37\% at 2~TeV. The computation of radiative corrections is divided into a matching computation from the standard model onto SCET at a high scale $Q$ of order the center-of-mass energy $\sqrt{s}$, and the scattering amplitude in the effective theory. Logarithms of the form $\log^2 Q^2/M_Z^2$, including the Sudakov double logarithms, are summed using renormalization group evolution in the effective theory. The high-scale matching coefficients for vector boson and Higgs production were included in the numerical results of Refs.~\cite{Chiu:2009ft,Chiu:2009mg}. In this paper we give a detailed discussion of the required matching calculation, and explicit results for a gauge theory with an arbitrary number of gauge groups, as well as results for the $SU(3)\times SU(2) \times U(1)$ standard model theory. Radiative corrections to gauge boson and Higgs production are not new; they have been obtained previously in fixed-order calculations by many groups~\cite{ccc,ciafaloni1,ciafaloni2,fadin,kps,fkps,jkps,jkps4,beccaria,dp1,dp2,hori,beenakker,dmp,pozzorini,js,melles1,melles2,melles3,Denner:2006jr,kuhnW,Denner:2008yn,Ellis:1985er,Kunszt:1993sd,sack,bardin,Dixon:1998py,Dixon:1999di}. However, the matching computation we need is not readily available in the literature.
What is available is the total one-loop scattering amplitude, which is the sum of the matching coefficient and the SCET amplitude, and our results agree with existing computations for the sum. The EFT computation requires the matching and SCET contributions separately, so that large logarithms can be summed using renormalization group evolution in the effective theory. In Sec.~\ref{sec:smatrix}, we show how the matching computation is related to the $S$-matrix for parton scattering. Using this, we can use the matching coefficients to compute the one-loop corrections to the $q \bar q \to g g$ cross-section in QCD. This was computed a long time ago by Ellis and Sexton~\cite{Ellis:1985er}, and we have checked that our amplitude reproduces their cross-section. Kunszt, Signer and Tr\'ocs\'anyi~\cite{Kunszt:1993sd} give the helicity amplitudes for $q \bar q \to gg$ for an $SU(N)$ gauge theory, and we agree with their results. In this paper, we give expressions for the one-loop matching contributions for gauge boson pair production and scalar production. These can then be used to compute the renormalization group improved scattering amplitudes for transverse and longitudinal gauge boson pair production, as well as Higgs production, using the results in Refs.~\cite{Chiu:2009ft,Chiu:2009mg}. We give the results for the individual Feynman diagrams as the product of a Feynman integral and a group theory factor. The results can be used for an arbitrary gauge theory with any number of gauge groups. Gauge bosons from a maximum of three gauge groups can occur in a single diagram at one loop. Section~\ref{sec:outline} gives an outline of the method we use to compute the matching condition. We discuss the relation between the on-shell diagrams in dimensional regularization and the matching calculation, the group theory notation, kinematics and the Dirac basis for the matrix elements. 
The diagrams for vector boson production are given in Sec.~\ref{sec:vector}, and the standard model amplitude is given in Sec.~\ref{sec:smvv}. Section~\ref{sec:scalar} gives the graphs for scalar production, with the standard model results, including top-quark loops, in Sec.~\ref{sec:smscalar}. A consistency check between the matching condition and the EFT anomalous dimension matrix is performed in Sec.~\ref{sec:check}. Section~\ref{sec:smatrix} gives the relation between the matching calculation and the on-shell $S$-matrix elements in the massless theory. \section{Outline of Method and Notation}\label{sec:outline} The basic processes we consider are $f(p_1) + \bar{f}(p_2) \to V_i^a(p_4) + V_j^b(p_3)$ and $f(p_1) + \bar{f}(p_2) \to \phi^\dagger(p_4) + \phi(p_3)$. Here $f$ and $\bar f$ are incoming fermions and antifermions of momenta $p_1$ and $p_2$, $V^a_i(p)$ is a gauge boson of gauge group $G_i$ with gauge index $a$ and momentum $p$, and $\phi$ is a (complex) scalar field. Note that $i$ and $j$ can refer to different gauge groups, so that our results are also applicable to processes such as $q \bar q \to W g$. The gauge bosons $V$ will be taken to have transverse polarization. Massive gauge bosons which are longitudinally polarized can be computed using the $\phi^\dagger \phi$ amplitude and the Goldstone boson equivalence theorem. Our EFT results are valid in the regime where $\sqrt{s}$ is much larger than the gauge boson masses $M_{W,Z}$, the Higgs mass $M_H$, and the fermion masses. The matching from the full gauge theory onto the EFT is done at a scale $\mu$ of order $\sqrt{s}$, and power corrections such as $M_Z^2/s$ are neglected. Thus the matching coefficients can be computed by evaluating the graphs in the full theory setting all the particle masses to zero, and neglecting gauge symmetry breaking.
For the standard model, this implies that the best way to compute the EFT operators is to match onto operators with $W^{1,2,3}$ and $B$ fields, rather than onto operators with $W^{\pm}, Z$ and $A$ fields. We first summarize the standard method used to evaluate matching conditions for an EFT. More details can be found, for example, in Refs.~\cite{Manohar:1996cq,Manohar:1997qy}. The full theory graphs are evaluated using dimensional regularization in $4-2\epsilon$ dimensions, which regulates the ultraviolet (UV) and infrared (IR) divergences; the graphs have the schematic form \begin{eqnarray} A_{\text{full}} &=& \left(\sum_{k\ge1} \frac{C_k}{\epsilon^k}\right)_{\text{UV}}+\left(\sum_{k\ge1} \frac{D_k}{\epsilon^k}\right)_{\text{IR}}+A_{\text{full,finite}}\,. \end{eqnarray} The ultraviolet divergences are cancelled by the full theory renormalization counterterms, leaving the infrared divergences, \begin{eqnarray} A_{\text{full}} + \text{c.t.} &=& \left(\sum_{k\ge1} \frac{D_k}{\epsilon^k}\right)_{\text{IR}}+A_{\text{full,finite}}\,. \label{afull} \end{eqnarray} The EFT graphs are also computed using dimensional regularization. Since all the scales that enter the EFT computation (such as masses) have been set to zero, the EFT integrals are all scaleless and vanish. The EFT integrals have the schematic form \begin{eqnarray} A_{\text{EFT}} &=& \left(\sum_{k\ge1} \frac{\widetilde C_k}{\epsilon^k}\right)_{\text{UV}}+\left(\sum_{k\ge1} -\frac{\widetilde C_k}{\epsilon^k}\right)_{\text{IR}}=0\,,\end{eqnarray} i.e.\ a cancellation of $1/\epsilon$ terms arising from ultraviolet and infrared divergences, \emph{without any finite part}.
The $(1/\epsilon)_{\text{UV}}$ terms are cancelled by the renormalization counterterms in the EFT, leaving the $(1/\epsilon)_{\text{IR}}$ terms, \begin{eqnarray} A_{\text{EFT}} + \text{c.t.} &=& \left(\sum_{k\ge1} -\frac{\widetilde C_k}{\epsilon^k}\right)_{\text{IR}}\,. \label{aeft} \end{eqnarray} The counterterms (and hence the anomalous dimensions) in the full and effective theories are in general different. The EFT, by construction, is designed to reproduce the infrared structure of the full theory. Thus the $(1/\epsilon)_{\text{IR}}$ terms in the full and effective theories \emph{must} agree, \begin{eqnarray} D_k= -\widetilde C_k\,, \label{eq5} \end{eqnarray} which provides a non-trivial computational check on the EFT, and also shows that infrared divergences in the full theory are equal to ultraviolet divergences in the EFT. The matching coefficient is given by the difference of the renormalized full and effective theory expressions, Eqs.~(\ref{afull},\ref{aeft}). Using Eq.~(\ref{eq5}), we see that the matching coefficient is $A_{\text{full,finite}}$. This gives the standard method of computing matching coefficients --- compute graphs in the full theory in dimensional regularization setting all EFT scales to zero, and keep only the finite parts. This is the procedure used here. In giving the values for the graphs, we will also give the divergent terms, which should be dropped for the matching corrections. The divergent terms are useful in that they allow one to check the matching of infrared divergences, and also to compare with the results of Refs.~\cite{Ellis:1985er,Kunszt:1993sd}. Scaleless integrals in the full theory computation have been set to zero, so the $1/\epsilon$ divergences can be either UV or IR. \subsection{Group Theory} We consider an arbitrary gauge group $\otimes_r G_r$ which is a product of groups with coupling constants $g_r=\sqrt{4 \pi \alpha_r}$.
The generators of $G_r$ are $T^a_r$ and satisfy the commutation relations \begin{equation} \left[T^a_r,T^b_s\right] = i f^{(r)}_{abc} \delta_{rs}\, T^c_r\,, \end{equation} where $f^{(r)}_{abc}$ are the structure constants of $G_r$. Some products of group generators can be simplified in terms of Casimir operators, e.g.\ \begin{eqnarray} T_j^b T^a_i T^b_j &=& \left(C_R(j) - \frac12 \delta_{ij} C_A(i)\right)T^a_i \,, \end{eqnarray} where $C_R$ is the Casimir of the representation $R$ of the matrices $T_j$, and $C_A(i)$ is the Casimir of the adjoint representation of $G_i$. In general, anti-commutators of group generators such as $\left\{T^a_r,T^b_r\right\}$ cannot be simplified. If $G_r$ is an $SU(N_r)$ group, and $T_r$ is in the fundamental representation, one has \begin{equation} \left\{T^a_r,T^b_r\right\} = \frac{1}{N_r}\delta_{ab}^{(r)}+d^{(r)}_{abc} T^c_r \,, \label{eq8} \end{equation} where $d_{abc}=0$ for $SU(2)$. However, there is no simple expression such as Eq.~(\ref{eq8}) in general, not even for arbitrary representations of $SU(N)$. For this reason, we will give a general expression for the group theory factor valid for arbitrary gauge theories, and then its value for an $SU(N) \times SU(2) \times U(1)$ gauge theory. Diagrams with a closed fermion or scalar loop contribute at one loop order. We use the symbols $\text{Tr}_{WF}$ and $\text{Tr}_{CS}$ to denote traces over the Weyl fermions and the complex scalars of the theory, respectively. For the standard model results, $T^a$ are the color generators, $t^a$ are the $SU(2)$ generators, and $Y$ is the $U(1)$ generator. \subsection{Kinematics} The amplitude $\mathcal{M}$ is defined as \begin{equation} \langle p_3\,p_4\,,\mathrm{out}| p_1\,p_2\,,\mathrm{in} \rangle = (2\pi)^4 \delta^{(4)}(p_1+p_2-p_3-p_4)i \mathcal{M} \, . \nonumber \end{equation} We will work in the center of mass frame (CMS) throughout this article.
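As a concrete illustration (not part of the derivation), the Casimir relation and the anticommutator identity Eq.~(\ref{eq8}) of the previous subsection can be verified numerically for the $SU(2)$ fundamental representation, $T^a = \sigma^a/2$, for which $C_F = 3/4$, $C_A = 2$ and $d_{abc} = 0$:

```python
import numpy as np

# SU(2) fundamental generators T^a = sigma^a / 2 (normalization Tr T^a T^b = delta_ab / 2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [s / 2 for s in sigma]
N, CF, CA = 2, 3 / 4, 2.0

for a in range(3):
    # Same-group (i = j) Casimir identity: sum_b T^b T^a T^b = (C_F - C_A/2) T^a
    lhs = sum(T[b] @ T[a] @ T[b] for b in range(3))
    assert np.allclose(lhs, (CF - CA / 2) * T[a])
    for b in range(3):
        # Eq. (8) with d_abc = 0 for SU(2): {T^a, T^b} = delta_ab / N
        anti = T[a] @ T[b] + T[b] @ T[a]
        assert np.allclose(anti, (1.0 if a == b else 0.0) / N * np.eye(2))
```

The same check with the Gell-Mann matrices would pick up the $d^{(r)}_{abc}$ term of Eq.~(\ref{eq8}).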
For $f(p_1) + \bar{f}(p_2) \to V_i^a(p_4) + V_j^b(p_3)$, the Dirac structure can be written as a linear combination of five basic terms \begin{eqnarray} \mathcal{M}_0 &=& \bar{v}(p_2) \slashed{\epsilon}_4 \left(\slashed{p}_4-\slashed{p}_2\right) \slashed{\epsilon}_3 P_L u(p_1) \,,\nonumber \\ \mathcal{M}_1 &=& \bar{v}(p_2) \slashed{p}_4 (\epsilon_4 \cdot \epsilon_3) P_L u(p_1) \,,\nonumber \\ \mathcal{M}_4 &=& \bar{v}(p_2) \slashed{\epsilon}_4 (\epsilon_3 \cdot p_1) P_L u(p_1) \,,\nonumber \\ \mathcal{M}_5 &=& - \bar{v}(p_2) \slashed{\epsilon}_3 (\epsilon_4 \cdot p_2)P_L u(p_1) \,,\nonumber \\ \mathcal{M}_6 &=& \bar{v}(p_2) \slashed{p}_4 (\epsilon_4 \cdot p_2)( \epsilon_3 \cdot p_1) P_L u(p_1) \,, \end{eqnarray} in the notation of Sack~\cite{sack}, where $\epsilon^\mu_i \equiv \epsilon^\mu(p_i)$ and $P_{L} \equiv \left(1-\gamma_5 \right)/2$. The other amplitudes used by Sack ($\mathcal{M}_{2,3,7,8,9}$) vanish for \emph{transversely} polarized \emph{on-shell} gauge bosons, neglecting power corrections. For scalar production, the Dirac structure which enters is \begin{eqnarray} \mathcal{M}_\phi &=& \bar{v}(p_2) \slashed{p}_4 P_L u(p_1) \, . \end{eqnarray} The full amplitude is the sum of all diagrams $R_i$ with group theory factor $\mathcal{C}_i$, \begin{equation} \mathcal{M} = \sum_{i} \mathcal{C}(R_i) R_i \, . \end{equation} In many cases, a diagram $R$ has a corresponding crossed graph which we denote by $\bar{R}$ with group theory factor $\mathcal{C}(\bar{R})$. The Mandelstam variables are defined as \begin{eqnarray} s &=& (p_1+p_2)^2 \,,\nonumber \\ t &=& (p_1-p_4)^2 \,, \nonumber \\ u &=& (p_1-p_3)^2 \,, \end{eqnarray} to agree with the conventions of Refs.~\cite{Chiu:2009ft,Chiu:2009mg}.
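Since all external particles are massless here, the Mandelstam invariants defined above obey $s+t+u=0$; a short numerical sanity check of the conventions in the CMS (the beam energy and scattering angle below are arbitrary choices, purely illustrative):

```python
import math

def minkowski_sq(p):
    """p . p with metric (+,-,-,-)."""
    return p[0] ** 2 - p[1] ** 2 - p[2] ** 2 - p[3] ** 2

# CMS kinematics for massless 2 -> 2 scattering: p1, p2 along the z axis,
# p3, p4 back-to-back at angle theta relative to the beam.
E, theta = 1.7, 0.6
p1 = (E, 0.0, 0.0, E)
p2 = (E, 0.0, 0.0, -E)
p4 = (E, E * math.sin(theta), 0.0, E * math.cos(theta))
p3 = (E, -E * math.sin(theta), 0.0, -E * math.cos(theta))

s = minkowski_sq(tuple(a + b for a, b in zip(p1, p2)))
t = minkowski_sq(tuple(a - b for a, b in zip(p1, p4)))
u = minkowski_sq(tuple(a - b for a, b in zip(p1, p3)))

assert abs(s - 4 * E ** 2) < 1e-12                      # s = 4 E^2 in the CMS
assert abs(t + s / 2 * (1 - math.cos(theta))) < 1e-12   # t = -(s/2)(1 - cos theta)
assert abs(s + t + u) < 1e-12                           # s + t + u = 0, massless case
```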
Under the exchange of the two final state gauge bosons, $\epsilon_3 \leftrightarrow \epsilon_4$, $p_3 \leftrightarrow p_4$, the matrix elements and Mandelstam variables transform as \begin{eqnarray} \mathcal{M}_0 &\leftrightarrow& \mathcal{M}_0+2\mathcal{M}_1\,,\nonumber \\ \mathcal{M}_1 &\rightarrow& -\mathcal{M}_1\,,\nonumber \\ \mathcal{M}_4 &\leftrightarrow& \mathcal{M}_5 \,, \nonumber \\ \mathcal{M}_6 &\leftrightarrow& -\mathcal{M}_6 \,, \nonumber \\ t &\leftrightarrow& u \,, \nonumber \\ s &\leftrightarrow & s \, . \label{mcross} \end{eqnarray} If there is a crossed graph, then $\bar{R}$ is obtained from $R$ using Eq.~(\ref{mcross}). Throughout the article, space-time is $d = 4-2\epsilon$ dimensional, which regulates the ultraviolet as well as the infrared behavior, and we work in 't~Hooft-Feynman gauge, $\xi = 1$. Furthermore, we define the function $\mathsf{L}_X \equiv \log\left[(-X-i0^+)/\mu^2\right]$. For scattering kinematics, $s > 0$ and $t,u < 0$, the correct analytical continuation is given by \begin{eqnarray} \mathsf{L}_{s} &=& \log(s/\mu^2)-i\pi\,,\nonumber \\ \mathsf{L}_{t} &=& \log(-t/\mu^2)\,, \nonumber \\ \mathsf{L}_{u} &=& \log(-u/\mu^2) \, . \end{eqnarray} We have assumed that the incoming fermion is a left-chiral field, so that the incoming fermion $f$ has helicity $h=-1/2$ and incoming antifermion $\bar f$ has helicity $h=1/2$. The results for a right-chiral field are given by $P_L \to P_R$. \subsection{EFT Lagrangian} We give the Feynman diagram results for the on-shell scattering amplitude $\mathcal{M}$. This also gives the matching condition onto the SCET operators in the EFT.
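The $-i0^+$ prescription in $\mathsf{L}_X$ defined above can be modeled numerically by a small negative imaginary part, reproducing the stated analytic continuations (an illustrative sketch, with $\mu^2$ set to 1):

```python
import cmath
import math

def L_X(X, mu2=1.0):
    """L_X = log[(-X - i0+)/mu^2], with -i0+ modeled by a tiny negative imaginary part."""
    return cmath.log((-X - 1e-300j) / mu2)

s, t, u = 2.0, -0.5, -1.5  # scattering kinematics: s > 0 and t, u < 0

# L_s = log(s/mu^2) - i*pi picks up the physical-sheet imaginary part
assert abs(L_X(s) - (math.log(s) - 1j * math.pi)) < 1e-12
# L_t and L_u are real for t, u < 0
assert abs(L_X(t) - math.log(-t)) < 1e-12
assert abs(L_X(u) - math.log(-u)) < 1e-12
```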
The EFT Lagrangian is \begin{eqnarray} L &=& \frac12\sum_{p_1,p_2,p_3,p_4} \mathcal{M}^{ia,jb}(p_1,p_2,p_3,p_4) V^{i,a}_{p_4} V^{j,b}_{p_3} \bar \psi_{p_2} \psi_{p_1}\nonumber \\ \end{eqnarray} for vector boson production, and \begin{eqnarray} L &=& \sum_{p_1,p_2,p_3,p_4} \mathcal{M}^{ia,jb}(p_1,p_2,p_3,p_4) \phi^\dagger_{p_4} \phi_{p_3} \bar \psi_{p_2} \psi_{p_1}\nonumber \\ \end{eqnarray} for scalar production. The subscripts $p_i$ are the label momenta of the external SCET fields, and are summed over. The vector boson term has a factor of $1/2$ because there are two identical fields. To make clear the combinatorial factor of $1/2$, consider the production of a $W$ boson with momentum $p_W$ and a gluon with momentum $p_g$. This is obtained from $\mathcal{M}$ by picking out the term with $i,a$ in $SU(2)$ and $j,b$ in $SU(3)$, and setting $p_4=p_W$ and $p_3=p_g$ \emph{or} the term with $i,a$ in $SU(3)$ and $j,b$ in $SU(2)$, and setting $p_4=p_g$ and $p_3=p_W$, \emph{but not both.} \subsection{Topologies} The diagrams are classified in seven different topologies shown in Figure \ref{fig:topos}. Note that we do not explicitly draw the crossed topologies. Because this is a matching calculation, counterterm diagrams and wavefunction corrections are omitted. The on-shell wavefunction graphs are scaleless, and vanish in dimensional regularization. \begin{figure} \begin{center} \begin{tabular}{ccc} \includegraphics[height=1.5cm]{T1.eps} & \includegraphics[height=1.5cm]{T2.eps} & \includegraphics[height=1.5cm]{T3.eps} \\ T1 & T2 & T3 \\[10pt] \includegraphics[height=1.5cm]{T6.eps} & \includegraphics[height=1.5cm]{T7.eps} & \includegraphics[height=1.5cm]{T8.eps} \\ T4 & T5 & T6 \\[10pt] \includegraphics[height=1.5cm]{T9.eps} & & \\ T7 & & \end{tabular} \end{center} \caption{The seven different topologies for a general $2 \to 2$ scattering process. The $\otimes$ denotes a one particle irreducible subdiagram. 
Wavefunction renormalization diagrams are omitted.}\label{fig:topos} \end{figure} \section{Diagrams for vector boson production}\label{sec:vector} We provide the result of each tree-level and one-loop diagram $R_i$ and list the group theory structure $\mathcal{C}_i$ in a general form in terms of generators of the gauge groups. The pertinent group theory factors for the Standard Model are given in Section~\ref{sec:smvv}. \subsection{Tree level amplitude} \begin{figure}[tb] \begin{center} \begin{tabular}{cc} \includegraphics[height=1.5cm]{tr2.eps}&\includegraphics[height=1.5cm]{tr1.eps} \\ $R_1$ & $R_2$ \end{tabular} \end{center} \caption{The tree level diagrams. Quarks, gauge bosons, scalars and ghosts are denoted by solid, wavy, dashed and dotted lines, respectively. Crossed diagrams are not shown.}\label{fig:tree} \end{figure} The tree level diagrams are shown in Figure~\ref{fig:tree}. For the tree level amplitude, the group theory factors and the diagrams read \begin{eqnarray} \mathcal{C}(R_1) &=& g_i g_j T^b_j T^a_i \,, \nonumber \\ \mathcal{C}(\bar{R}_1) &=& g_i g_j T^a_i T^b_j \,, \nonumber \\ \mathcal{C}(R_2) &=& g_i^2 \left(-i \delta_{ij} f^{(i)}_{abc}T_i^c\right) \end{eqnarray} and \begin{eqnarray} R_1 &=& -\frac{1}{t}\left(\mathcal{M}_0 +2 \mathcal{M}_1\right) \,, \nonumber \\ \bar{R}_1 &=& -\frac{1}{u}\left(\mathcal{M}_0\right)\,, \nonumber \\ R_2 &=& -\frac{1}{s}\left(2 \mathcal{M}_1\right) \,, \end{eqnarray} where $\bar{R}_1$ is the crossed graph related to $R_1$. $R_2$ does not have a crossed graph. \begin{widetext} \subsection{Topology T1} The four diagrams shown in Figure \ref{fig:t1} share topology T1. \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[height=1.5cm]{R1.eps}&\includegraphics[height=1.5cm]{R3.eps} \\ $T1a$ & $T1b$ \\[10pt] \includegraphics[height=1.5cm]{R18.eps} & \includegraphics[height=1.5cm]{R2.eps} \\ $T1c$ & $T1d$ \end{tabular} \end{center} \caption{Diagrams with topology T1.
See caption of Figure~\ref{fig:tree}.}\label{fig:t1} \end{figure} \subsubsection{T1a} \begin{equation} \mathcal{C}(R_{T1a}) = \frac{g_i^4}{16\pi^2}\delta_{ij} f_{ebc}^{(i)} f_{aed}^{(i)} T^c_i T^d_i \end{equation} \begin{eqnarray} R_{T1a} &=& \frac{\mathcal{M}_0}{t}\Bigg\{-\frac{2}{\epsilon^2 }+\frac{2 (\mathsf{L}_{s}-1)}{\epsilon } + \frac{1}{u}\biggl[ -3t\mathsf{L}_{s}^2-(s+4t)\mathsf{L}_{t}^2+2(s+4t)\mathsf{L}_{s} \mathsf{L}_{t} +2u \mathsf{L}_{t} -\pi^2\left(\frac76s+\frac{25}{6}t\right)-4u\biggr]\Biggr\}\nonumber \\ &&+ \mathcal{M}_1\Biggl\{-\frac{1}{\epsilon^2 }\left(\frac9s+\frac4t\right)+\frac{1}{\epsilon } \left(\frac{4\mathsf{L}_{s}}{t}+\frac{8\mathsf{L}_{t}}{s}+\frac{\mathsf{L}_{s}}{s}-\frac{2}{s}-\frac{4}{t}\right)+ \frac{1}{u^2ts}\biggl[\frac12t(9s^2+14st+7t^2)\mathsf{L}_{s}^2+s(2s+t)(s+2t)\mathsf{L}_{t}^2\nonumber \\ &&-2(2s^3+9s^2t+10st^2+4t^3)\mathsf{L}_{s}\mathsf{L}_{t}-2t^2u\mathsf{L}_{s}-2us(2s+3t)\mathsf{L}_{t}+\pi^2\Bigl(\frac73s^3+\frac{125}{12}s^2t+\frac{71}{6}st^2+\frac{19}{4}t^3\Bigr)\nonumber \\ &&-8s^3-20s^2t-16st^2-4t^3 \biggr]\Biggr\}\nonumber \\ &&+ \frac{\mathcal{M}_4+\mathcal{M}_5}{t}\Biggl\{-\frac{2}{\epsilon^2}+\frac{1}{\epsilon }\left(\frac{2t}s+2\mathsf{L}_{t}\right)+ \frac{1}{u^2}\biggl[-t(3s+4t)\mathsf{L}_{s}^2 -(s^2+5st+5t^2)\mathsf{L}_{t}^2+2t(3s+4t)\mathsf{L}_{s}\mathsf{L}_{t} \nonumber \\ &&+2ut(2s+t)\frac{\mathsf{L}_{s}}{s}- 2ut\mathsf{L}_{t} +\pi^2\Bigl(\frac{s^2}6-\frac83st-\frac{23}{6}t^2\Bigr) +4\frac{t^3}{s}+4st+8t^2\biggr]\Biggr\}\nonumber \\ &&+ \frac{\mathcal{M}_6}{tu^3}\Biggl\{-4t(s+2t)(\mathsf{L}_{s}-\mathsf{L}_{t})^2 +4u(3s+5t)(\mathsf{L}_{s}-\mathsf{L}_{t})-4\pi^2t\Bigl(s+2t\Bigr)-4u^2 \Biggr\} \end{eqnarray} The crossed graph $\bar{R}_{T1a}$ is given by applying Eq.~(\ref{mcross}) to $R_{T1a}$, and has color factor \begin{equation} \mathcal{C}(\bar R_{T1a}) = \frac{g_i^4}{16\pi^2} \delta_{ij} f_{eac}^{(i)} f_{bed}^{(i)} T^c_i T^d_i \end{equation} given from $\mathcal{C}(R_{T1a})$ by $i,a \leftrightarrow j,b$. 
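Since every crossed graph in this section is obtained by applying Eq.~(\ref{mcross}), the rule can be encoded once as a transformation on the coefficients of $\mathcal{M}_0$, $\mathcal{M}_1$, $\mathcal{M}_4$, $\mathcal{M}_5$, $\mathcal{M}_6$. The sketch below (hypothetical helper names, illustrative only) checks the transformation against the tree graphs $R_1 = -(\mathcal{M}_0+2\mathcal{M}_1)/t$ and $\bar R_1 = -\mathcal{M}_0/u$:

```python
def cross(coeffs):
    """Eq. (mcross) as a map on amplitude coefficients.

    `coeffs(s, t, u)` returns a dict with keys 0, 1, 4, 5, 6 holding the
    coefficients of M_0, M_1, M_4, M_5, M_6.  Under exchange of the final
    state gauge bosons: M0 -> M0 + 2 M1, M1 -> -M1, M4 <-> M5, M6 -> -M6,
    and t <-> u.
    """
    def crossed(s, t, u):
        c = coeffs(s, u, t)              # t <-> u in the invariants
        return {0: c[0],                 # M0 -> M0 (its 2 M1 piece feeds key 1)
                1: 2 * c[0] - c[1],      # 2 M1 from M0, minus sign from M1 -> -M1
                4: c[5], 5: c[4],        # M4 <-> M5
                6: -c[6]}                # M6 -> -M6
    return crossed

def R1(s, t, u):
    # Tree graph R_1 = -(M_0 + 2 M_1)/t
    return {0: -1 / t, 1: -2 / t, 4: 0.0, 5: 0.0, 6: 0.0}

R1bar = cross(R1)
s, t, u = 2.0, -0.5, -1.5
c = R1bar(s, t, u)
# Expected crossed graph: R_1bar = -M_0/u, with no M_1 piece
assert abs(c[0] + 1 / u) < 1e-12 and abs(c[1]) < 1e-12
assert c[4] == 0.0 and c[5] == 0.0 and c[6] == 0.0
```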
\subsubsection{T1b} \begin{eqnarray} \mathcal{C}(R_{T1b}) &=& \frac{g_i^3 g_j}{16\pi^2} (if_{dac}^{(i)}) T^c_i T^b_j T^d_i \end{eqnarray} The diagram is given by \begin{eqnarray} R_{T1b}&=& \frac{\mathcal{M}_0}{tu}\Biggl\{\frac{2s}{\epsilon^2 }+\frac{2u \mathsf{L}_{u} +2t \mathsf{L}_{t}+s}{\epsilon } +\frac{1}{s^2}\biggl[-st(2s+3t)\mathsf{L}_{u}^2 +su \left(s+3t\right)\mathsf{L}_{t}^2 +2s(s^2+3st+3t^2)\mathsf{L}_{u}\mathsf{L}_{t} +s^2t\mathsf{L}_{u}+s^2u\mathsf{L}_{t}\nonumber \\ &&-\pi^2\left(\frac76s^3+3s^2t+3st^2\right)+2s^3 \biggr] \Biggr\}\nonumber \\ &&+\frac{\mathcal{M}_1}{t}\Biggl\{-\frac{4}{\epsilon^2 }+\frac{4 \mathsf{L}_{u}-2}{\epsilon } + \frac{1}{s^2 u}\biggl[3stu\mathsf{L}_{u}^2 +su\left(2s+3t\right)\mathsf{L}_{t}^2 -2su\left(2s+3t\right)\mathsf{L}_{u}\mathsf{L}_{t}+2s^2u\mathsf{L}_{t} -\pi^2\left(\frac73s^3+\frac{16}{3}s^2t+3st^2\right)\nonumber \\ &&+4s^3+4s^2t \biggr]\Biggr\}\nonumber \\ &&+ \frac{\mathcal{M}_4}{t u}\Biggl\{\frac{4s}{\epsilon }+ \frac{1}{s^2}\biggl[-3stu \left(\mathsf{L}_{u}-\mathsf{L}_{t}\right)^2 +4s^2t\mathsf{L}_{u} +4s^2u\mathsf{L}_{t} +\pi^2\left(3s^2t+3st^2\right) +8s^3\biggr]\Biggr\}\nonumber \\ &&+ \frac{\mathcal{M}_5}{t u}\Biggl\{\frac{2s}{\epsilon^2}+\frac{2 u \mathsf{L}_{t} + 2 t \mathsf{L}_{u}}{\epsilon }+ \frac{1}{s^2}\biggl[st(2s+3t)\mathsf{L}_{u}^2-su(s+3t)\mathsf{L}_{t}^2 +6stu\mathsf{L}_{u}\mathsf{L}_{t} +\pi^2\Bigl(-\frac16s^3+3s^2t+3st^2\Bigr)\biggr]\Biggr\}\nonumber \\ &&+ \mathcal{M}_6\frac{12}{tu}(\mathsf{L}_{t}-\mathsf{L}_{u}) \, . \end{eqnarray} The crossed graph $\bar R_{T1b}$ is given by applying Eq.~(\ref{mcross}) to $R_{T1b}$, and has color factor \begin{equation} \mathcal{C}(\bar R_{T1b}) = \frac{g_i g_j^3}{16\pi^2} (if_{dbc}^{(j)}) T^c_j T^a_i T^d_j \end{equation} given from $\mathcal{C}(R_{T1b})$ by $i,a \leftrightarrow j,b$.
\subsubsection{T1c} \begin{eqnarray} \mathcal{C}(R_{T1c}) &=& \sum_k \frac{g_i g_j g_k^2}{16\pi^2} T^c_k T^b_j T^a_i T^c_k \end{eqnarray} \begin{eqnarray} R_{T1c} &=& \frac{\mathcal{M}_0}{t}\left[\frac{2}{\epsilon^2 }-\frac{2 \mathsf{L}_{s}}{\epsilon } + 2 \mathsf{L}_{s} \mathsf{L}_{t}-\frac{7\pi^2}{6}-\mathsf{L}_{t}^2\right]\nonumber \\ &&+\frac{\mathcal{M}_1}{t}\Biggl\{\frac{4}{\epsilon^2 }-\frac{4 \mathsf{L}_{s}}{\epsilon } + \frac{1}{u^2}\biggl[st \mathsf{L}_{s}^2-(2s^2+3st+2t^2)\mathsf{L}_{t}^2+2(2s^2+3st+2t^2)\mathsf{L}_{s}\mathsf{L}_{t}+2tu(\mathsf{L}_{s}-\mathsf{L}_{t})-\pi^2\Bigl(\frac73s^2+\frac{11}{3}st+\frac73t^2\Bigr)\biggr]\Biggr\}\nonumber \\ &&+\frac{\mathcal{M}_4+\mathcal{M}_5}{t}\Biggl\{\frac{4}{\epsilon }+ \frac{1}{u^2}\biggl[t(3s+2t) (\mathsf{L}_{s}-\mathsf{L}_{t})^2+2ut\mathsf{L}_{s} +2u(2s+t)\mathsf{L}_{t} +\pi^2t\Bigl(3s+2t\Bigr) +8u^2\biggr]\Biggr\}\nonumber \\ &&+ \frac{\mathcal{M}_6}{tu^3}\Biggl\{4t(2s+t)(\mathsf{L}_{s}-\mathsf{L}_{t})^2-4u(3s+t)(\mathsf{L}_{s}-\mathsf{L}_{t})+4\pi^2t\Bigl(2s+t\Bigr)-4u^2\Biggr\} \end{eqnarray} The crossed graph $\bar R_{T1c}$ is given by applying Eq.~(\ref{mcross}) to $R_{T1c}$, and has color factor \begin{equation} \mathcal{C}(\bar R_{T1c}) = \sum_k \frac{g_i g_j g_k^2}{16\pi^2} T^c_k T^a_i T^b_j T^c_k \end{equation} given from $\mathcal{C}(R_{T1c})$ by $i,a \leftrightarrow j,b$. \end{widetext} \subsubsection{T1d} \begin{eqnarray} \mathcal{C}(R_{T1d}) = \frac{g_i^4}{16\pi^2}\delta_{ij} \left\{T^d_i, T^c_i\right\} f_{adg}^{(i)}f_{bcg}^{(i)} \end{eqnarray} The result of the diagram is \begin{eqnarray} R_{T1d}= \frac{\mathcal{M}_4+\mathcal{M}_5}{s}\left[ \frac{2}{\epsilon}-2\mathsf{L}_{s}+4 \right] \, . \end{eqnarray} There is no corresponding crossed graph. \subsection{Topology T2} The diagrams of topology T2 are shown in Figure \ref{fig:t2}. 
\begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[height=1.5cm]{R4.eps}&\includegraphics[height=1.5cm]{R5.eps} \\ $T2a$ & $T2b$ \\[10pt] \includegraphics[height=1.5cm]{R6.eps} & \includegraphics[height=1.5cm]{R23.eps} \\ $T2c$ & $T2d$ \\[10pt] \includegraphics[height=1.5cm]{R7.eps} & \includegraphics[height=1.5cm]{R8.eps} \\ $T2e$ & $T2f$ \\[10pt] \includegraphics[height=1.5cm]{R9.eps} & \includegraphics[height=1.5cm]{R10.eps} \\ $T2g$ & $T2h$ \end{tabular} \end{center} \caption{Diagrams of topology T2. See caption of Figure~\ref{fig:tree}.}\label{fig:t2} \end{figure} \subsubsection{T2a} The sum of the ghost graph and its crossed-graph is \begin{eqnarray} \mathcal{C}(R_{T2a}) &=& \frac{g_i^4}{16\pi^2} \delta_{ij} i f_{dcf}^{(i)} f_{fbe}^{(i)} f_{ead}^{(i)} T^c_i\nonumber \\ \end{eqnarray} \begin{eqnarray} R_{T2a} &=& \frac{\mathcal{M}_1}{s}\Biggl[-\frac{1}{6\epsilon}+\frac{\mathsf{L}_{s}}{6}-\frac{11}{18}\Biggr] \end{eqnarray} \subsubsection{T2b} The sum of the scalar graph and its crossed graph is \begin{eqnarray} \mathcal{C}(R_{T2b}) &=& \sum_k \frac{g_i^2 g_k^2}{16\pi^2} \delta_{ij} if_{abg}^{(i)}T^c_k \, \text{Tr}_{CS}\,\left(T^c_k T^g_i\right) \nonumber \\ R_{T2b} &=& \frac{\mathcal{M}_1}{s}\Biggl[\frac{2}{3\epsilon}-\frac{2\mathsf{L}_{s}}{3}+\frac{22}{9}\Biggr]\, . \end{eqnarray} If the gauge generators are orthogonal, then the $\text{Tr}_{CS}$ factor is proportional to $\delta_{ik}$. However, in general, the generators for $U(1)$ factors need not be orthogonal. \subsubsection{T2c} There is no crossed graph since the gauge bosons are real fields. 
\begin{eqnarray} \mathcal{C}(R_{T2c}) &=& i\frac{g_i^4}{16\pi^2}\delta_{ij} f_{cdf}^{(i)} f_{dae}^{(i)} f_{ebf}^{(i)} T^c_i\,,\nonumber \\ \nonumber \\ R_{T2c} &=& \frac{\mathcal{M}_1}{s}\Biggl[\frac{3}{\epsilon^2}+\frac{17-6\mathsf{L}_{s}}{2\epsilon}+ \frac32\mathsf{L}_{s}^2-\frac{17}{2}\mathsf{L}_{s}-\frac{\pi^2}{4}+\frac{95}{6}\Biggr]\, .\nonumber \\ \end{eqnarray} \subsubsection{T2d}\label{sec:T2d} The sum of graph T2d and its crossed graph is \begin{eqnarray} \mathcal{C}(R_{T2d}) &=& \sum_k \frac{g_i^2 g_k^2}{16\pi^2}i \delta_{ij} f^{(i)}_{abg} T^c_k \text{Tr}_{WF}\,\left(T^c_k T_i^g \right) \nonumber \\ R_{T2d} &=& \frac{\mathcal{M}_1}{s}\Biggl[\frac{4}{3\epsilon}-\frac43\mathsf{L}_{s}+\frac{14}{9} \Biggr] \end{eqnarray} Graph T2d also has a piece proportional to the $\epsilon$ symbol, with a group theory factor proportional to $\text{Tr}_{WF}\,(T^c_k \left\{ T^b_j, T^a_i\right\})$. This contribution is proportional to the gauge anomaly, must vanish when summed over all fermions in the loop for a consistent gauge theory, and so has not been given explicitly. Our result for T2d differs from that in Ref.~\cite{bardin}. The formul\ae\ in Sec.~(14.13) give $26/9$ instead of $14/9$ for the finite part. \subsubsection{T2e, T2f, T2g and T2h} All these diagrams vanish. \subsection{Topologies T3 and T4} The diagrams with topology T3 and T4 are shown in Figure \ref{fig:t3}. \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[height=1.5cm]{R15.eps}&\includegraphics[height=1.5cm]{R14.eps} \\ $T3a$ & $T4a$ \\[10pt] \includegraphics[height=1.5cm]{R20.eps} & \includegraphics[height=1.5cm]{R19.eps} \\ $T3b$ & $T4b$ \end{tabular} \end{center} \caption{Diagrams of topologies T3 and T4.
See caption of Figure~\ref{fig:tree}.}\label{fig:t3} \end{figure} \subsubsection{T3a} \begin{eqnarray} \mathcal{C}(R_{T3a}) &=& \frac{g_i^3 g_j}{16\pi^2} \frac12 C_A(i) T^b_j T^a_i \end{eqnarray} \begin{eqnarray} R_{T3a} &=& \frac{\mathcal{M}_0+2\mathcal{M}_1}{t}\left[\frac{2}{\epsilon^2}-\frac{2\mathsf{L}_{t}}{\epsilon}+\mathsf{L}_{t}^2-\frac{\pi^2}{6}\right]\nonumber \\ &+& \frac{\mathcal{M}_5}{t}\left[-\frac{2}{\epsilon^2}+\frac{2\mathsf{L}_{t}}{\epsilon}-\mathsf{L}_{t}^2+\frac{\pi^2}{6}+2\right] \end{eqnarray} The crossed graph $\bar R_{T3a}$ is given by applying Eq.~(\ref{mcross}) to $R_{T3a}$, and has color factor \begin{equation} \mathcal{C}(\bar R_{T3a}) = \frac{g_i g_j^3}{16\pi^2} \frac12 C_A(j) T^a_i T^b_j \end{equation} given from $\mathcal{C}(R_{T3a})$ by $i,a \leftrightarrow j,b$. \subsubsection{T3b} \begin{eqnarray} \mathcal{C}(R_{T3b}) &=& \sum_k \frac{g_i g_j g_k^2}{16\pi^2} T^b_j T^c_k T^a_i T^c_k \end{eqnarray} \begin{eqnarray} R_{T3b}&=& \frac{\mathcal{M}_0+2\mathcal{M}_1}{t}\left[\frac{1}{\epsilon}-\mathsf{L}_{t}+4\right]\nonumber \\ &+& \frac{\mathcal{M}_5}{t}\left[-\frac{4}{\epsilon}+4\mathsf{L}_{t}-10\right] \end{eqnarray} The crossed graph $\bar R_{T3b}$ is given by applying Eq.~(\ref{mcross}) to $R_{T3b}$, and has color factor \begin{equation} \mathcal{C}(\bar R_{T3b}) = \sum_k \frac{g_i g_j g_k^2}{16\pi^2} T^a_i T^c_k T^b_j T^c_k \end{equation} given from $\mathcal{C}(R_{T3b})$ by $i,a \leftrightarrow j,b$.
\subsubsection{T4a} \begin{eqnarray} \mathcal{C}(R_{T4a}) &=& \frac{g_i g_j^3}{16\pi^2} \frac12 C_A(j) T^b_j T^a_i \end{eqnarray} \begin{eqnarray} R_{T4a} &=& \frac{\mathcal{M}_0+2\mathcal{M}_1}{t}\left[\frac{2}{\epsilon^2}-\frac{2\mathsf{L}_{t}}{\epsilon}+\mathsf{L}_{t}^2-\frac{\pi^2}{6}\right]\nonumber \\ &+& \frac{\mathcal{M}_4}{t}\left[-\frac{2}{\epsilon^2}+\frac{2\mathsf{L}_{t}}{\epsilon}-\mathsf{L}_{t}^2+\frac{\pi^2}{6}+2\right] \end{eqnarray} The crossed graph $\bar R_{T4a}$ is given by applying Eq.~(\ref{mcross}) to $R_{T4a}$, and has color factor \begin{equation} \mathcal{C}(\bar R_{T4a}) = \frac{g_i^3 g_j}{16\pi^2} \frac12 C_A(i) T^a_i T^b_j \end{equation} given from $\mathcal{C}(R_{T4a})$ by $i,a \leftrightarrow j,b$. \subsubsection{T4b} \begin{eqnarray} \mathcal{C}(R_{T4b}) &=& \sum_k \frac{g_i g_j g_k^2}{16\pi^2} T^c_k T^b_j T^c_k T^a_i \end{eqnarray} \begin{eqnarray} R_{T4b}&=& \frac{\mathcal{M}_0+2\mathcal{M}_1}{t}\left[\frac{1}{\epsilon}-\mathsf{L}_{t}+4\right]\nonumber \\ &+&\frac{\mathcal{M}_4}{t}\left[-\frac{4}{\epsilon}+4\mathsf{L}_{t}-10\right] \end{eqnarray} The crossed graph $\bar R_{T4b}$ is given by applying Eq.~(\ref{mcross}) to $R_{T4b}$, and has color factor \begin{equation} \mathcal{C}(\bar R_{T4b}) = \sum_k \frac{g_i g_j g_k^2}{16\pi^2} T^c_k T^a_i T^c_k T^b_j \end{equation} given from $\mathcal{C}(R_{T4b})$ by $i,a \leftrightarrow j,b$. \subsection{Topology T5} The diagrams with topology T5 are shown in Figure~\ref{fig:t7}. There are no crossed graphs for this topology. \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[height=1.5cm]{R16.eps}&\includegraphics[height=1.5cm]{R22.eps} \\ $T5a$ & $T5b$ \end{tabular} \end{center} \caption{Diagrams of topology T5.
See caption of Figure~\ref{fig:tree}.}\label{fig:t7} \end{figure} \subsubsection{T5a} \begin{eqnarray} \mathcal{C}(R_{T5a}) &=& i f_{abc}^{(i)} \delta_{ij}\frac{g_i^4}{16\pi^2} \frac{C_A(i)}{2} T_i^c \nonumber \\ R_{T5a}&=& \frac{\mathcal{M}_1}{s} \left[- \frac{2}{\epsilon}+ 2\mathsf{L}_{s} -4 \right] \end{eqnarray} \subsubsection{T5b} \begin{eqnarray} \mathcal{C}(R_{T5b}) &=& \sum_{k} i f_{abc}^{(i)} \delta_{ij}\frac{g_i^2 g_k^2}{16\pi^2} T_k^d T_i^c T_k^d \nonumber \\ R_{T5b} &=& \frac{\mathcal{M}_1}{s}\biggl[ -\frac{4}{\epsilon^2}-\frac{6}{\epsilon}+\frac{4}{\epsilon}\mathsf{L}_{s}-2\mathsf{L}_{s}^2 + 6\mathsf{L}_{s}-16+\frac{\pi^2}{3} \biggr]\nonumber \\ \end{eqnarray} \subsection{Topology T6} The diagrams with topology T6 are shown in Figure~\ref{fig:t8}. There are no crossed diagrams with this topology. \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[height=1.5cm]{R11.eps}&\includegraphics[height=1.5cm]{R12.eps} \\ $T6a$ & $T6b$\\[10pt] \includegraphics[height=1.5cm]{R13.eps}&\includegraphics[height=1.5cm]{R17.eps}\\ $T6c$ & $T6d$ \\[10pt] \includegraphics[height=1.5cm]{R24.eps}&\includegraphics[height=1.5cm]{R25.eps} \\ $T6e$ & $T6f$\\ \end{tabular} \end{center} \caption{Diagrams of topology T6. 
See caption of Figure~\ref{fig:tree}.}\label{fig:t8} \end{figure} \subsubsection{T6a} \begin{eqnarray} \mathcal{C}(R_{T6a}) &=& \frac{g_i^4}{16\pi^2} if_{abc}^{(i)}\delta_{ij} C_A(i) T^c_i\nonumber \\ R_{T6a} &=& \frac{\mathcal{M}_1}{s} \biggl[ \frac{19}{6\epsilon} - \frac{19}{6} \mathsf{L}_{s} + \frac{58}{9} \biggr] \end{eqnarray} \subsubsection{T6b} \begin{eqnarray} \mathcal{C}(R_{T6b})&=& \frac{g_i^4}{16\pi^2} if_{abc}^{(i)}\delta_{ij} C_A(i) T^c_i\nonumber \\ R_{T6b} &=& \frac{\mathcal{M}_1}{s} \biggl[ \frac{1}{6\epsilon} - \frac{1}{6} \mathsf{L}_{s}+ \frac{4}{9} \biggr] \end{eqnarray} \subsubsection{T6c} \begin{eqnarray} \mathcal{C}(R_{T6c})&=& \sum_k \frac{g_i^2 g_k^2}{16\pi^2}i f_{abc}^{(i)}\delta_{ij} \text{Tr}_{CS}(T^c_i T^d_k)T^d_k \nonumber \\ R_{T6c} &=& \frac{\mathcal{M}_1}{s} \biggl[ -\frac{2}{3\epsilon} + \frac{2}{3} \mathsf{L}_{s} - \frac{16}{9} \biggr] \end{eqnarray} \subsubsection{T6d} \begin{eqnarray} \mathcal{C}(R_{T6d}) &=& \sum_k \frac{g_i^2 g_k^2}{16\pi^2}i f_{abc}^{(i)}\delta_{ij} \text{Tr}_{WF}(T^c_i T^d_k)T^d_k \nonumber \\ R_{T6d} &=& \frac{\mathcal{M}_1}{s} \biggl[ -\frac{4}{3\epsilon} + \frac{4}{3} \mathsf{L}_{s}- \frac{20}{9} \biggr] \end{eqnarray} \subsubsection{T6e, T6f} These diagrams vanish in dimensional regularization. \subsection{Topology T7} The diagram with topology T7 is shown in Figure~\ref{fig:t9}. \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[height=1.5cm]{R21.eps}& \\ $T7$ & \end{tabular} \end{center} \caption{Diagram with topology T7. 
See caption of Figure~\ref{fig:tree}.}\label{fig:t9} \end{figure} \begin{eqnarray} \mathcal{C}(R_{T7}) &=& \sum_k \frac{g_i g_jg_k^2}{16\pi^2} T^b_j T_k^c T_k^c T^a_i \nonumber \\ R_{T7} &=& \frac{\mathcal{M}_0 +2 \mathcal{M}_1}{t}\left[\frac{1}{\epsilon} - \mathsf{L}_{t} +1 \right] \end{eqnarray} The crossed graph $\bar R_{T7}$ is given by applying Eq.~(\ref{mcross}) to $R_{T7}$, and has color factor \begin{equation} \mathcal{C}(\bar R_{T7}) = \sum_k \frac{g_i g_jg_k^2}{16\pi^2} T^a_i T_k^c T_k^c T^b_j \end{equation} given from $\mathcal{C}(R_{T7})$ by $i,a \leftrightarrow j,b$. \section{$\bar{q} q \to VV$ in the Standard Model}\label{sec:smvv} The group theory factors for $\bar{q} q \to V^a_i V^b_j$ have been written in a form where they are applicable to gauge boson production for an arbitrary product group, with fermions in an arbitrary irreducible representation. In this section, we tabulate the group theory factors and give their values for the standard model. The group theory factors when the two vector bosons belong to the same group $G_i$ are given in Table~\ref{tab:VV}. The only assumption we have made on the structure of the gauge theory is that the gauge generators are orthogonal, \begin{eqnarray} \text{Tr}_{CS} \left(T^a_i T^b_j\right) &=& \delta_{ab} \delta_{ij} \trcs{i}\,,\nonumber \\ \text{Tr}_{WF} \left(T^a_i T^b_j\right) &=& \delta_{ab} \delta_{ij} \trwf{i}\,, \label{eq57} \end{eqnarray} which define $\trcs{i}$ and $\trwf{i}$. The orthogonality only needs to be checked if both $i$ and $j$ correspond to $U(1)$ factors, and is satisfied in theories which arise as the low-energy limit of unified theories based on semisimple Lie groups. The group factors and coupling constants have to be evaluated using their values for $G_i$ in the representation of the fermion. 
The group theory factors have been written in terms of \begin{eqnarray} \mathcal{G}_f &=& i f_{abc}T^c\,, \nonumber \\ \mathcal{G}_+&=& \frac12 \left\{T^a,T^b\right\}\,, \nonumber \\ \mathcal{G}_{TT} &=& \frac12 f_{acg}f_{bch}\left\{T^g, T^h\right\}\, . \end{eqnarray} $\mathcal{G}_+$ and $\mathcal{G}_{TT}$ can not, in general, be written as expressions linear in the group generators $T^a$. For $SU(N)$ groups in the fundamental representation, \begin{eqnarray} \mathcal{G}_+&=& \frac12 d_{abc} T^c + \frac{1}{2N}\delta_{ab} \,, \nonumber \\ \mathcal{G}_{TT} &=& \frac14 C_A d_{abc} T^c + \frac1 2 \delta_{ab}\,, \label{sunform} \end{eqnarray} but these expressions are not valid for general $SU(N)$ representations. Some examples of $\mathcal{G}_{TT}$ for higher $SU(N)$ representations are given in Appendix~A of Ref.~\cite{Dashen:1994qi}. For $U(1)$ groups, \begin{eqnarray} \mathcal{G}_f &=& 0\,,\nonumber \\ \mathcal{G}_+&=& Y_i^2 \,, \nonumber \\ \mathcal{G}_{TT} &=& 0 \,, \label{u1form} \end{eqnarray} where $Y_i$ is the $U(1)$ charge. $\Lambda_Q$ is defined by \begin{eqnarray} \Lambda_Q &=& \sum_i \alpha_i C_{F,Q}(i) \label{eq58} \end{eqnarray} where $C_{F,Q}(i)$ is the quadratic Casimir of the incoming fermion under gauge group $G_i$, and the sum is over all gauge groups. $C_A$ is the Casimir in the adjoint representation. For the standard model, the high-energy amplitude is most conveniently written in terms of the gauge bosons of the unbroken gauge theory --- $W^a$ of $SU(2)$, $B$ of $U(1)$ and gluons $G^a$ of $SU(3)$, and we can use Eqs.~(\ref{sunform},\ref{u1form}) with $Y_i \to Y_Q=1/6$ and $N=2,3$. The $d$-symbol vanishes for $SU(2)$. The factors in Eqs.~(\ref{eq57},\ref{eq58}) are \begin{eqnarray} \trcs{i}&=& \left\{ \begin{array}{cc} 0 & \text{for}\quad SU(3) \\ \frac12 n_S& \text{for}\quad SU(2) \\ \frac12n_S & \text{for}\quad U(1) \,, \end{array} \right. 
\nonumber \\ \trwf{i} &=& \left\{ \begin{array}{cc} 2n_g & \text{for}\quad SU(3) \\ 2n_g& \text{for}\quad SU(2) \\ \frac{10}{3}n_g& \text{for}\quad U(1) \,, \end{array} \right. \end{eqnarray} and \begin{eqnarray}\label{eq:lambda} \Lambda_Q &=& \frac43 \alpha_3 + \frac34 \alpha_2 +Y_Q^2 \alpha_1 \end{eqnarray} where $n_g=3$ is the number of fermion generations, and $n_S=1$ is the number of Higgs doublets. Group theory factors for the crossed graphs are given by $a \leftrightarrow b$, so that $\mathcal{G}_f \to -\mathcal{G}_f$ changes sign, $\mathcal{G}_+ \to \mathcal{G}_+$ and $\mathcal{G}_{TT} \to \mathcal{G}_{TT}$. \begin{table*} \begin{eqnarray*} \renewcommand{\arraystretch}{1.8} \begin{array}{cc|cc|cc|cc} \hline R_1 & 4\pi \alpha \left(\mathcal{G}_+- \frac12\mathcal{G}_f\right) & R_2 & -4\pi \alpha \mathcal{G}_f & && & \\ T1a & -\alpha^2\left(\mathcal{G}_{TT}-\frac14 C_A \mathcal{G}_f \right)& T1b & -\frac12 \alpha^2 C_A \mathcal{G}_+ + \alpha^2 \mathcal{G}_{TT}& T1c & \alpha\left(\frac14 \alpha C_A -\frac12\Lambda_Q\right)\mathcal{G}_f +\alpha^2 \mathcal{G}_{TT} & T1d & 2\alpha^2 \mathcal{G}_{TT} \\[-5pt] && && & +\alpha \left( \Lambda_Q -\alpha C_A\right)\mathcal{G}_+ & & \\ T2a & \frac12\alpha^2 C_A\mathcal{G}_f & T2b & \alpha^2 \trcs i \mathcal{G}_f & T2c & - \frac12 \alpha^2 C_A\mathcal{G}_f & T2d & \alpha^2 \trwf i \mathcal{G}_f \\ T3a & \frac12 \alpha^2 C_A \left(\mathcal{G}_+- \frac12\mathcal{G}_f\right) & T3b & \alpha \left( \mathcal{G}_+- \frac12\mathcal{G}_f\right) \left[\Lambda_Q-\frac12 \alpha C_A \right] & T4a & \frac12 \alpha^2 C_A\left( \mathcal{G}_+- \frac12\mathcal{G}_f\right) & T4b & \alpha\left(\mathcal{G}_+- \frac12\mathcal{G}_f\right) \left[\Lambda_Q - \frac12 \alpha C_A \right] \\ T5a & \frac12 \alpha^2 C_A \mathcal{G}_f & T5b & \alpha \mathcal{G}_f \left[ \Lambda_Q - \frac12 \alpha C_A \right] & & & & \\ T6a & \alpha^2 C_A\mathcal{G}_f & T6b & \alpha^2 C_A\mathcal{G}_f & T6c & \alpha^2 \trcs i \mathcal{G}_f & T6d & \alpha^2 \trwf i 
\mathcal{G}_f \\ T7 & \alpha \left(\mathcal{G}_+- \frac12\mathcal{G}_f\right) \Lambda_Q & & & & & & \\ \hline \end{array} \end{eqnarray*} \caption{Group theory coefficients $\mathcal{C}_i$ for the production of two identical gauge bosons. The coefficients $\bar{\mathcal{C}}_i$ of the crossed diagrams are given by $\mathcal{C}_i$ with $a \leftrightarrow b$, under which $\mathcal{G}_f \to -\mathcal{G}_f$, $\mathcal{G}_+ \to \mathcal{G}_+$ and $\mathcal{G}_{TT} \to \mathcal{G}_{TT}$. The notation is explained in the main text.}\label{tab:VV} \end{table*} Group theory factors for the production of gauge bosons in two different gauge groups are given in Table~\ref{tab:GW}. The gauge bosons with momenta $p_4$ and $p_3$ are $V^a_i$ and $V^b_j$, respectively. We define \begin{eqnarray} \rho = \sqrt{\alpha_{i} \alpha_{j}}\ T^a_i T^b_j\, . \end{eqnarray} The factors for the crossed graphs are given by $i \leftrightarrow j$. This gives the group theory factors for the reactions $\bar{q} q \to G^a(p_4) W^b(p_3)$, $\bar{q} q \to G^a(p_4) B(p_3)$ and $\bar{q} q \to W^a(p_4) B(p_3)$. The reaction $\bar{q} q \to W^a(p_3) G^b(p_4)$ is related to $\bar{q} q \to G^a(p_4) W^b(p_3)$ by exchanging the final gauge bosons, i.e.\ by the swap $i,a,p_4 \leftrightarrow j,b,p_3$.
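The numerical factors entering $\Lambda_Q$ in Eq.~(\ref{eq:lambda}) follow from $C_F=(N^2-1)/(2N)$ for the fundamental representation of $SU(N)$, together with $Y_Q=1/6$; a small exact-arithmetic cross-check (illustrative only, with no physical coupling values assumed):

```python
from fractions import Fraction as F

def casimir_fundamental(N):
    """C_F = (N^2 - 1) / (2N) for the fundamental representation of SU(N)."""
    return F(N * N - 1, 2 * N)

# Factors entering Lambda_Q = sum_i alpha_i C_{F,Q}(i) for a quark doublet:
assert casimir_fundamental(3) == F(4, 3)    # SU(3) colour
assert casimir_fundamental(2) == F(3, 4)    # SU(2)
Y_Q = F(1, 6)
assert Y_Q ** 2 == F(1, 36)                 # U(1) hypercharge contribution

def Lambda_Q(alpha3, alpha2, alpha1):
    # Eq. (lambda): Lambda_Q = (4/3) alpha_3 + (3/4) alpha_2 + Y_Q^2 alpha_1
    return (casimir_fundamental(3) * alpha3
            + casimir_fundamental(2) * alpha2
            + Y_Q ** 2 * alpha1)

# With all couplings set to 1, the three coefficients simply add up:
assert Lambda_Q(F(1), F(1), F(1)) == F(19, 9)
```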
\begin{table*} \begin{eqnarray*} \renewcommand{\arraystretch}{1.8} \begin{array}{cc|cc|cc|cc} \hline R_1 & 4\pi \rho & R_2 & 0 & && & \\ T1a & 0 & T1b & -\frac12\rho \alpha_{i} C_A(i) & T1c & \rho \left[ \Lambda_Q- \frac12\alpha_{i} C_A(i)-\frac12 \alpha_{j} C_A(j) \right] & T1d & 0 \\ T2a & 0 & T2b & 0 & T2c & 0 & T2d & 0 \\ T3a &\frac12 \rho \alpha_{i} C_A(i) & T3b & \rho \left[ \Lambda_Q- \frac12 \alpha_{i} C_A(i) \right] & T4a &\frac12 \rho \alpha_{j} C_A(j) & T4b & \rho \left[ \Lambda_Q- \frac12 \alpha_{j} C_A(j) \right] \\ T5a & 0 & T5b & 0 & & & & \\ T6a & 0 & T6b & 0 & T6c & 0 & T6d & 0 \\ T7 & \rho \Lambda_Q & & & & & &\\ \hline \end{array} \end{eqnarray*} \caption{Group theory coefficients $\mathcal{C}_i$ for the production of two different gauge bosons. Here $\rho = \sqrt{\alpha_{i} \alpha_{j}} T^a_i T^b_j$. The coefficients $\bar{\mathcal{C}}_i$ of the crossed diagrams are given by $\mathcal{C}_i$ with $i \leftrightarrow j$.}\label{tab:GW} \end{table*} \section{Scalar production}\label{sec:scalar} The notation for scalar production is analogous to that for vector boson production. The full amplitude is given by the sum of all diagrams $S_i$ with group theory factor $\mathcal{C}(S_i)$, \begin{equation}\label{eq:amps} \mathcal{M} = \sum_i \mathcal{C}(S_i)S_i\, . \end{equation} As in the vector boson case, $\bar S_i$ and $\bar{\mathcal{C}}_i$ denote the crossed diagrams and group theory factors. The Dirac matrix element is \begin{eqnarray} \mathcal{M}_\phi = \bar{v}(p_2)\slashed{p}_4 P_L u(p_1)\, . \end{eqnarray} The diagrams are classified in terms of the topologies given in Figure~\ref{fig:topos}. Exchanging the two final state scalars gives \begin{eqnarray} \mathcal{M}_\phi &\leftrightarrow& -\mathcal{M}_\phi\,,\nonumber \\ t &\leftrightarrow& u\, . \label{scross} \end{eqnarray} \subsection{Tree level amplitude} The tree level diagram is shown in Figure~\ref{fig:trees}.
\begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[height=1.5cm]{st.eps}& \\ $S_{1}$ & \end{tabular} \end{center} \caption{Tree level diagram for the production of two scalars. See caption of Figure~\ref{fig:tree}.}\label{fig:trees} \end{figure} At tree level, one finds \begin{eqnarray} \mathcal{C}(S_1) &=& \sum_i g_i^2 T^a_i \otimes T^a_i \,, \nonumber \\ S_1 &=& \frac{2}{s} \, \mathcal{M}_\phi \, . \end{eqnarray} For the group theory factor $X \otimes Y$, $X$ acts on the initial fermion space and $Y$ on the final scalar particle space. \subsection{Topology T1} The diagrams of topology T1 are shown in Figure~\ref{fig:t1s}. \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[height=1.5cm]{S1.eps}&\includegraphics[height=1.5cm]{S2.eps} \\ $T1a$ & $T1b$ \end{tabular} \end{center} \caption{Diagram with topology T1. See caption of Figure~\ref{fig:tree}.}\label{fig:t1s} \end{figure} \subsubsection{T1a} This diagram vanishes. \subsubsection{T1b} \begin{eqnarray} \mathcal{C}(S_{T1b}) &=& \sum_{ij} \frac{g_i^2 g_j^2}{16\pi^2} T^a_i T^b_j \otimes T^b_j T^a_i \nonumber \\ S_{T1b} &=& \mathcal{M}_\phi \Biggl[ -\frac{9}{s \epsilon^2}+\frac{1}{s \epsilon}\left(\mathsf{L}_{s}+8\mathsf{L}_{t}-2\right) -\mathsf{L}_{s}^2\frac{1}{2u}\left(\frac{7t}{s}+3\right)\nonumber \\ &&+\frac{2}{u}\mathsf{L}_{t}^2+\mathsf{L}_{s}\mathsf{L}_{t}\frac{4}{u}\left(\frac{t-u}{s}\right) +\mathsf{L}_{s}\frac{2}{s}-\frac{4}{s} \nonumber \\ &&-\frac{\pi^2}{4u}\left(11+\frac{19t}{s}\right)\Biggr] \end{eqnarray} The box diagram $S_{T1b}$ is the only one where a crossed diagram exists for scalar production. Exchanging the final state scalars gives the crossed-box graph. The amplitude is given by applying Eq.~(\ref{scross}) to $S_{T1b}$. For the group theory factor, one finds \begin{eqnarray} \mathcal{C}(\bar S_{T1b}) &=& \sum_{ij} \frac{g_i^2 g_j^2}{16\pi^2} T^a_i T^b_j \otimes T^a_i T^b_j \, . 
\end{eqnarray} \subsection{Topology T2} The diagrams of topology T2 are shown in Figure~\ref{fig:t2s}. There are no crossed diagrams with this topology. \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[height=1.5cm]{S3.eps}&\includegraphics[height=1.5cm]{S14.eps} \\ $T2a$ & $T2b$ \\[10pt] \includegraphics[height=1.5cm]{S10.eps}&\includegraphics[height=1.5cm]{S13.eps} \\ $T2c$ & $T2d$ \\[10pt] \includegraphics[height=1.5cm]{S12.eps} & \\ $T2e$ & \end{tabular} \end{center} \caption{Diagram with topology T2. See caption of Figure~\ref{fig:tree}.}\label{fig:t2s} \end{figure} \subsubsection{T2a} \begin{eqnarray} \mathcal{C}(S_{T2a}) &=& \sum_{i,j} \frac{g_i^2 g_j^2}{16\pi^2} T^a_i \otimes T_j^b T^a_i T_j^b\\ S_{T2a} &=& \frac{ \mathcal{M}_\phi}{s} \biggl[-\frac{4}{\epsilon^2}-\frac{8}{\epsilon}+\frac{4}{\epsilon}\mathsf{L}_{s}-2\mathsf{L}_{s}^2 + 8\mathsf{L}_{s}-16+\frac{\pi^2}{3} \biggr] \nonumber \end{eqnarray} \subsubsection{T2b} \begin{eqnarray} \mathcal{C}(S_{T2b}) &=& \sum_{i} \frac{g_i^4 }{16\pi^2} \frac{C_A(i)}{2} T^a_i \otimes T^a_i \\ S_{T2b} &=& \frac{\mathcal{M}_\phi}{s} \biggl[\frac{1}{\epsilon^2}-\frac{2}{\epsilon} -\frac{1}{\epsilon}\mathsf{L}_{s}+\frac12\mathsf{L}_{s}^2 + 2\mathsf{L}_{s}-4-\frac{\pi^2}{12} \biggr] \nonumber \end{eqnarray} \subsubsection{T2c, T2d, T2e} These diagrams vanish. Graph T2d is the only diagram involving the $\lambda \phi^4$ coupling. \subsection{Topologies T3, T4} There are no diagrams with topologies T3 and T4. \subsection{Topology T5} The diagrams of topology T5 are shown in Figure~\ref{fig:t5s}. There are no crossed graphs with this topology. \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[height=1.5cm]{S4.eps}&\includegraphics[height=1.5cm]{S5.eps} \\ $T5a$ & $T5b$ \end{tabular} \end{center} \caption{Diagram with topology T5. 
See caption of Figure~\ref{fig:tree}.}\label{fig:t5s} \end{figure} \subsubsection{T5a} \begin{eqnarray} \mathcal{C}(S_{T5a}) &=& \sum_{i,j} \frac{g_i^2 g_j^2}{16\pi^2} T^b_i T^a_j T^b_i \otimes T^a_j \,, \\ S_{T5a} &=& \frac{\mathcal{M}_\phi}{s}\left[-\frac{4}{\epsilon ^2} +\frac{-6+4 \mathsf{L}_{s}}{ \epsilon }-2 \mathsf{L}_{s}^2+6 \mathsf{L}_{s}+\frac{\pi ^2}{3 }-16 \right] \nonumber \end{eqnarray} \subsubsection{T5b} \begin{eqnarray} \mathcal{C}(S_{T5b}) &=& \sum_i \frac{g_i^4}{16\pi^2}\frac 12 C_A(i) T^a_i \otimes T^a_i \nonumber \\ S_{T5b} &=& \frac{\mathcal{M}_\phi}{s}\left[-\frac{2}{ \epsilon }+2\mathsf{L}_{s} -4 \right] \end{eqnarray} \subsection{Topology T6} The diagrams of topology T6 are shown in Figure~\ref{fig:t6s}. There are no crossed graphs with this topology. \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[height=1.5cm]{S7.eps}&\includegraphics[height=1.5cm]{S8.eps} \\ $T6a$ & $T6b$\\[10pt] \includegraphics[height=1.5cm]{S9.eps}&\includegraphics[height=1.5cm]{S6.eps} \\ $T6c$ & $T6d$ \end{tabular} \end{center} \caption{Diagram with topology T6. 
See caption of Figure~\ref{fig:tree}.}\label{fig:t6s} \end{figure} \subsubsection{T6a} \begin{eqnarray} \mathcal{C}(S_{T6a}) &=& \sum_i \frac{g_i^4}{16\pi^2} C_A(i) T_i^a \otimes T_i^a \nonumber \\ S_{T6a} &=& \frac{\mathcal{M}_\phi}{s} \biggl[ \frac{19}{6\epsilon} - \frac{19}{6} \mathsf{L}_{s} + \frac{58}{9} \biggr] \end{eqnarray} \subsubsection{T6b} \begin{eqnarray} \mathcal{C}(S_{T6b}) &=& \sum_i \frac{g_i^4}{16\pi^2} C_A(i) T_i^a \otimes T_i^a \nonumber \\ S_{T6b} &=& \frac{\mathcal{M}_\phi}{s} \biggl[ \frac{1}{6\epsilon}- \frac{1}{6} \mathsf{L}_{s} + \frac{4}{9} \biggr] \end{eqnarray} \subsubsection{T6c} \begin{eqnarray} \mathcal{C}(S_{T6c}) &=& \sum_k \frac{g_i^2 g_k^2}{16\pi^2} T_i^a \otimes T^d_k\text{Tr}_{CS}(T^a_i T^d_k) \nonumber \\ S_{T6c} &=& \frac{\mathcal{M}_\phi}{s} \biggl[ -\frac{2}{3\epsilon} + \frac{2}{3} \mathsf{L}_{s}- \frac{16}{9} \biggr] \end{eqnarray} \subsubsection{T6d} \begin{eqnarray} \mathcal{C}(S_{T6d}) &=& \sum_k \frac{g_i^2 g_k^2}{16\pi^2} T_i^a \otimes T^d_k \text{Tr}_{WF}(T^a_i T^d_k)\nonumber \\ S_{T6d} &=& \frac{\mathcal{M}_\phi}{s} \biggl[ -\frac{4}{3\epsilon} - \frac{20}{9} + \frac{4}{3} \mathsf{L}_{s} \biggr] \end{eqnarray} \section{$q \bar{q} \to \phi^\dagger \phi$ in the Standard Model}\label{sec:smscalar} Group theory factors for scalar production are given in Table~\ref{tab:ss}. They depend on group invariants which are listed below, followed by their values in the standard model. 
\begin{eqnarray} \Phi_B &=& \sum_{ij} \alpha_i \alpha_j T_i^a T_j^b \otimes T_j^b T_i^a \nonumber \\ &=& \left[\frac12\alpha_2^2 +2 \alpha_1 \alpha_2 Y_Q Y_\phi\right] t^a \otimes t^a\nonumber \\ &&+ \left[ \frac{3}{16} \alpha_2^2 +\alpha_1^2 Y_Q^2 Y_\phi^2\right] \openone \otimes \openone \nonumber \\ \Phi_C &=& \sum_{ij} \alpha_i \alpha_j T_i^a T_j^b \otimes T_i^a T_j^b \nonumber \\ &=& \left[-\frac12\alpha_2^2 +2 \alpha_1 \alpha_2 Y_Q Y_\phi\right] t^a \otimes t^a\nonumber \\ && + \left[ \frac{3}{16} \alpha_2^2 +\alpha_1^2 Y_Q^2 Y_\phi^2\right] \openone \otimes \openone \nonumber \\ \Phi_1 &=& \sum_i \alpha_i T_i^a \otimes T_i^a \nonumber \\ &=& \alpha_2 t^a \otimes t^a +\alpha_1 Y_Q Y_\phi \openone \otimes \openone \nonumber \\ \Phi_2 &=& \sum_i \alpha_i^2 C_A(i) T_i^a \otimes T_i^a = 2 \alpha_2^2 t^a \otimes t^a \nonumber \\ \Phi_{CS} &=& \sum_i \alpha_i^2 \trcs{i} T_i^a \otimes T_i^a \nonumber \\ &=& \frac12\alpha_2^2 n_S t^a \otimes t^a + 2 \alpha_1^2 n_S Y_Q Y_\phi^3 \openone \otimes \openone \nonumber \\ \Phi_{WF} &=& \sum_i \alpha_i^2 \trwf{i} T_i^a \otimes T_i^a \nonumber \\ &=& 2\alpha_2^2 n_g t^a \otimes t^a + \frac{10}{3}\alpha_1^2 n_g Y_Q Y_\phi \openone \otimes \openone \nonumber \\ \Lambda_{Q} &=& \sum_i \alpha_i C_{F,Q}(i) = \frac43 \alpha_3+ \frac{3}{4} \alpha_2 + \alpha_1 Y_Q^2 \nonumber \\ \Lambda_{\phi} &=& \sum_i \alpha_i C_{F,\phi}(i) = \frac{3}{4} \alpha_2 + \alpha_1 Y_\phi^2 \end{eqnarray} Here $Y_\phi=1/2$ is the hypercharge of the Higgs scalar, $C_{F,\phi}$ is the Casimir in the representation of the scalar field, and $\trcs{i}$ and $\trwf{i}$ are defined in Eq.~(\ref{eq57}). \subsection{Top quark loops} Top quark loops have to be included in the high scale matching for scalar production in the standard model, since the top-quark Yukawa coupling is comparable to the gauge couplings. Since this contribution to the high-scale matching depends on the details of the theory, top quark loops are only computed for the standard model. 
Here $y_t$ is the top quark Yukawa coupling and $Y(t_L)$ and $Y(t_R)$ are used for the $U(1)$ charges of the left- and right-handed top quarks, respectively. Note $Y(t_R)-Y(t_L)=Y_\phi=1/2$. \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[height=1.5cm]{Z1.eps}& \\ $S_{\text{top}}$& \end{tabular} \end{center} \caption{Top quark loop contribution to the amplitude. See also caption of Figure~\ref{fig:tree}.}\label{fig:top} \end{figure} The relevant diagram is shown in Figure~\ref{fig:top}, and has to be added to the amplitude for scalar production, Eq.~(\ref{eq:amps}). The graph with $t_L$ coupling to the gauge boson is \begin{eqnarray} &&3y_t^2 \frac{\alpha_1}{4\pi}Y_Q Y(t_L)\frac{\mathcal{M}_\phi}{s} \Biggl[-\frac{2}{\epsilon} +2 \mathsf{L}_{s} -4\Biggr]\ \openone \otimes \openone \nonumber \\ &&-3y_t^2 \frac{\alpha_2}{4\pi} \frac{\mathcal{M}_\phi}{s}\Biggl[-\frac{2}{\epsilon}+2 \mathsf{L}_{s} -4\Biggr] t^a \otimes t^a \end{eqnarray} and the graph with $t_R$ coupling to the gauge boson is \begin{eqnarray} - 3y_t^2\frac{\alpha_1}{4\pi } Y_Q Y(t_R) \frac{\mathcal{M}_\phi}{s}\Biggl[-\frac{2}{\epsilon}+2 \mathsf{L}_{s} -4\Biggr] \openone \otimes \openone \end{eqnarray} Adding the two contributions and using $Y(t_R)-Y(t_L)=Y_\phi$ gives \begin{eqnarray} S_{\text{top}} &=& -3y_t^2\frac{ \alpha_1 }{4\pi} Y_Q Y_\phi \frac{\mathcal{M}_\phi}{s}\Biggl[-\frac{2}{\epsilon} +2 \mathsf{L}_{s} -4\Biggr]\openone \otimes \openone \nonumber \\ &&-3y_t^2\frac{ \alpha_2 }{4\pi} \frac{\mathcal{M}_\phi}{s}\Biggl[-\frac{2}{\epsilon} +2 \mathsf{L}_{s} -4\Biggr] t^a \otimes t^a \, . \label{tloop} \end{eqnarray} 
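As an illustrative cross-check (not part of the calculation above), the pure-$SU(2)$ coefficients quoted in $\Phi_B$ and $\Phi_C$, namely $\pm\frac12$ multiplying $t^a \otimes t^a$ and $\frac{3}{16}$ multiplying $\openone \otimes \openone$, can be verified numerically with explicit Pauli matrices; the identity $t^a t^b = \frac14\delta^{ab} + \frac{i}{2}\epsilon^{abc} t^c$ underlies both decompositions:

```python
import numpy as np

# SU(2) generators in the fundamental (doublet) representation: t^a = sigma^a / 2
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
t = [m / 2 for m in sigma]

eye4 = np.kron(np.eye(2), np.eye(2))
t_ot_t = sum(np.kron(t[a], t[a]) for a in range(3))

# Phi_B-type contraction: sum_ab t^a t^b (x) t^b t^a = 1/2 t^a (x) t^a + 3/16 1 (x) 1
phiB = sum(np.kron(t[a] @ t[b], t[b] @ t[a]) for a in range(3) for b in range(3))
assert np.allclose(phiB, 0.5 * t_ot_t + (3 / 16) * eye4)

# Phi_C-type contraction: sum_ab t^a t^b (x) t^a t^b = -1/2 t^a (x) t^a + 3/16 1 (x) 1
phiC = sum(np.kron(t[a] @ t[b], t[a] @ t[b]) for a in range(3) for b in range(3))
assert np.allclose(phiC, -0.5 * t_ot_t + (3 / 16) * eye4)
```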
\begin{table} \begin{eqnarray*} \renewcommand{\arraystretch}{1.8} \begin{array}{cc|cc} \hline S_1 & 4\pi \Phi_1 & \\ T1b & \Phi_B & \bar T1b & \Phi_C \\ T2a & -\frac12 \Phi_2 + \Lambda_{\phi} \Phi_1& T2b & \frac12 \Phi_2 \\ T5a & -\frac12 \Phi_2 +\Lambda_{Q} \Phi_1 & T5b & \frac12 \Phi_2 \\ T6a & \Phi_2 & T6b & \Phi_2 \\ T6c &\Phi_{CS} & T6d & \Phi_{WF} \\ \hline \end{array} \end{eqnarray*} \caption{Group theory coefficients for the production of two charged scalars. The notation is explained in the main text.}\label{tab:ss} \end{table} \section{Consistency Checks}\label{sec:check} There is a consistency check on our matching coefficients, which follows from the fact that $S$-matrix elements are independent of the scale $\mu$ at which one matches from the full theory to SCET. Consider, for example, electroweak gauge boson production by left-handed quark doublets. There are five SCET operators which contribute~\cite{Chiu:2009ft,Chiu:2009mg} \begin{eqnarray} O_1 &=& \bar Q^{(u)}_2 Q^{(u)}_1 W^a_4W^a_3\nonumber \\ O_2 &=& \bar Q^{(u)}_2 t^c Q^{(u)}_1 i \epsilon^{abc} W^a_4 W^b_3 \nonumber \\ O_3 &=& \bar Q^{(u)}_2 t^a Q^{(u)}_1 B_4 W^a_3\nonumber \\ O_4 &=& \bar Q^{(u)}_2 t^a Q^{(u)}_1 W^a_4 B_3 \nonumber \\ O_5 &=& \bar Q^{(u)}_2 Q^{(u)}_1 B_4 B_3 \label{185.b} \end{eqnarray} with matching coefficients $C_i(\mu)$ at the matching scale $\mu$. Write the coefficients as \begin{eqnarray} C_i &=& C_i^{(0)} + C_i^{(1)} + \ldots \end{eqnarray} where $ C_i^{(0)}$ are the tree-level coefficients, $ C_i^{(1)}$ are the one-loop contributions, etc. 
Then $\mu$-independence implies the constraint \begin{eqnarray} \sum_k \mu \frac{{\rm d}\alpha_k }{{\rm d}\mu} \frac{\partial C_i^{(0)}}{\partial \alpha_k} + \mu \frac{\partial C_i^{(1)}}{\partial \mu} &=& \gamma^{(1)}_{ij} C_j^{(0)} \label{conscond} \end{eqnarray} where the sum on $k$ is over the three standard model gauge groups, and $\gamma^{(1)}_{ij} $ is the one-loop anomalous dimension in SCET computed in Refs.~\cite{Chiu:2009ft,Chiu:2009mg}. The SCET anomalous dimension is \begin{eqnarray} \bm{\gamma} &=& \left(2 \gamma_Q+ 2 \gamma_V\right) + \bm{\gamma}_S \end{eqnarray} where $\gamma_Q$ and $\gamma_V=\gamma_{W,B}$ are the collinear anomalous dimensions of $Q$, $W$, and $B$, and $\gamma_S$ is the soft anomalous dimension~\cite{Chiu:2009ft,Chiu:2009mg}. Using the values in Refs.~\cite{Chiu:2009ft,Chiu:2009mg} gives the following. \begin{widetext} The anomalous dimension is ($\mathsf{L}=\log s/\mu^2$) \begin{eqnarray} \bm{\gamma} &=& 2 \gamma_Q \openone + \left[ \begin{array}{ccccc} 2 \gamma_W & 0 & 0 & 0 & 0 \\ 0 & 2 \gamma_W & 0 &0 & 0\\ 0 & 0 & \gamma_W+\gamma_B & 0 & 0 \\ 0 & 0 & 0 & \gamma_W+\gamma_B & 0 \\ 0 & 0 & 0 & 0 & 2 \gamma_B \\ \end{array} \right]\nonumber \\ &&+ \frac{\alpha_1}{\pi} \left(-i\pi Y_Q^2\right)+\frac{\alpha_s}{\pi}\left(-\frac43i \pi\right) +\frac{\alpha_2}{\pi}\left[ \begin{array}{ccccc} -\frac{11}{4}i \pi & U-T & 0 & 0 & 0 \\ 2(U-T) & -\frac{11}{4}i \pi +T+U& 0 &0 & 0\\ 0 & 0 & -\frac{7}{4}i \pi + T + U & 0 & 0 \\ 0 & 0 & 0 & -\frac{7}{4}i \pi + T + U & 0 \\ 0 & 0 & 0 & 0 & -\frac{3}{4}i \pi \\ \end{array} \right] \label{adW} \end{eqnarray} where the first line is the collinear contribution, and the second line is the soft contribution. 
\end{widetext} Here \begin{eqnarray} \gamma_Q &=& \left( \frac{\alpha_s}{4\pi} \frac43 +\frac{\alpha_2}{4\pi} \frac34 +\frac{\alpha_1}{4\pi} Y_Q^2 \right) \left( 2 \log \frac{s}{\mu^2}-3\right)\nonumber \\ \gamma_W &=& \frac{\alpha_2}{4\pi} \left( 4 \log \frac{s}{\mu^2}-\frac{19}{6}\right)\nonumber \\ \gamma_B &=& \frac{\alpha_1}{4\pi} \left( \frac{41}{6}\right)\nonumber \\ \end{eqnarray} $T=\log(-t/s)-i \pi$, $U=\log(-u/s)-i\pi$, and $Y_Q=1/6$. The consistency condition Eq.~(\ref{conscond}) is satisfied using our results for the matching coefficients and Eq.~(\ref{adW}). This only checks the relation between the $\log \mu$ terms at one-loop and the tree-level coefficients. For scalar production by doublet quarks, the EFT operators are \begin{eqnarray} O_1 &=& \bar Q^{(u)} t^a Q^{(u)} \phi^\dagger_4 t^a \phi_3\nonumber \\ O_2 &=& \bar Q^{(u)} Q^{(u)} \phi^\dagger_4 \phi_3 \end{eqnarray} and the EFT anomalous dimension is~\cite{Chiu:2009ft,Chiu:2009mg} \begin{eqnarray} \bm{\gamma} &=& \left(2 \gamma_Q + 2 \gamma_\phi\right)\openone + \bm{\gamma}_S\nonumber \\ && + \frac{\alpha_s}{\pi} \left( -\frac43 i \pi \openone \right)\nonumber \\ &&+ \frac{\alpha_2}{\pi}\left(-\frac32 i \pi \openone + \left[ \begin{array}{cc} T+U & 2(T-U) \\ \frac38(T-U) & 0 \end{array} \right]\right)\nonumber \\ &&+\frac{\alpha_1}{\pi} \left(2 Y_Q Y_\phi (T-U) - i \pi (Y_Q^2+Y_\phi^2)\right) \end{eqnarray} where $Y_\phi=1/2$, and \begin{eqnarray} \gamma_\phi &=&\left(\frac34 \frac{\alpha_2}{4\pi}+\frac {1} {4} \frac{\alpha_1}{4\pi} \right)\left(2 \log \frac{s}{\mu^2}-4\right)+3 \frac{y_t^2}{16\pi^2}\,. \end{eqnarray} The consistency condition Eq.~(\ref{conscond}) is again satisfied by our matching results. Note that the $y_t^2$ term in $\gamma_\phi$ is consistent with the top-quark loop contribution to the matching Eq.~(\ref{tloop}). 
\section{Relation between the $S$-Matrix and the Matching Coefficient}\label{sec:smatrix} The results in this paper are for the on-shell diagrams with dimensional regularization used to regulate the ultraviolet and infrared divergences, and all low-energy scales set to zero. The total amplitude has the form \begin{eqnarray} A &=& \alpha \mu^{2\epsilon} T + \alpha^2 \mu^{2 \epsilon} L + \ldots\,, \end{eqnarray} where $T$ is the tree amplitude, and the one-loop amplitude $L$ contains $1/\epsilon$ UV and IR divergences, \begin{eqnarray} L &=& \frac{C_2}{\epsilon^2}+\frac{C_1+D_1}{\epsilon}+\frac{C_2}{\epsilon} \log \frac{\mu^2}{s}\nonumber \\ &&+\left(C_1+D_1\right) \log \frac{\mu^2}{s} +\frac12 C_2 \log^2 \frac{\mu^2}{s}+ F(s,t)\, .\nonumber \\ \end{eqnarray} Here $C_{1,2}$ are coefficients of the IR divergences, and $D_1$ is the coefficient of the UV divergence. Since we have set scaleless integrals to zero, we cannot distinguish IR and UV divergences, and our calculation thus gives $C_1+D_1$, but not each term separately. Note that the coefficient of the $\log \mu^2$ term is proportional to the sum of the $1/\epsilon$ UV plus IR singularities. To this must be added the counterterm graphs, which cancel the ultraviolet divergence $D_1/\epsilon$, to give the renormalized $S$-matrix element \begin{eqnarray} S &=& \alpha \mu^{2\epsilon} T + \alpha^2 \mu^{2 \epsilon}\Biggl\{\frac{C_2}{\epsilon^2}+\frac{C_1}{\epsilon}+\frac{C_2}{\epsilon} \log \frac{\mu^2}{s}\nonumber \\ &&+\left(C_1+D_1\right) \log \frac{\mu^2}{s} +\frac12 C_2 \log^2 \frac{\mu^2}{s}+ F(s,t)\Biggr\}\nonumber \\ &=& A - \alpha^2 \mu^{2 \epsilon}\frac{D_1}{\epsilon} \label{93} \label{smatrix} \end{eqnarray} The counterterm graphs must cancel all the UV singularities, since the theory is renormalizable, so there is no overall $1/\epsilon$ divergence times a $q \bar q VV$ or $q \bar q \phi^\dagger\phi$ operator. Note that the counterterm graphs cancel $D_1/\epsilon$, but not $D_1 \log \mu^2/s$. 
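The statement that the $\log\mu^2/s$ coefficients track the $1/\epsilon$ singularities can be checked symbolically: expanding the overall factor $\mu^{2\epsilon}$ against the pole terms regenerates exactly the logarithms appearing in $L$. A quick SymPy sketch (illustrative only; the symbol $L_\mu$ stands for $\log\mu^2/s$):

```python
import sympy as sp

eps, Lmu, C1, C2, D1 = sp.symbols('epsilon L_mu C1 C2 D1')

# mu^{2 eps}, measured relative to the scale s, is exp(eps * Lmu) with Lmu = log(mu^2/s)
prefactor = sp.exp(eps * Lmu)
poles = C2 / eps**2 + (C1 + D1) / eps

expanded = sp.series(prefactor * poles, eps, 0, 1).removeO()

# the log(mu^2/s) terms of L, with coefficients fixed by the 1/eps singularities
target = (C2 / eps**2 + (C1 + D1) / eps + C2 * Lmu / eps
          + (C1 + D1) * Lmu + sp.Rational(1, 2) * C2 * Lmu**2)
assert sp.simplify(expanded - target) == 0
```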
The renormalized $S$-matrix has $1/\epsilon$ divergences which are purely IR. These IR divergences lead to IR divergent cross-sections for parton-parton scattering in the massless theory. In QCD, the IR divergences cancel when computing a physical process involving IR safe observables. A textbook example is the cancellation of IR divergences between $e^+ e^- \to q \bar q$ at one-loop, and the tree-level rate for $e^+ e^- \to q \bar q g$, to give an IR safe cross-section for $e^+ e^- \to \text{hadrons}$ at order $\alpha_s$. The renormalized $S$-matrix satisfies the renormalization group equation \begin{eqnarray} \left[ \mu \frac{\partial}{\partial \mu}+\beta(g,\epsilon) \frac{\partial}{\partial g} \right] S &=& 0 \label{srge} \end{eqnarray} where \begin{eqnarray} \beta(g,\epsilon) &=& - \epsilon g - \frac{b_0 g^3}{16\pi^2} + \ldots \end{eqnarray} is the $\beta$-function in $4-2\epsilon$ dimensions, with $b_0=11 C_A/3 -2\trwf{}/3-\trcs{} /3$. Applying Eq.~(\ref{srge}) to Eq.~(\ref{smatrix}) shows that \begin{eqnarray} D_1 &=& \frac{b_0}{4\pi} T\, . \label{dvalue} \end{eqnarray} The one-loop counterterm contribution is equal to the one-loop $\beta$-function times the tree-level amplitude. Thus we do not need to explicitly compute the counterterm graphs. Equation~(\ref{dvalue}) and Eq.~(\ref{93}) give \begin{eqnarray} S &=& \alpha \mu^{2\epsilon} T + \alpha^2 \mu^{2 \epsilon} L - \alpha^2 \mu^{2 \epsilon}\frac{b_0}{4\pi} \frac{1}{\epsilon} T + \ldots\,, \label{expr} \end{eqnarray} which relates the renormalized $S$-matrix to the matching condition. An expression analogous to Eq.~(\ref{expr}) can be derived to higher orders. Eq.~(\ref{expr}) is the renormalized $S$-matrix, so all $1/\epsilon$ singularities are IR divergences, which are present in $S$-matrix elements for massless particles. Using Eq.~(\ref{expr}), we have computed the $q \bar q \to g g$ cross-section and verified that it agrees with Ellis and Sexton~\cite{Ellis:1985er}. 
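The extraction of $D_1$ from the renormalization group equation can also be verified symbolically. The SymPy sketch below (illustrative; it drops $F(s,t)$, which is $\mu$-independent, and works to $O(\alpha^2)$ and $O(\epsilon^0)$) applies $\mu\,\partial_\mu + \beta\,\partial_\alpha$, using $\mu\,{\rm d}\alpha/{\rm d}\mu = -2\epsilon\alpha - b_0\alpha^2/(2\pi)$, to the renormalized amplitude, and confirms that it vanishes when $D_1 = b_0 T/(4\pi)$ (in the normalization where the one-loop term of the amplitude is $\alpha^2\mu^{2\epsilon}L$):

```python
import sympy as sp

mu, s, eps, T, C1, C2, b0, a = sp.symbols('mu s epsilon T C1 C2 b0 alpha', positive=True)
D1 = b0 * T / (4 * sp.pi)
Lmu = sp.log(mu**2 / s)

# Renormalized S-matrix through one loop; F(s,t) drops out of the mu-derivative
S = (a * mu**(2 * eps) * T
     + a**2 * mu**(2 * eps) * (C2 / eps**2 + C1 / eps + C2 * Lmu / eps
                               + (C1 + D1) * Lmu + sp.Rational(1, 2) * C2 * Lmu**2))

beta_a = -2 * eps * a - b0 * a**2 / (2 * sp.pi)   # mu d(alpha)/d(mu) in 4-2eps dims
dS = sp.expand(mu * sp.diff(S, mu) + beta_a * sp.diff(S, a))

# O(alpha) terms cancel exactly; O(alpha^2) terms cancel up to O(epsilon)
assert sp.simplify(dS.coeff(a, 1)) == 0
assert sp.simplify(sp.series(dS.coeff(a, 2), eps, 0, 1).removeO()) == 0
```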
Some terms in $A$ do not interfere with the tree amplitude, and hence do not contribute to the cross-section at order $\alpha^3$. The tree-level amplitude only has non-zero helicity amplitudes for $+-$ and $-+$ polarized gauge bosons, so only the real parts of these one-loop helicity amplitudes are checked by the cross-section results of Ref.~\cite{Ellis:1985er}. For example, the matrix element $\mathcal{M}_1$ only contributes to $++$ and $--$ polarization states, and so does not interfere with the tree amplitude. We have also computed the one-loop helicity amplitudes for $++$, $+-$, $-+$ and $--$ polarized gauge bosons from the $S$-matrix Eq.~(\ref{expr}), and verified that they agree with the results in Ref.~\cite{Kunszt:1993sd} for an $SU(N)$ gauge theory. This provides a check on the result of Sec.~\ref{sec:T2d}, which is proportional to $\mathcal{M}_1$. \section{Conclusion} We have computed the high-scale matching at one-loop for vector boson production $q \bar q \to V^a_i V^b_j$ and scalar production $q \bar q \to \phi^\dagger \phi$, for an arbitrary gauge theory, and given the group theory factors for the standard model. When combined with the EFT results of Refs.~\cite{Chiu:2009mg,Chiu:2009ft}, this gives the renormalization group improved amplitudes for gauge boson and Higgs production in the standard model. Numerical plots using these results were already presented in Refs.~\cite{Chiu:2009mg,Chiu:2009ft}. The electroweak corrections to standard model processes at TeV energies are substantial; for example, the correction to transverse $W$ pair production at 2~TeV is $37$\%.
\section{Introduction} An embedded Lagrangian $L$ in a cotangent bundle $(T^*Q,d(pdq))$ is {\em exact} if $pdq|_L = df$ for some function $f: L \to \mathbb{R}$. Arnold's nearby Lagrangian conjecture predicts that if $Q$ and $L$ are closed, then $L$ is Hamiltonian-isotopic to the zero-section $Q \subset T^*Q$. This result is currently known to hold only for a limited list of examples, including $Q = S^2$ \cite{Hind} and $T^2$ \cite{DGI}. The work of many authors has also led to a proof that the composition $L \to T^*Q \to Q$ (where the first map is the embedding and the second is projection to the zero-section) is a simple homotopy equivalence \cite{AbouzaidKraghSimple}. Very little is known if one drops the requirement that $L$ be exact. We will consider the case of $L$ {\em monotone}, by which we mean that there is a constant $\tau \geq 0$ such that, for every map $u:(D^2 , \partial D^2) \to (T^*Q,L)$, $$ \int_{D^2} u^*\omega = \tau \cdot \mu(u) $$ where $\mu(u)$ is the {\em Maslov index} of $u$. Note that we allow the case $\tau=0$, which happens, for instance, when $L$ is exact (if the map $H^1(T^*Q;\mathbb{R}) \to H^1(L;\mathbb{R})$ is trivial, then $\tau=0$ implies that $L$ is exact). For some results about monotone Lagrangians in cotangent bundles, see for instance \cite{GadbledCotangent}. The focus of this paper is on closed monotone Lagrangians in cotangent bundles of spheres, from the point of view of Floer theory, more specifically using {\em wrapped Floer cohomology}. Given closed Lagrangians $L,L' \subset T^*Q$ (possibly equipped with additional data like bounding cochains or local systems) in a symplectic manifold, one can sometimes define their Floer cohomology $HF^*(L,L')$, which is invariant under Hamiltonian perturbations of either $L$ or $L'$. If $HF^*(L,L') \neq 0$, then $L$ is not Hamiltonian-displaceable from $L'$ (which means that $\varphi(L)\cap L' \neq \emptyset$ for every Hamiltonian diffeomorphism $\varphi$ of $T^*Q$) \cite{FloerLagrangian}. 
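For instance, exactness forces all disk areas to vanish (a standard observation, spelled out here for convenience): if $pdq|_L = df$, then for any $u:(D^2,\partial D^2) \to (T^*Q,L)$, Stokes' theorem gives
\begin{equation*}
\int_{D^2} u^*\omega = \int_{D^2} u^*\,d(pdq) = \int_{\partial D^2} u^*(pdq) = \int_{\partial D^2} d(f\circ u) = 0,
\end{equation*}
since $u(\partial D^2)\subset L$; so exact Lagrangians satisfy the monotonicity condition with $\tau = 0$.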
Unless we say otherwise, we will take Floer cohomology with coefficients in the Novikov field over $\mathbb{C}$, which is denoted by $\mathbb{K}$ and defined below. There is a 1-parameter family of disjoint monotone Lagrangians $(S^1\times S^{n-1})_\tau \subset T^*S^n$, of different monotonicity constants $\tau>0$, whose construction will be reviewed below. These Lagrangians can be equipped with local systems such that their Floer cohomologies are non-trivial. In $T^*S^3$, the same holds for a 1-parameter family of disjoint monotone Lagrangian tori $T^3_\tau$, see \cite{ChanPomerleanoUeda1}. We will review the construction of these tori below as well. We will prove the following result. \begin{theorem} \label{T:non-displ} Take $n\geq 2$ and let $L\subset T^*S^n$ be a closed orientable spin monotone Lagrangian with a local system of rank 1 for which $HF^*(L,L;\mathbb{K}) \neq 0$. Then, either $HF^*(L,S^n;\mathbb{K}) \neq 0$ (where the zero-section $S^n$ is equipped with a suitable bounding cochain) or there is a $\tau>0$ for which $HF^*(L,(S^1\times S^{n-1})_\tau;\mathbb{K}) \neq 0$ (where $(S^1\times S^{n-1})_\tau$ is equipped with a suitable unitary local system of rank 1). In particular, $L$ is not Hamiltonian-displaceable from either $S^n$ or from $(S^1\times S^{n-1})_\tau$, for some $\tau >0$. Furthermore, for $T^*S^3$ we can replace the Lagrangians $(S^1\times S^{2})_\tau$ with the tori $T^3_\tau$ in the previous statement. \end{theorem} Our work towards the proof of Theorem \ref{T:non-displ} will also imply the following. \begin{theorem} \label{T:S1xS2 and T3} Let $\tau, \tau' > 0$. Then $\tau = \tau'$ iff the Lagrangians $(S^1 \times S^2)_\tau$ and $T^3_{\tau'}$ can be equipped with local systems with respect to which $HF^*((S^1 \times S^2)_\tau,T^3_{\tau'};\mathbb{K}) \neq 0$. In particular, $(S^1 \times S^2)_\tau$ is not Hamiltonian-displaceable from $T^3_\tau$. \end{theorem} We now describe the structure of the proof of Theorem \ref{T:non-displ}. 
The Lagrangians $L$ in the statement give objects in a {\em monotone wrapped Fukaya} category of $T^*S^n$, which also includes a cotangent fiber $F = T^*_q S^n$ (for some $q\in S^n$). This is an $A_\infty$-category (with only a $\mathbb{Z}/2\mathbb{Z}$-grading, since we allow monotone Lagrangians), which we denote temporarily by $\mathcal{W}$ (and will refer to it as $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(Y;\mathbb{K})$ in Section \ref{S:W}). The category $\mathcal{W}$ is generated by the cotangent fiber $F$. This is an adaptation of a result in \cite{AbouzaidCotangentFiber} (which in its original form was for the wrapped Fukaya category of exact Lagrangians). Let us consider some algebraic consequences of this generation result. Let $A_\mathbb{K} := HW^*(F,F;\mathbb{K})$ be the wrapped Floer cohomology algebra of $F$. The graded algebra $A_\mathbb{K}$ is isomorphic to $H_{-*}(\Omega_q S^n;\mathbb{K})$, where $q\in S^n$ is a basepoint and $\Omega_q$ denotes the based loop space, see \cite{AbouzaidBasedLoops}. Hence, $A_\mathbb{K}$ is isomorphic to a polynomial algebra $\mathbb{K}[u]$, where $\deg(u) = 1-n$. There is a {\em Yoneda functor} \begin{align*} Y : \mathcal{W} &\to \mmod(A_\mathbb{K}) \\ L & \mapsto HF^*(F,L;\mathbb{K}) \end{align*} where $\mmod(A_\mathbb{K})$ is the category of $\mathbb{Z}/2\mathbb{Z}$-graded right $A_\mathbb{K}$-modules, with the morphism space between two objects $M,M'$ in $\mmod(A_\mathbb{K})$ being $\Ext_{A_\mathbb{K}}^*(M,M')$. The generation result mentioned above, together with formality results for $A_\infty$-modules over $A_\mathbb{K}$ that we prove in Section \ref{S:Formality algebra}, imply that $Y$ is a cohomologically full and faithful functor, in the sense that it induces an isomorphism on cohomology $$ HW^*(L,L';\mathbb{K}) \cong \Ext_{A_\mathbb{K}}^*(Y(L),Y(L')) $$ for any pair of objects $L,L'$. 
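As a simple illustration of morphism computations in $\mmod(A_\mathbb{K})$ (a standard toy example, not needed for the arguments below): for the one-dimensional module $\mathbb{K} = A_\mathbb{K}/(u)$, on which $u$ acts by zero, the free resolution
$$ 0 \to A_\mathbb{K} \xrightarrow{\ \cdot u\ } A_\mathbb{K} \to \mathbb{K} \to 0 $$
yields, after applying $\mathrm{Hom}_{A_\mathbb{K}}(-,\mathbb{K})$ (under which multiplication by $u$ becomes the zero map), $\Ext^0_{A_\mathbb{K}}(\mathbb{K},\mathbb{K}) \cong \mathbb{K}$ and $\Ext^1_{A_\mathbb{K}}(\mathbb{K},\mathbb{K}) \cong \mathbb{K}$, up to grading shifts determined by $\deg(u) = 1-n$.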
Take the subcategory $\mathcal{F} \subset \mathcal{W}$ (denoted as $\mathcal{F}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(Y;\mathbb{K})$ in Section \ref{S:W}) that does not include the object $F$, but only compact Lagrangians. Given such an object $L$ of this subcategory (where we suppress the additional data of local systems or bounding cochains), $HW^*(F,L;\mathbb{K}) = HF^*(F,L;\mathbb{K})$ is a finite dimensional $\mathbb{K}$-vector space, so $Y$ restricts to a cohomologically full and faithful embedding \begin{equation} \label{def:Y_c} Y_c : \mathcal{F} \to \mmod_{pr}(A_\mathbb{K}) \end{equation} where $\mmod_{pr}(A_\mathbb{K}) \subset \mmod(A_\mathbb{K})$ is the subcategory of proper $A_\mathbb{K}$-modules $M$ (those which are finite dimensional over $\mathbb{K}$). The approach of this paper will be to study the category $\mathcal{F}$ by analyzing the algebraic category $\mmod_{pr}(A_\mathbb{K})$. Corollary \ref{S generate} below gives generators for $\mmod_{pr}(A_\mathbb{K})$, and it implies the following result (which will appear below as Corollaries \ref{generate Fuk Sn} and \ref{generate Fuk}). \begin{theorem} \label{generate F} Given $n\geq 2$, the functor $Y_c$ in \eqref{def:Y_c}, when extended to the split-closure of $\mathcal F$ (which is the monotone Fukaya category of $T^*S^n$), is a quasi-equivalence of categories. The category $\mathcal{F}$ is split-generated by the uncountable collection of objects consisting of $S^n$ (equipped with suitable bounding cochains) and the $(S^1\times S^{n-1})_\tau$ (equipped with unitary local systems of rank 1). In the case of $T^*S^3$, we can replace the $(S^1\times S^{2})_\tau$ with the $T^3_\tau$ above. \end{theorem} \begin{proof}[Proof of Theorem \ref{T:non-displ}] Given $L$ such that $HF^*(L,L;\mathbb{K}) \neq 0$, $L$ is a non-trivial object in $\mathcal{F}$. Theorem \ref{generate F} then implies that $HF^*(L,L';\mathbb{K}) \neq 0$, where $L'$ is one of the split-generators. 
\end{proof} \begin{remark}[Relation to mirror symmetry] As mentioned, the tori $T^3_\tau$ were studied in \cite{ChanPomerleanoUeda1}. They are fibers of an SYZ fibration in the complement of an anticanonical divisor $H$ in $T^*S^3$ ($H$ is anticanonical in the sense that the Lagrangian tori in the SYZ fibration have vanishing Maslov class in the complement of $H$). In this setting, the authors compute the disk potentials associated to SYZ-fibers by studying wall-crossing for pseudoholomorphic disks. This information is used to construct a Landau--Ginzburg model that is mirror to $T^*S^3$. The critical locus of the Landau--Ginzburg potential is an affine line. If the mirror is constructed over the Novikov field, then the points in this critical line with negative valuation correspond to (split summands of) the monotone Lagrangians $T^3_\tau$, equipped with suitable unitary local systems of rank 1. The points with non-negative valuation correspond to bounding cochains on the zero section $S^3$. \end{remark} \begin{remark}[Relation to abstract flux] The monotone Lagrangians $(S^1\times S^{n-1})_\tau$ can be obtained geometrically as follows. Let $f:S^n \to \mathbb{R}$ be a Morse function with exactly two critical points. The graph of $df$ intersects the zero section of $T^*S^n$ transversely in the two critical points, and one can perform surgery on this transverse intersection to produce the family $(S^1\times S^{n-1})_\tau$. Similarly, the tori $T^3_\tau$ can be obtained by taking a Morse--Bott function $g:S^3 \to \mathbb{R}$ whose critical locus is a Hopf link, and performing surgery in $T^*S^3$ on the clean intersection of the zero section and the graph of $dg$. 
Recall that given a compact manifold $Q$ and a class $\alpha \in H^1(Q;\mathbb{R})$, one can take the {\em flux deformation} of the zero-section of $T^*Q$ in the direction of $\alpha$, by flowing $Q$ along a symplectic vector field $X$ such that $[\omega(.,X)] = i^*\alpha$ (where $i:Q\to T^*Q$ is the inclusion). Using the Weinstein tubular neighborhood theorem, one can similarly deform a compact Lagrangian $L$ in a symplectic manifold $(M,\omega)$ along a class $\alpha \in H^1(L;\mathbb{R})$. Motivated by \cite{SeidelAbstractFlux}, one can think of the family of Lagrangians $(S^1\times S^{n-1})_\tau$ (respectively, $T^3_\tau$) as an {\em abstract flux deformation} of two copies of the zero section $S^n$ (respectively, $S^3$) in the direction of a class $\beta\in H^n(S^n;\mathbb{R})$ (respectively, $H^3(S^3;\mathbb{R})$), if $n$ is odd. The case of $n$ even is more subtle, as we will see. \end{remark} This paper is organized as follows. In Section \ref{S:construct Lagrangians}, we present the construction of the monotone Lagrangians $(S^1 \times S^{n-1})_\tau$ in $T^*S^n$ and $T^3_\tau$ in $T^*S^3$. In Section \ref{S:Fukaya categories}, we recall the definitions of several versions of Fukaya categories of $T^*S^n$, including a monotone wrapped Fukaya category where Lagrangians are allowed to intersect cleanly. In Section \ref{S:HF computations}, we perform several Floer cohomology computations, with a view towards proving Theorem \ref{generate F}. The remaining sections have a more algebraic nature, and are about $A_\infty$-algebras and $A_\infty$-modules. In Section \ref{S:Formality algebra}, we establish formality results for a category of modules associated to a cotangent fiber in $T^*S^n$. In Section \ref{S:Generation modules}, we obtain generators for that category of modules. 
\subsection*{Acknowledgements} The first named author was supported by the Simons Foundation through its ``Homological Mirror Symmetry'' Collaboration grant SIMONS 385571, and by NSF grants DMS-1609148, and DMS-1564172. The second named author thanks Yank{\i} Lekili, Maksim Maydanskiy and Daniel Pomerleano for helpful conversations. He also thanks Institut Mittag-Leffler for the hospitality during the last stages of preparation of this article. \section{Monotone Lagrangians in $T^*S^n$} \label{S:construct Lagrangians} \subsection{Lagrangians in $T^*S^n$} \label{SS:Lagrs in T*Sn} Recall that $T^*S^n$ is symplectomorphic to the complex affine quadric $$ X_n = \{(z_0, \ldots, z_{n}) \in \mathbb{C}^{n+1} \,|\, z_0^2 + \ldots + z_{n}^2 = 1 \}, $$ equipped with the K\"ahler form $\omega$ obtained from the restriction of $\frac{i}{2}\sum_{j=0}^{n} d z_j \wedge d \overline{z_j}$ on $\mathbb{C}^{n+1}$ \cite{McDuffSalamonIntro}*{Exercise 6.20}. The projection to the first coordinate defines a Lefschetz fibration \begin{align*} \pi_n\colon X_n &\to \mathbb{C} \\ (z_0, \ldots, z_{n}) &\mapsto z_0 \end{align*} with critical values $\pm 1$. For every regular value $p\neq \pm 1$, the fiber $\pi_n^{-1}(p)$ is symplectomorphic to $T^*S^{n-1}$, and contains the Lagrangian sphere $$ V_{p}:= \{(p,\sqrt{1-p^2} \, x_1, \ldots, \sqrt{1-p^2} \, x_{n}) \in X_n \, | \, (x_1, \ldots, x_{n}) \in S^{n-1}\}, $$ where $S^{n-1}\subset \mathbb{R}^{n}$ is the unit sphere and $\sqrt{1-p^2}$ is one of the two square roots of $1-p^2$. Write also $V_{\pm1} = \{(\pm1,0,\ldots,0)\}$. \begin{figure} \begin{center} \def\svgwidth{0.5\textwidth} \input{F_w.pdf_tex} \end{center} \caption{Some curves in $\mathbb{C}\setminus \{\pm 1\}$} \label{F_w_fig} \end{figure} We will be interested in the following types of Lagrangians that project to curves under $\pi_n$. See Figure \ref{F_w_fig} for relevant examples of such curves. 
\begin{definition}\label{D:Lagr F} Given a curve $C\subset \mathbb{C}\setminus \{-1, 1\}$ that is the image of an embedding of $S^1$, let $$ L_C := \bigcup_{z\in C} V_{z}. $$ Given an embedding $\eta: [0,\infty) \to \mathbb{C}$ such that \begin{itemize} \item $\eta(0) \in \{-1,1\}$, \item $\eta\big((0,\infty)\big) \subset \mathbb{C}\setminus \{-1,1\}$ and \item $\eta(t) = at+b$ for some $a\in \mathbb{C}^*$, $b\in \mathbb{C}$ and $t$ large enough, \end{itemize} let \begin{equation*} F_\eta := \bigcup_{t\geq 0} V_{\eta(t)}. \end{equation*} \end{definition} \begin{lemma} \label{L:L_C and F_eta} The subsets $L_C$ and $F_\eta$ of $X_n$ in Definition \ref{D:Lagr F} are Lagrangian submanifolds. If $C$ encloses both points $\pm 1$, then $L_C$ is diffeomorphic to $S^1\times S^{n-1}$, while $F_\eta$ is Hamiltonian isotopic to a cotangent fiber in $T^*S^n$. \end{lemma} \begin{proof} The $L_C$ and $F_\eta$ are Lagrangians because parallel transport with respect to the connection induced by the symplectic fibration $\pi_n$ preserves the spheres $V_p$ (they are vanishing cycles for arbitrary vanishing paths in the base), see \cite{SeidelBook}*{Lemma 16.3}. Since there are only two types of $S^{n-1}$-bundles over $S^1$, and the closed curve $C$ encircles two critical values which have the same monodromy (a Dehn twist), it follows that $L_C$ is the trivial bundle. We now consider the Lagrangians $F_\eta$. Take $\eta_\pm$ such that $\eta_\pm(t)=\pm(t+1)$ for all $t\geq 0$. Then, $F_{\eta_\pm}$ is mapped to $T_{\pm 1}^*S^n$ by the symplectomorphism $X_n\to T^*S^n$ in \cite{McDuffSalamonIntro}*{Exercise 6.20}. For any other $\eta$, there is an isotopy to one of the $\eta_\pm$ that lifts to a Hamiltonian isotopy by an application of Moser's trick. \end{proof} \begin{remark} The Floer cohomology of the Lagrangian submanifolds $L_C\cong S^1\times S^{n-1}$ in $T^*S^n$ in the previous lemma was studied in \cite{AlbersFrauenfelderTorus}.
\end{remark} \begin{remark} In this Lefschetz fibration description $\pi_n : X_n\to \mathbb{C}$ of $T^*S^n$, the zero section $S^n$ is the Lagrangian lift of the interval $[-1,1] \subset \mathbb{C}$. \end{remark} Let us continue with our study of the Lagrangians $L_C$, where $C$ encloses $\{\pm1\}$. Much of what follows in this section is an adaptation of results in \cite{LekiliMaydanskiy}*{Section 2.2}. The homology long exact sequence of the pair $(T^*S^n,L_C)$ implies that $$ H_2(T^*S^n,L_C;\mathbb{Z}) \cong H_2(T^*S^n;\mathbb{Z}) \oplus H_1(L_C;\mathbb{Z}) $$ if $n\geq 2$. The group $H_2(T^*S^n;\mathbb{Z})$ vanishes unless $n=2$, but also in this case both $\omega$ and $c_1(T^*S^2)$ vanish on $H_2(T^*S^2;\mathbb{Z})$. The group $H_1(L_C;\mathbb{Z})$ has rank 2 for $n=2$ and rank 1 for all $n\geq 3$. For $n\geq 3$, $H_2(T^*S^n,L_C;\mathbb{Z})\cong \mathbb{Z}$ is generated by a class $\beta$ such that the boundary of $\pi_n\circ \beta$ covers $C$ once. For $n=2$, we can pick $\alpha, \beta\in H_2(T^*S^2,L_C;\mathbb{Z})$ such that their boundaries give a basis for $H_1(L_C;\mathbb{Z})\cong \mathbb{Z}^2$, with the following properties: $\alpha$ is a Lefschetz thimble for some vanishing cycle $V_p$ and hence has vanishing Maslov index and symplectic area, while the boundary of $\pi_2\circ \beta$ covers $C$ once. We will now study the $\omega$-area and Maslov index of the disks $\beta$. We need some auxiliary notation. Denote by $\sigma_{std}:= \frac{i}{2} dz\wedge d\overline {z} = r dr \wedge d\theta$ the standard area form in $\mathbb{C}$. Define, on the set of regular values of the Lefschetz fibration $\pi_n$, which is $\mathbb{C} \setminus \{\pm 1\}$, the 2-form $$ \sigma := \frac{i}{2} dz_0\wedge d\overline {z_0} + f^*\sigma_{std}, $$ where $f : \mathbb{C} \setminus \{\pm 1\} \to \mathbb{C}\setminus \{0\}$ is given by $f(z) = \frac{{1-z^2}}{\sqrt{2|1-z^2|}}$.
The function $f$ can be thought of as the composition of the two maps \begin{align*} \mathbb{C}\setminus\{\pm 1\} &\to \mathbb{C}\setminus\{0\} & \mathbb{C}\setminus \{0\} &\to \mathbb{C}\setminus\{0\} \\ z &\mapsto 1-z^2 & r e^{i\theta} &\mapsto \sqrt{\frac{r}{2}} e^{i\theta} \end{align*} The first map is holomorphic and the second is smooth and orientation-preserving, so $\sigma$ defines a positive measure on $\mathbb{C}\setminus \{\pm 1\}$. It extends to all of $\mathbb{C}$, as a measure that is absolutely continuous with respect to the Lebesgue measure. \begin{lemma} \label{L:sigma area} Given a disk $\beta: (D^2,\partial D^2)\to (X_n,L_C)$ such that $\pi_n\circ \beta$ covers $C$ once, we have $$ \int_\beta \omega = \int_{\pi_n (\beta)} \sigma. $$ \end{lemma} \begin{proof} Take $\beta$ as in the lemma. We can assume the boundary of $\beta$ to be given by $c(t) = \left(\gamma(t), \sqrt{1-\gamma(t)^2}\, s(t)\right)$, where $\gamma:[0,1]\to \mathbb{C}\setminus \{\pm 1\}$ is a degree 1 parametrization of $C$ and $s(t) = (s_1(t),\ldots,s_{n}(t))\in S^{n-1}$. Here, $\sqrt{.}$ is the analytic continuation of a branch of the square root along the path $1-\gamma^2$. Write $g(t):= \sqrt{1-\gamma(t)^2}$. We have \begin{equation} \label{difference} \int_\beta \left(\omega - \frac{i}{2}\, dz_0\wedge d\overline {z_0}\right) = \int_c \sum_{j=1}^{n} \frac{i}{4} (z_j \, d\overline{z_j} - \overline{z_j} \, d{z_j}) = \frac{i}{4} \int_0^1 \left(g \overline g' - \overline g g'\right) dt, \end{equation} using on the first identity Stokes' theorem and the fact that $\frac{i}{4} (z_j \, d\overline{z_j} - \overline{z_j} \, d{z_j})$ is a primitive of $\frac{i}{2} dz_j\wedge d\overline{z_j}$. Writing $g = r e^{i\theta}$ in polar form, one computes $\frac{i}{4}(g \overline g' - \overline g g') = \frac{1}{2} r^2 \theta'$, so the right side of \eqref{difference} can be written as $$ \int_0^1 g^*\left(\frac{1}{2}r^2 d\theta\right) = \int_{C}f^*\left(\frac{1}{2} r^2 d\theta\right) , $$ where $f$ is the function defined before the lemma; for the second equality, write $1-\gamma^2 = \rho e^{i\phi}$ and check that both $g^*\left(\frac{1}{2}r^2 d\theta\right)$ and $(f\circ \gamma)^*\left(\frac{1}{2}r^2 d\theta\right)$ equal $\frac{\rho}{4}\, d\phi$.
Identifying $C$ with the boundary of $\pi_n(\beta)$ and using Stokes' theorem, the integral on the right equals $$ \int_{\pi_n(\beta)} f^*\sigma_{std}, $$ which finishes the proof. \end{proof} \begin{remark} The previous argument also goes through if $C$ is a piecewise smooth curve. This will be helpful in Section \ref{S:HF computations}, when computing operations $\mu^k$ involving several Lagrangians that fiber over paths in $\mathbb{C}$. \end{remark} \begin{corollary} Suppose that the simple curves $C$ and $C'$ in $\mathbb{C}\setminus \{-1, 1\}$ both enclose $\{-1,1\}$. Then, they bound the same $\sigma$-area if and only if $L_C$ and $L_{C'}$ are Hamiltonian isotopic. \end{corollary} \begin{proof} The proof is similar to that of \cite{LekiliMaydanskiy}*{Corollary 2.5}. \end{proof} \begin{lemma} \label{Maslov L_C} The Maslov index of an oriented disk in $X_n$ with boundary in $L_C$, whose boundary projects to a degree 1 cover of $C$, is $2(n-1)$. The Lagrangians $L_C$ are monotone with monotonicity constant $\tau_C = \frac{\int_{\Omega_C}\sigma}{2(n-1)}$, where $\Omega_C\subset \mathbb{C}$ is the region bounded by $C$ in the plane. \end{lemma} \begin{proof} We begin by considering the Lagrangian lift $L_0$ of the unit circle in the model Lefschetz fibration $\pi : \mathbb{C}^n \to \mathbb{C}$, where $\pi(z_1,\ldots,z_n) = z_1^2 + \ldots + z_n^2$. The vanishing cycle over $p\in \mathbb{C}\setminus \{0\}$ of a vanishing path through $p$ is $$ V'_{p}:= \{\sqrt{p} (x_1, \ldots, x_{n}) \, | \, (x_1, \ldots, x_{n}) \in S^{n-1}\}, $$ see \cite{SeidelBook}*{Example 16.5}. We can use the holomorphic volume form \begin{equation} \label{Omega} \Omega = d z_1 \wedge \ldots \wedge d z_{n} \end{equation} on $\mathbb{C}^n$ to compute the Maslov index of a disk with boundary in $L_0$. Let $u$ be such a disk, of positive symplectic area and with boundary projecting to a simple cover of the unit circle. 
Let $\gamma: S^1\to L_0$ be a parametrization of this boundary loop such that $\pi(\gamma(t)) = e^{it}$. The imaginary part of $\left(e^{-i(nt+\pi)/2}\, \Omega\right)|_{L_0}$ vanishes, hence the Maslov index of $u$ is $n$ (see \cite{SeidelThomas} for similar computations). To compute the Maslov class of $L_C$ in the statement of the lemma, we observe that $L_C$ is Lagrangian-isotopic to a connected sum $L_{C_{-1}} \# L_{C_{1}}$, where $C_{\pm 1}$ is a small simple loop around $\pm 1$ (this is inspired by \cite{SeidelLES}). By picking a local trivialization of the Lefschetz fibration $\pi_n$ near $\pm1$, we see that the Maslov class of $L_{C_{\pm 1}}$ can be identified with that of $L_0$ above. This implies that one can think of a disk in $X_n$ with positive symplectic area, and with boundary in $L_C$ projecting to a simple cover of $C$, as a connected sum of two disks as in the previous paragraph. Hence, the Maslov index of the disk with boundary in $L_C$ is $2(n-1)$, as claimed. The monotonicity of $L_C$ and the value of $\tau_C$ now follow from Lemma \ref{L:sigma area}. \end{proof} Recall that, given a monotone Lagrangian $L$ in a symplectic manifold $(M,\omega)$ and a choice of basis $h_1,\ldots,h_m$ for the free part of $H_1(L;\mathbb{Z})$, we can define the {\em disk potential} $W_{L} : (\mathbb{C}^*)^m \to \mathbb{C}$ as \begin{equation} \label{disk potential} W_L(x_1,\ldots,x_m) = \sum_{u\in \mathcal{M}} \pm x^{\partial u}, \end{equation} where $\mathcal{M}$ is the moduli space of $J$-holomorphic maps $u:(D^2,\partial D^2)\to (M,L)$ of Maslov index 2, such that $u(1)=p$, for a generic choice of point $p\in L$ and compatible almost complex structure $J$ on $(M,\omega)$. The sign associated to $u$ depends on the spin structure of $L$.
If we write $\langle \partial u,h_i\rangle$ for the $h_i$-component of $[\partial u]$ in the free part of $H_1(L;\mathbb{Z})$, then $x^{\partial u}$ stands for the product $x_1^{\langle \partial u,h_1\rangle} \ldots x_m^{\langle \partial u,h_m\rangle}$. The disk potential does not depend on the choices of generic $p$ and $J$. \begin{lemma} \label{L:disk potential L_C} For $n=2$, the disk potential of $L_C$ is $W_{L_C} = x_1(1+x_2)^2$, in a basis $h_1,h_2 \in H_1(L_C;\mathbb{Z})\cong \mathbb{Z}^2$ where $h_1$ is a loop projecting to $C$ in degree 1 and $h_2$ is a fiber of the projection $\pi_2|_{L_C}$. The disk potential is zero if $n>2$. \end{lemma} \begin{proof} For $n=2$, the disk potential is computed in \cite{LekiliMaydanskiy}*{Lemma 2.19}, using the degeneration argument from \cite{SeidelLES}. In the proof of \cite{AurouxAnticanonical}*{Corollary 5.13}, the relevant Maslov index 2 disks are also computed explicitly, using the integrable complex structure in the target. The case $n>2$ follows from Lemma \ref{Maslov L_C}. \end{proof} Fix $\tau>0$ and a smooth embedded loop $C_\tau\subset \mathbb{C}\setminus \{-1,1\}$ that winds once around $-1$ and $1$ and bounds $\sigma$-area $2(n-1)\tau$. Denote by $L_\tau$, or $(S^1\times S^{n-1})_\tau$, the corresponding Lagrangian $L_{C_\tau}$. By Lemma \ref{Maslov L_C}, $L_\tau$ is monotone with monotonicity constant $\tau$. Observe that we can exhaust $\mathbb{C}\setminus [-1,1]$ by a collection of disjoint simple curves $C$, such that the corresponding monotonicity constants $\tau_{C}$ cover $\mathbb{R}_{>0}$ without repetitions. The matching sphere over the interval $[-1,1]\subset \mathbb{C}$ is the zero section $S^n\subset T^*S^n$. Assume that $C_\tau$ is the curve $C$ in Figure \ref{F_w_fig}, and denote by $F_i$ the lifts of the paths $\eta_i$ in the same figure. Similarly, denote by $F'$ the lift of the path $\eta'$.
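Since $x_1$ ranges over $\mathbb{C}^*$, the vanishing locus of the disk potential of $L_C$ for $n=2$ can be described explicitly:
$$
W_{L_C}(x_1,x_2) = x_1(1+x_2)^2 = 0 \quad \Longleftrightarrow \quad x_2 = -1,
$$
so the rank 1 unitary local systems $\xi$ with $W_{L_C}(\xi)=0$ are exactly those whose holonomy around the fiber class $h_2$ equals $-1$. This observation will be used when discussing which local systems define non-trivial objects in the monotone Fukaya category.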
Recall that two Lagrangian submanifolds $L,L' \subset (X,\omega)$ {\em intersect cleanly} if $K:=L\cap L'$ is a manifold and for every $x\in K$ we have $T_x K = T_xL \, \cap \, T_x L' \subset T_x X$. \begin{lemma} \label{clean inters1} For every $i\geq 0$, $F_i$ and $L_\tau$ intersect cleanly. For every $i,j\geq 0$, $F_i$ and $F_{j}$ intersect cleanly. Also, all these Lagrangians intersect $F'$ cleanly. \end{lemma} \begin{proof} This follows from the fact that the Lagrangians project under the map $\pi_n:X_n\to \mathbb{C}$ to curves that intersect transversely. \end{proof} \subsection{More Lagrangians in $T^*S^3$} \label{SS:Lagrs in T*S3} It will be useful to also consider an alternative description of the complex affine quadric 3-fold, which is symplectomorphic to $T^*S^3$. We borrow some notation from \cite{ChanPomerleanoUeda1}. Write $$ X = \{(z,u_1,v_1,u_2,v_2) \in \mathbb{C}^5 \,|\, u_1 v_1 = z +1, u_2 v_2 = z - 1 \}. $$ Consider the Lefschetz fibrations \begin{align*} \pi^i: \mathbb{C}^2 &\to \mathbb{C} \\ (u_i,v_i) & \mapsto u_i v_i + (-1)^i, \end{align*} where $i\in \{1,2\}$. The map $\pi^i$ has a unique critical value at $(-1)^i$ and, given $p\in \mathbb{C}\setminus \{(-1)^i\}$, the vanishing circle in $(\pi^i)^{-1}(p)$ of a vanishing path through $p$ is $$ V_{i,p}:= \{(u_i,v_i)\in \mathbb{C}^2 \, | \, \pi^i(u_i, v_i) = p, |u_i|=|v_i| \}. $$ Write also $V_{i,(-1)^i} = \{(0,0)\}$. For more details, see \cite{ChanPomerleanoUeda1} and \cite{SeidelBook}*{Example 16.5}. The quadric $X$ is the fiber product of these two fibrations: \begin{displaymath} \xymatrix{ & X \ar[ld]_{f_1} \ar[rd]^{f_2} \ar@{-->}[dd]^z \\ \mathbb{C}^2 \ar[rd]_{\pi^1} & & \mathbb{C}^2 \ar[ld]^{\pi^2} \\ & \mathbb{C} } \end{displaymath} The map $z:X\to \mathbb{C}$ is not a Lefschetz fibration, but it can be thought of as a Morse--Bott analogue, with critical values $\pm 1$ and such that the critical locus over $\pm 1$ is a copy of $\mathbb{C}^*$. 
We will consider the following analogues of the Lagrangians $L_C$ and $F_\eta$ from the previous section. It will again be useful to have Figure \ref{F_w_fig} in mind. \begin{definition}\label{D:Lagr N} Given a curve $C\subset \mathbb{C}\setminus \{\pm 1\}$ that is the image of an embedding of $S^1$, let $$ T_C := \bigcup_{z\in C} V_{1,z}\times V_{2,z}. $$ Given an embedding $\eta: [0,\infty) \to \mathbb{C}$ such that \begin{itemize} \item $\eta(0) = 1$, \item $\eta\big((0,\infty)\big) \subset \mathbb{C}\setminus \{\pm 1\}$ and \item $\eta(t) = at+b$ for some $a\in \mathbb{C}^*$, $b\in \mathbb{C}$ and $t$ large enough, \end{itemize} let \begin{align*} N_\eta &:= \bigcup_{t\geq 0} V_{1,\eta(t)}\times V_{2,\eta(t)}. \end{align*} \end{definition} Several arguments in the previous section can be adapted to this setting. This time, if $C$ encloses $\{-1,1\}$, then $T_C$ is diffeomorphic to a torus $T^3$ and we have $$ H_2(T^*S^3,T_C;\mathbb{Z}) \cong H_1(T_C;\mathbb{Z}) \cong \mathbb{Z}^3. $$ We can pick a basis $\alpha_1$, $\alpha_2$, $\beta$ for this relative homology group, such that $\alpha_1$ is a fiber product of a Lefschetz thimble for $\pi^1$ by a point, and $\alpha_2$ is a fiber product of a point by a Lefschetz thimble for $\pi^2$. We choose $\beta$ so that its boundary projects to a degree 1 cover of $C$. Again, the fact that the $\alpha_i$ are represented by Lagrangian thimbles implies that they have vanishing area and Maslov index. We are left with determining the area and index of $\beta$. As before, there is a positive measure $\sigma'$ on $\mathbb{C}$, absolutely continuous with respect to the Lebesgue measure and smooth in $\mathbb{C}\setminus \{\pm 1\}$, such that the following result holds. \begin{lemma} $T_C$ and $N_\eta$ are Lagrangian submanifolds of $X$. The Lagrangian $T_C$ is diffeomorphic to $T^3$. Given $\beta$ as above, its $\omega$-area is $\int_{\Omega_C} \sigma'$, where $\Omega_C \subset \mathbb{C}$ is the region bounded by $C$, and its Maslov index is 2.
Therefore, $T_C$ is monotone with monotonicity constant $\tau_C = \int_{\Omega_C} \sigma' /2$. The $N_\eta$ are Hamiltonian-isotopic to the conormal Lagrangian of the unknot in $S^3$. In particular, they are diffeomorphic to $S^1\times \mathbb{R}^2$ and are exact. \end{lemma} \begin{proof} The proof uses arguments similar to the ones in the previous section, so we omit them. See \cite{ChanPomerleanoUeda1} for the proofs of some of these statements. \end{proof} We can also write the disk potential of $T_C$. \begin{lemma} \label{L:disk potential T_C} The disk potential of $T_C$ is $$ W_{T_C} = x_1 (1 + x_2)(1 + x_3), $$ in a basis $h_1,h_2,h_3\in H_1(T_C;\mathbb{Z})\cong \mathbb{Z}^3$ such that $h_1$ is a loop projecting to $C$ in degree 1, $h_2 = V_{1,z}\times \{p_2\}$ for some $z\in C$ and $p_2\in V_{2,z}$, and $h_3 = \{p_1\}\times V_{2,z}$ for some $z\in C$ and $p_1\in V_{1,z}$. \end{lemma} \begin{proof} This is computed in \cite{ChanPomerleanoUeda1}. \end{proof} We can again exhaust $\mathbb{C}\setminus [-1,1]$ by disjoint simple closed curves $C$, such that the collection of monotonicity constants $\tau_C$ of the $T_C$ covers $\mathbb{R}_{>0}$ injectively. Fix $\tau>0$ and denote by $T^3_\tau$ the Lagrangian torus with monotonicity $\tau$ in this family. Assume that $T^3_\tau$ is the lift of the curve $C$ in Figure \ref{F_w_fig}. Denote also by $N_i$, resp.~$N'$ the lifts of the paths $\eta_i$, resp.~$\eta'$, in Figure \ref{F_w_fig}. \begin{lemma} For every $i\geq 0$, $N_i$ and $T^3_\tau$ intersect cleanly. For every $i,j\geq 0$, $N_i$ and $N_{j}$ intersect cleanly. All these Lagrangians intersect $N'$ cleanly. \end{lemma} \begin{proof} As in Lemma \ref{clean inters1}, this result follows from the fact that the Lagrangians project to curves in the plane that intersect transversely. \end{proof} \section{Wrapped Fukaya categories} \label{S:Fukaya categories} The wrapped Fukaya category of a Liouville domain $M$ was introduced in \cite{AbouzaidSeidelViterbo}. 
In the original definition, the objects are exact Lagrangians in the completed Liouville manifold $\widehat M$. The Lagrangians are either compact or agree outside of a compact set with the product of $\mathbb{R}$ with a Legendrian submanifold of the contact manifold $\partial M$. We will consider various versions of the wrapped Fukaya category, possibly allowing for closed monotone Lagrangians, as in \cite{RitterSmith}. For Lagrangians intersecting cleanly, we will use a Morse--Bott formalism similar to that of \cite{SeidelAbstractFlux} to compute the associated $A_\infty$-maps $\mu^k$. \subsection{Coefficients} Some of the Floer cohomology groups we will study are defined over $\mathbb{Z}$, and some over a {\em Novikov field}. Given a commutative ring $R$, which for us will always be either $\mathbb{Z}$ or $\mathbb{C}$, write $$ \mathbb{K}_R := \left\{ \sum_{i=0}^\infty a_i T^{\lambda_i} \,| \, a_i \in R , \lambda_i \in \mathbb{R}, \lambda_i< \lambda_{i+1}, \lim_{i\to \infty}\lambda_i = \infty \right\}. $$ We will be mostly interested in $\mathbb{K}_\mathbb{C}$, which will be denoted simply by $\mathbb{K}$. We can replace $\mathbb{C}$ with any algebraically closed field of characteristic zero, so that the Novikov field is algebraically closed. See Section \ref{S:Generation modules} for more on this point. There is a {\em valuation} map \begin{align*} \val \colon \mathbb{K}_R &\to (-\infty,\infty] \\ \sum_{i=0}^\infty a_i T^{\lambda_i} &\mapsto \min \{\lambda_i \,|\, a_i \neq 0\} \end{align*} where $\val(0) = \infty$. Say that $\alpha \in \mathbb{K}_R$ is {\em unitary} if $\val(\alpha) = 0$. 
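For instance,
$$
\val\!\left(5\,T^{-1} + 2 + 3\,T^{3/2}\right) = -1, \qquad \val\!\left(1 + T^{1/2}\right) = 0,
$$
so $1 + T^{1/2}$ is unitary while $5\,T^{-1} + 2 + 3\,T^{3/2}$ is not. The exponents of $T$ will later record the symplectic areas of holomorphic disks.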
Denote by $U_{\mathbb{K}_R} := \left\{ \alpha \in \mathbb{K}_R \,|\, \val(\alpha) = 0\right\}$ the group of unitary elements in $\mathbb{K}_R$, by $\mathbb{K}_{R,0} := \left\{ \alpha \in \mathbb{K}_R \,|\, \val(\alpha) \geq 0\right\}$ the {\em Novikov ring} and by $\mathbb{K}_{R,+} := \left\{ \alpha \in \mathbb{K}_R \,|\, \val(\alpha) > 0\right\}$ the maximal ideal in $\mathbb{K}_{R,0}$. \subsection{Morse--Bott Floer cohomology for clean intersections} We will use a Morse--Bott version of the Fukaya category, where Lagrangians are allowed to intersect cleanly, as in \cite{SeidelAbstractFlux}*{Section 3.2}. For more details on a construction of the Fukaya category with a Morse--Bott definition of the $A_\infty$-algebra of endomorphisms of a Lagrangian submanifold, see \cite{SheridanPOP}*{Section 4}. These references assume that the Lagrangians are exact, which precludes disk bubbling. Lemma \ref{L:vanishing potential} below guarantees that if $(L,\xi)$ is a Lagrangian with a rank 1 unitary local system giving a non-trivial object in the Fukaya category, then $\xi$ corresponds to a zero of the disk potential of $L$. This is useful when considering 2-parameter families of pearly trees of holomorphic disks (to prove the $A_\infty$-relations, for instance), since the vanishing of the disk potentials implies the cancellation of configurations with disk bubbles. Therefore, we can assume for many purposes that the relevant Lagrangians bound no holomorphic disks. A detailed Morse--Bott construction of Floer cohomology groups of cleanly intersecting Lagrangians is given in \cite{SchmaeschkeClean}, after earlier work in \cite{FOOO} and \cite{FrauenfelderThesis}. Let us briefly define the relevant Floer complexes. 
Let $L_0,L_1$ be two Lagrangians such that each $L_i$ is equipped with: \begin{itemize} \item an orientation and a spin structure; \item a unitary local system $\xi_i$ on a trivial $\mathbb{K}_R$-bundle $E_i = \oplus_k E_{i,k}$, where the direct sum is finite and the summand $E_{i,k}$ of grading $k$ is a finite rank trivial vector bundle over $L_i$. \end{itemize} \begin{remark} In this article, the zero section in $T^*S^n$, with $n>1$ even, is the only class of Lagrangians that we will equip with local systems of rank greater than 1. In that case, the rank will be 2 and the holonomy will be trivial since the Lagrangians are simply connected. \end{remark} \begin{remark} The choices of spin structures on the Lagrangians are necessary to orient moduli spaces of holomorphic curves. Nevertheless, in our computations we will not be very careful in specifying spin structures on the Lagrangians. This is because the effect of changing the spin structure on a Lagrangian is a change in signs associated to holomorphic curves, and this change can be compensated by the choice of a different local system $\xi$ on the Lagrangian. \end{remark} The categories of exact Lagrangians we will consider are $\mathbb{Z}$-graded, so we will need additional choices of gradings for such $L_i$ (as in \cite{SeidelBook}*{Section 12a}). The categories of monotone Lagrangians will only be $\mathbb{Z}/2\mathbb{Z}$-graded, so we will not need gradings in that case. Denote $\mathcal L_i\coloneqq (L_i,\xi_i)$, where $\xi_i$ is a local system on the $\mathbb{K}_R$-bundle $E_i$. Assume that the Lagrangians intersect cleanly and let $f: L_0 \cap L_1 \to \mathbb{R}$ be a Morse function.
Define the cochain complex $$ CF^k(\mathcal L_0,\mathcal L_1) := \bigoplus_{C\subset L_0\cap L_1} \qquad \bigoplus_{\mathclap{\substack{p\in \crit(f|_C)}}} \, {\operatorname{Hom}}^{k-\deg(p)}_{\mathbb{K}_R}\left((E_0)_p,(E_1)_p\right) \otimes_{\mathbb{Z}} \mathfrak o $$ where the $C\subset L_0\cap L_1$ are connected components of the intersection, $\mathfrak o$ is the orientation line (a rank 1 local system over $\mathbb{Z}$ depending on the spin structures of the $L_i$), and ${\operatorname{Hom}}^{k-\deg(p)}_{\mathbb{K}_R}$ denotes $\mathbb{K}_R$-linear maps of degree $k-\deg(p)$. Here, the Floer degree associated to the critical point $p$ is $\deg(p) = \dim(C)- \ind(p) + \deg(C)$, where $\ind(p)$ is the Morse index of $p$ as a critical point of $f|_C$ and $\deg(C)$ is an absolute Maslov index, which depends on the gradings of the $L_i$. The operations $\mu^k$ are defined on tensor products of these chain complexes, via counts of {\em pearly trees}. We give a very brief description of these, referring the reader to \cite{SeidelAbstractFlux}*{Section 3.2} for more details. Given a collection $\mathcal L_0, \ldots, \mathcal L_k$ of Lagrangians with local systems, a pearly tree contributing to $$ \mu^k: CF^*(\mathcal L_{k-1},\mathcal L_k) \otimes \ldots \otimes CF^*(\mathcal L_0,\mathcal L_1) \to CF^*(\mathcal L_0,\mathcal L_k) $$ is a collection of perturbed pseudoholomorphic disks (with respect to auxiliary almost complex structures and perturbing 1-forms) with boundary punctures and Lagrangian boundary conditions, connected by gradient flow lines of auxiliary Morse functions and metrics on the clean intersections of the $L_i$. This collection of disks and flow lines can be concatenated into a continuous map from a disk with $k+1$ boundary punctures to the symplectic manifold, with boundary components of the disk mapping to the Lagrangians $L_0, \ldots, L_k$, see Figure \ref{pearly_tree_fig}.
The contribution of a rigid configuration of disks and flow lines to $\mu^k$ is determined by the areas of the pseudoholomorphic disks (which are encoded in the exponents of the variable $T$ in the Novikov field), by signs specified by the spin structures on the $L_i$, and by parallel transport with respect to the local systems $\xi_i$ on the $E_i$ along the boundary components of the concatenated disk (with the input elements of ${\operatorname{Hom}}_{\mathbb{K}_R}(E_i,E_{i+1})$ applied at the boundary punctures). The $\mu^k$ satisfy the $A_\infty$-relations, which can be written in abbreviated form as $\mu\circ\mu=0$. \begin{figure} \begin{center} \def\svgwidth{0.4\textwidth} \input{pearly_tree.pdf_tex} \end{center} \caption{A pearly tree contributing to $\mu^4$} \label{pearly_tree_fig} \end{figure} We will also want to consider Fukaya categories containing additional objects. A {\em bounding cochain} on an object $\mathcal L$ in a $\mathbb{Z}/2\mathbb{Z}$-graded Fukaya category is $b\in CF^{\odd}(\mathcal L,\mathcal L)$ satisfying the {\em Maurer--Cartan equation} \begin{equation} \label{MC} \sum_{k=1}^\infty \mu^k(b,\ldots,b) = 0, \end{equation} see \cite{FOOO} (to ensure convergence, we have to assume that $b$ has strictly positive valuation if it corresponds geometrically to a Morse chain of degree $1$). We can enlarge our category by allowing objects of the form $(\mathcal L,b)$, for such $b$. The object $\mathcal L$ can be identified with $(\mathcal L,0)$. Given objects $(\mathcal L_0,b_0),\ldots,(\mathcal L_k,b_k)$ in the enlarged category, the $A_\infty$-maps $$ \hat\mu^k : CF^*(\mathcal L_{k-1},\mathcal L_{k})\otimes \ldots \otimes CF^*(\mathcal L_0,\mathcal L_{1}) \to CF^*(\mathcal L_0,\mathcal L_{k}) $$ are given by $$ \hat\mu^k(x_k,\ldots,x_1) := \sum_{l_0,\ldots,l_k \geq 0} \mu^{(k+\sum_i l_i)}(\underbrace{b_k,\ldots,b_k}_{l_k},x_k,b_{k-1},\ldots,b_1,x_1,\underbrace{b_0,\ldots,b_0}_{l_0}).
$$ The fact that the $b_i$ satisfy the Maurer--Cartan equation \eqref{MC} implies the $A_\infty$-equations $\hat\mu \circ \hat \mu = 0$. Since $\hat \mu^k$ agrees with $\mu^k$ when all $b_i=0$, we will continue to write $\mu^k$ instead of $\hat\mu^k$. \subsection{Wrapped Floer cohomology} We will use a model for wrapped Floer cohomology from \cite{AbouzaidSeidelFuture}, which is presented in \cite{GPS1} and \cite{SylvanFunctors}. Let $L_0$ be a non-compact Lagrangian, which in this paper will be either a cotangent fiber $F$ or a conormal Lagrangian of the unknot $N$. We pick a family $L_i$ of Lagrangians that are lifts of paths $\eta_i$ in the base of the Lefschetz fibration $\pi_n$ from Section \ref{SS:Lagrs in T*Sn} (in the case of $F$), or in the base of the fiber product of Lefschetz fibrations $\pi^i$ from Section \ref{SS:Lagrs in T*S3} (in the case of $N$), where the path $\eta_i$ wraps $i$ times around the two critical values, see Figure \ref{F_w_fig}. Then, given another Lagrangian $L'$, we have $$ HW^*(L_0,L') := \lim_{i\to \infty} HF^*(L_i,L'), $$ with the limit taken with respect to the continuation maps relating $L_i$ and $L_{i+1}$. For the equivalence of this model with the usual definitions involving fast growing Hamiltonians, see \cite{GPS1}*{Lemma 3.37} and \cite{SylvanFunctors}*{Proposition 2.6}. In these references, the wrapped Fukaya category is defined more precisely by localizing the Fukaya category on the continuation maps that were just mentioned. We will combine this approach to wrapped Floer cohomology with the definition of Morse--Bott Floer cohomology above, where the Lagrangians intersect cleanly and are possibly equipped with local systems and bounding cochains. \subsection{Wrapped Fukaya categories} \label{S:W} We will consider several versions of the Fukaya $A_\infty$-category of $T^*S^n$. Recall that $R$ is either $\mathbb{Z}$ or $\mathbb{C}$. 
\begin{itemize} \item $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{Z})$ is a category whose objects are either the $F_\eta$ from Definition \ref{D:Lagr F} or compact oriented exact Lagrangians. When $n=3$, we also include the objects $N_\eta$ from Definition \ref{D:Lagr N}. Objects are equipped with $\mathbb{Z}$-gradings and spin structures. Morphism spaces are wrapped Floer cochain complexes with coefficients in $\mathbb{Z}$. The differential and higher $A_\infty$-operations count rigid pearly trees, without keeping track of areas (which can be thought of as setting $T=1$ in the Novikov field $\mathbb{K}_\mathbb{Z}$). In \cite{AbouzaidCotangentFiber}, it is shown that every $F_\eta$ (which is Hamiltonian isotopic to a cotangent fiber, by Lemma \ref{L:L_C and F_eta}) generates $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{Z})$. \item $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{K}_R)$ has the same objects as $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{Z})$. The difference is that the morphism spaces are now wrapped Floer cochain complexes {\em with coefficients in $\mathbb{K}_R$}, to keep track of the symplectic areas of the disks in the pearly trees that contribute to the $A_\infty$ operations. \item $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}(T^*S^n;\mathbb{K}_R)$ is obtained from $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{K}_R)$ by collapsing the $\mathbb{Z}$-gradings to $\mathbb{Z}/2\mathbb{Z}$-gradings. If $n$ is odd, allow also objects of the form $(S^n,b_\alpha)$, where $S^n$ is the zero section and $b_\alpha=\alpha [pt]$ is a bounding cochain with $\alpha \in \mathbb{K}_{R,0}$ and $[pt] \in H^n(S^n;\mathbb{K}_R)$. See Remark \ref{R:alpha in K_0} below for why we impose $\alpha\in \mathbb{K}_{R,0}$. We have implicitly chosen a perfect Morse function on $S^n$, and $[pt]$ is given by the minimum of that function (the maximum yields the unit in the $A_\infty$-algebra of $S^n$). Since $S^n$ bounds no disks, it is clear that all the summands in \eqref{MC} vanish for $b=b_\alpha$. 
If $n$ is even, we want to allow instead objects corresponding to bounding cochains in $S^n\oplus S^n[1]$ (sum and shift of objects is allowed in the additive enlargement of the Fukaya category). We implement this by equipping $S^n$ with the trivial graded $\mathbb{K}_R$-bundle $E:=\mathbb{K}_R\oplus \mathbb{K}_R[1]$, and bounding cochains $b_{\alpha,\beta}\in H^{odd}(S^n;\operatorname{End}(E))$ of the form $$ b_{\alpha,\beta} \coloneqq \begin{pmatrix} 0 & \beta \\ \alpha & 0 \end{pmatrix}_{[pt]}, $$ where $\alpha,\beta\in \mathbb{K}_{R,0}$ and the matrix represents an endomorphism of the fiber of $E$ at the minimum of the auxiliary perfect Morse function on $S^n$. \item $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$ is an extension of $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}(T^*S^n;\mathbb{K}_R)$, allowing all closed monotone Lagrangians. The objects are equipped with orientations and spin structures, and are $\mathbb{Z}/2\mathbb{Z}$-graded. We also equip monotone Lagrangians with unitary rank 1 local systems over $\mathbb{K}_R$. The construction of the monotone wrapped Fukaya category is given in \cite{RitterSmith}. Their results also imply that the monotone wrapped Fukaya category of a cotangent bundle is generated by a cotangent fiber. See also \cite{SheridanFano} for a definition of the monotone Fukaya category in a closed setting. \item $\mathcal{F}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$ is the full subcategory of $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$ containing only those objects whose underlying Lagrangians are closed. \end{itemize} It is an important fact that Hamiltonian isotopies give rise to isomorphic objects in all these categories; in the presence of bounding cochains, this means that if $b$ is a bounding cochain on $L$, and $L'$ is Hamiltonian isotopic to $L$, then there is a bounding cochain $b'$ on $L'$ so that the two corresponding objects of the Fukaya category are isomorphic. 
\begin{remark} \label{R:alpha in K_0} If we equip Lagrangians with bounding cochains valued in the maximal ideal $\mathbb{K}_{R,+}$ of $\mathbb{K}_{R,0}$, then we are guaranteed convergence of all the $A_\infty$-operations deformed by such bounding cochains. In our case, since the degree of $[pt]\in H^*(S^n;\mathbb{Z})$ is $n>1$, we could in fact allow bounding cochains $\alpha [pt]$ for arbitrary $\alpha\in \mathbb{K}_R$ in the category of exact Lagrangians $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}(T^*S^n;\mathbb{K}_R)$. Nevertheless, we would run into convergence issues when taking morphisms with monotone Lagrangians in $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$, which is why we restrict to bounding cochains with coefficients in $\mathbb{K}_{R,0}$. With minor modifications to our arguments, we could also have equipped all objects in $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$ with finite rank unitary local systems and suitable bounding cochains. \end{remark} Observe that we can define several functors between these categories: \begin{itemize} \item $\mathcal G_1 \colon \mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{Z}) \to \mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{K}_R)$ is the identity on objects. Fix a primitive $f_L$ for every exact Lagrangian $L$. Given exact Lagrangians $L_0,L_1$ in $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{Z})$, map $x\in CW^*(L_0,L_1;\mathbb{Z})$ to $T^{f_1(x)-f_0(x)}x \in CW^*(L_0,L_1;\mathbb{K}_R)$, where $f_i :=f_{L_i}$. If $u$ is a pearly tree contributing to $\mu^k$ in $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{Z})$, then the contribution of $u$ to $\mu^k$ in $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{K}_R)$ is weighted by the factor $T^{\int_{D^2} u^*\omega}$, where the integral is over all the holomorphic disks in the pearly tree. 
The functor $\mathcal G_1$ depends on the choices of primitives $f_L$, but different choices yield isomorphic functors (we could eliminate this choice by incorporating the primitives in the definition of objects of the exact Fukaya category). \item $\mathcal G_2 \colon \mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{K}_R) \to \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}(T^*S^n;\mathbb{K}_R)$ is given by collapsing the $\mathbb{Z}$-grading to a $\mathbb{Z}/2\mathbb{Z}$-grading, followed by inclusion of objects. \item $\mathcal G_3 \colon \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}(T^*S^n;\mathbb{K}_R) \to \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$ is given by inclusion of objects, as are $\mathcal G_4 \colon \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_\mathbb{Z}) \to \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K})$ (recall that $\mathbb{K}=\mathbb{K}_\mathbb{C}$) and $\mathcal G_5 \colon \mathcal{F}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_\mathbb{Z}) \to \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K})$. \end{itemize} \begin{remark} Let $L$ be a monotone Lagrangian. A unitary local system $\xi$ on the trivial $\mathbb{K}_R$-bundle over $L$ can be specified by a homomorphism $$ \hol_\xi : H_1(L;\mathbb{Z}) \to U_{\mathbb{K}_R}. $$ If, in the definition \eqref{disk potential} of the disk potential $W_L$, we replace $x^{\partial u}$ with $\hol_\xi(\partial u)$, then we get an element of $\mathbb{K}_R$ that we denote by $W_L(\xi)$. When defining the monotone category $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$, one can only take morphisms between objects $(L_1,\xi_1)$ and $(L_2,\xi_2)$ if $W_{L_1}(\xi_1)=W_{L_2}(\xi_2)$, see \cite{OhMonotoneI}. This does not impose an additional constraint in our case, due to the following result. It can be interpreted as saying that the monotone Fukaya category of $T^*S^n$ is unobstructed. 
\end{remark} \begin{lemma} \label{L:vanishing potential} Let $L\subset T^*S^n$ be a compact monotone Lagrangian with a unitary local system $\xi$ on a trivial line bundle. Write $\mathcal L = (L,\xi)$. If $HF^*(\mathcal L,\mathcal L;\mathbb{K}_R) \neq 0$, then $W_L(\xi)=0$. \end{lemma} This follows from \cite{RitterSmith}*{Theorem 3.2}, according to which the disk potentials of monotone Lagrangians in $(M,\omega)$ with nontrivial Floer cohomology are eigenvalues of quantum multiplication by $c_1$ on the quantum cohomology of $(M,\omega)$. Since $c_1$ of the total space of $T^*S^n$ vanishes, the lemma follows. \begin{remark} \label{R:crit pts of potentials} Let $L^n$ be a monotone Lagrangian torus with disk potential $W_L$. The critical points of $W_L$ in $(U_{\mathbb{K}_R})^n$ correspond to the rank 1 unitary local systems $\xi$ on the trivial $\mathbb{K}_R$-line bundle over $L$ for which $HF^*(\mathcal L,\mathcal L;\mathbb{K}_R)\neq 0$, where $\mathcal L = (L,\xi)$, see \cite{SheridanFano}*{Proposition 4.2}. This is to say that $\mathcal L$ is non-trivial in $\mathcal F^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$. Recall from Lemma \ref{L:disk potential L_C} that Lagrangian tori $L_C\subset T^*S^2$ have disk potential $W_1=x_1(1+x_2)^2$. The critical locus of this potential is given by the condition $x_2=-1$. Recall also that Lemma \ref{L:disk potential T_C} says that the disk potential of a Lagrangian torus $T_C\subset T^*S^3$ is $W_2=x_1(1+x_2)(1+x_3)$, whose critical locus is given by $x_2=x_3=-1$. Observe that, for both tori $L_C$ and $T_C$, the disk potentials vanish on their critical points, which is compatible with Lemma \ref{L:vanishing potential}. \end{remark} \subsection{Yoneda functors} \label{SS:Yoneda} In this section we will be working over the field $\mathbb{K}_\mathbb{C} = \mathbb{K}$, since we will use some formality results from Section \ref{S:Formality algebra}. 
Let $\mathcal{A}_{\mathbb{K}} := CW^*(F,F;\mathbb{K})$ be the $A_\infty$-algebra of a cotangent fiber in $T^*S^n$, with $n\geq 2$, and let $\mmod^{A_\infty}(\mathcal{A}_{\mathbb{K}})$ be the differential $\mathbb{Z}/2\mathbb{Z}$-graded category of right $A_\infty$-modules over $\mathcal{A}_{\mathbb{K}}$. Given two objects $\mathcal{M}$ and $\mathcal{M}'$ in $\mmod^{A_\infty}(\mathcal{A}_{\mathbb{K}})$, the morphism space $\hom_{\mmod^{A_\infty}(\mathcal{A}_{\mathbb{K}})}(\mathcal{M},\mathcal{M}')$ is a chain complex computing $\Ext_{\mathcal{A}_{\mathbb{K}}}^*(\mathcal{M},\mathcal{M}')$, see \cite{SeidelBook}*{Remark 2.15}. There is a Yoneda functor \begin{align*} \mathcal{Y} : \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}) &\to \mmod^{A_\infty}(\mathcal{A}_{\mathbb{K}}) \\ \mathcal L & \mapsto CW^*(F,\mathcal L) \end{align*} which restricts to a functor $$ \mathcal{Y}_c : \mathcal{F}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}) \to \mmod^{A_\infty}_{pr}(\mathcal{A}_{\mathbb{K}}), $$ where $\mmod^{A_\infty}_{pr}(\mathcal{A}_{\mathbb{K}}) \subset \mmod^{A_\infty}(\mathcal{A}_{\mathbb{K}})$ is the subcategory of {\em proper modules} $\mathcal{M}$, such that $H^*(\mathcal{M})$ is finite dimensional over ${\mathbb{K}}$ (the subscript in $\mathcal{Y}_c$ stands for `compact'). Now, let $A_{\mathbb{K}} := H^*(\mathcal{A}_{\mathbb{K}})$ be the cohomology algebra of $\mathcal{A}_{\mathbb{K}}$. Let $\mmod(A_{\mathbb{K}})$ be the $\mathbb{Z}/2\mathbb{Z}$-graded category of right $A_{\mathbb{K}}$-modules, such that morphism spaces are $\Ext_{A_{\mathbb{K}}}^*$ groups (respecting the $\mathbb{Z}/2\mathbb{Z}$-gradings). 
There is a functor \begin{align*} H:\mmod^{A_\infty}(\mathcal{A}_{\mathbb{K}}) &\to \mmod(A_{\mathbb{K}}) \\ \mathcal{M} & \mapsto H^*(\mathcal{M}) \end{align*} which restricts to $$ H_c:\mmod^{A_\infty}_{pr}(\mathcal{A}_{\mathbb{K}}) \to \mmod_{pr}(A_{\mathbb{K}}), $$ where $\mmod_{pr}(A_{\mathbb{K}}) \subset \mmod(A_{\mathbb{K}})$ is the subcategory of finite dimensional $\mathbb{Z}/2\mathbb{Z}$-graded modules over $A_{\mathbb{K}}$. Proposition \ref{P:N split-generates} below implies that the functor $\mathcal{Y}$ (hence also $\mathcal{Y}_c$) is cohomologically full and faithful. According to Corollary \ref{C:mod(A) is formal} below, $H$ (hence also $H_c$) is a quasi-equivalence of categories. We conclude the following. \begin{proposition} \label{C:Yoneda ff} The composition \begin{align*} Y:=H\circ \mathcal{Y} : \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}) &\to \mmod(A_{\mathbb{K}}) \\ \mathcal L & \mapsto HW^*(F,\mathcal L) \end{align*} and its restriction $$ Y_c:=H_c\circ \mathcal{Y}_c : \mathcal{F}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}) \to \mmod_{pr}(A_{\mathbb{K}}) $$ are cohomologically full and faithful embeddings. \qed \end{proposition} \section{Floer cohomology computations} \label{S:HF computations} \subsection{The Lagrangians $F$ and $N$} Recall from Section \ref{S:construct Lagrangians} that the Lagrangian lifts $F_\eta\subset T^*S^n$ and $N_\eta\subset T^*S^3$ are Hamiltonian-isotopic to, respectively, a cotangent fiber (which we denote by $F$) and the conormal Lagrangian of an unknot in $S^3$ (which we denote by $N$). \begin{proposition} \label{P:N split-generates} The cotangent fiber $F$ generates $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{Z})$, $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{K}_R)$ and $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$.
When $n=3$, the Lagrangian $N$ split-generates $\mathcal{W}^\mathbb{Z}(T^*S^3;\mathbb{Z})$, $\mathcal{W}^\mathbb{Z}(T^*S^3;\mathbb{K}_R)$ and $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^3;\mathbb{K}_R)$. \end{proposition} \begin{proof} The fact that a cotangent fiber generates $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{Z})$ is proven in \cite{AbouzaidCotangentFiber}. The result follows for $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{K}_R)$. The fact that a cotangent fiber generates $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$ follows from \cite{RitterSmith}. The previous paragraph and Lemma \ref{HW(N)} below imply the result for $N$. \end{proof} Recall that $HW^*(F,F;\mathbb{Z}) \cong \mathbb{Z}[u]$, where $\deg(u)=1-n$, which follows from \cite{AbouzaidBasedLoops}. Denote this ring by $A_\mathbb{Z}$. Also denote by $F_0$ a cotangent fiber corresponding to a lift of a path through the critical value $1$ of $\pi_n$, and by $F'$ one that is a lift of a path through $-1$, see Figure \ref{F_w_fig}. Since $F_0$ and $F'$ are Hamiltonian-isotopic, we have \begin{equation} \label{HF(F,F')} HW^*(F_0,F';\mathbb{Z}) \cong A_\mathbb{Z}. \end{equation} On the other hand, $$ HW^*(F_0,F';\mathbb{Z}) \cong \lim_{i\to \infty} HF^*(F_i,F';\mathbb{Z}), $$ where the $F_i$ are lifts of the paths $\eta_i$ illustrated in Figure \ref{F_w_fig}. In our Morse--Bott model, the cochain complex for $CF^*(F_i,F';\mathbb{Z})$ is described, as a graded abelian group, as $$ \bigoplus_{k=0}^i H^{*+(1-n)(1+2k)}(S^{n-1};\mathbb{Z}). $$ Equation \eqref{HF(F,F')} and the fact that, in this Morse--Bott model, the continuation maps $$ CF^*(F_i,F';\mathbb{Z}) \to CF^*(F_{i+1},F';\mathbb{Z}) $$ are inclusions, imply that the differentials vanish on these chain complexes. In particular, we have the following. 
\begin{lemma} \label{u in F} Up to a factor $\pm 1$, the unit $e$, resp.~the generator $u$, in $A_\mathbb{Z}$ is represented by the minimum, resp.~maximum, of the auxiliary Morse function on $F_0\cap F' \cong S^{n-1}$, thought of as a class in $HF^{0}(F_0,F';\mathbb{Z})$, resp.~$HF^{1-n}(F_0,F';\mathbb{Z})$. \end{lemma} We now consider the Lagrangian $N$. \begin{remark} In the following, we use the cohomological degree shift notation, where $[k]$ corresponds to a shift by $-k$. \end{remark} \begin{proposition}\label{HW(N)} The Lagrangian $N \subset T^*S^3$ is quasi-isomorphic to $F \oplus F[1]$ in $\mathcal{W}^\mathbb{Z}(T^*S^3;\mathbb{Z})$. In particular, $HW^*(N,N;\mathbb{Z})$ is isomorphic to the graded matrix algebra $$ B_\mathbb{Z}:=\begin{pmatrix} A_\mathbb{Z} & A_\mathbb{Z}[1] \\ A_\mathbb{Z}[-1] & A_\mathbb{Z} \end{pmatrix}. $$ Hence, $HW^*(N,N;\mathbb{K}_R)$ is isomorphic to $B_\mathbb{Z}\otimes_\mathbb{Z} \mathbb{K}_R$ for any commutative ring $R$. \end{proposition} \begin{proof} Recall the construction, in \cite{AbouzaidBasedLoops}, of a cohomologically fully faithful $A_\infty$-functor $$ \mathcal F : \mathcal W^\mathbb{Z}(T^*Q;\mathbb{Z}) \to \Tw(\mathcal P(Q)), $$ where the target is a category of twisted complexes on a Pontryagin category $\mathcal P(Q)$ of a closed $\operatorname{Spin}$ manifold $Q$. Objects in $\mathcal P(Q)$ are points in $Q$, with $\hom_{\mathcal P(Q)}(q_1,q_2) = C_{-*}(\Omega_{q_1,q_2}(Q);\mathbb{Z})$ and composition given by the Pontryagin product of loops. Here, $\Omega_{p,p'}(Q)$ is the space of paths in $Q$ that start at $p$ and end at $p'$. Write also $\Omega_{p}$ for $\Omega_{p,p}$. Given an object in $\mathcal W^\mathbb{Z}(T^*Q;\mathbb{Z})$, which is a $\mathbb{Z}$-graded exact $\operatorname{Spin}$ Lagrangian $L$ in $T^*Q$, we can assume (up to a Hamiltonian isotopy) that $L$ intersects the zero-section transversely at the points $q_1,\ldots,q_m$.
The image of $L$ under $\mathcal F$ is a twisted complex supported on a direct sum of grading shifts of the $q_i$. The differential in the twisted complex is constructed from moduli spaces of Floer strips between $Q$ and $L$. Let us use this functor in our setting. The Lagrangian $N$ intersects $S^3$ cleanly along a copy of $S^1$. One can deform $N$ by a Hamiltonian isotopy so that it intersects $S^3$ transversely at exactly two points $q_1$ and $q_2$, with consecutive indices. Hence, $\mathcal F(N)$ is a twisted complex supported on the sum of shifts of $q_1$ and of $q_2$. The differential on this twisted complex is given by a cycle in $C_{0}(\Omega_{q_1,q_2}(S^3);\mathbb{Z})$. Homologous cycles yield quasi-isomorphic twisted complexes, so the differential on $\mathcal F(N)$ is determined by an element $x\in H_{0}(\Omega_{q_1,q_2}(S^3);\mathbb{Z}) \cong \mathbb{Z}$. Given $q\in S^3$ and identifying $H_{-*}(\Omega_{q}(S^3);\mathbb{Z})$ with $HW^*(F_q,F_q;\mathbb{Z})$, we can say that $N$ is quasi-isomorphic to $\Cone(F_q\stackrel{x}{\to} F_q)$ in a category of twisted complexes over $\mathcal W^\mathbb{Z}(T^*S^3;\mathbb{Z})$, where $x$ is now thought of as a class in $HW^0(F_q,F_q;\mathbb{Z}) \cong \mathbb{Z}$. In particular, up to a degree shift, \begin{align*} HF^*(N,S^3;\mathbb{Z}) &\cong H^*\left(\Cone(HF^*(F_q,S^3;\mathbb{Z}) \stackrel{x}{\to} HF^*(F_q,S^3;\mathbb{Z})) \right) \cong \\ &\cong H^*\left(\Cone(\mathbb{Z} \stackrel{x}{\to} \mathbb{Z})\right) \cong \begin{cases} \mathbb{Z}[1]\oplus \mathbb{Z} & \text{ if } x=0 \\ (\mathbb{Z}/x \mathbb{Z}) & \text{ otherwise} \end{cases}. \end{align*} On the other hand, one can adapt \cite{PozniakThesis}*{Proposition 3.4.6} to Floer cohomology with $\mathbb{Z}$-coefficients (instead of $\mathbb{Z}/2\mathbb{Z}$), and conclude that $$ HF^*(N,S^3;\mathbb{Z}) \cong H^*(S^1;\mathbb{Z}), $$ up to a degree shift.
Therefore, we have that $x=0$, the differential in the twisted complex $\mathcal F(N)$ is trivial, and $N$ is quasi-isomorphic to $F\oplus F[1]$, as wanted. \end{proof} \begin{remark} Strictly speaking, the argument above only implies that $N=F\oplus F[1]$ up to a global degree shift. However, this will be enough for our purposes, since the main application of the previous proposition will be in Lemma \ref{HF(N,T)}, which is about the $\mathbb{Z}/2\mathbb{Z}$-graded monotone Fukaya category. \end{remark} The ring $B_\mathbb{Z}$ of endomorphisms of $F\oplus F[1]$ can be represented pictorially as follows: \tikzset{node distance=2cm, auto} \begin{center} \begin{tikzpicture} \node (A) {$F$}; \node (B) [right of=A] {$F[1]$}; \draw [->,out=45,in=135,looseness=0.75] (A.north) to node[above]{$% \begin{pmatrix} 0 & * \\ 0 & 0 \end{pmatrix}% $} (B.north); \draw [->,out=-135,in=-45,looseness=0.75] (B.south) to node[below]{$% \begin{pmatrix} 0 & 0 \\ * & 0 \end{pmatrix}% $} (A.south); \path (A) edge [->,in=160, out = 200, loop] node[left] {$% \begin{pmatrix} * & 0 \\ 0 & 0 \end{pmatrix}% $} (A); \path (B) edge [->,in=340, out = 15, loop] node[right] {$% \begin{pmatrix} 0 & 0 \\ 0 & * \end{pmatrix}% $} (B); \end{tikzpicture} \end{center} Define $$ e_1 := \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \, e_2 := \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \, e_{21} := \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \, e_{12} := \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}. $$ Note that $$ |e_1| = 0 = |e_2|, \, |e_{21}| = 1, \, |e_{12}| = -1. $$ As a graded free abelian group, $B_\mathbb{Z}$ has generators in low degrees given by $$ \begin{tabular}{c|c|c|c|c|c} degree & 1 & 0 & $-1$ & $-2$ & $-3$ \\ \hline generator & $e_{21}$ & $e_1, e_2$ & $e_{12}, u e_{21}$ & $u e_1, u e_2$ & $u e_{12}, u^2 e_{21}$ \end{tabular} $$ \ In a manner similar to what we did above for $F$, let us give a more explicit description of the Morse--Bott wrapped Floer cohomology of $N$. 
Denote by $N'$ the lift of a path $\eta'$ through $-1$ and by $N_i$ the lift of a path $\eta_i$ through $1$ that winds $i$ times around the critical values of the Morse--Bott Lefschetz fibration, see Figures \ref{F_w_fig} and \ref{N0N1_fig}. By Proposition \ref{HW(N)}, we know that $$ B_\mathbb{Z}\cong HW^*(N_0,N';\mathbb{Z}) \cong \lim_{i\to \infty} HF^*(N_i,N';\mathbb{Z}). $$ The Morse--Bott Floer cochain complex for $CF^*(N_i,N';\mathbb{Z})$ with $i\geq 0$ is given, as a graded abelian group, by $$ \bigoplus_{k=0}^i H^{*-2k-1}(T^2;\mathbb{Z}). $$ Similarly to what we saw above for $F$, the continuation maps $$ CF^*(N_i,N';\mathbb{Z}) \to CF^*(N_{i+1},N';\mathbb{Z}) $$ are inclusions and the differentials vanish on these chain complexes. \begin{figure} \begin{center} \def\svgwidth{0.3\textwidth} \input{N0N1.pdf_tex} \end{center} \caption{$N_0$ and $N_1$} \label{N0N1_fig} \end{figure} We also have $$ B_\mathbb{Z}\cong HW^*(N_0,N_0;\mathbb{Z}) \cong \lim_{i\to \infty} HF^*(N_i,N_0;\mathbb{Z}). $$ For $i>0$, the Morse--Bott Floer cochain complex for $CF^*(N_i,N_0;\mathbb{Z})$ is \begin{equation} \label{HF(N1,N0)} H^*(S^1;\mathbb{Z})\oplus \bigoplus_{k=1}^i H^{*-2k}(T^2;\mathbb{Z}) \end{equation} and degree considerations imply again that the continuation maps are inclusions and that the differentials vanish. As we saw after Proposition \ref{HW(N)}, the free abelian group $HW^*(N,N;\mathbb{Z})$ has two generators in degree $-2$, denoted by $u e_1$ and $u e_2$. \begin{remark} At several points in this paper, including the proof of the next result, we will explicitly compute certain products $\mu^2$.
Since we will always be in a position where we can compute the product on cohomology, and since the relevant holomorphic curves will always project to triangles in $\mathbb{C}$ over which the Lefschetz fibrations of interest are trivial, it will suffice to make all the calculations using a product complex structure, for which it will be evident that the relevant holomorphic curves are regular. \end{remark} \begin{lemma} \phantomsection \label{ue geometric} \begin{enumerate} \item Up to signs, the class of the unit $e = e_1 + e_2\in HW^0(N_1,N_0;\mathbb{Z})$ is represented by the fundamental class of $S^1$ in \eqref{HF(N1,N0)}, with $i=1$. \item Up to signs, the class $u e_1 + u e_2 = u e\in HW^{-2}(N_1,N_0;\mathbb{Z})$ is represented by the fundamental class of $T^2$ in \eqref{HF(N1,N0)}, with $i=1$. \end{enumerate} \end{lemma} \begin{proof} The statement in (1) follows from the fact that the canonical map $H^*(S^1;\mathbb{Z}) \to HW^*(N_0,N_0;\mathbb{Z})$ is a ring map, so it preserves units. For (2), it is convenient to also consider the Lagrangian $N'$. The product $\mu^2$ gives a map $$ HF^0(N_0,N';\mathbb{Z}) \otimes HF^{-2}(N_1,N_0;\mathbb{Z}) \to HF^{-2}(N_1,N';\mathbb{Z}). $$ Figure \ref{N0N1_fig} will be useful to understand the map \begin{equation} \label{0 to -2} HF^0(N_0,N';\mathbb{Z}) \to HF^{-2}(N_1,N';\mathbb{Z}) \end{equation} given by right multiplication with the fundamental class of $T^2$ in $HF^{-2}(N_1,N_0;\mathbb{Z})$, which lives over the intersection point $y$ in Figure \ref{N0N1_fig}. Note that $HF^0(N_0,N';\mathbb{Z})\cong HW^0(N_0,N';\mathbb{Z}) \cong \mathbb{Z}^2$ lives over the point $x$ in the figure, and that $HF^{-2}(N_1,N';\mathbb{Z})\cong HW^{-2}(N_1,N';\mathbb{Z}) \cong \mathbb{Z}^2$ lives over the point $z$. The product can now be computed by lifting the shaded triangle. Since the fibration is trivial over this triangle, there is a $T^2$-family of such lifts, which implies that the map \eqref{0 to -2} is an isomorphism.
Since we are working over $\mathbb{Z}$, this means that the fundamental class of $T^2$ represents $\pm u e_1 \pm u e_2$, and we can conclude that, up to signs, it can be identified with $ue$, as wanted. \end{proof} \subsection{Computations in $T^*S^n$} We begin by assuming that {\bf $n$ is odd}. The wrapped Fukaya category $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}(T^*S^n;\mathbb{K}_R)$ contains objects of the form $(S^n,\alpha[pt])$, where $\alpha \in \mathbb{K}_{R,0}$ and $[pt] \in H^n(S^n;\mathbb{K}_R)$ is the class of a point. We want to understand how a cotangent fiber $F$ acts on such an object. Let $F_i$ and $F'$ be as in the previous section. Given $a \in HF^*(F_i,F';\mathbb{K}_R)$ and $X\in \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}(T^*S^n;\mathbb{K}_R)$, define a map \begin{align*} \psi_a^X \colon HF^*(F',X;\mathbb{K}_R) & \to HF^{*+\deg(a)}(F_i,X;\mathbb{K}_R) \\ x & \mapsto \mu^2(x,a). \end{align*} \begin{figure} \begin{center} \def\svgwidth{0.4\textwidth} \input{HF_F_Sn.pdf_tex} \end{center} \caption{The chain complexes $CF^*(F_0,(S^n, \alpha[pt]))$ and $CF^*(F',(S^n, \alpha[pt]))$} \label{HF(F,Sn)_fig} \end{figure} \begin{lemma} \phantomsection \label{HF(F,Sn) odd} \begin{enumerate} \item There is an isomorphism \label{HF odd} $$ HF^*(F,(S^n, \alpha[pt]);\mathbb{K}_R) \cong \mathbb{K}_R. $$ \item \label{u act on Sn odd} Using the identification in Lemma \ref{u in F} of $e\in HF^{0}(F_0,F';\mathbb{Z})$ with the class of a point in $S^{n-1}$, and of $u\in HF^{1-n}(F_0,F';\mathbb{Z})$ with the fundamental class of $S^{n-1}$, we have $$\psi_{u}^{(S^n,\alpha[pt])} = \pm\alpha \, \psi_{e}^{(S^n,\alpha[pt])}.$$ \end{enumerate} \end{lemma} \begin{proof} As we saw, in the Lefschetz fibration description $\pi_n : X_n\to \mathbb{C}$ of $T^*S^n$ the zero section $S^n$ is the Lagrangian lift of the interval $[-1,1] \subset \mathbb{C}$.
For part \eqref{HF odd}, we can replace a cotangent fiber $F$ with its Hamiltonian-isotopic Lagrangians $F_0$ and $F'$, as in the previous section. Recall that these are lifts of paths out of the critical values that intersect the interval $[-1,1] \subset \mathbb{C}$ transversely and only at one of the endpoints of the interval. Then, $CF^*(F_0,(S^n, \alpha[pt]);\mathbb{K}_R)$ has a single generator in degree 0, and the result follows. The same is true replacing $F_0$ with $F'$. Let us give an alternative argument, with an eye towards part \eqref{u act on Sn odd}. This time, let $F_0$ and $F'$ be lifts of paths that intersect the interior of $[-1,1]$, as in Figure \ref{HF(F,Sn)_fig}. We start with $F_0$. The chain complex $CF^*(F_0,(S^n, \alpha[pt]);\mathbb{K}_R)$ now has generators $x,y,z$ in degrees $-n$, $1-n$ and 0, respectively ($y$ is the maximum and $z$ the minimum of an auxiliary Morse function on the component of $S^n \cap F_0$ that is diffeomorphic to $S^{n-1}$), see Figure \ref{HF(F,Sn)_fig}. The fact that $\partial x$ is of the form $\pm T^A y$, where $A$ is the $\sigma$-area of the lightly shaded bigon (recall the definition of $\sigma$ in Section \ref{S:construct Lagrangians}), follows from the fact that the algebraic count of lifts of the shaded strip is $\pm 1$. That can be seen using the Hamiltonian invariance of $HF^*(\mathbb{R}^n,i\mathbb{R}^n)$ in $\mathbb{C}^n$, which is of rank 1. It follows that the cohomology is of rank 1, generated by $z$. There is a similar argument for $F'$ instead of $F_0$, with $z'$ now being the maximum of an auxiliary Morse function on $S^{n-1}$. To prove \eqref{u act on Sn odd}, we use again the representation of $F_0$ and $F'$ in Figure \ref{HF(F,Sn)_fig}. The dark triangle in Figure \ref{HF(F,Sn)_fig} does not contain critical values of $\pi_n$, so the restriction of the Lefschetz fibration to that triangle is trivial. 
The triangle hence lifts to an $S^{n-1}$-family of holomorphic triangles with the appropriate Lagrangian boundary conditions. This family can be made rigid by using $e\in CF^*(F_0,F';\mathbb{K}_R)$ (represented by a minimum) as an input in $$ \psi_{e}^{(S^n,\alpha[pt])}(z') = \mu^2(z',e)= \pm T^B z, $$ where $B$ is the $\sigma$-area of the dark triangle. Similarly, the family of lifted triangles can be rigidified by using the bounding cochain $\alpha [pt]$ as an input in $$ \psi_{u}^{(S^n,\alpha[pt])}(z') = \mu^3(\alpha [pt],z',u)=\pm T^B \alpha z. $$ The result now follows. \end{proof} Consider now the case of {\bf even $n$}. Recall that we equip $S^n$ with the trivial rank 2 vector bundle of mixed degree $E=\mathbb{K}_R\oplus \mathbb{K}_R[1]$, and with bounding cochains of the form $b_{\alpha,\beta} = \begin{pmatrix} 0 & \beta \\ \alpha & 0 \end{pmatrix}_{[pt]}$, such that $\alpha,\beta \in \mathbb{K}_{R,0}$ and $[pt]\in H^n(S^n;\mathbb{Z})$ is represented by the minimum of a perfect Morse function on $S^n$. Let $F_0, F'$ be as before. \begin{lemma} \phantomsection \label{HF(F,Sn) even} \begin{enumerate} \item There is an isomorphism \label{HF even} $$ HF^*(F,(S^n, b_{\alpha,\beta});\mathbb{K}_R) \cong \mathbb{K}_R\oplus \mathbb{K}_R[1]. $$ \item \label{u act on Sn even} Using the identification in Lemma \ref{u in F} of $e\in HF^{0}(F_0,F';\mathbb{Z})$ with the class of a point in $S^{n-1}$, and of $u\in HF^{1-n}(F_0,F';\mathbb{Z})$ with the fundamental class of $S^{n-1}$, we have $$\psi_{u e}^{(S^n,b_{\alpha,\beta})} = \pm \begin{pmatrix} 0 & \beta \\ \alpha & 0 \end{pmatrix} \, \psi_{e}^{(S^n,b_{\alpha,\beta})}.$$ \end{enumerate} \end{lemma} \begin{proof} The proof of \eqref{HF even} is similar to the one in Lemma \ref{HF(F,Sn) odd}. One can again replace $F$ with either $F_0$ or $F'$ as in Figure \ref{HF(F,Sn)_fig}. 
We obtain a $\mathbb{K}_R$-basis $v_0,v_1$ for $HF^*(F_0,(S^n, b_{\alpha,\beta});\mathbb{K}_R)$, where $v_0,v_1$ is the standard basis for the fiber of $E=\mathbb{K}_R\oplus \mathbb{K}_R[1]$ at $z$ (the fiber minimum) indicated in Figure \ref{HF(F,Sn)_fig}. Similarly, we denote by $v'_0, v'_1$ the analogous basis for $HF^*(F',(S^n, b_{\alpha,\beta});\mathbb{K}_R)$, with $z$ replaced by $z'$ (the fiber maximum) in Figure \ref{HF(F,Sn)_fig}. The result in \eqref{u act on Sn even} follows again from the study of lifts of the dark triangle in Figure \ref{HF(F,Sn)_fig}. Once more, the lifts of the triangle can be rigidified by either taking $e$ as an input in $\mu^2$ or by inputting the bounding cochain $b_{\alpha,\beta}$ in $\mu^3$. Taking bases $v_i$ and $v'_i$ above, we get $$ \psi_{e}^{(S^n,b_{\alpha,\beta})}(v_i') = \mu^2(v_i',e)= \pm T^B v_i, $$ for $i = 0, 1$, where $B$ is the $\sigma$-area of the dark triangle. We also get $$ \psi_{u}^{(S^n,b_{\alpha,\beta})}(v_0') = \mu^3(b_{\alpha,\beta},v_0',u)=\pm T^B \alpha v_1 $$ and $$ \psi_{u}^{(S^n,b_{\alpha,\beta})}(v_1') = \mu^3(b_{\alpha,\beta},v_1',u)=\pm T^B \beta v_0, $$ as wanted. \end{proof} \begin{figure} \begin{center} \def\svgwidth{0.4\textwidth} \input{cone.pdf_tex} \end{center} \caption{$L_\tau$ as a cone on morphisms between $F_0$ and $F_1$} \label{cone_fig} \end{figure} We now consider the Lagrangians $L_\tau$, which are diffeomorphic to $S^1\times S^{n-1}$. Let $U\in U_{\mathbb{K}_R}$ be a unitary element in the Novikov field and take $\alpha := T^{-2(n-1)\tau} U^{-1}\in \mathbb{K}_R \setminus \mathbb{K}_{R,0}$. If $n>2$, write $L_\alpha$ for the Lagrangian $L_\tau$ equipped with the unitary local system $\xi$ in the trivial $\mathbb{K}_R$-bundle over $L_\tau$, such that the holonomy of $\xi$ along a loop that projects in degree 1 to the curve $C_\tau$ (recall that we think of $C_\tau$ as the curve $C$ in Figure \ref{F_w_fig}) is $U$.
If $n=2$, recall that we picked a basis $h_1,h_2$ for $H_1(L_\tau;\mathbb{Z})$ in Lemma \ref{L:disk potential L_C}, to write the disk potential of $L_\tau$. The curve $h_1$ projects in degree 1 to $C_\tau$ and $h_2$ is a fiber of $\pi|_{L_\tau}$. In Remark \ref{R:crit pts of potentials}, we observed that the Floer cohomology of $(L_\tau,\xi)$ is non-trivial precisely when $\xi$ is a local system with holonomy $-1$ around $h_2$. Write $L_\alpha$ for $(L_\tau,\xi)$, such that the holonomy of $\xi$ is $U$ around $h_1$ and $-1$ around $h_2$. If the $\sigma$-areas of the two shaded regions in Figure \ref{cone_fig} are the same, then the figure suggests that $L_\tau$ should be equivalent to the result of surgery on morphisms supported on the two connected components of the intersection $F_1\cap F_0 = \{*\}\cup S^{n-1}$. Recall that surgery on an intersection point of two Lagrangians corresponds in the Fukaya category to taking the cone on the morphism given by the intersection point, see Chapter 10 of \cite{FOOO}. This motivates the following result. \begin{lemma} \label{Ltau is cone} For the appropriate choice of $\operatorname{Spin}$ structure, $L_\alpha$ is isomorphic in $\mathcal W_{\mon}^{\mathbb{Z}/2\mathbb{Z}}(T^*S^n;\mathbb{K}_R)$ to $\Cone( u^2 - \alpha e)$, where $ u^2 - \alpha e$ is thought of as a morphism in $HW^{\rm even}(F,F)$. In particular, $$ HF^*(F, L_\alpha;\mathbb{K}_R) \cong H^*(S^{n-1};\mathbb{K}_R), $$ as $\mathbb{Z}/2\mathbb{Z}$-graded free $\mathbb{K}_R$-modules. \end{lemma} \begin{proof} \begin{figure} \begin{center} \def\svgwidth{0.8\textwidth} \input{cone1.pdf_tex} \end{center} \caption{The action of $u$ on $L_\alpha$} \label{cone1_fig} \end{figure} Given a monic polynomial $p(u) = u^d + a_{d-1} u^{d-1}+ \ldots +a_0$ in $\mathbb{K}_R[u] \cong HW^*(F,F;\mathbb{K}_R)$, the object $\Cone(p(u))$ is such that $HF^*(F,\Cone(p(u));\mathbb{K}_R)$ is a free $\mathbb{K}_R$-module of rank $d$.
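This module structure can be made explicit under the standard identification (which we assume here as a sketch) of $HF^*(F,\Cone(p(u));\mathbb{K}_R)$ with $\mathbb{K}_R[u]/(p(u))$ as a right module over $\mathbb{K}_R[u]\cong HW^*(F,F;\mathbb{K}_R)$: in the basis $1,u,\ldots,u^{d-1}$, right multiplication by $u$ acts as $$ u^i\cdot u = u^{i+1} \quad (0\leq i < d-1), \qquad u^{d-1}\cdot u = -a_0 - a_1 u - \ldots - a_{d-1} u^{d-1}. $$ In the case of interest, $p(u) = u^2 - \alpha\, e$, so $1\cdot u = u$ and $u\cdot u = \alpha\cdot 1$.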
The right action of $u$ on $HF^*(F,\Cone(p(u));\mathbb{K}_R)$ is by the transpose of the companion matrix to $p(u)$: \begin{equation*} \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{d-1} \end{pmatrix}. \end{equation*} Hence, we want to show that $HF^*(F,L_\alpha;\mathbb{K}_R)$ is a free $\mathbb{K}_R$-module of rank 2, where $u$ acts on the right as \begin{equation} \label{eq: u act on cone1} \begin{pmatrix} 0 & 1 \\ \alpha & 0 \end{pmatrix}. \end{equation} Let us represent the Hamiltonian isotopy class of $F$ by $F_0$ and by $F'$, as before. In Figure \ref{cone1_fig}, we see that $F_0 \cap L_\tau $ is diffeomorphic to $S^{n-1}$. Choosing an auxiliary perfect Morse function on this sphere, we get a chain model for $CF^*(F_0,L_\alpha;\mathbb{K}_R)$ whose generators are the minimum $m$ and the maximum $M$. We can similarly get generators $m', M'$ for $CF^*(F',L_\alpha;\mathbb{K}_R)$. We will first work with coefficients in $\mathbb{K}_\mathbb{Z}$, and then argue that the case of $\mathbb{K}_\mathbb{C}$-coefficients follows. Recall Lemma \ref{u in F}. The element $e\in CF^0(F_0,F';\mathbb{K}_\mathbb{Z})$ (the minimum in its $S^{n-1}$ fiber) acts on $CF^*(F',L_\alpha;\mathbb{K}_\mathbb{Z})$ by $$ \psi_{e}^{L_\alpha}(M') = \mu^2(M',e)= \pm T^A m, $$ by taking lifts of the shaded triangle on the left in Figure \ref{cone1_fig}. The $\sigma$-area of this triangle is $A$. Note that the Lefschetz fibration is trivial over the triangle, so it has an $S^{n-1}$-family of holomorphic lifts. The remaining contributions to right multiplication by $e$ must come from lifts of the shaded triangle on the right in Figure \ref{cone1_fig}. Since $e$ acts by an isomorphism over $\mathbb{K}_\mathbb{Z}$, we conclude that the lifts of that triangle contribute to $$ \psi_{e}^{L_\alpha}(m') = \mu^2(m',e)= \pm T^B U M, $$ where $B$ is the $\sigma$-area of that right triangle in the plane.
Note that these lifted triangles pick up holonomy $U$. The same holomorphic triangles determine the action of $e$ over $\mathbb{K}=\mathbb{K}_\mathbb{C}$, so we conclude that $\psi_{e}^{L_\alpha}$ is given by the same formulas over $\mathbb{K}$ as over $\mathbb{K}_\mathbb{Z}$. The element $u\in CF^{1-n}(F_0,F';\mathbb{K}_R)$ is represented by the maximum in the same $S^{n-1}$ fiber as $e$, and acts on $CF^*(F',L_\alpha;\mathbb{K}_R)$ by $$ \psi_{u}^{L_\alpha}(M') = \mu^2(M',u)= \pm T^A M, $$ and $$ \psi_{u}^{L_\alpha}(m') = \mu^2(m',u)= \pm T^A m. $$ In both cases, this corresponds to lifting the triangle on the left in Figure \ref{cone1_fig}. Observe that $B = A + 2(n-1)\tau$, so we can write $$ \psi_u^{L_\alpha} = \begin{pmatrix} 0 & \pm 1 \\ \pm \alpha & 0 \end{pmatrix} \psi_e^{L_\alpha}. $$ To get the signs as in \eqref{eq: u act on cone1}, we note that changing the $\operatorname{Spin}$ structure has the effect of replacing $\alpha$ by $-\alpha$. Note also that $\Cone(x) \cong \Cone (-x)$. We still need to show that $HF^*(F,L_\alpha;\mathbb{K}_R)\neq 0$. We prove that $HF^*(F',L_\alpha;\mathbb{K}_R)\neq 0$. If $n$ is odd, then this is obvious, since the indices of the generators $m',M'$ have the same parity, so the differential is zero. Observe that the case $n=2$ is addressed in Remark \ref{R:crit pts of potentials}. For general even $n$, we write $$ \mu^1(m) = \kappa_1 M, \qquad \mu^1(M) = \kappa_2 m, \qquad \mu^1(m') = \kappa_1' M', \qquad \mu^1(M') = \kappa_2' m', $$ for some $\kappa_1, \kappa_2, \kappa_1', \kappa_2' \in \mathbb{K}$.
The Leibniz rule (and the fact that $\mu^1(u)=0$) yields \begin{align*} \mu^1(\mu^2(m', u)) &= \mu^2(\mu^1(m'), u) = \kappa_1' \mu^2(M', u) = \pm \kappa_1' T^A M \\ &= \pm \mu^1 (T^A m) = \pm T^A \kappa_1 M \Longrightarrow \kappa_1 = \pm \kappa_1' \end{align*} and \begin{align*} \mu^1(\mu^2(m', e)) &= \mu^2(\mu^1(m'), e) = \kappa_1' \mu^2(M', e) = \pm \kappa_1' T^A m \\ &= \pm \mu^1 (T^B U M) = \pm T^B U \kappa_2 m \Longrightarrow T^{B-A} U \kappa_2 = \pm \kappa_1' = \pm \kappa_1. \end{align*} Therefore, $$ \mu^1\circ \mu^1(M) = \kappa_2 \mu^1(m) = \pm T^{B-A} U \kappa_2^2 M. $$ But $\mu^1\circ \mu^1=0$, because $F$ and $L_\alpha$ both have vanishing disk potential. Since $T^{B-A} U$ is invertible, we conclude that $\kappa_1=\kappa_2=0$ (and the same computations, with $M'$ as input instead of $m'$, give $\kappa_2 = \pm \kappa_2'$, so $\kappa_1'=\kappa_2'=0$ as well), and that $HF^*(F',L_\alpha;\mathbb{K}_R)\neq 0$, as wanted. \end{proof} As we saw, Lemma \ref{Ltau is cone} can be rephrased as saying that $HF^*(F,L_\alpha;\mathbb{K}_R)$ is isomorphic to $\mathbb{K}_R^2$, if $n$ is odd, and to $\mathbb{K}_R\oplus \mathbb{K}_R[1]$, if $n$ is even, and that the action of $u$ is represented by the matrix \eqref{eq: u act on cone1}. To relate this with the generation results for modules that will be discussed below, it is convenient to restrict our attention to $\mathbb{K}=\mathbb{K}_\mathbb{C}$, which is an algebraically closed field. Since the eigenvalues of the matrix \eqref{eq: u act on cone1} are $\pm \sqrt{\alpha}$ (the two square roots of $\alpha$ in $\mathbb{K}$), we conclude the following. \begin{corollary} \label{u act on Ltau} If $n$ is odd, then $HF^*(F',L_\alpha;\mathbb{K})$ and $HF^*(F_0,L_\alpha;\mathbb{K})$ have bases in which $\psi_{u e}^{L_\alpha} = \begin{pmatrix} \sqrt{\alpha} & 0 \\ 0 & - \sqrt{\alpha} \end{pmatrix} \, \psi_{e}^{L_\alpha}$. \end{corollary} We are now ready to prove the following result, up to Corollary \ref{S generate} below.
\begin{theorem} \label{split generators Sn} If $n$ is odd, then the collection of right $A_{\mathbb{K}}$-modules $$ \{HF^*(F,(S^n,\alpha[pt]);\mathbb{K})\}_{\val(\alpha) \geq 0} \cup \{HF^*(F, L_\alpha;\mathbb{K})\}_{\val(\alpha) < 0} $$ split-generates the category $\mmod_{pr}(A_{\mathbb{K}})$. If $n$ is even, then the same holds if we replace $(S^n,\alpha[pt])$ with $(S^n,b_{\alpha,1})$, as in Lemma \ref{HF(F,Sn) even}. \end{theorem} \begin{proof} In the $n$ odd case, if $\val(\alpha) \geq 0$, then Lemma \ref{HF(F,Sn) odd} implies that $$ HF^*(F_0,(S^n,\alpha[pt]);\mathbb{K}) \cong S_{\pm\alpha} $$ as right $A_{\mathbb{K}}$-modules, where $S_\alpha$ is the 1-dimensional (over $\mathbb{K}$) right $A_{\mathbb{K}}$-module on which $u\in A_{\mathbb{K}}$ acts as multiplication by $\alpha$ (as in Lemma \ref{L:triangulated closure} below). If $\val(\alpha) < 0$, Lemma \ref{Ltau is cone} and Corollary \ref{u act on Ltau} imply that $$ HF^*(F, L_\alpha;\mathbb{K}) \cong S_{\sqrt{\alpha}} \oplus S_{-\sqrt{\alpha}} $$ as right $A_{\mathbb{K}}$-modules. Corollary \ref{S generate} below now implies the result when $n$ is odd. The case of $n$ even is analogous, where this time we apply Lemma \ref{HF(F,Sn) even} instead of Lemma \ref{HF(F,Sn) odd} and Corollary \ref{S tilde generate} instead of Corollary \ref{S generate}. \end{proof} The following is a version of Theorem \ref{generate F} from the Introduction. \begin{corollary} \label{generate Fuk Sn} The category $\mathcal{F}^{\mathbb{Z}/2\mathbb{Z}}_{mon}(T^*S^n;\mathbb{K})$ is split-generated by the collection of objects $\{(S^n,\alpha[pt])\}_{\val(\alpha) \geq 0} \cup \{L_\alpha\}_{\val(\alpha) < 0}$ when $n$ is odd. When $n$ is even, the same is true if we replace $(S^n,\alpha[pt])$ with $(S^n,b_{\alpha,1})$. \end{corollary} \begin{proof} This follows from Theorem \ref{split generators Sn} and Proposition \ref{C:Yoneda ff}. 
\end{proof} \subsection{Computations in $T^*S^3$} We now want to study how $u\in HW^*(F,F;\mathbb{K}_R)\cong \mathbb{K}_R[u]= A_{\mathbb{K}_R}$ acts on the tori $T^3_\tau$. Recall from Lemma \ref{L:disk potential T_C} that the disk potential of $T^3_\tau$ can be computed in a basis $h_1,h_2,h_3$ of $H_1(T^3_\tau;\mathbb{Z})$, where $h_1$ is a loop projecting bijectively to the curve $C_\tau\subset \mathbb{C}\setminus \{\pm 1\}$ (that $T_\tau^3$ covers), while $h_2$ and $h_3$ are vanishing circles that project to points under the fibration. As observed in Remark \ref{R:crit pts of potentials}, the critical points of the disk potential that belong to $(U_{\mathbb{K}_R})^3$ correspond to unitary local systems on $T^3_\tau$, whose holonomy around $h_1$ is arbitrary, and whose holonomy around each of $h_2$ and $h_3$ is $-1$. Given $U\in U_{\mathbb{K}_R}$, let $\alpha := T^{-2\tau} U^{-1} \in \mathbb{K}_R\setminus \mathbb{K}_{R,0}$ and denote by $T_\alpha$ the Lagrangian $T^3_\tau$ equipped with a unitary local system $\xi$ in the trivial $\mathbb{K}_R$-bundle, whose holonomy around $h_1$ is $U$, and whose holonomies around $h_2$ and $h_3$ are $-1$. 
Given $a \in HF^*(N_1,N_0;\mathbb{K}_R)$ and an object $L$ of $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^3;\mathbb{K}_R)$, define a map \begin{align*} \phi_a^L \colon HF^*(N_0,L;\mathbb{K}_R) & \to HF^*(N_1,L;\mathbb{K}_R) \\ x & \mapsto \mu^2(x,a) \end{align*} \begin{figure} \begin{center} \def\svgwidth{0.5\textwidth} \input{HF_N_L.pdf_tex} \end{center} \caption{The action of $u$ on $T^3_\alpha$} \label{HF(N,L)_fig} \end{figure} \begin{lemma} \phantomsection \label{HF(N,T)} For the appropriate choice of $\operatorname{Spin}$ structure on $T_\alpha$, \begin{enumerate} \item there is an isomorphism $$ HF^*(N,T_\alpha;\mathbb{K}) \cong H^*(T^2;\mathbb{K}), $$ possibly with a degree shift; \item using the identification in Lemma \ref{ue geometric} of $e\in HF^{0}(N_1,N_0;\mathbb{Z})$ with the fundamental class of $S^1$, and of $ue\in HF^{-2}(N_1,N_0;\mathbb{Z})$ with the fundamental class of $T^2$, we have $$ \phi_{u e}^{T_\alpha} = \alpha \, \phi_{e}^{T_\alpha}. $$ \end{enumerate} \end{lemma} \begin{proof} We begin with (1). By Remark \ref{R:crit pts of potentials}, $T_\alpha$ corresponds to a critical point of the disk potential of the torus $T_\tau^3$. Hence, $HF^*(T_\alpha,T_\alpha;\mathbb{K})\cong H^*(T^3;\mathbb{K})$ has rank 8. Observe that for the Lagrangian lifts $N_i$ of the paths $\eta_i$ in Figure \ref{F_w_fig}, each of the graded $\mathbb{K}$-vector spaces $CF^*(N_i,T_\alpha;\mathbb{K})$ is isomorphic to $H^*(T^2;\mathbb{K})$. This has rank 4, so the rank of $HF^*(N_i,T_\alpha;\mathbb{K})$ can only be 0, 2 or 4, since the differential cancels generators in pairs. On the other hand, by Proposition \ref{HW(N)}, $$HF^*(N_i,T_\alpha;\mathbb{K}) \cong HF^*(F,T_\alpha;\mathbb{K})\oplus HF^*(F,T_\alpha;\mathbb{K})[1].$$ Since $F$ generates the wrapped Fukaya category and $T_\alpha$ is a non-trivial object, we conclude that the rank of $HF^*(F,T_\alpha;\mathbb{K})$ is 1 or 2. Denote this $\mathbb{K}[u]$-module by $M$.
The full faithfulness of the Yoneda embedding implies that $$ HF^*(T_\alpha, T_\alpha;\mathbb{K}) \cong \Ext_{\mathbb{K}[u]}^*(M,M) $$ and the rank of the right side cannot be 8 if $M$ has rank 1, since the space of endomorphisms of a skyscraper sheaf in the derived category of a smooth curve has rank 2 (see \eqref{ExtSaSa} below for a more general result). We conclude that $M$ has rank 2, $HF^*(N_i, T_\alpha;\mathbb{K})$ has rank 4 and the differential on $CF^*(N_i, T_\alpha;\mathbb{K})$ vanishes. This implies the statement in (1). To prove (2), we use Figure \ref{HF(N,L)_fig}. First, we need to determine the image of $e$ and $ue$ under the functor $\mathcal G_1 : \mathcal W^\mathbb{Z}(T^*S^3;\mathbb{Z}) \to \mathcal W^\mathbb{Z}(T^*S^3;\mathbb{K}_R)$ defined in Section \ref{S:W}. Using an auxiliary Morse function on $N_0\cap N_1\cong S^1\sqcup T^2$, we know that $e$ is given by the maximum (which we denote by $x$) on the component $S^1$, and $ue$ is given by the maximum (which we denote by $y$) on the component $T^2$. Pick primitives $f_i$ for the restriction of the Liouville form $\lambda$ on $T^*S^3$ to the $N_i$, $i\in \{0,1\}$. Assume that $f_0$ and $f_1$ both vanish at $x$. Then, $\mathcal G_1(e) = T^{f_1(x)-f_0(x)} x = x \in CF^0(N_1,N_0;\mathbb{K}_R)$. Similarly, $\mathcal G_1(ue) = T^{f_1(y)-f_0(y)} y \in CF^{-2}(N_1,N_0;\mathbb{K}_R)$. Under our assumptions, $f_1(y) - f_0(y)$ is the negative of the symplectic area of a strip between $N_1$ and $N_0$, obtained from lifting the union of the darkly shaded and the white triangles in Figure \ref{HF(N,L)_fig}. We denote this area by $A+C$ in the figure. We can conclude that $ue$ is represented by $T^{-A-C} y$ in $CF^{-2}(N_1,N_0;\mathbb{K}_R)$. Each of the shaded triangles in the figure lifts to a $T^2$-family of holomorphic triangles with suitable boundary conditions.
Since using the fundamental classes of $S^1$ and $T^2$ as inputs in $\mu^2$ does not constrain the $T^2$-families of holomorphic triangles, we conclude that, for suitable bases of $HF^*(N_i,T^3_\alpha;\mathbb{K})$, $i\in \{0,1\}$, we have \begin{equation} \label{formula with signs} \phi_{ue}^{T_\alpha} = T^{-A-C} T^A \operatorname{id} = \alpha T^B U \operatorname{id} = \pm \alpha \phi_e^{T_\alpha}. \end{equation} We used the fact that the sum of the $\sigma$-areas of the lightly shaded and white triangles in the figure is $B+C = 2\tau$. The sign can be fixed as wanted on the right side of \eqref{formula with signs}, since by changing the $\operatorname{Spin}$ structure on $T_\alpha$ one can replace $\alpha$ by $-\alpha$. \end{proof} \begin{remark} The fibration is trivial over the darkly shaded triangle in Figure \ref{HF(N,L)_fig}, which is why it has a $T^2$-family of lifts. However, the fibration is not trivial over the lightly shaded triangle, because one of its vertices is a critical value, but this triangle still has a $T^2$-family of disks. Note that one could also modify $N_0$ slightly, in a manner similar to what is done in Figure \ref{HF(F,Sn)_fig} for the proof of Lemma \ref{HF(F,Sn) odd}, so that the analogue of the lightly shaded triangle now includes no critical points, even at the corners. \end{remark} We can now prove the following analogue of Theorem \ref{split generators Sn}. \begin{theorem} \label{split generators S3} The collection of $A_{\mathbb{K}}$-modules $$ \{HF^*(F,(S^3,\alpha[pt]);\mathbb{K})\}_{\val(\alpha) \geq 0} \cup \{HF^*(N,T_\alpha;\mathbb{K})\}_{\val(\alpha) < 0} $$ split-generates the category $\mmod_{pr}(A_{\mathbb{K}})$. \end{theorem} \begin{proof} We just have to show that we can replace the objects supported on the collection of Lagrangians $\{(S^1\times S^2)_\tau\}_{\tau>0}$ by the objects supported on the collection $\{T^3_\tau\}_{\tau>0}$. 
By Lemma \ref{HF(N,T)}, $$ HF^*(N_0,T_\alpha;\mathbb{K}) \cong S_{\alpha}^2 \oplus S_{\alpha}^2[1] $$ as right $A_{\mathbb{K}}$-modules, where $S_\alpha$ is the 1-dimensional $\mathbb{K}$-vector space with $u$ acting by multiplication by $\alpha$. The result follows from Corollary \ref{S generate}. \end{proof} \begin{corollary} \label{generate Fuk} The category $\mathcal{F}^{\mathbb{Z}/2\mathbb{Z}}_{mon}(T^*S^3;\mathbb{K})$ is split-generated by the collection of objects $\{(S^3,\alpha[pt])\}_{\val(\alpha) \geq 0} \cup \{T_\alpha\}_{\val(\alpha) < 0}$. \end{corollary} \begin{proof} This follows from Theorem \ref{split generators S3} and Proposition \ref{C:Yoneda ff}. \end{proof} Now that we understand the $F$-modules associated to the Lagrangians $(S^1\times S^2)_\tau$ and $T^3_\tau$ in $T^*S^3$, we can also prove Theorem \ref{T:S1xS2 and T3}. \begin{proof}[Proof of Theorem \ref{T:S1xS2 and T3}] We wish to show that, if we fix $\tau, \tau'>0$, then $\tau = \tau'$ iff $(S^1\times S^2)_\tau$ and $T^3_{\tau'}$ can be equipped with local systems such that their Floer cohomology is non-trivial. Let $U\in U_\mathbb{K}$ and write $\alpha = T^{-4 \tau} U^{-1}$. Recall that the minimal Maslov number of $(S^1\times S^2)_\tau$ is 4, and that $(S^1\times S^2)_\alpha$ denotes $(S^1\times S^2)_\tau$ equipped with a local system of holonomy $U$. The proof of Theorem \ref{split generators Sn} implies that $HF^*(F,(S^1\times S^2)_\alpha;\mathbb{K}) \cong S_{\sqrt{\alpha}} \oplus S_{-\sqrt{\alpha}}$, where $\sqrt{\alpha} = T^{-2\tau} (\sqrt{U})^{-1}$ for some square root $\sqrt{U}\in \mathbb{K}$ of $U$. Write also $\alpha' = T^{-4 \tau'} U^{-1}$ and $\sqrt{\alpha'} = T^{-2\tau'} (\sqrt{U})^{-1}$. The minimal Maslov number of $T^3_{\tau'}$ is 2, and we have that $T_{\sqrt{\alpha'}}$ denotes $T^3_{\tau'}$ with a local system of holonomy $\sqrt U$. 
The proof of Theorem \ref{split generators S3} implies that $HF^*(F, T_{\sqrt{\alpha'}};\mathbb{K}) \cong S_{\sqrt{\alpha'}} \oplus S_{\sqrt{\alpha'}}[1]$. The result now follows from Proposition \ref{C:Yoneda ff} and the fact that, given $\beta, \beta'\in \mathbb{K}$ and the corresponding $\mathbb{Z}/2\mathbb{Z}$-graded $A_{\mathbb{K}}$-modules $S_\beta$, $S_{\beta'}$, we have \begin{equation} \label{ExtSaSa} \Ext_{A_{\mathbb{K}}}^*(S_\beta,S_{\beta'}) \cong \begin{cases} \mathbb{K}\oplus \mathbb{K}[1] & \text{ if } \beta = \beta' \\ 0 & \text{ otherwise } \end{cases}. \end{equation} \end{proof} \section{Intrinsic formality of algebras and modules} \label{S:Formality algebra} Recall that $A_{\mathbb{K}}=HW^*(F,F;\mathbb{K})$ is isomorphic to the polynomial algebra $\mathbb{K}[u]$, where $\deg(u)=1-n$. From this point on, we will always work over $\mathbb{K}$, and write $A$ instead of $A_\mathbb{K}$. We want to show that all $A_\infty$-algebras whose cohomology algebra is $A$ are quasi-isomorphic to $A$ (which is to say that $A$ is {\em intrinsically formal}), in preparation for proving the analogous result for certain types of modules. Denoting by $|A|$ the algebra $A$ where we {\em ignore the grading}, we can define the Hochschild cohomology $HH^r(|A|,|A|)$ as the homology of $CC^r(|A|,|A|) := {\operatorname{Hom}}_\mathbb{K}(|A|^{\otimes r},|A|)$, for $r\geq 0$, with respect to the Hochschild differential, see for instance \cite{WeibelHA}. To keep track of the grading on $A$, one can define $$ CC^{r}(A,A[s]) := {\operatorname{Hom}}^s_\mathbb{K}(A^{\otimes r},A), $$ which consists of graded homomorphisms that increase the degree by $s\in \mathbb{Z}$ (we continue to use the cohomological convention under which $A[s]$ is obtained by {\em subtracting} $s$ from all degrees in $A$), see \cite{SeidelThomas}. Some references use the alternative notation $CC^{r+s}(A,A)^s$, see for instance \cite{SeidelBook}. 
The Hochschild differential preserves $s$, so the $CC^{r}(A,A[s])$ are subcomplexes of $CC^r(|A|,|A|)$. Hence, for each $s$ we have a direct sum of chain complexes $$ CC^*(|A|,|A|) = CC^{*}(A,A[s]) \oplus Q^{*,s} $$ where $Q^{r,s} \subset CC^*(|A|,|A|)$ consists of those homomorphisms that have no term of degree $s$. One can identify $Q^{r,s}$ with the quotient $CC^r(|A|,|A|) / CC^{r}(A,A[s])$. We can conclude that there are inclusions on cohomology $$ HH^{r}(A,A[s]) \subset HH^r(|A|,|A|). $$ \begin{remark} In general, none of the inclusions $$\bigoplus_{s\in \mathbb{Z}} CC^{r}(A,A[s]) \subset CC^r(|A|,|A|) \subset \prod_{s\in \mathbb{Z}} CC^{r}(A,A[s])$$ can be claimed to be the identity. \end{remark} By \cite{WeibelHA}*{Corollary 9.1.5}, $HH^*(|A|,|A|) \cong \Ext^*_{|A|^e}(|A|,|A|)$, where $|A|^e = |A|\otimes_\mathbb{K} |A|^{\op}$ (this is isomorphic to $|A|\otimes_\mathbb{K} |A|$, since $|A|$ is commutative). Note that $\Ext^*_{|A|^e}(|A|,|A|)$ can be computed using any projective resolution of $|A|$ as an $|A|^e$-module. We use the {\em Koszul resolution} \begin{equation} \label{Koszul res} 0 \to |A|^e \stackrel{f}{\to} |A|^e \stackrel{g}{\to} |A| \to 0, \end{equation} where $f(a(u)\otimes b(u)) = a(u) u \otimes b(u) - a(u) \otimes u b(u)$ and $g(p(u)\otimes q(u)) = p(u) q(u)$. The existence of this 2-step resolution implies that $HH^r(|A|,|A|) = 0$ if $r \notin \{ 0,1\}$. Since $HH^{r}(A,A[s]) \subset HH^{r}(|A|,|A|)$, this implies that $HH^{r}(A,A[s]) = 0$ for all $s$ and for all $r\notin\{0,1\}$, in particular for $r\geq 3$. It is known that $A$ is intrinsically formal if $HH^{r}(A,A[2-r]) = 0$ for all $r\geq 3$, see \cite{SeidelThomas}*{Theorem 4.7}. We can thus conclude that the $\mathbb{Z}$-graded algebra $A$ is intrinsically formal. The additional vanishing for $r=2$ means that it is also not possible to deform the product structure on $A$.
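To make the vanishing explicit (a routine verification, using only the resolution \eqref{Koszul res}): applying ${\operatorname{Hom}}_{|A|^e}(-,|A|)$ to the Koszul resolution and identifying ${\operatorname{Hom}}_{|A|^e}(|A|^e,|A|) \cong |A|$, one obtains the two-term complex $$ 0 \to |A| \stackrel{f^*}{\to} |A| \to 0, $$ where $f^*$ sends $c(u)$ to $u\, c(u) - c(u)\, u = 0$, by commutativity of $|A|$. Hence $$ HH^0(|A|,|A|) \cong |A|, \qquad HH^1(|A|,|A|) \cong |A|, \qquad HH^r(|A|,|A|) = 0 \ \text{ for } r \geq 2, $$ where $HH^0$ is the center of $|A|$ and $HH^1$ is spanned over $|A|$ by the class of the derivation $\partial/\partial u$.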
The previous argument can be adapted to show that, if we collapse the $\mathbb{Z}$-grading of $A$ to a $\mathbb{Z}/2\mathbb{Z}$-grading, $A$ is still intrinsically formal. More specifically, we can take $$CC^{r}(A,A[\even]) = \bigoplus_{s \even} CC^{r}(A,A[s]) \quad \text{and} \quad CC^{r}(A,A[\odd]) = \bigoplus_{s \odd} CC^{r}(A,A[s]),$$ both of which are subcomplexes of $CC^{r}(|A|,|A|)$. We get a decomposition $$ HH^{r}(|A|,|A|) \cong HH^{r}(A,A[\even]) \oplus HH^{r}(A,A[\odd]). $$ In the $\mathbb{Z}/2\mathbb{Z}$-graded case, intrinsic formality of $A$ follows from the simultaneous vanishing $$ \begin{cases} HH^r(A,A[\even]) = 0 \text{, for all } r\geq 3 \text{ even} \\ HH^r(A,A[\odd]) = 0 \text{, for all } r\geq 3 \text{ odd} \end{cases} $$ which is again a consequence of the fact that $HH^r(|A|,|A|) = 0$ if $r \notin \{ 0,1\}$. We can conclude the following. \begin{proposition} \label{prop:A intrinsically formal} $A=\mathbb{K}[u]$ is intrinsically formal as a $\mathbb{Z}$-graded algebra and as a $\mathbb{Z}/2\mathbb{Z}$-graded algebra. \qed \end{proposition} We now discuss right modules over the graded algebra $A$. In a manner similar to the previous discussion, let $|A|$, $|M|$ and $|N|$ be the result of forgetting the $\mathbb{Z}$-gradings of the algebra $A$ and of right $A$-modules $M$ and $N$. Then, ${\operatorname{Hom}}_\mathbb{K}(|M|,|N|)$ is an $A$-bimodule and its Hochschild cochain complex is, for $r\geq 0$, \begin{align*} CC^r(|A|,{\operatorname{Hom}}_\mathbb{K}(|M|,|N|)) &:= {\operatorname{Hom}}_\mathbb{K}(|A|^{\otimes r},{\operatorname{Hom}}_\mathbb{K}(|M|,|N|)) \\ & \cong {\operatorname{Hom}}_\mathbb{K}(|M|\otimes_\mathbb{K} |A|^{\otimes r},|N|). 
\end{align*} Remembering the $\mathbb{Z}$-gradings, we can denote as before the homomorphisms of degree $s\in \mathbb{Z}$ by $$ CC^{r}(A,{\operatorname{Hom}}_\mathbb{K}(M,N)[s]) := {\operatorname{Hom}}^s_\mathbb{K}(A^{\otimes r},{\operatorname{Hom}}_\mathbb{K}(M,N)) \cong {\operatorname{Hom}}^s_\mathbb{K}(M\otimes_\mathbb{K} A^{\otimes r},N). $$ The Hochschild differential preserves $s$ and we get inclusions on cohomology $$ HH^{r}(A,{\operatorname{Hom}}_\mathbb{K}(M,N)[s]) \subset HH^r(|A|,{\operatorname{Hom}}_\mathbb{K}(|M|,|N|)). $$ Using again \cite{WeibelHA}*{Corollary 9.1.5}, we get that $HH^*(|A|,{\operatorname{Hom}}_\mathbb{K}(|M|,|N|)) \cong \Ext^*_{|A|^e}(|A|,{\operatorname{Hom}}_\mathbb{K}(|M|,|N|))$. The existence of the 2-step Koszul resolution \eqref{Koszul res} now implies that $HH^r(|A|,{\operatorname{Hom}}_\mathbb{K}(|M|,|N|))=0$ for $r\geq 2$ and for every $M,N$. Consequently, we get $HH^{r}(A,{\operatorname{Hom}}_\mathbb{K}(M,N)[s])= 0$ for $r\geq 2$ and for all $s\in \mathbb{Z}$. \begin{remark} It is worth pointing out that $HH^*(|A|,{\operatorname{Hom}}_\mathbb{K}(|M|,|N|))$ is also isomorphic to $\Ext^*_{|A|/\mathbb{K}}(|M|,|N|)$, the {\em relative} $\Ext$, see \cite{WeibelHA}*{Lemma 9.1.9}. \end{remark} Say that a $\mathbb{Z}$-graded right $A$-module $M$ is {\em intrinsically formal} if, for every $\mathbb{Z}$-graded right $A_\infty$-module $\mathcal{M}$ over $A$ such that the $A$-module $H^*\mathcal{M}$ is isomorphic to $M$, we have that $\mathcal{M}$ is quasi-isomorphic to $M$ (as $A$-modules). In an analogous manner to the Hochschild cohomology criterion for intrinsic formality of graded algebras discussed earlier, it can be shown that if $$HH^{r}(A,{\operatorname{Hom}}_\mathbb{K}(M,M)[1-r]) = 0$$ for all $r\geq 2$, then $M$ is intrinsically formal. What we saw above implies that every $\mathbb{Z}$-graded right $A$-module is intrinsically formal. 
The same argument could again be adapted to the case of $\mathbb{Z}/2\mathbb{Z}$-graded modules over $A$ (with its grading collapsed to $\mathbb{Z}/2\mathbb{Z}$). If $M$ and $N$ are $\mathbb{Z}/2\mathbb{Z}$-graded right $A$-modules, we can define cohomology groups $HH^{r}(A,{\operatorname{Hom}}_\mathbb{K}(M,N)[s])$ with $s\in \{0,1\}$. This time, we have a decomposition $$ HH^r(|A|,{\operatorname{Hom}}_\mathbb{K}(|M|,|N|)) \cong HH^{r}(A,{\operatorname{Hom}}_\mathbb{K}(M,N)) \oplus HH^{r}(A,{\operatorname{Hom}}_\mathbb{K}(M,N)[1]). $$ The sufficient condition for intrinsic formality of $M$ is now given by the simultaneous vanishing $$ \begin{cases} HH^r(A,{\operatorname{Hom}}_\mathbb{K}(M,M)[1]) = 0 \text{, for all } r\geq 2 \text{ even} \\ HH^r(A,{\operatorname{Hom}}_\mathbb{K}(M,M)) = 0 \text{, for all } r\geq 2 \text{ odd} \end{cases} $$ and this criterion is again met by the discussion above. We can conclude the following. \begin{proposition} \label{modules formal} All $\mathbb{Z}$-graded and all $\mathbb{Z}/2\mathbb{Z}$-graded right $A$-modules are intrinsically formal. \qed \end{proposition} As in Section \ref{SS:Yoneda}, denote by $\mmod(A)$ the category of right $A$-modules (we do not mean $A_\infty$-modules). These modules are $\mathbb{Z}$- or $\mathbb{Z}/2\mathbb{Z}$-graded, depending on the context. Given two modules $M, N$, we take the morphism space to be $$\hom^*_{\mmod(A)}(M,N) = \Ext^*_A(M,N)$$ (instead of the usual $A$-module homomorphisms). The following is a consequence of the results of this section. \begin{corollary} \label{C:mod(A) is formal} Passing to cohomology gives a functor \begin{align*} H:\mmod^{A_\infty}(\mathcal{A}) &\to \mmod(A) \\ \mathcal{M} & \mapsto H^*(\mathcal{M}) \end{align*} which is a quasi-equivalence (meaning that it induces an equivalence of categories on cohomology). The category $\mmod(A)$ is equivalent to the cohomology category of $\mmod^{A_\infty}(\mathcal{A})$.
\end{corollary} \begin{proof} The fact that morphisms on the cohomology category of $\mmod^{A_\infty}(\mathcal{A})$ are given by $\Ext$ groups is explained in \cite{SeidelBook}*{Remark 2.15}. There is a composition of quasi-equivalences of dg-categories $$ \mmod(A) \to \mmod^{A_\infty}(A) \to \mmod^{A_\infty}(\mathcal{A}). $$ The fact that $\mathcal{A}$ is formal (by Proposition \ref{prop:A intrinsically formal}) implies that the functor on the right is a quasi-equivalence, see \cite{SeidelBook}*{Section 2f}. The functor on the left is given by inclusion (thinking of $\mmod(A)$ as a dg-category with trivial differentials), and it is a quasi-equivalence by Proposition \ref{modules formal}. The functor $H$ in the statement is a quasi-inverse for this composition. \end{proof} \section{Generation of categories of modules} \label{S:Generation modules} \begin{definition} Let $\mmod_{pr}(A)$ be the subcategory of $\mmod(A)$, whose objets are finite dimensional right $A$-modules (the subscript stands for {\em proper}). \end{definition} The fact that $\mathbb{C}$ is algebraically closed of characteristic zero implies that $\mathbb{K}$ is also algebraically closed, see \cite{FOOOCompactToricI}*{Appendix A}.% \footnote{In characteristic $p>0$, the polynomial $x^p - x - T^{-1}$ does not have roots in the Novikov field. See \cite{Kedlaya} for a discussion of the algebraic closure of the power series field in positive characteristic.} This will enable us to study the category $\mmod_{pr}(A)$ using Jordan normal forms. Recall that $A = \mathbb{K}[u]$, where $\deg(u) = 1-n$. Since the monotone Fukaya category is $\mathbb{Z}/2\mathbb{Z}$-graded, we will consider two cases, depending on the parity of $n$. \subsection{When $n$ is odd} Take an object $M \oplus N$ of $\mmod_{pr}(A)$, where $M$ is in degree 0 and $N$ is in degree 1. 
Since $\mathbb{K}$ is algebraically closed, $M$ has a splitting $$ M \cong \bigoplus_{i = 1}^m M_{\alpha_i}^{k_i} $$ where $\alpha_i\in \mathbb{K}$, $k_i\in \mathbb{Z}_+$ and $M_{\alpha}^{k}$ is the vector space $\mathbb{K}^k$ with a right action of $u$ by the $k\times k$ transposed Jordan block $$ \begin{pmatrix} \alpha \\ 1 & \alpha \\ & \ddots & \ddots \\ & & 1 & \alpha \\ & & & 1 & \alpha \end{pmatrix}. $$ The module $N$ also has a splitting $$ N \cong \bigoplus_{j = 1}^n M_{\beta_j}^{l_j}[1] $$ for $\beta_j\in \mathbb{K}$ and $l_j\in \mathbb{Z}_+$. Denote the 1-dimensional module $M_\alpha^1$ by $S_\alpha$. \begin{lemma} \label{L:triangulated closure} For every $k\in \mathbb{Z}_+$, $M^k_\alpha$ is in the triangulated closure of $S_\alpha$. \end{lemma} \begin{proof} Observe that there are $A$-module homomorphisms $$ \varphi_\alpha^k \colon M_\alpha^k \to S_\alpha $$ obtained by projecting onto the last coordinate. We can think of an $A$-module homomorphism as a homomorphism of $A_\infty$-modules, and take its cone. Recall that $\Cone(\varphi_\alpha^k)$ is the right $A_\infty$-module over $A$ given by the chain complex $$ (M_\alpha^{k}[1]\oplus S_\alpha, \mu^1 = \varphi_\alpha^k), $$ with $\mu^2 = (\mu_{M_\alpha^{k}[1]}^2,\mu^2_{S_\alpha})$ and trivial higher $A_\infty$-maps, see \cite{SeidelBook}*{Section (3e)}. We have that $H^*\Cone(\varphi_\alpha^k) \cong M_\alpha^{k-1}[1]$ and so $\Cone(\varphi_\alpha^k)$ is quasi-isomorphic to $M_\alpha^{k-1}[1]$. We can now argue by induction on $k$ to prove the statement in the lemma. Since there is a distinguished triangle $$ M^k_\alpha \to S_\alpha \to M^{k-1}_\alpha[1] \to M^k_\alpha [1], $$ axiom TR2 for triangulated categories (see for instance \cite{WeibelHA}*{Definition 10.2.1}) implies that there is also a distinguished triangle $$ S_\alpha \to M^{k-1}_\alpha[1] \to M^k_\alpha [1] \to S_\alpha[1], $$ and by induction on $k$ we get that $M^k_\alpha$ is in the triangulated closure of $S_\alpha$ for all $k\geq 1$.
\end{proof} We can now conclude the following. \begin{corollary} \label{S generate} The category $\mmod_{pr}(A)$ is generated by the collection of modules $\{S_\alpha\}_{\alpha \in \mathbb{K}}$. \end{corollary} \subsection{When $n$ is even} This case is more subtle, because now $u$ is an operator of odd degree. Take again an object $M\oplus N$ in $\mmod_{pr}(A)$, where $M$ is a finite dimensional $\mathbb{K}$-vector space in degree 0 and $N$ is finite dimensional in degree 1. Since we will be interested in split-generation of $\mmod_{pr}(A)$, we can assume that $\dim_\mathbb{K} M = \dim_\mathbb{K} N$, by taking a direct sum of $M$ or $N$ with a $\mathbb{K}$-vector space with trivial $u$-action, if necessary. Since $u$ has odd degree, by picking bases for $M$ and $N$, the $u$-action is given by a matrix of the form $\begin{pmatrix} 0 & R \\ S & 0 \end{pmatrix}, $ where $R$ and $S$ are square matrices. Observe that $u^2$ has even degree, so it gives endomorphisms of $M$ and $N$. As we saw in the case of $n$ odd, we can pick bases for $M$ and $N$ so that the right action of $u^2$ is represented by the transpose of a Jordan matrix. This means that we can assume that $RS$ and $SR$ consist of finitely many transposed Jordan blocks along the diagonal. A simple calculation yields the following. \begin{lemma} The eigenvalues of the $u^2$-action on $M$ and on $N$ are the same. If $v_1,\ldots,v_m$ is a basis for $M$ in which $u^2$ is represented by a transposed Jordan matrix $J^T$, then $v_1 \cdot u, \ldots, v_m\cdot u$ is a basis for $N$ in which $u^2$ is also given by $J^T$. Hence, in these bases, $R=I$ and $S=J^T$. \end{lemma} Given $\alpha \in \mathbb{K}$, let $\tilde S_{\alpha}$ be the right $A$-module consisting of the $\mathbb{K}$-vector space $\mathbb{K}\oplus \mathbb{K}[1]$, on which $u$ acts by the matrix $\begin{pmatrix} 0 & 1 \\ \alpha & 0 \end{pmatrix}$. 
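As a consistency check (not needed in what follows), $u^2$ acts on $\tilde S_{\alpha}$ by the scalar $\alpha$: $$ \begin{pmatrix} 0 & 1 \\ \alpha & 0 \end{pmatrix}^2 = \begin{pmatrix} \alpha & 0 \\ 0 & \alpha \end{pmatrix} = \alpha I, $$ so, in the notation of the lemma above, $\tilde S_{\alpha}$ is the module with $R = (1)$ and $S = (\alpha)$, corresponding to a single $1\times 1$ transposed Jordan block with eigenvalue $\alpha$.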
\begin{corollary} \label{S tilde generate} The category $\mmod_{pr}(A)$ is split-generated by the collection of modules $\{\tilde S_{\alpha}\}_{\alpha\in \mathbb{K}}$. \end{corollary} \begin{proof} The argument is analogous to the proof of Corollary \ref{S generate} above. \end{proof} \bibliographystyle{alpha} \section{Introduction} An embedded Lagrangian $L$ in a cotangent bundle $(T^*Q,d(pdq))$ is {\em exact} if $pdq|_L = df$ for some function $f: L \to \mathbb{R}$. Arnold's nearby Lagrangian conjecture predicts that if $Q$ and $L$ are closed, then $L$ is Hamiltonian-isotopic to the zero-section $Q \subset T^*Q$. This result is currently known to hold only for a limited list of examples, including $Q = S^2$ \cite{Hind} and $T^2$ \cite{DGI}. The work of many authors has also led to a proof that the composition $L \to T^*Q \to Q$ (where the first map is the embedding and the second is projection to the zero-section) is a simple homotopy equivalence \cite{AbouzaidKraghSimple}. Very little is known if one drops the requirement that $L$ be exact. We will consider the case of $L$ {\em monotone}, by which we mean that there is a constant $\tau \geq 0$ such that, for every map $u:(D^2 , \partial D^2) \to (T^*Q,L)$, $$ \int_{D^2} u^*\omega = \tau \cdot \mu(u) $$ where $\mu(u)$ is the {\em Maslov index} of $u$. Note that we allow the case $\tau=0$, which happens, for instance, when $L$ is exact (if the map $H^1(T^*Q;\mathbb{R}) \to H^1(L;\mathbb{R})$ is trivial, then $\tau=0$ implies that $L$ is exact). For some results about monotone Lagrangians in cotangent bundles, see for instance \cite{GadbledCotangent}. The focus of this paper is on closed monotone Lagrangians in cotangent bundles of spheres, from the point of view of Floer theory, more specifically using {\em wrapped Floer cohomology}.
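To spell out one assertion in the previous paragraph, the fact that exact Lagrangians satisfy the monotonicity relation with $\tau = 0$ is a direct application of Stokes' theorem: if $pdq|_L = df$, then for every map $u:(D^2,\partial D^2) \to (T^*Q,L)$, $$ \int_{D^2} u^*\omega = \int_{\partial D^2} u^*(pdq) = \int_{\partial D^2} d(f\circ u) = 0, $$ which is the monotonicity relation with $\tau = 0$, regardless of the value of $\mu(u)$.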
Given closed Lagrangians $L,L' \subset T^*Q$ (possibly equipped with additional data like bounding cochains or local systems) in a symplectic manifold, one can sometimes define their Floer cohomology $HF^*(L,L')$, which is invariant under Hamiltonian perturbations of either $L$ or $L'$. If $HF^*(L,L') \neq 0$, then $L$ is not Hamiltonian-displaceable from $L'$ (which means that $\varphi(L)\cap L' \neq \emptyset$ for every Hamiltonian diffeomorphism $\varphi$ of $T^*Q$) \cite{FloerLagrangian}. Unless we say otherwise, we will take Floer cohomology with coefficients in the Novikov field over $\mathbb{C}$, which is denoted by $\mathbb{K}$ and defined below. There is a 1-parameter family of disjoint monotone Lagrangians $(S^1\times S^{n-1})_\tau \subset T^*S^n$, of different monotonicity constants $\tau>0$, whose construction will be reviewed below. These Lagrangians can be equipped with local systems such that their Floer cohomologies are non-trivial. In $T^*S^3$, the same holds for a 1-parameter family of disjoint monotone Lagrangian tori $T^3_\tau$, see \cite{ChanPomerleanoUeda1}. We will review the construction of these tori below as well. We will prove the following result. \begin{theorem} \label{T:non-displ} Take $n\geq 2$ and let $L\subset T^*S^n$ be a closed orientable spin monotone Lagrangian with a local system of rank 1 for which $HF^*(L,L;\mathbb{K}) \neq 0$. Then, either $HF^*(L,S^n;\mathbb{K}) \neq 0$ (where the zero-section $S^n$ is equipped with a suitable bounding cochain) or there is a $\tau>0$ for which $HF^*(L,(S^1\times S^{n-1})_\tau;\mathbb{K}) \neq 0$ (where $(S^1\times S^{n-1})_\tau$ is equipped with a suitable unitary local system of rank 1). In particular, $L$ is not Hamiltonian-displaceable from either $S^n$ or from $(S^1\times S^{n-1})_\tau$, for some $\tau >0$. Furthermore, for $T^*S^3$ we can replace the Lagrangians $(S^1\times S^{2})_\tau$ with the tori $T^3_\tau$ in the previous statement. 
\end{theorem} Our work towards the proof of Theorem \ref{T:non-displ} will also imply the following. \begin{theorem} \label{T:S1xS2 and T3} Let $\tau, \tau' > 0$. Then $\tau = \tau'$ iff the Lagrangians $(S^1 \times S^2)_\tau$ and $T^3_{\tau'}$ can be equipped with local systems with respect to which $HF^*((S^1 \times S^2)_\tau,T^3_{\tau'};\mathbb{K}) \neq 0$. In particular, $(S^1 \times S^2)_\tau$ is not Hamiltonian-displaceable from $T^3_\tau$. \end{theorem} We now describe the structure of the proof of Theorem \ref{T:non-displ}. The Lagrangians $L$ in the statement give objects in a {monotone wrapped Fukaya} category of $T^*S^n$, which also includes a cotangent fiber $F = T^*_q S^n$ (for some $q\in S^n$). This is an $A_\infty$-category (with only a $\mathbb{Z}/2\mathbb{Z}$-grading, since we allow monotone Lagrangians), which we denote temporarily by $\mathcal{W}$ (and will refer to it as $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(Y;\mathbb{K})$ in Section \ref{S:W}). The category $\mathcal{W}$ is generated by the cotangent fiber $F$. This is an adaptation of a result in \cite{AbouzaidCotangentFiber} (which in its original form was for the wrapped Fukaya category of exact Lagrangians). Let us consider some algebraic consequences of this generation result. Let $A_\mathbb{K} := HW^*(F,F;\mathbb{K})$ be the wrapped Floer cohomology algebra of $F$. The graded algebra $A_\mathbb{K}$ is isomorphic to $H_{-*}(\Omega_q S^n;\mathbb{K})$, where $q\in S^n$ is a basepoint and $\Omega_q$ denotes the based loop space, see \cite{AbouzaidBasedLoops}. Hence, $A_\mathbb{K}$ is isomorphic to a polynomial algebra $\mathbb{K}[u]$, where $\deg(u) = 1-n$. 
There is a {\em Yoneda functor} \begin{align*} Y : \mathcal{W} &\to \mmod(A_\mathbb{K}) \\ L & \mapsto HF^*(F,L;\mathbb{K}) \end{align*} where $\mmod(A_\mathbb{K})$ is the category of $\mathbb{Z}/2\mathbb{Z}$-graded right $A_\mathbb{K}$-modules, with the morphism space between two objects $M,M'$ in $\mmod(A_\mathbb{K})$ being $\Ext_{A_\mathbb{K}}^*(M,M')$. The generation result mentioned above, together with formality results for $A_\infty$-modules over $A_\mathbb{K}$ that we prove in Section \ref{S:Formality algebra}, implies that $Y$ is a cohomologically full and faithful functor, in the sense that it induces an isomorphism on cohomology $$ HW^*(L,L';\mathbb{K}) \cong \Ext_{A_\mathbb{K}}^*(Y(L),Y(L')) $$ for any pair of objects $L,L'$. Take the subcategory $\mathcal{F} \subset \mathcal{W}$ (denoted as $\mathcal{F}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(Y;\mathbb{K})$ in Section \ref{S:W}) whose objects are the compact Lagrangians; in particular, it does not include the object $F$. Given an object $L$ of this subcategory (where we suppress the additional data of local systems or bounding cochains), $HW^*(F,L;\mathbb{K}) = HF^*(F,L;\mathbb{K})$ is a finite dimensional $\mathbb{K}$-vector space, so $Y$ restricts to a cohomologically full and faithful embedding \begin{equation} \label{def:Y_c} Y_c : \mathcal{F} \to \mmod_{pr}(A_\mathbb{K}) \end{equation} where $\mmod_{pr}(A_\mathbb{K}) \subset \mmod(A_\mathbb{K})$ is the subcategory of proper $A_\mathbb{K}$-modules $M$ (those which are finite dimensional over $\mathbb{K}$). The approach of this paper will be to study the category $\mathcal{F}$ by analyzing the algebraic category $\mmod_{pr}(A_\mathbb{K})$. Corollary \ref{S generate} below gives generators for $\mmod_{pr}(A_\mathbb{K})$, and it implies the following result (which will appear below as Corollaries \ref{generate Fuk Sn} and \ref{generate Fuk}).
\begin{theorem} \label{generate F} Given $n\geq 2$, the functor $Y_c$ in \eqref{def:Y_c}, when extended to the split-closure of $\mathcal F$ (which is the monotone Fukaya category of $T^*S^n$), is a quasi-equivalence of categories. The category $\mathcal{F}$ is split-generated by the uncountable collection of objects consisting of $S^n$ (equipped with suitable bounding cochains) and the $(S^1\times S^{n-1})_\tau$ (equipped with unitary local systems of rank 1). In the case of $T^*S^3$, we can replace the $(S^1\times S^{2})_\tau$ with the $T^3_\tau$ above. \end{theorem} \begin{proof}[Proof of Theorem \ref{T:non-displ}] Given $L$ such that $HF^*(L,L;\mathbb{K}) \neq 0$, $L$ is a non-trivial object in $\mathcal{F}$. Theorem \ref{generate F} then implies that $HF^*(L,L';\mathbb{K}) \neq 0$ for some split-generator $L'$. \end{proof} \begin{remark}[Relation to mirror symmetry] As mentioned, the tori $T^3_\tau$ were studied in \cite{ChanPomerleanoUeda1}. They are fibers of an SYZ fibration in the complement of an anticanonical divisor $H$ in $T^*S^3$ ($H$ is anticanonical in the sense that the Lagrangian tori in the SYZ fibration have vanishing Maslov class in the complement of $H$). In this setting, the authors compute the disk potentials associated to SYZ-fibers by studying wall-crossing for pseudoholomorphic disks. This information is used to construct a Landau--Ginzburg model that is mirror to $T^*S^3$. The critical locus of the Landau--Ginzburg potential is an affine line. If the mirror is constructed over the Novikov field, then the points in this critical line with negative valuation correspond to (split summands of) the monotone Lagrangians $T^3_\tau$, equipped with suitable unitary local systems of rank 1. The points with non-negative valuation correspond to bounding cochains on the zero section $S^3$. \end{remark} \begin{remark}[Relation to abstract flux] The monotone Lagrangians $(S^1\times S^{n-1})_\tau$ can be obtained geometrically as follows.
Let $f:S^n \to \mathbb{R}$ be a Morse function with exactly two critical points. The graph of $df$ intersects the zero section of $T^*S^n$ transversely in the two critical points, and one can perform surgery on this transverse intersection to produce the family $(S^1\times S^{n-1})_\tau$. Similarly, the tori $T^3_\tau$ can be obtained by taking a Morse--Bott function $g:S^3 \to \mathbb{R}$ whose critical locus is a Hopf link, and performing surgery in $T^*S^3$ on the clean intersection of the zero section and the graph of $dg$. Recall that given a compact manifold $Q$ and a class $\alpha \in H^1(Q;\mathbb{R})$, one can take the {\em flux deformation} of the zero-section of $T^*Q$ in the direction of $\alpha$, by flowing $Q$ along a symplectic vector field $X$ such that $i^*[\omega(.,X)] = \alpha$ (where $i:Q\to T^*Q$ is the inclusion). Using the Weinstein tubular neighborhood theorem, one can similarly deform a compact Lagrangian $L$ in a symplectic manifold $(M,\omega)$ along a class $\alpha \in H^1(L;\mathbb{R})$. Motivated by \cite{SeidelAbstractFlux}, one can think of the family of Lagrangians $(S^1\times S^{n-1})_\tau$ (respectively, $T^3_\tau$) as an {\em abstract flux deformation} of two copies of the zero section $S^n$ (respectively, $S^3$) in the direction of a class $\beta\in H^n(S^n;\mathbb{R})$ (respectively, $H^3(S^3;\mathbb{R})$), if $n$ is odd. The case of $n$ even is more subtle, as we will see. \end{remark} This paper is organized as follows. In Section \ref{S:construct Lagrangians}, we present the construction of the monotone Lagrangians $(S^1 \times S^{n-1})_\tau$ in $T^*S^n$ and $T^3_\tau$ in $T^*S^3$. In Section \ref{S:Fukaya categories}, we recall the definitions of several versions of Fukaya categories of $T^*S^n$, including a monotone wrapped Fukaya category where Lagrangians are allowed to intersect cleanly.
In Section \ref{S:HF computations}, we perform several Floer cohomology computations, with a view towards proving Theorem \ref{generate F}. The remaining sections have a more algebraic nature, and are about $A_\infty$-algebras and $A_\infty$-modules. In Section \ref{S:Formality algebra}, we establish formality results for a category of modules associated to a cotangent fiber in $T^*S^n$. In Section \ref{S:Generation modules}, we obtain generators for that category of modules. \subsection*{Acknowledgements} The first named author was supported by the Simons Foundation through its ``Homological Mirror Symmetry'' Collaboration grant SIMONS 385571, and by NSF grants DMS-1609148 and DMS-1564172. The second named author thanks Yank{\i} Lekili, Maksim Maydanskiy and Daniel Pomerleano for helpful conversations. He also thanks Institut Mittag-Leffler for the hospitality during the last stages of preparation of this article. \section{Monotone Lagrangians in $T^*S^n$} \label{S:construct Lagrangians} \subsection{Lagrangians in $T^*S^n$} \label{SS:Lagrs in T*Sn} Recall that $T^*S^n$ is symplectomorphic to the complex affine quadric $$ X_n = \{(z_0, \ldots, z_{n}) \in \mathbb{C}^{n+1} \,|\, z_0^2 + \ldots + z_{n}^2 = 1 \}, $$ equipped with the K\"ahler form $\omega$ obtained by restricting $\frac{i}{2}\sum_{j=0}^{n} d z_j \wedge d \overline{z_j}$ from $\mathbb{C}^{n+1}$ \cite{McDuffSalamonIntro}*{Exercise 6.20}. The projection to the first coordinate defines a Lefschetz fibration \begin{align*} \pi_n\colon X_n &\to \mathbb{C} \\ (z_0, \ldots, z_{n}) &\mapsto z_0 \end{align*} with critical values $\pm 1$.
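Indeed, a point of $X_n$ is critical for $\pi_n$ exactly when $dz_0$ vanishes on $T X_n = \ker \big(\sum_{j=0}^{n} z_j \, dz_j\big)$, that is, when
$$
dz_0 = \lambda \sum_{j=0}^{n} z_j \, dz_j \qquad \text{for some } \lambda\in\mathbb{C}.
$$
This forces $z_1 = \ldots = z_n = 0$, and the equation defining $X_n$ then gives $z_0 = \pm 1$. Hence the critical points are $(\pm 1, 0, \ldots, 0)$, with critical values $\pm 1$.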
For every regular value $p\neq \pm 1$, the fiber $\pi_n^{-1}(p)$ is symplectomorphic to $T^*S^{n-1}$, and contains the Lagrangian sphere $$ V_{p}:= \{(p,\sqrt{1-p^2} \, x_1, \ldots, \sqrt{1-p^2} \, x_{n}) \in X_n \, | \, (x_1, \ldots, x_{n}) \in S^{n-1}\}, $$ where $S^{n-1}\subset \mathbb{R}^{n}$ is the unit sphere and $\sqrt{1-p^2}$ is one of the two square roots of $1-p^2$. Write also $V_{\pm1} = \{(\pm1,0,\ldots,0)\}$. \begin{figure} \begin{center} \def\svgwidth{0.5\textwidth} \input{F_w.pdf_tex} \end{center} \caption{Some curves in $\mathbb{C}\setminus \{\pm 1\}$} \label{F_w_fig} \end{figure} We will be interested in the following types of Lagrangians that project to curves under $\pi_n$. See Figure \ref{F_w_fig} for relevant examples of such curves. \begin{definition}\label{D:Lagr F} Given a curve $C\subset \mathbb{C}\setminus \{-1, 1\}$ that is the image of an embedding of $S^1$, let $$ L_C := \bigcup_{z\in C} V_{z}. $$ Given an embedding $\eta: [0,\infty) \to \mathbb{C}$ such that \begin{itemize} \item $\eta(0) \in \{-1,1\}$, \item $\eta\big((0,\infty)\big) \subset \mathbb{C}\setminus \{-1,1\}$ and \item $\eta(t) = at+b$ for some $a\in \mathbb{C}^*$, $b\in \mathbb{C}$ and $t$ large enough, \end{itemize} let \begin{equation*} F_\eta := \bigcup_{t\geq 0} V_{\eta(t)}. \end{equation*} \end{definition} \begin{lemma} \label{L:L_C and F_eta} The subsets $L_C$ and $F_\eta$ of $X_n$ in Definition \ref{D:Lagr F} are Lagrangian submanifolds. If $C$ encloses both points $\pm 1$, then $L_C$ is diffeomorphic to $S^1\times S^{n-1}$, while $F_\eta$ is Hamiltonian isotopic to a cotangent fiber in $T^*S^n$. \end{lemma} \begin{proof} The $L_C$ and $F_\eta$ are Lagrangians because parallel transport with respect to the connection induced by the symplectic fibration $\pi_n$ preserves the spheres $V_p$ (they are vanishing cycles for arbitrary vanishing paths in the base), see \cite{SeidelBook}*{Lemma 16.3}.
Since there are only two types of $S^{n-1}$-bundles over $S^1$, and the closed curve $C$ encircles two critical values which have the same monodromy (a Dehn twist), it follows that $L_C$ is the trivial bundle. We now consider the Lagrangians $F_\eta$. Take $\eta_\pm$ such that $\eta_\pm(t)=\pm(t+1)$ for all $t\geq 0$. Then, $F_{\eta_\pm}$ is mapped to $T_{\pm 1}^*S^n$ by the symplectomorphism $X_n\to T^*S^n$ in \cite{McDuffSalamonIntro}*{Exercise 6.20}. For any other $\eta$, there is an isotopy to one of the $\eta_\pm$ that lifts to a Hamiltonian isotopy by an application of Moser's trick. \end{proof} \begin{remark} The Floer cohomology of the Lagrangian submanifolds $L_C\cong S^1\times S^{n-1}$ in $T^*S^n$ in the previous lemma was studied in \cite{AlbersFrauenfelderTorus}. \end{remark} \begin{remark} In this Lefschetz fibration description $\pi_n : X_n\to \mathbb{C}$ of $T^*S^n$, the zero section $S^n$ is the Lagrangian lift of the interval $[-1,1] \subset \mathbb{C}$. \end{remark} Let us continue with our study of the Lagrangians $L_C$, where $C$ encloses $\{\pm1\}$. Much of what follows in this section is an adaptation of results in \cite{LekiliMaydanskiy}*{Section 2.2}. The homology long exact sequence of the pair $(T^*S^n,L_C)$ implies that $$ H_2(T^*S^n,L_C;\mathbb{Z}) \cong H_2(T^*S^n;\mathbb{Z}) \oplus H_1(L_C;\mathbb{Z}) $$ if $n\geq 2$. The group $H_2(T^*S^n;\mathbb{Z})$ vanishes unless $n=2$; in that case, both $\omega$ and $c_1(T^*S^2)$ vanish on $H_2(T^*S^2;\mathbb{Z})$. The group $H_1(L_C;\mathbb{Z})$ has rank 2 for $n=2$ and rank 1 for all $n\geq 3$. For $n\geq 3$, $H_2(T^*S^n,L_C;\mathbb{Z})\cong \mathbb{Z}$ is generated by a class $\beta$ such that $\pi_n\circ \beta$ covers $C$ once.
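In more detail, since $T^*S^n$ deformation retracts to $S^n$, which is simply connected for $n\geq 2$, the relevant portion of the long exact sequence of the pair is
$$
H_2(L_C;\mathbb{Z}) \to H_2(T^*S^n;\mathbb{Z}) \to H_2(T^*S^n,L_C;\mathbb{Z}) \to H_1(L_C;\mathbb{Z}) \to H_1(T^*S^n;\mathbb{Z}) = 0.
$$
The first map vanishes: for $n\geq 3$ this is because $H_2(T^*S^n;\mathbb{Z}) = 0$, and for $n=2$ one can isotope $C$ inside $\mathbb{C}\setminus\{\pm 1\}$ to a curve disjoint from $[-1,1]$, so that $[L_C]\cdot [S^2] = 0$; since the self-intersection number of the zero section in $T^*S^2$ is $-2\neq 0$, this forces $[L_C] = 0$ in $H_2(T^*S^2;\mathbb{Z})\cong \mathbb{Z}\langle [S^2]\rangle$. The resulting short exact sequence splits because $H_1(L_C;\mathbb{Z})$ is free.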
For $n=2$, we can pick $\alpha, \beta\in H_2(T^*S^2,L_C;\mathbb{Z})$ such that their boundaries give a basis for $H_1(L_C;\mathbb{Z})\cong \mathbb{Z}^2$, with the following properties: $\alpha$ is a Lefschetz thimble for some vanishing cycle $V_p$ and hence has vanishing Maslov index and symplectic area, while the boundary of $\pi_2\circ \beta$ covers $C$ once. We will now study the $\omega$-area and Maslov index of the disks $\beta$. We need some auxiliary notation. Denote by $\sigma_{std}:= \frac{i}{2} dz\wedge d\overline {z} = r dr \wedge d\theta$ the standard area form in $\mathbb{C}$. Define, on the set of regular values of the Lefschetz fibration $\pi_n$, which is $\mathbb{C} \setminus \{\pm 1\}$, the 2-form $$ \sigma := \frac{i}{2} dz_0\wedge d\overline {z_0} + f^*\sigma_{std}, $$ where $f : \mathbb{C} \setminus \{\pm 1\} \to \mathbb{C}\setminus \{0\}$ is given by $f(z) = \frac{{1-z^2}}{\sqrt{2|1-z^2|}}$. The function $f$ can be thought of as the composition of the two maps \begin{align*} \mathbb{C}\setminus\{\pm 1\} &\to \mathbb{C}\setminus\{0\} & \mathbb{C}\setminus \{0\} &\to \mathbb{C}\setminus\{0\} \\ z &\mapsto 1-z^2 & r e^{i\theta} &\mapsto \sqrt{\frac{r}{2}} e^{i\theta} \end{align*} The first map is holomorphic and the second is smooth and orientation-preserving, so $\sigma$ defines a positive measure on $\mathbb{C}\setminus \{\pm 1\}$. It extends to all of $\mathbb{C}$, as a measure that is absolutely continuous with respect to the Lebesgue measure. \begin{lemma} \label{L:sigma area} Given a disk $\beta: (D^2,\partial D^2)\to (X_n,L_C)$ such that $\pi_n\circ \beta$ covers $C$ once, we have $$ \int_\beta \omega = \int_{\pi_n (\beta)} \sigma. $$ \end{lemma} \begin{proof} Take $\beta$ as in the lemma. 
We can assume the boundary of $\beta$ to be given by $c(t) = \left(\gamma(t), \sqrt{1-\gamma(t)^2}\, s(t)\right)$, where $\gamma:[0,1]\to \mathbb{C}\setminus \{\pm 1\}$ is a degree 1 parametrization of $C$ and $s(t) = (s_1(t),\ldots,s_{n}(t))\in S^{n-1}$. Here, $\sqrt{.}$ is the analytic continuation of a branch of the square root along the path $1-\gamma^2$. Write $g(t):= \sqrt{1-\gamma(t)^2}$. We have \begin{equation} \label{difference} \int_\beta \left(\omega - \frac{i}{2} dz_0\wedge d\overline {z_0}\right) = \int_c \sum_{j=1}^{n} \frac{i}{4} (z_j \, d\overline{z_j} - \overline{z_j} \, d{z_j}) = \frac{i}{4} \int_0^1 \left(g \overline g' - \overline g g'\right) dt, \end{equation} using, for the first identity, Stokes' theorem and the fact that $\frac{i}{4} (z_j \, d\overline{z_j} - \overline{z_j} \, d{z_j})$ is a primitive of $\frac{i}{2} dz_j\wedge d\overline{z_j}$. A calculation shows that the right side of \eqref{difference} can be written as $$ \int_0^1 g^*\left(\frac{1}{2}r^2 d\theta\right) = \int_{C}f^*\left(\frac{1}{2} r^2 d\theta\right) , $$ where $f$ is the function defined before the lemma. Identifying $C$ with the boundary of $\pi_n(\beta)$ and using Stokes' theorem, the integral on the right equals $$ \int_{\pi_n(\beta)} f^*\sigma_{std}, $$ which finishes the proof. \end{proof} \begin{remark} The previous argument also goes through if $C$ is a piecewise smooth curve. This will be helpful in Section \ref{S:HF computations}, when computing operations $\mu^k$ involving several Lagrangians that fiber over paths in $\mathbb{C}$. \end{remark} \begin{corollary} Suppose that the simple curves $C$ and $C'$ in $\mathbb{C}\setminus \{-1, 1\}$ both enclose $\{-1,1\}$. Then, they bound the same $\sigma$-area if and only if $L_C$ and $L_{C'}$ are Hamiltonian isotopic. \end{corollary} \begin{proof} The proof is similar to that of \cite{LekiliMaydanskiy}*{Corollary 2.5}.
\end{proof} \begin{lemma} \label{Maslov L_C} The Maslov index of an oriented disk in $X_n$ with boundary in $L_C$, whose boundary projects to a degree 1 cover of $C$, is $2(n-1)$. The Lagrangians $L_C$ are monotone with monotonicity constant $\tau_C = \frac{\int_{\Omega_C}\sigma}{2(n-1)}$, where $\Omega_C\subset \mathbb{C}$ is the region bounded by $C$ in the plane. \end{lemma} \begin{proof} We begin by considering the Lagrangian lift $L_0$ of the unit circle in the model Lefschetz fibration $\pi : \mathbb{C}^n \to \mathbb{C}$, where $\pi(z_1,\ldots,z_n) = z_1^2 + \ldots + z_n^2$. The vanishing cycle over $p\in \mathbb{C}\setminus \{0\}$ of a vanishing path through $p$ is $$ V'_{p}:= \{\sqrt{p} (x_1, \ldots, x_{n}) \, | \, (x_1, \ldots, x_{n}) \in S^{n-1}\}, $$ see \cite{SeidelBook}*{Example 16.5}. We can use the holomorphic volume form \begin{equation} \label{Omega} \Omega = d z_1 \wedge \ldots \wedge d z_{n} \end{equation} on $\mathbb{C}^n$ to compute the Maslov index of a disk with boundary in $L_0$. Let $u$ be such a disk, of positive symplectic area and with boundary projecting to a simple cover of the unit circle. Let $\gamma: S^1\to L_0$ be a parametrization of this boundary loop such that $\pi(\gamma(t)) = e^{it}$. The imaginary part of $\left(e^{-i(nt+\pi)/2}\, \Omega\right)|_{L_0}$ vanishes, hence the Maslov index of $u$ is $n$ (see \cite{SeidelThomas} for similar computations). To compute the Maslov class of $L_C$ in the statement of the lemma, we observe that $C$ is Lagrangian-isotopic to a connected sum $C_{-1} \# C_{1}$, where $C_{\pm 1}$ is a small simple loop around $\pm 1$ (this is inspired by \cite{SeidelLES}). By picking a local trivialization of the Lefschetz fibration $\pi_n$ near $\pm1$, we see that the Maslov class of $L_{C_{\pm 1}}$ can be identified with that of $L_0$ above.
This implies that one can think of a disk in $X_n$ with positive symplectic area, and with boundary in $L_C$ projecting to a simple cover of $C$, as a connected sum of two disks as in the previous paragraph. Hence, the Maslov index of the disk with boundary in $L_C$ is $2(n-1)$, as wanted. The monotonicity of $L_C$ and the value of $\tau_C$ now follow from Lemma \ref{L:sigma area}. \end{proof} Recall that, given a monotone Lagrangian $L$ in a symplectic manifold $(M,\omega)$ and a choice of basis $h_1,\ldots,h_m$ for the free part of $H_1(L;\mathbb{Z})$, we can define the {\em disk potential} $W_{L} : (\mathbb{C}^*)^m \to \mathbb{C}$ as \begin{equation} \label{disk potential} W_L(x_1,\ldots,x_m) = \sum_{u\in \mathcal{M}} \pm x^{\partial u}, \end{equation} where $\mathcal{M}$ is the moduli space of $J$-holomorphic maps $u:(D^2,\partial D^2)\to (M,L)$ of Maslov index 2, such that $u(1)=p$, for a generic choice of point $p\in L$ and compatible almost complex structure $J$ on $(M,\omega)$. The sign associated to $u$ depends on the spin structure of $L$. If we write $\langle \partial u,h_i\rangle$ for the $h_i$-component of $[\partial u]$ in the free part of $H_1(L;\mathbb{Z})$, then $x^{\partial u}$ stands for the product $x_1^{\langle \partial u,h_1\rangle} \ldots x_m^{\langle \partial u,h_m\rangle}$. The disk potential does not depend on the choices of generic $p$ and $J$. \begin{lemma} \label{L:disk potential L_C} For $n=2$, the disk potential of $L_C$ is $W_{L_C} = x_1(1+x_2)^2$, in a basis $h_1,h_2 \in H_1(L_C;\mathbb{Z})\cong \mathbb{Z}^2$ where $h_1$ is a loop projecting to $C$ in degree 1 and $h_2$ is a fiber of the projection $\pi_2|_{L_C}$. The disk potential is zero if $n>2$. \end{lemma} \begin{proof} For $n=2$, the disk potential is computed in \cite{LekiliMaydanskiy}*{Lemma 2.19}, using the degeneration argument from \cite{SeidelLES}.
In the proof of \cite{AurouxAnticanonical}*{Corollary 5.13}, the relevant Maslov index 2 disks are also computed explicitly, using the integrable complex structure in the target. The case $n>2$ follows from Lemma \ref{Maslov L_C}. \end{proof} Fix $\tau>0$ and a smooth embedded loop $C_\tau\subset \mathbb{C}\setminus \{-1,1\}$ that winds once around $-1$ and $1$ and bounds $\sigma$-area $2(n-1)\tau$. Denote by $L_\tau$, or $(S^1\times S^{n-1})_\tau$, the corresponding Lagrangian $L_{C_\tau}$. By Lemma \ref{Maslov L_C}, $L_\tau$ is monotone with monotonicity constant $\tau$. Observe that we can exhaust $\mathbb{C}\setminus [-1,1]$ by a collection of disjoint simple curves $C$, such that the corresponding monotonicity constants $\tau_{C}$ cover $\mathbb{R}_{>0}$ without repetitions. The matching sphere over the interval $[-1,1]\subset \mathbb{C}$ is the zero section $S^n\subset T^*S^n$. Assume that $C_\tau$ is the curve $C$ in Figure \ref{F_w_fig}, and denote by $F_i$ the lifts of the paths $\eta_i$ in the same figure. Similarly, denote by $F'$ the lift of the path $\eta'$. Recall that two Lagrangian submanifolds $L,L' \subset (X,\omega)$ {\em intersect cleanly} if $K:=L\cap L'$ is a manifold and for every $x\in K$ we have $T_x K = T_xL \, \cap \, T_x L' \subset T_x X$. \begin{lemma} \label{clean inters1} For every $i\geq 0$, $F_i$ and $L_\tau$ intersect cleanly. For every $i,j\geq 0$, $F_i$ and $F_{j}$ intersect cleanly. Also, all these Lagrangians intersect $F'$ cleanly. \end{lemma} \begin{proof} This follows from the fact that the Lagrangians project under the map $\pi_n:X_n\to \mathbb{C}$ to curves that intersect transversely. \end{proof} \subsection{More Lagrangians in $T^*S^3$} \label{SS:Lagrs in T*S3} It will be useful to also consider an alternative description of the complex affine quadric 3-fold, which is symplectomorphic to $T^*S^3$. We borrow some notation from \cite{ChanPomerleanoUeda1}. 
Write $$ X = \{(z,u_1,v_1,u_2,v_2) \in \mathbb{C}^5 \,|\, u_1 v_1 = z +1, u_2 v_2 = z - 1 \}. $$ Consider the Lefschetz fibrations \begin{align*} \pi^i: \mathbb{C}^2 &\to \mathbb{C} \\ (u_i,v_i) & \mapsto u_i v_i + (-1)^i, \end{align*} where $i\in \{1,2\}$. The map $\pi^i$ has a unique critical value at $(-1)^i$ and, given $p\in \mathbb{C}\setminus \{(-1)^i\}$, the vanishing circle in $(\pi^i)^{-1}(p)$ of a vanishing path through $p$ is $$ V_{i,p}:= \{(u_i,v_i)\in \mathbb{C}^2 \, | \, \pi^i(u_i, v_i) = p, |u_i|=|v_i| \}. $$ Write also $V_{i,(-1)^i} = \{(0,0)\}$. For more details, see \cite{ChanPomerleanoUeda1} and \cite{SeidelBook}*{Example 16.5}. The quadric $X$ is the fiber product of these two fibrations: \begin{displaymath} \xymatrix{ & X \ar[ld]_{f_1} \ar[rd]^{f_2} \ar@{-->}[dd]^z \\ \mathbb{C}^2 \ar[rd]_{\pi^1} & & \mathbb{C}^2 \ar[ld]^{\pi^2} \\ & \mathbb{C} } \end{displaymath} The map $z:X\to \mathbb{C}$ is not a Lefschetz fibration, but it can be thought of as a Morse--Bott analogue, with critical values $\pm 1$ and such that the critical locus over $\pm 1$ is a copy of $\mathbb{C}^*$. We will consider the following analogues of the Lagrangians $L_C$ and $F_\eta$ from the previous section. It will again be useful to have Figure \ref{F_w_fig} in mind. \begin{definition}\label{D:Lagr N} Given a curve $C\subset \mathbb{C}\setminus \{\pm 1\}$ that is the image of an embedding of $S^1$, let $$ T_C := \bigcup_{z\in C} V_{1,z}\times V_{2,z}. $$ Given an embedding $\eta: [0,\infty) \to \mathbb{C}$ such that \begin{itemize} \item $\eta(0) = 1$, \item $\eta\big((0,\infty)\big) \subset \mathbb{C}\setminus \{\pm 1\}$ and \item $\eta(t) = at+b$ for some $a\in \mathbb{C}^*$, $b\in \mathbb{C}$ and $t$ large enough, \end{itemize} let \begin{align*} N_\eta &:= \bigcup_{t\geq 0} V_{1,\eta(t)}\times V_{2,\eta(t)}. \end{align*} \end{definition} Several arguments in the previous section can be adapted to this setting. 
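Note that eliminating $z$ from the defining equations identifies $X$ with an affine quadric: the projection $(z,u_1,v_1,u_2,v_2)\mapsto (u_1,v_1,u_2,v_2)$ maps $X$ isomorphically onto
$$
\{(u_1,v_1,u_2,v_2)\in\mathbb{C}^4 \,|\, u_1 v_1 - u_2 v_2 = 2\},
$$
with inverse given by $z = u_1 v_1 - 1$. The complex-linear change of coordinates $u_1 = w_0 + i w_1$, $v_1 = w_0 - i w_1$, $u_2 = w_2 + i w_3$, $v_2 = -(w_2 - i w_3)$ turns this equation into $w_0^2 + w_1^2 + w_2^2 + w_3^2 = 2$, so that, after rescaling the coordinates, $X$ is identified with the quadric $X_3$ of Section \ref{SS:Lagrs in T*Sn}.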
This time, if $C$ encloses $\{-1,1\}$, then $T_C$ is diffeomorphic to a torus $T^3$ and we have $$ H_2(T^*S^3,T_C;\mathbb{Z}) \cong H_1(T_C;\mathbb{Z}) \cong \mathbb{Z}^3. $$ We can pick a basis $\alpha_1$, $\alpha_2$, $\beta$ for this relative homology group, such that $\alpha_1$ is a fiber product of a Lefschetz thimble for $\pi^1$ by a point, and $\alpha_2$ is a fiber product of a point by a Lefschetz thimble for $\pi^2$. We choose $\beta$ so that its boundary projects to a degree 1 cover of $C$. Again, the fact that the $\alpha_i$ are represented by Lagrangian disks implies that they have vanishing area and Maslov index. We are left with determining the area and index of $\beta$. As before, there is a positive measure $\sigma'$ on $\mathbb{C}$, absolutely continuous with respect to the Lebesgue measure and smooth in $\mathbb{C}\setminus \{\pm 1\}$, such that the following result holds. \begin{lemma} $T_C$ and $N_\eta$ are Lagrangian submanifolds of $X$. The Lagrangian $T_C$ is diffeomorphic to $T^3$. Given $\beta$ as above, its $\omega$-area is $\int_{\Omega_C} \sigma'$, where $\Omega_C \subset \mathbb{C}$ is the region bounded by $C$, and its Maslov index is 2. Therefore, $T_C$ is monotone with monotonicity constant $\tau_C = \frac{1}{2}\int_{\Omega_C} \sigma'$. The $N_\eta$ are Hamiltonian-isotopic to the conormal Lagrangian of the unknot in $S^3$. In particular, they are diffeomorphic to $S^1\times \mathbb{R}^2$ and are exact. \end{lemma} \begin{proof} The proof uses arguments similar to the ones in the previous section, so we omit them. See \cite{ChanPomerleanoUeda1} for the proofs of some of these statements. \end{proof} We can also write the disk potential of $T_C$.
\begin{lemma} \label{L:disk potential T_C} The disk potential of $T_C$ is $$ W_{T_C} = x_1 (1 + x_2)(1 + x_3), $$ in a basis $h_1,h_2,h_3\in H_1(T_C;\mathbb{Z})\cong \mathbb{Z}^3$ such that $h_1$ is a loop projecting to $C$ in degree 1, $h_2 = V_{1,z}\times \{p_2\}$ for some $z\in C$ and $p_2\in V_{2,z}$, and $h_3 = \{p_1\}\times V_{2,z}$ for some $z\in C$ and $p_1\in V_{1,z}$. \end{lemma} \begin{proof} This is computed in \cite{ChanPomerleanoUeda1}. \end{proof} We can again exhaust $\mathbb{C}\setminus [-1,1]$ by disjoint simple closed curves $C$, such that the collection of monotonicity constants $\tau_C$ of the $T_C$ covers $\mathbb{R}_{>0}$ injectively. Fix $\tau>0$ and denote by $T^3_\tau$ the Lagrangian torus with monotonicity $\tau$ in this family. Assume that $T^3_\tau$ is the lift of the curve $C$ in Figure \ref{F_w_fig}. Denote also by $N_i$, resp.~$N'$ the lifts of the paths $\eta_i$, resp.~$\eta'$, in Figure \ref{F_w_fig}. \begin{lemma} For every $i\geq 0$, $N_i$ and $T^3_\tau$ intersect cleanly. For every $i,j\geq 0$, $N_i$ and $N_{j}$ intersect cleanly. All these Lagrangians intersect $N'$ cleanly. \end{lemma} \begin{proof} As in Lemma \ref{clean inters1}, this result follows from the fact that the Lagrangians project to curves in the plane that intersect transversely. \end{proof} \section{Wrapped Fukaya categories} \label{S:Fukaya categories} The wrapped Fukaya category of a Liouville domain $M$ was introduced in \cite{AbouzaidSeidelViterbo}. In the original definition, the objects are exact Lagrangians in the completed Liouville manifold $\widehat M$. The Lagrangians are either compact or agree outside of a compact set with the product of $\mathbb{R}$ with a Legendrian submanifold of the contact manifold $\partial M$. We will consider various versions of the wrapped Fukaya category, possibly allowing for closed monotone Lagrangians, as in \cite{RitterSmith}. 
For Lagrangians intersecting cleanly, we will use a Morse--Bott formalism similar to that of \cite{SeidelAbstractFlux} to compute the associated $A_\infty$-maps $\mu^k$. \subsection{Coefficients} Some of the Floer cohomology groups we will study are defined over $\mathbb{Z}$, and some over a {\em Novikov field}. Given a commutative ring $R$, which for us will always be either $\mathbb{Z}$ or $\mathbb{C}$, write $$ \mathbb{K}_R := \left\{ \sum_{i=0}^\infty a_i T^{\lambda_i} \,| \, a_i \in R , \lambda_i \in \mathbb{R}, \lambda_i< \lambda_{i+1}, \lim_{i\to \infty}\lambda_i = \infty \right\}. $$ We will be mostly interested in $\mathbb{K}_\mathbb{C}$, which will be denoted simply by $\mathbb{K}$. We can replace $\mathbb{C}$ with any algebraically closed field of characteristic zero, so that the Novikov field is algebraically closed. See Section \ref{S:Generation modules} for more on this point. There is a {\em valuation} map \begin{align*} \val \colon \mathbb{K}_R &\to (-\infty,\infty] \\ \sum_{i=0}^\infty a_i T^{\lambda_i} &\mapsto \min \{\lambda_i \,|\, a_i \neq 0\} \end{align*} where $\val(0) = \infty$. Say that $\alpha \in \mathbb{K}_R$ is {\em unitary} if $\val(\alpha) = 0$. Denote by $U_{\mathbb{K}_R} := \left\{ \alpha \in \mathbb{K}_R \,|\, \val(\alpha) = 0\right\}$ the group of unitary elements in $\mathbb{K}_R$, by $\mathbb{K}_{R,0} := \left\{ \alpha \in \mathbb{K}_R \,|\, \val(\alpha) \geq 0\right\}$ the {\em Novikov ring} and by $\mathbb{K}_{R,+} := \left\{ \alpha \in \mathbb{K}_R \,|\, \val(\alpha) > 0\right\}$ the maximal ideal in $\mathbb{K}_{R,0}$. \subsection{Morse--Bott Floer cohomology for clean intersections} We will use a Morse--Bott version of the Fukaya category, where Lagrangians are allowed to intersect cleanly, as in \cite{SeidelAbstractFlux}*{Section 3.2}. 
For more details on a construction of the Fukaya category with a Morse--Bott definition of the $A_\infty$-algebra of endomorphisms of a Lagrangian submanifold, see \cite{SheridanPOP}*{Section 4}. These references assume that the Lagrangians are exact, which precludes disk bubbling. Lemma \ref{L:vanishing potential} below guarantees that if $(L,\xi)$ is a Lagrangian with a rank 1 unitary local system giving a non-trivial object in the Fukaya category, then $\xi$ corresponds to a zero of the disk potential of $L$. This is useful when considering 2-parameter families of pearly trees of holomorphic disks (to prove the $A_\infty$-relations, for instance), since the vanishing of the disk potentials implies the cancellation of configurations with disk bubbles. Therefore, we can assume for many purposes that the relevant Lagrangians bound no holomorphic disks. A detailed Morse--Bott construction of Floer cohomology groups of cleanly intersecting Lagrangians is given in \cite{SchmaeschkeClean}, after earlier work in \cite{FOOO} and \cite{FrauenfelderThesis}. Let us briefly define the relevant Floer complexes. Let $L_0,L_1$ be two Lagrangians such that each $L_i$ is equipped with: \begin{itemize} \item an orientation and a spin structure; \item a unitary local system $\xi_i$ on a trivial $\mathbb{K}_R$-bundle $E_i = \oplus_k E_{i,k}$, where the direct sum is finite and the summand $E_{i,k}$ of grading $k$ is a finite rank trivial vector bundle over $L_i$. \end{itemize} \begin{remark} In this article, the zero section in $T^*S^n$, with $n>1$ even, is the only Lagrangian that we will equip with local systems of rank greater than 1. In that case, the rank will be 2 and the holonomy will be trivial since the zero section is simply connected. \end{remark} \begin{remark} The choices of spin structures on the Lagrangians are necessary to orient moduli spaces of holomorphic curves.
Nevertheless, in our computations we will not be very careful in specifying spin structures on the Lagrangians. This is because the effect of changing the spin structure on a Lagrangian is a change in signs associated to holomorphic curves, and this change can be compensated by the choice of a different local system $\xi$ on the Lagrangian. \end{remark} The categories of exact Lagrangians we will consider are $\mathbb{Z}$-graded, so we will need additional choices of gradings for such $L_i$ (as in \cite{SeidelBook}*{Section 12a}). The categories of monotone Lagrangians will only be $\mathbb{Z}/2\mathbb{Z}$-graded, so we will not need gradings in that case. Denote $\mathcal L_i\coloneqq (L_i,\xi_i)$, where $\xi_i$ is a local system on the $\mathbb{K}_R$-bundle $E_i$. Assume that the Lagrangians intersect cleanly and let $f: L_0 \cap L_1 \to \mathbb{R}$ be a Morse function. Define the cochain complex $$ CF^k(\mathcal L_0,\mathcal L_1) := \bigoplus_{C\subset L_0\cap L_1} \qquad \bigoplus_{\mathclap{\substack{p\in \crit(f|_C)}}} \, {\operatorname{Hom}}^{k-\deg(p)}_{\mathbb{K}_R}\left((E_0)_p,(E_1)_p\right) \otimes_{\mathbb{Z}} \mathfrak o $$ where the $C\subset L_0\cap L_1$ are connected components of the intersection, $\mathfrak o$ is the orientation line (a rank 1 local system over $\mathbb{Z}$ depending on the spin structures of the $L_i$), and ${\operatorname{Hom}}^{k-\deg(p)}_{\mathbb{K}_R}$ denotes $\mathbb{K}_R$-linear maps of degree $k-\deg(p)$. Here, the Floer degree associated to the critical point $p$ is $\deg(p) = \dim(C)- \ind(p) + \deg(C)$, where $\ind(p)$ is the Morse index of $p$ as a critical point of $f|_C$ and $\deg(C)$ is an absolute Maslov index, which depends on the gradings of the $L_i$. The operations $\mu^k$ are defined on tensor products of these chain complexes, via counts of {\em pearly trees}. We give a very brief description of these, referring the reader to \cite{SeidelAbstractFlux}*{Section 3.2} for more details. 
Given a collection $\mathcal L_0, \ldots, \mathcal L_k$ of Lagrangians with local systems, a pearly tree contributing to $$ \mu^k: CF^*(\mathcal L_{k-1},\mathcal L_k) \otimes \ldots \otimes CF^*(\mathcal L_0,\mathcal L_1) \to CF^*(\mathcal L_0,\mathcal L_k) $$ is a collection of perturbed pseudoholomorphic disks (with respect to auxiliary almost complex structures and perturbing 1-forms) with boundary punctures and Lagrangian boundary conditions, connected by gradient flow lines of auxiliary Morse functions and metrics on the clean intersections of the $L_i$. This collection of disks and flow lines can be concatenated into a continuous map from a disk with $k+1$ boundary punctures to the symplectic manifold, with boundary components of the disk mapping to the Lagrangians $L_0, \ldots, L_k$, see Figure \ref{pearly_tree_fig}. The contribution of a rigid configuration of disks and flow lines to $\mu^k$ is determined by the areas of the pseudoholomorphic disks (which are encoded in the exponents of the variable $T$ in the Novikov field), by signs specified by the spin structures on the $L_i$, and by parallel transport with respect to the local systems $\xi_i$ on the $E_i$ along the boundary components of the concatenated disk (with the input elements of ${\operatorname{Hom}}_{\mathbb{K}_R}(E_i,E_{i+1})$ applied at the boundary punctures). The $\mu^k$ satisfy the $A_\infty$-relations, which can be written in abbreviated form as $\mu\circ\mu=0$. \begin{figure} \begin{center} \def\svgwidth{0.4\textwidth} \input{pearly_tree.pdf_tex} \end{center} \caption{A pearly tree contributing to $\mu^4$} \label{pearly_tree_fig} \end{figure} We will also want to consider Fukaya categories containing additional objects.
A {\em bounding cochain} on an object $\mathcal L$ in a $\mathbb{Z}/2\mathbb{Z}$-graded Fukaya category is $b\in CF^{\odd}(\mathcal L,\mathcal L)$ satisfying the {\em Maurer--Cartan equation} \begin{equation} \label{MC} \sum_{k=1}^\infty \mu^k(b,\ldots,b) = 0, \end{equation} see \cite{FOOO} (to ensure convergence, we have to assume that $b$ has strictly positive valuation if it corresponds geometrically to a Morse chain of degree $1$). We can enlarge our category by allowing objects of the form $(\mathcal L,b)$, for such $b$. The object $\mathcal L$ can be identified with $(\mathcal L,0)$. Given objects $(\mathcal L_0,b_0),\ldots,(\mathcal L_k,b_k)$ in the enlarged category, the $A_\infty$-maps $$ \hat\mu^k : CF^*(\mathcal L_{k-1},\mathcal L_{k})\otimes \ldots \otimes CF^*(\mathcal L_0,\mathcal L_{1}) \to CF^*(\mathcal L_0,\mathcal L_{k}) $$ are given by $$ \hat\mu^k(x_k,\ldots,x_1) := \sum_{l_0,\ldots,l_k \ge 0} \mu^{(k+\sum_i l_i)}(\underbrace{b_k,\ldots,b_k}_{l_k},x_k,b_{k-1},\ldots,b_1,x_1,\underbrace{b_0,\ldots,b_0}_{l_0}). $$ The fact that the $b_i$ satisfy the Maurer--Cartan equation \eqref{MC} implies the $A_\infty$-equations $\hat\mu \circ \hat \mu = 0$. Since $\hat \mu^k$ agrees with $\mu^k$ when all $b_i=0$, we will continue to write $\mu^k$ instead of $\hat\mu^k$. \subsection{Wrapped Floer cohomology} We will use a model for wrapped Floer cohomology from \cite{AbouzaidSeidelFuture}, which is presented in \cite{GPS1} and \cite{SylvanFunctors}. Let $L_0$ be a non-compact Lagrangian, which in this paper will be either a cotangent fiber $F$ or the conormal Lagrangian $N$ of the unknot.
We pick a family $L_i$ of Lagrangians that are lifts of paths $\eta_i$ in the base of the Lefschetz fibration $\pi_n$ from Section \ref{SS:Lagrs in T*Sn} (in the case of $F$), or in the base of the fiber product of Lefschetz fibrations $\pi^i$ from Section \ref{SS:Lagrs in T*S3} (in the case of $N$), where the path $\eta_i$ wraps $i$ times around the two critical values, see Figure \ref{F_w_fig}. Then, given another Lagrangian $L'$, we have $$ HW^*(L_0,L') := \lim_{i\to \infty} HF^*(L_i,L'), $$ with the limit taken with respect to the continuation maps relating $L_i$ and $L_{i+1}$. For the equivalence of this model with the usual definitions involving fast growing Hamiltonians, see \cite{GPS1}*{Lemma 3.37} and \cite{SylvanFunctors}*{Proposition 2.6}. In these references, the wrapped Fukaya category is defined more precisely by localizing the Fukaya category on the continuation maps that were just mentioned. We will combine this approach to wrapped Floer cohomology with the definition of Morse--Bott Floer cohomology above, where the Lagrangians intersect cleanly and are possibly equipped with local systems and bounding cochains. \subsection{Wrapped Fukaya categories} \label{S:W} We will consider several versions of the Fukaya $A_\infty$-category of $T^*S^n$. Recall that $R$ is either $\mathbb{Z}$ or $\mathbb{C}$. \begin{itemize} \item $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{Z})$ is a category whose objects are either the $F_\eta$ from Definition \ref{D:Lagr F} or compact oriented exact Lagrangians. When $n=3$, we also include the objects $N_\eta$ from Definition \ref{D:Lagr N}. Objects are equipped with $\mathbb{Z}$-gradings and spin structures. Morphism spaces are wrapped Floer cochain complexes with coefficients in $\mathbb{Z}$. The differential and higher $A_\infty$-operations count rigid pearly trees, without keeping track of areas (which can be thought of as setting $T=1$ in the Novikov field $\mathbb{K}_\mathbb{Z}$). 
In \cite{AbouzaidCotangentFiber}, it is shown that every $F_\eta$ (which is Hamiltonian isotopic to a cotangent fiber, by Lemma \ref{L:L_C and F_eta}) generates $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{Z})$. \item $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{K}_R)$ has the same objects as $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{Z})$. The difference is that the morphism spaces are now wrapped Floer cochain complexes {\em with coefficients in $\mathbb{K}_R$}, to keep track of the symplectic areas of the disks in the pearly trees that contribute to the $A_\infty$ operations. \item $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}(T^*S^n;\mathbb{K}_R)$ is obtained from $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{K}_R)$ by collapsing the $\mathbb{Z}$-gradings to $\mathbb{Z}/2\mathbb{Z}$-gradings. If $n$ is odd, allow also objects of the form $(S^n,b_\alpha)$, where $S^n$ is the zero section and $b_\alpha=\alpha [pt]$ is a bounding cochain with $\alpha \in \mathbb{K}_{R,0}$ and $[pt] \in H^n(S^n;\mathbb{K}_R)$. See Remark \ref{R:alpha in K_0} below for why we impose $\alpha\in \mathbb{K}_{R,0}$. We have implicitly chosen a perfect Morse function on $S^n$, and $[pt]$ is given by the minimum of that function (the maximum yields the unit in the $A_\infty$-algebra of $S^n$). Since $S^n$ bounds no disks, it is clear that all the summands in \eqref{MC} vanish for $b=b_\alpha$. If $n$ is even, we want to allow instead objects corresponding to bounding cochains in $S^n\oplus S^n[1]$ (sum and shift of objects is allowed in the additive enlargement of the Fukaya category). 
We implement this by equipping $S^n$ with the trivial graded $\mathbb{K}_R$-bundle $E:=\mathbb{K}_R\oplus \mathbb{K}_R[1]$, and bounding cochains $b_{\alpha,\beta}\in H^{\odd}(S^n;\operatorname{End}(E))$ of the form $$ b_{\alpha,\beta} \coloneqq \begin{pmatrix} 0 & \beta \\ \alpha & 0 \end{pmatrix}_{[pt]}, $$ where $\alpha,\beta\in \mathbb{K}_{R,0}$ and the matrix represents an endomorphism of the fiber of $E$ at the minimum of the auxiliary perfect Morse function on $S^n$. \item $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$ is an extension of $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}(T^*S^n;\mathbb{K}_R)$, allowing all closed monotone Lagrangians. The objects are equipped with orientations and spin structures, and are $\mathbb{Z}/2\mathbb{Z}$-graded. We also equip monotone Lagrangians with unitary rank 1 local systems over $\mathbb{K}_R$. The construction of the monotone wrapped Fukaya category is given in \cite{RitterSmith}. Their results also imply that the monotone wrapped Fukaya category of a cotangent bundle is generated by a cotangent fiber. See also \cite{SheridanFano} for a definition of the monotone Fukaya category in a closed setting. \item $\mathcal{F}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$ is the full subcategory of $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$ containing only those objects whose underlying Lagrangians are closed. \end{itemize} It is an important fact that Hamiltonian isotopies give rise to isomorphic objects in all these categories; in the presence of bounding cochains, this means that if $b$ is a bounding cochain on $L$, and $L'$ is Hamiltonian isotopic to $L$, then there is a bounding cochain $b'$ on $L'$ so that the two corresponding objects of the Fukaya category are isomorphic.
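As a consistency check on the bounding cochains just introduced (a sketch, using only that $S^n$ bounds no nonconstant holomorphic disks, so that $\mu^2$ reduces to the matrix-valued cup product on Morse cochains), the quadratic Maurer--Cartan term for $b_{\alpha,\beta}$ vanishes:

```latex
% Quadratic Maurer--Cartan term for b_{alpha,beta}; a sketch assuming
% mu^2 reduces to the matrix-valued cup product (no disks on S^n):
\mu^2(b_{\alpha,\beta},b_{\alpha,\beta})
  = \pm \begin{pmatrix} 0 & \beta \\ \alpha & 0 \end{pmatrix}^{\!2}
    \bigl([pt]\cup[pt]\bigr)
  = \pm\,\alpha\beta
    \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
    \bigl([pt]\cup[pt]\bigr)
  = 0,
```

since $[pt]\cup[pt]$ lies in $H^{2n}(S^n;\mathbb{K}_R)=0$. The term $\mu^1(b_{\alpha,\beta})$ vanishes because $[pt]$ is a cocycle, and the terms with $k\geq 3$ inputs vanish for the same degree reason, their output degree $kn+2-k$ exceeding $n$.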
\begin{remark} \label{R:alpha in K_0} If we equip Lagrangians with bounding cochains valued in the maximal ideal $\mathbb{K}_{R,+}$ of $\mathbb{K}_{R,0}$, then we are guaranteed convergence of all the $A_\infty$-operations deformed by such bounding cochains. In our case, since the degree of $[pt]\in H^*(S^n;\mathbb{Z})$ is $n>1$, we could in fact allow bounding cochains $\alpha [pt]$ for arbitrary $\alpha\in \mathbb{K}_R$ in the category of exact Lagrangians $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}(T^*S^n;\mathbb{K}_R)$. Nevertheless, we would run into convergence issues when taking morphisms with monotone Lagrangians in $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$, which is why we restrict to bounding cochains with coefficients in $\mathbb{K}_{R,0}$. With minor modifications to our arguments, we could also have equipped all objects in $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$ with finite rank unitary local systems and suitable bounding cochains. \end{remark} Observe that we can define several functors between these categories: \begin{itemize} \item $\mathcal G_1 \colon \mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{Z}) \to \mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{K}_R)$ is the identity on objects. Fix a primitive $f_L$ for every exact Lagrangian $L$. Given exact Lagrangians $L_0,L_1$ in $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{Z})$, map $x\in CW^*(L_0,L_1;\mathbb{Z})$ to $T^{f_1(x)-f_0(x)}x \in CW^*(L_0,L_1;\mathbb{K}_R)$, where $f_i :=f_{L_i}$. If $u$ is a pearly tree contributing to $\mu^k$ in $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{Z})$, then the contribution of $u$ to $\mu^k$ in $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{K}_R)$ is weighted by the factor $T^{\int_{D^2} u^*\omega}$, where the integral is over all the holomorphic disks in the pearly tree. 
The functor $\mathcal G_1$ depends on the choices of primitives $f_L$, but different choices yield isomorphic functors (we could eliminate this choice by incorporating the primitives in the definition of objects of the exact Fukaya category). \item $\mathcal G_2 \colon \mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{K}_R) \to \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}(T^*S^n;\mathbb{K}_R)$ is given by collapsing the $\mathbb{Z}$-grading to a $\mathbb{Z}/2\mathbb{Z}$-grading, followed by inclusion of objects. \item $\mathcal G_3 \colon \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}(T^*S^n;\mathbb{K}_R) \to \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$ is given by inclusion of objects, as are $\mathcal G_4 \colon \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_\mathbb{Z}) \to \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K})$ (recall that $\mathbb{K}=\mathbb{K}_\mathbb{C}$) and $\mathcal G_5 \colon \mathcal{F}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_\mathbb{Z}) \to \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K})$. \end{itemize} \begin{remark} Let $L$ be a monotone Lagrangian. A unitary local system $\xi$ on the trivial $\mathbb{K}_R$-bundle over $L$ can be specified by a homomorphism $$ \hol_\xi : H_1(L;\mathbb{Z}) \to U_{\mathbb{K}_R}. $$ If, in the definition \eqref{disk potential} of the disk potential $W_L$, we replace $x^{\partial u}$ with $\hol_\xi(\partial u)$, then we get an element of $\mathbb{K}_R$ that we denote by $W_L(\xi)$. When defining the monotone category $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$, one can only take morphisms between objects $(L_1,\xi_1)$ and $(L_2,\xi_2)$ if $W_{L_1}(\xi_1)=W_{L_2}(\xi_2)$, see \cite{OhMonotoneI}. This does not impose an additional constraint in our case, due to the following result. It can be interpreted as saying that the monotone Fukaya category of $T^*S^n$ is unobstructed. 
\end{remark} \begin{lemma} \label{L:vanishing potential} Let $L\subset T^*S^n$ be a compact monotone Lagrangian with a unitary local system $\xi$ on a trivial line bundle. Write $\mathcal L = (L,\xi)$. If $HF^*(\mathcal L,\mathcal L;\mathbb{K}_R) \neq 0$, then $W_L(\xi)=0$. \end{lemma} This follows from \cite{RitterSmith}*{Theorem 3.2}, according to which the disk potentials of monotone Lagrangians in $(M,\omega)$ with nontrivial Floer cohomology are eigenvalues of quantum multiplication by $c_1$ on the quantum cohomology of $(M,\omega)$. Since $c_1$ of the total space of $T^*S^n$ vanishes, the lemma follows. \begin{remark} \label{R:crit pts of potentials} Let $L^n$ be a monotone Lagrangian torus with disk potential $W_L$. The critical points of $W_L$ in $(U_{\mathbb{K}_R})^n$ correspond to the rank 1 unitary local systems $\xi$ on the trivial $\mathbb{K}_R$-line bundle over $L$ for which $HF^*(\mathcal L,\mathcal L;\mathbb{K}_R)\neq 0$, where $\mathcal L = (L,\xi)$, see \cite{SheridanFano}*{Proposition 4.2}. This is to say that $\mathcal L$ is non-trivial in $\mathcal F^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$. Recall from Lemma \ref{L:disk potential L_C} that Lagrangian tori $L_C\subset T^*S^2$ have disk potential $W_1=x_1(1+x_2)^2$. The critical locus of this potential is given by the condition $x_2=-1$. Recall also that Lemma \ref{L:disk potential T_C} says that the disk potential of a Lagrangian torus $T_C\subset T^*S^3$ is $W_2=x_1(1+x_2)(1+x_3)$, whose critical locus is given by $x_2=x_3=-1$. Observe that, for both tori $L_C$ and $T_C$, the disk potentials vanish on their critical points, which is compatible with Lemma \ref{L:vanishing potential}. \end{remark} \subsection{Yoneda functors} \label{SS:Yoneda} In this section we will be working over the field $\mathbb{K}_\mathbb{C} = \mathbb{K}$, since we will use some formality results from Section \ref{S:Formality algebra}. 
Let $\mathcal{A}_{\mathbb{K}} := CW^*(F,F;\mathbb{K})$ be the $A_\infty$-algebra of a cotangent fiber in $T^*S^n$, with $n\geq 2$, and let $\mmod^{A_\infty}(\mathcal{A}_{\mathbb{K}})$ be the differential $\mathbb{Z}/2\mathbb{Z}$-graded category of right $A_\infty$-modules over $\mathcal{A}_{\mathbb{K}}$. Given two objects $\mathcal{M}$ and $\mathcal{M}'$ in $\mmod^{A_\infty}(\mathcal{A}_{\mathbb{K}})$, the morphism space $\hom_{\mmod^{A_\infty}(\mathcal{A}_{\mathbb{K}})}(\mathcal{M},\mathcal{M}')$ is a chain complex computing $\Ext_{\mathcal{A}_{\mathbb{K}}}^*(\mathcal{M},\mathcal{M}')$, see \cite{SeidelBook}*{Remark 2.15}. There is a Yoneda functor \begin{align*} \mathcal{Y} : \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}) &\to \mmod^{A_\infty}(\mathcal{A}_{\mathbb{K}}) \\ \mathcal L & \mapsto CW^*(F,\mathcal L) \end{align*} which restricts to a functor $$ \mathcal{Y}_c : \mathcal{F}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}) \to \mmod^{A_\infty}_{pr}(\mathcal{A}_{\mathbb{K}}), $$ where $\mmod^{A_\infty}_{pr}(\mathcal{A}_{\mathbb{K}}) \subset \mmod^{A_\infty}(\mathcal{A}_{\mathbb{K}})$ is the subcategory of {\em proper modules} $\mathcal{M}$, such that $H^*(\mathcal{M})$ is finite dimensional over ${\mathbb{K}}$ (the subscript in $\mathcal{Y}_c$ stands for `compact'). Now, let $A_{\mathbb{K}} := H^*(\mathcal{A}_{\mathbb{K}})$ be the cohomology algebra of $\mathcal{A}_{\mathbb{K}}$. Let $\mmod(A_{\mathbb{K}})$ be the $\mathbb{Z}/2\mathbb{Z}$-graded category of right $A_{\mathbb{K}}$-modules, such that morphism spaces are $\Ext_{A_{\mathbb{K}}}^*$ groups (respecting the $\mathbb{Z}/2\mathbb{Z}$-gradings). 
There is a functor \begin{align*} H:\mmod^{A_\infty}(\mathcal{A}_{\mathbb{K}}) &\to \mmod(A_{\mathbb{K}}) \\ \mathcal{M} & \mapsto H^*(\mathcal{M}) \end{align*} which restricts to $$ H_c:\mmod^{A_\infty}_{pr}(\mathcal{A}_{\mathbb{K}}) \to \mmod_{pr}(A_{\mathbb{K}}) $$ where $\mmod_{pr}(A_{\mathbb{K}}) \subset \mmod(A_{\mathbb{K}})$ is the subcategory of finite dimensional $\mathbb{Z}/2\mathbb{Z}$-graded modules over $A_{\mathbb{K}}$. Proposition \ref{P:N split-generates} below implies that the functor $\mathcal{Y}$ (hence also $\mathcal{Y}_c$) is cohomologically full and faithful. According to Corollary \ref{C:mod(A) is formal} below, $H$ (hence also $H_c$) is a quasi-equivalence of categories. We conclude the following. \begin{proposition} \label{C:Yoneda ff} The composition \begin{align*} Y:=H\circ \mathcal{Y} : \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}) &\to \mmod(A_{\mathbb{K}}) \\ \mathcal L & \mapsto HW^*(F,\mathcal L) \end{align*} and its restriction $$ Y_c:=H_c\circ \mathcal{Y}_c : \mathcal{F}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}) \to \mmod_{pr}(A_{\mathbb{K}}) $$ are cohomologically full and faithful embeddings. \qed \end{proposition} \section{Floer cohomology computations} \label{S:HF computations} \subsection{The Lagrangians $F$ and $N$} Recall from Section \ref{S:construct Lagrangians} that the Lagrangian lifts $F_\eta\subset T^*S^n$ and $N_\eta\subset T^*S^3$ are Hamiltonian-isotopic to, respectively, a cotangent fiber (which we denote by $F$) and the conormal Lagrangian of an unknot in $S^3$ (which we denote by $N$). \begin{proposition} \label{P:N split-generates} The cotangent fiber $F$ generates $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{Z})$, $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{K}_R)$ and $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$.
When $n=3$, the Lagrangian $N$ split-generates $\mathcal{W}^\mathbb{Z}(T^*S^3;\mathbb{Z})$, $\mathcal{W}^\mathbb{Z}(T^*S^3;\mathbb{K}_R)$ and $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^3;\mathbb{K}_R)$. \end{proposition} \begin{proof} The fact that a cotangent fiber generates $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{Z})$ is proven in \cite{AbouzaidCotangentFiber}. The result follows for $\mathcal{W}^\mathbb{Z}(T^*S^n;\mathbb{K}_R)$. The fact that a cotangent fiber generates $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^n;\mathbb{K}_R)$ follows from \cite{RitterSmith}. The previous paragraph and Proposition \ref{HW(N)} below imply the result for $N$. \end{proof} Recall that $HW^*(F,F;\mathbb{Z}) \cong \mathbb{Z}[u]$, where $\deg(u)=1-n$, which follows from \cite{AbouzaidBasedLoops}. Denote this ring by $A_\mathbb{Z}$. Also denote by $F_0$ a cotangent fiber corresponding to a lift of a path through the critical value $1$ of $\pi_n$, and by $F'$ one that is a lift of a path through $-1$, see Figure \ref{F_w_fig}. Since $F_0$ and $F'$ are Hamiltonian-isotopic, we have \begin{equation} \label{HF(F,F')} HW^*(F_0,F';\mathbb{Z}) \cong A_\mathbb{Z}. \end{equation} On the other hand, $$ HW^*(F_0,F';\mathbb{Z}) \cong \lim_{i\to \infty} HF^*(F_i,F';\mathbb{Z}), $$ where the $F_i$ are lifts of the paths $\eta_i$ illustrated in Figure \ref{F_w_fig}. In our Morse--Bott model, the cochain complex for $CF^*(F_i,F';\mathbb{Z})$ is described, as a graded abelian group, as $$ \bigoplus_{k=0}^i H^{*+(1-n)(1+2k)}(S^{n-1};\mathbb{Z}). $$ Equation \eqref{HF(F,F')} and the fact that, in this Morse--Bott model, the continuation maps $$ CF^*(F_i,F';\mathbb{Z}) \to CF^*(F_{i+1},F';\mathbb{Z}) $$ are inclusions, imply that the differentials vanish on these chain complexes. In particular, we have the following.
\begin{lemma} \label{u in F} Up to a factor $\pm 1$, the unit $e$, resp.~the generator $u$, in $A_\mathbb{Z}$ is represented by the minimum, resp.~maximum, of the auxiliary Morse function on $F_0\cap F' \cong S^{n-1}$, thought of as a class in $HF^{0}(F_0,F';\mathbb{Z})$, resp.~$HF^{1-n}(F_0,F';\mathbb{Z})$. \end{lemma} We now consider the Lagrangian $N$. \begin{remark} In the following, we use the cohomological degree shift notation, where $[k]$ corresponds to a shift by $-k$. \end{remark} \begin{proposition}\label{HW(N)} The Lagrangian $N \subset T^*S^3$ is quasi-isomorphic to $F \oplus F[1]$ in $\mathcal{W}^\mathbb{Z}(T^*S^3;\mathbb{Z})$. In particular, $HW^*(N,N;\mathbb{Z})$ is isomorphic to the graded matrix algebra $$ B_\mathbb{Z}:=\begin{pmatrix} A_\mathbb{Z} & A_\mathbb{Z}[1] \\ A_\mathbb{Z}[-1] & A_\mathbb{Z} \end{pmatrix}. $$ Hence, $HW^*(N,N;\mathbb{K}_R)$ is isomorphic to $B_\mathbb{Z}\otimes_\mathbb{Z} \mathbb{K}_R$ for any commutative ring $R$. \end{proposition} \begin{proof} Recall the construction, in \cite{AbouzaidBasedLoops}, of a cohomologically fully faithful $A_\infty$-functor $$ \mathcal F : \mathcal W^\mathbb{Z}(T^*Q;\mathbb{Z}) \to \Tw(\mathcal P(Q)), $$ where the target is a category of twisted complexes on a Pontryagin category $\mathcal P(Q)$ of a closed $\operatorname{Spin}$ manifold $Q$. Objects in $\mathcal P(Q)$ are points in $Q$, with $\hom_{\mathcal P(Q)}(q_1,q_2) = C_{-*}(\Omega_{q_1,q_2}(Q);\mathbb{Z})$ and composition given by the Pontryagin product of loops. Here, $\Omega_{p,p'}(Q)$ is the space of loops in $Q$ that start at $p$ and end at $p'$. Write also $\Omega_{p}$ for $\Omega_{p,p}$. Given an object in $\mathcal W^\mathbb{Z}(T^*Q;\mathbb{Z})$, which is a $\mathbb{Z}$-graded exact $\operatorname{Spin}$ Lagrangian $L$ in $T^*Q$, we can assume (up to a Hamiltonian isotopy) that $L$ intersects the zero-section transversely at the points $q_1,\ldots,q_m$.
The image of $L$ under $\mathcal F$ is a twisted complex supported on a direct sum of grading shifts of the $q_i$. The differential in the twisted complex is constructed from moduli spaces of Floer strips between $Q$ and $L$. Let us use this functor in our setting. The Lagrangian $N$ intersects $S^3$ cleanly along a copy of $S^1$. One can deform $N$ by a Hamiltonian isotopy so that it intersects $S^3$ transversely at exactly two points $q_1$ and $q_2$, with consecutive indices. Hence, $\mathcal F(N)$ is a twisted complex supported on the sum of shifts of $q_1$ and of $q_2$. The differential on this twisted complex is given by a cycle in $C_{0}(\Omega_{q_1,q_2}(S^3);\mathbb{Z})$. Homologous cycles yield quasi-isomorphic twisted complexes, so the differential on $\mathcal F(N)$ is determined by an element $x\in H_{0}(\Omega_{q_1,q_2}(S^3);\mathbb{Z}) \cong \mathbb{Z}$. Given $q\in S^3$ and identifying $H_{-*}(\Omega_{q}(S^3);\mathbb{Z})$ with $HW^*(F_q,F_q;\mathbb{Z})$, we can say that $N$ is quasi-isomorphic to $\Cone(F_q\stackrel{x}{\to} F_q)$ in a category of twisted complexes over $\mathcal W^\mathbb{Z}(T^*S^3;\mathbb{Z})$, where $x$ is now thought of as a class in $HW^0(F_q,F_q;\mathbb{Z}) \cong \mathbb{Z}$. In particular, up to a degree shift, \begin{align*} HF^*(N,S^3;\mathbb{Z}) &\cong H^*\left(\Cone(HF^*(F_q,S^3;\mathbb{Z}) \stackrel{x}{\to} HF^*(F_q,S^3;\mathbb{Z})) \right) \\ &\cong H^*\left(\Cone(\mathbb{Z} \stackrel{x}{\to} \mathbb{Z})\right) \cong \begin{cases} \mathbb{Z}[1]\oplus \mathbb{Z} & \text{ if } x=0 \\ (\mathbb{Z}/x \mathbb{Z}) & \text{ otherwise} \end{cases}. \end{align*} On the other hand, one can adapt \cite{PozniakThesis}*{Proposition 3.4.6} to Floer cohomology with $\mathbb{Z}$-coefficients (instead of $\mathbb{Z}/2\mathbb{Z}$), and conclude that $$ HF^*(N,S^3;\mathbb{Z}) \cong H^*(S^1;\mathbb{Z}), $$ up to a degree shift.
Therefore, we have that $x=0$, the differential in the twisted complex $\mathcal F(N)$ is trivial, and $N$ is quasi-isomorphic to $F\oplus F[1]$, as wanted. \end{proof} \begin{remark} Strictly speaking, the argument above only implies that $N=F\oplus F[1]$ up to a global degree shift. However, this will be enough for our purposes, since the main application of the previous proposition will be in Lemma \ref{HF(N,T)}, which is about the $\mathbb{Z}/2\mathbb{Z}$-graded monotone Fukaya category. \end{remark} The ring $B_\mathbb{Z}$ of endomorphisms of $F\oplus F[1]$ can be represented pictorially as follows: \tikzset{node distance=2cm, auto} \begin{center} \begin{tikzpicture} \node (A) {$F$}; \node (B) [right of=A] {$F[1]$}; \draw [->,out=45,in=135,looseness=0.75] (A.north) to node[above]{$% \begin{pmatrix} 0 & * \\ 0 & 0 \end{pmatrix}% $} (B.north); \draw [->,out=-135,in=-45,looseness=0.75] (B.south) to node[below]{$% \begin{pmatrix} 0 & 0 \\ * & 0 \end{pmatrix}% $} (A.south); \path (A) edge [->,in=160, out = 200, loop] node[left] {$% \begin{pmatrix} * & 0 \\ 0 & 0 \end{pmatrix}% $} (A); \path (B) edge [->,in=340, out = 15, loop] node[right] {$% \begin{pmatrix} 0 & 0 \\ 0 & * \end{pmatrix}% $} (B); \end{tikzpicture} \end{center} Define $$ e_1 := \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \, e_2 := \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \, e_{21} := \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \, e_{12} := \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}. $$ Note that $$ |e_1| = 0 = |e_2|, \, |e_{21}| = 1, \, |e_{12}| = -1. $$ As a graded free abelian group, $B_\mathbb{Z}$ has generators in low degrees given by $$ \begin{tabular}{c|c|c|c|c|c} degree & 1 & 0 & $-1$ & $-2$ & $-3$ \\ \hline generator & $e_{21}$ & $e_1, e_2$ & $e_{12}, u e_{21}$ & $u e_1, u e_2$ & $u e_{12}, u^2 e_{21}$ \end{tabular} $$ \ In a manner similar to what we did above for $F$, let us give a more explicit description of the Morse--Bott wrapped Floer cohomology of $N$. 
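Before doing so, note that products of the generators in the table above are simply matrix-unit multiplications in $B_\mathbb{Z}$, with $u$ central of degree $1-n=-2$; for instance:

```latex
% Matrix-unit products in B_Z; degrees add under multiplication,
% e.g. |e_{12}| + |e_{21}| = -1 + 1 = 0 = |e_1|:
e_{12}\, e_{21} = e_1, \qquad e_{21}\, e_{12} = e_2, \qquad
e_1\, e_{12} = e_{12} = e_{12}\, e_2, \qquad
e_2\, e_{21} = e_{21} = e_{21}\, e_1,
```

and $e = e_1 + e_2$ is the unit. Floer-theoretically, these products are realized by $\mu^2$, up to the signs discussed earlier.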
Denote by $N'$ the lift of a path $\eta'$ through $-1$ and by $N_i$ the lift of a path $\eta_i$ through $1$ that winds $i$ times around the critical values of the Morse--Bott Lefschetz fibration, see Figures \ref{F_w_fig} and \ref{N0N1_fig}. By Proposition \ref{HW(N)}, we know that $$ B_\mathbb{Z}\cong HW^*(N_0,N';\mathbb{Z}) \cong \lim_{i\to \infty} HF^*(N_i,N';\mathbb{Z}). $$ The Morse--Bott Floer cochain complex for $CF^*(N_i,N';\mathbb{Z})$ with $i\geq 0$ is given, as a graded abelian group, by $$ \bigoplus_{k=0}^i H^{*-2k-1}(T^2;\mathbb{Z}). $$ Similarly to what we saw above for $F$, the continuation maps $$ CF^*(N_i,N';\mathbb{Z}) \to CF^*(N_{i+1},N';\mathbb{Z}) $$ are inclusions and the differentials vanish on these chain complexes. \begin{figure} \begin{center} \def\svgwidth{0.3\textwidth} \input{N0N1.pdf_tex} \end{center} \caption{$N_0$ and $N_1$} \label{N0N1_fig} \end{figure} We also have $$ B_\mathbb{Z}\cong HW^*(N_0,N_0;\mathbb{Z}) \cong \lim_{i\to \infty} HF^*(N_i,N_0;\mathbb{Z}). $$ For $i>0$, the Morse--Bott Floer cochain complex for $CF^*(N_i,N_0;\mathbb{Z})$ is \begin{equation} \label{HF(N1,N0)} H^*(S^1;\mathbb{Z})\oplus \bigoplus_{k=1}^i H^{*-2k}(T^2;\mathbb{Z}) \end{equation} and degree considerations imply again that the continuation maps are inclusions and that the differentials vanish. As we saw after Proposition \ref{HW(N)}, the free abelian group $HW^*(N,N;\mathbb{Z})$ has two generators in degree $-2$, denoted by $u e_1$ and $u e_2$. \begin{remark} At several points in this paper, including the proof of the next result, we will explicitly compute certain products $\mu^2$.
Since we will always be in a position where we can compute the product on cohomology, and since the relevant holomorphic curves will always project to triangles in $\mathbb{C}$ over which the Lefschetz fibrations of interest are trivial, it will suffice to make all the calculations using a product complex structure, for which it will be evident that the relevant holomorphic curves are regular. \end{remark} \begin{lemma} \phantomsection \label{ue geometric} \begin{enumerate} \item Up to signs, the class of the unit $e = e_1 + e_2\in HW^0(N_1,N_0;\mathbb{Z})$ is represented by the fundamental class of $S^1$ in \eqref{HF(N1,N0)}, with $i=1$. \item Up to signs, the class $u e_1 + u e_2 = u e\in HW^{-2}(N_1,N_0;\mathbb{Z})$ is represented by the fundamental class of $T^2$ in \eqref{HF(N1,N0)}, with $i=1$. \end{enumerate} \end{lemma} \begin{proof} The statement in (1) follows from the fact that the canonical map $H^*(S^1) \to HW^*(N_0,N_0;\mathbb{Z})$ is a ring map, so it preserves units. For (2), it is convenient to also consider the Lagrangian $N'$. The product $\mu^2$ gives a map $$ HF^0(N_0,N';\mathbb{Z}) \otimes HF^{-2}(N_1,N_0;\mathbb{Z}) \to HF^{-2}(N_1,N';\mathbb{Z}). $$ Figure \ref{N0N1_fig} will be useful to understand the map \begin{equation} \label{0 to -2} HF^0(N_0,N';\mathbb{Z}) \to HF^{-2}(N_1,N';\mathbb{Z}) \end{equation} given by right multiplication with the fundamental class of $T^2$ in $HF^{-2}(N_1,N_0;\mathbb{Z})$, which lives over the intersection point $y$ in Figure \ref{N0N1_fig}. Note that $HF^0(N_0,N';\mathbb{Z})\cong HW^0(N_0,N';\mathbb{Z}) \cong \mathbb{Z}^2$ lives over the point $x$ in Figure \ref{N0N1_fig}, and that $HF^{-2}(N_1,N';\mathbb{Z})\cong HW^{-2}(N_1,N';\mathbb{Z}) \cong \mathbb{Z}^2$ lives over the point $z$. The product can now be computed by lifting the shaded triangle. Since the fibration is trivial over this triangle, there is a $T^2$-family of such lifts, which implies that the map \eqref{0 to -2} is an isomorphism.
Since we are working over $\mathbb{Z}$, this means that the fundamental class of $T^2$ represents $\pm u e_1 \pm u e_2$, and we can conclude that, up to signs, it can be identified with $ue$, as wanted. \end{proof} \subsection{Computations in $T^*S^n$} We begin by assuming that {\bf $n$ is odd}. The wrapped Fukaya category $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}(T^*S^n;\mathbb{K}_R)$ contains objects of the form $(S^n,\alpha[pt])$, where $\alpha \in \mathbb{K}_{R,0}$ and $[pt] \in H^n(S^n;\mathbb{K}_R)$ is the class of a point. We want to understand how a cotangent fiber $F$ acts on such an object. Let $F_i$ and $F'$ be as in the previous section. Given $a \in HF^*(F_i,F';\mathbb{K}_R)$ and $X\in \mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}(T^*S^n;\mathbb{K}_R)$, define a map \begin{align*} \psi_a^X \colon HF^*(F',X;\mathbb{K}_R) & \to HF^{*+\deg(a)}(F_i,X;\mathbb{K}_R) \\ x & \mapsto \mu^2(x,a) \end{align*} \begin{figure} \begin{center} \def\svgwidth{0.4\textwidth} \input{HF_F_Sn.pdf_tex} \end{center} \caption{The chain complexes $CF^*(F_0,(S^n, \alpha[pt]))$ and $CF^*(F',(S^n, \alpha[pt]))$} \label{HF(F,Sn)_fig} \end{figure} \begin{lemma} \phantomsection \label{HF(F,Sn) odd} \begin{enumerate} \item There is an isomorphism \label{HF odd} $$ HF^*(F,(S^n, \alpha[pt]);\mathbb{K}_R) \cong \mathbb{K}_R. $$ \item \label{u act on Sn odd} Using the identification in Lemma \ref{u in F} of $e\in HF^{0}(F_0,F';\mathbb{Z})$ with the class of a point in $S^{n-1}$, and of $u\in HF^{1-n}(F_0,F';\mathbb{Z})$ with the fundamental class of $S^{n-1}$, we have $$\psi_{u}^{(S^n,\alpha[pt])} = \pm\alpha \, \psi_{e}^{(S^n,\alpha[pt])}.$$ \end{enumerate} \end{lemma} \begin{proof} As we saw, in the Lefschetz fibration description $\pi_n : X_n\to \mathbb{C}$ of $T^*S^n$ the zero section $S^n$ is the Lagrangian lift of the interval $[-1,1] \subset \mathbb{C}$.
For part \eqref{HF odd}, we can replace a cotangent fiber $F$ with its Hamiltonian-isotopic Lagrangians $F_0$ and $F'$, as in the previous section. Recall that these are lifts of paths out of the critical values that intersect the interval $[-1,1] \subset \mathbb{C}$ transversely and only at one of the endpoints of the interval. Then, $CF^*(F_0,(S^n, \alpha[pt]);\mathbb{K}_R)$ has a single generator in degree 0, and the result follows. The same is true replacing $F_0$ with $F'$. Let us give an alternative argument, with an eye towards part \eqref{u act on Sn odd}. This time, let $F_0$ and $F'$ be lifts of paths that intersect the interior of $[-1,1]$, as in Figure \ref{HF(F,Sn)_fig}. We start with $F_0$. The chain complex $CF^*(F_0,(S^n, \alpha[pt]);\mathbb{K}_R)$ now has generators $x,y,z$ in degrees $-n$, $1-n$ and 0, respectively ($y$ is the maximum and $z$ the minimum of an auxiliary Morse function on the component of $S^n \cap F_0$ that is diffeomorphic to $S^{n-1}$), see Figure \ref{HF(F,Sn)_fig}. The fact that $\partial x$ is of the form $\pm T^A y$, where $A$ is the $\sigma$-area of the lightly shaded bigon (recall the definition of $\sigma$ in Section \ref{S:construct Lagrangians}), follows from the fact that the algebraic count of lifts of the shaded strip is $\pm 1$. That can be seen using the Hamiltonian invariance of $HF^*(\mathbb{R}^n,i\mathbb{R}^n)$ in $\mathbb{C}^n$, which is of rank 1. It follows that the cohomology is of rank 1, generated by $z$. There is a similar argument for $F'$ instead of $F_0$, with $z'$ now being the maximum of an auxiliary Morse function on $S^{n-1}$. To prove \eqref{u act on Sn odd}, we use again the representation of $F_0$ and $F'$ in Figure \ref{HF(F,Sn)_fig}. The dark triangle in Figure \ref{HF(F,Sn)_fig} does not contain critical values of $\pi_n$, so the restriction of the Lefschetz fibration to that triangle is trivial. 
The triangle hence lifts to an $S^{n-1}$-family of holomorphic triangles with the appropriate Lagrangian boundary conditions. This family can be made rigid by using $e\in CF^*(F_0,F';\mathbb{K}_R)$ (represented by a minimum) as an input in $$ \psi_{e}^{(S^n,\alpha[pt])}(z') = \mu^2(z',e)= \pm T^B z, $$ where $B$ is the $\sigma$-area of the dark triangle. Similarly, the family of lifted triangles can be rigidified by using the bounding cochain $\alpha [pt]$ as an input in $$ \psi_{u}^{(S^n,\alpha[pt])}(z') = \mu^3(\alpha [pt],z',u)=\pm T^B \alpha z. $$ The result now follows. \end{proof} Consider now the case of {\bf even $n$}. Recall that we equip $S^n$ with the trivial rank 2 vector bundle of mixed degree $E=\mathbb{K}_R\oplus \mathbb{K}_R[1]$, and with bounding cochains of the form $b_{\alpha,\beta} = \begin{pmatrix} 0 & \beta \\ \alpha & 0 \end{pmatrix}_{[pt]}$, such that $\alpha,\beta \in \mathbb{K}_{R,0}$ and $[pt]\in H^n(S^n;\mathbb{Z})$ is represented by the minimum of a perfect Morse function on $S^n$. Let $F_0, F'$ be as before. \begin{lemma} \phantomsection \label{HF(F,Sn) even} \begin{enumerate} \item There is an isomorphism \label{HF even} $$ HF^*(F,(S^n, b_{\alpha,\beta});\mathbb{K}_R) \cong \mathbb{K}_R\oplus \mathbb{K}_R[1]. $$ \item \label{u act on Sn even} Using the identification in Lemma \ref{u in F} of $e\in HF^{0}(F_0,F';\mathbb{Z})$ with the class of a point in $S^{n-1}$, and of $u\in HF^{1-n}(F_0,F';\mathbb{Z})$ with the fundamental class of $S^{n-1}$, we have $$\psi_{u e}^{(S^n,b_{\alpha,\beta})} = \pm \begin{pmatrix} 0 & \beta \\ \alpha & 0 \end{pmatrix} \, \psi_{e}^{(S^n,b_{\alpha,\beta})}.$$ \end{enumerate} \end{lemma} \begin{proof} The proof of \eqref{HF even} is similar to the one in Lemma \ref{HF(F,Sn) odd}. One can again replace $F$ with either $F_0$ or $F'$ as in Figure \ref{HF(F,Sn)_fig}. 
We obtain a $\mathbb{K}_R$-basis $v_0,v_1$ for $HF^*(F_0,(S^n, b_{\alpha,\beta});\mathbb{K}_R)$, where $v_0,v_1$ is the standard basis for the fiber of $E=\mathbb{K}_R\oplus \mathbb{K}_R[1]$ at $z$ (the fiber minimum) indicated in Figure \ref{HF(F,Sn)_fig}. Similarly, we denote by $v'_0, v'_1$ the analogous basis for $HF^*(F',(S^n, b_{\alpha,\beta});\mathbb{K}_R)$, with $z$ replaced by $z'$ (the fiber maximum) in Figure \ref{HF(F,Sn)_fig}. The result in \eqref{u act on Sn even} follows again from the study of lifts of the dark triangle in Figure \ref{HF(F,Sn)_fig}. Once more, the lifts of the triangle can be rigidified either by taking $e$ as an input in $\mu^2$ or by inputting the bounding cochain $b_{\alpha,\beta}$ in $\mu^3$. Taking the bases $v_i$ and $v'_i$ above, we get $$ \psi_{e}^{(S^n,b_{\alpha,\beta})}(v_i') = \mu^2(v_i',e)= \pm T^B v_i, $$ for $i = 0, 1$, where $B$ is the $\sigma$-area of the dark triangle. We also get $$ \psi_{u}^{(S^n,b_{\alpha,\beta})}(v_0') = \mu^3(b_{\alpha,\beta},v_0',u)=\pm T^B \alpha v_1 $$ and $$ \psi_{u}^{(S^n,b_{\alpha,\beta})}(v_1') = \mu^3(b_{\alpha,\beta},v_1',u)=\pm T^B \beta v_0, $$ as wanted. \end{proof} \begin{figure} \begin{center} \def\svgwidth{0.4\textwidth} \input{cone.pdf_tex} \end{center} \caption{$L_\tau$ as a cone on morphisms between $F_0$ and $F_1$} \label{cone_fig} \end{figure} We now consider the Lagrangians $L_\tau$, which are diffeomorphic to $S^1\times S^{n-1}$. Let $U\in U_{\mathbb{K}_R}$ be a unitary element in the Novikov field and take $\alpha := T^{-2(n-1)\tau} U^{-1}\in \mathbb{K}_R \setminus \mathbb{K}_{R,0}$. If $n>2$, write $L_\alpha$ for the Lagrangian $L_\tau$ equipped with the unitary local system $\xi$ in the trivial $\mathbb{K}_R$-bundle over $L_\tau$, such that the holonomy of $\xi$ along a loop that projects in degree 1 to the curve $C_\tau$ (recall that we think of $C_\tau$ as the curve $C$ in Figure \ref{F_w_fig}) is $U$.
If $n=2$, recall that we picked a basis $h_1,h_2$ for $H_1(L_\tau;\mathbb{Z})$ in Lemma \ref{L:disk potential L_C}, to write the disk potential of $L_\tau$. The curve $h_1$ projects in degree 1 to $C_\tau$ and $h_2$ is a fiber of $\pi|_{L_\tau}$. In Remark \ref{R:crit pts of potentials}, we observed that the Floer cohomology of $(L_\tau,\xi)$ is non-trivial precisely when $\xi$ is a local system with holonomy $-1$ around $h_2$. Write $L_\alpha$ for $(L_\tau,\xi)$, where the holonomy of $\xi$ is $U$ around $h_1$ and $-1$ around $h_2$. If the $\sigma$-areas of the two shaded regions in Figure \ref{cone_fig} are the same, then the figure suggests that $L_\tau$ should be equivalent to surgery on morphisms supported on the two connected components of the intersection $F_1\cap F_0 = \{*\}\cup S^{n-1}$. Recall that surgery on an intersection point of two Lagrangians corresponds in the Fukaya category to taking the cone on the morphism given by the intersection point, see Chapter 10 of \cite{FOOO}. This motivates the following result. \begin{lemma} \label{Ltau is cone} For the appropriate choice of $\operatorname{Spin}$ structure, $L_\alpha$ is isomorphic in $\mathcal W_{\mon}^{\mathbb{Z}/2\mathbb{Z}}(T^*S^n;\mathbb{K}_R)$ to $\Cone( u^2 - \alpha e)$, where $ u^2 - \alpha e$ is thought of as a morphism in $HW^{\rm even}(F,F)$. In particular, $$ HF^*(F, L_\alpha;\mathbb{K}_R) \cong H^*(S^{n-1};\mathbb{K}_R), $$ as $\mathbb{Z}/2\mathbb{Z}$-graded free $\mathbb{K}_R$-modules. \end{lemma} \begin{proof} \begin{figure} \begin{center} \def\svgwidth{0.8\textwidth} \input{cone1.pdf_tex} \end{center} \caption{The action of $u$ on $L_\alpha$} \label{cone1_fig} \end{figure} Given a monic polynomial $p(u) = u^d + a_{d-1} u^{d-1}+ \ldots +a_0$ in $\mathbb{K}_R[u] \cong HW^*(F,F;\mathbb{K}_R)$, the object $\Cone(p(u))$ is such that $HF^*(F,\Cone(p(u));\mathbb{K}_R)$ is a free $\mathbb{K}_R$-module of rank $d$.
The right action of $u$ on $HF^*(F,\Cone(p(u));\mathbb{K}_R)$ is by the transpose of the companion matrix to $p(u)$: \begin{equation*} \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{d-1} \end{pmatrix}. \end{equation*} Hence, we want to show that $HF^*(F,L_\alpha;\mathbb{K}_R)$ is a free $\mathbb{K}_R$-module of rank 2, where $u$ acts on the right as \begin{equation} \label{eq: u act on cone1} \begin{pmatrix} 0 & 1 \\ \alpha & 0 \end{pmatrix} \end{equation} Let us represent the Hamiltonian isotopy class of $F$ by $F_0$ and by $F'$, as before. In Figure \ref{cone1_fig}, we see that $F_0 \cap L_\tau $ is diffeomorphic to $S^{n-1}$. Choosing an auxiliary perfect Morse function on this sphere, we get a chain model for $CF^*(F_0,L_\alpha;\mathbb{K}_R)$ whose generators are the minimum $m$ and the maximum $M$. We can similarly get generators $m', M'$ for $CF^*(F',L_\alpha;\mathbb{K}_R)$. We will first work with coefficients in $\mathbb{K}_\mathbb{Z}$, and then argue that the case of $\mathbb{K}_\mathbb{C}$-coefficients follows. Recall Lemma \ref{u in F}. The element $e\in CF^0(F_0,F';\mathbb{K}_\mathbb{Z})$ (the minimum in its $S^{n-1}$ fiber) acts on $CF^*(F',L_\alpha,\mathbb{K}_\mathbb{Z})$ by $$ \psi_{e}^{L_\alpha}(M') = \mu^2(M',e)= \pm T^A m, $$ by taking lifts of the shaded triangle on the left in Figure \ref{cone1_fig}. The $\sigma$-area of this triangle is $A$. Note that the Lefschetz fibration is trivial over the triangle, so it has an $S^{n-1}$-family of holomorphic lifts. The remaining contributions to right multiplication by $e$ must come from lifts of the shaded triangle on the right in Figure \ref{cone1_fig}. Since $e$ acts by an isomorphism over $\mathbb{K}_\mathbb{Z}$, we conclude that the lifts of that triangle contribute to $$ \psi_{e}^{L_\alpha}(m') = \mu^2(m',e)= \pm T^B U M, $$ where $B$ is the $\sigma$-area of that right triangle in the plane. 
Note that these lifted triangles pick up holonomy $U$. The same holomorphic triangles determine the action of $e$ over $\mathbb{K}=\mathbb{K}_\mathbb{C}$, so we conclude that $\psi_{e}^{L_\alpha}$ is given by the same formulas over $\mathbb{K}$ as over $\mathbb{K}_\mathbb{Z}$. The element $u\in CF^{1-n}(F_0,F';\mathbb{K}_R)$ is represented by the maximum in the same $S^{n-1}$ fiber as $e$, and acts on $CF^*(F',L_\alpha;\mathbb{K}_R)$ by $$ \psi_{u}^{L_\alpha}(M') = \mu^2(M',u)= \pm T^A M, $$ and $$ \psi_{u}^{L_\alpha}(m') = \mu^2(m',u)= \pm T^A m. $$ In both cases, this corresponds to lifting the triangle on the left in Figure \ref{cone1_fig}. Observe that $B = A + 2(n-1)\tau$, so we can write $$ \psi_u^{L_\alpha} = \begin{pmatrix} 0 & \pm 1 \\ \pm \alpha & 0 \end{pmatrix} \psi_e^{L_\alpha}. $$ To get the signs as in \eqref{eq: u act on cone1}, we note that changing the $\operatorname{Spin}$ structure has the effect of replacing $\alpha$ by $-\alpha$. Note also that $\Cone(x) \cong \Cone (-x)$. We still need to show that $HF^*(F,L_\alpha;\mathbb{K}_R)\neq 0$. We prove that $HF^*(F',L_\alpha;\mathbb{K}_R)\neq 0$. If $n$ is odd, then this is obvious, since the indices of the generators $m',M'$ have the same parity, so the differential is zero. Observe that the case $n=2$ is addressed in Remark \ref{R:crit pts of potentials}. For general even $n$, we write $$ \mu^1(m) = \kappa_1 M, \qquad \mu^1(M) = \kappa_2 m, \qquad \mu^1(m') = \kappa_1' M', \qquad \mu^1(M') = \kappa_2' m', $$ for some $\kappa_1, \kappa_2, \kappa_1', \kappa_2' \in \mathbb{K}$.
The Leibniz rule (and the fact that $\mu^1(u)=0$) yields \begin{align*} \mu^1(\mu^2(m', u)) &= \mu^2(\mu^1(m'), u) = \kappa_1' \mu^2(M', u) = \pm \kappa_1' T^A M \\ &= \pm \mu^1 (T^A m) = \pm T^A \kappa_1 M \Longrightarrow \kappa_1 = \pm \kappa_1' \end{align*} and \begin{align*} \mu^1(\mu^2(m', e)) &= \mu^2(\mu^1(m'), e) = \kappa_1' \mu^2(M', e) = \pm \kappa_1' T^A m \\ &= \pm \mu^1 (T^B U M) = \pm T^B U \kappa_2 m \Longrightarrow T^{B-A} U \kappa_2 = \pm \kappa_1' = \pm \kappa_1. \end{align*} Therefore, $$ \mu^1\circ \mu^1(M) = \kappa_2 \mu^1(m) = \pm T^{B-A} U \kappa_2^2 M. $$ But $\mu^1\circ \mu^1=0$, because $F$ and $L_\alpha$ both have vanishing disk potential. Since $T^{B-A} U$ is invertible and $\kappa_2^2 = 0$ in the field $\mathbb{K}$, we conclude that $\kappa_1=\kappa_2=0$; since $\kappa_1' = \pm \kappa_1$, and the analogous computation starting from $\mu^1(\mu^2(M',u))$ gives $\kappa_2' = \pm \kappa_2$, all four constants vanish. Hence $HF^*(F',L_\alpha;\mathbb{K}_R)\neq 0$, as wanted. \end{proof} As we saw, Lemma \ref{Ltau is cone} can be rephrased as saying that $HF^*(F,L_\alpha;\mathbb{K}_R)$ is isomorphic to $\mathbb{K}_R^2$, if $n$ is odd, and to $\mathbb{K}_R\oplus \mathbb{K}_R[1]$, if $n$ is even, and that the action of $u$ is represented by the matrix \eqref{eq: u act on cone1}. To relate this with the generation results for modules that will be discussed below, it is convenient to restrict our attention to $\mathbb{K}=\mathbb{K}_\mathbb{C}$, which is an algebraically closed field. Since the eigenvalues of the matrix \eqref{eq: u act on cone1} are $\pm \sqrt{\alpha}$ (the two square roots of $\alpha$ in $\mathbb{K}$), we conclude the following. \begin{corollary} \label{u act on Ltau} If $n$ is odd, then $HF^*(F',L_\alpha;\mathbb{K})$ and $HF^*(F_0,L_\alpha;\mathbb{K})$ have bases in which $\psi_{u e}^{L_\alpha} = \begin{pmatrix} \sqrt{\alpha} & 0 \\ 0 & - \sqrt{\alpha} \end{pmatrix} \, \psi_{e}^{L_\alpha}$. \end{corollary} We are now ready to prove the following result, up to Corollary \ref{S generate} below.
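For concreteness, the eigenvalue claim behind Corollary \ref{u act on Ltau} amounts to the following elementary check, once a square root $\sqrt{\alpha}\in\mathbb{K}$ has been fixed (this verification is ours, not part of the original argument):

```latex
% The matrix \eqref{eq: u act on cone1} has characteristic polynomial
% \lambda^2 - \alpha, hence eigenvalues \pm\sqrt{\alpha}; explicitly,
\begin{pmatrix} 0 & 1 \\ \alpha & 0 \end{pmatrix}
\begin{pmatrix} 1 \\ \pm\sqrt{\alpha} \end{pmatrix}
  = \begin{pmatrix} \pm\sqrt{\alpha} \\ \alpha \end{pmatrix}
  = \pm\sqrt{\alpha}
\begin{pmatrix} 1 \\ \pm\sqrt{\alpha} \end{pmatrix}.
% When n is odd the two generators have the same parity, so passing to this
% eigenbasis is allowed, and u acts diagonally by diag(\sqrt{\alpha}, -\sqrt{\alpha}).
```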
\begin{theorem} \label{split generators Sn} If $n$ is odd, then the collection of right $A_{\mathbb{K}}$-modules $$ \{HF^*(F,(S^n,\alpha[pt]);\mathbb{K})\}_{\val(\alpha) \geq 0} \cup \{HF^*(F, L_\alpha;\mathbb{K})\}_{\val(\alpha) < 0} $$ split-generates the category $\mmod_{pr}(A_{\mathbb{K}})$. If $n$ is even, then the same holds if we replace $(S^n,\alpha[pt])$ with $(S^n,b_{\alpha,1})$, as in Lemma \ref{HF(F,Sn) even}. \end{theorem} \begin{proof} In the $n$ odd case, if $\val(\alpha) \geq 0$, then Lemma \ref{HF(F,Sn) odd} implies that $$ HF^*(F_0,(S^n,\alpha[pt]);\mathbb{K}) \cong S_{\pm\alpha} $$ as right $A_{\mathbb{K}}$-modules, where $S_\alpha$ is the 1-dimensional (over $\mathbb{K}$) right $A_{\mathbb{K}}$-module on which $u\in A_{\mathbb{K}}$ acts as multiplication by $\alpha$ (as in Lemma \ref{L:triangulated closure} below). If $\val(\alpha) < 0$, Lemma \ref{Ltau is cone} and Corollary \ref{u act on Ltau} imply that $$ HF^*(F, L_\alpha;\mathbb{K}) \cong S_{\sqrt{\alpha}} \oplus S_{-\sqrt{\alpha}} $$ as right $A_{\mathbb{K}}$-modules. Corollary \ref{S generate} below now implies the result when $n$ is odd. The case of $n$ even is analogous, where this time we apply Lemma \ref{HF(F,Sn) even} instead of Lemma \ref{HF(F,Sn) odd} and Corollary \ref{S tilde generate} instead of Corollary \ref{S generate}. \end{proof} The following is a version of Theorem \ref{generate F} from the Introduction. \begin{corollary} \label{generate Fuk Sn} The category $\mathcal{F}^{\mathbb{Z}/2\mathbb{Z}}_{mon}(T^*S^n;\mathbb{K})$ is split-generated by the collection of objects $\{(S^n,\alpha[pt])\}_{\val(\alpha) \geq 0} \cup \{L_\alpha\}_{\val(\alpha) < 0}$ when $n$ is odd. When $n$ is even, the same is true if we replace $(S^n,\alpha[pt])$ with $(S^n,b_{\alpha,1})$. \end{corollary} \begin{proof} This follows from Theorem \ref{split generators Sn} and Proposition \ref{C:Yoneda ff}. 
\end{proof} \subsection{Computations in $T^*S^3$} We now want to study how $u\in HW^*(F,F;\mathbb{K}_R)\cong \mathbb{K}_R[u]= A_{\mathbb{K}_R}$ acts on the tori $T^3_\tau$. Recall from Lemma \ref{L:disk potential T_C} that the disk potential of $T^3_\tau$ can be computed in a basis $h_1,h_2,h_3$ of $H_1(T^3_\tau;\mathbb{Z})$, where $h_1$ is a loop projecting bijectively to the curve $C_\tau\subset \mathbb{C}\setminus \{\pm 1\}$ (that $T_\tau^3$ covers), while $h_2$ and $h_3$ are vanishing circles that project to points under the fibration. As observed in Remark \ref{R:crit pts of potentials}, the critical points of the disk potential that belong to $(U_{\mathbb{K}_R})^3$ correspond to unitary local systems on $T^3_\tau$, whose holonomy around $h_1$ is arbitrary, and whose holonomy around each of $h_2$ and $h_3$ is $-1$. Given $U\in U_{\mathbb{K}_R}$, let $\alpha := T^{-2\tau} U^{-1} \in \mathbb{K}_R\setminus \mathbb{K}_{R,0}$ and denote by $T_\alpha$ the Lagrangian $T^3_\tau$ equipped with a unitary local system $\xi$ in the trivial $\mathbb{K}_R$-bundle, whose holonomy around $h_1$ is $U$, and whose holonomies around $h_2$ and $h_3$ are $-1$. 
Given $a \in HF^*(N_1,N_0;\mathbb{K}_R)$ and an object $L$ of $\mathcal{W}^{\mathbb{Z}/2\mathbb{Z}}_{\mon}(T^*S^3;\mathbb{K}_R)$, define a map \begin{align*} \phi_a^L \colon HF^*(N_0,L;\mathbb{K}_R) & \to HF^*(N_1,L;\mathbb{K}_R) \\ x & \mapsto \mu^2(x,a). \end{align*} \begin{figure} \begin{center} \def\svgwidth{0.5\textwidth} \input{HF_N_L.pdf_tex} \end{center} \caption{The action of $u$ on $T^3_\alpha$} \label{HF(N,L)_fig} \end{figure} \begin{lemma} \phantomsection \label{HF(N,T)} For the appropriate choice of $\operatorname{Spin}$ structure on $T_\alpha$, \begin{enumerate} \item there is an isomorphism $$ HF^*(N,T_\alpha;\mathbb{K}) \cong H^*(T^2;\mathbb{K}), $$ possibly with a degree shift; \item using the identification in Lemma \ref{ue geometric} of $e\in HF^{0}(N_1,N_0;\mathbb{Z})$ with the fundamental class of $S^1$, and of $ue\in HF^{-2}(N_1,N_0;\mathbb{Z})$ with the fundamental class of $T^2$, we have $$ \phi_{u e}^{T_\alpha} = \alpha \, \phi_{e}^{T_\alpha}. $$ \end{enumerate} \end{lemma} \begin{proof} We begin with (1). By Remark \ref{R:crit pts of potentials}, $T_\alpha$ corresponds to a critical point of the disk potential of the torus $T_\tau^3$. Hence, $HF^*(T_\alpha,T_\alpha;\mathbb{K})\cong H^*(T^3;\mathbb{K})$ has rank 8. Observe that for the Lagrangian lifts $N_i$ of the paths $\eta_i$ in Figure \ref{F_w_fig}, each of the graded $\mathbb{K}$-vector spaces $CF^*(N_i,T_\alpha;\mathbb{K})$ is isomorphic to $H^*(T^2;\mathbb{K})$. This has rank 4, so the rank of $HF^*(N_i,T_\alpha;\mathbb{K})$ can only be 0, 2 or 4. On the other hand, by Proposition \ref{HW(N)}, $$HF^*(N_i,T_\alpha;\mathbb{K}) \cong HF^*(F,T_\alpha;\mathbb{K})\oplus HF^*(F,T_\alpha;\mathbb{K})[1].$$ Since $F$ generates the wrapped Fukaya category and $T_\alpha$ is a non-trivial object, we conclude that the rank of $HF^*(F,T_\alpha;\mathbb{K})$ is 1 or 2. Denote this $\mathbb{K}[u]$-module by $M$.
The full faithfulness of the Yoneda embedding implies that $$ HF^*(T_\alpha, T_\alpha;\mathbb{K}) \cong \Ext_{\mathbb{K}[u]}^*(M,M) $$ and the rank of the right side cannot be 8 if $M$ has rank 1, since the space of endomorphisms of a skyscraper sheaf in the derived category of a smooth curve has rank 2 (see \eqref{ExtSaSa} below for a more general result). We conclude that $M$ has rank 2, $HF^*(N_i, T_\alpha;\mathbb{K})$ has rank 4 and the differential on $CF^*(N_i, T_\alpha;\mathbb{K})$ vanishes. This implies the statement in (1). To prove (2), we use Figure \ref{HF(N,L)_fig}. First, we need to determine the image of $e$ and $ue$ under the functor $\mathcal G_1 : \mathcal W^\mathbb{Z}(T^*S^3;\mathbb{Z}) \to \mathcal W^\mathbb{Z}(T^*S^3;\mathbb{K}_R)$ defined in Section \ref{S:W}. Using an auxiliary Morse function on $N_0\cap N_1\cong S^1\cup T^2$, we know that $e$ is given by the maximum (which we denote by $x$) on the component $S^1$, and $ue$ is given by the maximum (which we denote by $y$) on the component $T^2$. Pick primitives $f_i$ for the restriction of the Liouville form $\lambda$ on $T^*S^3$ to the $N_i$, $i\in \{0,1\}$. Assume that $f_0$ and $f_1$ both vanish at $x$. Then, $\mathcal G_1(e) = T^{f_1(x)-f_0(x)} x = x \in CF^0(N_1,N_0;\mathbb{K}_R)$. Similarly, $\mathcal G_1(ue) = T^{f_1(y)-f_0(y)} y \in CF^{-2}(N_1,N_0;\mathbb{K}_R)$. Under our assumptions, $f_1(y) - f_0(y)$ is the negative of the symplectic area of a strip between $N_1$ and $N_0$, obtained from lifting the union of the darkly shaded and the white triangles in Figure \ref{HF(N,L)_fig}. We denote this area by $A+C$ in the figure. We can conclude that $ue$ is represented by $T^{-A-C} y$ in $CF^{-2}(N_1,N_0;\mathbb{K}_R)$. Each of the shaded triangles in the figure lifts to a $T^2$-family of holomorphic triangles with suitable boundary conditions.
Since using the fundamental classes of $S^1$ and $T^2$ as inputs in $\mu^2$ does not constrain the $T^2$-families of holomorphic triangles, we conclude that, for suitable bases of $HF^*(N_i,T_\alpha;\mathbb{K})$, $i\in \{0,1\}$, we have \begin{equation} \label{formula with signs} \phi_{ue}^{T_\alpha} = T^{-A-C} T^A \operatorname{id} = \alpha T^B U \operatorname{id} = \pm \alpha \phi_e^{T_\alpha}. \end{equation} Here we used the fact that the sum of the $\sigma$-areas of the lightly shaded and white triangles in the figure is $B+C = 2\tau$. The sign can be fixed as wanted on the right side of \eqref{formula with signs}, since changing the $\operatorname{Spin}$ structure on $T_\alpha$ replaces $\alpha$ by $-\alpha$. \end{proof} \begin{remark} The fibration is trivial over the darkly shaded triangle in Figure \ref{HF(N,L)_fig}, which is why it has a $T^2$-family of lifts. The fibration is not trivial over the lightly shaded triangle, since one of its vertices is a critical value; nevertheless, that triangle still admits a $T^2$-family of holomorphic lifts. Note that one could also modify $N_0$ slightly, in a manner similar to what is done in Figure \ref{HF(F,Sn)_fig} for the proof of Lemma \ref{HF(F,Sn) odd}, so that the analogue of the lightly shaded triangle contains no critical values, even at its corners. \end{remark} We can now prove the following analogue of Theorem \ref{split generators Sn}. \begin{theorem} \label{split generators S3} The collection of $A_{\mathbb{K}}$-modules $$ \{HF^*(F,(S^3,\alpha[pt]);\mathbb{K})\}_{\val(\alpha) \geq 0} \cup \{HF^*(N,T_\alpha;\mathbb{K})\}_{\val(\alpha) < 0} $$ split-generates the category $\mmod_{pr}(A_{\mathbb{K}})$. \end{theorem} \begin{proof} We just have to show that we can replace the objects supported on the collection of Lagrangians $\{(S^1\times S^2)_\tau\}_{\tau>0}$ by the objects supported on the collection $\{T^3_\tau\}_{\tau>0}$.
By Lemma \ref{HF(N,T)}, $$ HF^*(N_0,T_\alpha;\mathbb{K}) \cong S_{\alpha}^2 \oplus S_{\alpha}^2[1] $$ as right $A_{\mathbb{K}}$-modules, where $S_\alpha$ is the 1-dimensional $\mathbb{K}$-vector space with $u$ acting by multiplication by $\alpha$. The result follows from Corollary \ref{S generate}. \end{proof} \begin{corollary} \label{generate Fuk} The category $\mathcal{F}^{\mathbb{Z}/2\mathbb{Z}}_{mon}(T^*S^3;\mathbb{K})$ is split-generated by the collection of objects $\{(S^3,\alpha[pt])\}_{\val(\alpha) \geq 0} \cup \{T_\alpha\}_{\val(\alpha) < 0}$. \end{corollary} \begin{proof} This follows from Theorem \ref{split generators S3} and Proposition \ref{C:Yoneda ff}. \end{proof} Now that we understand the $F$-modules associated to the Lagrangians $(S^1\times S^2)_\tau$ and $T^3_\tau$ in $T^*S^3$, we can also prove Theorem \ref{T:S1xS2 and T3}. \begin{proof}[Proof of Theorem \ref{T:S1xS2 and T3}] We wish to show that, if we fix $\tau, \tau'>0$, then $\tau = \tau'$ iff $(S^1\times S^2)_\tau$ and $T^3_{\tau'}$ can be equipped with local systems such that their Floer cohomology is non-trivial. Let $U\in U_\mathbb{K}$ and write $\alpha = T^{-4 \tau} U^{-1}$. Recall that the minimal Maslov number of $(S^1\times S^2)_\tau$ is 4, and that $(S^1\times S^2)_\alpha$ denotes $(S^1\times S^2)_\tau$ equipped with a local system of holonomy $U$. The proof of Theorem \ref{split generators Sn} implies that $HF^*(F,(S^1\times S^2)_\alpha;\mathbb{K}) \cong S_{\sqrt{\alpha}} \oplus S_{-\sqrt{\alpha}}$, where $\sqrt{\alpha} = T^{-2\tau} (\sqrt{U})^{-1}$ for some square root $\sqrt{U}\in \mathbb{K}$ of $U$. Write also $\alpha' = T^{-4 \tau'} U^{-1}$ and $\sqrt{\alpha'} = T^{-2\tau'} (\sqrt{U})^{-1}$. The minimal Maslov number of $T^3_{\tau'}$ is 2, and we have that $T_{\sqrt{\alpha'}}$ denotes $T^3_{\tau'}$ with a local system of holonomy $\sqrt U$. 
The proof of Theorem \ref{split generators S3} implies that $HF^*(F, T_{\sqrt{\alpha'}};\mathbb{K}) \cong S_{\sqrt{\alpha'}} \oplus S_{\sqrt{\alpha'}}[1]$. The result now follows from Proposition \ref{C:Yoneda ff} and the fact that, given $\beta, \beta'\in \mathbb{K}$ and the corresponding $\mathbb{Z}/2\mathbb{Z}$-graded $A_{\mathbb{K}}$-modules $S_\beta$, $S_{\beta'}$, we have \begin{equation} \label{ExtSaSa} \Ext_{A_{\mathbb{K}}}^*(S_\beta,S_{\beta'}) \cong \begin{cases} \mathbb{K}\oplus \mathbb{K}[1] & \text{ if } \beta = \beta' \\ 0 & \text{ otherwise } \end{cases}. \end{equation} \end{proof} \section{Intrinsic formality of algebras and modules} \label{S:Formality algebra} Recall that $A_{\mathbb{K}}=HW^*(F,F;\mathbb{K})$ is isomorphic to the polynomial algebra $\mathbb{K}[u]$, where $\deg(u)=1-n$. From this point on, we will always work over $\mathbb{K}$, and write $A$ instead of $A_\mathbb{K}$. We want to show that all $A_\infty$-algebras whose cohomology algebra is $A$ are quasi-isomorphic to $A$ (which is to say that $A$ is {\em intrinsically formal}), in preparation for proving the analogous result for certain types of modules. Denoting by $|A|$ the algebra $A$ where we {\em ignore the grading}, we can define the Hochschild cohomology $HH^r(|A|,|A|)$ as the homology of $CC^r(|A|,|A|) := {\operatorname{Hom}}_\mathbb{K}(|A|^{\otimes r},|A|)$, for $r\geq 0$, with respect to the Hochschild differential, see for instance \cite{WeibelHA}. To keep track of the grading on $A$, one can define $$ CC^{r}(A,A[s]) := {\operatorname{Hom}}^s_\mathbb{K}(A^{\otimes r},A), $$ which consists of graded homomorphisms that increase the degree by $s\in \mathbb{Z}$ (we continue to use the cohomological convention under which $A[s]$ is obtained by {\em subtracting} $s$ from all degrees in $A$), see \cite{SeidelThomas}. Some references use the alternative notation $CC^{r+s}(A,A)^s$, see for instance \cite{SeidelBook}. 
The Hochschild differential preserves $s$, so the $CC^{r}(A,A[s])$ are subcomplexes of $CC^r(|A|,|A|)$. Hence, for each $s$ we have a direct sum of chain complexes $$ CC^*(|A|,|A|) = CC^{*}(A,A[s]) \oplus Q^{*,s} $$ where $Q^{r,s} \subset CC^*(|A|,|A|)$ consists of those homomorphisms that have no term of degree $s$. One can identify $Q^{r,s}$ with the quotient $CC^r(|A|,|A|) / CC^{r}(A,A[s])$. We can conclude that there are inclusions on cohomology $$ HH^{r}(A,A[s]) \subset HH^r(|A|,|A|). $$ \begin{remark} In general, none of the inclusions $$\bigoplus_{s\in \mathbb{Z}} CC^{r}(A,A[s]) \subset CC^r(|A|,|A|) \subset \prod_{s\in \mathbb{Z}} CC^{r}(A,A[s])$$ need be an equality. \end{remark} By \cite{WeibelHA}*{Corollary 9.1.5}, $HH^*(|A|,|A|) \cong \Ext^*_{|A|^e}(|A|,|A|)$, where $|A|^e = |A|\otimes_\mathbb{K} |A|^{\op}$ (this is isomorphic to $|A|\otimes_\mathbb{K} |A|$, since $|A|$ is commutative). Note that $\Ext_{|A|^e}(|A|,|A|)$ can be computed using any projective resolution of $|A|$ as an $|A|^e$-module. We use the {\em Koszul resolution} \begin{equation} \label{Koszul res} 0 \to |A|^e \stackrel{f}{\to} |A|^e \stackrel{g}{\to} |A| \to 0, \end{equation} where $f(a(u)\otimes b(u)) = a(u) u \otimes b(u) - a(u) \otimes u b(u)$ and $g(p(u)\otimes q(u)) = p(u) q(u)$. The existence of this 2-step resolution implies that $HH^r(|A|,|A|) = 0$ if $r \notin \{ 0,1\}$. Since $HH^{r}(A,A[s]) \subset HH^{r}(|A|,|A|)$, this implies that $HH^{r}(A,A[s]) = 0$ for all $s$ and for all $r\notin\{0,1\}$, in particular for $r\geq 3$. It is known that $A$ is intrinsically formal if $HH^{r}(A,A[2-r]) = 0$ for all $r\geq 3$, see \cite{SeidelThomas}*{Theorem 4.7}. We can thus conclude that the $\mathbb{Z}$-graded algebra $A$ is intrinsically formal. The additional vanishing for $r=2$ means that it is also not possible to deform the product structure on $A$.
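For the reader's convenience, here is the standard computation behind the vanishing just used; it is a routine application of the resolution \eqref{Koszul res} and is spelled out here rather than taken from the original argument:

```latex
% Apply Hom_{|A|^e}(-, |A|) to the Koszul resolution \eqref{Koszul res},
% dropping the term |A|.  Under the identification
% Hom_{|A|^e}(|A|^e, |A|) \cong |A|, \varphi \mapsto \varphi(1 \otimes 1),
% the induced map f^* is
(f^*\varphi)(1\otimes 1)
  = \varphi(u \otimes 1 - 1 \otimes u)
  = u\,\varphi(1\otimes 1) - \varphi(1\otimes 1)\,u
  = 0,
% since |A| is commutative.  Hochschild cohomology is therefore computed by
%   |A| \xrightarrow{\;0\;} |A| \to 0 \to \cdots,
% so HH^0(|A|,|A|) \cong HH^1(|A|,|A|) \cong |A|, and HH^r(|A|,|A|) = 0
% for all r \geq 2.
```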
The previous argument can be adapted to show that, if we collapse the $\mathbb{Z}$-grading of $A$ to a $\mathbb{Z}/2\mathbb{Z}$-grading, $A$ is still intrinsically formal. More specifically, we can take $$CC^{r}(A,A[\even]) = \bigoplus_{s \even} CC^{r}(A,A[s]) \quad \text{and} \quad CC^{r}(A,A[\odd]) = \bigoplus_{s \odd} CC^{r}(A,A[s]),$$ both of which are subcomplexes of $CC^{r}(|A|,|A|)$. We get a decomposition $$ HH^{r}(|A|,|A|) \cong HH^{r}(A,A[\even]) \oplus HH^{r}(A,A[\odd]). $$ In the $\mathbb{Z}/2\mathbb{Z}$-graded case, intrinsic formality of $A$ follows from the simultaneous vanishing $$ \begin{cases} HH^r(A,A[\even]) = 0 \text{, for all } r\geq 3 \text{ even} \\ HH^r(A,A[\odd]) = 0 \text{, for all } r\geq 3 \text{ odd} \end{cases} $$ which is again a consequence of the fact that $HH^r(|A|,|A|) = 0$ if $r \notin \{ 0,1\}$. We can conclude the following. \begin{proposition} \label{prop:A intrinsically formal} $A=\mathbb{K}[u]$ is intrinsically formal as a $\mathbb{Z}$-graded algebra and as a $\mathbb{Z}/2\mathbb{Z}$-graded algebra. \qed \end{proposition} We now discuss right modules over the graded algebra $A$. In a manner similar to the previous discussion, let $|A|$, $|M|$ and $|N|$ be the result of forgetting the $\mathbb{Z}$-gradings of the algebra $A$ and of right $A$-modules $M$ and $N$. Then, ${\operatorname{Hom}}_\mathbb{K}(|M|,|N|)$ is an $A$-bimodule and its Hochschild cochain complex is, for $r\geq 0$, \begin{align*} CC^r(|A|,{\operatorname{Hom}}_\mathbb{K}(|M|,|N|)) &:= {\operatorname{Hom}}_\mathbb{K}(|A|^{\otimes r},{\operatorname{Hom}}_\mathbb{K}(|M|,|N|)) \\ & \cong {\operatorname{Hom}}_\mathbb{K}(|M|\otimes_\mathbb{K} |A|^{\otimes r},|N|). 
\end{align*} Remembering the $\mathbb{Z}$-gradings, we can denote as before the homomorphisms of degree $s\in \mathbb{Z}$ by $$ CC^{r}(A,{\operatorname{Hom}}_\mathbb{K}(M,N)[s]) := {\operatorname{Hom}}^s_\mathbb{K}(A^{\otimes r},{\operatorname{Hom}}_\mathbb{K}(M,N)) \cong {\operatorname{Hom}}^s_\mathbb{K}(M\otimes_\mathbb{K} A^{\otimes r},N). $$ The Hochschild differential preserves $s$ and we get inclusions on cohomology $$ HH^{r}(A,{\operatorname{Hom}}_\mathbb{K}(M,N)[s]) \subset HH^r(|A|,{\operatorname{Hom}}_\mathbb{K}(|M|,|N|)). $$ Using again \cite{WeibelHA}*{Corollary 9.1.5}, we get that $HH^*(|A|,{\operatorname{Hom}}_\mathbb{K}(|M|,|N|)) \cong \Ext^*_{|A|^e}(|A|,{\operatorname{Hom}}_\mathbb{K}(|M|,|N|))$. The existence of the 2-step Koszul resolution \eqref{Koszul res} now implies that $HH^r(|A|,{\operatorname{Hom}}_\mathbb{K}(|M|,|N|))=0$ for $r\geq 2$ and for every $M,N$. Consequently, we get $HH^{r}(A,{\operatorname{Hom}}_\mathbb{K}(M,N)[s])= 0$ for $r\geq 2$ and for all $s\in \mathbb{Z}$. \begin{remark} It is worth pointing out that $HH^*(|A|,{\operatorname{Hom}}_\mathbb{K}(|M|,|N|))$ is also isomorphic to $\Ext^*_{|A|/\mathbb{K}}(|M|,|N|)$, the {\em relative} $\Ext$, see \cite{WeibelHA}*{Lemma 9.1.9}. \end{remark} Say that a $\mathbb{Z}$-graded right $A$-module $M$ is {\em intrinsically formal} if, for every $\mathbb{Z}$-graded right $A_\infty$-module $\mathcal{M}$ over $A$ such that the $A$-module $H^*\mathcal{M}$ is isomorphic to $M$, we have that $\mathcal{M}$ is quasi-isomorphic to $M$ (as $A$-modules). In an analogous manner to the Hochschild cohomology criterion for intrinsic formality of graded algebras discussed earlier, it can be shown that if $$HH^{r}(A,{\operatorname{Hom}}_\mathbb{K}(M,M)[1-r]) = 0$$ for all $r\geq 2$, then $M$ is intrinsically formal. What we saw above implies that every $\mathbb{Z}$-graded right $A$-module is intrinsically formal. 
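The module-theoretic vanishing just used can be made explicit in the same way as for the algebra itself; the following standard computation (ours, not part of the original text) records the two-term complex, where the two $|A|$-actions on ${\operatorname{Hom}}_\mathbb{K}(|M|,|N|)$ come from the source and target modules:

```latex
% Apply Hom_{|A|^e}(-, Hom_K(|M|,|N|)) to the Koszul resolution \eqref{Koszul res}.
% One obtains the two-term complex
%   Hom_K(|M|,|N|) --delta--> Hom_K(|M|,|N|) --> 0,
% where delta is the commutator with u:
\delta(\varphi)(m) \;=\; \varphi(m)\cdot u \;-\; \varphi(m \cdot u).
% Now delta need not vanish, and
%   HH^0 = \ker(\delta) = Hom_{|A|}(|M|,|N|),  HH^1 = \operatorname{coker}(\delta),
%   HH^r(|A|, Hom_K(|M|,|N|)) = 0 for all r >= 2,
% which is the vanishing required by the formality criterion above.
```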
The same argument can again be adapted to the case of $\mathbb{Z}/2\mathbb{Z}$-graded modules over $A$ (with its grading collapsed to $\mathbb{Z}/2\mathbb{Z}$). If $M$ and $N$ are $\mathbb{Z}/2\mathbb{Z}$-graded right $A$-modules, we can define cohomology groups $HH^{r}(A,{\operatorname{Hom}}_\mathbb{K}(M,N)[s])$ with $s\in \{0,1\}$. This time, we have a decomposition $$ HH^r(|A|,{\operatorname{Hom}}_\mathbb{K}(|M|,|N|)) \cong HH^{r}(A,{\operatorname{Hom}}_\mathbb{K}(M,N)) \oplus HH^{r}(A,{\operatorname{Hom}}_\mathbb{K}(M,N)[1]). $$ The sufficient condition for intrinsic formality of $M$ is now given by the simultaneous vanishing $$ \begin{cases} HH^r(A,{\operatorname{Hom}}_\mathbb{K}(M,M)[1]) = 0 \text{, for all } r\geq 2 \text{ even} \\ HH^r(A,{\operatorname{Hom}}_\mathbb{K}(M,M)) = 0 \text{, for all } r\geq 2 \text{ odd} \end{cases} $$ and this criterion is again met by the discussion above. We can conclude the following. \begin{proposition} \label{modules formal} All $\mathbb{Z}$-graded and all $\mathbb{Z}/2\mathbb{Z}$-graded right $A$-modules are intrinsically formal. \end{proposition} As in Section \ref{SS:Yoneda}, denote by $\mmod(A)$ the category of right $A$-modules (we do not mean $A_\infty$-modules). These modules are $\mathbb{Z}$- or $\mathbb{Z}/2\mathbb{Z}$-graded, depending on the context. Given two modules $M, N$, we have $$\hom^*_{\mmod(A)}(M,N) = \Ext^*_A(M,N)$$ (rather than just the usual $A$-module homomorphisms). The following is a consequence of the results of this section. \begin{corollary} \label{C:mod(A) is formal} Passing to cohomology gives a functor \begin{align*} H:\mmod^{A_\infty}(\mathcal{A}) &\to \mmod(A) \\ \mathcal{M} & \mapsto H^*(\mathcal{M}) \end{align*} which is a quasi-equivalence (meaning that it induces an equivalence of categories on cohomology). The category $\mmod(A)$ is equivalent to the cohomology category of $\mmod^{A_\infty}(\mathcal{A})$.
\end{corollary} \begin{proof} The fact that morphisms on the cohomology category of $\mmod^{A_\infty}(\mathcal{A})$ are given by $\Ext$ groups is explained in \cite{SeidelBook}*{Remark 2.15}. There is a composition of quasi-equivalences of dg-categories $$ \mmod(A) \to \mmod^{A_\infty}(A) \to \mmod^{A_\infty}(\mathcal{A}). $$ The fact that $\mathcal{A}$ is formal (by Proposition \ref{prop:A intrinsically formal}) implies that the functor on the right is a quasi-equivalence, see \cite{SeidelBook}*{Section 2f}. The functor on the left is given by inclusion (thinking of $\mmod(A)$ as a dg-category with trivial differentials), and it is a quasi-equivalence by Proposition \ref{modules formal}. The functor $H$ in the statement is a quasi-inverse for this composition. \end{proof} \section{Generation of categories of modules} \label{S:Generation modules} \begin{definition} Let $\mmod_{pr}(A)$ be the subcategory of $\mmod(A)$, whose objets are finite dimensional right $A$-modules (the subscript stands for {\em proper}). \end{definition} The fact that $\mathbb{C}$ is algebraically closed of characteristic zero implies that $\mathbb{K}$ is also algebraically closed, see \cite{FOOOCompactToricI}*{Appendix A}.% \footnote{In characteristic $p>0$, the polynomial $x^p - x - T^{-1}$ does not have roots in the Novikov field. See \cite{Kedlaya} for a discussion of the algebraic closure of the power series field in positive characteristic.} This will enable us to study the category $\mmod_{pr}(A)$ using Jordan normal forms. Recall that $A = \mathbb{K}[u]$, where $\deg(u) = 1-n$. Since the monotone Fukaya category is $\mathbb{Z}/2\mathbb{Z}$-graded, we will consider two cases, depending on the parity of $n$. \subsection{When $n$ is odd} Take an object $M \oplus N$ of $\mmod_{pr}(A)$, where $M$ is in degree 0 and $N$ is in degree 1. 
Since $\mathbb{K}$ is algebraically closed, $M$ has a splitting $$ M \cong \bigoplus_{i = 1}^m M_{\alpha_i}^{k_i} $$ where $\alpha_i\in \mathbb{K}$, $k_i\in \mathbb{Z}_+$ and $M_{\alpha}^{k}$ is the vector space $\mathbb{K}^k$ with a right action of $u$ by the $k\times k$ transposed Jordan block $$ \begin{pmatrix} \alpha \\ 1 & \alpha \\ & \ddots & \ddots \\ & & 1 & \alpha \\ & & & 1 & \alpha \end{pmatrix}. $$ The module $N$ also has a splitting $$ N \cong \bigoplus_{j = 1}^{n'} M_{\beta_j}^{l_j}[1] $$ for $\beta_j\in \mathbb{K}$ and $l_j\in \mathbb{Z}_+$. Denote the 1-dimensional module $M_\alpha^1$ by $S_\alpha$. \begin{lemma} \label{L:triangulated closure} For every $k\in \mathbb{Z}_+$, $M^k_\alpha$ is in the triangulated closure of $S_\alpha$. \end{lemma} \begin{proof} Observe that there are $A$-module homomorphisms $$ \varphi_\alpha^k \colon M_\alpha^k \to S_\alpha $$ obtained by projecting onto the last coordinate. We can think of an $A$-module homomorphism as a homomorphism of $A_\infty$-modules, and take its cone. Recall that $\Cone(\varphi_\alpha^k)$ is the right $A_\infty$-module over $A$ given by the chain complex $$ (M_\alpha^{k}[1]\oplus S_\alpha, \mu^1 = \varphi_\alpha^k), $$ with $\mu^2 = (\mu_{M_\alpha^{k}[1]}^2,\mu^2_{S_\alpha})$ and trivial higher $A_\infty$-maps, see \cite{SeidelBook}*{Section (3e)}. We have that $H^*\Cone(\varphi_\alpha^k) \cong M_\alpha^{k-1}[1]$, so $\Cone(\varphi_\alpha^k)$ is quasi-isomorphic to $M_\alpha^{k-1}[1]$. We can now argue by induction on $k$ to prove the statement in the lemma. Since there is a distinguished triangle $$ M^k_\alpha \to S_\alpha \to M^{k-1}_\alpha[1] \to M^k_\alpha [1], $$ axiom TR2 for triangulated categories (see for instance \cite{WeibelHA}*{Definition 10.2.1}) implies that there is also a distinguished triangle $$ S_\alpha \to M^{k-1}_\alpha[1] \to M^k_\alpha [1] \to S_\alpha[1], $$ and by induction on $k$ we get that $M^k_\alpha$ is in the triangulated closure of $S_\alpha$ for all $k\geq 1$.
\end{proof} We can now conclude the following. \begin{corollary} \label{S generate} The category $\mmod_{pr}(A)$ is generated by the collection of modules $\{S_\alpha\}_{\alpha \in \mathbb{K}}$. \end{corollary} \subsection{When $n$ is even} This case is more subtle, because now $u$ is an operator of odd degree. Take again an object $M\oplus N$ in $\mmod_{pr}(A)$, where $M$ is a finite dimensional $\mathbb{K}$-vector space in degree 0 and $N$ is finite dimensional in degree 1. Since we will be interested in split-generation of $\mmod_{pr}(A)$, we can assume that $\dim_\mathbb{K} M = \dim_\mathbb{K} N$, by taking a direct sum of $M$ or $N$ with a $\mathbb{K}$-vector space with trivial $u$-action, if necessary. Since $u$ has odd degree, by picking bases for $M$ and $N$, the $u$-action is given by a matrix of the form $\begin{pmatrix} 0 & R \\ S & 0 \end{pmatrix}, $ where $R$ and $S$ are square matrices. Observe that $u^2$ has even degree, so it gives endomorphisms of $M$ and $N$. As we saw in the case of $n$ odd, we can pick bases for $M$ and $N$ so that the right action of $u^2$ is represented by the transpose of a Jordan matrix. This means that we can assume that $RS$ and $SR$ consist of finitely many transposed Jordan blocks along the diagonal. A simple calculation yields the following. \begin{lemma} The eigenvalues of the $u^2$-action on $M$ and on $N$ are the same. If $v_1,\ldots,v_m$ is a basis for $M$ in which $u^2$ is represented by a transposed Jordan matrix $J^T$, then $v_1 \cdot u, \ldots, v_m\cdot u$ is a basis for $N$ in which $u^2$ is also given by $J^T$. Hence, in these bases, $R=I$ and $S=J^T$. \end{lemma} Given $\alpha \in \mathbb{K}$, let $\tilde S_{\alpha}$ be the right $A$-module consisting of the $\mathbb{K}$-vector space $\mathbb{K}\oplus \mathbb{K}[1]$, on which $u$ acts by the matrix $\begin{pmatrix} 0 & 1 \\ \alpha & 0 \end{pmatrix}$. 
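For readers who want a concrete check of the block structure used above, the following numpy sketch (with arbitrary test matrices; an illustration only, not part of the formal argument) verifies that the action of $u^2$ splits as $\mathrm{diag}(RS, SR)$ and that $RS$ and $SR$ share the same characteristic polynomial, hence the same eigenvalues, as asserted in the lemma:

```python
import numpy as np

# Arbitrary square test matrices standing in for the blocks R and S
rng = np.random.default_rng(1)
m = 3
R = rng.standard_normal((m, m))
S = rng.standard_normal((m, m))

# u acts by the block matrix [[0, R], [S, 0]] (odd degree), so u^2 is block diagonal
u = np.block([[np.zeros((m, m)), R], [S, np.zeros((m, m))]])
u2 = u @ u
assert np.allclose(u2[:m, :m], R @ S)   # u^2 restricted to M
assert np.allclose(u2[m:, m:], S @ R)   # u^2 restricted to N

# RS and SR have the same characteristic polynomial, hence the same spectrum
assert np.allclose(np.poly(R @ S), np.poly(S @ R))
```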
\begin{corollary} \label{S tilde generate} The category $\mmod_{pr}(A)$ is split-generated by the collection of modules $\{\tilde S_{\alpha}\}_{\alpha\in \mathbb{K}}$. \end{corollary} \begin{proof} The argument is analogous to the proof of Corollary \ref{S generate} above. \end{proof} \bibliographystyle{alpha}
\section{Conclusions \major{and future work}} \label{sec:conclus} In this paper, we focused on the problem of people \textit{looking at each other (LAEO)} in videos. We proposed LAEO-Net++\@\xspace, which takes as input head tracks and determines if the people in the track are LAEO. This is the first work that uses \textit{tracks} instead of bounding-boxes as input to reason about people on the whole track. LAEO-Net++\@\xspace consists of three branches, one for each character's tracked head and one for the relative position of the two heads. Moreover, we introduced two LAEO video datasets: UCO-LAEO\xspace and AVA-LAEO. Our experiments showed the ability of LAEO-Net++\@\xspace to correctly detect LAEO events and the temporal window where they happen. Our model achieves state-of-the-art results on the TVHID-LAEO dataset. Furthermore, we demonstrated the generality of our model by applying it to a social scenario, where we automatically infer the \textit{social relationship} between two people based on the frequency with which they LAEO, i.e.\@\xspace their \textit{friend}-ness, and showed that our metric can be useful for the guided search of interactions between characters in videos (i.e.\@\xspace interaction prediction). Finally, in Section 5 in the suppl.\ material we examine two other applications of LAEO-Net++\@\xspace, i.e.\@\xspace head pose classification and regression. As future work, we identify the following research directions: incorporating explicit 3D information of humans (\eg \cite{zhu2017face}) into the model and exploring other kinds of social situations (\eg \cite{salsa}). \section{Introduction} \label{sec:intro} } \IEEEPARstart{E}{ye} contact or `mutual gaze' is an important part of the non-verbal communication between two people~\cite{loeb1972mutual}. The duration and frequency of eye contact depend on the nature of the relationship and reflect the power relationships, the attraction or the antagonism between the participants~\cite{abele1986gaze}.
Therefore, in order to understand and interpret the social interactions that are occurring, it is important to capture this signal accurately. The importance of detecting people Looking At Each Other (LAEO) has already been recognized in a series of computer vision papers~\cite{marin2013ijcv,palmero2018laeo} as well as in other papers that study human gaze~\cite{chong2018eccv,recasens2015nips,recasens2017iccv,brau2018eccv}. LAEO is complementary to other forms of human non-verbal communication such as facial expressions, gestures, proxemics (distance), body language and pose, paralanguage (the tone of the voice, prosody), and interactions (e.g.\ hugging, handshake). Many of these have been the subject of recent papers \cite{marin2014mva,vondrick2016cvpr,gu2018ava,kukleva2020learning}. In this paper, we introduce a new deep convolutional neural network (CNN) for determining LAEO in video material, coined \textbf{LAEO-Net++\@\xspace}. Our approach answers the question of whether two characters are LAEO over a temporal period by using a spatio-temporal model, whereas previous models have only considered individual frames. The problem with frame-wise LAEO is that when characters blink or momentarily move their heads, they are considered non-LAEO, and this can severely affect the accuracy of the LAEO measurement over a time period. The model we introduce considers head tracks over multiple frames, and determines whether two characters are LAEO for a time period based on the pose of their heads and their relative position. An example is shown in Figure~\ref{fig:teaser}. \begin{figure*}[t] \centerline{ \includegraphics[width=1\linewidth]{Figure1.pdf} } \caption{\small{\textbf{Intimacy or hostility?} Head pose, along with body pose and facial expressions, is a rich source of information for interpreting human interactions.
Being able to automatically understand the non-verbal cues provided by the relative head orientations of people in a scene enables a new level of human-centric video understanding. Green and red pairs of heads represent LAEO and non-LAEO cases, respectively. Video source of second row: \url{https://youtu.be/B3eFZMvNS1U} }} \aftercaptions \label{fig:teaser} \end{figure*} We make the following contributions: first, we introduce a spatio-temporal LAEO model that consists of three branches, one for each character's tracked head and one for their relative position, together with a fusion block. This is described in Section~\ref{sec:model}. To the best of our knowledge, this is the first work that uses tracks as input and reasons about people LAEO in the whole track, instead of using only individual frames. Second, we introduce two new datasets (Section~\ref{sec:datasets}): (i)~{\bf UCO-LAEO\xspace}, a new dataset for training and testing LAEO. It consists of $129$ ($3$-$12$ sec) clips from four popular TV shows; and (ii)~{\bf AVA-LAEO}, a new dataset, which extends the existing large-scale AVA dataset~\cite{gu2018ava} with LAEO annotations for the training and validation sets. We evaluate the performance of the spatio-temporal LAEO model on both these new datasets (Section~\ref{sec:expers}). Third, we show that our model achieves the state of the art on the existing TVHID-LAEO dataset~\cite{marin2013ijcv} by a significant margin ($3\%$). Finally, in Section~\ref{sec:friends}, we show that the LAEO score can be used as a tool not only for demonstrating social relationships between people but also for guiding the search for human interactions in videos; we demonstrate both for one episode of the TV comedy `Friends'. A preliminary version of this work has been published in CVPR 2019~\cite{marin19cvpr}.
We significantly extend it in the following ways: \begin{itemize} \item \textbf{Design.} We propose LAEO-Net++\@\xspace: a new three branch head-track model for determining if two people are LAEO. LAEO-Net++\@\xspace is based on LAEO-Net\@\xspace~\cite{marin19cvpr} but it better decodes the head-tracks by using a different architecture and it better exploits the temporal continuity of videos by using $\mathcal{M}$-frame-long head-maps. The differences between the two models are described in detail in Section~\ref{sec:model} and experimentally compared in Section~\ref{sub:old_vs_new}. The results show that the proposed changes improve the performance and, overall, LAEO-Net++\@\xspace outperforms all other approaches. \item \textbf{Pre-training schemes.} We present three different settings with different levels of supervision to pre-train LAEO-Net++\@\xspace and discuss our findings (Section~\ref{sub:head-pose}). First, we use ground-truth labels for head orientation in videos (fully supervised setting). Second, we use the self-supervised Facial Attributes-Net~\cite{wiles2018bmvc} by extending it to video frames (self-supervised setting). Third, we use random initialization and demonstrate that pose can be learnt implicitly by the LAEO task alone (implicitly supervised setting). \item \textbf{Analysis and experiments.} We provide more insights and content to explain the performance of LAEO-Net++\@\xspace, as well as more experiments on all datasets (Sections~\ref{sub:head-map}-\ref{sub:cross}). Specifically, we experimentally demonstrate and discuss the benefits of the head-map branch, exploiting the temporal dimension, implicit or explicit self-supervised learning, and of applying LAEO-Net++\@\xspace to various datasets (Section~\ref{sub:discussion}). \item \textbf{Interaction prediction.} For one episode of the TV show `Friends' we use LAEO-Net++\@\xspace as a proxy for guiding the search for human interactions in videos.
In particular, we show that by using LAEO we can identify the social relationship between characters and whether two characters are interacting, even if they hardly co-exist (Section~\ref{sec:friends}). \end{itemize} \section{Related work} \label{sub:relworks} Gaze~\cite{recasens2015nips} and head pose~\cite{drouard2017hpose} are powerful tools to deal with the problem of determining the \textit{visual focus of attention} (VFoA) in a scene, i.e.\@\xspace what people are looking at. For instance, \cite{kobayashi2001unique} highlights the importance of the white part of the human eye (i.e.\@\xspace white sclera) in recognising gaze direction, enabling the extraordinary ability of humans to communicate just by using gaze signals. \paragraph{Visual focus of attention.} One classical approach for determining the VFoA is \cite{ba2009vfoa}, where the authors model the dynamics of a meeting group in a probabilistic way, inferring where the participants are looking. An improved version of this work is presented in \cite{ba2011pami}, where context information is used to aid in solving the task. \cite{zhang17pami} present a new gaze dataset and propose GazeNet, the first deep appearance-based gaze estimation method. More recently, \cite{brau2018eccv} discover 3D locations of regions of interest in a video by analysing human gaze. They propose a probabilistic model that simultaneously infers people's location and gaze as well as the item they are looking at, which might even be outside the image. \begin{figure*}[t] \centerline{ \includegraphics[width=1\linewidth]{Figure2.pdf} } \caption{\small{\textbf{Our three branch track LAEO-Net++\@\xspace}: It consists of the head branches (green), the head-map branch (red) and a fusion block, which concatenates the embeddings from the other branches and scores the track sequence as LAEO or not-LAEO with a fully connected layer (blue) using softmax loss.
In our experiments, we use head tracks of length $\mathcal{T}=10$ and head-maps of length $\mathcal{M}=10$}.} \label{fig:main} \end{figure*} \paragraph{Gaze direction.} In the literature, some works focus on `gaze following'~\cite{recasens2015nips,recasens2017iccv}. \cite{recasens2015nips} proposes a two-branch model that follows the gaze of a single person (head branch) and identifies the object being looked at (saliency branch). In a similar manner, LAEO-Net++\@\xspace makes use of spatial and temporal information throughout the video and processes the relation of people over time. We discuss the relationship between LAEO and gaze prediction learning in Section~\ref{sub:pre-training}. The work in \cite{chong2018eccv} focuses on images by proposing a network that estimates both the gaze direction and the VFoA. A coarse spatial location of the target face is provided in the form of a one-hot vector. In contrast, in our model, this is provided by an RGB image with Gaussian-like circles representing the centre and scale of heads and a colour-coding indicating the target pair (Figure~\ref{fig:headmaps_synpairs}~(a)). Thus, our representation offers a better resolution of the scene geometry and incorporates cues about head scales. Typically, in edited movies, an interaction is represented by alternating video shots. Therefore, sometimes the VFoA is not visible in the current frame or shot, but in a different one. This is addressed in~\cite{recasens2017iccv} with a model that reasons about human gaze and 3D geometric relations between different views of the same scene. \cite{masse2018pami} consider scenarios where multiple people are involved in a social interaction. Given that the eyes of a person are not always visible (\eg due to camera viewpoint), they estimate people's gaze by modelling the motion of their heads with a Bayesian model.
\cite{fischer18eccv} propose an appearance-based CNN that learns a direct image-to-gaze mapping using a large dataset of annotated eye images. \cite{krafka2016cvpr} present GazeCapture, a dataset collected from smartphone users, and use it to train iTracker, a CNN for gaze estimation that runs in real-time on commercial mobile devices. \cite{huang17mva} work on gaze estimation on tablets. They collect an unconstrained dataset and present a method for gaze estimation using a Random Forest regressor. \cite{zhu17iccv} propose a gaze transform layer to connect separate head pose and eyeball movement models. This does not suffer from overfitting to the head-gaze correlation and makes it possible to use datasets that exist for other tasks. \cite{li18eccv} propose a model for joint gaze estimation and action recognition in first-person video. They model gaze distribution using stochastic units, from which they generate an attention map. Then, this map guides the aggregation of visual features for action recognition. \cite{kellnhofer19iccv} collect a 3D gaze dataset simply by recording with an omni-directional camera subjects looking at a pre-defined point (indoors and outdoors). \paragraph{People Looking At Each Other. } A special case of VFoA is when subject-A's VFoA is subject-B, and subject-B's VFoA is subject-A. This is known as \textit{mutual gaze} or people \textit{looking at each other} (LAEO). This situation typically entails non-physical human interactions, but might precede or continue a physical one, \eg hand-shake before or after a conversation. In the context of \textit{Behaviour Imaging}, detecting LAEO events is key to understanding higher-level social interactions, as in autism in children~\cite{rehg2011behavior}. Furthermore, \cite{ajodan2019increased} shows that children diagnosed with autism spectrum disorder demonstrate increased eye contact with their parents compared to others, \eg a clinician, despite their social communication difficulties.
In the context of \textit{social interaction}, \cite{goffman2008public,loeb1972mutual} point out that one principal way of demonstrating interest in social interaction is the willingness of people to LAEO. The problem of detecting people LAEO in videos was introduced in \cite{marin2013ijcv}. After detecting and tracking human heads, \cite{marin2013ijcv} model and predict yaw and pitch angles with a Gaussian Process regression model. Based on the estimated angles and the relative position of the two heads, a LAEO score is computed per frame, and aggregated over the shot. Although we also model the head pose and relative position, LAEO-Net++\@\xspace estimates LAEO for a track over a temporal window, instead of a single frame. \cite{ricci2015iccv} address the problem of detecting conversational groups in social scenes by combining cues from body orientation, head pose and relative position of people. In a controlled scenario with just two people, \cite{palmero2018laeo} addresses the LAEO problem by using two calibrated cameras placed in front of the participants, making sure that there is an overlapping visible zone between both cameras. Recently, LAEO has been used as an additional task in the joint learning of LAEO and 3D gaze estimation~\cite{doosti2020mgaze3d}. The authors show that this leads to richer representations than solving each task separately, and more importantly that 3D gaze is a powerful cue for understanding relations. Thus, LAEO bridges the gap between 2D and 3D mutual gaze detection (more details in Section~\ref{sub:pre-training}). \paragraph{Interactions and relations. } Looking at a person is one of the dominant classes for human interactions in videos~\cite{Patron2010hi5,gu2018ava}. \cite{liu2019social} propose a network to capture long and short-term temporal cues, \cite{lv2018multi} classify relationships between characters, while \cite{kukleva2020learning} jointly learn interactions and relations between characters.
Instead, we treat mutual gaze as a cue to identify the existence of interactions and to determine the level of friendness between people. In this context, we think that a LAEO model (either pre-trained or fine-tuned and adapted to new data) can have an impact on other applications, such as detecting cartoons, animals (\eg cats, chimpanzees~\cite{schofield19chimp}) or other object classes (\eg cars) looking at each other. \section{LAEO-Net++\@\xspace } \label{sec:model} \input{datasets_table.tex} Given a video clip, we aim to determine if any two humans are \textit{Looking At Each Other} (LAEO). To this end, we introduce the LAEO-Net++\@\xspace, a three branch \textit{track} network, which takes as input two head tracks and the relative position between the two heads encoded by a head-map, and determines a confidence score on whether the two people are looking at each other or not, and the frames where LAEO occurs. The network is applied exhaustively over all pairs of simultaneous head tracks in the video clip. LAEO-Net++\@\xspace consists of three input branches, a fusion block, and a fully-connected layer and is illustrated in Figure~\ref{fig:main}. Two of the input streams determine the pose of the heads (green branches) and the third represents their relative position and scale (red branch). The fusion block combines the embeddings from the three branches and passes them through a fully-connected layer that predicts the LAEO classification (blue layer). The network uses spatio-temporal 3D convolutions and can be applied to the head tracks in the video. We next describe the components and report their specifications in Table 1 in the supplementary material. \paragraph{Head-pose branch. } It consists of two branches, one per person. The input of each branch is a tensor of $\mathcal{T}$ RGB frame crops of size $64 \times 64$ pixels, containing a sequence of heads of the same person. Each branch encodes the head frame crop, taking into account the head pose. 
The architecture of the head-pose branch is inspired by the encoder of the self-supervised method~\cite{wiles2018bmvc}. It consists of five conv layers, which are followed by a dropout and a flatten ones (green branches in Figure~\ref{fig:main}). The output of the flatten layer is L2-normalized before using it for further processing. Note that the head sequence of each person of the target pair will be encoded by this branch, obtaining two embedding vectors as a result. \paragraph{Head-map branch. } This branch embeds the relative position and relative distance to the camera (i.e.\@\xspace depth) between two head tracks over time using a head-map. In particular, we depict as 2D Gaussians all the heads detected at each frame of the $\mathcal{T}$-frames track (Figure~\ref{fig:headmaps_synpairs}~(a)), whose size is proportional to the head size (i.e.\@\xspace detection bounding-box). The different Gaussian sizes encode the relative 3D arrangement (depth) of people in the scene, i.e.\@\xspace smaller sizes indicate that people are further from the camera compared to those with bigger size. We define a $64 \times 64 \times \mathcal{M}$ map (for the whole $\mathcal{T}$-frames track) that encodes this information\footnote{ Assuming a 0-indexed list, the central frame of a sequence with length $T$ is the $\lfloor T/2 \rfloor$-th one. Specifically, for $T=10$ and $M=1$, the central frame is the one in position 5 (i.e.\@\xspace the 6th), whereas for $M=5$ we use 5 consecutive frames in the central part (taking into account the previous criterion). }. In addition to the two head tracks, this branch encodes information for other people in the scene. Depending on its size and scale, a third person could cut the \textit{gaze ray} between the two side people (Figure~\ref{fig:whyhmap}). Including this information helps the LAEO-Net++\@\xspace to distinguish such cases. 
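A minimal sketch of how such a head-map frame could be rendered (numpy; the Gaussian spread factor and function names are our illustrative choices, not values from the paper, and the actual head-maps additionally colour-code the target pair and other heads as in Figure~\ref{fig:headmaps_synpairs}~(a), whereas this sketch is single-channel):

```python
import numpy as np

def render_head_map(heads, size=64):
    """Render one head-map frame: a 2D Gaussian per detected head, centred
    on the head, with a spread proportional to the box size so that smaller
    (farther) heads leave a smaller footprint.
    heads: iterable of (cx, cy, w, h) in head-map pixel coordinates.
    The factor 0.5 is an illustrative choice, not a value from the paper."""
    ys, xs = np.mgrid[0:size, 0:size]
    hmap = np.zeros((size, size))
    for cx, cy, w, h in heads:
        sigma = 0.5 * max(w, h)  # scale encodes relative depth
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
        hmap = np.maximum(hmap, g)
    return hmap

# stack M consecutive frames into the 64 x 64 x M input of the branch
frames = [render_head_map([(20, 32, 8, 8), (44, 32, 8, 8)]) for _ in range(10)]
head_map_input = np.stack(frames, axis=-1)
assert head_map_input.shape == (64, 64, 10)
```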
This branch consists of a series of four convolutional layers: either 2D if we are modeling the relative head position only at the central frame or 3D if we target the whole $\mathcal{T}$-frame head track. To obtain the embedding of the head-map we flatten the output of the last conv layer and apply L2-normalization. \paragraph{Fusion block.} The embedding vectors obtained as the output of the different branches of the network are concatenated and further processed by one fully-connected layer with a dropout layer (blue layer in Figure~\ref{fig:main}). Then, a Softmax layer consisting of two output units (i.e.\@\xspace representing not-LAEO and LAEO classes) follows. \paragraph{LAEO loss function. } For training the LAEO predictor, we use the standard binary cross-entropy loss: \begin{equation} \label{eq:laeoLoss} \small{{\mathcal{L}_{\textrm{LAEO}} = - \left( c \cdot \log (\hat{p}) + (1-c) \cdot \log(1-\hat{p}) \right),}} \end{equation} where $c$ is the ground-truth class ($0$ for not-LAEO, $1$ for LAEO) and $\hat{p}$ is the predicted probability of the pair being LAEO. \paragraph{Differences between LAEO-Net\@\xspace~\cite{marin19cvpr} and LAEO-Net++\@\xspace. } LAEO-Net\@\xspace exploits the temporal information of videos by using as input two head tracks instead of single frames. Nevertheless, the relative position between the two heads (and any interleaving head) is encoded by a \emph{single} frame, i.e.\@\xspace one head map. We consider this a wasted opportunity, as this single frame may suffer from several issues, such as noise, inconsistency or detection problems. Therefore, we extend the temporal dimension of the head maps and consider multiple consecutive frames instead of single frames.
This leads to two main architecture changes in LAEO-Net++\@\xspace: (a) we consider $\mathcal{M}$-length head-maps, and (b) we decode the information from the temporal sequence of head-maps using a series of 3D conv layers instead of the 2D ones used in LAEO-Net\@\xspace~\cite{marin19cvpr} (bottom branch in Figure~\ref{fig:main}). Additionally, we change the architecture of the branches that process the head-tracks from the shallower, arbitrarily chosen architecture of LAEO-Net\@\xspace~\cite{marin19cvpr} to the deeper, inspired-by-\cite{wiles2018bmvc} architecture of LAEO-Net++\@\xspace. LAEO-Net++\@\xspace has more parameters and therefore a greater ability to generalize and learn better features. In Section~\ref{sub:old_vs_new} we present experiments demonstrating the benefit of all changes. \section{Datasets} \label{sec:datasets} In this section, we describe the LAEO datasets. First, we introduce two new datasets: UCO-LAEO\xspace and AVA-LAEO, and then, two other datasets: AFLW~\cite{koestinger11aflw}, and TVHID~\cite{Patron2010hi5}. AFLW is used for pre-training the head-pose branch and for generating synthetic data, while TVHID is used only for testing. The newly introduced UCO-LAEO\xspace and AVA-LAEO datasets are used both for training and testing LAEO-Net++\@\xspace. Table~\ref{tab:dbstats} shows an overview of the LAEO datasets. The new datasets with their annotations and the code for evaluation are available online at: \url{http://www.robots.ox.ac.uk/~vgg/research/laeonet/}. \subsection{The UCO-LAEO\xspace dataset } \label{sub:dat_laeo} We use four popular TV shows: `Game of Thrones', `Mr Robot', `Silicon Valley' and `The Walking Dead'. From these shows, we collect $129$ ($3$-$12$ seconds long) shots, annotate all the heads in each frame with bounding boxes, and then annotate each head pair as LAEO or not-LAEO (Figure~\ref{fig:datasets}~(top)). \paragraph{Annotation setup.
} We annotate all frames both at the frame level, i.e.\@\xspace \textit{does this frame contain any pair of people LAEO?}; and at the head level, i.e.\@\xspace we annotate all heads in a frame with a bounding-box and all the possible LAEO pairs. The visually ambiguous cases are assigned as `ambiguous' and we exclude them from our experiments. We split the $100$ LAEO shots into $77$ train, $8$ validation and $15$ test shots. This results in $\sim7.5$k training, $\sim1.2$k val and $\sim1.5$k test LAEO pairs (Table~\ref{tab:dbstats}). \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{Figure4.pdf} \end{center} \beforecaptions \caption{\small{\textbf{(top) UCO-LAEO\xspace and (bottom) AVA-LAEO datasets.} Examples of frames and LAEO head pair annotations included in our new datasets. Different scenarios, people's clothing, background clutter and diverse video resolutions, among other factors, make them challenging. }} \label{fig:datasets} \end{figure} \subsection{AVA-LAEO dataset } \label{sub:dat_ava} AVA-LAEO consists of movies from the training and validation sets of the `Atomic Visual Actions' dataset (AVA v2.2)~\cite{gu2018ava}. The AVA frames are annotated (every one second) with bounding-boxes for $80$ actions, without LAEO annotations; therefore, we enhance the labels of the existing (person) bounding-boxes in a subset of the train and val sets with LAEO annotations. \paragraph{Annotation setup. } From the train and val sets of AVA, we select the frames with more than one person annotated as \textit{`watch (a person)'}, resulting in a total of $40,166$ and $10,631$ frames, respectively. We only consider the cases where both the watcher and the watched person are visible (since the watched person may not be visible in the frame). For annotating, we follow the same process as in UCO-LAEO\xspace, i.e.\@\xspace we annotate each pair of human bounding boxes at the frame level as LAEO, not-LAEO, or ambiguous.
This results in $\sim19$k LAEO and $\sim118$k not-LAEO pairs for the training set and $\sim5.8$k LAEO and $\sim28$k not-LAEO pairs for the val set (Table~\ref{tab:dbstats}). We refer to this subset as AVA-LAEO. Figure~\ref{fig:datasets}~(bottom) shows some LAEO pair examples. \subsection{Additional datasets} \paragraph{AFLW dataset. } \label{subsec:aflw} We use the `Annotated Facial Landmarks in the Wild' dataset~\cite{koestinger11aflw} to (a) pre-train the head-pose branch (first stage, Section~\ref{sub:pretrain-AFLW}), and (b) generate synthetic data for training (second stage, Section~\ref{sub:finetune}). It contains about $25$k annotated faces in images obtained from Flickr, where each face is annotated with a set of facial landmarks. From those landmarks, the head pose (i.e.\@\xspace yaw, pitch and roll angles) is estimated. To create a sequence of head-crops, we replicate the input image $\mathcal{T}$ times. We keep the two middle replicas unchanged and randomly perturb the others, i.e.\@\xspace small shifts, zooming and brightness changes. \paragraph{TVHID-LAEO. } \label{subsec:tvhid} TVHID~\cite{Patron2010hi5} was originally designed for human interaction recognition in videos. It contains $300$ video clips with five classes: hand-shake, high-five, hug, kiss and negative. We use the LAEO annotations at the shot level from \cite{marin2013ijcv}, which result in $443$ shots with $331$ LAEO and $112$ not-LAEO pairs (Table~\ref{tab:dbstats}). \section{Head detection and tracking} \label{sub:d_t} Unlike most methods that rely on faces, LAEO-Net++\@\xspace requires {\em head} tracks as input. Here, we train the Single Shot Multibox Detector (SSD)~\cite{liu2016ssd} from scratch and obtain head detections. Then, we group them into tracks using the linking algorithm from~\cite{kalogeiton2017action} (see Section 2 in the supplementary material). \section{Training LAEO-Net++\@\xspace} \label{sec:training} We describe here our two-stage training procedure.
The first stage involves only the head-pose branches (Section~\ref{sub:head-pose}). We consider three initialization options for these branches: (i) fully-supervised pre-training with annotated head-pose data (Section~\ref{sub:pretrain-AFLW}), (ii) self-supervised pre-training using Facial Attributes-Net \cite{wiles2018bmvc} (Section~\ref{sub:pretrain-ss}), or (iii) completely random initialization, i.e.\@\xspace no pre-training (Section~\ref{sub:pretrain-random}). In the second stage, we train LAEO-Net++\@\xspace from scratch, i.e.\@\xspace head-map and upper layers (Section~\ref{sub:finetune}). \subsection{Head-pose branches} \label{sub:head-pose} In general, humans can infer \textit{where} a person is looking just based on the head pose, without even seeing the eyes~\cite{langton2004influence}. This shows that important information is encoded in the head orientation. In the literature, several works model the head orientation \cite{ruiz2018hpose} or the eye gaze~\cite{recasens2015nips}. Note that using the actual eye gaze is not always an option, even with multiple frames as input, as there is no guarantee that the eyes are fully visible, i.e.\@\xspace due to image resolution or self-occlusions. Therefore, in this work we model gaze just based on head orientation. In particular, we either (i)~pre-train a model that learns the head orientation using the head angles (Section~\ref{sub:pretrain-AFLW}), or (ii)~use the self-supervised Facial Attributes-Net that models the head orientation implicitly (Section~\ref{sub:pretrain-ss}), or (iii)~use a random initialization for the LAEO-Net++\@\xspace that manages to learn the head-pose orientation (Section~\ref{sub:pretrain-random}). \subsubsection{Fully-supervised pre-training} \label{sub:pretrain-AFLW} We model head orientation with three angles (in order of decreasing information): (a)~yaw, i.e.\@\xspace looking right, left, (b)~pitch, i.e.\@\xspace looking up, down, and (c)~roll, i.e.\@\xspace in-plane rotation.
We use this modelling to pre-train the head-pose branches. \paragraph{Loss function of head-pose pre-training. } Let $(\alpha, \beta, \gamma)$ be the yaw, pitch and roll angles of a head, respectively. We define one loss for estimating each pose angle: $\mathcal{L}_{\alpha}$, $\mathcal{L}_{\beta}$, $\mathcal{L}_{\gamma}$ and model them with the smooth-$L1$ loss~\cite{ren2015fastrcnn}. Given that the yaw angle is the dominant one, in addition to these losses, we include a term that penalizes an incorrect estimation of the sign of the yaw angle, i.e.\@\xspace failing to decide if the person is looking left or right ($\mathcal{L}_{s}$). It is defined as: \begin{equation} \mathcal{L}_{s} = \max(0, - \mathrm{sign}(\alpha) \cdot \mathrm{sign}(\hat{\alpha}) ) , \end{equation} where $\mathrm{sign}(\alpha)$ is the sign function (i.e.\@\xspace returns $+1$ for positive inputs, $-1$ for negative inputs, and $0$ if the input is $0$) applied to the yaw angle; and $\hat{\alpha}$ is the ground-truth angle. In practice, as the gradient for the sign function is always 0, it is implemented by using $\mathrm{tanh}(\cdot)$ (hyperbolic tangent). Therefore, the loss function $\mathcal{L}_h$ for training the head-pose branch for LAEO purposes is given by: \begin{equation} \label{eq:headloss} \mathcal{L}_h = w_{\alpha} \cdot \mathcal{L}_{\alpha} + w_{\beta} \cdot \mathcal{L}_{\beta} + w_{\gamma} \cdot \mathcal{L}_{\gamma} + w_s \cdot \mathcal{L}_s, \end{equation} where $w_x$ are positive weights chosen through cross-validation at training. In our experiments, we use: $w_{\alpha}=0.6$, $w_{\beta}=0.3$, $w_{\gamma}=0.1$, $w_s=0.1$, as $w_{\alpha}$ is the dominant one. Note that the weights do not necessarily add to $1$. Please refer to Section 3 in the supplementary material for ablations on the two losses.
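The combined head-pose loss above can be sketched as follows (a plain-Python illustration; the $\mathrm{tanh}$ sharpness $k$ and the function names are our assumptions, not values given in the text):

```python
import math

def smooth_l1(x, y):
    """Smooth-L1 (Huber) distance used for each angle regression term."""
    d = abs(x - y)
    return 0.5 * d ** 2 if d < 1.0 else d - 0.5

def sign_loss(alpha, alpha_gt, k=10.0):
    """L_s: penalty for predicting the wrong left/right yaw direction.
    tanh(k * x) is the smooth stand-in for sign(x); the sharpness k is
    our assumption, not a value given in the text."""
    return max(0.0, -math.tanh(k * alpha) * math.tanh(k * alpha_gt))

def head_pose_loss(pred, gt, w=(0.6, 0.3, 0.1), w_s=0.1):
    """L_h: weighted sum over (yaw, pitch, roll) plus the yaw-sign term."""
    l_angles = sum(wi * smooth_l1(p, g) for wi, p, g in zip(w, pred, gt))
    return l_angles + w_s * sign_loss(pred[0], gt[0])

# a prediction with the wrong yaw sign pays the extra penalty
assert head_pose_loss((0.4, 0.1, 0.0), (0.4, 0.1, 0.0)) == 0.0
assert head_pose_loss((-0.4, 0.1, 0.0), (0.4, 0.1, 0.0)) > 0.0
```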
\subsubsection{Self-supervised pre-training} \label{sub:pretrain-ss} The goal is to use a (self-supervised) network that learns head-pose orientation without being explicitly trained on it. To this end, we use a modified version of the self-supervised Facial Attributes-Net from~\cite{wiles2018bmvc}. The Facial Attributes-Net uses a single frame, whereas we are interested in $\mathcal{T}$ input video frames. Therefore, we inflate the filters from the Facial Attributes-Net to $\mathcal{T}$ consecutive frames, by replicating their weights. Moreover, we change the input size of the Facial Attributes-Net from $256\times256$ to the input of LAEO-Net++\@\xspace, i.e.\@\xspace $64\times64$. \subsubsection{Random initialization} \label{sub:pretrain-random} For reference, we also initialize the LAEO-Net++\@\xspace with random values for the weights. Albeit randomly initialized, LAEO-Net++\@\xspace manages to learn head pose and orientation implicitly by solving the LAEO task alone (Section~\ref{sub:pre-training}). \subsection{Training the LAEO-Net++\@\xspace} \label{sub:finetune} We train LAEO-Net++\@\xspace with both real and synthetic data. We use data augmentation techniques, such as image perturbations, translations, brightness changes, zoom changes, etc.\@\xspace For the first $N=2$ epochs, we use only synthetic data, and then we alternate between real and synthetic data. To improve the performance of the model, we use hard negative mining. We deploy the curriculum learning strategy of \cite{Nagrani18c}, which modulates the difficulty of the hard negatives incorporated into training. In our experiments, the value of the negative difficulty parameter~\cite{Nagrani18c} is increased after $2$ epochs, allowing more difficult samples as its value increases. \begin{figure}[t!]
\centerline{% \begin{tabular}{c@{}c@{}} \includegraphics[width=0.52\linewidth]{Figure5a.pdf}& \includegraphics[width=0.46\linewidth]{Figure5b.pdf} \\ \small{(a)} & \small{(b)} \\ \end{tabular}} \beforecaptions \caption{\small{\textbf{(a)~Head-maps and (b)~augmentation of LAEO samples.} (a)~We analyse all head pairs with a color coding: \textit{blue} for the left, \textit{green} for the right and \textit{red} for the remaining heads, such as middle ones, i.e.\@\xspace not considered for evaluation. (b)~We generate synthetic LAEO negative training data (red boxes) from positive pairs (green box), based on the orientation or the relative position of the heads. }} \label{fig:headmaps_synpairs} \end{figure} \paragraph{Synthetic data. } For generating synthetic data we use images with head-pose information. To generate positive samples, we select pairs of heads whose angles are compatible with LAEO and, at the same time, generate consistent geometrical information. To generate negative samples, we either (i)~change the geometry of the pair, i.e.\@\xspace making LAEO no longer possible, \eg by mirroring just one of the two heads, or (ii)~select pairs whose poses are incompatible with LAEO, \eg both looking in the same direction. Figure~\ref{fig:headmaps_synpairs}~(b) shows some artificially generated pairs. \section{Evaluation and scoring methodology} \label{sec:metrics} \paragraph{LAEO-classification AP} is the metric we use to evaluate the LAEO predictions. Similar to object detection, a detection is correct if its intersection-over-union overlap (IoU) with the ground-truth box is $>0.5$ \cite{voc}. A detected pair is correct if both heads are correctly localized and its label (LAEO, not-LAEO) is correct. The performance is Average Precision (AP) computed as the area under the Precision-Recall (PR) curve.
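The pair-correctness criterion above can be sketched as follows (boxes as (x1, y1, x2, y2) tuples; the helper names are ours, not from any released code):

```python
def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def pair_correct(det_pair, gt_pair, det_label, gt_label, thr=0.5):
    """A detected pair counts as correct if both heads overlap their
    ground-truth heads with IoU > thr and the LAEO/not-LAEO label matches."""
    (d1, d2), (g1, g2) = det_pair, gt_pair
    return (iou(d1, g1) > thr and iou(d2, g2) > thr
            and det_label == gt_label)
```

AP is then the area under the precision-recall curve obtained by ranking pairs by their LAEO score.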
Depending on the available ground-truth annotations, we measure AP at frame level, considering each pair as an independent sample, or at shot level, if more detailed annotations are not available. Frame level is used for \textit{UCO-LAEO\xspace} and \textit{AVA-LAEO} and, following previous work~\cite{marin2013ijcv,masse2018pami}, shot level for \textit{TVHID}. \paragraph{Scoring methodology. } Given that the level of (ground truth) annotation differs between the three datasets, we describe how we use the LAEO-Net++\@\xspace outputs to obtain the final scores, either at the shot or at the frame level. We test LAEO-Net++\@\xspace on pairs of head-tracks (of length $\mathcal{T}=10$), obtain one LAEO score for each track-pair, and assign the LAEO score to the head-pair in the middle frame. The scoring process for each dataset is as follows: \begin{enumerate}[label=(\roman*)] \item \textit{UCO-LAEO\xspace}: Since the bounding boxes for the heads are available for each frame, the LAEO-Net++\@\xspace is applied directly to these head tracks (no detections are used). To account for the $\mathcal{T}/2$ frames at the beginning (resp.\ end) of a track, we propagate the score from the middle frame. \item \textit{AVA-LAEO:} We run the head tracker and apply LAEO-Net++\@\xspace on these tracks. AVA-LAEO contains pair annotations for \textit{human} bounding-boxes (instead of heads); hence, we compare each head pair against the ground-truth human pairs using intersection over head area (instead of IoU). \item \textit{TVHID:} We run the head tracker and apply LAEO-Net++\@\xspace on the tracks. We compute a LAEO score as the max of smoothed scores in a shot; the smoothed score is the average of a moving temporal window (of length five) along the track. \end{enumerate} \section{Experimental results} \label{sec:expers} In this section, we experimentally evaluate the effectiveness of LAEO-Net++\@\xspace for determining people LAEO.
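As a concrete sketch of the shot-level scoring of Section~\ref{sec:metrics} for TVHID (moving-average smoothing of length five along each track, then the maximum over the shot), assuming per-frame LAEO scores are already available:

```python
import numpy as np

def shot_laeo_score(track_scores, win=5):
    """Shot-level LAEO score: smooth each track-pair's per-frame scores
    with a moving average of length `win`, then take the maximum over
    all track pairs in the shot."""
    smoothed = []
    for scores in track_scores:          # one score array per track pair
        s = np.asarray(scores, dtype=float)
        if len(s) >= win:
            kernel = np.ones(win) / win
            s = np.convolve(s, kernel, mode='valid')
        smoothed.append(s.max())
    return max(smoothed)
```

The smoothing suppresses isolated spurious high scores, so a single-frame false positive cannot dominate the shot-level decision.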
Note that the model is trained either on UCO-LAEO\xspace or on AVA-LAEO. Here, we study the impact of all training and architecture choices. First, we examine the importance of the head-map branch, the length $\mathcal{T}$ of the head-tracks, and the length $\mathcal{M}$ of the head-map (Sections~\ref{sub:head-map}-\ref{sub:Mlength}). Then, we assess the importance of the different pre-training schemes (Section~\ref{sub:pre-training}). In Section~\ref{sub:res_newdatasets} we examine the performance of LAEO-Net++\@\xspace on the two new test datasets, UCO-LAEO\xspace and AVA-LAEO. We then analyse different domains by performing a cross-dataset evaluation (Section~\ref{sub:cross}). In Section~\ref{sub:old_vs_new} we provide an experimental comparison between LAEO-Net\@\xspace and LAEO-Net++\@\xspace, and in Section~\ref{sub:discussion} we provide a summary of our findings. Finally, in Section~\ref{sub:restvhid}, we compare LAEO-Net++\@\xspace to LAEO-Net\@\xspace~\cite{marin19cvpr} and to other state-of-the-art methods on the UCO-LAEO\xspace, AVA-LAEO, and TVHID-LAEO datasets. \paragraph{Implementation details.} LAEO-Net++\@\xspace is implemented with Keras~\cite{chollet2015keras} using TensorFlow as backend. All implementation details can be found in Section 1.2 in the supplementary material. \subsection{Importance of the head-map} \label{sub:head-map} We evaluate LAEO-Net++\@\xspace with and without the head-map branch (Table~\ref{tab:ablation2}). Adding it improves the performance (from $80.3\%$ to $\UCOLAEOscoreTrUCOSSMA\%$ for $\mathcal{T}{=}10$), as it learns the spatial relation between heads. \paragraph{Comparison with the geometry branch baseline.} To assess the quality of the head-maps branch, we consider a baseline: the \textit{geometrical information branch}, where the relative position of two heads over time is encoded by their geometry.
It embeds the relative position between two head tracks over time (relative to a $(1,1)$ normalized reference system), and the relative scale of the head tracks. The input is a tuple $(dx, dy, s_{r})$, where $dx$ and $dy$ are the $x$ and $y$ components of the vector that goes from the left head $L$ to the right one $R$, and $s_{r} = s_{L}/s_{R}$ is the ratio between the scales of the left and right heads. The branch consists of two fc layers with 64 and 16 hidden units, and it outputs a vector of 16 dimensions encoding the geometrical relation between the two target heads. LAEO-Net++\@\xspace with the geometry branch results in $1\%$ less classification AP than with the head-maps branch. This is expected; even though both branches encode the same information (i.e.\@\xspace the relative position of the two heads), the head-maps branch provides a richer representation of the scene, as it encodes information for all existing heads and, therefore, results in better AP. Note that using both the head-map \textit{and} the geometry branches (in addition to the head-pose branches) does not lead to any further improvement, as the combination of these two branches just increases the number of parameters without providing additional information. Thus, we conclude that LAEO-Net++\@\xspace is the most effective architecture in terms of AP performance. \subsection{Temporal window $\mathcal{T}$} \label{sub:Klength} To assess the importance of a temporal window of $\mathcal{T}$ frames compared to using a single frame, we vary $\mathcal{T}$ and train and evaluate LAEO-Net++\@\xspace with $\mathcal{T}=1,5,10$. Table~\ref{tab:ablation2} shows that there is an improvement in AP performance of $1.5\%$ when $\mathcal{T}$ increases from only 1 to 5 frames, and a significant improvement of $2.9\%$ when $\mathcal{T}$ increases from 1 to 10 frames (we found no improvement for $\mathcal{T} > 10$). In the remainder of this work, we use $\mathcal{T}=10$ frames.
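The geometry-branch input $(dx, dy, s_r)$ described earlier in this section can be sketched as follows (head boxes as (x, y, w, h) in normalized coordinates; taking the box height as the scale is our assumption, since the text does not fix the scale definition):

```python
def geometry_feature(left_box, right_box):
    """Input tuple (dx, dy, s_r) for the geometry-branch baseline.
    A head box is (x, y, w, h) with coordinates normalized to [0, 1];
    the scale of a head is taken here as its box height."""
    lx, ly, lw, lh = left_box
    rx, ry, rw, rh = right_box
    # Vector from the left-head centre to the right-head centre.
    dx = (rx + rw / 2) - (lx + lw / 2)
    dy = (ry + rh / 2) - (ly + lh / 2)
    s_r = lh / rh                        # relative scale s_L / s_R
    return (dx, dy, s_r)
```

Per frame this is a 3-vector, so a $\mathcal{T}$-frame track pair yields a $3\mathcal{T}$-dimensional input to the two fc layers of the baseline.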
\subsection{Length of Head-map $\mathcal{M}$} \label{sub:Mlength} To assess the importance of the length of the head-map $\mathcal{M}$ compared to using a single frame, we vary $\mathcal{M}$ and train and evaluate LAEO-Net++\@\xspace with $\mathcal{M}=1,5,10$ with various pre-training schemes. Table~\ref{tab:ablation45} shows the results for UCO-LAEO\xspace and AVA-LAEO. We observe that there is a significant performance improvement of approximately $5\%$ when increasing the length of the head-map from 1 to 10 for all cases. Therefore, for the remainder of this work, we use $\mathcal{M}=10$. \input{tabs} \subsection{Pre-training schemes} \label{sub:pre-training} We examine the three different settings for pre-training LAEO-Net++\@\xspace: fully supervised, where we pre-train using ground-truth labels for head orientations (Section~\ref{sub:pretrain-AFLW}); self-supervised, where we employ a video model learnt to solve another task (Section~\ref{sub:pretrain-ss}); and implicit supervision, where we use random initialization (Section~\ref{sub:pretrain-random}). Table~\ref{tab:ablation45} reports the \%AP results. For the lowest head-map length ($\mathcal{M}=1$), random initialization performs similarly to the other models. This shows that for $\mathcal{M}=1$ there is sufficient data to train the network for the LAEO task; however, for higher values of $\mathcal{M}$ there is not enough training data. It is, therefore, interesting to investigate the learnt properties of the network when $\mathcal{M}=1$. To this end, we evaluate LAEO-Net++\@\xspace trained with random initialization on AFLW, which has ground-truth labels for the head orientations. We project the predicted head-embeddings onto a 2D space using Uniform Manifold Approximation and Projection (UMAP) for dimension reduction~\cite{umap} and illustrate it in Figure~\ref{fig:headembed}. LAEO-Net++\@\xspace groups the heads based on their orientation (we depict discretized angles).
Specifically, we illustrate the head embeddings after one, ten and twenty epochs of training and observe that the more LAEO-Net++\@\xspace is trained, the more separate the head clusters become. Thus, we conclude that to solve an explicit task, i.e.\@\xspace people LAEO, LAEO-Net++\@\xspace learns an additional task, i.e.\@\xspace estimating head pose (implicit supervision). For longer temporal head-maps, \eg $\mathcal{M}{=}10$, the self-supervised model outperforms the other ones by a small margin ($1$-$3\%$). This is interesting as one might expect the fully supervised one to prevail. This is probably due to the size and variety of the training data: the self-supervised model has been trained on a larger dataset~\cite{voxceleb2} and with greater pose variation than the one in AFLW. Overall, the self-supervised pre-training outperforms the rest; hence, in the remainder of this work we use this. \vspace{-0.2cm} \paragraph{Relation to gaze direction.} An alternative pre-training scheme would be to use gaze direction models as initialization for LAEO-Net++\@\xspace. For instance, the head-branch could be initialized by the one from~\cite{recasens2015nips}, as both encode information about the head pose and orientation. Similarly, LAEO-Net++\@\xspace could be used as initialization for gaze direction models~\cite{recasens2015nips,recasens2017iccv}. Moreover, LAEO-Net++\@\xspace could be adapted to infer person-wise VFoA, for instance by replacing one head-track branch by a saliency predictor~\cite{recasens2015nips} or a transformation pathway~\cite{recasens2017iccv}. A possible extension would be to add saliency prediction in LAEO-Net++\@\xspace as additional task for joint training with the LAEO. Another line of work would be to combine the self-supervised LAEO pre-training with 3D gaze estimation~\cite{doosti2020mgaze3d} to scale-up gaze estimation or gaze following. 
\subsection{Results on UCO-LAEO\xspace and AVA-LAEO} \label{sub:res_newdatasets} Table~\ref{tab:ablation45} reports the results when evaluating LAEO-Net++\@\xspace on UCO-LAEO\xspace and AVA-LAEO. The performance is $86.7\%$ and $68.7\%$ when training and testing on UCO-LAEO\xspace and AVA-LAEO, respectively. These results reveal that there exists a significant gap in the performance between the two datasets. This is due to the different nature of AVA-LAEO compared to other datasets: (1) head annotations are not provided (just human bounding-boxes every 1 second); (2) it contains challenging visual concepts, such as (a) low-resolution movies, (b) many people in a scene, (c) blurry, small heads, and (d) particular clothing styles, \eg several people wearing hats (western, Egyptian, turbans, etc.\@\xspace). Despite these difficulties, LAEO-Net++\@\xspace achieves AP=$68.7\%$. Moreover, to assess the difficulty of these datasets and the effectiveness of LAEO-Net++\@\xspace, we compare it to chance-level classification. LAEO-Net++\@\xspace outperforms chance level by a large margin: $\times 2$ for UCO and $\times 4$ for AVA (Table~\ref{tab:results}). When applying LAEO-Net++\@\xspace on UCO and AVA we obtain the results of Figure~\ref{fig:res_datasets}, where we display some of the highest-ranked pairs of people LAEO. We observe that LAEO-Net++\@\xspace leverages the head orientations and their temporal consistency and accurately determines the frames where people are LAEO. We hope that LAEO-Net++\@\xspace with these two datasets will provide solid baselines and help future research in this area. \paragraph{Impact of the detection and tracking errors on the AP.} LAEO-Net++\@\xspace ($\mathcal{T}{=}10$, $\mathcal{M}{=}10$) achieves an AP=68.7\% when evaluated on all annotated pairs of AVA-LAEO. In contrast, if we compute the LAEO classification accuracy only on the subset of \emph{detected pairs}, the AP increases to 79.8\%.
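For the AVA-LAEO evaluation, detected head pairs are matched to ground-truth \textit{human} boxes using intersection over head area rather than IoU (Section~\ref{sec:metrics}); a minimal sketch of this matching rule, with boxes as (x1, y1, x2, y2) and helper names of our choosing:

```python
def inter_area(a, b):
    # Intersection area of two axis-aligned boxes (x1, y1, x2, y2).
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def head_in_human(head, human, thr=0.5):
    """Match a detected head against a ground-truth *human* box using
    intersection over the head area. Plain IoU would be tiny here,
    because a head covers only a small fraction of a person box."""
    area = (head[2] - head[0]) * (head[3] - head[1])
    return inter_area(head, human) / area > thr
```

A head fully contained in the person box scores 1.0 under this criterion, whereas its IoU with the person box would be far below the usual 0.5 threshold.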
\vspace{-3mm} \subsection{Cross-dataset evaluation} \label{sub:cross} Here, we examine the generalization of LAEO-Net++\@\xspace across different domains. To this end, we examine the performance when initializing the weights with one dataset and fine-tuning (and testing) on another dataset, i.e.\@\xspace pre-training on UCO (and fine-tuning on AVA) leads to 67.0\% AP, whereas pre-training on AVA (and fine-tuning on UCO) leads to 84.5\%. Interestingly, we observe that this cross-dataset scheme performs very well, resulting in classification performances similar to the ones with no change in domain: for UCO there is a drop of only 2.2\% ($84.5\%$ vs $86.7\%$), and for AVA the drop is 1.7\% ($67.0\%$ vs $68.7\%$; Table~\ref{tab:ablation45}). These results show that the domain shift~\cite{torralba11cvpr} definitely affects the performance, and that for solving the LAEO task, the pre-training is less important than the actual data for fine-tuning. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{Figure7.pdf} \end{center} \beforecaptions \caption{\small{\textbf{LAEO-Net++\@\xspace results on UCO-LAEO\xspace (top) and AVA-LAEO (bottom).} For different scenarios, backgrounds, head poses etc.\@\xspace, in most cases LAEO-Net++\@\xspace successfully determines if two people are LAEO (green boxes). }} \label{fig:res_datasets} \end{figure} \subsection{Comparison between LAEO-Net\@\xspace and LAEO-Net++\@\xspace} \label{sub:old_vs_new} We examine the differences in performance between LAEO-Net\@\xspace and LAEO-Net++\@\xspace for the \textit{same} setting (see the paragraph on the differences in Section~\ref{sec:model}). Table~\ref{tab:v0vsv1} reports the \%AP results when training and testing the two networks on UCO-LAEO\xspace and AVA-LAEO.
\textit{Single-frame head-map: } LAEO-Net\@\xspace results in AP = 79.5\% for UCO and 50.6\% for AVA, whereas for the same setting replacing the head-track architecture of LAEO-Net\@\xspace with the new one results in AP = 80.2\% for UCO and 59.3\% for AVA, i.e.\@\xspace absolute improvements of 0.7\% for UCO and 8.7\% for AVA. These results suggest that the new architecture helps determine the mutual gaze between people; this is especially demonstrated by the big boost on AVA, suggesting that the new model handles difficult scenes and scenarios better than the old one, given the more challenging nature of AVA. Additionally, using the self-supervised pre-training for LAEO-Net++\@\xspace leads to AP = 81.5\% for UCO and 59.8\% for AVA, i.e.\@\xspace absolute improvements of 2\% for UCO and 9.2\% for AVA w.r.t.~\cite{marin19cvpr}, showing that the proposed self-supervised pre-training of LAEO-Net++\@\xspace leads to greater performance improvements than the AFLW pre-training. \textit{Multi-frame head-map: } Increasing the head-map length from one to $\mathcal{M}=10$ frames leads to AP=83.7\% for UCO-LAEO\xspace and 68.6\% for AVA-LAEO when using the AFLW pre-training, and AP=86.7\% for UCO-LAEO\xspace and 68.7\% for AVA-LAEO when using the self-supervised pre-training. The improvements compared to LAEO-Net\@\xspace are between 4-7\% for UCO-LAEO\xspace and around 18\% for AVA-LAEO. This clearly indicates that using multiple frames for the head-map boosts the LAEO performance, as the network is better able to capture the temporal aspect of moving heads, thus reducing the missed detections and the false positives.
\subsection{Summary} \label{sub:discussion} The findings of LAEO-Net++\@\xspace can be summarized as follows: \begin{enumerate}[label=(\roman*)] \item the head-map branch is the most suitable architecture for the task we examine (Table~\ref{tab:ablation2}); \item exploiting the temporal dimension by using $\mathcal{T}$-frame long head-tracks and $\mathcal{M}$-frame long head-maps boosts the performance (Tables~\ref{tab:ablation2}-\ref{tab:ablation45}); \item for low values of the head-map length ($\mathcal{M}=1$) pre-training is not necessary for solving the LAEO task; nevertheless, for larger values of $\mathcal{M}$ there is a significant benefit in pre-training, as the model benefits from more data (Table~\ref{tab:ablation45}); \item solving the LAEO task \textit{alone} (without any pre-training) results in learning head pose and orientations (implicit-supervision setting, Figure~\ref{fig:headembed}); \item AVA-LAEO is more challenging than UCO-LAEO\xspace due to its different nature. \end{enumerate} \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{Figure8.pdf} \end{center} \beforecaptions \caption{\small{\textbf{LAEO-Net++\@\xspace results on TVHID.} (top three rows) correct LAEO results when the ground truth is LAEO (green) and not-LAEO (blue). LAEO-Net++\@\xspace successfully detects people LAEO in several situations (illuminations, scales, clutter). (last row) failure cases for false-positive LAEO detections (first example) and missed detections (three last examples). Most failures are missed people LAEO in ambiguous scenes; \eg in the last red frame the characters are LAEO, even though the character on the left has closed eyes. }} \label{fig:res_tvhid} \end{figure} \begin{figure*}[t!]
\begin{center} \includegraphics[width=1\linewidth]{Figure9.pdf} \end{center} \caption{ \small{ \textbf{Interaction prediction with the \textit{Average-LAEO} vs various Baselines on Friends.} In addition to the Average-LAEO score (AL), we display four baselines: Random Probability (RP), Uniform Probability per Episode (UPE), Shots-Coexistence-Ratio (SCR), and Uniform Probability per Shot (UPS). (a)~AP performance of AL and the various baselines for each pair (more pairs in Section 4 of the supplementary material). (b)~Pair-agnostic precision-recall curves. Some patterns are clear: `Ross and Rachel' or `Monica and her workmate' interact with each other almost continuously when they coexist; despite their low frequency of co-existence, `Joey and Ross', `Joey and Monica' or `Ross and Mark' interact significantly when they co-exist, as captured mainly by AL (red). (c)~Examples of AL and SCR. We compute the AL of each pair and display some examples: true positives (TP), when we correctly predict a pair of characters as interacting (green color); true negatives (TN), when we correctly predict a pair of characters as not-interacting (blue color); false negatives (FN), when we miss pairs that interact (orange color). Note that in all examples the SCR results are reversed (see SCR scores): i.e.\@\xspace the green rows are wrongly predicted as not-interacting; the blue rows are wrongly predicted as interacting; the orange row is correctly predicted as interacting. As expected, we observe that the AL fails to determine interactions where the people are not LAEO. In most cases, however, either in real life or in TV shows, a human interaction typically involves gazing; hence, the AL is suitable for automatically capturing pairs of characters that interact.
(\textit{Best viewed in digital format.}) }} \aftercaptions \label{fig:rankfriends2} \end{figure*} \subsection{Results on TVHID-LAEO} \label{sub:restvhid} We compare LAEO-Net++\@\xspace to the state of the art on TVHID~\cite{Patron2010hi5}, i.e.\@\xspace the only video dataset with LAEO annotations (Section~\ref{subsec:tvhid}). As in \cite{marin2013ijcv}, we use the average AP over the two test sets (Table~\ref{tab:results}). LAEO-Net++\@\xspace trained on UCO-LAEO\xspace and AVA-LAEO achieves AP$=92.3\%$ and AP=$87.4\%$, respectively. Notably, training on UCO-LAEO\xspace outperforms training on AVA-LAEO when tested on TVHID. This is due to the fact that the domain of TVHID is closer to the one of UCO-LAEO\xspace than to AVA-LAEO, given that UCO-LAEO\xspace and TVHID consist of TV shows, whereas AVA-LAEO contains movies. Despite the domain differences, LAEO-Net++\@\xspace trained on AVA-LAEO achieves comparable results to the state of the art. Finally, we observe that the model trained on UCO-LAEO\xspace outperforms all other methods by a margin of $1-3\%$. Applying LAEO-Net++\@\xspace on TVHID, we obtain the results shown in Figure~\ref{fig:res_tvhid}. Our model successfully detects people LAEO in several situations and scenarios, such as different illuminations, scales, and cluttered backgrounds. By examining the remaining~$8\%$ error, we note that in most cases the ground-truth label is ambiguous, \eg the last two red frames in Figure~\ref{fig:res_tvhid}. \section{Social network \& Interaction prediction} \label{sec:friends} One principal way of signaling an interest in social interaction is the willingness of people to LAEO~\cite{goffman2008public,loeb1972mutual}. The duration and frequency of eye contact reflect the power relationships, the attraction or the antagonism between people~\cite{abele1986gaze}. We present two applications of LAEO-Net++\@\xspace in analysing social interactions in TV material.
First, at the shot level, we show that LAEO is an indicator of whether two characters are {\em interacting} (see below). Second, at the episode level, we show that LAEO is an indicator of the extent of social interactions between two characters, which we term \textit{friend-ness}. Here, we define two characters as interacting if they are directly involved (\eg kiss, hug), or the actions of one influence the actions of the other (\eg show something on a screen), or they communicate (\eg talk to each other), or if they perform an activity together (\eg shopping). Two characters are not interacting within a shot if they do not refer to each other (\eg both characters listen to a third person talking), or they do not influence each other, or they perform different tasks (\eg one character is watching TV while the other is reading a book). \subsection{Dataset processing and annotation} \paragraph{Dataset. } We use one episode of the TV show `Friends' (\textit{s03ep12}). First, we detect and track all heads (see Section~\ref{sub:d_t}), resulting in $1.7k$ head tracks. Then, with no further training, we apply LAEO-Net++\@\xspace on each track pair to determine if two characters are LAEO. \paragraph{Character annotation.} All head tracks are annotated with the identity of their character. This results in main characters (more than one third of the tracks), irrelevant characters ($\sim35\%$), wrong detections ($20\%$), and secondary characters (the rest). \paragraph{Interaction annotation.} Within each shot, all pairs of characters are annotated as interacting or not. Our annotation procedure results in $220$ positive and $200$ negative pairs. \subsection{Experiments} The goal is to assess whether LAEO can be used to predict character-pair interactions at the shot level, and friend-ness at the episode level.
We measure LAEO at the shot level using the `average-LAEO score' (AL) over the frames where the two characters co-exist, and measure LAEO at the episode level as the average of AL over all shots in which the two characters appear. Interaction is a binary label for a pair of characters in a shot. We treat AL as the score for predicting interaction, and assess its performance using Average Precision (AP). \paragraph{Baselines.} For interaction prediction we use four baselines: (1)~Random Probability (RP): every pair has a random probability of interacting (drawn from a uniform distribution); (2)~Uniform Probability per Episode (UPE): the probability of interacting for a pair is $1/L$, where $L$ is the number of existing pairs per episode; (3)~Shots-Coexistence-Ratio (SCR): the ratio between the number of frames that two characters co-exist in a shot over the total number of frames of the shot; and (4)~Uniform Probability per Shot (UPS): the probability of interacting for a pair is $1/L_{S}$, where $L_{S}$ is the number of existing pairs per shot. \paragraph{Interaction prediction. } The AP for individual pairs of characters is shown in Figure~\ref{fig:rankfriends2}(a) (more pairs in the suppl.\ material); and a pair-agnostic ranking, where all pairs are evaluated together regardless of character identity, is shown in Figure~\ref{fig:rankfriends2}(b). In Figure~\ref{fig:rankfriends2}(a), we observe that in some cases several baselines are good predictors as they capture the possible interactions, \eg \textit{ross-rachel} or \textit{monica-workmate monica}. However, in the cases where there exist several pairs within an episode or where two characters co-exist only in a few frames (compared to the shot length), the SCR (green) and UPS (blue) baselines are incapable of capturing the interactions, \eg \textit{joey-ross}, \textit{joey-monica} or \textit{ross-mark}. In these cases, however, AL (red bars) correctly predicts the interaction level between characters.
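The AL score and the SCR baseline above can be sketched as follows (encoding frames where only one of the two characters appears as None is our convention, for illustration only):

```python
def average_laeo(frame_scores):
    """Average-LAEO (AL): mean LAEO score over the frames in which the
    two characters co-exist; frames where only one appears are encoded
    as None and skipped."""
    coexist = [s for s in frame_scores if s is not None]
    return sum(coexist) / len(coexist) if coexist else 0.0

def shots_coexistence_ratio(frame_scores):
    """SCR baseline: fraction of the shot's frames in which the two
    characters co-exist, independent of the LAEO scores themselves."""
    coexist = [s for s in frame_scores if s is not None]
    return len(coexist) / len(frame_scores)
```

This makes the difference between the two measures explicit: a pair that co-exists briefly but gazes intensely gets a high AL and a low SCR, which is exactly the regime where SCR fails.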
Overall, we observe that AL outperforms all other baselines in all pair-specific cases. \begin{figure}[t!] \begin{center} \includegraphics[width=0.95\linewidth]{Figure10.pdf} \end{center} \caption{\small{\textbf{Social network using the \textit{Average-LAEO} (AL) on Friends.} We depict the \%AL between character pairs with the edges in the graph: the thicker the edge, the more dominant the relationship. We observe some clear patterns: Ross and Rachel or Monica and Julio `like' each other more than Chandler and Phoebe or Ross and Phoebe. }} \label{fig:friends_examples} \vspace{-4mm} \end{figure} In the pair-agnostic PR curves of Figure~\ref{fig:rankfriends2}(b), the AL score outperforms all baselines by 4-34\%. The RP baseline performs worse than all other scores, which is expected as it contains no information, while UPE and SCR perform similarly (AP=77\% and 68\%), indicating that the frequency of existence of a pair at the frame or episode level does not necessarily reveal interactions. The UPS score notably outperforms the other baselines by 3-29\%, showing that the fewer people exist in a shot, the more likely they are to interact. Finally, AL outperforms all baselines, reaching AP=84\%, showing that it captures the main interactions with high confidence and therefore can be useful for automatically retrieving them. To demonstrate the effectiveness of AL and its superiority compared to SCR, we show some examples of pairs of characters in Figure~\ref{fig:rankfriends2}(c). The examples in green are correctly predicted as interacting by AL, but wrongly predicted as not-interacting by SCR; the examples in blue are correctly predicted as not-interacting by AL, but wrongly predicted as interacting by SCR; the examples in orange are missed interactions by AL, but correctly predicted as interacting by SCR.
We observe that in several cases the AL is suitable for predicting the presence or absence of interactions between characters, whereas the SCR is incapable of differentiating them; for instance, Monica and Joey in the last green example co-exist and interact in only a few frames and, therefore, are wrongly predicted as not-interacting by SCR. Moreover, we note that the AL fails to determine interactions where people are not LAEO (\eg Ross and Chandler or Mark and Rachel in orange). In most cases, however, either in real life or in TV shows, a human interaction typically involves gazing; hence, the AL is suitable for automatically capturing pairs of characters interacting. \paragraph{\textit{Friend}-ness. } For each shot, we measure \textit{friend}-ness between a pair of characters with the AL and depict it in the social network of Figure~\ref{fig:friends_examples}: the thicker the edge, the higher the score and the stronger the relation. AL captures the dominant relationships between characters, \eg Ross and Rachel, against characters that are more distant, \eg Phoebe and Chandler. Our study reveals all prominent pair relations, demonstrating that the more people are LAEO, the stronger their \textit{interaction} and \textit{social relationship}. \section*{Acknowledgments} We are grateful to our annotators (RF, RD, DK, DC, E. Pina), to Q. Pleplé for proof-reading, to S. Koepke for the model, to the reviewers for the constructive suggestions, and to NVIDIA for donating some of the GPUs we used. This work was supported by the Spanish grant ``Jos\'e Castillejo'', the EPSRC Programme Grant Seebibyte EP/M013774/1, and the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/ Interior Business Center (DOI/IBC) contract \# D17PC00341.
\section{Introduction}\label{intro} Stochastic gradient descent (SGD), as a stochastic approximation of gradient descent, is a simple but powerful optimization method, where the objective function is often the average of a family of functions. With the ``random mini-batch'' idea, instead of directly calculating the sum of the gradients of the whole family, SGD uses the sum over a small random subset to approximate the full summation \cite{robbins1951stochastic,ross1988taguchi}. SGD is widely used for solving large-scale data science problems, and has shown remarkable performance for large-scale learning tasks due to its computational and statistical efficiency \cite{bottou2010large,bubeck2015convex,bottou2016optimization}. Recent decades have witnessed rapid progress in SGD-related research \cite{hulililiu2018,li2019stochastic,ankirchner2021approximating,smith2020generalization,smith2021origin}. Several variants of SGD have been proposed to deal with various tasks more efficiently, including combinations with momentum, varying step sizes, etc. \cite{daniel2016learning,zeiler2012adadelta,dauphin2015equilibrated}. The optimization problem suited for SGD is given by $\min_{x\in \mathbb{R}^d} f(x)$, where \begin{gather}\label{eq:def_expected_loss} f(x):=\mathbb{E} f(x; \xi) \end{gather} is the loss/objective function associated with a certain training set, and $d$ is the dimension of the parameter $x$. Here, $\xi \sim \nu$ is a random variable/vector for some probability distribution $\nu$. Compared with $f(x)$, $f(x; \xi)$ is often much easier to handle for each $\xi$. The SGD iteration with constant step size $\eta$ is then \begin{gather}\label{eq:sgd} X_{n+1}=X_n-\eta \nabla f(X_n; \xi_n) , \end{gather} where $\xi_n\sim \nu$ are i.i.d. so that $\xi_n$ is independent of $X_n$. Then, $\{X_n\}$ is a time-homogeneous Markov chain. In practice, $\xi_n\sim \nu$ is often implemented by drawing random sets from the training set and using them to form the ``stochastic gradient''.
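The iteration \eqref{eq:sgd} with the random mini-batch estimator can be sketched as follows on a toy quadratic problem (the function names and the example loss are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd(grad_fk, N, x0, eta=0.1, m=8, steps=200):
    """SGD iteration X_{n+1} = X_n - eta * grad f(X_n; xi_n), with the
    mini-batch estimator grad f(x; xi) = (1/m) sum_{k in B} grad f_k(x)
    for a random batch B of size m << N."""
    x = float(x0)
    for _ in range(steps):
        batch = rng.choice(N, size=m, replace=False)
        g = np.mean([grad_fk(x, k) for k in batch])
        x = x - eta * g
    return x

# Toy example: f_k(x) = (x - a_k)^2 / 2, so f(x) = (1/N) sum_k f_k(x)
# is minimized at the mean of the a_k (which is 0 here).
a = np.linspace(-1.0, 1.0, 100)
x_star = sgd(lambda x, k: x - a[k], N=100, x0=5.0)
```

The iterates converge to a neighbourhood of the minimizer; the residual fluctuation, of order $\sqrt{\eta}$, is exactly what the diffusion approximation below is designed to capture.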
By \eqref{eq:def_expected_loss}, $\nabla f(x;\xi_n)$ is an unbiased estimator of the true gradient $\nabla f$, namely, \[ \mathbb{E}\left[\nabla f(x;\xi)\right] = \nabla f(x),\quad \forall x \in \mathbb{R}^d. \] Besides, the uncertainty introduced by SGD is helpful for escaping sharp minimizers and for possibly better generalization behavior \cite{li2019stochastic,lin2018don}. As an example, consider training a deep neural network using $N\gg 1$ samples. The loss function is given by $f(x)=\frac{1}{N}\sum_{k=1}^N f_k(x)$. The back propagation algorithm is applied to compute $\nabla f_k(x)$, which is not trivial, making the computation of $\nabla f(x)$ expensive \cite{cao2009neural,li2016tutorial}. To handle this problem, we can pick a random set $\mathcal{B}\subset \{1, \ldots, N\}$ with $|\mathcal{B}|=m\ll N$. Then we identify $\mathcal{B}$ with $\xi$ and let $f(x; \xi)=\frac{1}{m}\sum_{k\in \mathcal{B}}f_k(x)$. Computing the gradient of $f(x;\xi)$ is clearly much cheaper. Now, by the Markov property \cite{durrett1999essentials,durrett2019probability}, the function \begin{gather}\label{eq:Un} U^n(x)=\mathcal{S}^n \varphi(x):=\mathbb{E}_x(\varphi(X_n)) \end{gather} satisfies the equation \cite{hulililiu2018,feng2017} \begin{gather}\label{eq:weakmaster} U^{n+1}(x)=\mathcal{S} U^n(x) :=\mathbb{E}(U^n(x-\eta \nabla f(x; \xi))). \end{gather} This means that $\{\mathcal{S}^n\}$ is in fact a semigroup. With the semigroup property, one is thus naturally motivated to approximate $U$ with solutions to some appropriate time-continuous equation. One classical method to approximate SGD is the diffusion approximation, and much work has been done by previous researchers \cite{hulililiu2018,ankirchner2021approximating,feng2017,litaie2017,feng2019uniform}.
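The unbiasedness of the mini-batch gradient can be checked numerically; the following sketch uses hypothetical quadratic components $f_k(x)=\frac{1}{2}|x-b_k|^2$, for which $\nabla f(x)=x-\frac{1}{N}\sum_k b_k$ is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical components f_k(x) = 0.5*|x - b_k|^2 in R^d.
N, d, m = 500, 3, 10
b = rng.normal(size=(N, d))
x = rng.normal(size=d)

full_grad = x - b.mean(axis=0)  # grad f(x) = (1/N) sum_k (x - b_k)

# Monte Carlo average of the mini-batch gradient over many batches;
# by unbiasedness it converges to the full gradient at rate 1/sqrt(#draws).
draws = np.array([np.mean(x - b[rng.choice(N, size=m, replace=False)], axis=0)
                  for _ in range(5000)])
err = np.linalg.norm(draws.mean(axis=0) - full_grad)
```

The discrepancy `err` is pure Monte Carlo noise: each single draw has a variance of order $\mathrm{Var}(b)/m$, while the averaged estimator recovers $\nabla f(x)$.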
Assuming that $f(\cdot,\xi)$ has bounded derivatives $\nabla f$, in any finite time interval, the iterates of SGD are close in the weak sense to the solution of the following stochastic differential equation (SDE): \begin{gather}\label{eq:classicalsde} \mathrm{d} X_{t}=-\nabla\left[f(X)+\frac{1}{4} \eta |\nabla f(X) |^{2}\right] \mathrm{d} t+\sqrt{\eta \Sigma}\, \mathrm{d} W, \end{gather} where the matrix $\Sigma$ given by \begin{gather} \Sigma=\mathbb{E}_{\xi}\left[\left(\nabla f_{\xi}-\nabla f\right) \otimes\left(\nabla f_{\xi}-\nabla f\right)\right] \end{gather} is the covariance matrix of the random gradients, and $W$ is the standard Brownian motion in $\mathbb{R}^d$. Note that the SDE \eqref{eq:classicalsde} approximates SGD in the weak sense with second order accuracy. If we instead use $\mathrm{d}X=-\nabla f(X)\, \mathrm{d}t+\sqrt{\eta\Sigma}\,\mathrm{d}W$ for any smooth positive definite $\Sigma$, then the approximation has first order weak accuracy. This implies that the first order weak approximation only captures the coarse gradient descent feature, and loses much information, especially the fluctuation in the dynamics, and possibly the implicit bias \cite{smith2020generalization,smith2021origin}. Among those choices, taking $\Sigma=\var(\nabla f(x,\xi))$ captures the most fluctuation in the corresponding SDE \cite{hulililiu2018}. The backward Kolmogorov equation associated with \eqref{eq:classicalsde} is given by \begin{gather}\label{eq:backwardKol} \frac{\partial u}{\partial t}=-\nabla f \cdot \nabla u+\eta\left(-\frac{1}{4} \nabla |\nabla f |^{2} \cdot \nabla u+\frac{1}{2} \operatorname{Tr}\left(\Sigma \nabla^{2} u\right)\right), \end{gather} and $u(x,t)$ with initial value $u(x,0)=\varphi(x)$ has the representation \begin{gather} u(x, t)=\mathbb{E}_x\varphi(X_t) , \end{gather} where $\varphi$ is an arbitrary test function with certain regularity.
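For a hypothetical one-dimensional example $f(x;\xi)=\frac{1}{2}x^2+\xi x$ with $\xi\sim\mathcal{N}(0,s^2)$, one has $f(x)=\frac{1}{2}x^2$, $\Sigma=s^2$, and $\frac{1}{4}\nabla|\nabla f|^2=\frac{x}{2}$, so \eqref{eq:classicalsde} can be simulated directly. The sketch below (all numerical values are illustrative) compares the sample mean of the SGD iterates with an Euler--Maruyama discretization of the SDE on a finite horizon:

```python
import numpy as np

rng = np.random.default_rng(2)

# f(x; xi) = 0.5*x^2 + xi*x, xi ~ N(0, s^2): grad f(x; xi) = x + xi,
# f(x) = 0.5*x^2, Sigma = s^2, and (1/4) d/dx |f'(x)|^2 = x/2.
s, eta, T, x0, M = 0.5, 0.05, 2.0, 2.0, 20000
n = round(T / eta)

# SGD paths: X_{k+1} = X_k - eta*(X_k + xi_k)
X = np.full(M, x0)
for _ in range(n):
    X = X - eta * (X + s * rng.normal(size=M))

# Euler-Maruyama for dY = -(1 + eta/2) Y dt + sqrt(eta) * s dW,
# with a time step finer than eta to keep the discretization error small.
dt = eta / 4
Y = np.full(M, x0)
for _ in range(round(T / dt)):
    Y = Y - (1.0 + 0.5 * eta) * Y * dt + np.sqrt(eta * dt) * s * rng.normal(size=M)

gap = abs(X.mean() - Y.mean())  # weak error for the observable phi(x) = x
```

With these values, the two sample means differ only at the level of the $O(\eta^2)$ weak error plus Monte Carlo and time-discretization noise.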
In particular, we can take $\varphi = f$ to study the asymptotic oscillation and how the objective function converges to the global minimum if the objective function admits some ``good'' properties, like convexity. Other related works regarding diffusion approximation for SGD can be found in \cite{hulililiu2018,litaie2017}. Many other approximation methods are proposed in \cite{ankirchner2021approximating}, where the authors also study approximations for SGD on some finite time interval $[0,T]$. Their methods include ODE approximation, first order SDE approximation, and second order SDE approximation, with the change of step size taken into account in each method. Though numerous novel insights have been gained from this continuous perspective, it was previously still unclear whether the modified SDEs can really be adopted to study asymptotic behaviors of SGD, since the weak approximation is only valid on a finite time interval. In \cite{feng2019uniform}, the authors used a truncated formal power expansion of the solution of a Kolmogorov equation arising from diffusion approximation to obtain a uniform-in-time analysis. However, the diffusion approximation itself is still not uniformly valid. Besides, in practice, the boundedness assumption on $\nabla f$ is strong, which also motivates us to establish the uniform-in-time diffusion approximation for SGD under much weaker assumptions. In this paper, instead of only considering the finite time horizon, we extend the classical idea of diffusion approximation to the infinite time horizon, without assuming the boundedness of $\nabla f$. In our work, we study the traditional SGD with constant step size for general unbounded $\nabla f(\cdot, \xi)$, and show that SGD can be approximated in the weak sense by continuous-time SDEs in $\mathbb{R}^d$, assuming only the strong convexity of the objective function $f$ and some other mild conditions.
The SDE we use is different from \eqref{eq:classicalsde} in the sense that we have modified the diffusion coefficient $\Sigma$. Our approximation has second order weak accuracy and is uniform in time. These results help us understand the discrete algorithms from the viewpoint of diffusion approximation and randomly perturbed dynamical systems \cite{wu2018sgd}. With the diffusion approximation, one may better understand the behavior of the stochastic gradient noise in the SGD algorithm \cite{wu2021revisiting,simsekli2019tail}. In particular, we study the long time behavior of $\left\{X_{n}\right\}_{n \geq 0}$ as $n$ approaches infinity, in the spirit of backward error analysis of stochastic numerical schemes. After modifying $\Sigma$, the diffusion coefficient used in the classical method \eqref{eq:classicalsde}, so that it is supported in a compact domain, we are then able to prove the following long time approximation: \begin{gather} \sup_{n\ge 0}\sup_{x\in B(0, R)}|U^n(x)-u(x, n\eta)|< C \eta^2, \end{gather} where $U$ is defined in \eqref{eq:Un} and $u$ is the solution to the backward Kolmogorov equation associated with the modified SDE. Compared with some previous works like \cite{feng2019uniform}, our result successfully weakens some of the assumptions, such as the strong convexity of $f(\cdot;\xi)$ for every $\xi$ on the entire space $\mathbb{R}^d$, so that our result may be applied to more general objective functions. The rest of the paper is organized as follows. Before proving the main theorem, we establish some crucial auxiliary results in Section \ref{sec:setup}. In Section \ref{sec:main}, we show that there is a uniform-in-time diffusion approximation for SGD with initial distribution on bounded sets, by assuming strong convexity of the expected loss and some other mild regularity requirements. In Section \ref{sec:conclusion}, we discuss the significance of the long time diffusion approximation of SGD.
We also discuss the case of general objective functions, for which the diffusion approximation on bounded sets is valid up to $n\eta \sim O(\log(\eta^{-1}))$. \section{Setup and auxiliary results}\label{sec:setup} In this section, we first give some basic assumptions for our diffusion approximation, the most important of which is the strong convexity of the expected loss function. Next, we prove some auxiliary results useful for the diffusion approximation of SGD. In particular, Lemma \ref{lmm:Scontraction} ensures that SGD cannot escape some compact set under some mild confinement conditions on the random loss functions; Lemma \ref{rmk3} ensures that the SDE solution we use to approximate SGD cannot escape some compact set after modifying its diffusion coefficient; Proposition \ref{prop:derivativedecay} estimates the high order derivatives of the solution of the associated Kolmogorov equation, which is crucial for the main theorem. \begin{assumption}\label{ass:strongconvex} The random loss functions and the expected loss satisfy the following conditions. \begin{itemize} \item[(i)] For any $\xi$, $f(\cdot, \xi)$ is smooth. For any compact set $K \subset \mathbb{R}^d$, $\sup_{x\in K}\sup_{\xi}|\nabla f(x, \xi)|<\infty$. Moreover, $f(\cdot,\xi)$ is confining in the sense that there exist $\nu>0$, $L >0$ independent of $\xi$ such that \[ x\cdot\nabla f(x, \xi)\ge \nu |x|^2, \quad \forall |x|\ge L. \] \item[(ii)] $f(\cdot):=\mathbb{E}_{\xi}f(\cdot,\xi)$ is $\mu$-strongly convex in $\mathbb{R}^d$, i.e., for all $x,y \in \mathbb{R}^d$, \[ f(y) \geq f(x)+\nabla f(x) \cdot (y-x) + \frac{\mu}{2}|y-x|^2. \] \end{itemize} \end{assumption} \begin{remark} Here we are not assuming any growth rate of $f(\cdot; \xi)$. In later parts of this paper, although terms involving its high order derivatives $\partial^{\alpha} f$ appear, they are easy to control since the solutions to our SDE with the modified diffusion coefficient stay in a compact set.
\end{remark} \begin{remark} The assumption on the confinement of $f(\cdot;\xi)$ outside $B(0, L)$ is not restrictive, because most models only use information in finite domains, and this far-away behavior is satisfied by most models. \end{remark} \begin{remark} Note that we are only assuming the strong convexity of the expected loss $f(x)=\mathbb{E} f(x; \xi)$ instead of that of each $f(\cdot; \xi)$. Though we assume the convexity of $f$ in the whole space, we actually only need the convexity of $f$ in the compact set that the SGD sees. Hence, our results in this paper actually apply to the behaviors near some local minimizers. \end{remark} With these assumptions, we then prove some auxiliary results before our main theorem (Theorem \ref{thm:strongconvex}). The following lemma says that SGD will be trapped in a compact set if the initial measure is supported in a compact set. \begin{lemma}\label{lmm:Scontraction} Suppose Assumption \ref{ass:strongconvex} holds. Recall the definition of $X_n$ and the operator $\mathcal{S}$ in Section \ref{intro}. Then we have the following: \begin{itemize} \item[(i)] For $L$ as in Assumption \ref{ass:strongconvex}, fix any $R>L$; then there exists $\eta_0>0$ such that for all $\eta\le \eta_0$, the condition $X_0\in B(0, R)$ implies that $X_n\in B(0, R)$ for all $n$. \item[(ii)] For any continuous function $\phi$ and $x\in B(0, R)$, $(\mathcal{S} \phi)(x)$ only depends on the values of $\phi$ in $B(0, R)$, with \[\|\mathcal{S} \phi\|_{L^{\infty}(B(0, R))} \le \|\phi\|_{L^{\infty}(B(0, R))}.\] \end{itemize} \end{lemma} \begin{proof} We set \[ M_1:=\sup_{\xi}\sup_{x\in B(0, L)}|\nabla f(x, \xi)|<\infty, \] and \[ M_2:=\sup_{\xi}\sup_{x\in B(0, R)}|\nabla f(x, \xi)|. \] We claim that we can set $\eta_0=\min \{(R-L)/M_1, 2\nu L^2/M_2^2\}$. In fact, if $|X_0|\le L$, then $|X_1|=|X_0-\eta\nabla f(X_0, \xi_0)|\le L+\eta M_1\le R$.
Otherwise, $L<|X_0|\le R$, and using (i) in Assumption \ref{ass:strongconvex}, we have \begin{gather*} \begin{split} |X_1|^2 &=|X_0|^2-2\eta X_0\cdot\nabla f(X_0,\xi_0)+\eta^2|\nabla f(X_0, \xi_0)|^2 \\ &\le |X_0|^2-2\eta\nu L^2+\eta^2M_2^2\le |X_0|^2.\\ \end{split} \end{gather*} A simple induction yields the first claim. The second claim regarding $\mathcal{S}\phi$ is a straightforward corollary of the first one, using the definition $\mathcal{S}\phi(x) = \mathbb{E}\phi(x - \eta \nabla f(x;\xi))$ in \eqref{eq:weakmaster} and the fact that $x \in B(0,R)$ implies $(x - \eta \nabla f(x;\xi)) \in B(0,R)$. \end{proof} In the following lemma, we show that if we modify the diffusion coefficient $\Sigma$ in \eqref{eq:classicalsde} outside a certain compact set, then the solution $X$ to the diffusion approximation, which is a modified version of \eqref{eq:classicalsde}, stays in some compact set. This then allows us to consider the $C^k$ norm of $g$ on a bounded domain for suitable functions $g$ in later sections. We define the modified diffusion coefficient $\Lambda$ as follows: \begin{gather}\label{eq:defLambda} \Lambda = \begin{cases} \Sigma, & |x|\leq R,\\ 0, & |x|> R_2, \end{cases} \qquad \Lambda~\text{smooth for}~R\leq |x| \leq R_2. \end{gather} Moreover, we require $\Lambda$ to be positive semidefinite everywhere. This is clearly possible. Indeed, since $\Sigma$ is positive semidefinite, we can take $\tilde{\sigma}$ to be a smooth modification of $\sqrt{\Sigma}$, so that $\tilde{\sigma}\tilde{\sigma}^T$ is a smooth modification of $\Sigma$, which is obviously positive semidefinite everywhere. \begin{lemma}\label{rmk3} Take $R>L$ as in Lemma \ref{lmm:Scontraction}, and let $\Lambda$ be chosen as above.
Under Assumption \ref{ass:strongconvex}, for any initial value $x \in B(0,R)$, there exists $\eta_1>0$ such that for all $\eta \leq \eta_1$, the solution $X$ to the following SDE \begin{gather}\label{eq:modifiedSDE} \begin{split} & dX=-\left[\nabla f(X)+\eta \left(\frac{1}{4}\nabla | \nabla f(X)|^2\right)\right]\,dt+\sqrt{\eta\Lambda(X)}\,dW,\\ & X(0; x)=x \end{split} \end{gather} satisfies that for all $t\ge 0$, \[ X(t;x) \in \overline{B(0,R_2)},\quad a.s. \,. \] \end{lemma} \begin{proof} To show this, we make use of the classical Stroock--Varadhan support theorem \cite{stroock2020support}. More precisely, for any $T>0$, consider the corresponding control problem \begin{equation}\label{eq:controlproblem} dX^v=-\left[\nabla f(X^v)+\eta \left(\frac{1}{4}\nabla | \nabla f(X^v)|^2\right)\right]\,dt+\sqrt{\eta\Lambda}v(t)\,dt, \quad X^v|_{t=0} = x, \end{equation} with $v(\cdot) \in V:= C([0, T]; \mathbb{R}^d)$. Denote by $S_x^T$ the support of $X_t$ in $C([0, T]; \mathbb{R}^d)$ under the topology induced by the uniform convergence norm $\|X\|:=\sup_{0\le t\le T}|X_t|$ (here $X_t$ is the solution of the SDE \eqref{eq:modifiedSDE} at time $t$), and by $C_x^T(V)$ the set of all solutions of the ODE~\eqref{eq:controlproblem} as the function $v$ varies in $V$. The Stroock--Varadhan support theorem says that \begin{gather}\label{eq:SVclosure} S_x^T = \overline{C^T_x(V)}. \end{gather} Next, we show that the ODE solutions $X^v(t)$ lie in $B(0,R_2)$ for all $t\le T$. By Assumption \ref{ass:strongconvex}, when $\eta$ is small, one has \[ -x\cdot \left(\eta\left(\frac{1}{4}\nabla | \nabla f(x)|^2\right) + \nabla f(x)\right) < 0 \] for $|x| = R_2$. If $|X^v|$ ever reaches $R_2$, then \begin{equation} \frac{d}{dt}|X^v|^2 = 2X^v\cdot \dot{X^v} = -2X^v\cdot \left(\eta\left(\frac{1}{4}\nabla | \nabla f(X^v)|^2\right) + \nabla f(X^v)\right) < 0. \end{equation} This in fact implies that $|X^v|<R_2$ for all $t \leq T$.
Finally, combining with \eqref{eq:SVclosure}, and since $T$ is arbitrary, we conclude that $\mathrm{supp}\, X_t \subset \overline{ B(0,R_2)}$ for all $t$. \end{proof} Without loss of generality, in the remaining part of this paper, we set $\eta_0=\eta_1$ (replacing both by their minimum) for convenience. The following result is crucial for the long time approximation. Note that the diffusion matrix has been modified compared with that in the classical equation \eqref{eq:backwardKol}. More precisely, we consider the following Kolmogorov equation associated with the modified diffusion approximation \eqref{eq:modifiedSDE}: \begin{gather}\label{eq:modifiedpde} u_t=-\left(\nabla f+\eta \left(\frac{1}{4}\nabla | \nabla f|^2\right)\right)\cdot\nabla u+\frac{1}{2}\eta \Lambda:\nabla^2u,\quad u|_{t=0} = \varphi, \end{gather} where the diffusion matrix $\Lambda$ is defined in Lemma \ref{rmk3}. In the next proposition, we estimate the high order derivatives of its solution $u$. Below, for a multi-index $J = (J_1,J_2,...,J_d)$, we denote $|J| := \sum_{i=1}^d J_i$, and $\partial^J := \partial^{J_1}_1\partial^{J_2}_2...\,\partial^{J_d}_d$. \begin{proposition}\label{prop:derivativedecay} Let $u$ be the unique solution of the Kolmogorov equation \eqref{eq:modifiedpde}. Assume the initial data $\varphi \in C^{k}$. Suppose that Assumption \ref{ass:strongconvex} holds. Then for each multi-index $J$ with $0<|J|\le k$, there exist $\eta_0>0$, $C_J > 0$, $\gamma_J>0$, and an integer $p_J > 0$ such that for all $\eta\le \eta_0$, $x\in B(0,R) \subset \mathbb{R}^d$, \begin{gather}\label{eq:gradbd} |\partial^J u(x, t)| \leq C_J (1+|x|^{p_J})e^{-\gamma_J t}. \end{gather} \end{proposition} \begin{proof} First of all, we consider $X(t; x)$, which solves the SDE \eqref{eq:modifiedSDE}. Then \eqref{eq:modifiedpde} is its associated backward Kolmogorov equation, and \begin{gather}\label{eq:representationu} u(x, t)=\mathbb{E}\varphi\left(X(t; x)\right), \quad \forall x \in B(0,R).
\end{gather} For the convenience of notation, we denote \begin{gather} \sigma := \sqrt{\Lambda / 2}. \end{gather} {\bf Step 1:} Estimates of $\mathbb{E}|X(t;x)|^{2m}$. We claim that for any nonnegative integer $m$, \begin{gather}\label{eq:mattingly} \mathbb{E}|X(t; x)|^{2m}\le C_m\left(1+|x|^{2m}e^{-\gamma_m t}\right). \end{gather} This can be proved easily using It\^o's formula and induction on $m$. For convenience, we will write $X$ for $X(t; x)$ in the current proof. Applying It\^o's formula to $|X|^{2m}$, for $m \geq 1$, we have \begin{multline}\label{29} d|X|^{2m}=\Big(2m|X|^{2m-2}X\cdot[-(\nabla f(X)+\eta (\frac{1}{4}\nabla | \nabla f(X)|^2))] \\+\frac{1}{2}(2m)|X|^{2m-2} M : 2\eta \sigma^2(X)\Big)\,dt +2m|X|^{2m-2}X\cdot \sqrt{2\eta}\sigma(X)\cdot dW, \end{multline} with \[ M:=I_d+(2m-2)\frac{X\otimes X}{|X|^2}. \] Taking expectation, we see that for any fixed $\bar{\mu} \in (0,\mu)$, it holds that \begin{gather}\label{210} \frac{d}{dt}\mathbb{E}|X|^{2m}\leq -2m\bar{\mu}\mathbb{E}|X|^{2m}+A_2\mathbb{E}|X|^{2m-2},\quad m \geq 1. \end{gather} In the inequality above, $A_2$ is a positive constant depending on $m$. Indeed, since $f$ is strongly convex, $X \cdot \nabla f(X) = X \cdot \left(\nabla f(X) - \nabla f(0)\right) + X \cdot \nabla f(0) \geq \mu |X|^2 + X \cdot \nabla f(0) $. Also, by Lemma \ref{rmk3}, $\mathbb{P}[X \in \overline{B(0,R_2)}] = 1$.
Clearly, the $C^0(B(0,R_2))$-norms of $\sigma$ and $\frac{1}{4}\nabla|\nabla f(\cdot)|^2$ are finite, so it holds that \begin{gather*} \begin{aligned} \frac{d}{dt}\mathbb{E}|X|^{2m} &\leq -2m\mu \mathbb{E}|X|^{2m} + A_0 \mathbb{E} |X|^{2m-2} + \mathbb{E}\left[2m|X|^{2m-2}X\cdot\left( -\eta \left(\frac{1}{4}\nabla|\nabla f(X)|^2\right) - \nabla f(0)\right)\right]\\ & \leq -2m\mu\mathbb{E}|X|^{2m} + A_0 \mathbb{E}|X|^{2m-2} + A_1 \mathbb{E}|X|^{2m-1}\\ & = -2m\mu \mathbb{E}|X|^{2m} + A_0 \mathbb{E}|X|^{2m-2} + A_1 \mathbb{E}\sqrt{(\epsilon_m |X|^{2m})(\frac{1}{\epsilon_m}|X|^{2m-2})}\\ & \leq -2m\mu \mathbb{E}|X|^{2m} + A_0 \mathbb{E}|X|^{2m-2} + \frac{1}{2}A_1 \mathbb{E}\left[\epsilon_m |X|^{2m}+\frac{1}{\epsilon_m}|X|^{2m-2}\right]\\ & \leq -2m\bar{\mu} \mathbb{E}|X|^{2m} + A_2 \mathbb{E}|X|^{2m-2}. \end{aligned} \end{gather*} Above, we have chosen $\epsilon_m$ small enough such that $\frac{1}{2}A_1\epsilon_m < 2m(\mu-\bar{\mu})$ to ensure that the last inequality holds. So \eqref{210} is obtained. Next, we perform induction on $m$. Estimate \eqref{eq:mattingly} is obvious for $m=0$. For $m>0$, using the induction hypothesis, we have \begin{gather} \frac{d}{dt}\mathbb{E}|X|^{2m}\leq -2m\bar{\mu}\mathbb{E}|X|^{2m}+A_3(1+|x|^{2m-2}e^{-\gamma_{m-1}t}), \end{gather} where $A_3$ is a positive constant depending on $m$. Using Gr\"onwall's inequality, we have \begin{gather*} \begin{aligned} \mathbb{E}|X|^{2m} & \leq e^{-2m\bar{\mu}t}|x|^{2m} +\int_0^t A_3(1+|x|^{2m-2}e^{-\gamma_{m-1}s}) e^{-2m\bar{\mu}(t-s)}ds\\ &\leq c_m(1+|x|^{2m}e^{-\gamma_mt}), \end{aligned} \end{gather*} for some positive constants $c_m$ and $\gamma_m$, where the last inequality is due to Young's inequality. Hence \eqref{eq:mattingly} holds for any nonnegative integer $m$.
{\bf Step 2:} Estimates of the moments of $\partial_x^J X(t;x)$. It is well known that the stochastic map $x\mapsto X(t, x)$ is a diffeomorphism almost surely for all $t$ \cite{kunita1997stochastic,le1984stochastic}, so it is valid here to take partial derivatives with respect to $x$. Below, we will consider \[ X^{(J)}(t, x):=\partial_x^{J}X(t,x). \] Similarly, we will write $\partial^J$ for $\partial_x^J$ and $X^{(J)}$ for $X^{(J)}(t, x)$ for convenience. If $|J|=1$ (recall that $|J| = \sum_{i=1}^dJ_i$), by a similar discussion to that in \cite{elworthy1994formulae}, $X^{(J)}$ satisfies the following SDE \begin{equation} \begin{aligned} & dX^{(J)} =-\left[\nabla^2f(X)+\eta\left(\nabla (\frac{1}{4}\nabla | \nabla f|^2)\right)^T\right]\cdot X^{(J)}dt+\sqrt{2\eta}(X^{(J)}\cdot\nabla\sigma)\cdot dW,\\ & X^{(J)}(0; x)=e_J. \end{aligned} \end{equation} Formally, the equation for $X^{(J)}$ is obtained by differentiating the SDE \eqref{eq:modifiedSDE} with respect to $x$. Obviously, $X^{(J)}$ also has compact support, though we do not use this property in our proof. Applying It\^o's formula to $|X^{(J)}(t; x)|^p$, for $p\ge 2$, \begin{gather*} \begin{aligned} d|X^{(J)}|^{p} =&\Big[p|X^{(J)}|^{p-2}X^{(J)}\cdot\left(-\left(\nabla^2f(X)+\eta\frac{1}{4}\nabla^2 | \nabla f|^2 \right)\right)\cdot X^{(J)}\\ & +\eta p|X^{(J)}|^{p-2}X^{(J)}_kX_{\ell}^{(J)}\partial_{k}\sigma_{i,\cdot}\partial_{\ell}\sigma_{i,\cdot}:M_J\Big]dt\\ & +p|X^{(J)}|^{p-2}X^{(J)}\cdot\sqrt{2\eta}(X^{(J)}\cdot\nabla\sigma) \cdot dW \end{aligned} \end{gather*} with \[ |X^{(J)}(0,x)|^{p}=|e_{J}|^{p}=1. \] Above, \[ M_J:=I_d+(p-2)\frac{X^{(J)}\otimes X^{(J)}}{|X^{(J)}|^2}.
\] Similarly to \eqref{210}, using the fact that $X$ is bounded, for all $\eta$ small enough the matrix $\left(\nabla^2f+\eta\frac{1}{4}\nabla^2 | \nabla f|^2 + \eta \partial \sigma_{i,\cdot}\partial \sigma_{i,\cdot} :M_J\right)$ is positive definite. After taking expectation, it holds that \begin{gather} \frac{d}{dt}\mathbb{E}|X^{(J)}|^p\leq -p \gamma \mathbb{E}|X^{(J)}|^{p}. \end{gather} Here, $\gamma$ is a positive constant. By Gr\"onwall's inequality, \begin{gather}\label{eq:j=1} \mathbb{E}|X^{(J)}(t; x)|^p \le \exp(-p\gamma t)|e_J|^p=\exp(-p\gamma t). \end{gather} Now, we perform induction for general $J$. Suppose we have constructed $X^{(I)}(t; x)$ with $|I|\le |J|-1$, such that the moments satisfy \begin{gather}\label{eq:momentcontrol} \mathbb{E}|X^{(I)}(t; x)|^p \le C(1+|x|^{q_{I}})\exp(-\gamma_{p,I} t),~~p\ge 2. \end{gather} Now, we consider $J$. The newly introduced variable $X^{(J)}$ satisfies the following equation: \begin{multline}\label{eq:eqforhighQJ} dX^{(J)}=-\left(\nabla^2f+\eta\left(\nabla \left(\frac{1}{4}\nabla | \nabla f|^2\right)\right)^T\right)\cdot X^{(J)} \,dt +Q_J\left(\partial^{\alpha}f, \partial^{\beta}\left(\frac{1}{4}\nabla | \nabla f|^2\right), X^{(I)} \right)\,dt\\ +\sqrt{2\eta}\left(X^{(J)}\cdot\nabla\sigma+R_J\left(\partial^{\alpha}\sigma, X^{(I)}\right)\right)\cdot dW, \end{multline} with the initial condition \[ X^{(J)}(0;x)=0 \in \mathbb{R}^d. \] In equation \eqref{eq:eqforhighQJ}, $Q_J\left(\partial^{\alpha}f, \partial^{\beta}(\frac{1}{4}\nabla | \nabla f|^2), X^{(I)}\right)$ is a polynomial of $\partial^{\alpha}f$ with $|\alpha|\le |J|$, $\partial^{\beta}(\frac{1}{4}\nabla | \nabla f|^2)$ with $|\beta|\le |J|$, and $X^{(I)}$ with $|I|\le |J|-1$. Similarly, $R_J$ is a polynomial of $\partial^{\alpha}\sigma$ with $|\alpha|\le |J|$ and $X^{(I)}$ with $|I|\le |J|-1$. Note that each term in both polynomials contains some $X^{(I)}$ to a positive power.
Again, using It\^o's formula, we find that for $p\ge 2$, \begin{multline*} \frac{d}{dt}\mathbb{E}|X^{(J)}|^p =\mathbb{E}p|X^{(J)}|^{p-2}X^{(J)}\cdot \Big[\left(-\nabla^2f-\eta\left(\nabla \left(\frac{1}{4}\nabla | \nabla f|^2\right)\right)^T\right)\cdot X^{(J)}\\ +Q_J\left(\partial^{\alpha}f, \partial^{\beta}\left(\frac{1}{4}\nabla | \nabla f|^2\right), X^{(I)} \right)\Big] +\eta p\mathbb{E}|X^{(J)}|^{p-2}X^{(J)}_kX_{\ell}^{(J)}\partial_{k}\sigma_{i,\cdot}\partial_{\ell}\sigma_{i,\cdot}:M_J\\ +\eta p\mathbb{E}|X^{(J)}|^{p-2} R_JR_J^T:M_J +2\eta p\mathbb{E}|X^{(J)}|^{p-2}(X^{(J)}\cdot \nabla \sigma)\cdot R_J^T:M_J. \end{multline*} As in the case $|J|=1$, for $\eta < \eta_0$, the first and third terms above can be bounded above by $-p\gamma\mathbb{E}|X^{(J)}|^p$ with $\gamma$ a positive constant. Since $X$ is bounded, the second term (the ``$Q_J$'' term) can be bounded above by $p\mathbb{E}|X^{(J)}|^{p-1}\sum_{0<|I|\le |J|-1}C_{1,I}|X^{(I)}|^{q_I}$. The other terms can be bounded similarly. So we have \begin{multline*} \frac{d}{dt}\mathbb{E}|X^{(J)}|^p\le -p\gamma\mathbb{E}|X^{(J)}|^p +p\bar{A}\mathbb{E}|X^{(J)}|^{p-1}\sum_{0<|I|\le |J|-1}C_{1,I}|X^{(I)}|^{q_I} \\+p\eta \mathbb{E}|X^{(J)}|^{p-2}\sum_{0<|I|\le |J|-1}C_{2,I}|X^{(I)}|^{r_{2,I}}. \end{multline*} Next, we estimate the $\mathbb{E}|X^{(J)}|^{p-1}$ term and the $\mathbb{E}|X^{(J)}|^{p-2}$ term. Applying Young's inequality, for any $\delta>0$, we have \[ \mathbb{E}|X^{(J)}|^{p-1}\sum_{|I|\le |J|-1}C_{1,I}|X^{(I)}|^{q_I} \le \delta\frac{(p-1)\mathbb{E}|X^{(J)}|^p}{p}+C_3\frac{1}{p\delta}\mathbb{E}\left(\sum_{|I|\le |J|-1}C_{1,I}|X^{(I)}|^{q_I}\right)^p. \] The $\mathbb{E}|X^{(J)}|^{p-2}$ term can be controlled similarly if $p>2$. If $p=2$, we simply leave it as it is. Now, we choose $\delta$ small enough such that $\gamma-2\delta>0$.
Then, for $\eta$ small enough, combining with the induction assumption on the moments \eqref{eq:momentcontrol}, we find that \begin{gather} \frac{d}{dt} \mathbb{E}|X^{(J)}|^p \leq -p\bar{\gamma} \mathbb{E}|X^{(J)}|^p + C(1+|x|^{q_{J}})\exp(-\gamma_{p,J} t), \end{gather} where $\bar{\gamma}$, $C$ and $\gamma_{p,J}$ are positive constants. Hence \eqref{eq:momentcontrol} also holds for $J$ by Gr\"onwall's inequality. Namely, we can control the moments by \begin{gather}\label{eq:momentcontrol-J} \mathbb{E}|X^{(J)}(t; x)|^p \le C(1+|x|^{q_{J}})\exp(-\gamma_{p,J} t),~~p\ge 2. \end{gather} {\bf Step 3:} Estimates of $\partial^J u(x, t)$. Finally, using \eqref{eq:representationu} and a similar discussion to that in \cite{elworthy1994formulae}, we have \begin{gather}\label{eq:DerivativerepresentationGeneralJ} \partial^Ju(x, t)= \begin{cases} \mathbb{E}\left[\nabla \varphi(X) \cdot X^{(J)}\right], & \quad |J| = 1,\\ \mathbb{E}\left[\nabla\varphi(X)\cdot X^{(J)}+ P_J\left(\partial^{\alpha}\varphi(X), X^{(I)}\right) \right], & \quad |J| \geq 2, \end{cases} \end{gather} where $P_J\left(\partial^{\alpha}\varphi(X), X^{(I)}\right)$ is a polynomial of $X^{(I)}$ with $|I|\le |J|-1$ and $\partial^{\alpha}\varphi(X)$ with $|\alpha|\le |J|$. Using \eqref{eq:mattingly} and \eqref{eq:momentcontrol-J}, and applying H\"older's inequality to \eqref{eq:DerivativerepresentationGeneralJ}, yields the result for $\partial^J u$. This finishes the proof. \end{proof} \begin{remark} The initial value $x$ is inside $B(0,R)$, but the solution $X$ to the SDE \eqref{eq:modifiedSDE} can be outside the ball $B(0, R)$. However, this proposition ensures that $\sup_{x\in B(0,R)}|\partial^J u(x,t)|$ can still be controlled. \end{remark} \section{Main theorem: uniform in time diffusion approximation}\label{sec:main} Now, we fix $R>L$ as in Lemma \ref{lmm:Scontraction} and consider initial distributions/laws of $X_0$ that are supported in $B(0, R)$. We take $\Lambda$ as in \eqref{eq:defLambda}.
Observe that under this setting, it holds that \begin{gather} \|\Lambda\|_{C^k(\mathbb{R}^d)}\le C_k\|\Sigma\|_{C^k(B(0, R))}. \end{gather} Now consider the SDE \begin{gather}\label{eq:newSDE} dX=-\left(\nabla f(X)+\eta \left(\frac{1}{4}\nabla|\nabla f(X)|^2\right) \right)\,dt+\sqrt{\eta \Lambda(X)}\,dW,\quad X|_{t=0} = x. \end{gather} Let $u$ be the solution to the Kolmogorov equation $\partial_tu=\mathcal{L} u$ with $u(x, 0)=\varphi(x)$, where $\mathcal{L}$ is the generator defined below (recall also the equation for $u$ in \eqref{eq:modifiedpde}). We have the following long time diffusion approximation. \begin{theorem}\label{thm:strongconvex} Suppose Assumption \ref{ass:strongconvex} holds. Then the SDE \eqref{eq:newSDE} approximates the SGD \eqref{eq:sgd} with second order weak accuracy uniformly in time for initial distributions supported in $B(0,R)$. More precisely, for $R$ as required in Lemma \ref{lmm:Scontraction} and $\varphi\in C^{4}$, there exists $C=C(\varphi,R)>0$ that depends on $\varphi$ and $R$ but is independent of $\eta$ such that when $\eta$ is sufficiently small (recall \eqref{eq:Un} for $U^n$), \begin{gather} \sup_{n\ge 0}\sup_{x\in B(0, R)}|U^n(x)-u(x, n\eta)|< C \eta^2. \end{gather} \end{theorem} We now make some preparations for proving this theorem. For convenience of presentation, we introduce \begin{gather} u^n(x):=u(x, n\eta), \end{gather} and denote the generator associated with \eqref{eq:newSDE} by \begin{gather} \mathcal{L}=-\left(\nabla f(x)+\eta (\frac{1}{4}\nabla | \nabla f(x)|^2)\right)\cdot\nabla+\frac{1}{2}\eta\Lambda(x):\nabla^2=:\mathcal{L}_1+\eta\mathcal{L}_2, \end{gather} so that $\mathcal{L}_1=-\nabla f(x)\cdot\nabla$ and $\mathcal{L}_2=-\left(\frac{1}{4}\nabla | \nabla f(x)|^2\right)\cdot\nabla+\frac{1}{2}\Lambda:\nabla^2$.
Since $u^{n+1}=e^{\eta \mathcal{L}}u^n$, a direct semigroup expansion gives \begin{gather} \begin{split} & u^j=u^{j-1}+\eta \mathcal{L} u^{j-1}(x)+\frac{1}{2}\eta^2 \mathcal{L}^2u^{j-1}(x)+\eta^3 R_1\\ & =u^{j-1}+\eta\left(-\nabla f-\eta (\frac{1}{4}\nabla | \nabla f|^2)\right)\cdot\nabla u^{j-1} +\frac{\eta^2}{2}\Lambda:\nabla^2u^{j-1} +\frac{\eta^2}{2} \nabla f\cdot\nabla(\nabla f\cdot\nabla u^{j-1})+\eta^3 R_2\\ & = u^{j-1} -\eta \nabla f \cdot \nabla u^{j-1} + \frac{\eta^2}{2} (\Lambda + \nabla f \otimes \nabla f) : \nabla^2 u^{j-1} + \eta^3 R_2. \end{split} \end{gather} The remainder term $R_2$ contains derivatives of $u$ and $f$. It seems that we need the $C^6$ norms of $u$ and $f$ to bound the terms of order $\eta^3$. In fact, we can relax this. \begin{lemma}\label{lmm:localtruncation} For $R$ as required in Lemma \ref{lmm:Scontraction}, it holds that \begin{multline} \sup_{x\in B(0,R)}\left|u^{n+1}(x)-\left(u^n+\eta \mathcal{L} u^n+\frac{1}{2}\eta^2\nabla f\cdot\nabla(\nabla f\cdot\nabla u^n)\right) \right| \\ \le C(\|f\|_{C^4(B(0, R))})\sup_{t\in [t^n, t^{n+1}]}\|u(\cdot,t)\|_{C^{1,4}(B(0,R))}\eta^3, \end{multline} where $\|u\|_{C^{p,q}(U)}:=\sum_{p\le |\alpha|\le q}\sup_{x\in U}|\partial^{\alpha}u|$ and $C(\|f\|_{C^4(B(0, R))})$ is a constant depending on $\|f\|_{C^4(B(0, R))}$. \end{lemma} \begin{proof} The proof is straightforward using the equivalent integral representation \begin{gather} u(x,t)=u^n(x)+\int_{t^n}^{t}\mathcal{L}_1u(x,s)+\eta \mathcal{L}_2u(x,s)\,ds. \end{gather} If we directly perform the semigroup expansion, we will have $\mathcal{L}^3 u$ terms in the remainder, which require $C^6$ norms of $u$, so we use the integral form instead. Our strategy is to substitute the equivalent integral representation of $u$ into the integral on the right-hand side of the formula above, and to repeat this process until every term in the remainder is of order $\eta^3$. By doing so, we avoid the $C^6$ norms of $u$.
Simple calculation gives \begin{multline} u(x,t)=u^n(x)+\int_{t^n}^{t}\mathcal{L}_1\left(u^n(x)+\int_{t^n}^{s}\mathcal{L}_1u(x,\tau)+\eta \mathcal{L}_2u(x,\tau)d\tau\right)ds \\+\eta\int_{t^n}^{t}\mathcal{L}_2\left(u^n(x)+\int_{t^n}^{s}\mathcal{L}_1u(x,\tau)+\eta\mathcal{L}_2u(x,\tau)d\tau\right)ds \\=u^n(x)+(t-t^n)(\mathcal{L}_1+\eta\mathcal{L}_2)u^n(x)+\int_{t^n}^{t}\int_{t^n}^{s}\mathcal{L}_1^2u(x,\tau)d\tau ds +\eta \int_{t^n}^{t}\int_{t^n}^{s}\mathcal{L}_1\mathcal{L}_2u(x,\tau)d\tau ds\\+\eta\int_{t^n}^{t}\int_{t^n}^{s}\mathcal{L}_2\mathcal{L}_1u(x,\tau)d\tau ds +\eta^2\int_{t^n}^{t}\int_{t^n}^{s}\mathcal{L}_2^2u(x,\tau)d\tau ds \\=u^n(x)+(t-t^n)(\mathcal{L}_1+\eta\mathcal{L}_2)u^n(x)+\int_{t^n}^{t}\int_{t^n}^{s}\mathcal{L}_1^2\left(u^n(x)+\int_{t^n}^{\tau}\mathcal{L}_1u(x,z)+\eta\mathcal{L}_2u(x,z)dz\right)d\tau ds \\+\eta \int_{t^n}^{t}\int_{t^n}^{s}\mathcal{L}_1\mathcal{L}_2u(x,\tau)d\tau ds+\eta\int_{t^n}^{t}\int_{t^n}^{s}\mathcal{L}_2\mathcal{L}_1u(x,\tau)d\tau ds +\eta^2\int_{t^n}^{t}\int_{t^n}^{s}\mathcal{L}_2^2u(x,\tau)d\tau ds \\=u^n(x)+(t-t^n)(\mathcal{L}_1+\eta\mathcal{L}_2)u^n(x)+\frac{1}{2}(t-t^n)^2\mathcal{L}_1^2u^n(x)+\int_{t^n}^{t}\int_{t^n}^{s}\int_{t^n}^{\tau}\mathcal{L}_1^2\left(\mathcal{L}_1u(x,z)+\eta\mathcal{L}_2u(x,z)\right)dzd\tau ds \\+\eta \int_{t^n}^{t}\int_{t^n}^{s}\mathcal{L}_1\mathcal{L}_2u(x,\tau)d\tau ds+\eta\int_{t^n}^{t}\int_{t^n}^{s}\mathcal{L}_2\mathcal{L}_1u(x,\tau)d\tau ds +\eta^2\int_{t^n}^{t}\int_{t^n}^{s}\mathcal{L}_2^2u(x,\tau)d\tau ds. \end{multline} Setting $t=t^{n+1}$, since the initial value $x \in B(0,R)$, and \[ |\mathcal{L}_i^{\alpha}\mathcal{L}_j^{\beta}u|\le C(\|f\|_{C^{i\alpha+j\beta}(B(0,R))})\|u\|_{C^{1,i\alpha+j\beta}(B(0,R))} \] with $i,j \in \{1,2\}$ and $i\alpha+j\beta \leq 4$, the claim follows. \end{proof} With all the preparations done, we can now prove our main theorem.
\begin{proof}[Proof of Theorem \ref{thm:strongconvex}] Since $U^n=\mathcal{S}^n\varphi=\mathcal{S}^nu(\cdot, 0)$, we have \begin{gather*} U^n(x)-u(x, n\eta)=\sum_{j=1}^n \mathcal{S}^{n-j}(\mathcal{S}u^{j-1}-u^j)(x). \end{gather*} By Lemma \ref{lmm:Scontraction}, we have $\|U^n(x)-u(x, n\eta)\|_{L^{\infty}(B(0, R))}\le \sum_{j=1}^n \|\mathcal{S}u^{j-1}-u^j\|_{L^{\infty}(B(0, R))}$. Now, direct Taylor expansion shows that \begin{multline*} (\mathcal{S}u^{j-1})(x)=\mathbb{E}u^{j-1}\left(x-\eta\nabla f(x;\xi)\right) =u^{j-1}(x)-\eta\nabla f(x)\cdot\nabla u^{j-1}(x)\\ +\frac{1}{2}\eta^2\left(\Sigma+\nabla f(x)\otimes \nabla f(x)\right):\nabla^2u^{j-1}(x) +\eta^3 R, \end{multline*} where $\|R\|_{L^{\infty}(B(0, R))}\le C(\sup_{\xi}\|f(\cdot,\xi)\|_{C^1(B(0, R))}) \|u^{j-1}\|_{C^3(B(0, R))}$. Note that $\Sigma+\nabla f(x)\otimes \nabla f(x)=\mathbb{E}\nabla f(x,\xi)\otimes \nabla f(x,\xi)$, and that $\Lambda=\Sigma$ in $B(0, R)$. By Lemma \ref{lmm:localtruncation}, there exists a constant $C$ depending on $\sup_{\xi}\|f(\cdot, \xi)\|_{C^4}$ such that \[ \|\mathcal{S}u^{j-1}-u^j\|_{L^{\infty}(B(0, R))} \le C(\|f(\cdot)\|_{C^4(B(0, R))}) \sup_{t\in [t^{j-1}, t^j]}\|u(\cdot, t)\|_{C^{1,4}(B(0, R))} \eta^3. \] By Proposition \ref{prop:derivativedecay}, there exists $\beta>0$ such that for sufficiently small $\eta$ it holds \[ \sum_{j=1}^n\|\mathcal{S}u^{j-1}-u^j\|_{L^{\infty}(B(0, R))} \le C(\|f(\cdot)\|_{C^4}) \eta^3 \sum_{j}C_4 e^{-\beta j \eta} \le C\eta^2. \] The claim therefore follows. \end{proof} \section{Discussion}\label{sec:conclusion} In this paper we extended the classical diffusion approximation for SGD from finite time to infinite time, provided that the expected loss function is strongly convex. Here, we provide some further discussion. \subsection{Significance of the diffusion approximation} The usual diffusion approximation of SGD holds only on finite time intervals. 
The extension to a uniform-in-time approximation in this work would allow us to analyze the asymptotic behavior of SGD using the tools from SDEs with small noise, such as those for randomly perturbed dynamical systems \cite{freidlin2004random,MR2571413}. As a first example, in \cite{feng2019uniform}, by assuming that each $f(\cdot;\xi)$ is convex, the SGD has been shown to have an invariant measure. Without the convexity assumption on each $f(\cdot, \xi)$ as in our current work, it is not straightforward to study the ergodicity of SGD directly. The uniform $O(\eta^2)$ weak approximation, however, provides a possible way to investigate the long time behavior under these weaker conditions. In fact, if the expected loss $f$ is strongly convex within the region that the SGD explores, we have the uniform-in-time approximation, and the diffusion approximation SDE can be shown to be exponentially ergodic due to the convexity of $f$. The uniform-in-time approximation then ensures that the distribution of SGD is only $O(\eta^2)$ away from this invariant measure, which tells us the long time behavior. As another example, the uniform-in-time diffusion approximation may enable us to investigate the behavior of SGD near local minimizers, by analyzing the large deviation behaviors of \eqref{eq:modifiedSDE}, as was done in \cite{hulililiu2018}, where the uniform-in-time approximation was taken for granted. Moreover, as in the work of Smith et al. \cite{smith2020generalization,smith2021origin}, the term $\frac{1}{4}\eta |\nabla f|^2$ may be regarded as an implicit regularizer of the loss landscape. Hence, with the uniform-in-time SDE approximation, we may similarly study the behavior near local minimizers, how the regularizer affects this behavior, and thus possibly the generalization ability. 
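To make the role of the correction term concrete, consider a one-dimensional toy example (the quadratic loss, all parameter values, and the Monte Carlo setup below are our own illustration, not part of the analysis above): take $f(x;\xi)=x^2/2-\xi x$ with $\xi\sim N(0,\sigma^2)$, so that $f(x)=x^2/2$ and the corrected drift is $-\nabla(f+\eta|\nabla f|^2/4)=-(1+\eta/2)x$. The SGD mean contracts by the factor $(1-\eta)$ per step, which the corrected SDE mean $e^{-(1+\eta/2)t}$ matches through $O(\eta^2)$ since $\log(1-\eta)=-\eta-\eta^2/2+O(\eta^3)$, whereas the uncorrected drift $-x$ matches only to $O(\eta)$:

```python
import numpy as np

# Toy check (synthetic parameters) of the drift correction
# -(eta/4) * grad |grad f|^2 for f(x; xi) = x^2/2 - xi*x, xi ~ N(0, sigma^2).
# SGD mean: (1 - eta)^n;  corrected SDE mean: exp(-(1 + eta/2) * n * eta);
# uncorrected SDE mean: exp(-n * eta).
eta, sigma, n_steps, n_chains = 0.1, 0.5, 10, 200_000
rng = np.random.default_rng(0)

x = np.ones(n_chains)                        # all chains start at x0 = 1
for _ in range(n_steps):
    xi = sigma * rng.standard_normal(n_chains)
    x -= eta * (x - xi)                      # SGD step: x - eta * grad f(x; xi)

t = n_steps * eta
sgd_mean = x.mean()                          # close to (1 - eta)^n
plain_sde_mean = np.exp(-t)                  # drift -x only
corrected_sde_mean = np.exp(-(1 + eta / 2) * t)
```

With these (invented) parameters, the corrected SDE mean is an order of magnitude closer to the empirical SGD mean than the uncorrected one, consistent with the improvement from $O(\eta)$ to $O(\eta^2)$ weak error.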
\subsection{Discussion on the nonconvex case} For a general loss function, given an initial value in $B(0, R)$, SGD can hit the boundary of $B(0, 2R)$ in $M\eta^{-1}$ steps, where $M$ depends on the $L^{\infty}$ norms of $f(\cdot, \xi)$ on $B(0, 2R)$. As in Section \ref{sec:setup}, we can modify the values of $\Sigma$ outside $B(0, 2R)$ so that it becomes a smooth function with compact support. Then, it is possible to show that for $x\in B(0, R)$ \[ \mathbb{E}|X^{J}|^p\le C \exp(\gamma_{I,p} t), \] where $\gamma_{I,p}$ depends on the values of $f$ in $B(0, 2R)$. Using this, one can show that \[ \|\partial^{J}u\|_{L^{\infty}(B(0, R))}\le C\exp(\alpha t). \] By a similar computation, we find that the $O(\eta^2)$ diffusion approximation is valid up to time $T\sim \log(1/\eta)$. Indeed, this can be seen by simply replacing $\exp(-\beta j \eta)$ with $\exp(\alpha j \eta)$ in the last step of the proof of Theorem \ref{thm:strongconvex}. \begin{proposition}\label{prop:2} Assume the initial point of SGD is chosen from $B(0, R)$ for some $R>0$. For any $\varphi\in C^{\infty}$, there exist $\beta>0$ and $C>0$, depending on $\varphi$ and the norms of $f(\cdot,\xi)$ in $B(0, 2R)$, such that \begin{gather} \sup_{n\eta\le \beta\ln(1/\eta)}\sup_{x\in B(0, R)}|U^n(x)-u(x, n\eta)|\le C \eta^2. \end{gather} \end{proposition} In applications, the expected loss functions are generally not strongly convex in the region of interest. In particular, the problem of investigating the behavior of SGD near saddle points is important for understanding certain behaviors of SGD \cite{kleinberg2018alternative}. As mentioned in \cite{hulililiu2018}, Kifer proved that the SDE \[ dX=-\nabla f(X)dt+\sqrt{\epsilon}\sigma\, dW \] escapes the saddle point of $f$ in $O(\log(\epsilon^{-1}))$ time. Using the diffusion approximation, we expect that SGD escapes the saddle point in a typical number of steps of order $O(\eta^{-1}|\log \eta|)$, which corresponds to the time regime in Proposition \ref{prop:2}. 
Since the diffusion approximation is valid exactly up to this time regime, it is not realistic to use the diffusion approximation to justify this guess. We leave this problem for future work. \subsection{Extensions and possible future work} In this paper, we established the uniform-in-time diffusion approximation of SGD for the strongly convex case, but the extension to the general non-convex case remains difficult. This is mainly due to the fact that the diffusion coefficient of SGD is usually of $O(\sqrt{\eta})$, where $\eta$ is the step size or learning rate. In Stochastic Gradient Langevin Dynamics (SGLD), however, the diffusion coefficient is of $O(1)$ \cite{welling2011bayesian}. Hence, it may be possible in future work to prove a high-order uniform-in-time diffusion approximation of SGLD with non-convex potentials. \section*{Acknowledgement} This work is financially supported by the National Key R\&D Program of China, Project Number 2021YFA1002800. The work of L. Li was partially supported by Shanghai Municipal Science and Technology Major Project 2021SHZDZX0102, NSFC 11901389 and 12031013, and Shanghai Sailing Program 19YF1421300. We would like to thank Yang Jing for help with some of the derivations. \bibliographystyle{unsrt}
\section{Introduction} In this paper, we consider linear operator equations of the form \begin{equation}\label{Ax=y} Ax = y \,, \end{equation} where $A \, : \, {\ell^2} \to {\ell^2}$ is a bounded linear operator on the (infinite-dimensional) sequence space ${\ell^2}$. Note that by using a suitable basis or frame, operator equations between separable function spaces such as $L^p$, Sobolev, or Besov spaces can all be transformed into problems of the form \eqref{Ax=y}. We assume that only noisy data $y^{\delta}$ satisfying \begin{equation}\label{y-yd} \norm{y - y^{\delta}}_2 \leq \delta \end{equation} are available, where $\norm{.}_2$ denotes the standard ${\ell^2}$-norm. Problems of the form \eqref{Ax=y} arise in many practical applications including, but not limited to, image processing (compression, denoising, enhancement, inpainting, etc.), image reconstruction, as well as medical and tomographic imaging. For example, in the case of tomography, where $A$ is the Radon transform and $x$ is the internal density to be reconstructed from sinogram data $y^{\delta}$, the solution $x$ can be expected to have a sparse representation in a given basis. Hence, we are particularly interested in sparse solutions of \eqref{Ax=y}, to which end we consider the minimization of the following Tikhonov functional \begin{equation}\label{Tikhonov} \mathcal{T}_{\alpha,\delta}(x) := \norm{Ax-y^{\delta}}_2^2 + \alpha \norm{x}_1 \,, \end{equation} where $\norm{.}_1$ denotes the standard ${\ell^1}$-norm. This problem has already been thoroughly studied analytically (compare with Section~\ref{section_regularization}) as well as numerically (see Section~\ref{section_minimization} for an overview of previously proposed methods). However, the efficient minimization of the Tikhonov functional $\mathcal{T}_{\alpha,\delta}$ remains an active field of study, especially since the presence of the ${\ell^1}$-norm makes the functional non-differentiable at the origin. 
One approach to circumvent this issue was proposed in \cite{Ramlau_Teschke_2006}, where the authors considered a transformation of the Tikhonov functional into one which is once differentiable. In this paper, we extend their transformation idea by using an approximate transformation approach in order to end up with a functional that is also twice differentiable. This then allows the application of efficient second-order iterative methods for carrying out the minimization. This paper is organized as follows: In Section~\ref{section_regularization}, we review known regularization results concerning sparsity regularization via the Tikhonov functional \eqref{Tikhonov} and in Section~\ref{section_minimization}, we discuss some of the existing methods for its minimization. In Section~\ref{transformation_approach}, we consider the transformation approach presented in \cite{Ramlau_Zarzer_2012} and its extension for obtaining twice differentiable functionals, for which we provide a convergence analysis. Furthermore, in Section~\ref{numerical_experiments}, we present numerical simulations based on a tomography problem to demonstrate the usefulness of our approach. Finally, a conclusion is given in Section~\ref{sect_conclusion}. \section{Sparsity Regularization} \label{section_regularization} In this section, we recall some basic results (adapted from \cite[Section~3.3]{Scherzer_Grasmair_Grossauer_Haltmeier_Lenzen_2008}) concerning the regularization properties of Tikhonov regularization with sparsity constraints. For a more extensive review on regularization theory for Tikhonov functionals with sparsity constraints the reader is referred to \cite{Resmerita_2005, Ramlau_Resmerita_2010, Jin_Maass_2012}, and more recently, \cite{ Jin_Maass_Scherzer_2017, Hohage_Sprung_Weidling_2020}. 
First of all, concerning the well-definedness of minimizers of $\mathcal{T}_{\alpha,\delta}$ and their stability with respect to the data $y^{\delta}$, we get the following result, which is an immediate consequence of \cite[Theorem~3.48]{Scherzer_Grasmair_Grossauer_Haltmeier_Lenzen_2008}: \begin{theorem} Let $A \, : \, {\ell^2} \to {\ell^2}$ be weakly sequentially continuous, $\alpha > 0$ and $y^{\delta} \in {\ell^2}$. Then there exists a minimizer of the functional $\mathcal{T}_{\alpha,\delta}$ defined in \eqref{Tikhonov}. Furthermore, the minimization is weakly subsequentially stable with respect to the noisy data $y^{\delta}$. \end{theorem} Concerning the convergence of the minimizers of the Tikhonov functional, we get the following theorem, which follows directly from \cite[Theorem~3.49]{Scherzer_Grasmair_Grossauer_Haltmeier_Lenzen_2008}: \begin{theorem} Let $A \, : \, {\ell^2} \to {\ell^2}$ be weakly sequentially continuous, assume that the problem \eqref{Ax=y} has a solution in ${\ell^1}$, and let $\alpha(\delta) : (0,\infty) \to (0, \infty) $ be chosen such that \begin{equation}\label{cond_alpha_delta} \alpha(\delta) \to 0 \,, \quad \text{and} \quad \frac{\delta^2}{\alpha(\delta)} \to 0 \,, \quad \text{as} \quad \delta \to 0 \,. \end{equation} Moreover, assume that the sequence $\delta_k$ converges to $0$, that $y_k := y^{\delta_k}$ satisfies the estimate $\norm{y-y_k}_2\leq\delta_k$, and that $x_k$ is a sequence of elements minimizing $\mathcal{T}_{\alpha(\delta_k),y_k}$. Then there exists an ${\ell^1}$-minimum-norm solution $x^\dagger$ and a subsequence $x_{k_n}$ of $x_k$ such that $\norm{x_{k_n} - x^\dagger}_2 \to 0$ as $n \to \infty$. Furthermore, if the ${\ell^1}$-minimum-norm solution $x^\dagger$ is unique, then $\norm{x_k - x^\dagger}_2 \to 0$ as $k \to \infty$. \end{theorem} Note that typically, one only gets weak subsequential convergence of the minimizers of the Tikhonov functional to the minimum-norm solution. 
However, the above theorem shows that for sparsity regularization, one even gets strong subsequential convergence. Furthermore, note that if $A$ is injective, the ${\ell^1}$-minimizing solution is sparse (i.e., only finitely many of its coefficients are non-zero) and satisfies a variational source condition, then it is possible to prove optimal convergence rates under the a-priori parameter choice $\alpha(\delta) \sim \delta$, both in Bregman distance and in norm \cite[Theorem~3.54]{Scherzer_Grasmair_Grossauer_Haltmeier_Lenzen_2008}. \section{Minimization of the Tikhonov functional} \label{section_minimization} In this section, we review some of the previously proposed methods for the minimization of \eqref{Tikhonov}. Due to the non-differentiability of the ${\ell^1}$-norm at zero, this minimization problem is a non-trivial task. Among the first and perhaps the best-known of these methods is the so-called \emph{Iterative Shrinkage Thresholding Algorithm (ISTA)}, proposed in \cite{Daubechies_Defrise_DeMol_2004}. Each iteration of this algorithm consists of a gradient-descent step applied to the residual functional, followed by a thresholding step, which leads to the iterative procedure \begin{equation}\label{ISTA} x_{k+1}^\delta = S_{\alpha \omega} \kl{x_k^\delta- \omega A^* \kl{Ax_k^\delta - y^\delta}} \, , \end{equation} where $S_{\alpha \omega}$ denotes the component-wise thresholding (shrinkage) operator \begin{equation*} \kl{S_{\alpha \omega}(x)}_k := \mfunc{sgn}(x_k) \max\{ \vert x_k \vert - \alpha \omega,0 \} \, . \end{equation*} It was shown that the iterates generated by ISTA converge to a minimizer of the Tikhonov functional \eqref{Tikhonov} under suitable assumptions \cite{Daubechies_Defrise_DeMol_2004, Bredies_Lorenz_2008}. Unfortunately, this convergence can be very slow, which motivated the introduction of \emph{Fast ISTA (FISTA)} in \cite{Beck_Teboulle_2009}. 
Based on Nesterov's acceleration scheme \cite{Nesterov_1983}, the iterates of FISTA are defined by \begin{equation}\label{FISTA} \begin{split} x_{k}^\delta &=S_{\alpha \omega} \big( z_{k-1}^\delta- \omega A^* \big(Az_{k-1}^\delta-y^\delta \big) \big) \,, \qquad t_k = \tfrac{1+\sqrt{1+4 t_{k-1}^2}}{2} \,, \\ z_k^\delta &= x_k^\delta+ \Big(\tfrac{t_{k -1}-1}{t_k} \Big) (x_k^\delta-x_{k-1}^\delta) \,, \qquad z_0^\delta =x_0 \,, \quad t_0 = 1 \,. \end{split} \end{equation} The convergence analysis presented in \cite{Beck_Teboulle_2009} as well as many numerical experiments show that the iterates of FISTA converge much faster than those of ISTA, with the residual converging at a rate of $O(1/k^2)$ for FISTA compared to $O(1/k)$ for ISTA, hence making it more practical. This speedup also holds for a generalized version of FISTA, which is applicable to composite (convex) minimization problems \cite{Attouch_Peypouquet_2016}. Applied to problem \eqref{Tikhonov}, it has the same form as \eqref{FISTA}, but with the computation of $z_k^\delta$ replaced by \begin{equation*} z_{k}^\delta = x_k^\delta + \tfrac{k-1}{k + \beta -1} \kl{x_k^\delta - x_{k-1}^\delta} \,, \end{equation*} where the choice of $\beta = 3$ is common practice. The convergence of this method for any other choice of $\beta > 3$ was also established in \cite{Attouch_Peypouquet_2016}. In the context of compressed sensing, where one tries to recover signals from incomplete and inaccurate measurements in a stable way, minimization problems of the form \eqref{Tikhonov} have been analyzed and numerically treated in finite dimensions (see e.g.\ \cite{Candes_Romberg_Tao_2006, Donoho_Tanner_2005, Daubechies_DeVore_Fornasier_Gunturk_2010}). Also in finite dimensions, the minimization problem \eqref{Tikhonov} has been tackled successfully by using various Krylov-subspace techniques (see e.g.\ \cite{Buccini_Reichel_2019, Lanza_Morigi_Reichel_Sgallari_2015, Huang_Lanza_Morigi_Reichel_Sgallari_2017}). 
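For concreteness, the iterations \eqref{ISTA} and \eqref{FISTA} can be sketched in a few lines of code. The following finite-dimensional illustration is our own (a minimal sketch, not from the references above); a step size $\omega$ below $1/\norm{A}^2$ is a safe choice here:

```python
import numpy as np

def shrink(x, t):
    # Component-wise soft-thresholding (shrinkage) operator S_t
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, alpha, omega, n_iter):
    # Gradient step on the residual followed by shrinkage, as in (ISTA)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = shrink(x - omega * A.T @ (A @ x - y), alpha * omega)
    return x

def fista(A, y, alpha, omega, n_iter):
    # Same shrinkage step, applied at the extrapolated point z, as in (FISTA)
    x_old = np.zeros(A.shape[1])
    z, t_old = x_old.copy(), 1.0
    for _ in range(n_iter):
        x = shrink(z - omega * A.T @ (A @ z - y), alpha * omega)
        t = (1.0 + np.sqrt(1.0 + 4.0 * t_old ** 2)) / 2.0
        z = x + ((t_old - 1.0) / t) * (x - x_old)
        x_old, t_old = x, t
    return x_old
```

Both routines only differ in the point at which the shrinkage step is evaluated; the extrapolation in `fista` is what yields the accelerated rate.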
In infinite dimensions, a number of different minimization algorithms for \eqref{Tikhonov} have been proposed. For example, the authors of \cite{Ramlau_Teschke_2006, Ramlau_Teschke_2005, Ramlau_Teschke_2010} have proposed a surrogate functional approach, while the authors of \cite{Bredies_Lorenz_Maass_2009, Bonesky_Bredies_Lorenz_Maass_2007} and \cite{Griesse_Lorenz_2008} have proposed conditional gradient and semi-smooth Newton methods, respectively. Of particular interest to us is the minimization approach presented in \cite{Ramlau_Zarzer_2012, Zarzer_2009}, which we discuss in detail in Section~\ref{transformation_approach} below. It is based on a nonlinear transformation utilizing a Nemytskii operator, which turns the Tikhonov functional \eqref{Tikhonov} into one with a standard ${\ell^2}$-norm penalty, but with a nonlinear operator. Since the resulting transformed functional is continuously Fr\'echet differentiable, one can use standard first-order iterative methods for its minimization. Unfortunately, the functional is not twice differentiable, which prohibits the use of second-order methods, known for their efficiency. Circumventing this shortcoming is the motivation for the minimization approach based on an approximate transformation presented below. \section{Transformation Approach} \label{transformation_approach} The concept of approximating a nonsmooth operator with a convergent sequence of smooth operators has been used before, e.g., in \cite{Acar_Vogel_1994} in the context of BV regularization. In the related setting where only an inexact forward operator is known, convergence of the resulting approximate solutions as the uncertainty in the forward operator and the data decreases has been studied, e.g., in \cite{Korolev_Lellmann_2018}. As described above, the authors of \cite{Ramlau_Zarzer_2012, Zarzer_2009} considered a transformation approach for minimizing the Tikhonov functional \eqref{Tikhonov}. 
This approach is based on a nonlinear transformation of the functional using the Nemytskii operator \begin{equation}\label{def_N_p_q} \begin{split} N_{p,q} \, : \, (x_k)_{k \in \mathbb{N}} \mapsto \kl{ \eta_{p,q}(x_k) }_{k \in \mathbb{N}} \,, \end{split} \end{equation} where the function $\eta_{p,q}$ is defined by \begin{equation}\label{def_eta_p_q} \eta_{p,q} \, : \, \R \to \R \,, \quad \tau \mapsto \mfunc{sgn}(\tau) \abs{\tau}^\frac{q}{p} \,. \end{equation} The operator $N_{p,q}$ has for example been used in the context of maximum entropy regularization \cite{Engl_Landl_1993}. Since here we need it only for the special case $p=1$ and $q=2$, we now define the operator \begin{equation}\label{def_N} \begin{split} N \, : \, {\ell^2} \to {\ell^1} \,, \qquad x \mapsto N_{1,2}(x) \,, \end{split} \end{equation} and the function \begin{equation}\label{def_eta} \eta \, : \, \R \to \R \,, \quad \tau \mapsto \eta_{1,2}(\tau) \,. \end{equation} The operator $N$ is continuous, bounded, bijective, and Fr{\'e}chet differentiable with \begin{equation} N'(x)h = \kl{2 \abs{x_k}h_k }_{k \in \mathbb{N}} \,, \end{equation} and is used to define the following nonlinear operator \begin{equation}\label{def_F} F \, : \, {\ell^2} \to {\ell^2} \,, \qquad x \mapsto (A \circ N)(x) \,. \end{equation} This is then used to transform the problem of minimizing \eqref{Tikhonov} into a standard ${\ell^2} - {\ell^2}$ minimization problem, as shown by the following result from \cite{Ramlau_Zarzer_2012}: \begin{proposition} The following two problems are equivalent: \begin{enumerate} \item Find $x^* \in {\ell^1}$, such that $x^*$ minimizes \begin{equation} \label{def_g} \mathcal{T}_{\alpha,\delta}(x) = \norm{ Ax- y^{\delta}}_2^2 + \alpha \norm{x}_1 \,. \end{equation} \item Find $x^*= N(\tilde{x})$, such that $\tilde{x} \in {\ell^2}$ minimizes \begin{equation} \label{def_Jad} \mathcal{J}_{\alpha,\delta}(x) := \norm{F(x) - y^{\delta} }_2^2 + \alpha \norm{x}_2^2 \,. 
\end{equation} \end{enumerate} \end{proposition} Due to the above proposition, the original and the transformed problem recover the same solution, which thus has the same sparsity properties. Note that the operator $F$ is nonlinear even if $A$ is linear. However, using the transformed operator has the advantage that the resulting functional $\mathcal{J}_{\alpha,\delta}$ is differentiable. \begin{proposition} The operator $F$ and the functional $\mathcal{J}_{\alpha,\delta}$ defined in \eqref{def_F} and \eqref{def_Jad}, respectively, are continuously Fr{\'e}chet differentiable, with \begin{equation*} F'(x)h = A N'(x)h \,, \qquad \text{and} \qquad \mathcal{J}_{\alpha,\delta}'(x) h= \spr{2F '(x)^*(F(x)-y^{\delta}) + 2\alpha x,h} \,. \end{equation*} \end{proposition} \begin{proof} This is an immediate consequence of the definition of $\mathcal{J}_{\alpha,\delta}$ and the fact that $A$ is linear and $N$ is differentiable. \end{proof} Due to the above result, it is now possible to apply gradient-based (iterative) methods for minimizing the transformed functional $\mathcal{J}_{\alpha,\delta}$, and thus to compute a minimizer of the functional $\mathcal{T}_{\alpha,\delta}$, which itself is not differentiable. Unfortunately, the transformed functional $\mathcal{J}_{\alpha,\delta}$ is not twice differentiable, due to the fact that $N$ is not twice differentiable (at zero). This prohibits the use of second-order methods like Newton's method, which are known to be very efficient in terms of iteration numbers. 
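The identity underlying the equivalence stated above is $\norm{N(\tilde{x})}_1 = \norm{\tilde{x}}_2^2$, so that $\mathcal{T}_{\alpha,\delta}(N(\tilde{x})) = \mathcal{J}_{\alpha,\delta}(\tilde{x})$ for every $\tilde{x}$. This can be checked numerically in finite dimensions; in the following sketch, the matrix $A$, the data $y$, and the parameter $\alpha$ are synthetic placeholders:

```python
import numpy as np

# Finite-dimensional check (synthetic A, y, alpha) of the identity
# behind the equivalence: eta(t) = sgn(t) |t|^2 acts component-wise,
# and ||N(x)||_1 = ||x||_2^2, so T(N(x)) = J(x) for every x.

def N(x):
    return np.sign(x) * np.abs(x) ** 2       # Nemytskii operator N = N_{1,2}

def T(x, A, y, alpha):                       # original ell^1-penalized functional
    return np.sum((A @ x - y) ** 2) + alpha * np.sum(np.abs(x))

def J(x, A, y, alpha):                       # transformed ell^2-penalized functional
    return np.sum((A @ N(x) - y) ** 2) + alpha * np.sum(x ** 2)

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
y = rng.standard_normal(8)
x = rng.standard_normal(5)
```

In particular, any minimizer $\tilde{x}$ of $J$ yields the minimizer $N(\tilde{x})$ of $T$.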
Hence, we propose to approximate $N$ by a sequence of operators $N_\varepsilon$ which are twice continuously differentiable, and to minimize, instead of $\mathcal{J}_{\alpha,\delta}$, the functional \begin{equation}\label{def_Jade} \mathcal{J}_{\alpha,\delta}^\eps(x) := \norm{F_\varepsilon(x) - y^{\delta}}_2^2 + \alpha \norm{x}_2^2 \,, \end{equation} where we define the operator $F_\varepsilon$ by \begin{equation}\label{def_F_eps} F_\varepsilon \, : \, {\ell^2} \to {\ell^2} \,, \quad x \mapsto (A \circ N_\varepsilon)(x) \,, \end{equation} for a suitable approximation $N_\varepsilon$ of the operator $N$. This approximation is based on suitable approximations $\eta_\varepsilon$ of the function $\eta$, which we introduce in the following \begin{definition} For $\varepsilon > 0$ we define functions $\eta_\varepsilon : \R \to \R$ by \begin{equation}\label{def_eta_eps} \eta_\varepsilon(\tau) := \begin{cases} -\tau^2 -\tfrac{1}{3} \varepsilon^2 \,, &\tau \in (-\infty, -\varepsilon) \,, \\ \frac{1}{3\varepsilon}\tau^3+\varepsilon \tau \,, &\tau \in [-\varepsilon, \varepsilon] \,, \\ \tau^2+\tfrac{1}{3} \varepsilon^2 \,, &\tau \in (\varepsilon, \infty) \,. \end{cases} \end{equation} \end{definition} \begin{figure} \centering \includegraphics[scale=0.6]{pics/eta} \caption{Comparison of the transformation functions $\eta_\varepsilon$ and $\eta$.} \end{figure} Obviously, $\eta_\varepsilon \to \eta$ as $\varepsilon \to 0$ and furthermore, we get the following \begin{lemma}\label{lem_eta_eps_diff} The functions $\eta_\varepsilon$ defined by \eqref{def_eta_eps} are twice continuously differentiable. \end{lemma} \begin{proof} It follows from its definition that $\eta_\varepsilon$ is everywhere continuous and that \begin{equation*} \eta'_\varepsilon(\tau) := \begin{cases} -2\tau \,, &\tau \in (-\infty, -\varepsilon) \,, \\ \frac{1}{\varepsilon}\tau^2 + \varepsilon \,, &\tau \in [-\varepsilon, \varepsilon] \,, \\ 2 \tau\,, &\tau \in (\varepsilon, \infty) \,. 
\end{cases} \end{equation*} Again it follows that $\eta'_{\varepsilon}$ is everywhere continuous and that \begin{equation*} \eta''_\varepsilon(\tau) := \begin{cases} -2 \,, &\tau \in (-\infty, -\varepsilon) \,, \\ \frac{2}{\varepsilon}\tau \,, &\tau \in [-\varepsilon, \varepsilon] \,, \\ 2 \,, &\tau \in (\varepsilon, \infty) \,, \end{cases} \end{equation*} which is again continuous everywhere; this concludes the proof. \end{proof} We now use the functions $\eta_\varepsilon$ to build the operators $N_\varepsilon$ via the following \begin{definition} For all $\varepsilon > 0$ we define the operators \begin{equation}\label{def_N_eps} N_\varepsilon : {\ell^2} \to {\ell^2} \,, \qquad (x_k)_{k\in\mathbb{N}} \mapsto \kl{\eta_\varepsilon(x_k)}_{k \in \mathbb{N}} \,. \end{equation} \end{definition} Concerning the well-definedness and boundedness of $N_\varepsilon$, we have the following \begin{lemma}\label{lem_Neps_bounded} The operators $N_\varepsilon $ defined by \eqref{def_N_eps} satisfy \begin{equation} \norm{N_\varepsilon(x)}_2 \leq \norm{x}_2 \sqrt{\tfrac{16}{9}\varepsilon^2+ 2 \norm{x}_2^2} \,, \end{equation} and are therefore well-defined as operators from ${\ell^2} \to {\ell^2}$. \end{lemma} \begin{proof} Let $\varepsilon > 0$ be arbitrary but fixed and take $x = (x_k)_{k \in \mathbb{N}} \in {\ell^2}$. We have that \begin{equation*} \begin{split} \abs{\eta_\varepsilon(x_k)} &= \begin{cases} \abs{x_k}^2 + \tfrac{1}{3} \varepsilon^2 \,, & \abs{x_k} > \varepsilon \,, \\ \frac{1}{3 \varepsilon}\abs{x_k}^3 +\varepsilon\abs{x_k} \,, &\abs{x_k} \leq \varepsilon \,, \\ \end{cases} \\ \vspace{2pt} \\ & \leq \begin{cases} \abs{x_k}^2 + \tfrac{1}{3} \varepsilon \abs{x_k} \,, & \abs{x_k} > \varepsilon \,, \\ \frac{4}{3}\varepsilon\abs{x_k} \,, &\abs{x_k} \leq \varepsilon \,. 
\\ \end{cases} \end{split} \end{equation*} Therefore, we get that \begin{equation*} \begin{split} \norm{N_\varepsilon(x)}_2 ^2 & = \sum\limits_{k \in \mathbb{N}} \abs{\eta_\varepsilon(x_k)}^2 = \sum\limits_{\abs{x_k} \leq \varepsilon} \abs{\eta_\varepsilon(x_k)}^2 + \sum\limits_{\abs{x_k} > \varepsilon} \abs{\eta_\varepsilon(x_k)}^2 \\ & \leq \kl{\tfrac{4}{3} \varepsilon}^2 \sum\limits_{\abs{x_k} \leq \varepsilon} \abs{x_k}^2 + \sum\limits_{\abs{x_k} > \varepsilon} \kl{\abs{x_k}^2 + \tfrac{1}{3} \varepsilon \abs{x_k}}^2 \\ & \leq \kl{\tfrac{4}{3} \varepsilon}^2 \sum\limits_{\abs{x_k} \leq \varepsilon} \abs{x_k}^2 + 2 \sum\limits_{\abs{x_k} > \varepsilon} \abs{x_k}^4 + \tfrac{2}{9} \varepsilon^2 \sum\limits_{\abs{x_k} > \varepsilon} \abs{x_k}^2 \,, \end{split} \end{equation*} from which we derive that \begin{equation*} \begin{split} \norm{N_\varepsilon(x)}_2 ^2 & \leq \tfrac{16}{9} \varepsilon^2 \sum\limits_{k = 1}^\infty \abs{x_k}^2 + 2 \sum\limits_{k=1} ^\infty \abs{x_k}^4\ \\ &= \tfrac{16}{9} \varepsilon^2 \norm{x}_2^2 + 2 \norm{x}_4^4 \leq \kl{\tfrac{16}{9}\varepsilon^2+ 2 \norm{x}_2^2} \norm{x}_2^2 \,, \end{split} \end{equation*} which immediately yields the assertion. \end{proof} The operators $N_\varepsilon$ are also continuous, as we see in the following \begin{proposition}\label{prop_N_eps_cont} The operators $N_\varepsilon \, : \, {\ell^2} \to {\ell^2}$ defined by \eqref{def_N_eps} are continuous. \end{proposition} \begin{proof} Let $\varepsilon > 0$ and $x = (x_k)_{k\in\mathbb{N}} \in {\ell^2}$ be arbitrary but fixed, and consider a sequence $x^n = (x^n_k)_{k \in \mathbb{N}} \in {\ell^2}$ converging to $x$. It follows that the norm of $x^n$ is uniformly bounded, i.e., there exists a constant $c > 0$ such that $\norm{x^n} \leq c$ for all $n$, from which it also follows that $\abs{x^n_k} \leq c$ for all $k$ and $n$. Furthermore, since the function $\eta_\varepsilon$ is continuously differentiable, it follows that it is Lipschitz continuous on bounded sets. 
This implies that there exists a Lipschitz constant $L> 0$ such that \begin{equation} \abs{\eta_\varepsilon(x^n_k) - \eta_\varepsilon(x_k)} \leq L \abs{x^n_k - x_k } \,. \end{equation} Hence, we get that \begin{equation} \norm{ N_\varepsilon(x^n) - N_\varepsilon(x) }_2^2 = \sum\limits_{k=1}^\infty \abs{\eta_\varepsilon(x^n_k) - \eta_\varepsilon(x_k)}^2 \leq L^2 \sum\limits_{k=1}^\infty \abs{x^n_k - x_k }^2 = L^2 \norm{ x^n - x}_2^2 \,, \end{equation} and therefore, \begin{equation} \norm{ N_\varepsilon(x^n) - N_\varepsilon(x) }_2 \leq L \norm{ x^n - x}_2 \quad \to 0 \qquad \text{as} \quad n \to \infty \,, \end{equation} which shows the continuity of $N_\varepsilon$ and concludes the proof. \end{proof} By their construction, the operators $N_\varepsilon$ are also twice differentiable, as we see in \begin{proposition}\label{prop_N_eps_diff} The operators $N_\varepsilon \, : \, {\ell^2} \to {\ell^2} $ defined by \eqref{def_N_eps} are twice continuously Fr\'echet differentiable, with \begin{equation} N'_\varepsilon (x)h = \kl{ \eta_\varepsilon ' (x_k ) h_k }_{k \in \mathbb{N}} \,, \qquad \text{and} \qquad N''_\varepsilon (x)(h,w) =\kl{ \eta_\varepsilon '' (x_k ) h_k w_k }_{k \in \mathbb{N}} \,. \end{equation} \end{proposition} \begin{proof} This follows from the definition of $N_\varepsilon$ together with Lemma~\ref{lem_eta_eps_diff}. \end{proof} The approximation properties of the operators $N_\varepsilon$ are studied in the following \begin{proposition}\label{prop_N_approx} Let $N$ and $N_\varepsilon$ be defined by \eqref{def_N} and \eqref{def_N_eps}, respectively. Then it holds that \begin{equation} \norm{N(x) - N_\varepsilon(x)}_2 \leq \tfrac{7}{3} \varepsilon \norm{x}_2 \,. \end{equation} \end{proposition} \begin{proof} Let $\varepsilon > 0$ and $x \in {\ell^2}$ be arbitrary but fixed. 
Then it holds that \begin{equation*} \eta_\varepsilon(x_k) - \eta(x_k) = \begin{cases} - \tfrac{1}{3} \varepsilon^2 \,, & x_k \in (-\infty,-\varepsilon) \,, \\ \tfrac{1}{3\varepsilon}x_k^3 + \varepsilon x_k + x_k^2 \,, & x_k \in [-\varepsilon,0] \,, \\ \tfrac{1}{3\varepsilon}x_k^3 + \varepsilon x_k - x_k^2 \,, & x_k \in [0,\varepsilon] \,, \\ \tfrac{1}{3}\varepsilon^2 \,, & x_k \in (\varepsilon,\infty) \,, \end{cases} \end{equation*} from which it follows that \begin{equation*} \begin{split} \abs{\eta_\varepsilon(x_k) - \eta(x_k) } &= \begin{cases} \tfrac{1}{3} \varepsilon^2 \,, & \abs{x_k} > \varepsilon \,, \\ \abs{ \tfrac{1}{3\varepsilon} \abs{x_k}^3 + \varepsilon \abs{x_k} - \abs{x_k}^2 } \,, & \abs{x_k} \leq \varepsilon\,. \end{cases} \\ \vspace{2pt} \\ &\leq \begin{cases} \tfrac{1}{3} \varepsilon \abs{x_k} \,, & \abs{x_k} > \varepsilon \,, \\ \tfrac{1}{3} \varepsilon \abs{x_k} + \varepsilon \abs{x_k} + \varepsilon \abs{x_k} \,, & \abs{x_k} \leq \varepsilon\,, \end{cases} \end{split} \end{equation*} and therefore \begin{equation*} \abs{\eta_\varepsilon(x_k) - \eta(x_k) } \leq \tfrac{7}{3} \varepsilon \abs{x_k} \,. \end{equation*} This now implies that \begin{equation*} \begin{split} \norm{N_\varepsilon(x) - N(x) }_2^2 = \sum\limits_{k=1}^\infty \abs{\eta_\varepsilon(x_k) - \eta(x_k)}^2 \leq \kl{\tfrac{7}{3}\varepsilon}^2 \sum\limits_{k=1}^\infty \abs{x_k}^2 = \kl{\tfrac{7}{3}\varepsilon}^2 \norm{x}_2^2 \,, \end{split} \end{equation*} from which the statement immediately follows. \end{proof} The above result immediately implies an approximation result for the operators $F_\varepsilon$. \begin{corollary}\label{cor_F_approx} Let $A \, : \, {\ell^2} \to {\ell^2}$ be a bounded and linear operator and let $F$ and $F_\varepsilon$ be defined by \eqref{def_F} and \eqref{def_F_eps}, respectively. Then it holds that \begin{equation} \norm{F(x) - F_\varepsilon(x)}_2 \leq \tfrac{7}{3} \varepsilon \norm{A} \norm{x}_2 \,. 
\end{equation} \end{corollary} \begin{proof} By the definition of $F$ and $F_\varepsilon$, we have that \begin{equation*} \begin{split} \norm{F(x) - F_\varepsilon(x)}_2 = \norm{(A \circ N)(x) - (A \circ N_\varepsilon)(x) }_2 \leq \norm{A} \norm{N(x) - N_\varepsilon(x)}_2 \,, \end{split} \end{equation*} which, together with Proposition~\ref{prop_N_approx}, now yields the assertion. \end{proof} Other important properties of the operators $F$ and $F_\varepsilon$ are collected in the following \begin{proposition}\label{prop_Fe_comp_closed} Let $A : {\ell^2} \to {\ell^2} $ be a bounded linear operator. Then the operators $F$ and $F_\varepsilon$ defined by \eqref{def_F} and \eqref{def_F_eps}, respectively, are continuous and weakly sequentially closed. \end{proposition} \begin{proof} Since $A$ and, due to Proposition~\ref{prop_N_eps_cont}, $N_\varepsilon$ are continuous, by its definition also $F_\varepsilon$ is continuous. In order to show the weak sequential closedness of $F_\varepsilon$, note that since its domain is all of ${\ell^2}$, it suffices to show that $F_\varepsilon$ is weakly continuous. For this, take an arbitrary sequence $x^n \in {\ell^2}$ converging weakly to some element $x \in {\ell^2}$. Since in ${\ell^2}$ a sequence converges weakly if and only if it converges componentwise and its norm is bounded \cite{Conway_1994}, it follows from the continuity and boundedness of $N_\varepsilon$ (Lemma~\ref{lem_Neps_bounded} and Proposition~\ref{prop_N_eps_cont}) that $N_\varepsilon(x^n)$ converges weakly to $N_\varepsilon(x)$. Now, as a bounded linear operator, $A$ is also weakly sequentially continuous. Hence, since $F_\varepsilon = A \circ N_\varepsilon$, it follows that $F_\varepsilon(x^n)$ converges weakly to $F_\varepsilon(x)$, which establishes its weak sequential continuity and consequently also its weak sequential closedness. For the operator $F$, these results have already been shown in \cite{Ramlau_Zarzer_2012}. 
However, noting that Lemma~\ref{lem_Neps_bounded} and Proposition~\ref{prop_N_eps_cont} also hold for the limit case $\varepsilon = 0$, they also follow in the same way as above. \end{proof} Furthermore, the differentiability of $N_\varepsilon$ immediately translates into the following \begin{proposition} \label{derivatives} The operators $F_\varepsilon$ and thus the functionals $\mathcal{J}_{\alpha,\delta}^\eps$ defined in \eqref{def_F_eps} and \eqref{def_Jade}, respectively, are twice continuously Fr{\'e}chet differentiable, where \begin{equation*} \begin{split} &F_\varepsilon'(x)h = A N_\varepsilon'(x)h \,, \qquad F_\varepsilon''(x)(h,w) = A N_\varepsilon''(x)(h,w) \,, \\ &\mathcal{J}_{\alpha,\delta}^\eps\, '(x)h = 2\spr{F_\varepsilon '(x)^*(F_\varepsilon(x)-y^{\delta}) + \alpha x ,h} \,, \\ &\mathcal{J}_{\alpha,\delta}^\eps\,''(x)(h,w) = 2 \spr{F_\varepsilon(x) - y^{\delta}, F_\varepsilon''(x)(h,w) } + 2\spr{F_\varepsilon '(x)^*F_\varepsilon'(x)w + \alpha w ,h} \,. \end{split} \end{equation*} \end{proposition} \begin{proof} This follows from the definition of $F_\varepsilon$ and $\mathcal{J}_{\alpha,\delta}^\eps$ together with Proposition~\ref{prop_N_eps_diff}. \end{proof} We now consider the problem of minimizing the Tikhonov functional $\mathcal{J}_{\alpha,\delta}^\eps$, whose minimizers we denote by $x_{\alpha,\eps}^\delta$. Due to the above results, the classical analysis of Tikhonov regularization for nonlinear operators is applicable (see for example \cite{Engl_Hanke_Neubauer_1996, Engl_Ramlau_2015}), and we immediately get the following \begin{theorem} Let $A\,:\, {\ell^2} \to {\ell^2}$ be a bounded, linear operator and let $F_\varepsilon$ be defined by \eqref{def_F_eps}. Then for each $\alpha > 0$, a minimizer $x_{\alpha,\eps}^\delta$ of the functional $\mathcal{J}_{\alpha,\delta}^\eps$ defined in \eqref{def_Jade} exists. Furthermore, the minimization of $\mathcal{J}_{\alpha,\delta}^\eps$ is stable under perturbations of $y^{\delta}$.
\end{theorem} \begin{proof} Since, by Proposition~\ref{prop_Fe_comp_closed}, the operator $F_\varepsilon$ is continuous and weakly sequentially closed, this follows immediately from \cite[Theorem~10.2]{Engl_Hanke_Neubauer_1996}. \end{proof} Next, we are interested in the behaviour of the minimizers $x_{\alpha,\eps}^\delta$ as $\varepsilon \to 0$. Given a suitable coupling of the noise level $\delta$ and the parameter $\varepsilon$, we get the following \begin{theorem} \label{conv} Assume that $F(x) = y$ has a solution and let $\alpha(\delta)$ and $\varepsilon(\delta)$ satisfy \begin{equation}\label{cond_alpha_delta_eta} \alpha(\delta) \to 0 \,, \quad \varepsilon(\delta) \to 0 \,, \quad \frac{\delta^2}{\alpha(\delta)} \to 0 \,, \quad \frac{\varepsilon(\delta)^2}{\alpha(\delta)} \to 0 \,, \quad \text{as} \quad \delta \to 0 \,. \end{equation} Then $x_{\alpha(\delta),\varepsilon(\delta)}^\delta$ has a convergent subsequence. Moreover, the limit of every convergent subsequence is a minimum-norm solution of $F(x) = y$. Furthermore, if the minimum-norm solution $x^\dagger$ is unique, then \begin{equation} \lim\limits_{\delta \to 0} x_{\alpha(\delta),\varepsilon(\delta)}^\delta \, = \, x^\dagger \,. \end{equation} \end{theorem} \begin{proof} The proof of this theorem follows the same lines as the classical proof of convergence of Tikhonov regularization \cite{Engl_Hanke_Neubauer_1996} and the proof for the case that the operator is approximated by a sequence of finite-dimensional operators \cite{Neubauer_1989, Poeschl_Resmerita_Scherzer_2010} (where a slightly stronger condition than the one provided by Proposition~\ref{prop_N_approx} was used). Hence, we here only indicate the main differences in the proof. Note first that due to Proposition~\ref{prop_N_approx}, it follows that \begin{equation} \begin{split} \norm{F_\varepsilon(x) - F(x) }_2 \leq \norm{A}\norm{N_\varepsilon(x) - N(x) }_2 \leq \tfrac{7}{3} \varepsilon \norm{A}\norm{x}_2 \,.
\end{split} \end{equation} This, together with $x_{\alpha,\eps}^\delta$ being a minimizer of $\mathcal{J}_{\alpha,\delta}^\eps$, implies that \begin{equation}\label{eq_helper_3} \begin{split} \norm{F_\varepsilon(x_{\alpha,\eps}^\delta) - y^{\delta} }_2^2 + \alpha \norm{x_{\alpha,\eps}^\delta}_2^2 &\leq \norm{F_\varepsilon(x^\dagger) - y^{\delta} }_2^2 + \alpha \norm{x^\dagger}_2^2 \\ &\leq \kl{\tfrac{7}{3} \norm{A}\norm{x^\dagger}_2 \varepsilon + \delta}^2 + \alpha \norm{x^\dagger}_2^2 \,. \end{split} \end{equation} Together with \eqref{cond_alpha_delta_eta}, this implies the boundedness of $x_{\alpha,\eps}^\delta$ and \begin{equation*} \lim\limits_{\delta \to 0} \norm{F_\varepsilon(x_{\alpha,\eps}^\delta) - y^{\delta} }_2 = 0 \,. \end{equation*} Hence, since \begin{equation*} \begin{split} \norm{F(x_{\alpha,\eps}^\delta) - y }_2 &\leq \norm{F_\varepsilon(x_{\alpha,\eps}^\delta) - y^{\delta} }_2 + \norm{F_\varepsilon(x_{\alpha,\eps}^\delta) - F(x_{\alpha,\eps}^\delta) }_2 + \norm{y - y^{\delta}}_2 \\ &\leq \norm{F_\varepsilon(x_{\alpha,\eps}^\delta) - y^{\delta} }_2 + \delta + \tfrac{7}{3}\norm{A}\norm{x_{\alpha,\eps}^\delta}_2 \varepsilon \quad \underset{\delta \to 0} {\longrightarrow} \quad 0 \,, \end{split} \end{equation*} the weak sequential closedness of $F$ implies the convergence of a subsequence of $x_{\alpha,\eps}^\delta$ to a solution of $F(x) = y$. The remainder of the proof then follows analogously to the one of \cite[Theorem~10.3]{Engl_Hanke_Neubauer_1996} and is therefore omitted here. \end{proof} The above result shows that minimizing $\mathcal{J}_{\alpha,\delta}^\eps$ instead of $\mathcal{J}_{\alpha,\delta}$ to approximate the solution of $F(x) = y$ makes sense if $\varepsilon$ and the noise level $\delta$ are suitably coupled, for example via $\varepsilon \sim \delta$.
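The approximation bound derived above can also be checked numerically. The following Python sketch is purely illustrative (the function names are ours): it uses $\eta(x) = x\abs{x}$ together with the piecewise $C^2$ smoothing $\eta_\varepsilon$ recovered from the difference formula in the proof of Proposition~\ref{prop_N_approx}, and verifies the componentwise estimate $\abs{\eta_\varepsilon(x) - \eta(x)} \leq \tfrac{7}{3}\varepsilon\abs{x}$ on random inputs.

```python
import numpy as np

def eta(x):
    # eta(x) = x |x|, the componentwise map behind N
    return x * np.abs(x)

def eta_eps(x, eps):
    # Smoothing of eta recovered from the difference formula:
    # eta_eps(x) = x^3/(3 eps) + eps x         for |x| <= eps,
    #            = sign(x) (x^2 + eps^2 / 3)   for |x| >  eps.
    inner = x**3 / (3.0 * eps) + eps * x
    outer = np.sign(x) * (x**2 + eps**2 / 3.0)
    return np.where(np.abs(x) <= eps, inner, outer)

def check_bound(eps, n=10000, scale=5.0, seed=0):
    # verify |eta_eps(x) - eta(x)| <= (7/3) eps |x| on random samples
    rng = np.random.default_rng(seed)
    x = rng.uniform(-scale, scale, n)
    lhs = np.abs(eta_eps(x, eps) - eta(x))
    rhs = (7.0 / 3.0) * eps * np.abs(x)
    return bool(np.all(lhs <= rhs + 1e-12))
```

Running `check_bound` for several values of $\varepsilon$ and sampling ranges confirms the estimate, both inside and outside the smoothing interval $[-\varepsilon,\varepsilon]$.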
Furthermore, the assumption that $F(x) = y$ is solvable is, for example, satisfied if $Ax = y$ has a solution belonging not only to ${\ell^2}$ but also to ${\ell^1}$, i.e., is sparse. \begin{remark} Following the lines of the proofs of classical Tikhonov regularization results, it is also possible to derive convergence rate results under standard assumptions. Furthermore, the above analysis also holds for nonlinear operators $A$ which are Lipschitz continuous, since then Corollary~\ref{cor_F_approx} also holds. \end{remark} \section{Minimization methods for the Tikhonov functional} \label{section_minimization} In the previous section, we established existence, stability, and convergence of the minimizers of $\mathcal{J}_{\alpha,\delta}$ and $\mathcal{J}_{\alpha,\delta}^\eps$ under standard assumptions. However, there still remains the question of how to actually compute those minimizers in an efficient way. One way to do this is to interpret the minimization of $\mathcal{J}_{\alpha,\delta}$ and $\mathcal{J}_{\alpha,\delta}^\eps$ as Tikhonov regularization for the nonlinear operator equations $F(x) = y$ and $F_\varepsilon(x) = y$, respectively, and to use iterative regularization methods for their solution. Since both operators $F$ and $F_\varepsilon$ are continuously Fr\'echet differentiable, iterative regularization methods like Landweber iteration \cite{Kaltenbacher_Neubauer_Scherzer_2008}, TIGRA \cite{Ramlau_2003}, the Levenberg-Marquardt method \cite{Hanke_1997,Jin_2010} or iteratively regularized Gauss-Newton \cite{Blaschke_Neubauer_Scherzer_1997,Jin_Tautenhahn_2009} are applicable. Of course, as all of those methods only require a once differentiable operator, it makes sense in terms of accuracy to apply them to the operator $F$ rather than to the approximated operator $F_\varepsilon$. Another way is to use standard iterative optimization methods for the (well-posed) problem of minimizing $\mathcal{J}_{\alpha,\delta}$ or $\mathcal{J}_{\alpha,\delta}^\eps$.
In particular, since we have derived in the previous section that $\mathcal{J}_{\alpha,\delta}^\eps$ is twice continuously Fr\'echet differentiable, efficient second-order methods such as Newton's method are applicable for its minimization. In this section, we introduce and discuss some details of the minimization methods used to obtain the numerical results presented in Section~\ref{sect_numerics} below. \subsection{Gradient descent, ISTA and FISTA} We have seen that the Tikhonov functional $\mathcal{J}_{\alpha,\delta}$ defined in \eqref{def_Jad} is continuously Fr\'echet differentiable. Hence, it is possible to apply gradient descent for its minimization. For this, note first that since $N'(x)$ is a linear operator, its action can be written as \begin{equation}\label{eq_N_G} N'(x)h = G(x)h \,, \end{equation} where $G(x)$ is the infinite dimensional `matrix' representation of $N'(x)$ given by \begin{equation*} G(x) := \text{diag}(2\abs{x_k})_{k \in \mathbb{N}} \,, \end{equation*} which is called the \emph{gradient} of $N$. Similarly, there is an (infinite-dimensional) matrix representation of $\mathcal{J}_{\alpha,\delta}'(x)$, i.e., the gradient $\nabla \mathcal{J}_{\alpha,\delta}(x)$ of $\mathcal{J}_{\alpha,\delta}(x)$, which is given by \begin{equation*} \nabla \mathcal{J}_{\alpha,\delta} (x) := 2 G(x) A^T \kl{AN(x)-y^\delta} + 2 \alpha x \,, \end{equation*} where, with a small abuse of notation, $A$ denotes the (infinite-dimensional) matrix representation of the linear operator $A$, and $A^T$ denotes its transpose. Using the above representations, we can now write the gradient descent algorithm for minimizing $\mathcal{J}_{\alpha,\delta}$ in the well-known form \begin{equation}\label{gradient_descent} x_{n+1}^\delta = x_n^\delta - \omega_n \nabla \mathcal{J}_{\alpha,\delta} (x_n^\delta ) \, , \end{equation} where $\omega_n$ is a sequence of stepsizes.
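In a finite-dimensional setting, the iteration \eqref{gradient_descent} is straightforward to implement. The following Python sketch is an illustration under our own naming conventions; it uses the componentwise transformation $x \mapsto x\abs{x}$ (consistent with the gradient $G(x) = \text{diag}(2\abs{x_k})$ above) and a simple halving line search as a stand-in for more refined stepsize rules.

```python
import numpy as np

def N(x):
    # Nemytskii-type transformation: componentwise x |x|
    return x * np.abs(x)

def G(x):
    # 'gradient' of N: diag(2 |x_k|), stored as a vector
    return 2.0 * np.abs(x)

def J(A, x, y_delta, alpha):
    # Tikhonov functional J(x) = ||A N(x) - y^delta||^2 + alpha ||x||^2
    r = A @ N(x) - y_delta
    return r @ r + alpha * (x @ x)

def grad_J(A, x, y_delta, alpha):
    # nabla J(x) = 2 G(x) A^T (A N(x) - y^delta) + 2 alpha x
    return 2.0 * G(x) * (A.T @ (A @ N(x) - y_delta)) + 2.0 * alpha * x

def gradient_descent(A, y_delta, alpha, x0, n_iter=100, omega0=1.0):
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        g = grad_J(A, x, y_delta, alpha)
        omega = omega0
        # halve the stepsize until the functional decreases
        while omega > 1e-12 and J(A, x - omega * g, y_delta, alpha) >= J(A, x, y_delta, alpha):
            omega *= 0.5
        x_new = x - omega * g
        if J(A, x_new, y_delta, alpha) < J(A, x, y_delta, alpha):
            x = x_new
        else:
            break  # no further decrease found
    return x
```

The line search guarantees monotone decrease of the functional value, mirroring the behaviour of stepsize rules of Armijo type.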
If the stepsizes are chosen in a suitable way, for example via the Armijo rule \cite{Hinze_Ulbrich_Ulbrich_2009}, the iterates converge to a stationary point of $\mathcal{J}_{\alpha,\delta}$ (see e.g.\ \cite[Theorem 2.2]{Hinze_Ulbrich_Ulbrich_2009}). In order to stop the iteration, we employ the well-known \emph{discrepancy principle}, i.e., the iteration is terminated with the first index $n_* = n_*(\delta,y^{\delta})$ for which \begin{equation}\label{discrepancy_nonlinear} \norm{F(x_{n_*}^\delta )-y^\delta}_2 \leq \tau \delta \,, \end{equation} where $\tau>1$ is fixed. Note that since the Tikhonov functional may have several (local and global) minima, convergence to a global minimum is only guaranteed if a sufficiently good initial guess is chosen. The (infinite-dimensional) matrix representations introduced above can also be used to rewrite ISTA \eqref{ISTA} into the following form \begin{equation*} x_{n+1}^\delta = S_{\alpha \omega} \kl{x_n^\delta - \omega \, 2 \, G(x_n^\delta ) A^T \kl{AN(x_n^\delta )-y^\delta}} \, , \end{equation*} which immediately also translates to a similar rewriting of FISTA defined in \eqref{FISTA}. \subsection{The Levenberg-Marquardt method} It is well-known that gradient-based methods such as gradient descent or ISTA converge rather slowly. Although it is possible to speed them up by using suitable stepsizes (see for example \cite{Saxenhuber_2016,Neubauer_2017_2}) or acceleration schemes like FISTA, it is often advantageous to use second-order methods instead. One such method is the Levenberg-Marquardt method \cite{Hanke_1997,Jin_2010}, which is given by \begin{equation}\label{Levenberg_Marquardt} x^\delta_{n+1}=x^\delta_n + \kl{ F'(x^\delta_n)^* F'(x^\delta_n)+\alpha_n I }^{-1}F'(x^\delta_n)^*\kl{ y^\delta - F(x^\delta_n) }. \end{equation} Although this is a second-order method, it only requires the operator $F$ to be once continuously Fr\'echet differentiable.
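In finite dimensions, the update \eqref{Levenberg_Marquardt} takes only a few lines of code. The Python sketch below is illustrative (names and the concrete geometric sequence $\alpha_n$ are our own choices); it uses $F(x) = AN(x)$ with the componentwise transformation $N(x) = (x_k\abs{x_k})_k$, so that $F'(x) = A\,\mathrm{diag}(2\abs{x_k})$.

```python
import numpy as np

def N(x):
    # componentwise transformation x |x|
    return x * np.abs(x)

def levenberg_marquardt_step(A, x, y_delta, alpha_n):
    # x_{n+1} = x_n + (F'(x)^* F'(x) + alpha_n I)^{-1} F'(x)^* (y^delta - F(x)),
    # with F(x) = A N(x) and F'(x) = A G(x), G(x) = diag(2 |x_k|)
    G = np.diag(2.0 * np.abs(x))
    Jac = A @ G
    H = Jac.T @ Jac + alpha_n * np.eye(x.size)
    rhs = Jac.T @ (y_delta - A @ N(x))
    return x + np.linalg.solve(H, rhs)

def levenberg_marquardt(A, y_delta, x0, alpha0=1.0, q=0.6, n_iter=20):
    # alpha_n = alpha0 * q^n, a sequence tending to zero
    x = x0.astype(float).copy()
    for n in range(n_iter):
        x = levenberg_marquardt_step(A, x, y_delta, alpha0 * q**n)
    return x
```

On well-behaved small problems, a handful of such steps already drives the residual close to zero, which reflects the fast convergence of the method observed in practice.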
Using again the (infinite-dimensional) matrix representation of $N'(x)h$ from \eqref{eq_N_G}, the method can be rewritten into the following form \begin{equation*} x_{n+1}^\delta = x_n^\delta + \kl{ G(x_n^\delta) A^T A G(x_n^\delta) + \alpha_n I }^{-1} G(x_n^\delta) A^T(y^\delta - F(x_n^\delta)) \, . \end{equation*} In order to obtain convergence of this method, one needs, among other things, a suitably chosen sequence $\alpha_n$ converging to $0$ as well as a sufficiently good initial guess \cite{Hanke_1997}. As a stopping rule, one usually also employs the discrepancy principle \eqref{discrepancy_nonlinear}. The Levenberg-Marquardt method typically requires only very few iterations to satisfy the discrepancy principle. However, in each iteration step the linear operator $\kl{ F'(x^\delta_n)^* F'(x^\delta_n)+\alpha_n I }$ has to be inverted, which might be costly for some applications. This can be circumvented, though, by approximating this inversion with a number of iterations of the conjugate gradient method. It is possible to add an additional regularization term to the Levenberg-Marquardt method, thereby ending up with the so-called \emph{iteratively-regularized Gauss-Newton method} \cite{Blaschke_Neubauer_Scherzer_1997,Jin_Tautenhahn_2009}. While typically behaving very similarly in practice, this method can be proven to converge under slightly weaker assumptions than the Levenberg-Marquardt method. \subsection{Newton's method}\label{sect_Newton} In contrast to $\mathcal{J}_{\alpha,\delta}$, the functional $\mathcal{J}_{\alpha,\delta}^\eps$ is twice continuously Fr\'echet differentiable. The information contained in this second derivative can be used to design efficient methods for its minimization. One such method, based on Newton's method, is considered here.
Note that the first-order optimality condition for minimizing $\mathcal{J}_{\alpha,\delta}^\eps$ is given by \begin{equation}\label{optimality_condition} \mathcal{J}_{\alpha,\delta}^\eps \,'(x)h= 0 \qquad \forall\, h \in {\ell^2} \,. \end{equation} Using a first-order Taylor approximation in the above equation yields \begin{equation*} \mathcal{J}_{\alpha,\delta}^\eps\,' (x + \tau)(h) \approx \mathcal{J}_{\alpha,\delta}^\eps\,' (x)h + \mathcal{J}_{\alpha,\delta}^\eps\,'' (x)(\tau,h) \qquad \forall \, h \in {\ell^2} \,, \end{equation*} which, for the special choice $x = x_n$ and $\tau = x_{n+1}-x_n$, leads to \begin{equation}\label{newton} \mathcal{J}_{\alpha,\delta}^\eps\,' (x_n)(h) + \mathcal{J}_{\alpha,\delta}^\eps\,''(x_n)(x_{n+1}-x_n,h) = 0 \, \qquad \forall \, h \in \ell^2 \,. \end{equation} This implicitly defines an iterative procedure, which is nothing other than Newton's method applied to the optimality condition \eqref{optimality_condition}. Since $\mathcal{J}_{\alpha,\delta}^\eps\,''$ is continuously invertible around the global minimizer, this method is (locally) well-defined and q-superlinearly convergent (see for example \cite[Corollary~2.1]{Hinze_Ulbrich_Ulbrich_2009}). We can again use an (infinite-dimensional) matrix representation to rewrite this iterative procedure into a more familiar form. For this, we first define the `matrices' \begin{equation} G_\varepsilon(x) := \text{diag}(\eta_\varepsilon' (x_k) )_{k \in \mathbb{N}} \,, \qquad H_\varepsilon(x,w) := \text{diag}(\eta_\varepsilon'' (x_k) w_k )_{k \in \mathbb{N}} \,, \end{equation} which correspond to the gradient and the Hessian matrix of $N_\varepsilon(x)$, and use this to write \begin{equation} N_\varepsilon'(x) h = G_\varepsilon(x) h \,, \qquad N_\varepsilon''(x) (w,h) = H_\varepsilon(x,w) h \,.
\end{equation} This allows the following matrix representations of the functionals $\mathcal{J}_{\alpha,\delta}^\eps \,'(x)$ and $\mathcal{J}_{\alpha,\delta}^\eps\,''(x)$: \begin{equation*} \nabla \mathcal{J}_{\alpha,\delta}^\eps (x) := 2 G_\varepsilon (x) A^T \kl{AN_\varepsilon (x) - y^\delta } + 2 \alpha x \,, \end{equation*} \begin{equation*} \nabla^2 \mathcal{J}_{\alpha,\delta}^\eps (x) := 2 H_\varepsilon \kl{x,A^T \kl{AN_\varepsilon (x) - y^\delta } } + 2 G_\varepsilon(x) A^T A G_\varepsilon(x) + 2 \alpha I \,, \end{equation*} where $I$ denotes the identity matrix, and $\nabla \mathcal{J}_{\alpha,\delta}^\eps (x)$ and $\nabla^2 \mathcal{J}_{\alpha,\delta}^\eps(x)$ can be seen as the gradient and the Hessian matrix of the functional $\mathcal{J}_{\alpha,\delta}^\eps$, respectively. Using these representations, the iterative procedure \eqref{newton} can be rewritten into the more familiar form \begin{equation*} \nabla \mathcal{J}_{\alpha,\delta}^\eps (x_n) + \nabla^2 \mathcal{J}_{\alpha,\delta}^\eps (x_n) (x_{n+1}-x_n) = 0 \,, \end{equation*} which is an infinite-dimensional matrix-vector system for the update $(x_{n+1} - x_n)$. \section{Numerical Examples}\label{sect_numerics} In this section, we demonstrate the usefulness of our proposed approximation approach on a numerical example problem based on \emph{Computerized Tomography (CT)}. In particular, we focus on how the Newton approach for the minimization of $\mathcal{J}_{\alpha,\delta}^\eps$ introduced in Section~\ref{sect_Newton} above performs in comparison to the other methods presented in Section~\ref{section_minimization}. In the medical imaging problem of CT, one aims to reconstruct the density function $f$ inside an object from measurements of the intensity loss of an X-ray beam sent through it.
In the 2D case, for example if one scans a cross-section of the human body, the relationship between the intensity $I_0$ of the beam at the emitter position and the intensity $I_L$ at the detector position is given by \cite{Natterer_2001} \begin{equation}\label{tomography_equation} \log I_L(s,w) - \log I_0(s,w) = - \int_\R f(sw+tw^\perp ) \, dt \,. \end{equation} Thus, if one defines the well-known \emph{Radon transform} operator \begin{equation*} Rf(s,w) := \int_\R f(sw+tw^\perp ) \, dt \,, \end{equation*} the reconstruction problem \eqref{tomography_equation} can be written in the standard form \begin{equation*} R f = g \,. \end{equation*} Expressing $f$ in terms of some basis or frame, and noting that typically one considers objects whose density is equal to $0$ on large subregions, the above problem precisely fits into the framework of ${\ell^1}$ sparsity regularization considered in this paper. \subsection{Discretization and Implementation} In order to obtain a discretized version of problem \eqref{tomography_equation}, we make use of the toolbox AIR TOOLS II by Hansen and Jorgensen \cite{Hansen_Jorgensen_2017}. Therein, the density function $f$ is considered as a piecewise constant function on an $m\times m$ pixel grid (see Figure~\ref{fig_allmethods} for examples). With this, equation \eqref{tomography_equation} can be written in the discretized form \begin{equation}\label{eq_discr} y_i := -\kl{\log I_L^{(i)} - \log I_0^{(i)} } = \sum_{j=1}^{m^2} a_{ij}x_j \,, \end{equation} where $x_j$ denotes the value of $f$ at the $j$-th pixel, $ I_0^{(i)}$ and $I_L^{(i)}$ denote the emitted and detected intensity of the $i$-th ray, respectively, and $a_{ij}$ denotes the length of the path the $i$-th ray travels within the $j$-th pixel cell. Note that since any given ray only travels through relatively few cells, most of the coefficients $a_{ij}$ are equal to $0$ and thus the matrix $A$ is sparse.
Collecting the coefficients $a_{ij}$ into a matrix $A$, equation \eqref{eq_discr} can be written as a matrix-vector equation of the form \begin{equation*} Ax = y \, . \end{equation*} Specifying all required parameters as well as the exact solution which one wants to reconstruct, the toolbox provides both the matrix $A$ and the right-hand side vector $y$. For our purposes, we used the toolbox function \texttt{paralleltomo}, creating a parallel beam tomography problem with (the suggested default values of) $180$ angles and $70$ parallel beams for each of them. For the number of pixels we used $m^2 = 50^2$, which altogether leads to the dimension $12600 \times 2500$ for the matrix $A$. The exact solution (the Shepp-Logan phantom) is depicted in Figure~\ref{fig_allmethods}. In order to obtain noisy data, we used $y^\delta := y+\bar{\delta} \norm{y}_2 r$, where $r$ is a randomly generated vector of unit norm, and $\bar{\delta}$ denotes the relative noise level. The implementation of the methods introduced in Section~\ref{section_minimization} was done in a straightforward way by using their infinite-dimensional matrix representations, now with finite-dimensional matrices. The iterations were stopped using the discrepancy principle \eqref{discrepancy_nonlinear} with the choice $\tau = 1.1$ for all methods. For the approximation parameter $\varepsilon$ in the definition of $\mathcal{J}_{\alpha,\delta}^\eps$, we have used the choice $\varepsilon= 10^{-4} \delta$, which conforms to the theory developed above. The stepsize $\omega$ in ISTA and FISTA was chosen as a constant based on the norm of $A$, and for the gradient descent method \eqref{gradient_descent}, the stepsizes $\omega_n$ were chosen via the Armijo rule. In the Levenberg-Marquardt method \eqref{Levenberg_Marquardt}, we chose $\alpha_n = 0.6^n \delta$, which is a sequence tending to $0$ in accordance with the convergence theory.
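As an illustration of the implementation of the Newton approach, the gradient and Hessian representations $\nabla \mathcal{J}_{\alpha,\delta}^\eps$ and $\nabla^2 \mathcal{J}_{\alpha,\delta}^\eps$ from Section~\ref{sect_Newton} can be coded directly. The Python sketch below uses our own naming; the derivatives $\eta_\varepsilon'$ and $\eta_\varepsilon''$ are those of the piecewise smoothing of $x\mapsto x\abs{x}$ used in the analysis, and the analytic gradient can be cross-checked against finite differences on a small dense test problem.

```python
import numpy as np

def eta_eps(x, eps):
    # piecewise C^2 smoothing of x |x|
    return np.where(np.abs(x) <= eps, x**3 / (3.0 * eps) + eps * x,
                    np.sign(x) * (x**2 + eps**2 / 3.0))

def deta_eps(x, eps):
    # eta_eps'
    return np.where(np.abs(x) <= eps, x**2 / eps + eps, 2.0 * np.abs(x))

def ddeta_eps(x, eps):
    # eta_eps''
    return np.where(np.abs(x) <= eps, 2.0 * x / eps, 2.0 * np.sign(x))

def J_eps(A, x, y_delta, alpha, eps):
    r = A @ eta_eps(x, eps) - y_delta
    return r @ r + alpha * (x @ x)

def grad_J_eps(A, x, y_delta, alpha, eps):
    # nabla J^eps(x) = 2 G_eps(x) A^T (A N_eps(x) - y^delta) + 2 alpha x
    return 2.0 * deta_eps(x, eps) * (A.T @ (A @ eta_eps(x, eps) - y_delta)) \
        + 2.0 * alpha * x

def hess_J_eps(A, x, y_delta, alpha, eps):
    # nabla^2 J^eps(x) = 2 H_eps(x, A^T (A N_eps(x) - y^delta))
    #                    + 2 G_eps(x) A^T A G_eps(x) + 2 alpha I
    Ge = np.diag(deta_eps(x, eps))
    He = np.diag(ddeta_eps(x, eps) * (A.T @ (A @ eta_eps(x, eps) - y_delta)))
    return 2.0 * He + 2.0 * Ge @ A.T @ A @ Ge + 2.0 * alpha * np.eye(x.size)

def newton_step(A, x, y_delta, alpha, eps):
    # solve nabla^2 J^eps(x) d = - nabla J^eps(x) and update x
    return x - np.linalg.solve(hess_J_eps(A, x, y_delta, alpha, eps),
                               grad_J_eps(A, x, y_delta, alpha, eps))
```

Since $\mathcal{J}_{\alpha,\delta}^\eps$ is smooth, the analytic gradient agrees with central finite differences up to discretization error, which provides a cheap sanity check of the implementation.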
All computations were carried out in Matlab on a desktop computer with an Intel Xeon E5-1650 processor at 3.20 GHz and 16 GB RAM. \subsection{Numerical Results} In this section, we present the results of applying the iterative methods introduced in Section~\ref{section_minimization} to the tomography problem described above. In the following, we present reconstruction results for different noise levels $\bar{\delta}$, which is directly related to the signal-to-noise ratio (SNR) by \begin{equation*} \bar{\delta} = \frac{\norm{y-y^{\delta}}}{\norm{y}} \approx \frac{\norm{y-y^{\delta}}}{\norm{y^{\delta}}} = \text{SNR}^{-1} \,. \end{equation*} The first results, which are related to the computational efficiency of the different methods, are presented in Figure~\ref{fig_comparison}. One can clearly see that regardless of the noise level $\bar{\delta}$, the Newton method and the Levenberg-Marquardt method outperform the gradient-based methods, both in terms of computation time and number of iterations $n_*$ required to meet the discrepancy principle. Furthermore, as was to be expected, FISTA also performs much better than both ISTA and the gradient descent method. Note also that with the Levenberg-Marquardt and the Newton method, one can satisfy the discrepancy principle also for very small noise levels, which is infeasible for the other methods due to the prohibitively large runtime this would require. \begin{figure}[h!]
\includegraphics[width=0.48\textwidth]{pics/comparison} \quad \includegraphics[width=0.48\textwidth]{pics/comparison_iterations} \caption{Elapsed time (left) and number of iterations (right) required for meeting the stopping criterion versus different noise levels, for the considered minimization methods.} \label{fig_comparison} \end{figure} The results depicted in Figure~\ref{fig_relative_error} show that not only do the Levenberg-Marquardt and the Newton method require fewer iterations and less computation time to satisfy the discrepancy principle, but the resulting approximations also have a comparable, and even somewhat smaller, relative error than the gradient-based methods. This is of course partly due to the fact that each iteration step of those methods is `larger' than in the other methods, which nevertheless turns out to be an advantage in our case. The resulting approximate solutions for $10\%$ relative noise are shown in Figure~\ref{fig_allmethods}. The higher quality of the solutions obtained by the Levenberg-Marquardt and the Newton method is apparent. \begin{figure}[h!] \centering \includegraphics[scale=0.68]{pics/relative_error} \caption{Relative error $\norm{x_{n_*} - x^\dagger}/\norm{x^\dagger}$ in percent versus different noise levels.} \label{fig_relative_error} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{pics/allmethods10} \caption{Exact solution and reconstructions for the noise level $\bar{\delta}=10\%$.} \label{fig_allmethods} \end{figure} \section{Conclusion}\label{sect_conclusion} In this paper, we presented a minimization approach for a Tikhonov functional with ${\ell^1}$ penalty for the solution of linear inverse problems with sparsity constraints.
The employed approximate transformation approach based on a Nemytskii operator was mathematically analysed within the framework of ill-posed problems, and the fact that the resulting transformed functional is twice continuously Fr\'echet differentiable served as a basis for the construction of an effective minimization algorithm using Newton's method. Numerical example problems based on the medical imaging problem of computerized tomography demonstrated the usefulness of the proposed approach. \section{Support} The authors were funded by the Austrian Science Fund (FWF): F6805-N36. \bibliographystyle{plain} {\footnotesize
\section*{Background} Clustering has been a mainstay of genomics since the early days of gene-expression microarrays \cite{bendor}. For instance, expression profiles can be taken over various tissue samples and then clustered according to the expression levels for each sample, the aim being to discriminate pathologies based on their differential patterns of gene expression \cite{bittner}. In particular, model-based clustering, which assumes that the data are generated by a finite mixture of underlying probability distributions, has gained popularity over heuristic clustering algorithms, for which there is no concrete way of determining the number of clusters or the best clustering method \cite{yeung2001model}. Model-based clustering methods \cite{fraley2002model} provide more robust criteria for selecting the appropriate number of clusters. For example, in a Bayesian framework, utilizing Bayes factors can incorporate both \emph{a priori} knowledge of different models and the goodness of fit of the parametric model to the observed data. Moreover, nonparametric models such as Dirichlet-process mixture models \cite{maceachern1998estimating} provide a more flexible approach for clustering, by automatically learning the number of components. In small-sample settings, model-based approaches that incorporate model uncertainty have proved successful in designing robust operators \cite{DaltonOBC,mahdin1,alireza2,mahdin2}, and in objective-based experiment design to expedite the discovery of such operators \cite{shahino1,ariana,shahino21}. Whereas classification theory is grounded in feature-label distributions, with the error being the probability that the classifier mislabels a point \cite{DaltonOBC,alireza11}, clustering algorithms operate on random labeled point processes (RLPPs), with error being the probability that a point will be placed into the wrong cluster (partition) \cite{Dougherty2004}.
An optimal (Bayes) clusterer minimizes the clustering error and can be found with respect to an appropriate representation of the cluster error \cite{dalton2015analytic}. A common problem in clustering is the existence of missing values. These are ubiquitous with high-throughput sequencing technologies, such as microarrays \cite{schena1995quantitative} and RNA sequencing (RNA-seq) \cite{mortazavi2008mapping}. For instance, with microarrays, missing data can occur due to poor resolution, image corruption, or dust or scratches on the slide \cite{troyanskaya2001missing}, while for RNA-seq, the sequencing machine may fail to detect genes with low expression levels owing to the random sampling nature of sequencing technologies. As a result of these missing data mechanisms, gene expression data from microarray or RNA-seq experiments are usually in the form of large matrices, with rows and columns corresponding to genes and experimental conditions or different subjects, respectively, with some values missing. Imputation methods, such as \emph{MICE} \cite{buuren2010mice}, \emph{Amelia II} \cite{honaker2011amelia} and \emph{missForest} \cite{stekhoven2011missforest}, are usually employed to complete the data matrix before clustering analysis; however, in small-sample settings, which are common in genomic applications, these methods face difficulties, including collinearity due to potentially high correlation between genes in samples, which precludes the successful imputation of missing values. In this paper we follow a different direction by incorporating the generation of missing values with the original generating random labeled point process, thereby producing a new RLPP that generates the actual observed points with missing values. The optimal clusterer in the context of missing values is obtained by marginalizing out the missing features in the new RLPP.
One potential challenge arising here is that, in the case of missing values with general patterns, conducting the marginalization can be computationally intractable, and hence resorting to approximation methods such as Monte Carlo integration is necessary. Although the proposed framework for optimal clustering can incorporate the probabilistic modeling of arbitrary types of missing data mechanisms, to facilitate analysis, throughout this work we assume data are missing completely at random (MCAR) \cite{little2014statistical}. In this scenario, the parameters of the missingness mechanism are independent of other model parameters and therefore vanish after the expectation operation in the calculation of the posterior of label functions for clustering assignment. We derive the optimal clusterer for different scenarios in which features are distributed according to multivariate Gaussian distributions. The performance of this clusterer is compared to various methods, including $k$-POD \cite{kpod} and fuzzy $c$-means with optimal completion strategy \cite{fcm-ocs}, which are methods for directly clustering data with missing values, and also $k$-means \cite{kanungo2002efficient}, fuzzy $c$-means \cite{bezdek1984fcm} and hierarchical clustering \cite{johnson1967hierarchical} with the missing values imputed. Comprehensive simulations based on synthetic data show the superior performance of the proposed framework for clustering with missing values over a range of simulation setups. Moreover, evaluations based on RNA-seq data further verify the superior performance of the proposed method in a real-world application with missing data. \section*{Methods} \subsection*{Optimal clustering} Given a point set $S \subset \mathbb{R}^d$, where $d$ is the dimension of the space, denote the number of points in $S$ by $\eta(S)$.
A \emph{random labeled point process} (RLPP) is a pair $(\Xi,\Lambda)$, where $\Xi$ is a point process generating $S$ and $\Lambda$ generates random labels on point set $S$. $\Xi$ maps from a probability space to $[N;\mathcal{N}]$, where $N$ is the family of finite sequences in $\mathbb{R}^d$ and $\mathcal{N}$ is the smallest $\sigma$-algebra on $N$ such that for any Borel set $B$ in $\mathbb{R}^d$, the mapping $S \rightarrow \eta(S \cap B)$ is measurable. A random labeling is a family, $\Lambda = \{ \Phi_S: S \in N \}$, where $\Phi_S$ is a random label function on the point set $S$ in $N$. Denoting the set of labels by $L=\{1,2,...,l\}$, $\Phi_S$ has a probability mass function on $L^S$ defined by $P_S(\phi_S)=P(\Phi_S=\phi_S|\Xi=S)$, where $\phi_S:S \rightarrow L$ is a deterministic function assigning a label to each point in $S$. A label operator $\lambda$ maps point sets to label functions, $\lambda(S)=\phi_{S,\lambda} \in L^S$. For any set $S$, label function $\phi_S$ and label operator $\lambda$, the \emph{label mismatch error} is defined as \begin{equation} \epsilon _{\lambda }(S,\phi _{S})=\frac{1}{\eta (S)}\sum_{x\in S}I_{\phi _{S}(x)\neq \phi _{S,\lambda }(x)}, \end{equation} where $I_{A}$ is an indicator function equal to 1 if $A$ is true and 0 otherwise. The \emph{error of label function} $\lambda (S)$ is computed as $\epsilon _{\lambda }(S)=\mathbb{E}_{\Phi _{S}}[\epsilon _{\lambda }(S,\phi _{S})|S]$, and the \emph{error of label operator} $\lambda$ for the corresponding RLPP is then defined by $\epsilon \lbrack \lambda ]=\mathbb{E}_{\Xi }\mathbb{E}_{\Phi _{\Xi }}[\epsilon _{\lambda }(\Xi ,\phi _{\Xi })]$. Clustering involves identifying partitions of a point set rather than the actual labeling, where a partition of $S$ into $l$ clusters has the form $\mathcal{P}_S = \{S_1,S_2,...,S_l \}$ such that the $S_i$ are disjoint and $S = \bigcup_{i=1}^l S_i$.
A cluster operator $\zeta$ maps point sets to partitions, $\zeta(S)=\mathcal{P}_{S,\zeta}$. Considering the label switching property of clustering operators, let us define $F_{\zeta}$ as the family of label operators that all induce the same partitions as the clustering operator $\zeta$. More precisely, a label function $\phi_S$ induces partition $\mathcal{P}_S = \{S_1,S_2,...,S_l \}$, if $S_i = \{ x \in S : \phi_S(x)=l_i \}$ for distinct $l_i \in L$. Thereby, $\lambda \in F_{\zeta}$ if and only if $\phi_{S,\lambda}$ induces the same partition as $\zeta(S)$ for all $S \in N$. For any set $S$, label function $\phi_S$ and cluster operator $\zeta$, define the \emph{cluster mismatch error} by \begin{equation} \epsilon_{\zeta}(S,\phi_S) = \min_{\lambda \in F_{\zeta}} \epsilon_{\lambda}(S,\phi_S) , \end{equation} the \emph{error of partition} $\zeta(S)$ by $\epsilon_{\zeta}(S) = \mathbb{E}_{\Phi_S} [\epsilon_{\zeta}(S,\phi_S)|S ]$ and the \emph{error of cluster operator} $\zeta$ for the RLPP by $\epsilon[\zeta] = \mathbb{E}_{\Xi} \mathbb{E}_{\Phi_{\Xi}} [\epsilon_{\zeta}(\Xi,\phi_{\Xi})]$. As shown in \cite{dalton2015analytic}, error definitions for partitions can be represented in terms of risk with intuitive cost functions. Specifically, define $G_{\mathcal{P}_S}$ such that $\phi_S \in G_{\mathcal{P}_S}$ if and only if $\phi_S$ induces $\mathcal{P}_S$.
The error of partition can be expressed as \begin{equation} \epsilon_{\zeta}(S) = \sum_{\mathcal{P}_S \in \mathcal{K}_S} c_S(\zeta(S),\mathcal{P}_S) P_S(\mathcal{P}_S), \end{equation} where $\mathcal{K}_S$ is the set of all possible partitions of $S$, $P_S(\mathcal{P}_S) = \sum_{\phi_S \in G_{\mathcal{P}_S}} P_S(\phi_S)$ is the probability mass function on partitions $\mathcal{P}_S$ of $S$, and the \emph{partition cost function} between partitions $\mathcal{Q}_S$ and $\mathcal{P}_S$ of $S$ is defined as \begin{equation} c_{S}(\mathcal{Q}_{S},\mathcal{P}_{S})=\frac{1}{\eta (S)}\min_{\phi _{S,\mathcal{Q}_{S}}\in G_{\mathcal{Q}_{S}}}\sum_{x\in S}I_{\phi _{S,\mathcal{P}_{S}}(x)\neq \phi _{S,\mathcal{Q}_{S}}(x)}, \end{equation} with $\phi _{S,\mathcal{P}_{S}}$ being any member of $G_{\mathcal{P}_{S}}$. A Bayes cluster operator $\zeta ^{\ast }$ is a clusterer with the minimal error $\epsilon \lbrack \zeta ^{\ast }]$, called the \emph{Bayes error}, obtained by a Bayes partition, $\zeta ^{\ast }(S)$ for each set $S\in N$: \begin{eqnarray} \zeta^*(S) &=& \arg \min_{\zeta(S) \in \mathcal{K}_S} \epsilon_{\zeta}(S) \notag \\ &=& \arg \min_{\zeta(S) \in \mathcal{K}_S} \sum_{\mathcal{P}_S \in \mathcal{K}_S} c_S(\zeta(S),\mathcal{P}_S) P_S(\mathcal{P}_S). \end{eqnarray} The Bayes clusterer can be solved for each fixed $S$ individually. More specifically, the search space in the minimization and the set of partitions with known probabilities in the summation can be constrained to subsets of $\mathcal{K}_S$, denoted by $\mathcal{C}_S$ and $\mathcal{R}_S$, respectively. We refer to $\mathcal{C}_S$ and $\mathcal{R}_S$ as the set of candidate partitions and the set of reference partitions, respectively.
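For small label sets, the partition cost function can be evaluated by brute force, minimizing the label mismatch error over all relabelings that induce the same partition. A hedged Python sketch (the names are ours, and the permutation search is exponential in the number of labels):

```python
import numpy as np
from itertools import permutations

def partition_cost(phi_p, phi_q, num_labels):
    """c_S(Q_S, P_S): minimum fraction of mismatched points over all
    label functions inducing the same partition as phi_q."""
    phi_p = np.asarray(phi_p)
    phi_q = np.asarray(phi_q)
    best = 1.0
    for perm in permutations(range(num_labels)):
        relabeled = np.array([perm[l] for l in phi_q])
        best = min(best, float(np.mean(phi_p != relabeled)))
    return best

# identical partitions up to label switching have zero cost
print(partition_cost([0, 0, 1, 1], [1, 1, 0, 0], 2))  # -> 0.0
```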
Following \cite{dalton2015analytic}, we can search for the optimal clusterer based on both optimal and suboptimal procedures (detailed in Results and discussion section) with derived bounds that can be used to optimally reduce the size of $\mathcal{C}_S$ and $\mathcal{R}_S$. \subsection*{Gaussian model with missing values} We consider an RLPP model that generates the points in the set $S$ according to a Gaussian model, where features of $x \in S$ can be missing completely at random due to a missing data mechanism independent of the RLPP. More precisely, the points $x \in S$ with label $\phi_S(x) = i$ are drawn independently from a Gaussian distribution with parameters $\rho_i = \{\mu_i,\Sigma_i\}$. Assuming $n_i$ sample points with label $i$, we divide the observations into $G_i \leq n_i$ groups, where all $n_{ig}$ points in group $g$ have the same set, $J_{ig}$, of observed features with cardinality $|J_{ig}| = d_{ig}$. Denoting by $S_{ig}$ the set of sample points in group $g$ of label $i$, we represent the pattern of missing data in this group using a $d_{ig}\times d$ matrix $M_{ig}$, where each row is a $d$-dimensional vector with a single non-zero element with value 1 corresponding to the observed feature's index. Thus, the non-missing portion of sample point $x \in S_{ig}$, i.e. $M_{ig}x$, has Gaussian distribution $\text{N}(M_{ig}\mu_i ,M_{ig}\Sigma_i M_{ig}^{T})$.
Given $\rho =\{\rho _{1},\rho _{2},...,\rho _{l}\}$ of independent parameters, to evaluate the posterior probability of random labeling function $\phi _{S}\in L^{S}$, we have \begin{align} P_{S}& (\phi _{S})\propto P(\phi _{S})f(S|\phi _{S})= \notag \label{eq:ps} \\ & P(\phi _{S})\int f(S|\phi _{S},\rho)f(\rho )d\rho = \notag \\ & P(\phi _{S})\prod_{\substack{ i=1 \\ n_{i}\geq 1}}^{l}\int \Big(\prod_{x\in S_{i}}f_{i}(x|\rho _{i})\Big)f(\rho _{i})d\rho _{i}= \notag \\ & P(\phi _{S})\prod_{\substack{ i=1 \\ n_{i}\geq 1}}^{l}\int \Big(\prod_{g=1}^{G_{i}}\prod_{x\in S_{ig}} \\ & \mbox{N}\big(M_{ig}x;M_{ig}\mu _{i},M_{ig}\Sigma _{i}M_{ig}^{T}\big)\Big)f(\mu _{i},\Sigma _{i})d\mu _{i}d\Sigma _{i}, \notag \end{align} where $P(\phi _{S})$ is the prior probability on label functions, which we assume does not depend on the specific points in $S$. \subsubsection*{Known means and covariances} When mean and covariance parameters of label-conditional distributions are known, the prior probability $f(\mu_i,\Sigma_i)$ in~(\ref{eq:ps}) is a point mass at $\rho_i = \{\mu_i,\Sigma_i\}$. Thus, \begin{equation} \label{eq:ps1} \begin{split} &P_S(\phi_S)\propto P(\phi_S) \times \\ &\prod_{\substack{ i=1 \\ n_i \geq 1}}^l \prod_{g=1}^{G_i} \prod_{x \in S_{ig}} \Big[ (2\pi)^{-d_{ig}/2} |M_{ig}\Sigma_i M_{ig}^{T}|^{-1/2} \times \\ &\exp \big\{ - \frac{1}{2} (x-\mu_i)^T M_{ig}^{T} (M_{ig}\Sigma_i M_{ig}^{T})^{-1} M_{ig} (x-\mu_i) \big\} \Big]. \end{split} \end{equation} We define the group-$g$ statistics of label $i$ as \begin{eqnarray} m_{ig} &:=&\frac{1}{n_{ig}}\sum_{x \in S_{ig}}M_{ig}x, \notag \\ \Psi_{ig} &:=&\sum_{x \in S_{ig}}(M_{ig}x-m_{ig})(M_{ig}x-m_{ig})^{T}, \label{eq:def} \end{eqnarray} where $m_{ig}$ and $\Psi_{ig}$ are the sample mean and scatter matrix, employing only the observed $n_{ig}$ data points in group $g$ of label $i$.
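In code, the group statistics $m_{ig}$ and $\Psi_{ig}$ can be computed by selecting the observed coordinates directly instead of forming the mask matrix $M_{ig}$ explicitly; a minimal Python sketch under that assumption (function name is ours):

```python
import numpy as np

def group_statistics(points, observed_idx):
    """Sample mean m_ig and scatter matrix Psi_ig for one group of points
    sharing the same observed feature indices J_ig."""
    X = np.asarray(points, dtype=float)[:, observed_idx]  # rows are M_ig x
    m = X.mean(axis=0)
    centered = X - m
    psi = centered.T @ centered  # sum of (M_ig x - m)(M_ig x - m)^T
    return m, psi

# three points in R^3 with feature 1 missing (observed features {0, 2});
# the unobserved coordinate holds an arbitrary placeholder value
m, psi = group_statistics([[1, 9, 2], [3, 9, 4], [5, 9, 6]], [0, 2])
```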
The posterior probability of the labeling function~(\ref{eq:ps1}) can then be expressed as \begin{equation} \label{eq:ps2} \begin{split} &P_S(\phi_S)\propto P(\phi_S) \prod_{\substack{ i=1 \\ n_i \geq 1}}^l \prod_{g=1}^{G_i} \\ &\Big[|2\pi \Sigma_{ig}|^{-n_{ig}/2}\exp \{- \frac{1}{2}\text{tr}\big(\Psi_{ig}(\Sigma _{ig})^{-1}\big)\} \times \\ &\exp \big\{-\frac{1}{2}n_{ig}(m_{ig}-M_{ig}\mu_i)^{T}(\Sigma_{ig})^{-1}(m_{ig}-M_{ig}\mu_i )\big\} \Big], \end{split} \end{equation} where $\Sigma_{ig}=M_{ig}\Sigma_i M_{ig}^{T}$ is the covariance matrix corresponding to group $g$ of label $i$. \subsubsection*{Gaussian means and known covariances} Under this model, data points are generated according to Gaussians whose mean parameters are random and whose covariance matrices are fixed. Specifically, for label $i$ we have $\mu_i \sim \mbox{N}(m_i,\frac{1}{\nu_i}\Sigma_i)$, where $\nu_i>0$ and $m_i$ is a length-$d$ real vector. Thus the posterior of the label function given the point set $S$ can be derived as \begin{equation} \label{eq:ps3} \begin{split} &P_S(\phi_S) \propto P(\phi_S) \prod_{\substack{ i=1 \\ n_i \geq 1}}^l \Bigg[ \prod_{g=1}^{G_i} \Big[ |2\pi\Sigma_{ig}|^{-n_{ig}/2} \times \\ &\exp \{- \frac{1}{2}\text{tr}\big(\Psi_{ig}(\Sigma _{ig})^{-1}\big)\} \Big] \times (\nu_i)^{d/2} |2\pi \Sigma_i|^{-1/2} \\ &\int \exp \Big\{-\frac{1}{2}\sum_{g=1}^{G_i}n_{ig}(m_{ig}-M_{ig}\mu_i)^{T}(\Sigma_{ig})^{-1} \\ &(m_{ig}-M_{ig}\mu_i ) - \frac{\nu_i}{2} (\mu_i-m_i)^T \Sigma_i^{-1} (\mu_i-m_i) \Big\} d\mu_i \Bigg].
\end{split} \end{equation} By completing the square and using the normalization constant of the multivariate Gaussian distribution, the integral in this equation can be expressed as \begin{align} & \int \exp \Big\{-\frac{1}{2}\big[(\mu _{i}-A_{i}^{-1}b_{i})^{T}A_{i}(\mu _{i}-A_{i}^{-1}b_{i})+ \notag \\ & \sum_{g=1}^{G_{i}}n_{ig}m_{ig}^{T}\Sigma _{ig}^{-1}m_{ig}+\nu _{i}m_{i}^{T}\Sigma _{i}^{-1}m_{i}-b_{i}^{T}A_{i}^{-1}b_{i}\big]\Big\} \notag \\ & =|A_{i}/(2\pi )|^{-1/2}\exp \Big\{-\frac{1}{2}\big[\sum_{g=1}^{G_{i}}n_{ig}m_{ig}^{T}\Sigma _{ig}^{-1}m_{ig}+ \\ & \quad \quad \quad \quad \quad \quad \nu _{i}m_{i}^{T}\Sigma _{i}^{-1}m_{i}-b_{i}^{T}A_{i}^{-1}b_{i}\big]\Big\}, \notag \end{align} where \begin{eqnarray} A_{i} &=&\sum_{g=1}^{G_{i}}n_{ig}M_{ig}^{T}\Sigma _{ig}^{-1}M_{ig}+\nu _{i}\Sigma _{i}^{-1}, \\ b_{i} &=&\sum_{g=1}^{G_{i}}n_{ig}M_{ig}^{T}\Sigma _{ig}^{-1}m_{ig}+\nu _{i}\Sigma _{i}^{-1}m_{i}. \end{eqnarray} \subsubsection*{Gaussian means and inverse-Wishart covariances} Under this model, data points are generated from Gaussian distributions with random mean and covariance parameters. More precisely, the parameters associated with label $i$ are distributed as ${\mu_i|\Sigma_i \sim \mbox{N}(m_i,\frac{1}{\nu_i}\Sigma_i)}$ and $\Sigma_i \sim \mbox{IW}(\kappa_i,\Psi_i)$, where the covariance has the inverse-Wishart distribution \begin{equation} f(\Sigma_i) = \frac{|\Psi_i|^{\kappa_i/2}}{2^{\kappa_i d/2} \Gamma_d(\kappa_i/2)} |\Sigma_i|^{-\frac{\kappa_i+d+1}{2}} \exp \big( -\frac{1}{2} \mbox{tr}(\Psi_i \Sigma_i^{-1}) \big).
\end{equation} To compute the posterior probability of the labeling function (\ref{eq:ps}), we first marginalize out the mean parameters $\mu_i$ in a similar fashion to (\ref{eq:ps3}), obtaining \begin{eqnarray} \label{eq:iw} P_S(\phi_S) &\propto& P(\phi_S) \prod_{\substack{ i=1 \\ n_i \geq 1}}^l \int \Bigg[ \prod_{g=1}^{G_i}|2\pi \Sigma_{ig}|^{-n_{ig}/2}\times \notag \\ &&\exp \{- \frac{1}{2}\text{tr}\big(\Psi_{ig}(\Sigma _{ig})^{-1}\big)\}\times \\ && (\nu_i)^{d/2} |\Sigma_i|^{-1/2} |A_i/(2\pi)|^{-1/2}\times \notag \\ && \exp \Big\{ -\frac{1}{2} \big[ \sum_{g=1}^{G_i}n_{ig} m_{ig}^T \Sigma_{ig}^{-1} m_{ig} \notag \\ &+& \nu_i m_i^T \Sigma_i^{-1} m_i - b_i^T A_i^{-1} b_i \big] \Big\} \Bigg] f(\Sigma_i) d\Sigma_i. \notag \end{eqnarray} The integration in the above equation has no closed-form solution, so we resort to Monte Carlo integration to approximate it. Specifically, denoting the term in the brackets in equation~(\ref{eq:iw}) by $g(\Sigma_i)$, we draw $J$ samples $\Sigma_i^{(j)} \sim \mbox{IW}(\kappa_i,\Psi_i)$, $j=1,2,...,J$, and then compute the integral as $\frac{1}{J} \sum_{j=1}^{J} g(\Sigma_i^{(j)})$. \section*{Results and discussion} The performance of the proposed method for optimal clustering with values missing at random is compared with some suboptimal versions, two other methods for clustering data with missing values, and also classical clustering algorithms with imputed missing values. The performance comparison is carried out on synthetic data generated from different Gaussian RLPP models with different missing probability setups, and also on a publicly available breast cancer dataset generated by the TCGA Research Network (https://cancergenome.nih.gov/). In our experiments, the results of the exact optimal solution for the RLPP with missing at random (Optimal) are provided for smaller point sets, i.e. wherever computationally feasible.
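The Monte Carlo step above can be sketched as follows, assuming SciPy's inverse-Wishart sampler; the integrand $g$ is passed in as a callable, and all names here are ours:

```python
import numpy as np
from scipy.stats import invwishart

def mc_integral(g, kappa, Psi, J=1000, random_state=None):
    """Approximate the integral of g(Sigma) against IW(Sigma; kappa, Psi)
    by averaging g over J inverse-Wishart draws: (1/J) * sum_j g(Sigma_j)."""
    draws = invwishart.rvs(df=kappa, scale=Psi, size=J,
                           random_state=random_state)
    return float(np.mean([g(S) for S in draws]))

# sanity check: E[tr(Sigma)] = tr(Psi) / (kappa - d - 1) for kappa > d + 1,
# so with Psi = I_2 and kappa = 10 the estimate should be near 2/7
est = mc_integral(np.trace, kappa=10.0, Psi=np.eye(2), J=2000, random_state=0)
```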
We have also tested two suboptimal solutions, similar to the ideas in \cite{dalton2015analytic}, for an RLPP with missing at random. In the first one (Subopt. Pmax), the set of reference partitions ($\mathcal{R}_S$) is restricted to a closed ball of a specified radius centered on the partition with the highest probability, where the distance between two partitions is defined as the minimum Hamming distance between labels inducing the partitions. In both Optimal and Pmax, the reference set is further constrained to the partitions that assign the correct number of points to each cluster, but the set of candidate partitions ($\mathcal{C}_S$) includes all the possible partitions of $n$ points, i.e. $2^{n-1}$. In the second suboptimal solution (Subopt. Pseed), a local search within Hamming distance 1 is performed starting from five random initial partitions to approximately find the partition with (possibly local) maximum probability. Then the sets of reference and candidate partitions are constrained to the partitions with correct cluster sizes within a specified Hamming distance from the found (local) maximum probability partition. The bounds derived in \cite{dalton2015analytic} for reducing the set of candidate and reference partitions are used, where applicable, in Optimal, Pseed, and Pmax. In all scenarios, $k$-POD and fuzzy $c$-means with optimal completion strategy (FCM-OCS) are directly applied to the data with missing values. In the simulations in \cite{fcm-ocs}, where FCM-OCS is introduced, the authors initialize cluster centers by applying ordinary fuzzy $c$-means to the complete data, i.e. using knowledge of the missing values. To have a fair comparison with the other methods, we calculate the initial cluster centers for FCM-OCS by applying fuzzy $c$-means to the subset of points with no missing features for lower missing rates.
For higher missing rates we impute the missing values by the mean of the corresponding feature values across all points, and then apply fuzzy $c$-means to all the points to initialize the cluster centers. In order to apply the classical algorithms, the missing values are imputed according to \cite{siamak}, by employing a multivariate Gibbs sampler that iteratively generates samples for missing values and parameters based on the observed data. The classical algorithms in our experiments are \emph{k}-means (KM), fuzzy \emph{c}-means (FCM), hierarchical clustering with single linkage (Hier. (Si)), and hierarchical clustering with complete linkage (Hier. (Co)). Moreover, completely random clusterer (Random) results are also included for performance comparison. \subsection*{Simulated data} In the simulation analysis, the number of clusters is fixed at 2, and the dimensionality of the RLPPs (number of features) is set to 5. Additional results for 20 features are provided in Additional file 1. Point generation is done based on a Gaussian mixture model (GMM). Three different scenarios for the parameters of the GMM are considered: \emph{i}) fixed known means and covariances; \emph{ii}) known covariances and unknown means with Gaussian distributions; \emph{iii}) unknown means and covariances with Gaussian-inverse-Wishart distributions. We select the values of the parameters of the point generation process to have an approximate Bayes error of 0.15. The selected values are shown in Table \ref{table:sim-params}. \begin{table*}[tph] \caption{Parameters for the point generation under three models.
$\mbox{N}$, $\mbox{IW}$, $\mathbf{1}_{d}$, and $I_{d}$ denote Gaussian, inverse-Wishart, column vector of all ones with length $d$, and $d\times d$ identity matrix, respectively.} \label{table:sim-params} \begin{center} \resizebox{0.99\linewidth}{!} {\begin{tabular}{|l|l|l|l|} \hline Model & Mean vectors & Covariance matrices & Distributions' hyperparameters \\ \hline & & & \\ Fixed means and covariances & $\mu_1=0\cdot \mathbf{1}_d$, $\mu_2=0.445\cdot \mathbf{1}_d$ & $\Sigma_1=\Sigma_2=0.23\cdot I_d$ & --- \\ \hline Gaussian means and fixed covariances & $\mu_1 \sim \mbox{N}(m_1,\frac{1}{\nu_1}\Sigma_1)$, $\mu_2 \sim \mbox{N}(m_2,\frac{1}{\nu_2}\Sigma_2)$ & $\Sigma_1=\Sigma_2=0.28\cdot I_d$ & $m_1=0\cdot \mathbf{1}_d$, $m_2=0.45\cdot \mathbf{1}_d$, \\ & & & $\nu_1=30$, $\nu_2=5$ \\ \hline Gaussian means and inverse-Wishart covariances & $\mu_1 \sim \mbox{N}(m_1,\frac{1}{\nu_1}\Sigma_1)$, $\mu_2 \sim \mbox{N}(m_2,\frac{1}{\nu_2}\Sigma_2)$ & $\Sigma_1\sim \mbox{IW}(\kappa_1,\Psi_1)$, $\Sigma_2\sim \mbox{IW}(\kappa_2,\Psi_2)$ & $m_1=0\cdot \mathbf{1}_d$, $m_2=0.45\cdot \mathbf{1}_d$, \\ & & & $\nu_1=30$, $\nu_2=5$, \\ & & & $\Psi_1=\Psi_2=20.7\cdot I_d$, \\ & & & $\kappa_1=\kappa_2=75$ \\ \hline \end{tabular}} \end{center} \end{table*} For the point set generation, the number of points from each cluster is fixed \emph{a priori}. The distributions are first drawn from the assumed model, and then the points are generated based on the drawn distributions. A subset of the points' features is randomly selected to be hidden based on missing at random with different missing probabilities. Four different setups for the number of points are considered in our simulation analysis: 10 points from each cluster ($n_{1}=n_{2}=10$), 12 points from one cluster and 8 points from the other cluster ($n_{1}=12,n_{2}=8$), 35 points from each cluster ($n_{1}=n_{2}=35$), and 42 points from one cluster and 28 points from the other cluster ($n_{1}=42,n_{2}=28$).
When having unequal-sized clusters, in half of the repetitions $n_{1}$ points are generated from the first distribution and $n_{2}$ points from the second distribution, and vice-versa in the other half. In each simulation repetition, all clustering methods are applied to the points to generate a vector of labels that induces a two-cluster partition. The label vector predicted by each method is compared with the true label vector of the point set to calculate the error of that method on that point set. In other words, for each method the number of points assigned to a cluster different from their true one is counted (after accounting for the label switching issue) and divided by the total number of points ($n=n_{1}+n_{2}$) to calculate the clustering error of that method on the point set. These errors are averaged across all point sets in different repetitions to empirically estimate the clustering error of each method under a model and fixed missing-value probability. In cases with $n=70$, since applying Optimal and Pmax is computationally prohibitive, we only provide the results for Pseed. In Additional file 1, the average clustering errors are shown as a function of the Hamming distance threshold used to define the set of reference partitions in Pmax and Pseed, for different simulation scenarios. From the figures in Additional file 1, we see that in all cases the performances of Pmax and Pseed are quite insensitive to the chosen Hamming distance threshold for reference partitions. Note that in these figures the performance of every method other than Pmax and Pseed is constant in each plot. The average results for the fixed mean vectors and covariance matrices across 100 repetitions are shown in Figure \ref{fig:sim-fixedmeanfixedcov}. Here, the Hamming distance threshold for reference partitions in Pmax and Pseed is fixed at 1.
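For two clusters, the per-repetition clustering error described above (mismatch fraction minimized over label switching) can be written compactly; a minimal Python sketch with our own names:

```python
import numpy as np

def clustering_error(true_labels, pred_labels):
    """Fraction of misassigned points for a two-cluster problem, taking the
    better of the two possible matchings between predicted and true labels."""
    t = np.asarray(true_labels)
    p = np.asarray(pred_labels)
    return float(min(np.mean(t != p), np.mean(t != 1 - p)))

# predicted labels are mostly the true labels flipped; one genuine error
print(clustering_error([0, 0, 1, 1], [1, 1, 1, 0]))  # -> 0.25
```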
It can be seen that Optimal, Pmax, and Pseed outperform all the other methods in all the smaller sample size settings, and that Pmax and Pseed have virtually the same performance as Optimal. For the larger sample size settings, where only Pseed is applied, its superior performance compared with the other methods is clear from the figure. \begin{figure*}[tph] \centering [width=0.99\textwidth]{./Fig1} \caption{\csentence{Average clustering errors vs. missing probability for fixed means and covariances model.} The first and second rows correspond to $n=20$ and $n=70$, respectively.} \label{fig:sim-fixedmeanfixedcov} \end{figure*} Figure \ref{fig:sim-gaussianmeanfixedcov} depicts the comparison results under the unknown mean vectors with Gaussian distributions and fixed covariance matrices, averaged over 80 repetitions. The Hamming distance threshold in Pmax and Pseed is set to 2. For smaller sample sizes, Optimal, Pmax and Pseed have lower average errors than all the other methods. We can see that for balanced clusters the suboptimal and optimal solutions have very close performances, but for the unbalanced-clusters case with higher missing probabilities the difference between Optimal and the suboptimal Pmax and Pseed becomes noticeable. For larger sample sizes Pseed consistently outperforms the other methods, although for lower missing probabilities it has closer competitors. In all cases, as the missing probability increases, the superior performance of the proposed methods becomes more significant. \begin{figure*}[tph] \centering [width=0.99\textwidth]{./Fig2_} \caption{\csentence{Average clustering errors vs.
missing probability for Gaussian means and fixed covariances model.} The first and second rows correspond to $n=20$ and $n=70$, respectively.} \label{fig:sim-gaussianmeanfixedcov} \end{figure*} The average results under the unknown mean vectors and covariance matrices with Gaussian-inverse-Wishart distribution over 40 repetitions are provided in Figure \ref{fig:sim-gaussianmeaninversewishartcov}. In the smaller sample size cases, the Hamming distance threshold in Pmax and Pseed is fixed at 8, and we can see that the proposed suboptimal (Pmax and Pseed) and optimal clustering with missing values have very close average errors, all much lower than the other methods' errors. For larger sample sizes, only the results for missing probability equal to 0.15 are shown vs. the Hamming distance threshold used to define the reference partitions in Pseed. Again, Pseed performs better than the other methods. \begin{figure*}[tph] \centering [width=0.99\textwidth]{./Fig3_} \caption{\csentence{Average clustering errors for Gaussian means and inverse-Wishart covariances model.} The first row corresponds to $n=20$, and the errors are shown for different missing probabilities. The second row corresponds to $n=70$ and missing probability of 0.15, where the errors are plotted vs. the Hamming distance threshold used to define the reference partitions in Pseed.} \label{fig:sim-gaussianmeaninversewishartcov} \end{figure*} \subsection*{RNA-seq data} In this section the performance of the clustering methods is examined on a publicly available RNA-seq dataset of breast cancer. The data are available from The Cancer Genome Atlas (TCGA) \cite{cancer2008comprehensive}, and were obtained using the R package TCGA2STAT \cite{wan2015tcga2stat}. The dataset consists of matched tumor and normal samples, and includes 97 points from each. The original data are in terms of the number of sequence reads mapped to each gene.
RNA-seq data are integers, highly skewed, and over-dispersed \cite{ehsan1,ehsan2,arianan1}. Thus, we apply a variance stabilizing transformation (VST) \cite{durbin2002variance} implemented in the DESeq2 package \cite{love2014moderated}, transforming the data to a log2 scale normalized with respect to library size. For all subsequent analysis, other than for calculating clustering errors, we assume that the labels of the data are unknown. Feature selection is performed in a completely unsupervised manner, since in clustering no labels are known in practice. The top 10 genes in terms of variance-to-mean ratio of expression are picked as features to be used in the clustering algorithms. In general, for setting prior hyperparameters, external sources of information like signaling pathways, where available, can be leveraged \cite{shahinp1,shahinp2}. Here, we only use a subset of the discarded gene expressions, i.e. the next 90 top genes (in terms of variance-to-mean ratio of expression), for prior hyperparameter calibration for the optimal/suboptimal approaches. We follow the approach in \cite{dalton2011application} and employ the method of moments for prior calibration, but unlike \cite{dalton2011application}, a single set of hyperparameters is estimated and used for both clusters, since the labels of the data are not available. It is well known that in small sample size settings, estimation of covariance matrices, scatter matrices, and even mean vectors may be problematic.
Therefore, similar to \cite{dalton2011application}, we assume the following structure \begin{equation*} \begin{split} & \Psi _{0}=\Psi _{1}=\begin{bmatrix} \sigma ^{2} & \rho \sigma ^{2} & \dots & \rho \sigma ^{2} \\ \rho \sigma ^{2} & \sigma ^{2} & \dots & \rho \sigma ^{2} \\ \vdots & \vdots & \ddots & \vdots \\ \rho \sigma ^{2} & \dots & \dots & \sigma ^{2}\end{bmatrix}_{d\times d}, \\ & m_{0}=m_{1}=m[1,\cdots ,1]_{d}^{T}, \\ & \nu _{0}=\nu _{1}=\nu ,\kappa _{0}=\kappa _{1}=\kappa , \end{split} \end{equation*} and estimate five scalars ($m$, $\sigma ^{2}$, $\rho $, $\kappa $, $\nu $) from the data. In each repetition, stratified sampling is done, i.e. $n_{1}$ and $n_{2}$ points are sampled randomly from each group (normal and tumor). When $n_{1}\neq n_{2}$, in half of the repetitions $n_{1}$ and $n_2$ points are randomly selected from the normal and tumor samples, respectively, and vice-versa in the other half. Prior calibration is performed in each repetition, and 15\% of the selected features are considered as missing values. Similar to the experiments on the simulated data, the clustering error of each method in each iteration is calculated by comparing the predicted labels and true labels of the sampled points (accounting for the label switching issue), and the average results over 40 repetitions are provided in Figure \ref{fig:real}. It can be seen that the proposed optimal clustering with missing values and its suboptimal versions outperform the other algorithms. It is worth noting that the performance of Pseed is more sensitive to the selected Hamming distance threshold for reference partitions compared with the results on simulated data. \begin{figure}[tph] \centering [width=0.45\textwidth]{./Fig4} \caption{\csentence{Empirical clustering errors on breast cancer RNA-seq data.}} \label{fig:real} \end{figure} \section*{Conclusion} The methodology employed in this paper is very natural.
As with any signal processing task, the basic problem is to find an optimal operator from a class of operators given the underlying random process and a cost function, which is often an error associated with operator performance. While it may not be possible to compute the optimal operator, one can at least employ suboptimal approximations to it while knowing the penalties associated with the approximations. In this paper, we have, in effect, confronted an old problem in signal processing: If we wish to make a decision based on a noisy observed signal, is it better to filter the observed signal and then determine the optimal decision on the filtered signal, or to find the optimal decision based directly on the observed signal? The answer is the latter. The reason is that the latter approach is fully optimal relative to the actual observation process, whereas, even if in the first approach the filtering is optimal relative to the noise process, the first approach produces a composite of two actions, filter and decision, each of which is only optimal relative to a portion of the actual observation process. In the present situation involving clustering, in the standard imputation-followed-by-clustering approach, it is typically the case that neither the filter (imputation) nor the decision (clustering) is optimal, so that even more advantage is obtained by optimal clustering over the missing-value-adjusted RLPP. \begin{backmatter} \section*{List of abbreviations} RLPP: random labeled point process; RNA-seq: RNA sequencing; MCAR: missing completely at random; TCGA: The Cancer Genome Atlas; FCM-OCS: fuzzy $c$-means with optimal completion strategy; Hier. (Si): hierarchical clustering with single linkage; Hier. (Co): hierarchical clustering with complete linkage; GMM: Gaussian mixture model; VST: variance stabilizing transformation. \section*{Declarations} \section*{Competing interests} The authors declare that they have no competing interests.
\section*{Author's contributions} S. B. and S. Z. D. developed the method, performed the experiments, and wrote the first draft. X. Q. and E. R. D. proofread and edited the manuscript, and oversaw the project. All authors have read and approved the final manuscript. \section*{Availability of data and materials} The publicly available real datasets analyzed during the current study have been generated by the TCGA Research Network https://cancergenome.nih.gov/. \section*{Funding} This work was funded in part by Awards CCF-1553281 and IIS-1812641 from the National Science Foundation, and a DMREF grant from the National Science Foundation, award number 1534534. The publication cost of this article was funded by Award IIS-1812641 from the National Science Foundation. \section*{Ethics approval and consent to participate} Not applicable. \section*{Consent for publication} Not applicable. \section*{Acknowledgment} We thank Texas A\&M High Performance Research Computing for providing computational resources to perform the experiments in this paper. \bibliographystyle{bmc-mathphys}
\section{Introduction} It is believed that a betting house always makes money in the long run, irrespective of its short-term losses or gains. In this paper, we make an attempt to understand this phenomenon using the simple concepts of `expectation' and `variance' from probability theory. First, we will discuss what a \textbf{fair game} is. Let us consider a simple game on an English Premier League (EPL) match where Manchester United (ManU) is playing against Liverpool. Suppose a betting house offers a game in which, if ManU wins (with probability 0.606), the player receives \$0.65 from the betting house; on the other hand, if ManU loses (with probability 0.394), the player pays \$1 to the betting house. The player's revenue scheme $R_p$ is defined as follows: \begin{eqnarray*} R_{p}=\bigg\{\begin{array}{cc} 0.65 & \mbox{with probability }0.606,\\ -1 & \mbox{with probability }0.394, \end{array} \end{eqnarray*} and the player's expected revenue is $$ E(R_p)=0.65\times 0.606 - 0.394=0. $$ More about expectation and moments can be found in [1,2]. Now, we will look at the revenue scheme and expected revenue of the betting house for the same game: \begin{eqnarray*} R_{b}=\bigg\{\begin{array}{cc} -0.65 & \mbox{with probability }0.606,\\ 1 & \mbox{with probability }0.394. \end{array} \end{eqnarray*} The expected revenue of the betting house is $$ E(R_b)=-0.65\times 0.606 + 0.394=0. $$ If we compare $R_p$ and $R_b$, we can see that the player's loss is the gain of the betting house and vice-versa. This type of game is known as a `zero-sum' game. Also, $E(R_b)=E(R_p)=0$ means that if the player and the betting house play this game several times, then on average neither the betting house nor the player will make or lose money. This is called a `fair game.' Now, a betting house has establishment costs, and most betting houses are for-profit organizations. The question here is how they make money.
An easy way of doing this is for the betting house to pay the player less than what it is supposed to pay. That is, in the fair game above, instead of paying the player \$0.65, the betting house pays less than \$0.65 in order to make money. For example, if it pays the player \$0.6, then the revenue scheme for the betting house is \begin{eqnarray*} R_{b}=\bigg\{\begin{array}{cc} -0.6 & \mbox{with probability }0.606,\\ 1 & \mbox{with probability }0.394, \end{array} \end{eqnarray*} and the expected revenue of the betting house is $$ E(R_b)=-0.6\times 0.606 + 0.394=0.0304. $$ It means that if the player and the betting house play this game several times, then on average the betting house will make about \$0.03, or 3 cents, per game from the player, and the player will lose the same amount because it is a zero-sum game. So, to earn money in the long run, the betting house offers the player less than what is fair. The rest of the paper is organized as follows. In section \ref{sec_betting_strategy_prob_th}, we present the strategy of betting houses using probability theory. In section \ref{sec_strategy_league_football_match}, we discuss the strategy for a league football match. In section \ref{sec_data_analysis}, we present the data analysis and show how the imputed cost of a betting house can be estimated numerically. \section{Betting Strategy with Probability Theory}\label{sec_betting_strategy_prob_th} In this section, we present the strategy of the betting house. Suppose $A$ is an event with $P(A)=p$. If $A$ happens, the betting house will pay \$$r$; otherwise, the betting house will receive \$1. So the revenue scheme for the betting house is $$ R_{b}=\bigg\{ \begin{array}{cc} -r & \mbox{with probability }p,\\ 1 & \mbox{with probability }(1-p), \end{array} $$ and the expected revenue of the betting house is $$ E(R_b)=-rp+(1-p). $$ This game is a fair game if $E(R_b)=0$, that is, $r=\frac{1}{p}-1$; here $\frac{1}{p}$ is known as the `\emph{decimal odds}'.
The variance of the revenue scheme is generally regarded as the risk of a game. For a fair game, $Var(R_b)=E(R_b^2)-[E(R_b)]^2=E(R_b^2)$. Now \begin{eqnarray*} E(R_b^2)&=& p \Big(\frac{1}{p}-1\Big)^2+ (1-p)\\ &=& (1-p)\Big[\frac{1-p}{p}+1\Big]\\ &=&\frac{1-p}{p}=\frac{1}{p}-1=r. \end{eqnarray*} Interestingly, for the fair game, $r$ is the amount the betting house pays for each dollar it receives. Since $Var(R_b)=r$, the quantity $r$, known as the `\emph{fractional odds},' is also the measure of risk of this strategy. To make a profit in the long run, the betting house pays \$$(r-\epsilon)$, where $\epsilon>0$. The revenue scheme is $$ R_{b}=\bigg\{ \begin{array}{cc} -\{(\frac{1}{p}-1)-\epsilon\} & \mbox{with probability }p,\\ 1 & \mbox{with probability }(1-p), \end{array} $$ and the expected revenue of the betting house is \begin{eqnarray*} E(R_b)&=&-(\frac{1}{p}-1)p+\epsilon p+ (1-p)\\ &=&-(1-p)+\epsilon p + (1-p)\\ &=&\epsilon p. \end{eqnarray*} After simplification, $Var(R_{b})=E(R_b^2)-[E(R_b)]^2= r(1-p\epsilon)^2$. \section{Strategy for League Football Match}\label{sec_strategy_league_football_match} A league football match has three mutually exclusive outcomes: (i) the home team wins, (ii) the away team wins, (iii) a draw. Let us consider the EPL match ManU vs. Liverpool, where ManU is the home team. Suppose for this match the betting house calculates the probabilities 0.6, 0.15 and 0.25 for the three outcomes, namely a home win, an away win and a draw, respectively. The corresponding decimal odds are $1/0.6\approx1.66$, $1/0.15\approx6.66$ and $1/0.25=4.00$. As we saw in the previous section, the betting house will never reveal these fair odds. It will always announce odds that are less than the fair values so that it can stay in profit.
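The moment identities derived above, $E(R_b)=\epsilon p$ and $Var(R_b)=r(1-p\epsilon)^2$, can be checked numerically. A short Python sketch (the function names are ours, introduced only for illustration):

```python
# Fair fractional odds and the moment identities for the house's revenue scheme.
def fractional_odds(p):
    """Fair payout r per $1 staked for an event of probability p: r = 1/p - 1."""
    return 1.0 / p - 1.0

def house_moments(p, eps=0.0):
    """Expectation and variance of the house revenue when it pays r - eps."""
    r = fractional_odds(p)
    pay = r - eps
    mean = -pay * p + (1.0 - p)          # E(R_b)
    var = pay**2 * p + (1.0 - p) - mean**2   # Var(R_b) = E(R_b^2) - E(R_b)^2
    return mean, var

p = 0.6
m, v = house_moments(p)              # fair game: E = 0 and Var = r
m2, v2 = house_moments(p, eps=0.09)  # with margin: E = eps*p, Var = r*(1 - p*eps)^2
```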
For instance, if the betting house offers odds of 1.57, 6.57 and 3.87 against the respective events of a home win, an away win and a draw, then the revised revenue scheme for ManU winning the match with announced odds of 1.57 is $$ R_{b}^H=\bigg\{ \begin{array}{cc} -(1.57-1) & \mbox{with probability }0.6,\\ 1 & \mbox{with probability }0.4. \end{array} $$ The expected revenue of the betting house is $$ E(R_b^H)=-0.57\times 0.6 + 0.4=0.058. $$ Similarly, the revised revenue scheme for Liverpool winning the match with announced odds of 6.57 is $$ R_{b}^A=\bigg\{ \begin{array}{cc} -(6.57-1) & \mbox{with probability }0.15,\\ 1 & \mbox{with probability }0.85, \end{array} $$ and the expected revenue of the betting house is $$ E(R_b^A)= -5.57\times 0.15 + 0.85=0.0145. $$ Likewise, the revised revenue scheme for the match ending in a draw with announced odds of 3.87 is $$ R_{b}^D=\bigg\{ \begin{array}{cc} -(3.87-1) & \mbox{with probability }0.25,\\ 1 & \mbox{with probability }0.75, \end{array} $$ and the expected revenue of the betting house is $$ E(R_b^D)= -2.87\times 0.25 + 0.75=0.0325. $$ The expected revenue of the betting house is thus positive for all three events. Mathematically, we can show that when the announced odds are converted to probabilities, these probabilities add up to more than 1, which keeps the house in profit. Let us write $\Big(\frac{1}{P_H}-\epsilon\Big)=\frac{1}{P_{H}^*},$ $\Big(\frac{1}{P_A}-\epsilon\Big)=\frac{1}{P_{A}^*}$ and $\Big(\frac{1}{P_D}-\epsilon\Big)=\frac{1}{P_{D}^*}$. Therefore, \begin{eqnarray*} &&\frac{1-\epsilon P_H}{P_H}=\frac{1}{P_H^*}\\ &\Rightarrow& \frac{P_H}{1-\epsilon P_H}={P_H^*}\\ &\Rightarrow& {P_H}<{P_H^*}. \end{eqnarray*} In the same way we can show ${P_A}<{P_A^*}$ and ${P_D}<{P_D^*}$. Adding the three inequalities, we conclude that ${P_H}+{P_A}+{P_D}<{P_H^*}+{P_A^*}+{P_D^*}$. Of course, the probabilities of a fair game sum to 1, i.e. ${P_H}+{P_A}+{P_D}=1$; hence, evidently, ${P_H^*}+{P_A^*}+{P_D^*}>1$.
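The overround is immediate from the announced odds themselves. A quick Python check with the odds quoted above (1.57, 6.57 and 3.87):

```python
# Implied probabilities from announced decimal odds; their sum exceeds 1.
announced = {"home": 1.57, "away": 6.57, "draw": 3.87}
implied = {k: 1.0 / o for k, o in announced.items()}  # P* = 1 / (decimal odds)
overround = sum(implied.values())  # > 1: the house's built-in margin
```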
\section{Data Analysis}\label{sec_data_analysis} Let us illustrate the foregoing analysis with EPL data, which is available at \textbf{http://www.football-data.co.uk/englandm.php}. In Table \ref{table_B365} we present the decimal odds of 10 different EPL matches from the popular betting house Bet365. The first three columns, `B365H', `B365D' and `B365A', represent the decimal odds (i.e. $1/p$) of the three events (i) home team wins, (ii) draw and (iii) away team wins, from the betting house Bet365. It is visible that the total implied probability of the three mutually exclusive outcomes of a given match is greater than one, which violates the rule of thumb that probabilities add up to one. From the results of the previous section, it is inevitable that this sum exceeds one. However, when comparing two betting houses, the house whose sum of implied probabilities is closer to one is offering better odds to the players. Let us consider a match held on 8th August 2015 between Bournemouth and Aston Villa, where the former is the home team and the latter the away team. Table \ref{table_compare_odds} lists the odds offered for this match by six betting houses, namely Bet365 (B365), Bet\&Win (BW), Interwetten (IW), Ladbrokes (LB), William Hill (WH) and VC Bet (VC). The house whose implied probabilities sum closest to 1 is offering the fairest odds. As per Table \ref{table_compare_odds}, B365 is offering the fairest odds of the six, while Interwetten is offering the worst odds for this match. \subsection{Imputed Cost of a Betting House} The imputed cost of a betting house can be calculated in different ways. Here we use an additive model, in which the revenue of the betting house is described as $$ R_{b}=\bigg\{ \begin{array}{cc} -\{(\frac{1}{p}-1)-\epsilon\} & \mbox{with probability }p,\\ 1 & \mbox{with probability }(1-p). 
\end{array} $$ As discussed earlier, to make a profit, a betting house always offers an amount less than what it is supposed to pay; $\epsilon$ is that profit margin. Suppose the betting house's announced odds correspond to the implied probabilities ${P_{H}^*}$, ${P_{A}^*}$ and ${P_{D}^*}$ for the home team winning, the away team winning and a draw, respectively. These are related to the fair probabilities as follows: $$\Big(\frac{1}{P_H}-\epsilon\Big)=\frac{1}{P_{H}^*},~~ \Big(\frac{1}{P_A}-\epsilon\Big)=\frac{1}{P_{A}^*},~~ \Big(\frac{1}{P_D}-\epsilon\Big)=\frac{1}{P_{D}^*}.$$ For a fair game the probabilities of the three mutually exclusive events add up to 1, i.e. $P_{H}+P_{A}+P_{D}=1$, whereas ${P_{H}^*}+{P_{A}^*}+{P_{D}^*}>1$. Let $\delta$ be the difference between these two sums. Then \begin{eqnarray*} &&{P_H^*}+{P_A^*}+{P_D^*}-({P_H}+{P_A}+{P_D})=\delta\\ &\Rightarrow &{P_H^*}+{P_A^*}+{P_D^*}-\Big(\frac {P_H^*}{1+\epsilon P_H^*}+\frac {P_A^*}{1+\epsilon P_A^*}+ \frac {P_D^*}{1+\epsilon P_D^*}\Big)=\delta\\ &\Rightarrow &{P_H^*}-\frac {P_H^*}{1+\epsilon P_H^*}+{P_A^*}-\frac {P_A^*}{1+\epsilon P_A^*}+{P_D^*}-\frac {P_D^*}{1+\epsilon P_D^*}=\delta\\ &\Rightarrow &{P_H^*}\Big(1-\frac{1}{1+\epsilon{P_H^*}}\Big)+{P_A^*}\Big(1-\frac{1}{1+\epsilon{P_A^*}}\Big)+{P_D^*}\Big(1-\frac{1}{1+\epsilon {P_D^*}}\Big)=\delta\\ &\Rightarrow &\frac {\epsilon (P_H^*)^2}{1+\epsilon P_H^*} + \frac {\epsilon (P_A^*)^2}{1+\epsilon P_A^*} + \frac {\epsilon (P_D^*)^2}{1+\epsilon P_D^*}= \delta. \end{eqnarray*} Clearly, we can estimate $\epsilon$ from this equation, since ${P_H^*},{P_A^*},{P_D^*}$ and $\delta$ are all known. Let us solve the equation for $\epsilon$ using the given EPL data. Consider the first match for the B365 betting house, where $P_H^*=0.50$, $P_A^*=0.25$, $P_D^*=0.28$ and $\delta=1.03-1=0.03$.
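The last equation can be solved for $\epsilon$ with any one-dimensional root finder. The paper's analysis uses \texttt{R}; the sketch below is an illustrative Python port using simple bisection (the exact value obtained depends on how the implied probabilities are rounded):

```python
# Solve sum_i eps * (P_i*)^2 / (1 + eps * P_i*) = delta for eps by bisection.
def imputed_cost(p_stars, delta, lo=0.0, hi=1.0, tol=1e-12):
    def f(eps):
        return sum(eps * p * p / (1.0 + eps * p) for p in p_stars) - delta

    # f is increasing in eps, f(0) = -delta < 0, so a sign change brackets the root.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# First match, B365: implied probabilities 0.50, 0.25, 0.28, delta = 0.03.
eps = imputed_cost([0.50, 0.25, 0.28], delta=0.03)
```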
Substituting these numerical values, the equation becomes \begin{eqnarray*} &\epsilon\Big(\frac {0.50^2}{1+\epsilon (0.50)} + \frac {0.25^2}{1+\epsilon (0.25)} + \frac {0.28^2}{1+\epsilon (0.28)}\Big)=0.03. \end{eqnarray*} The equation is cubic in $\epsilon$. Instead of solving it analytically, which is cumbersome, we solve it numerically in \texttt{R}. The resulting value of $\epsilon$ is 0.065; in terms of the revenue model, the betting house's profit margin is \$0.065 per dollar. \subsection{Comparative Study of Imputed Cost among Different Betting Houses} Let us consider the results presented in Table \ref{table_imputed_cost} and Figure \ref{fig_imputed_cost}. \begin{enumerate} \item All betting houses cap their imputed cost at \$0.10, which suggests some regulatory policy. \item For $95\%$ of the matches, B365 charges between \$0.04 and \$0.06. \item BW charges \$0.10 for the majority of matches, and IW does so for approximately all matches. \item LB shows similar behavior. \item VC, however, never charges more than \$0.09; moreover, for the majority of matches it charges between \$0.04 and \$0.06. \item B365 and VC behave similarly. \end{enumerate} \section{Conclusion} In this paper, we presented how the strategy of betting houses can be explained with probability odds. From the real data, it is visible that for all betting houses considered here, the sum of the implied probabilities for the win, loss and draw of a match is greater than one. We also presented how the cost of a game can be imputed numerically. With this understanding, one realizes how betting houses consistently make money and can decide accordingly whether to invest one's money with a betting house.
\section{\label{sec:level1} Introduction} Progress in trapped-ion quantum information processing \cite{leibfried_experimental_2003, haffner_quantum_2008, blatt_entangled_2008, wineland_nobel_2013}, quantum simulation \cite{schaetz_focus_2013, blatt_quantum_2012}, and precision spectroscopy experiments \cite{schmidt_spectroscopy_2005, rosenband_frequency_2008, hempel_entanglement-enhanced_2013, huntemann_improved_2014,wan_precision_2014,gebert_precision_2015} is largely based on advances in the ability to control and manipulate the quantum states of the system. Trapped and laser-cooled ions represent a particularly well-controlled system for which different techniques have been established to control the internal (electronic) and external (motional) state. Commonly, sequences of laser or microwave pulses are applied to prepare a desired state or implement operations for state manipulation. For this, mostly square pulses with a fixed length and frequency are employed that rotate the atomic qubit and -- depending on the experimental implementation -- also change the motional state. The effect of undesired frequency components in square-shaped pulses has previously been reduced by employing amplitude-shaped pulses with a smooth rising and falling slope \cite{riebe_process_2006, benhelm_towards_2008, kirchmair_deterministic_2009}. Furthermore, composite pulses, first developed in the context of nuclear magnetic resonance \cite{levitt_nmr_1979, levitt_composite_1986, vandersypen_nmr_2005}, are used in trapped ion systems to implement complex algorithms \cite{gulde_implementation_2003, schmidt-kaler_realization_2003} or operations that are less sensitive to variations of the experimental parameters \cite{timoney_error-resistant_2008, ivanov_high-fidelity_2011, shappert_spatially_2013, mount_error_2015}. 
Adiabatic state manipulation represents another class of techniques with reduced sensitivity to fluctuations in the coupling strength \cite{bergmann_coherent_1998, bergmann_perspective:_2015}. Pulses with slowly varying intensity and/or frequency are used to manipulate the state of the system. For trapped ions two adiabatic techniques have been investigated, namely Rapid Adiabatic Passage (RAP) and stimulated Raman adiabatic passage (STIRAP). In RAP a frequency and amplitude modulated pulse is used to tailor the dynamics of the atomic state dressed by the light field for adiabatic transfer of population between two bare atomic states. In the experiment, the time dependence of the intensity usually has a Gaussian shape, whereas the frequency is varied linearly in time across an atomic resonance. RAP has been used in optical qubits on carrier transitions for robust internal state preparation \cite{wunderlich_robust_2007, yamazaki_robust_2008, noel_adiabatic_2012, poschinger_interaction_2012}, and on sideband transitions, to prepare Fock \cite{watanabe_sideband_2011} and Dicke \cite{linington_robust_2008, toyoda_generation_2011} states. The STIRAP technique is typically realized in $\Lambda$-systems and relies on an adiabatic evolution from an initial to a final state without populating a short-lived intermediate state. It is usually implemented using Gaussian-shaped intensity profiles of two laser pulses with a fixed frequency difference that are delayed with respect to each other in time. It has been demonstrated for population transfer \cite{sorensen_efficient_2006} and the generation of Dicke states \cite{noguchi_generation_2012}, and suggested for efficient qubit detection of single ions \cite{moller_efficient_2007} and Doppler-free efficient state transfer in multi-ion crystals \cite{kamsap_coherent_2013}. Here we demonstrate STIRAP between hyperfine qubit states in \Mg{25} involving a change in the motional state. 
The coupling strength of such sideband transitions is strongly dependent on the initial motional state of the ion \cite{wineland_experimental_1998}. We use the insensitivity of STIRAP to the coupling strength to perform a complete population transfer of motionally excited states and thereby determine the motional ground state population. For thermal states the ground state population is a direct measure of the temperature \cite{wan_efficient_2015}. Using this approach, good agreement with the expected Doppler cooling temperature is found. We implement STIRAP using a large detuning, in contrast to the near-resonant STIRAP transfer typically discussed in the literature \cite{bergmann_coherent_1998, fewell_coherent_1997}. In this situation the counter-intuitive and intuitive pulse sequences give comparable population transfer efficiencies, allowing the pulse order to be chosen so as to minimize off-resonant scattering. The intuitive pulse sequence was previously studied in doped crystals and termed b-STIRAP \cite{klein_robust_2007}. The comparably large detuning used in our experiment also relaxes the condition of adiabaticity during the transfer process and consequently allows the transfer to be comparatively fast. Furthermore, spontaneous emission from light fields coupling to states not involved in the STIRAP process is suppressed, allowing STIRAP to be implemented in multi-level systems such as the \Mg{25} ion used in our work. The paper is organized as follows. In \ref{sec:level2} we provide an introduction to the theoretical treatment of STIRAP. The experimental setup for the realization of STIRAP with a single trapped \Mg{25} ion is briefly described in \ref{sec:level3}. The implementation of numerical simulations supporting the experimental findings is described in \ref{sec:level4}. 
In \ref{sec:level5} we present the experimental results of our investigation on the STIRAP efficiency and its dependence on pulse order, pulse length and pulse separation together with the numerical simulations. An optimized pulse sequence is used to demonstrate the advantage of STIRAP over a stimulated Raman Rabi population transfer on carrier and sideband transitions for a thermal state. \ref{sec:level6} summarizes the work and points at possible improvements and applications of the technique. \section{\label{sec:level2} Principles} \begin{figure} \centering \includegraphics[width=0.80\textwidth]{Mg-3lvl_pulses.pdf} \caption{a) Simplified level structure of \Mg{25} together with the involved laser couplings with associated Rabi frequencies $\Omega_p$ and $\Omega_s$. The dark levels are relevant for the STIRAP process, whereas the grey, dotted levels lead to additional off-resonant couplings. b) Time dependence of the Rabi frequencies normalized to their maximum value. The pulse length is defined as the full width at half maximum and the delay of the two pulses is defined as the separation of the Rabi frequency maxima.} \label{fig:3lvl_pulses} \end{figure} In the following, we briefly review the basics of STIRAP in a 3-level $\Lambda$-scheme as shown in \ref{fig:3lvl_pulses}. The Hamiltonian of the 3-level system coupled by two light fields in the interaction picture using the rotating wave approximation is given by \cite{bergmann_coherent_1998}: \begin{equation} \mathcal{H}=\frac{\hbar}{2} \left(\begin{array}{@{}ccc@{}} -2\Delta_p & \Omega_p & 0\\ \Omega_p & 0 & \Omega_s\\ 0 & \Omega_s & -2\Delta_s \end{array}\right), \label{eq:Hamiltonian} \end{equation} where the $\Omega_i$'s are the Rabi frequencies and the $\Delta_i$'s are the detunings of the so-called pump and Stokes laser beams with respect to the one-photon resonances. 
In the case of two-photon resonance, $\Delta_p=\Delta_s=\Delta$, the eigenfrequencies of the system (up to a global offset of $-\Delta$, common to all three eigenvalues of (\ref{eq:Hamiltonian}), which we drop) are given by \begin{eqnarray}\label{eq:eigenvalue} \omega_0&=&0,\nonumber\\ \omega_{+}&=&\frac{1}{2}\left(\Delta+\sqrt{\Delta^2+\Omega_p^2+\Omega_s^2}\right),\\ \omega_{-}&=&\frac{1}{2}\left(\Delta-\sqrt{\Delta^2+\Omega_p^2+\Omega_s^2}\right).\nonumber \end{eqnarray} For large detuning $\Delta\gg\Omega_i$, the corresponding eigenvectors (dressed states) become: \begin{eqnarray}\label{eq:eigenstates} \ket{a^0}&=&\cos\Theta\ket{1}-\sin\Theta\ket{3}\nonumber\\ \ket{a^+}&=&\ket{2}\\ \ket{a^-}&=&\sin\Theta\ket{1}+\cos\Theta\ket{3},\nonumber \end{eqnarray} where the so-called mixing angle $\Theta$ has been introduced. It is related to the Rabi frequencies of the coupling lasers by: \begin{equation} \tan\Theta=\frac{\Omega_p}{\Omega_s}. \label{eq:mixing_angle} \end{equation} The basic principle of the adiabatic transfer can be understood from the eigenstate equations (\ref{eq:eigenstates}). At the beginning of the sequence, only the Stokes laser field interacts with the atom and the adiabatic state $\ket{a^0}$ is aligned parallel to the initially populated electronic ground state $\ket{1}$. Due to the presence of the Stokes laser field, the initially degenerate energies of the system, $\omega_{-}$ and $\omega_{0}$, are split by the ac Stark shift. As long as this energy splitting is large compared to the coupling between the eigenstates of the system, no transition to other states will occur and the system stays in its instantaneous eigenstate, as stated by the adiabatic theorem \cite{born_beweis_1928}. By ramping the relative coupling strengths between the three states such that only the pump laser induces a significant coupling at the end of the sequence (see \ref{fig:3lvl_pulses}b)), we can change the mixing angle $\Theta$ from $0$ to $\pi/2$. 
In doing so, we rotate the dressed-state basis with respect to the bare-state basis by $\unit{90}{^\circ}$, i.e. we rotate $\ket{a^0}$ around $\ket{2}$ from $\ket{1}$ to $-\ket{3}$. If adiabaticity is maintained during the process, the population will stay in the eigenstate $\ket{a^0}$ and follow the rotation, transferring it from the bare state $\ket{1}$ to state $\ket{3}$ without populating state $\ket{2}$. From equations (\ref{eq:eigenstates}) we can see that the large detuning results in a symmetry of the eigenstates such that the dressed state $\ket{a^-}$ can be used as the initial state for population transfer using the so-called intuitive pulse order, sometimes called b-STIRAP in the literature. As mentioned above, the adiabatic criterion has to be fulfilled in the STIRAP sequence, i.e. the energy splitting must be large compared to the couplings between the states \cite{gaubatz_population_1990, moller_efficient_2007}: \begin{equation} \bra{a^0}\frac{d}{dt}\ket{a^{\pm}}\ll\left|\omega^{\pm}-\omega^{0}\right|. \label{eq:adiabatic_criterion} \end{equation} The left-hand side can be evaluated for the two states $\ket{a^{\pm}}$: \begin{eqnarray} \bra{a^0}\frac{d}{dt}\ket{a^{+}}&=0,\\ \bra{a^0}\frac{d}{dt}\ket{a^{-}}&=-\dot{\Theta}=\frac{\dot{\Omega}_p\Omega_s-\Omega_p\dot{\Omega}_s}{\Omega_{p}^{2}+\Omega_{s}^{2}}. \end{eqnarray} Here we see that transitions to state $\ket{2}=\ket{a^+}$ are not allowed due to the large detuning. 
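It is worth noting that at two-photon resonance the dark state $\ket{a^0}$ is an exact eigenstate of the Hamiltonian (\ref{eq:Hamiltonian}) for any detuning, with eigenfrequency $-\Delta$ (i.e. $\omega_0$ after removing the global offset). A quick pure-Python check, with illustrative numbers and $\hbar=1$:

```python
import math

# Hamiltonian of Eq. (1) at two-photon resonance Delta_p = Delta_s = Delta
# (the factor hbar/2 is applied in matvec below, with hbar = 1).
def hamiltonian(omega_p, omega_s, delta):
    return [[-2 * delta, omega_p,  0.0],
            [omega_p,    0.0,      omega_s],
            [0.0,        omega_s, -2 * delta]]

def matvec(h, v):
    return [sum(0.5 * h[i][j] * v[j] for j in range(3)) for i in range(3)]

omega_p, omega_s, delta = 3.0, 4.0, 50.0           # illustrative values
theta = math.atan2(omega_p, omega_s)               # tan(theta) = Omega_p / Omega_s
dark = [math.cos(theta), 0.0, -math.sin(theta)]    # |a0> = cos(t)|1> - sin(t)|3>

hv = matvec(hamiltonian(omega_p, omega_s, delta), dark)
# H|a0> = -Delta |a0>: the dark state never acquires a |2> component.
```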
We now insert the second equation and the eigenvalues (\ref{eq:eigenvalue}) of the Hamiltonian into (\ref{eq:adiabatic_criterion}) and obtain a time-dependent adiabatic criterion: \begin{equation} \left|\frac{\dot{\Omega}_p\Omega_s-\Omega_p\dot{\Omega}_s}{\Omega_{p}^{2}+\Omega_{s}^{2}}\right|\ll\frac{1}{2}\left(\sqrt{\Delta^2+\Omega_p^2+\Omega_s^2}-\Delta\right). \label{eq:adiabatic_criterion_time_dep} \end{equation} Both sides of this inequality are plotted in \ref{fig:ac_timedep} for a pulse length of \unit{100}{$\mu$s} and three different delay times of \unit{30}{$\mu$s}, \unit{80}{$\mu$s} and \unit{130}{$\mu$s}. \begin{figure}[tbp] \centering \includegraphics[width=0.32\textwidth]{acandcoup_timedep_sf3-eps-converted-to.pdf} \label{fig:ac_timedep_sf03} \includegraphics[width=0.32\textwidth]{acandcoup_timedep_sf8-eps-converted-to.pdf} \label{fig:ac_timedep_sf08} \includegraphics[width=0.32\textwidth]{acandcoup_timedep_sf13-eps-converted-to.pdf} \label{fig:ac_timedep_sf13} \caption[Time dependence of the adiabatic criterion]{\textbf{Time dependence of the adiabatic criterion} The couplings (left side of (\ref{eq:adiabatic_criterion_time_dep}), blue) and the energy splitting (right side of (\ref{eq:adiabatic_criterion_time_dep}), red) are shown for a fixed pulse length of \unit{100}{$\mu$s} and (a) a delay time of \unit{30}{$\mu$s}, (b) a delay time of \unit{80}{$\mu$s} and (c) a delay time of \unit{130}{$\mu$s}. The dotted lines represent the pulses.} \label{fig:ac_timedep} \end{figure} For short delay times (\ref{fig:ac_timedep}a)), the adiabatic criterion is not fulfilled at the beginning and the end of the pulse sequence. During these parts of the sequence, transitions between the adiabatic states $\ket{a^0}$ and $\ket{a^-}$ may occur, leading to non-adiabatic transfer that depends on the relative populations of the two involved adiabatic states. For long delay times (\ref{fig:ac_timedep}c)), the adiabatic criterion is not fulfilled in between the two pulses. 
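As a complement to the analytic criterion, the transfer dynamics can be integrated directly. The pure-Python sketch below propagates the Schrödinger equation for the Hamiltonian (\ref{eq:Hamiltonian}) with Gaussian pulses in the counter-intuitive order (Stokes before pump); all parameters are dimensionless, illustrative choices rather than the experimental values, and spontaneous emission is neglected:

```python
import math

# STIRAP with Eq. (1): i d(psi)/dt = H(t) psi, integrated with fixed-step RK4.
DELTA = 10.0            # one-photon detuning (illustrative, dimensionless units)
OMEGA_MAX = 10.0        # peak Rabi frequency of both pulses
SIGMA = 42.5            # Gaussian width t_width (FWHM = 2*sqrt(2 ln 2)*SIGMA ~ 100)
T_S, T_P = 0.0, 70.0    # Stokes peaks before pump: counter-intuitive order

def rabi(t, t0):
    return OMEGA_MAX * math.exp(-((t - t0) ** 2) / (2.0 * SIGMA ** 2))

def deriv(t, psi):
    op, os_ = rabi(t, T_P), rabi(t, T_S)
    h = [[-2.0 * DELTA, op, 0.0], [op, 0.0, os_], [0.0, os_, -2.0 * DELTA]]
    # d(psi)/dt = -i (H/hbar) psi with the factor 1/2 of Eq. (1), hbar = 1
    return [-0.5j * sum(h[i][j] * psi[j] for j in range(3)) for i in range(3)]

def rk4(psi, t, dt):
    k1 = deriv(t, psi)
    k2 = deriv(t + dt / 2, [psi[i] + dt / 2 * k1[i] for i in range(3)])
    k3 = deriv(t + dt / 2, [psi[i] + dt / 2 * k2[i] for i in range(3)])
    k4 = deriv(t + dt, [psi[i] + dt * k3[i] for i in range(3)])
    return [psi[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(3)]

psi = [1.0 + 0.0j, 0.0j, 0.0j]    # start in bare state |1>
t, dt = -200.0, 0.01
while t < 270.0:
    psi = rk4(psi, t, dt)
    t += dt
populations = [abs(c) ** 2 for c in psi]
```

With these strongly adiabatic parameters, essentially all population ends in $\ket{3}$ while the intermediate state $\ket{2}$ remains unpopulated throughout.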
\section{\label{sec:level3} Experimental Setup} Details of the experimental setup have been described before \cite{hemmerling_single_2011,hemmerling_novel_2012}. We use the $\ket{F=3, m_F=3}=\ket{\!\!\downarrow}=\ket{1}$, $\ket{F=2, m_F=2}=\ket{\!\!\uparrow}=\ket{3}$ states of the ${}^2$S$_{1/2}$-ground state of a \Mg{25} ion as our qubit. The states are separated in energy by the hyperfine splitting of \unit{1.789}{GHz}. The frequency-quadrupled output of a fiber laser is used to create the laser beams at a wavelength of 280~nm for Doppler cooling, Raman sideband cooling and coherent manipulation. The first-order sideband of an electro-optic modulator (EOM) is resonant with the ${}^2$S$_{1/2}$ $\rightarrow$ ${}^2$P$_{1/2}$ transition for Doppler cooling and state discrimination. The \unit{9.2}{GHz} red-detuned optical carrier is used with an additional acousto-optic modulator (AOM) setup to create the Raman laser beams that couple the hyperfine qubit states (see \ref{fig:3lvl_pulses}). Additionally, a radio frequency can be applied to couple the qubit states without being influenced by or changing the motional state. A sequence of consecutive Raman red sideband and repump pulses is used for ground-state cooling of the axial vibrational mode of the ion \cite{wan_efficient_2015}.\\ We implemented the STIRAP sequence in our setup using a pulse sequencer based on a field programmable gate array (FPGA) \cite{pham_general-purpose_2005, schindler_frequency_2008} that controls direct digital synthesizer (DDS) boards. We used the built-in power sweep function of the sequencer to shape the amplitude of the two laser beams needed for the STIRAP sequence. It is implemented by applying a voltage resembling the shape of the desired pulse to a voltage-controlled gain amplifier, which modulates the rf signal generated by the DDS chip. 
This radio frequency signal is subsequently amplified and fed into an AOM that imprints the time dependence of the radio frequency amplitude onto the laser intensity. The peak resonant Rabi frequency for each beam is on the order of a few tens of megahertz, resulting in carrier Raman Rabi frequencies of around 100~kHz. The experimental data is typically averaged over 250 repetitions of an identical experiment with the same initial conditions. \section{\label{sec:level4} STIRAP simulation} Numerical simulations were carried out based on the density matrix formalism to determine the parameter regime for efficient population transfer using the STIRAP process. We integrated the master equation numerically and derived the time dependence of the atomic state populations. In general, the master equation can be expressed as:\\ \begin{equation} \frac{d\rho}{dt}=\mathcal{L}\rho. \end{equation} Here, $\mathcal{L}$ is the Liouvillian operator. Our qubit states $\ket{1}$ and $\ket{3}$ are magnetic sub-states of the hyperfine-split ground state of the \Mg{25} ion, and spontaneous emission from these long-lived states is neglected in the simulation. The detuning of the lasers with respect to the excited state $\ket{2}$ is \unit{9.2}{GHz}, and the off-resonant scattering rates from coupling to all possible excited states are on the order of \unit{1.1}{ms$^{-1}$} and \unit{4.4}{ms$^{-1}$} for the ground states $\ket{1}$ and $\ket{3}$, respectively \cite{gebert_damage-free_2014}. This off-resonant scattering limits the coherence between the lasers and the atom and reduces the detected signal. However, this effect is small, and since including off-resonant scattering would increase the complexity of the simulation excessively, it is neglected in the simulations. 
In this case the time evolution of the system can be described by a Hamiltonian $\mathcal{H}$ and the Liouvillian acts on the density matrix as follows: \begin{equation} \mathcal{L}\rho=-i\left[\mathcal{H},\rho\right]=-i\left(\mathcal{H}\rho-\rho\mathcal{H}\right) \label{eqn:Liouvillian} \end{equation} The quantized motion of the ion in the trap is included in the simulations by writing the state of the ion as the tensor product of the electronic state \ket{e} and the harmonic trap states \ket{n}: $\ket{\psi}=\ket{e}\otimes\ket{n}$. Up to 16 motional levels have been considered. This way, we are able to simulate carrier as well as sideband transitions. The time dependence of the STIRAP process is incorporated via time-dependent Rabi frequencies in the Hamiltonian, for which a Gaussian pulse shape is assumed. The parameters of the pulses are defined as: \begin{eqnarray} \Omega_{i}(t)&=&\Omega_{i,\mathrm{max}}\cdot\exp\left(-\frac{\left(t-t_i\right)^2}{2t_\mathrm{width}^2}\right) \end{eqnarray} where we define $t_\mathrm{pulse}=2\sqrt{2 \ln(2)}\cdot t_\mathrm{width}$ as the pulse length, $t_{i}$ as the centers, and $\Omega_{i,\mathrm{max}}$ as the maximum Rabi frequencies of the two pulses $i\in\{p,s\}$. Additionally, we denote the delay between the pulses as $t_\mathrm{delay}=t_s-t_p$ (see \ref{fig:3lvl_pulses}). All simulations presented in the following were performed using the quantum optics toolbox \cite{tan_computational_1999} in the MATLAB programming language \cite{matlab_version_2013}. Since the Hamiltonian of the system is time dependent, the solver "\textbf{solvemc}" was used. It performs a direct integration of the master equation to calculate the density matrix $\rho$ for consecutive times. In order to compare the simulated results with the experimental ones, we fitted a Gaussian function to the measured pulses. Due to technical imperfections, the measured pulse length of the pump field is around 12~\% shorter than that of the Stokes field. 
We therefore derive an effective pulse length (the mean of the Gaussian full widths at half maximum, FWHM) from the fit and measure the effective delay time of the two pulses. These values, which ranged from a few to two hundred microseconds, were used as the laser pulse parameters in the simulations. \section{\label{sec:level5} Results} \subsection{\label{sec:level5a} Pulse length and pulse delay dependence} Density matrix simulations were performed to investigate and optimize the population transfer efficiency. First, the influence of the delay between the two laser pulses for a fixed pulse length of \unit{120}{$\mu$s} was studied. The ion is initialized in state \ket{1} and the motional ground state. \begin{figure}[tbp] \centering \includegraphics[width=0.46\textwidth]{comp_sim_meas_delay-eps-converted-to.pdf} \label{fig:comp_sim_meas_delay} \includegraphics[width=0.46\textwidth]{time_evolution_delay26-eps-converted-to.pdf} \label{fig:sim_trans_delay26} \includegraphics[width=0.46\textwidth]{time_evolution_delay80-eps-converted-to.pdf} \label{fig:sim_trans_delay80} \includegraphics[width=0.46\textwidth]{time_evolution_delay136-eps-converted-to.pdf} \label{fig:sim_trans_delay136} \caption[Simulated and measured delay time dependence of the STIRAP transfer.]{\textbf{Simulated and measured delay time dependence of the STIRAP transfer.} a) Comparison of the simulated and measured delay time dependence of the STIRAP transfer on a carrier transition for an ion in the motional ground state. Positive delay times correspond to the counter-intuitive pulse sequence. For the simulated data (red) a moving average was used to account for experimental intensity fluctuations. The simulated time dependence of the STIRAP transfer for different delay times is shown in b) for \unit{26}{$\mu$s}, c) for \unit{80}{$\mu$s}, and d) for \unit{136}{$\mu$s}. 
The dotted lines represent the pulses.} \label{fig:delay_time_dep} \end{figure} In \ref{fig:delay_time_dep}a) we compare the measured and simulated transfer efficiency of the STIRAP sequence for carrier transitions. To allow a direct comparison between simulation and experiment, a moving average is applied to the simulated data to account for small fluctuations of experimental parameters (see below). In the figure we see that both pulse orders (positive delay times correspond to the counter-intuitive pulse order) are able to transfer population from the initial to the final state. This behavior is a result of the large detuning of the light fields from resonance. The reduced transfer efficiency seen in the experiment compared to the simulation result is explained by off-resonant excitation to different magnetic sub-states, indicated in \ref{fig:3lvl_pulses}a), which are not considered in the simulations. The relative populations of the two qubit states during the STIRAP sequence, together with the relative strengths of the involved couplings of these states to the auxiliary magnetic sub-states \cite{gebert_damage-free_2014}, explain the asymmetry with respect to the pulse order seen in the experiment. We therefore use the counter-intuitive pulse sequence from now on. Using the numerical simulations we investigate the different transfer regimes. As can be seen in \ref{fig:delay_time_dep}b), for a delay time of $\unit{26}{\mu s}$, the transfer process consists of two contributions, one oscillating and one adiabatic part. For the chosen pulse length of $\unit{120}{\mu s}$, the adiabatic criterion is not fulfilled at the beginning and end of the sequence [see \ref{fig:ac_timedep}a)] and the time evolution can be understood as a combination of dynamically varying Rabi oscillations, which is the transfer mechanism for the case of overlapping pulses (zero delay time), and an adiabatic part of the population transfer. 
In this situation the final transfer efficiency strongly depends on the timing and exact Rabi frequency of the pulses. Fluctuations of experimental parameters result in an averaged transfer efficiency. Full adiabaticity is achieved for a delay time of, for example, $\unit{80}{\mu s}$, shown in \ref{fig:delay_time_dep}c) [see also \ref{fig:ac_timedep}b)]. For this case efficient adiabatic transfer is possible, whereas for an even longer delay time of $\unit{136}{\mu s}$ the overlap of the pulses is too small, leading to transitions between the different adiabatic states [\ref{fig:ac_timedep}c) and \ref{fig:delay_time_dep}d)], so that the transfer is incomplete. The optimal parameter regime for STIRAP was investigated by comparing the experimental and simulated population transfer with respect to the pulse length for different delays between the pulses. For this we introduce a scaling factor $s$ for the delay time, which is related to the pulse length by $t_\mathrm{delay}=s \cdot t_\mathrm{pulse}$. If the pulse length is chosen long enough, we are in the Rabi oscillation regime for $s<0.6$ and in the adiabatic (STIRAP) regime for $s>0.6$. In \ref{fig:2D_plots_car} the simulated transfer efficiency for carrier transitions with the ion initialized in the motional ground state is compared with the experimental result. \begin{figure}[tbp] \centering \includegraphics[width=0.46\textwidth]{pop_vs_delay_pulselength_car_2d-eps-converted-to.pdf} \label{fig:pop_vs_delay_and_pulselength} \includegraphics[width=0.46\textwidth]{car_2d_20141009-eps-converted-to.pdf} \label{fig:pop_vs_delay_and_pulselength_meas} \caption[Transfer efficiency for different delay scaling factors and pulse lengths for carrier transitions.]{\textbf{Transfer efficiency for different delay scaling factors and pulse lengths for carrier transitions.} a) Simulation and b) experimental results of the STIRAP transfer, where red corresponds to no and blue to complete transfer. 
For the simulations and the measurements the ion was initialized in the ground state of motion in \ket{3} and the pulse length was scanned for different delay scaling factors $s$.} \label{fig:2D_plots_car} \end{figure} Experiment and simulation show the same behavior as in \ref{fig:delay_time_dep}, i.e., the transfer depends on the exact pulse length for small delay scaling factors. The transfer is slightly faster in the experiment than in the simulations, which may be due to slight deviations of the parameters used as well as an imperfect pulse shape of the STIRAP pulses. Additionally, deviations from the ideal pulse shape may lead to the enhanced oscillations in the experiment, since in the experiment the pulse envelope is less smooth. Despite these differences, the experimental data is in qualitative agreement with the simulations and we can identify large regions of efficient transfer for the constant coupling strength realized in this particular setting. The results also allow us to extrapolate which STIRAP parameters should be used for the case of a fluctuating coupling strength, since the required pulse length scales with the inverse coupling strength. For a ``band'' of coupling strengths, the optimum STIRAP transfer parameters are dictated by the corresponding graph for the smallest coupling strength. Efficient transfer is then inherently accomplished for the larger coupling strengths. This scaling behavior is illustrated in \ref{fig:2D_plots_bsb}, where the transfer efficiency is shown for a blue sideband transition with the ion initialized in the motional ground state. The sideband transition is weaker by the Lamb-Dicke factor $\eta\approx 0.3$.
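The inverse scaling can be made explicit with a back-of-the-envelope estimate. A global adiabaticity condition of the form $\Omega_\mathrm{eff}\,t_\mathrm{pulse}\gtrsim 10$ is often quoted for STIRAP; the constant is a rule of thumb (our assumption here), depending on pulse shape and target fidelity:

```python
import math

# Global adiabaticity rule of thumb for STIRAP: Omega_eff * t_pulse >~ 10.
# The constant 10 is an assumption; it depends on pulse shape and the
# required fidelity.
ADIABATICITY_CONSTANT = 10.0

def min_pulse_length(omega_eff):
    """Shortest pulse length compatible with the rule of thumb."""
    return ADIABATICITY_CONSTANT/omega_eff

omega_carrier = 2*math.pi*100e3      # carrier two-photon Rabi frequency
eta = 0.3                            # Lamb-Dicke factor from the text

t_carrier = min_pulse_length(omega_carrier)
t_sideband = min_pulse_length(eta*omega_carrier)   # coupling weaker by eta
ratio = t_sideband/t_carrier                       # = 1/eta, about 3.3
```

For a band of coupling strengths, evaluating `min_pulse_length` at the smallest coupling in the band therefore fixes a pulse length that works for the whole band.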
\begin{figure}[tbp] \centering \includegraphics[width=0.46\textwidth]{pop_vs_delay_pulselength_bsb_2d-eps-converted-to.pdf} \label{fig:pop_vs_delay_pulselength_rsb} \includegraphics[width=0.46\textwidth]{bsb_2d_20141009-eps-converted-to.pdf} \label{fig:pop_vs_delay_pulselength_rsb_meas} \caption[Transfer efficiency for different delay scaling factors and pulse lengths for blue sideband transitions.]{\textbf{Transfer efficiency for different delay scaling factors and pulse lengths for blue sideband transitions.} a) Simulation and b) experimental results of the STIRAP transfer, where red corresponds to no and blue to complete transfer. For the simulations and the measurements the ion was initialized in the ground state of motion and the pulse length was scanned for different delay scaling factors $s$.} \label{fig:2D_plots_bsb} \end{figure} Therefore, a longer pulse length is required to fulfill the adiabatic criterion and to achieve efficient transfer. As can be seen in \ref{fig:2D_plots_bsb}, for this transition we can also identify a parameter range where we achieve efficient transfer for the given laser power and detuning. For the carrier transition we achieve the best transfer for a delay scaling factor of around $s=0.7$ and a pulse length $>\unit{50}{\mu s}$, whereas for sideband transitions smaller scaling factors and longer pulse lengths are necessary. For both transitions off-resonant excitation to the excited $^2P_{3/2}$-state limits the final transfer efficiency. The longer pulses required for sideband transitions limit the transfer efficiency to \unit{85}{\%} for pulse durations $>\unit{70}{\mu s}$ and delay scaling factors $\leq0.7$. \subsection{\label{sec:level5b} Motional Dependence of the Transfer Efficiency} After determining appropriate parameters for the transfer we now investigate the motional dependence of the transfer efficiency.
Using the simulations we investigated the dynamics of population transfer for the lowest 15 motional Fock states for carrier and sideband transitions. \begin{figure}[tbp] \centering \includegraphics[width=0.46\textwidth]{car_diff_n-eps-converted-to.pdf} \label{fig:car_diff_n} \includegraphics[width=0.46\textwidth]{rsb_diff_n-eps-converted-to.pdf} \label{fig:rsb_diff_n} \caption[Simulated transfer dynamics of the population for the lowest 15 motional levels.]{\textbf{Simulated transfer dynamics of the population for the lowest 15 motional levels.} The population transfer as a function of time is displayed for a) the carrier (delay scaling factor $s=0.7$ and pulse length of $\unit{50}{\mu s}$) and b) the red sideband transition (delay scaling factor $s=0.4$ and pulse length of $\unit{100}{\mu s}$). Fock states are used in the simulations, where dark blue corresponds to the motional ground state and the color gets brighter for higher motional levels and changes from blue to green, to red and finally to yellow for $n=15$.} \label{fig:sim_time_evolution_diff_n} \end{figure} The results are shown in \ref{fig:sim_time_evolution_diff_n}, where we can see that for carrier transitions the transfer efficiency is reduced for higher motional levels. This is due to the decrease in carrier coupling strength associated with higher Fock state levels, scaling with the generalized Laguerre polynomial $L_n^0(\eta^2)$ \cite{wineland_experimental_1998}. In contrast, the coupling strength of blue sideband transitions initially increases with $n$ according to $\sqrt{n!/(n+1)!}L_n^1(\eta^2)$ and remains at a high level up to the largest investigated Fock state $n=15$. As a consequence, the transfer becomes more adiabatic for higher motional levels, since the state evolution speeds up while the pulse time remains constant. Experimentally, we do not probe each of the motional states individually, but rather a given distribution.
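The motional scaling quoted above is easy to evaluate explicitly. The sketch below computes the carrier and blue-sideband matrix elements, including the $e^{-\eta^2/2}$ prefactor from \cite{wineland_experimental_1998}, via the standard three-term recurrence for the generalized Laguerre polynomials:

```python
import math

def laguerre(n, alpha, x):
    """Generalized Laguerre polynomial L_n^alpha(x), three-term recurrence."""
    if n == 0:
        return 1.0
    l_prev, l = 1.0, 1.0 + alpha - x            # L_0 and L_1
    for k in range(1, n):
        l_prev, l = l, ((2*k + 1 + alpha - x)*l - (k + alpha)*l_prev)/(k + 1)
    return l

eta = 0.3                                       # Lamb-Dicke factor from the text
pref = math.exp(-eta**2/2)

def carrier_coupling(n):
    """Relative carrier (n -> n) coupling strength for Fock state n."""
    return pref*laguerre(n, 0, eta**2)

def bsb_coupling(n):
    """Relative blue-sideband (n -> n+1) coupling strength."""
    return pref*eta/math.sqrt(n + 1)*laguerre(n, 1, eta**2)
```

With $\eta=0.3$ the carrier coupling falls monotonically toward its zero crossing near $n=15$, while the blue-sideband coupling grows with $n$ and stays large over the whole range, which is why sideband STIRAP becomes more adiabatic for higher levels. A thermal state therefore probes a band of couplings rather than a single Rabi frequency.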
Therefore, we experimentally verified the independence of the STIRAP transfer from the initial motional state by investigating the pulse length dependence of the transfer efficiency for the ion initialized in the motional ground state and comparing it to the efficiency for the ion initialized in a thermal state. Furthermore, the counter-intuitive pulse sequence allows us to further speed up the STIRAP sequence by considering only the part of the laser interaction where the transfer takes place: The Stokes beam is switched on abruptly with the maximum intensity at the beginning of the sequence. While ramping down its intensity, the pump beam intensity is ramped up, both with a near-Gaussian shape. At maximum intensity of the pump beam, it is switched off rapidly. The effective transfer time for this sequence is given by $t_\mathrm{trans}=t_\mathrm{FWHM}+s\cdot t_\mathrm{FWHM}=(1+s)\,t_\mathrm{FWHM}$. As can be seen from \ref{fig:comp_thermal_with_ground_state}, the STIRAP population transfer rate for sideband transitions is smaller when the ion is initialized in the motional ground state. This is a consequence of the smaller Rabi frequency of sideband transitions with $n=0$ compared to the $n>0$ states that are also populated in a thermal state. For pulse lengths longer than $\unit{100}{\mu s}$, the two curves for the transfer on the blue sideband overlap with each other. This means that, for a sufficiently long pulse, the STIRAP transfer becomes independent of the motional state population of the ion. \begin{figure}[tbp] \centering \includegraphics[width=0.46\textwidth]{comp_thermal_BSB_RSB_20150529-eps-converted-to.pdf} \caption[STIRAP transfer efficiency as a function of pulse length for different initial motional states.]{\textbf{STIRAP transfer efficiency as a function of pulse length for different initial motional states.} The ion was initialized in the motional ground state (dark blue) and a thermal state (light blue) for a blue sideband transition.
Additionally, the transfer is displayed for a red sideband transition, where the ion was initialized in a thermal state (orange). The delay scaling factor for all measurements was $s=0.5$. The lines are moving averages of the data and are guides to the eye. For clarity the error bars are omitted.} \label{fig:comp_thermal_with_ground_state} \end{figure} Additionally, the transfer efficiency for a red sideband transition is shown in \ref{fig:comp_thermal_with_ground_state} for the ion initialized in a thermal state of motion. We assume that for a pulse length of $\unit{120}{\mu s}$ the transfer is complete. In contrast to a blue sideband, the red sideband leaves the $n=0$ motional population untouched. We extract a ground state population of $0.08\pm 0.01$ by subtracting the respective signals averaged over pulse lengths between $\unit{120}{\mu s}$ and $\unit{150}{\mu s}$. The associated thermal distribution corresponds to a temperature of $1.3\pm0.2$ times the Doppler cooling temperature for \Mg{25}. This is in agreement with the temperature of 1.2 times the Doppler temperature measured after Doppler cooling using the technique described in \cite{Poulsen_sideband_2011}, as well as with the value of $1.6\pm 0.5$ times the Doppler temperature estimated from the measured decay of Rabi oscillations \cite{hemmerling_towards_2011}. \subsection{\label{sec:level5c} Comparison of Coherent and Adiabatic Transfer} After showing the feasibility of motional-state-independent transfer we compare the transfer efficiency of the STIRAP process with that of a $\pi$ pulse for a Doppler cooled ion.
\begin{figure}[tbp] \centering \includegraphics[width=0.46\textwidth]{bsb_comp_stirap_rabi_20141009-eps-converted-to.pdf} \label{fig:bsb_comp_stirap_rabi} \includegraphics[width=0.46\textwidth]{car_comp_stirap_rabi_20141009-eps-converted-to.pdf} \label{fig:car_comp_stirap_rabi} \caption[Population transfer dynamics of Rabi and STIRAP for a thermal motional state.]{\textbf{Population transfer dynamics of Rabi and STIRAP for a thermal motional state.} The transfer efficiency is displayed for a) the blue sideband and b) the carrier transition with the ion initialized by Doppler cooling. The delay scaling factor was chosen to be $s=0.5$ for sideband transitions and $s=0.7$ for carrier transitions. The lines are moving averages of the data and are guides to the eye. For clarity the error bars are omitted.} \label{fig:comparison_Rabi_STIRAP} \end{figure} As expected, \ref{fig:comparison_Rabi_STIRAP} shows that the state evolution for Raman Rabi oscillations is faster than the corresponding STIRAP signal. After approximately $\unit{12}{\mu s}$ the maximum transfer efficiency of below $\unit{80}{\%}$ is reached for the blue sideband transition (\ref{fig:comparison_Rabi_STIRAP}a)). This transfer is sensitive to system parameters, especially to variations in Rabi frequency. The STIRAP transfer, however, is slower but reaches a transfer efficiency of more than $\unit{85}{\%}$ for transfer times on the order of $\unit{150}{\mu s}$. It is worth mentioning that the STIRAP transfer is limited by off-resonant excitation, which can be circumvented using a larger Raman detuning, whereas the transfer using Rabi oscillations is fundamentally limited by the different couplings between the motional states. On the carrier, the stronger motional state dependence of the Rabi frequency leads to a faster dephasing of the Rabi oscillations, as can be seen in \ref{fig:comparison_Rabi_STIRAP}b).
Therefore, the transfer efficiency using Raman Rabi oscillations is reduced to $\unit{50}{\%}$, whereas the transfer using the STIRAP technique is still on the order of $\unit{75}{\%}$ for our system parameters. The transfer efficiency on a carrier transition is reduced compared to the blue sideband transition since the Rabi frequency has a zero crossing at a motional state of $n=15$. Population in a range around this state cannot be transferred efficiently by either technique. \section{\label{sec:level6} Conclusion} We have investigated the STIRAP technique to transfer population between two hyperfine states of a \Mg{25} ion. A systematic study of the transfer efficiency on the carrier and motional sidebands was performed for different pulse lengths and pulse delays with the ion initialized in the ground state of motion. Good agreement was found with numerical simulations. The insensitivity of STIRAP to the exact Rabi frequency was exploited to perform population transfer in the presence of an inherent range of Rabi frequencies found, e.g., for thermally populated motional states. We demonstrated efficient population transfer on carrier and blue sideband transitions for Fock and thermal states. Experimentally, the transfer was limited to $\sim\unit{85}{\%}$ by off-resonant excitation to states not involved in the STIRAP process. However, this is not a fundamental limitation and could be overcome using a Raman laser system with a larger detuning, allowing transfer efficiencies approaching \unit{100}{\%}. In contrast, population transfer using Raman Rabi oscillations was shown to be faster, but less efficient for thermal motional states. We used the difference in blue and red sideband STIRAP transfer efficiency to detect the motional ground state population, from which a temperature in agreement with the Doppler cooling temperature was extracted.
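The temperature quoted above follows from the measured ground state population alone. A sketch of the conversion -- the axial trap frequency is not stated in this excerpt, so the value below is an assumption for illustration, as is the Mg$^+$ linewidth used for the Doppler limit:

```python
import math

hbar = 1.0546e-34     # J s
kB = 1.3807e-23       # J/K

# Assumptions (not given in this excerpt): axial trap frequency and the
# natural linewidth of the Mg+ cooling transition.
omega_trap = 2*math.pi*2.2e6    # rad/s, assumed axial mode frequency
gamma = 2*math.pi*41.8e6        # rad/s, assumed 2P1/2 linewidth

p0 = 0.08                       # measured motional ground state population
nbar = 1/p0 - 1                 # thermal state: p0 = 1/(nbar + 1)
temperature = hbar*omega_trap/(kB*math.log(1 + 1/nbar))
doppler_limit = hbar*gamma/(2*kB)
ratio = temperature/doppler_limit   # comes out close to the quoted 1.3
```

The conversion is independent of the shape of the motional distribution only through the thermal-state assumption; a measured $p_0$ plus an assumed trap frequency fixes $\bar n$ and hence the temperature.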
This ground state population detection technique is robust against fluctuations of the Rabi frequencies and does not depend on the specific motional distribution. More elaborate pulse sequences using a series of STIRAP pulses on red and blue sidebands would even allow for the measurement of higher motional state populations, enhancing the accuracy of the temperature determination. The same approach can be employed to determine the full motional state distribution \cite{muller_optimal_2015} or prepare strongly entangled states \cite{linington_robust_2008, noguchi_generation_2012}. Another important application of the presented technique is the detection of small forces via excitation of a normal mode in a trapped ion crystal. Here, it has the potential to enhance the sensitivity of relative mass measurements \cite{drewsen_nondestructive_2004}, indirect internal state detection \cite{hume_trapped-ion_2011, wolf_quantum_2015}, and the detection of small electrical \cite{biercuk_ultrasensitive_2010, narayanan_electric_2011} and optical \cite{clark_detection_2010, biercuk_phase-coherent_2011, lin_resonant_2013} forces. We demonstrated the power of STIRAP in photon recoil spectroscopy \cite{wan_precision_2014, gebert_precision_2015}, where the small force imprinted onto a two-ion crystal during the absorption of a few photons leaves the motional state of the ions distributed over several trap levels. A STIRAP pulse on the red sideband probes the residual motional ground state population, which represents the spectroscopy signal. \section{Acknowledgements} We acknowledge the support of DFG through QUEST and Grant SCHM2678/3-1. This work was financially supported by the State of Lower-Saxony, Hannover, Germany. Y.W. acknowledges support from the Braunschweig International Graduate School of Metrology (B-IGSM). We thank Ian D. Leroux for stimulating discussions.
\section{References} \bibliographystyle{h-physrev3} \section{\label{sec:level1} Introduction} Progress in trapped-ion quantum information processing \cite{leibfried_experimental_2003, haffner_quantum_2008, blatt_entangled_2008, wineland_nobel_2013}, quantum simulation \cite{schaetz_focus_2013, blatt_quantum_2012}, and precision spectroscopy experiments \cite{schmidt_spectroscopy_2005, rosenband_frequency_2008, hempel_entanglement-enhanced_2013, huntemann_improved_2014,wan_precision_2014,gebert_precision_2015} is largely based on advances in the ability to control and manipulate the quantum states of the system. Trapped and laser-cooled ions represent a particularly well-controlled system for which different techniques have been established to control the internal (electronic) and external (motional) state. Commonly, sequences of laser or microwave pulses are applied to prepare a desired state or implement operations for state manipulation. For this, mostly square pulses with a fixed length and frequency are employed that rotate the atomic qubit and -- depending on the experimental implementation -- also change the motional state. The effect of undesired frequency components in square-shaped pulses has previously been reduced by employing amplitude-shaped pulses with a smooth rising and falling slope \cite{riebe_process_2006, benhelm_towards_2008, kirchmair_deterministic_2009}. Furthermore, composite pulses, first developed in the context of nuclear magnetic resonance \cite{levitt_nmr_1979, levitt_composite_1986, vandersypen_nmr_2005}, are used in trapped ion systems to implement complex algorithms \cite{gulde_implementation_2003, schmidt-kaler_realization_2003} or operations that are less sensitive to variations of the experimental parameters \cite{timoney_error-resistant_2008, ivanov_high-fidelity_2011, shappert_spatially_2013, mount_error_2015}. 
Adiabatic state manipulation represents another class of techniques with reduced sensitivity to fluctuations in the coupling strength \cite{bergmann_coherent_1998, bergmann_perspective:_2015}. Pulses with slowly varying intensity and/or frequency are used to manipulate the state of the system. For trapped ions two adiabatic techniques have been investigated, namely Rapid Adiabatic Passage (RAP) and stimulated Raman adiabatic passage (STIRAP). In RAP a frequency and amplitude modulated pulse is used to tailor the dynamics of the atomic state dressed by the light field for adiabatic transfer of population between two bare atomic states. In the experiment, the time dependence of the intensity usually has a Gaussian shape, whereas the frequency is varied linearly in time across an atomic resonance. RAP has been used in optical qubits on carrier transitions for robust internal state preparation \cite{wunderlich_robust_2007, yamazaki_robust_2008, noel_adiabatic_2012, poschinger_interaction_2012}, and on sideband transitions, to prepare Fock \cite{watanabe_sideband_2011} and Dicke \cite{linington_robust_2008, toyoda_generation_2011} states. The STIRAP technique is typically realized in $\Lambda$-systems and relies on an adiabatic evolution from an initial to a final state without populating a short-lived intermediate state. It is usually implemented using Gaussian-shaped intensity profiles of two laser pulses with a fixed frequency difference that are delayed with respect to each other in time. It has been demonstrated for population transfer \cite{sorensen_efficient_2006} and the generation of Dicke states \cite{noguchi_generation_2012}, and suggested for efficient qubit detection of single ions \cite{moller_efficient_2007} and Doppler-free efficient state transfer in multi-ion crystals \cite{kamsap_coherent_2013}. Here we demonstrate STIRAP between hyperfine qubit states in \Mg{25} involving a change in the motional state. 
The coupling strength of such sideband transitions is strongly dependent on the initial motional state of the ion \cite{wineland_experimental_1998}. We used the insensitivity of STIRAP to the coupling strength to perform a complete population transfer of motionally excited states to determine the motional ground state population. For thermal states the ground state population is a direct measure of the temperature \cite{wan_efficient_2015}. Using this approach, good agreement with the expected Doppler cooling temperature is found. We implement STIRAP using a large detuning, which is in contrast to the near-resonant STIRAP transfer typically discussed in the literature \cite{bergmann_coherent_1998, fewell_coherent_1997}. In this situation the counter-intuitive and intuitive pulse sequences give comparable population transfer efficiency, allowing the pulse order to be chosen in order to minimize off-resonant scattering. The intuitive pulse sequence was previously studied in doped crystals and termed b-STIRAP \cite{klein_robust_2007}. The comparably large detuning used in our experiment also relaxes the condition of adiabaticity during the transfer process and consequently allows the transfer to be relatively fast. Furthermore, spontaneous emission from light fields coupling to states not involved in the STIRAP process is suppressed, allowing STIRAP to be implemented in multi-level systems such as the \Mg{25} ion used in our work. The paper is organized as follows. In \ref{sec:level2} we provide an introduction to the theoretical treatment of STIRAP. The experimental setup for the realization of STIRAP with a single trapped \Mg{25} ion is briefly described in \ref{sec:level3}. The implementation of numerical simulations supporting the experimental findings is described in \ref{sec:level4}.
In \ref{sec:level5} we present the experimental results of our investigation on the STIRAP efficiency and its dependence on pulse order, pulse length and pulse separation together with the numerical simulations. An optimized pulse sequence is used to demonstrate the advantage of STIRAP over a stimulated Raman Rabi population transfer on carrier and sideband transitions for a thermal state. \ref{sec:level6} summarizes the work and points at possible improvements and applications of the technique. \section{\label{sec:level2} Principles} \begin{figure} \centering \includegraphics[width=0.80\textwidth]{Mg-3lvl_pulses.pdf} \caption{a) Simplified level structure of \Mg{25} together with the involved laser couplings with associated Rabi frequencies $\Omega_p$ and $\Omega_s$. The dark levels are relevant for the STIRAP process, whereas the grey, dotted levels lead to additional off-resonant couplings. b) Time dependence of the Rabi frequencies normalized to their maximum value. The pulse length is defined as the full width at half maximum and the delay of the two pulses is defined as the separation of the Rabi frequency maxima.} \label{fig:3lvl_pulses} \end{figure} In the following, we briefly review the basics of STIRAP in a 3-level $\Lambda$-scheme as shown in \ref{fig:3lvl_pulses}. The Hamiltonian of the 3-level system coupled by two light fields in the interaction picture using the rotating wave approximation is given by \cite{bergmann_coherent_1998}: \begin{equation} \mathcal{H}=\frac{\hbar}{2} \left(\begin{array}{@{}ccc@{}} 0 & \Omega_p & 0\\ \Omega_p & 2\Delta_p & \Omega_s\\ 0 & \Omega_s & 2(\Delta_p-\Delta_s) \end{array}\right), \label{eq:Hamiltonian} \end{equation} where the $\Omega_i$'s are the Rabi frequencies and the $\Delta_i$'s are the detunings of the so-called pump and Stokes laser beams with respect to the one-photon resonances; the zero of energy is chosen at the initial state $\ket{1}$.
In the case of two-photon resonance $\Delta_p=\Delta_s=\Delta$, the eigenfrequencies of the system are given by \begin{eqnarray}\label{eq:eigenvalue} \omega_0&=&0,\nonumber\\ \omega_{+}&=&\frac{1}{2}\left(\Delta+\sqrt{\Delta^2+\Omega_p^2+\Omega_s^2}\right),\\ \omega_{-}&=&\frac{1}{2}\left(\Delta-\sqrt{\Delta^2+\Omega_p^2+\Omega_s^2}\right).\nonumber \end{eqnarray} For large detuning $\Delta\gg\Omega_i$, the corresponding eigenvectors (dressed states) become: \begin{eqnarray}\label{eq:eigenstates} \ket{a^0}&=&\cos\Theta\ket{1}-\sin\Theta\ket{3}\nonumber\\ \ket{a^+}&=&\ket{2}\\ \ket{a^-}&=&\sin\Theta\ket{1}+\cos\Theta\ket{3},\nonumber \end{eqnarray} where the so-called mixing angle $\Theta$ has been introduced. It is related to the Rabi frequencies of the coupling lasers by \begin{equation} \tan\Theta=\frac{\Omega_p}{\Omega_s}. \label{eq:mixing_angle} \end{equation} The basic principle of the adiabatic transfer can be understood from the eigenstate equations (\ref{eq:eigenstates}). At the beginning of the sequence, only the Stokes laser field interacts with the atom and the adiabatic state $\ket{a^0}$ is aligned parallel to the initially populated electronic ground state $\ket{1}$. Due to the presence of the Stokes laser field, the initially degenerate energies of the system, $\omega_{-}$ and $\omega_{0}$, are split by the ac Stark shift. As long as this energy splitting is large compared to the coupling between the eigenstates of the system, no transition to other states will occur and the system stays in its instantaneous eigenstate, as stated by the adiabatic theorem \cite{born_beweis_1928}. By ramping the strength of the relative couplings between the three states such that only the pump laser induces a significant coupling at the end of the sequence (see \ref{fig:3lvl_pulses}b)), we can change the mixing angle $\Theta$ from $0$ to $\pi/2$.
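The eigenfrequencies (\ref{eq:eigenvalue}) and the dark state can be verified numerically. The sketch below writes the two-photon-resonant Hamiltonian in the frame where the initial state $\ket{1}$ has zero energy (a global energy shift that leaves the dynamics unchanged) and checks that each stated $\omega$ is a root of the characteristic polynomial and that $\ket{a^0}$ is annihilated:

```python
import math

def check_eigenfrequencies(omega_p, omega_s, delta):
    """Verify eq. (2) and the dark state for the 3-level STIRAP Hamiltonian."""
    # H/hbar at two-photon resonance, zero of energy at state |1>
    M = [[0.0, omega_p/2, 0.0],
         [omega_p/2, delta, omega_s/2],
         [0.0, omega_s/2, 0.0]]
    root = math.sqrt(delta**2 + omega_p**2 + omega_s**2)
    eigs = [0.0, 0.5*(delta + root), 0.5*(delta - root)]   # omega_0, omega_+, omega_-

    def det3(A):
        return (A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
              - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
              + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]))

    # each claimed eigenfrequency must make det(M - w*I) vanish
    residuals = []
    for w in eigs:
        A = [[M[i][j] - (w if i == j else 0.0) for j in range(3)] for i in range(3)]
        residuals.append(det3(A))
    # dark state |a0> = cos(theta)|1> - sin(theta)|3>, tan(theta) = omega_p/omega_s
    th = math.atan2(omega_p, omega_s)
    a0 = [math.cos(th), 0.0, -math.sin(th)]
    m_a0 = [sum(M[i][j]*a0[j] for j in range(3)) for i in range(3)]
    return residuals, m_a0
```

Note that $M\ket{a^0}=0$ holds for any detuning, not only for $\Delta\gg\Omega_i$: ramping $\Omega_p/\Omega_s$ from $0$ to $\infty$ rotates this dark state from $\ket{1}$ to $-\ket{3}$.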
In doing so, we rotate the dressed state basis with respect to the bare state basis by $\unit{90}{^\circ}$, which means rotating $\ket{a^0}$ around $\ket{2}$ from $\ket{1}$ to $-\ket{3}$. If adiabaticity is maintained during the process, the population will stay in the eigenstate $\ket{a^0}$ and will follow the rotation, transferring it from the bare state $\ket{1}$ to state $\ket{3}$ without populating state $\ket{2}$. From equations (\ref{eq:eigenstates}) we can see that the large detuning results in a symmetry of the eigenstates such that the dressed state $\ket{a^-}$ can be used as the initial state for population transfer using the so-called intuitive pulse order, which is sometimes called b-STIRAP in the literature. As mentioned above, the adiabatic criterion has to be fulfilled in the STIRAP sequence, i.e., the energy splitting must be larger than the couplings between the states \cite{gaubatz_population_1990, moller_efficient_2007}: \begin{equation} \bra{a^0}\frac{d}{dt}\ket{a^{\pm}}\ll\left|\omega_{\pm}-\omega_{0}\right| \label{eq:adiabatic_criterion} \end{equation} The left side of the equation can be evaluated, and for the two states $\ket{a^{\pm}}$ it reads: \begin{eqnarray} \bra{a^0}\frac{d}{dt}\ket{a^{+}}&=&0,\\ \bra{a^0}\frac{d}{dt}\ket{a^{-}}&=&\dot{\Theta}=\frac{\dot{\Omega}_p\Omega_s-\Omega_p\dot{\Omega}_s}{\Omega_{p}^{2}+\Omega_{s}^{2}}. \end{eqnarray} Here we see that transitions to state $\ket{2}=\ket{a^+}$ are not allowed due to the large detuning.
We now insert the second equation and the eigenvalues (\ref{eq:eigenvalue}) of the Hamiltonian in (\ref{eq:adiabatic_criterion}) and get a time-dependent adiabatic criterion: \begin{equation} \left|\frac{\dot{\Omega}_p\Omega_s-\Omega_p\dot{\Omega}_s}{\Omega_{p}^{2}+\Omega_{s}^{2}}\right|\ll\frac{1}{2}\left(\sqrt{\Delta^2+\Omega_p^2+\Omega_s^2}-\Delta\right) \label{eq:adiabatic_criterion_time_dep} \end{equation} We plotted both sides of the equation for a pulse length of \unit{100}{$\mu$s} and three different delay times of \unit{30}{$\mu$s}, \unit{80}{$\mu$s} and \unit{130}{$\mu$s} in \ref{fig:ac_timedep}. \begin{figure}[tbp] \centering \includegraphics[width=0.32\textwidth]{acandcoup_timedep_sf3-eps-converted-to.pdf} \label{fig:ac_timedep_sf03} \includegraphics[width=0.32\textwidth]{acandcoup_timedep_sf8-eps-converted-to.pdf} \label{fig:ac_timedep_sf08} \includegraphics[width=0.32\textwidth]{acandcoup_timedep_sf13-eps-converted-to.pdf} \label{fig:ac_timedep_sf13} \caption[Time dependence of the adiabatic criterion]{\textbf{Time dependence of the adiabatic criterion} The couplings (left side of (\ref{eq:adiabatic_criterion_time_dep}), blue) and the energy splitting (right side of (\ref{eq:adiabatic_criterion_time_dep}), red) are shown for a fixed pulse length of \unit{100}{$\mu$s} and (a) a delay time of \unit{30}{$\mu$s}, (b) a delay time of \unit{80}{$\mu$s} and (c) a delay time of \unit{130}{$\mu$s}. The dotted lines represent the pulses.} \label{fig:ac_timedep} \end{figure} For short delay times (\ref{fig:ac_timedep}a)), the adiabatic criterion is not fulfilled at the beginning and the end of the pulse sequence. During this part of the sequence transitions between the adiabatic states $\ket{a^0}$ and $\ket{a^-}$ may occur, leading to non-adiabatic transfer that depends on the relative populations in the two involved adiabatic states. For long delay times (\ref{fig:ac_timedep}c)), the adiabatic criterion is not fulfilled in between the two pulses.
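The same comparison can be scripted. Using the experimentally quoted orders of magnitude -- single-beam Rabi frequencies of a few tens of megahertz, with $2\pi\times\unit{30}{MHz}$ assumed here, and the \unit{9.2}{GHz} Raman detuning -- evaluating both sides of (\ref{eq:adiabatic_criterion_time_dep}) at the midpoint between the pulses reproduces the delay-time behavior:

```python
import math

def adiabatic_ratio(t_delay, t_pulse=100e-6,
                    omega_max=2*math.pi*30e6, delta=2*math.pi*9.2e9):
    """Ratio of the two sides of the adiabatic criterion at the midpoint
    between the pulses; a ratio well below 1 means adiabatic."""
    tw = t_pulse/(2*math.sqrt(2*math.log(2)))   # Gaussian width from the FWHM
    tp, ts = +t_delay/2, -t_delay/2             # Stokes precedes pump
    t = 0.0                                     # midpoint between the pulses
    op = omega_max*math.exp(-(t - tp)**2/(2*tw**2))
    os_ = omega_max*math.exp(-(t - ts)**2/(2*tw**2))
    d_op = -(t - tp)/tw**2*op                   # analytic Gaussian derivatives
    d_os = -(t - ts)/tw**2*os_
    coupling = abs((d_op*os_ - op*d_os)/(op**2 + os_**2))   # |dTheta/dt|
    splitting = 0.5*(math.sqrt(delta**2 + op**2 + os_**2) - delta)
    return coupling/splitting

r_80 = adiabatic_ratio(80e-6)    # well below 1: adiabatic
r_130 = adiabatic_ratio(130e-6)  # above 1: criterion violated between the pulses
```

The exact numbers depend on the assumed peak Rabi frequency, but the qualitative conclusion -- adiabatic at \unit{80}{$\mu$s} delay, violated between the pulses at \unit{130}{$\mu$s} -- matches the plotted criterion.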
\section{\label{sec:level3} Experimental Setup} Details of the experimental setup have been described before \cite{hemmerling_single_2011,hemmerling_novel_2012}. We use the $\ket{F=3, m_F=3}=\ket{\!\!\downarrow}=\ket{1}$, $\ket{F=2, m_F=2}=\ket{\!\!\uparrow}=\ket{3}$ states of the ${}^2$S$_{1/2}$-ground state of a \Mg{25} ion as our qubit. The states are separated in energy by the hyperfine splitting of \unit{1.789}{GHz}. The frequency-quadrupled output of a fiber laser is used to create the laser beams at a wavelength of 280~nm for Doppler cooling, Raman sideband cooling and for coherent manipulation. The first-order sideband of an electro-optic modulator (EOM) is resonant with the ${}^2$S$_{1/2}$ $\rightarrow$ ${}^2$P$_{1/2}$ transition for Doppler cooling and state discrimination. The \unit{9.2}{GHz} red-detuned optical carrier is used with an additional acousto-optic modulator (AOM) setup to create the Raman laser beams that couple the hyperfine qubit states (see \ref{fig:3lvl_pulses}). Additionally, a radio frequency can be applied to couple the qubit states without being influenced by or changing the motional state. A sequence of consecutive Raman red sideband and repump pulses is used for ground-state cooling of the axial vibrational mode of the ion \cite{wan_efficient_2015}.\\ We implemented the STIRAP sequence in our setup using a pulse sequencer based on a field programmable gate array (FPGA) \cite{pham_general-purpose_2005, schindler_frequency_2008} that controls direct digital synthesizer (DDS) boards. We used the built-in power sweep function of the sequencer to shape the amplitude of the two laser beams needed for the STIRAP sequence. It is implemented by applying a voltage resembling the shape of the desired pulse to a voltage-controlled gain amplifier which modulates the rf signal generated by the DDS chip.
This radio frequency signal is subsequently amplified and fed into an AOM that imprints the time dependence of the radio frequency amplitude onto the laser intensity. The peak resonant Rabi frequency for each beam is on the order of a few tens of megahertz, resulting in carrier Raman Rabi frequencies of around 100~kHz. The experimental data is typically averaged over 250 repetitions of an identical experiment with the same initial conditions. \section{\label{sec:level4} STIRAP simulation} Numerical simulations were carried out based on the density matrix formalism to determine the parameter regime for efficient population transfer using the STIRAP process. We integrated the master equation numerically and derived the time dependence of the atomic state populations. In general, the master equation can be expressed as \begin{equation} \frac{d\rho}{dt}=\mathcal{L}\rho. \end{equation} Here, $\mathcal{L}$ is the Liouvillian operator. Our qubit states $\ket{1}$ and $\ket{3}$ are magnetic sub-states of the hyperfine-split ground state of the \Mg{25} ion, and spontaneous emission from these long-lived states is neglected in the simulation. The detuning of the lasers with respect to the excited state $\ket{2}$ is \unit{9.2}{GHz} and the off-resonant scattering rates from coupling to all possible excited states are on the order of \unit{1.1}{ms$^{-1}$} and \unit{4.4}{ms$^{-1}$} for the ground states $\ket{1}$ and $\ket{3}$, respectively \cite{gebert_damage-free_2014}. This off-resonant scattering limits the coherence between the lasers and the atom and reduces the detected signal. However, this effect is small, and since the consideration of off-resonant scattering increases the complexity of the simulation excessively, we neglected it in the simulations.
In this case the time evolution of the system can be described by a Hamiltonian $\mathcal{H}$ and the Liouvillian acts on the density matrix as follows: \begin{equation} \mathcal{L}\rho=-i\left[\mathcal{H},\rho\right]=-i\left(\mathcal{H}\rho-\rho\mathcal{H}\right) \label{eqn:Liouvillian} \end{equation} The quantized motion of the ion in the trap is included in the simulations as the tensor product of the electronic state \ket{e} and the harmonic trap states \ket{n} for the state of the ion: $\ket{\psi}=\ket{e}\otimes\ket{n}$. Up to 16 motional levels have been considered. This way, we are able to simulate carrier as well as sideband transitions. The time dependence of the STIRAP process is incorporated by time-dependent Rabi frequencies in the Hamiltonian, where for the simulation a Gaussian pulse shape was assumed. The parameters of the pulses are defined as \begin{eqnarray} \Omega_{i}(t)&=&\Omega_{i,\mathrm{max}}\cdot\exp\left(-\frac{\left(t-t_i\right)^2}{2t_\mathrm{width}^2}\right), \end{eqnarray} where we define $t_\mathrm{pulse}=2\sqrt{2 \ln(2)}\cdot t_\mathrm{width}$ as the pulse length, $t_{i}$ as the centers, and $\Omega_{i,\mathrm{max}}$ as the maximum Rabi frequencies of the two pulses $i\in\{p,s\}$. Additionally, we denote the delay between the pulses as $t_\mathrm{delay}=t_p-t_s$, so that positive delays correspond to the counter-intuitive pulse order (see \ref{fig:3lvl_pulses}). All simulations presented in the following were performed using the quantum optics toolbox \cite{tan_computational_1999} in the MATLAB programming language \cite{matlab_version_2013}. Since the Hamiltonian of the system is time dependent, the solver ``\textbf{solvemc}'' was used. It performs a direct integration of the master equation to calculate the density matrix $\rho$ for consecutive times. In order to compare the simulated results with the experimental ones, we fitted a Gaussian function to the measured pulses. Due to technical imperfections the measured pulse length of the pump field is around 12~\% shorter than the pulse length of the Stokes field.
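The pulse parameterization translates directly into code; a minimal helper with the amplitude normalized to one for illustration:

```python
import math

def rabi_envelope(t, t_center, t_pulse, omega_max=1.0):
    """Gaussian Rabi-frequency envelope whose FWHM equals t_pulse."""
    # t_pulse = 2*sqrt(2*ln 2)*t_width relates the FWHM to the Gaussian width
    t_width = t_pulse/(2*math.sqrt(2*math.log(2)))
    return omega_max*math.exp(-(t - t_center)**2/(2*t_width**2))

# the envelope drops to half its maximum exactly t_pulse/2 from the center
t_pulse = 120e-6
half_value = rabi_envelope(t_pulse/2, 0.0, t_pulse)   # = 0.5
```

With the measured 12\,\% pump--Stokes mismatch, the two envelopes simply carry different fitted FWHM values.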
Therefore, we derive an effective pulse length (mean of the Gaussian full width at half maximum, FWHM) from the fit and measure the effective delay time of the two pulses. These values, which ranged from a few to two hundred microseconds, were used for the laser pulse parameters in the simulations. \section{\label{sec:level5} Results} \subsection{\label{sec:level5a} Pulse length and pulse delay dependence} Density matrix simulations were performed to investigate and optimize the population transfer efficiency. First, the influence of the delay of the two laser pulses for a fixed pulse length of \unit{120}{$\mu$s} was studied. The ion is initialized in state \ket{1} and the motional ground state. \begin{figure}[tbp] \centering \includegraphics[width=0.46\textwidth]{comp_sim_meas_delay-eps-converted-to.pdf} \label{fig:comp_sim_meas_delay} \includegraphics[width=0.46\textwidth]{time_evolution_delay26-eps-converted-to.pdf} \label{fig:sim_trans_delay26} \includegraphics[width=0.46\textwidth]{time_evolution_delay80-eps-converted-to.pdf} \label{fig:sim_trans_delay80} \includegraphics[width=0.46\textwidth]{time_evolution_delay136-eps-converted-to.pdf} \label{fig:sim_trans_delay136} \caption[Simulated and measured delay time dependence of the STIRAP transfer.]{\textbf{Simulated and measured delay time dependence of the STIRAP transfer.} a) Comparison of the simulated and measured delay time dependence of the STIRAP transfer on a carrier transition for an ion in the motional ground state. Positive delay times correspond to the counter-intuitive pulse sequence. For the simulated data (red) a moving average was used to account for experimental intensity fluctuations. The simulated time dependence of the STIRAP transfer for different delay times is shown in b) for \unit{26}{$\mu$s}, c) for \unit{80}{$\mu$s}, and d) for \unit{136}{$\mu$s}.
The dotted lines represent the pulses.} \label{fig:delay_time_dep} \end{figure} In \ref{fig:delay_time_dep}a) we compare the measured and simulated transfer efficiency of the STIRAP sequence for carrier transitions. To allow a direct comparison between simulation and experiment, a moving average is applied to the simulated data to account for small fluctuations of experimental parameters (see below). In the figure we see that both pulse orders (positive delay times correspond to the counter-intuitive pulse order) are able to transfer population from the initial to the final state. This behavior is a result of the large detuning of the light fields from resonance. The reduced transfer efficiency seen in the experiment compared to the simulation result is explained by off-resonant excitation to different magnetic sub-states, indicated in \ref{fig:3lvl_pulses}a), which is not considered in the simulations. The relative populations of the two qubit states during the STIRAP sequence, together with the relative strength of the involved couplings of these states to the auxiliary magnetic sub-states \cite{gebert_damage-free_2014}, explain the asymmetry with respect to the pulse order seen in the experiment. We therefore use the counter-intuitive pulse sequence from now on. Using the numerical simulations, we investigate the different transfer regimes. As can be seen in \ref{fig:delay_time_dep}b), for a delay time of $\unit{26}{\mu s}$, the transfer process consists of two contributions, one oscillating and one adiabatic part. For the chosen pulse length of $\unit{120}{\mu s}$, the adiabatic criterion is not fulfilled at the beginning and end of the sequence [see \ref{fig:ac_timedep}a)] and the time evolution can be understood as a combination of dynamically varying Rabi oscillations, which is the transfer mechanism for the case of overlapping pulses (zero delay time), and an adiabatic part of the population transfer.
In this situation, the final transfer efficiency strongly depends on the timing and exact Rabi frequency of the pulses. Fluctuations of experimental parameters result in an averaged transfer efficiency. Full adiabaticity is achieved for a delay time of, for example, $\unit{80}{\mu s}$, shown in \ref{fig:delay_time_dep}c) [see also \ref{fig:ac_timedep}b)]. For this case, efficient adiabatic transfer is possible, whereas for an even longer delay time of $\unit{136}{\mu s}$ the overlap of the pulses is too small, leading to transitions between the different adiabatic states [\ref{fig:ac_timedep}c) and \ref{fig:delay_time_dep}d)], so that the transfer is incomplete. The optimal parameter regime for STIRAP was investigated by comparing the experimental and simulated population transfer with respect to the pulse length for different delays between the pulses. For this, we introduce a scaling factor $s$ for the delay time, which is related to the pulse length by $t_\mathrm{delay}=s \cdot t_\mathrm{pulse}$. If the pulse length is chosen long enough, we are in the Rabi oscillation regime for $s<0.6$ and in the adiabatic (STIRAP) regime for $s>0.6$. In \ref{fig:2D_plots_car} the simulated transfer efficiency for carrier transitions with the ion initialized in the motional ground state is compared with the experimental result. \begin{figure}[tbp] \centering \includegraphics[width=0.46\textwidth]{pop_vs_delay_pulselength_car_2d-eps-converted-to.pdf} \label{fig:pop_vs_delay_and_pulselength} \includegraphics[width=0.46\textwidth]{car_2d_20141009-eps-converted-to.pdf} \label{fig:pop_vs_delay_and_pulselength_meas} \caption[Transfer efficiency for different delay scaling factors and pulse lengths for carrier transitions.]{\textbf{Transfer efficiency for different delay scaling factors and pulse lengths for carrier transitions.} a) Simulation and b) experimental results of the STIRAP transfer, where red corresponds to no and blue to complete transfer.
For the simulations and the measurements the ion was initialized in the ground state of motion in \ket{3} and the pulse length was scanned for different delay scaling factors $s$.} \label{fig:2D_plots_car} \end{figure} Experiment and simulation show the same behavior as in \ref{fig:delay_time_dep}, i.e., the transfer depends on the exact pulse length for small delay scaling factors. The transfer is slightly faster in the experiment than in the simulations, which may be due to slight deviations of the parameters used, as well as to an imperfect pulse shape of the STIRAP pulses. Additionally, deviations from the ideal pulse shape may lead to the enhanced oscillations seen in the experiment, since the experimental pulse envelopes are less smooth. Despite these differences, the experimental data is in qualitative agreement with the simulations and we can identify large regions of efficient transfer for the constant coupling strength realized in this particular setting. The results also allow us to extrapolate which STIRAP parameters should be used for the case of a fluctuating coupling strength, since the required pulse length scales with the inverse coupling strength. For a ``band'' of coupling strengths, the optimum STIRAP transfer parameters are dictated by the corresponding graph for the smallest coupling strength. Efficient transfer is then inherently accomplished for the larger coupling strengths. This scaling behavior is illustrated in \ref{fig:2D_plots_bsb}, where the transfer efficiency is shown for a blue sideband transition with the ion initialized in the motional ground state. The sideband transition is weaker by the Lamb-Dicke factor of approximately $\eta\approx 0.3$.
\begin{figure}[tbp] \centering \includegraphics[width=0.46\textwidth]{pop_vs_delay_pulselength_bsb_2d-eps-converted-to.pdf} \label{fig:pop_vs_delay_pulselength_rsb} \includegraphics[width=0.46\textwidth]{bsb_2d_20141009-eps-converted-to.pdf} \label{fig:pop_vs_delay_pulselength_rsb_meas} \caption[Transfer efficiency for different delay scaling factors and pulse length for blue sideband transitions.]{\textbf{Transfer efficiency for different delay scaling factors and pulse length for blue sideband transitions.} a) Simulation and b) experimental results of the STIRAP transfer, where red corresponds to no and blue to complete transfer. For the simulations and the measurements the ion was initialized in the ground state of motion and the pulse length was scanned for different delay scaling factors $s$.} \label{fig:2D_plots_bsb} \end{figure} Therefore, a longer pulse length is required to fulfill the adiabatic criterion and to achieve efficient transfer. As can be seen in \ref{fig:2D_plots_bsb}, for this transition we can also identify a parameter range where we have efficient transfer for our given laser power and detuning. For the carrier transition, we achieve the best transfer for a delay scaling factor of around $s=0.7$ and a pulse length $>\unit{50}{\mu s}$, whereas for sideband transitions smaller scaling factors and longer pulse lengths are necessary. For both transitions, off-resonant excitation to the excited $^2P_{3/2}$ state limits the final transfer efficiency. The longer pulses required for sideband transitions limit the transfer efficiency to \unit{85}{\%} for a pulse duration of $\unit{>70}{\mu s}$ and a delay scaling factor of $\leq0.7$.
Using the simulations, we investigated the dynamics of population transfer for the lowest 15 motional Fock states for carrier and sideband transitions. \begin{figure}[tbp] \centering \includegraphics[width=0.46\textwidth]{car_diff_n-eps-converted-to.pdf} \label{fig:car_diff_n} \includegraphics[width=0.46\textwidth]{rsb_diff_n-eps-converted-to.pdf} \label{fig:rsb_diff_n} \caption[Simulated transfer dynamics of the population for the lowest 15 motional levels.]{\textbf{Simulated transfer dynamics of the population for the lowest 15 motional levels.} The population transfer as a function of time is displayed for a) the carrier (delay scaling factor $s=0.7$ and pulse length of $\unit{50}{\mu s}$) and b) the red sideband transition (delay scaling factor $s=0.4$ and pulse length of $\unit{100}{\mu s}$). Fock states are used in the simulations, where dark blue corresponds to the motional ground state and the color gets brighter for higher motional levels and changes from blue to green, to red and finally to yellow for $n=15$.} \label{fig:sim_time_evolution_diff_n} \end{figure} The results are shown in \ref{fig:sim_time_evolution_diff_n}, where we can see that for carrier transitions the transfer efficiency is reduced for higher motional levels. This is due to the decrease in carrier coupling strength associated with higher Fock state levels, scaling with the generalized Laguerre polynomial $L_n^0(\eta^2)$ \cite{wineland_experimental_1998}. In contrast, the coupling strength of blue sideband transitions initially increases with $n$ according to $\sqrt{n!/(n+1)!}L_n^1(\eta^2)$ and remains at a high level up to the largest investigated Fock state $n=15$. As a consequence, the transfer becomes more adiabatic for higher motional levels, since the state evolution speeds up while the pulse time remains constant. Experimentally, we do not probe each of the motional states individually, but rather a given distribution.
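The motional scaling of the coupling strengths quoted above can be evaluated directly from the generalized Laguerre polynomials; a short sketch using the quoted Lamb-Dicke factor $\eta\approx0.3$ (all values are normalized to the respective $n=0$ coupling):

```python
import numpy as np
from scipy.special import eval_genlaguerre

eta = 0.3          # Lamb-Dicke factor quoted in the text
n = np.arange(16)  # motional quantum numbers 0..15

# carrier coupling strength relative to n=0: L_n^0(eta^2)
carrier = eval_genlaguerre(n, 0, eta ** 2)

# blue sideband: sqrt(n!/(n+1)!) L_n^1(eta^2) = L_n^1(eta^2) / sqrt(n+1)
bsb = eval_genlaguerre(n, 1, eta ** 2) / np.sqrt(n + 1.0)
```

For this $\eta$, the carrier coupling decreases monotonically towards a zero crossing near $n\approx15$--$16$, while the sideband coupling first grows with $n$ and then stays high, consistent with the behavior described above.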
Therefore, we experimentally verified the independence of the STIRAP transfer from the initial motional state by investigating the pulse length dependence of the transfer efficiency for the ion initialized in the motional ground state and comparing it to the efficiency for the ion initialized in a thermal state. Furthermore, the counter-intuitive pulse sequence allows us to further speed up the STIRAP sequence by considering only the part of the laser interaction where the transfer takes place: The Stokes beam is switched on abruptly with the maximum intensity at the beginning of the sequence. While ramping down its intensity, the pump beam intensity is ramped up, both with a near-Gaussian shape. At maximum intensity of the pump beam, it is switched off rapidly. The effective transfer time for this sequence is given by $t_\mathrm{trans}=t_\mathrm{FWHM}+s\cdot t_\mathrm{FWHM}$. As can be seen from \ref{fig:comp_thermal_with_ground_state}, the STIRAP population transfer rate for sideband transitions is smaller when the ion is initialized in the motional ground state. This is a consequence of the smaller Rabi frequency of sideband transitions with $n=0$ compared to the states with $n>0$ that are also populated in a thermal state. For pulse lengths longer than $\unit{100}{\mu s}$, the two curves for the transfer on the blue sideband overlap with each other. This means that, for a sufficiently long pulse, the STIRAP transfer becomes independent of the motional state population of the ion. \begin{figure}[tbp] \centering \includegraphics[width=0.46\textwidth]{comp_thermal_BSB_RSB_20150529-eps-converted-to.pdf} \caption[STIRAP transfer efficiency as a function of pulse length for different initial motional states.]{\textbf{STIRAP transfer efficiency as a function of pulse length for different initial motional states.} The ion was initialized in the motional ground state (dark blue) and a thermal state (light blue) for a blue sideband transition.
Additionally, the transfer is displayed for a red sideband transition, where the ion was initialized in a thermal state (orange). The delay scaling factor for all measurements was $s=0.5$. The lines are moving averages of the data and are guides to the eye. For clarity, the error bars are omitted.} \label{fig:comp_thermal_with_ground_state} \end{figure} Additionally, the transfer efficiency for a red sideband transition is shown in \ref{fig:comp_thermal_with_ground_state} for the ion initialized in a thermal state of motion. We assume that for a pulse length of $\unit{120}{\mu s}$ the transfer is complete. In contrast to a blue sideband, the red sideband leaves the $n=0$ motional population untouched. We extract a ground state population of $0.08\pm 0.01$ by subtracting the respective signals averaged over pulse lengths between $\unit{120}{\mu s}$ and $\unit{150}{\mu s}$. The associated thermal distribution corresponds to a temperature of $1.3\pm0.2$ times the Doppler cooling temperature for \Mg{25}. This is in agreement with the temperature of 1.2 times the Doppler temperature measured after Doppler cooling using the technique described in \cite{Poulsen_sideband_2011}, as well as with the value of $1.6\pm 0.5$ times the Doppler temperature estimated from the measured decay of Rabi oscillations \cite{hemmerling_towards_2011}. \subsection{\label{sec:level5c} Comparison of Coherent and Adiabatic Transfer} After showing the feasibility of motional state-independent transfer, we compare the transfer efficiency of the STIRAP process with that of a $\pi$ pulse for a Doppler cooled ion.
\begin{figure}[tbp] \centering \includegraphics[width=0.46\textwidth]{bsb_comp_stirap_rabi_20141009-eps-converted-to.pdf} \label{fig:bsb_comp_stirap_rabi} \includegraphics[width=0.46\textwidth]{car_comp_stirap_rabi_20141009-eps-converted-to.pdf} \label{fig:car_comp_stirap_rabi} \caption[Population transfer dynamics of Rabi and STIRAP for a thermal motional state.]{\textbf{Population transfer dynamics of Rabi and STIRAP for a thermal motional state.} The transfer efficiency is displayed for a) the blue sideband and b) the carrier transition with the ion initialized by Doppler cooling. The delay scaling factor was chosen to be $s=0.5$ for sideband transitions and $s=0.7$ for carrier transitions. The lines are moving averages of the data and are guides to the eye. For clarity, the error bars are omitted.} \label{fig:comparison_Rabi_STIRAP} \end{figure} As expected, \ref{fig:comparison_Rabi_STIRAP} shows that the state evolution for Raman Rabi oscillations is faster than the corresponding STIRAP signal. After approximately $\unit{12}{\mu s}$, a maximum transfer efficiency of just below $\unit{80}{\%}$ is reached for the blue sideband transition (\ref{fig:comparison_Rabi_STIRAP}a)). This transfer is sensitive to system parameters, especially to variations in Rabi frequency. The STIRAP transfer, however, is slower but reaches a transfer efficiency higher than $\unit{85}{\%}$ for transfer times on the order of $\unit{150}{\mu s}$. It is worth mentioning that the STIRAP transfer is limited by off-resonant excitation, which can be circumvented using a larger Raman detuning, whereas the transfer using Rabi oscillations is fundamentally limited by the different couplings between the motional states. On the carrier, the stronger motional state dependence of the Rabi frequency leads to a faster dephasing of the Rabi oscillations, as can be seen in \ref{fig:comparison_Rabi_STIRAP}b).
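This dephasing can be reproduced by an incoherent average of Fock-state Rabi oscillations over a thermal distribution; in the sketch below, the Lamb-Dicke factor and the mean occupation are assumed, illustrative values:

```python
import numpy as np
from scipy.special import eval_genlaguerre

eta, nbar, n_max = 0.3, 11.5, 80          # assumed illustrative values
n = np.arange(n_max)
p_n = nbar ** n / (nbar + 1.0) ** (n + 1)  # thermal occupation probabilities
omega_n = eval_genlaguerre(n, 0, eta ** 2) # carrier Rabi frequency, units of Omega_0

t = np.linspace(0.0, 8.0 * np.pi, 600)     # time in units of 1/Omega_0
# thermally averaged excitation probability P(t) = sum_n p_n sin^2(Omega_n t / 2)
P = (p_n[:, None] * np.sin(np.outer(omega_n, t) / 2.0) ** 2).sum(axis=0)
```

Because the motional levels oscillate at different Rabi frequencies, the averaged signal never approaches unity and its maximum stays well below complete transfer.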
Therefore, the transfer efficiency using Raman Rabi oscillations is reduced to $\unit{50}{\%}$, whereas the transfer using the STIRAP technique is still on the order of $\unit{75}{\%}$ for our system parameters. The transfer efficiency on a carrier transition is reduced compared to the blue sideband transition since the Rabi frequency has a zero crossing at a motional state of $n=15$. Population in a range around this state cannot be transferred efficiently by either technique. \section{\label{sec:level6} Conclusion} We have investigated the STIRAP technique to transfer population between two hyperfine states of a \Mg{25} ion. A systematic study of the transfer efficiency on the carrier and motional sidebands was performed for different pulse lengths and pulse delays with the ion initialized in the ground state of motion. Good agreement was found with numerical simulations. The insensitivity of STIRAP to the exact Rabi frequency was exploited to perform population transfer in the presence of an inherent range of Rabi frequencies found, e.g., for thermally populated motional states. We demonstrated efficient population transfer on carrier and blue sideband transitions for Fock and thermal states. Experimentally, the transfer was limited to $\sim 85~\%$ by off-resonant excitation to states not involved in the STIRAP process. However, this is not a fundamental limitation and could be overcome using a Raman laser system with a larger detuning, allowing transfer efficiencies approaching \unit{100}{\%}. In contrast, population transfer using Raman Rabi oscillations was shown to be faster, but less efficient for thermal motional states. We used the difference in blue and red sideband STIRAP transfer efficiency to detect the motional ground state population, from which a temperature in agreement with the Doppler cooling temperature was extracted.
This ground state population detection technique is robust against fluctuations of the Rabi frequencies and does not depend on the specific motional distribution. More elaborate pulse sequences using a series of STIRAP red and blue sidebands would even allow for the measurement of higher motional state populations, enhancing the accuracy of the temperature determination. The same approach can be employed to determine the full motional state distribution \cite{muller_optimal_2015} or prepare strongly entangled states \cite{linington_robust_2008, noguchi_generation_2012}. Another important application of the presented technique is the detection of small forces via excitation of a normal mode in a trapped ion crystal. Here, it has the potential to enhance the sensitivity of relative mass measurements \cite{drewsen_nondestructive_2004}, indirect internal state detection \cite{hume_trapped-ion_2011, wolf_quantum_2015} and the detection of small electrical \cite{biercuk_ultrasensitive_2010, narayanan_electric_2011} and optical \cite{clark_detection_2010, biercuk_phase-coherent_2011, lin_resonant_2013} forces. We demonstrated the power of STIRAP in photon recoil spectroscopy \cite{wan_precision_2014, gebert_precision_2015}, where the small force imprinted onto a two-ion crystal during absorption of a few photons leaves the motional state of the ions distributed over several trap levels. A STIRAP pulse on the red sideband probes the residual motional ground state population, which represents the spectroscopy signal. \section{Acknowledgements} We acknowledge the support of DFG through QUEST and Grant SCHM2678/3-1. This work was financially supported by the State of Lower-Saxony, Hannover, Germany. Y.W. acknowledges support from the Braunschweig International Graduate School of Metrology (B-IGSM). We thank Ian D. Leroux for stimulating discussions. \section{References} \bibliographystyle{h-physrev3}
\section{Introduction} The interplay between coherent and incoherent processes is key in the quantum mechanical processing of information. Systems designed in order to perform a given communication or computational task have to cope with the detrimental effects of the surrounding world, which can affect an otherwise coherent process in many distinct ways~\cite{nielsen}. In the context of distributed quantum information processing (QIP), where networks of spatially remote quantum nodes are used in order to process information in a delocalized way, a good assumption is that each local processor is affected by its own environment. Such an architecture for a QIP device is currently at the focus of extensive and multifaceted theoretical and experimental endeavors~\cite{browne}. Extensive work has been performed on the incoherent dynamics resulting from the coupling of QIP systems with baths weakly perturbed by the system-induced back-action. The loss of quantum correlations due to the environment has recently received considerable attention~\cite{konrad}, with particular emphasis on the phenomenon of environment-induced entanglement sudden death~\cite{yu} ({\it i.e.}, complete disentanglement in a finite time), which has also been experimentally tested for the case of electromagnetic environments~\cite{experiments}. And yet, especially for solid-state implementations of quantum processors, the case of structured environments is extremely relevant~\cite{spinenv}. In this framework, the dynamics leading to complete disentanglement of two qubits coupled with a common spin environment (the so-called ``central-qubit model'') has been extensively studied~\cite{zi,sun,cecilia}. Exponential decay of the concurrence~\cite{wootters} between two qubits initially prepared in pure states has been observed, the decay rate being enhanced for working points close to the quantum phase transition of the environment~\cite{sun}.
In this paper, we study the dynamical evolution of the entanglement between two remote qubits coupled with mutually independent spin environments. The exact time dependence of their reduced density matrix is obtained using an original approach~\cite{DiFrancoEtal07,DiFrancoEtal08}, which allows us to track the dynamics of quantum averages of one- and two-spin observables and entanglement properties. In stark contrast with most of the available literature, here we consider a ``transversal'' qubit-environment coupling that allows for energy transfer, resulting in a dissipative-like behavior of the qubits and in a much richer entanglement dynamics. The transversal nature of the qubit-environment coupling allows us to reveal the occurrence of entanglement sudden death even when starting from initially pure states of the qubits, including the maximally entangled ones, which are extremely relevant for QIP. The above feature cannot emerge in the case of longitudinal couplings as reported in Refs.~\cite{zi,sun,cecilia,davide}. We show that, by setting the environment in different operating regimes, one can induce either entanglement sudden death or a freezing effect. Moreover, we shed light on the differences in behavior experienced by the {\it parallel} and {\it antiparallel} concurrence introduced in~\cite{FubiniEtal06}, whose interplay is crucial in determining the properties of the quantum correlations within the two-qubit state, in particular at the environmental quantum critical point, where the antiparallel entanglement appears to be better preserved than the parallel one. The paper is organized as follows. In Sec.~\ref{s.system} we define the physical system and the relevant interactions. In Sec.~\ref{s.dynamicsI} we provide exact expressions for the dynamics of a single qubit coupled with a spin-environment.
These results are used in Sec.~\ref{s.dynamicsII} in order to obtain the dynamics of the entanglement between two qubits, each coupled with a spin-environment via a local isotropic (Subsection~\ref{ss.isocoupling}) or anisotropic (Subsection~\ref{ss.anisocoupling}) exchange interaction. Finally, in Sec.~\ref{s.conclusions} we draw our conclusions and suggest a possible scenario where the main features of our physical model can be embodied. \section{The system} \label{s.system} \begin{figure} \includegraphics[width=0.55\linewidth]{modelloNN.eps} \caption{Sketch of the physical model. Each of a pair of entangled qubits, $Q_A$ and $Q_B$, is locally coupled with a spin chain, $\Gamma_A$ and $\Gamma_B$, via the Hamiltonians $\hat{\cal H}_{0_A}$ and $\hat{\cal H}_{0_B}$, respectively. The dynamics within each chain is ruled by the intra-chain Hamiltonians, $\hat{\cal H}_{\Gamma_A}$ and $\hat{\cal H}_{\Gamma_B}$. The wavy line indicates initial entanglement.} \label{f.system} \end{figure} We consider two non-interacting subsystems, $A$ and $B$, each consisting of a qubit $Q_\kappa$ ($\kappa=A,B$) coupled with a chain $\Gamma_\kappa$ of $N_\kappa$ interacting $S=1/2$ particles. Whenever useful, we will hereafter use the index $\kappa=A,B$ so as to generically refer to either of the two subsystems $A$ or $B$. As usual, the qubits $Q_\kappa$ are described in terms of $S=1/2$ spin operators, which we indicate as ${\hat{\bm s}}_{0\kappa}$. Operator ${\hat{\bm s}}_{n_\kappa}$ ($n_\kappa=1,...,N_\kappa$) corresponds to the spin located at site $n_\kappa$ of the chain $\Gamma_\kappa$. Notice that although the above notation suggests that the spin describing $Q_\kappa$ sits at site $0$ of the respective chain $\Gamma_\kappa$, this is just a useful convention and has no implication for the physical nature of $Q_A$ and $Q_B$.
The intra-chain interaction is of $XY$-Heisenberg type with local magnetic fields possibly applied in the $z$-direction \begin{equation} \label{e.HGamma_k} \hat{\cal{H}}_{\Gamma_{\kappa}}\!=\!-\!2\!\sum_{n_\kappa=1}^{N_\kappa-1} (J^x_{n_\kappa} \hat{s}^x_{n_\kappa} \hat{s}^x_{n_\kappa+1}\!+\! J^y_{n_\kappa} \hat{s}^y_{n_\kappa} \hat{s}^y_{n_\kappa+1}) -2\sum_{n_\kappa=1}^{N_\kappa} h_{n_\kappa} \hat{s}^z_{n_\kappa}, \end{equation} where $h_{n_\kappa}$ is the field applied at site $n_\kappa$ and $J^{x,y}_{n_\kappa}$ are the coupling strengths of the intra-chain interactions. Each $\Gamma_\kappa$ is open-ended, while neither the $J^{x,y}_{n_\kappa}$'s nor the magnetic fields $h_{n_\kappa}$ need to be uniform along the chains. Qubit $Q_\kappa$ is coupled with the first spin of its environment, embodied by $\Gamma_\kappa$, via an exchange interaction of strengths $J_{0_\kappa}^{x,y}$ and can be subjected to a local magnetic field $h_{0_\kappa}$ directed along the $z$-direction. The corresponding Hamiltonian reads \begin{equation} \label{e.H_0k} \hat{\cal{H}}_{0_\kappa}\!=\!-2( J^x_{0_\kappa}\hat{s}^x_{0_\kappa}\hat{s}^x_{1_\kappa}+ J^y_{0_\kappa}\hat{s}^y_{0_\kappa}\hat{s}^y_{1_\kappa})\!-\! 2h_{0_\kappa}\hat{s}^z_{0_\kappa}~. \end{equation} The Hamiltonian of the total system $\kappa$ is thus given by $\hat{\cal H}_{\kappa}=\hat{\cal H}_{0_\kappa}+\hat{\cal H}_{\Gamma_\kappa}$. In Fig.~\ref{f.system} we provide a sketch of the model considered. As stated previously, although $A$ and $B$ do not interact directly, they experience joint dynamics due to the possibility of sharing initial entanglement. Depending on the choice of the local interaction parameters and magnetic fields in $\hat{\cal{H}}_\kappa$, the effort required to tackle the model changes greatly. In Section~\ref{s.dynamicsI} we describe the approach used to achieve an exact solution for the dynamics of local observables.
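For small chains, the Hamiltonian of Eq.~(\ref{e.HGamma_k}) can be constructed numerically by embedding spin-1/2 operators with Kronecker products; a minimal Python sketch (couplings and fields below are arbitrary illustrative values):

```python
import numpy as np

# spin-1/2 operators (hbar = 1)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, n, N):
    # Embed a single-site operator at site n of an N-site chain
    out = np.array([[1.0 + 0j]])
    for m in range(N):
        out = np.kron(out, op if m == n else np.eye(2))
    return out

def xy_chain(Jx, Jy, h):
    # Open XY chain with transverse fields, cf. the Hamiltonian above
    N = len(h)
    H = np.zeros((2 ** N, 2 ** N), dtype=complex)
    for n in range(N - 1):
        H -= 2.0 * Jx[n] * site_op(sx, n, N) @ site_op(sx, n + 1, N)
        H -= 2.0 * Jy[n] * site_op(sy, n, N) @ site_op(sy, n + 1, N)
    for n in range(N):
        H -= 2.0 * h[n] * site_op(sz, n, N)
    return H

H = xy_chain(Jx=[1.0], Jy=[1.0], h=[0.0, 0.0])  # two-site XX example
evals = np.linalg.eigvalsh(H)
```

For the two-site isotropic example without fields, the spectrum $\{-1,0,0,1\}$ (in units of $J$) follows directly from the exchange term.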
\section{Exact single-qubit dynamics} \label{s.dynamicsI} We first concentrate on the dynamics of one subsystem only (thus dropping the index $\kappa$ throughout this Section). In particular, we are interested in the evolution of a given initial state of the qubit $Q$, as determined by its coupling with the chain $\Gamma$ and under the influence of the local magnetic field. We resort to the Heisenberg picture, which has recently been shown to provide a convenient framework for the analysis of quantum many-body systems of interacting particles~\cite{DiFrancoEtal07}. The key step is the use of the Campbell-Baker-Hausdorff (CBH) formula in handling the time-evolution operator of the system. For an operator $\hat{\cal{O}}$ associated with an observable of a physical system with Hamiltonian $\hat{\cal{H}}$, the CBH formula reads (we set $\hbar=1$ throughout the paper) \begin{equation} \label{e.BCH} \hat{\cal{O}}(t)= \sum_{p=0}^\infty\frac{(i t)^p}{p!} \left[\hat{\cal{H}},\left[\hat{\cal{H}},.. \left[\hat{\cal{H}},{\hat{\cal O}}\right]..\right]\right]. \end{equation} By virtue of the algebra satisfied by the Pauli matrices, we find that upon application of Eq.~(\ref{e.BCH}), the time evolution of the components of ${\hat{\bm s}}_0$ reads \begin{equation} \label{e.XYZ0(t)} \begin{aligned} \hat{s}^x_0(t)&=\frac{1}{2}\sum_{n=0}^N \left[\Pi^x_n(t)\hat{\sigma}^x_n+\Delta^x_n(t)\hat{\sigma}^y_n\right] \hat\sigma_0^z\hat P_n~, \\ \hat{s}^y_0(t)&=\frac{1}{2}\sum_{n=0}^N \left[\Pi^y_n(t)\hat{\sigma}^y_n-\Delta^y_n(t)\hat{\sigma}^x_n\right] \hat\sigma_0^z\hat P_n~, \\ \hat{s}^z_0(t)&=-2i\hat{s}^x_0(t)\hat{s}^y_0(t), \end{aligned} \end{equation} where $\hat{\sigma}_n^\alpha$ ($\alpha=x,y,z$) are the Pauli operators for the spin at site $n$ and $\hat P_n=\prod_{i=1}^{n-1}\hat{\sigma}^z_i$.
The time-dependent coefficients $\Pi_n^{x}(t)$ and $\Delta_n^{x}(t)$ are the components of the $(N+1)$-dimensional vectors ${\bm\Pi}^{x}(t)$ and ${\bm\Delta}^{x}(t)$ defined by \begin{eqnarray} {\bm\Pi}^x(t)&=&\sum_{p=0}^{\infty}(-1)^p\frac{t^{2p}}{(2p)!} ({\bm \tau}{\bm \tau}^T)^p{\bm v}, \label{e.PiX}\\ {\bm\Delta}^x(t)&=&\sum_{p=0}^{\infty}(-1)^{p}\frac{t^{2p+1}}{(2p+1)!} {\bm \tau^T}({\bm \tau}{\bm \tau}^{T})^p{\bm v}, \label{e.DeltaX} \end{eqnarray} where $T$ stands for transposition, the vector ${\bm v}$ has components $v_i=\delta_{i0}$~\cite{commentoordering} and the tri-diagonal matrix ${\bm \tau}$ has elements \begin{equation} \tau_{ij}= J_{i-1}^y\delta_{i-1,j}+J_i^x\delta_{i+1,j}-2 h_i\delta_{i,j}~. \label{e.T} \end{equation} The coefficients ${\bm \Pi}^{y}(t)$ and ${\bm \Delta}^{y}(t)$ are obtained from Eqs.~(\ref{e.PiX}) and (\ref{e.DeltaX}) by replacing ${\bm \tau}$ with ${\bm \tau}^T$. As $\bm{\tau\tau}^T$ is real and symmetric, there is an orthogonal matrix ${\bm U}$ that diagonalizes it, so that $(\bm{\tau\tau}^T)^p={\bm U}{\bm\Lambda}^{2p}{\bm U}^T$, with ${\bm \Lambda}$ the diagonal matrix whose elements $\lambda_{ij}=\lambda_i\delta_{i,j}$ are the (positive) square roots of the eigenvalues of ${\bm \tau\bm \tau}^T$. Similarly, there is an orthogonal matrix ${\bm V}$ that diagonalizes $\bm{\tau}^T\bm\tau$ and such that $(\bm{\tau}^T{\bm\tau})^p={\bm V}{\bm\Lambda}^{2p}{\bm V}^T$ with the same diagonal matrix ${\bm\Lambda}$ as above. 
As a consequence, after straightforward matrix algebra, one can sum up the time-dependent series in Eqs.~(\ref{e.PiX}) and (\ref{e.DeltaX}) to get \begin{equation} \label{e.PiDeltaxyResum} \begin{aligned} {\bm \Pi}^x(t)&={\bm U}{\bm \Omega}(t){\bm U}^T{\bm v},~~ {\bm \Delta}^x(t)={\bm V}{\bm\Sigma}(t){\bm U}^T{\bm v}\\ {\bm \Pi}^y(t)&={\bm V}{\bm \Omega}(t){\bm V}^T{\bm v},~~ {\bm \Delta}^y(t)={\bm U}{\bm\Sigma}(t){\bm V}^T{\bm v}, \end{aligned} \end{equation} where ${\bm \Omega}(t)$ and ${\bm \Sigma}(t)$ are diagonal matrices with elements $\Omega_{ij}(t)=\cos(\lambda_i t) \delta_{ij}$ and $\Sigma_{ij}(t)=\sin(\lambda_i t) \delta_{ij}$. The above equations hold regardless of the local magnetic fields or the couplings $J^{x,y}_n$ and $J^{x,y}_{0}$. Adopting the language of Refs.~\cite{DiFrancoEtal07,DiFrancoEtal08}, the components of ${\bm \Pi}^{x,y}(t)$ and ${\bm \Delta}^{x,y}(t)$ embody the {\it fluxes of information} from the qubit $Q$ to the spin chain $\Gamma$. By means of Eqs.~(\ref{e.XYZ0(t)}) one can determine the time evolution of the single-qubit density matrix \begin{equation} \label{e.rho0} \hat{\rho}_0(t)= \frac{1}{2}\hat{\openone}+\sum_{\alpha} \mean{\hat{s}^\alpha_0(t)}\hat{\sigma}^\alpha_0, \end{equation} where $\hat{\openone}$ is the identity operator and $\mean{\,\cdot\,}$ indicates the expectation value over the initial state of the system. Once the diagonalizations required for determining the vectors ${\bm\Pi}^{x(y)}(t)$ and ${\bm\Delta}^{x(y)}(t)$ are performed, we need to evaluate the expectation values entering Eq.~(\ref{e.rho0}). Such a task can be performed within two different scenarios. In the first, $\Gamma$ has a small number of spins, so that one can {\it design} the precise structure of its initial state. In the second, $\Gamma$ consists of a large number of spins, which puts the analysis in the thermodynamic limit, where we can benefit from specific symmetry properties of $\hat{\cal{H}}_\Gamma$.
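The resummation of Eqs.~(\ref{e.PiX}) into Eq.~(\ref{e.PiDeltaxyResum}) lends itself to a quick numerical check: the closed form obtained by diagonalizing $\bm{\tau\tau}^T$ must coincide with the truncated power series. A small sketch with arbitrary illustrative couplings and fields:

```python
import numpy as np
from math import factorial

# Tri-diagonal matrix tau of Eq. (7) for a short chain; values are illustrative
N = 5
Jx = np.linspace(0.8, 1.2, N + 1)
Jy = np.linspace(1.1, 0.9, N + 1)
h = np.linspace(-0.2, 0.2, N + 1)

tau = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i > 0:
        tau[i, i - 1] = Jy[i - 1]   # J^y_{i-1} delta_{i-1,j}
    if i < N:
        tau[i, i + 1] = Jx[i]       # J^x_i delta_{i+1,j}
    tau[i, i] = -2.0 * h[i]

v = np.zeros(N + 1); v[0] = 1.0
t = 0.7

# Closed form: Pi^x(t) = U cos(Lambda t) U^T v, with tau tau^T = U Lambda^2 U^T
lam2, U = np.linalg.eigh(tau @ tau.T)
lam = np.sqrt(np.clip(lam2, 0.0, None))
Pi_closed = U @ (np.cos(lam * t) * (U.T @ v))

# Truncated power series sum_p (-1)^p t^(2p)/(2p)! (tau tau^T)^p v
Pi_series = np.zeros(N + 1)
term = v.copy()
for p in range(25):
    Pi_series = Pi_series + (-1.0) ** p * t ** (2 * p) / factorial(2 * p) * term
    term = (tau @ tau.T) @ term
```

The two evaluations agree to machine precision, confirming the eigendecomposition-based resummation.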
Here, we concentrate on the latter situation, which has been the subject of several recent papers, due to the fact that it can be used for describing a proper qubit-environment system. We assume that $Q$ and $\Gamma$ are initially uncorrelated and set the former in an arbitrary single-qubit state $\hat{\rho}_0$ and the latter in an eigenstate $\ket{\Psi_\Gamma}$ of $\hat{\cal{H}}_\Gamma$. The initial state of the total system will thus be $\hat{\rho}=\hat{\rho}_0\otimes\ket{\Psi_\Gamma}\bra{\Psi_\Gamma}$. Under such conditions, the procedure described in the above section is most conveniently implemented as Eqs.~(\ref{e.XYZ0(t)}) greatly simplify due to the properties of $\hat{\cal{H}}_\Gamma$. In particular, the conservation rule ${[\hat{\cal{H}},\bigotimes_{n=0}^{N}\hat\sigma^z_{n}]=0}$, which expresses the parity invariance of the Hamiltonian, together with the property $\mean{\hat\sigma_n^\alpha\hat\sigma_m^\beta}=0$ for $\alpha\neq \beta$ and $n\neq m$, imply \begin{equation} \mean{\hat s^x_0(t)}{=}\frac{1}{2}\sum_{n=0}^N \mean{\hat\sigma_0^z} \left[\Pi^x_n(t)\mean{\hat P_n\hat\sigma^x_n} +\Delta^x_n(t)\mean{\hat P_n\hat\sigma^y_n}\right] \label{e.meanX0(t)} \end{equation} \begin{equation} \mean{\hat s^y_0(t)}{=}\frac{1}{2}\sum_{n=0}^N \mean{\hat\sigma_0^z} \left[\Pi^y_n(t)\mean{\hat P_n\hat\sigma^y_n}- \Delta^y_n(t)\mean{\hat P_n\hat\sigma^x_n}\right] \label{e.meanY0(t)} \end{equation} \begin{equation} \begin{aligned} &\mean{\hat s^z_0(t)}{=} \frac{1}{2}\sum_{n=0}^N \left[\Pi_n^x(t)\Pi_n^y(t) + \Delta_n^x(t)\Delta_n^y(t)\right] \mean{\hat\sigma_n^z}\label{e.meanZ0(t)}\\ -&\frac{1}{2}\sum_{n<m}^N \left[\Pi_n^y(t) \Pi_m^x(t) + \Delta_n^x(t) \Delta_m^y(t)\right] \mean{\hat P_{n+1}\hat P_m\hat\sigma_n^x\hat\sigma_m^x}\\ -&\frac{1}{2}\sum_{n<m}^N \left[\Pi_n^x(t) \Pi_m^y(t) + \Delta_n^y(t) \Delta_m^x(t)\right] \mean{\hat P_{n+1}\hat P_m\hat\sigma_n^y\hat\sigma_m^y}~. 
\end{aligned} \end{equation} Using Eqs.~(\ref{e.rho0})-(\ref{e.meanZ0(t)}) one can finally evaluate the single-qubit density matrix $\hat{\rho}_0(t)$. \section{Exact two-qubit dynamics} \label{s.dynamicsII} We now consider the complete system $A{\cup}B$. Under the assumption of non-interacting subsystems, the time propagator generated by $\hat{\cal H}_A+\hat{\cal H}_B$ factorizes as $\hat{\cal{U}}_A(t)\otimes\hat{\cal{U}}_B(t)$ with $\hat{\cal{U}}_\kappa(t)=\exp[-i\hat{\cal{H}}_\kappa t]$. Despite the absence of interaction, $A$ and $B$ might still display dynamical correlations depending on the initial state of the total system. In fact, if $A{\cup}B$ is prepared in an entangled state, its dynamical properties will depend on the structure of such initial state and the entanglement evolution will follow from the interactions ruling $A$ and $B$ separately. The values of the parameters entering $\hat{\cal{H}}_{\kappa}$ might thus be considered as {\it knobs} for the entanglement dynamics. We prepare the total system at time $t=0$ in \begin{equation} \hat\rho(0)= \hat\rho^{Q_A Q_B}(0)\otimes \hat\rho^{\Gamma_A}(0)\otimes\hat\rho^{\Gamma_B}(0), \label{e.initialState} \end{equation} so that we can use the results of Sec.~\ref{s.dynamicsI} and write the two-qubit density matrix at time $t$ in terms of the time-evolved single-qubit one. In fact, using the results of Ref.~\cite{BellomoEtal07}, we have \begin{equation} {\rho}^{Q_\kappa}_{ij}(t)= \sum_{p,r}K^{pr}_{ij}(t){\rho}^{Q_\kappa}_{pr}(0)~~~~~(\kappa,K=A,B) \label{e.rho1Q} \end{equation} with $\rho^{Q_\kappa}_{ij}(t)$ the elements of the single-qubit density matrix at time $t$ and ${\bm K}$ a tensor of time-dependent coefficients. 
The two-qubit state can then be written as \begin{equation} \label{e.rho2Q} {\rho}^{Q_A Q_B}_{IJ}(t)= \sum_{p_A,r_A}\sum_{p_B,r_B}A_{i_A j_A}^{p_A r_A}(t)B_{i_B j_B}^{p_B r_B}(t){\rho}^{Q_A Q_B}_{PR}(0), \end{equation} where lower-case indices take values $1$ and $2$, while capital ones are defined according to $L=l_A+l_B-l_A{\rm mod}2=1,\ldots,4$ (with $L=I,J,P,R$ and $l=i,j,p,r$). By setting $\hat\rho_{\Gamma_\kappa}(0)= \ket{\Psi_{\Gamma_\kappa}}\bra{\Psi_{\Gamma_\kappa}}$, {\it i.e.} by preparing both the chains in any of the eigenstates of their respective Hamiltonians, we can explicitly evaluate Eqs.~(\ref{e.meanX0(t)})-(\ref{e.meanZ0(t)}) and hence Eq.~(\ref{e.rho0}). By comparing the latter with Eq.~(\ref{e.rho1Q}) we finally determine the coefficients $K^{pr}_{ij}(t)$, thus fully specifying the dynamics of the two-qubit state $\hat\rho^{Q_A Q_B}(t)$. Our approach is fully general and can be used in a variety of different situations. Here we concentrate on the case where the initial state of the two qubits is one of the Bell states \begin{equation} \label{e.initialStateXX} \ket{\phi_\pm}=\frac{1}{\sqrt 2}\left(\ket{00} \pm \ket{11}\right),~~~~ \ket{\psi_\pm}=\frac{1}{\sqrt 2}\left(\ket{01} \pm \ket{10}\right), \end{equation} which we dub {\it parallel} ($\ket{\phi_\pm}$) and {\it antiparallel} ($\ket{\psi_\pm}$) Bell states~\cite{FubiniEtal06}. 
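The index bookkeeping in Eq.~(\ref{e.rho2Q}) amounts to contracting the two factorized single-qubit maps with the initial two-qubit state. A minimal sketch (our own helper, assuming row-major packing of the pair indices, and single-qubit tensors $K[i,j,p,r]$ such that $\rho_{ij}(t)=\sum_{p,r}K[i,j,p,r]\,\rho_{pr}(0)$):

```python
import numpy as np

def compose_maps(KA, KB, rho0):
    """Two-qubit state at time t from the factorized single-qubit maps:
    rho_{(iA iB),(jA jB)}(t) = sum_{pA rA pB rB} KA[iA,jA,pA,rA]
        * KB[iB,jB,pB,rB] * rho0_{(pA pB),(rA rB)}."""
    R0 = rho0.reshape(2, 2, 2, 2)             # indices [pA, pB, rA, rB]
    out = np.einsum('acpr,bdqs,pqrs->abcd', KA, KB, R0)
    return out.reshape(4, 4)                  # rows (iA iB), cols (jA jB)
```

With both maps set to the identity the initial state is returned unchanged, while a local $\hat\sigma^x$ map on qubit $A$ turns a parallel Bell state into an antiparallel one, as expected.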
By virtue of the symmetries of such states, the concurrence of the two-qubit state can be written as ${C(t){=}2\max\{0,C_{\uparrow\uparrow}(t),C_{\uparrow\downarrow}(t)\}}$, where we have introduced \begin{eqnarray} C_{\uparrow\downarrow}(t)&{=}& |\tilde\rho_{23}|{-}\sqrt{\tilde\rho_{11}\tilde\rho_{44}}~,\label{e.Cantiparallel}\\ C_{\uparrow\uparrow}(t)&{=}&|\tilde\rho_{14}|{-}\sqrt{\tilde\rho_{22}\tilde\rho_{33}}~\label{e.Cparallel} \end{eqnarray} with $C_{\uparrow\uparrow}(0)=1$ for $\hat\rho_0^{Q_AQ_B}(0)=\ket{\phi_\pm}\bra{\phi_\pm}$ and $C_{\uparrow\downarrow}(0)=1$ for $\hat\rho_0^{Q_AQ_B}(0)=\ket{\psi_\pm}\bra{\psi_\pm}$. Here the notation $\tilde\rho_{IJ}$ is used, for the sake of clarity, to indicate the matrix elements $\rho_{IJ}^{Q_A Q_B}(t)$ defined in Eq.~(\ref{e.rho2Q}). In what follows $C_{a}(t)$ [$C_p(t)$] is the concurrence corresponding to the case where the qubits are initially prepared in an antiparallel [parallel] Bell state. As for the environments, we take two identical chains of an (equal) even number of spins $N$ with homogeneous intra-chain couplings and field, {\it i.e.} $J^{x,y}_{n_\kappa}{=}J$ and $h_{n_\kappa}{=}h$. Both chains are prepared in the ground state of ${\cal{H}}_{\Gamma_\kappa}$, which is found via Jordan-Wigner and Fourier transformations~\cite{sachdev,pla}. Straightforward calculations yield the relevant mean values entering Eqs.~(\ref{e.XYZ0(t)}) as \begin{equation} \begin{aligned} \mean{\hat\sigma_n^z}&{=}1{-}\frac{2}{N+1} \left(k_F-\frac{\cos[(k_F+1)\vartheta_n]\sin[k_F \vartheta_n]}{\sin\vartheta_n}\right),\\ g_{nm}&{\equiv}\mean{\hat P_{n}\hat P_m\hat\sigma_n^x\hat\sigma_m^x}= \frac{\varphi_{n,k_F+1}\varphi_{m,k_F}-\varphi_{n,k_F}\varphi_{m,k_F+1}} {2\left(\cos\vartheta_n-\cos\vartheta_m\right)}, \end{aligned} \end{equation} where, for convenience of notation, we omit the index $\kappa$. 
We have introduced $\varphi_{j,k}{=}\sqrt{{2}/{(N+1)}}\sin(j \vartheta_k)$, ${\vartheta_k={k\pi}/({N+1})}$, ${k\in [1,N]}$ and the Fermi wave number $k_F$ is determined by the magnetic field~\cite{pla}. Moreover, due to the absence of symmetry breaking in $\ket{\Psi_{\Gamma _\kappa}}$, we have $\mean{\hat\sigma^{x,y}_{n_\kappa}}=0$. We consider $N$ finite but large enough to prevent finite-size effects from influencing our results. For the range of parameters considered here, $N=50$ is found to fulfill this condition and is therefore chosen as the length of both chains. Finally, we take the same coupling strength between each qubit and its chain, that is $J^\alpha_{0_\kappa}=J^\alpha_0$. \subsection{Isotropic coupling between the qubit and the chain} \label{ss.isocoupling} We now consider an isotropic coupling between each qubit and its respective environment, defined by $J^x_{0_\kappa}=J^y_{0_\kappa}=J_0$ in Eq.~(\ref{e.H_0k}). In the theory of open quantum systems, such coupling typically corresponds to a dissipative interaction treated in the rotating wave approximation \cite{heinzpeter}. For isotropic coupling, the total Hamiltonians $\hat{\cal{H}}_\kappa$ have rotational invariance around the $z$-axis. 
This implies $\bm\tau=\bm\tau^{T}$, and thus $\Pi_n^x(t)=\Pi_n^y(t)\equiv\Pi_n(t)$, $\Delta_n^x(t)=\Delta_n^y(t)\equiv\Delta_n(t)$, which allows us to write \begin{equation} \label{meanopsum} \begin{aligned} &\mean{\hat s^x_0(t)}= \frac{1}{2}(\Pi_0(t) \mean{\hat\sigma^x_0} + \Delta_0(t) \mean{\hat\sigma^y_0}),\\ &\mean{\hat s^y_0(t)}=\frac{1}{2}(\Pi_0(t)\mean{\hat\sigma^y_0} - \Delta_0(t) \mean{\hat\sigma^x_0}),\\ &\mean{\hat s^z_0(t)}=\frac{1}{2} \sum_{n=0}^{N} [\Pi^2_n(t) + \Delta^2_n(t)]\mean{\hat\sigma^z_n}\\ &-\frac{1}{4}\!\sum_{\substack{n\neq{m}=1}}^N [\Pi_n(t)\Pi_m(t)+\Delta_n(t)\Delta_m(t)]g_{nm}~.\\ \end{aligned} \end{equation} From the above expressions we see that single-qubit states initially directed along the $z$-axis of the Bloch sphere maintain such alignment regardless of the Hamiltonian parameters. On the other hand, initial ``equatorial'' states with $\mean{\hat s^z_0(0)}=0$ evolve in time and remain on the equatorial plane only for zero overall magnetic field. The rotational invariance around the $z$-axis of the total Hamiltonians $\hat{\cal{H}}_A$ and $\hat{\cal{H}}_B$ also has relevant consequences for the evolution of the entanglement, as shown in Section~\ref{ss.anisocoupling}. In the fully homogeneous case, {\it i.e.} for $h_0=h$ and $J_0=J$, we have $\lambda_q=-2(h-\cos\frac{q\pi}{N+2})$ and the $j^{th}$ component of the corresponding eigenvector has the same form as $\varphi_{j,q}$ with $N+1$ replaced by $N+2$. Hereafter, time will be measured in units of $J^{-1}$. 
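In this fully homogeneous case the spectrum quoted above is easy to verify numerically. The sketch below (with $J=1$ and $h=h_0=0$, so that $\lambda_q=2\cos\frac{q\pi}{N+2}$ for the $(N+1)$-dimensional matrix ${\bm\tau}$) compares the eigenvalues of the tri-diagonal matrix with the closed-form expression:

```python
import numpy as np

N = 30                      # chain sites 1..N plus the qubit site 0
M = N + 1                   # dimension of the tri-diagonal matrix tau
tau = np.zeros((M, M))
for i in range(M - 1):      # homogeneous couplings J = 1, fields h = h0 = 0
    tau[i, i + 1] = tau[i + 1, i] = 1.0

lam = np.sort(np.linalg.eigvalsh(tau))
q = np.arange(1, M + 1)
lam_exact = np.sort(2.0 * np.cos(q * np.pi / (N + 2)))
```

The two spectra coincide to machine precision, since ${\bm\tau}$ is then a tri-diagonal Toeplitz matrix with known eigenvalues.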
In the thermodynamic limit where $N\rightarrow\infty$, the summations can be replaced by integrals yielding \begin{equation} \label{exactxy0} \begin{aligned} \mean{\hat s^x_0(t)}&=\frac{{\cal J}_1(2t)}{2 t}\left[\cos(2ht)\mean{\hat\sigma^x_0}-\sin(2ht)\mean{\hat\sigma^y_0}\right],\\ \mean{\hat s^y_0(t)}&=\frac{{\cal J}_1(2t)}{2 t}\left[\cos(2ht)\mean{\hat\sigma^y_0}+\sin(2ht)\mean{\hat\sigma^x_0}\right], \end{aligned} \end{equation} and, for the $z$ component \begin{equation} \label{exactz0} \begin{aligned} &\mean{\hat s^z_0(t)}=\sum_{n=0}^N \frac{(n+1)^2}{2t^2}{\cal J}^2_{n+1}(2t) \mean{\hat\sigma^z_n}\\ &-\sum_{n\neq m}^{n,m \, even}(-1)^{\frac{n+m}{2}} \frac{(n+1)(m+1)}{4t^2} {\cal J}_{n+1}(2 t){\cal J}_{m+1}(2 t) g_{nm}\\ &+\sum_{n\neq m}^{n,m \, odd}(-1)^{\frac{n+m}{2}} \frac{(n+1)(m+1)}{4t^2} {\cal J}_{n+1}(2 t){\cal J}_{m+1}(2 t) g_{nm}, \end{aligned} \end{equation} where ${\cal J}_n(x)$ are the Bessel functions. Long-time expansions show that the mean value of the single-spin $x$ ($y$) component decays as $t^{-\frac{3}{2}}$. If ${h_0=h=0}$, Eq.~(\ref{exactz0}) reduces to $\mean{\hat s^z_0(t)}=\gamma(t) \mean{\hat\sigma^z_0}$ with $\gamma(t)={{\cal J}^2_{1}(2t)}/{2 t^2}$, yielding a $t^{-3}$ scaling at long times. Eqs.~(\ref{exactxy0}) and (\ref{exactz0}) give an excellent approximation also for finite $N$ ($\lesssim50$) within a time range where finite-size effects have not yet set in. As the latter are caused by reflection of propagating excitations at the boundary of the chain, we can neglect finite-size effects for times up to $\sim{N}$. In fact, as the maximum one-excitation group velocity is $2$, it takes at least a time $N$ for an excitation to leave $Q$ and travel back to it. Let us now analyze the evolution of the entanglement between the two qubits. 
We first notice that in the present case of isotropic coupling between $Q_\kappa$ and $\Gamma_\kappa$ and given that the exchange interaction along the chain is set to be of XX type, both Hamiltonians $\hat{\cal H}_\kappa$ have rotational invariance around the $z$ axis. This enforces a disjoint dynamics of the off-diagonal matrix elements ($\tilde\rho_{23}$ and $\tilde\rho_{14}$) entering Eq.~(\ref{e.Cantiparallel}) and (\ref{e.Cparallel}). As a consequence, if $C_{\uparrow\downarrow}(0)=0$ then $C_{\uparrow\downarrow}(t)=0$ at all times [the same holds for $C_{\uparrow\uparrow}(t)$]. Moreover, we find that $C_{a}(t){\geq} C_{p}(t)$ for any symmetric ${\bm \tau}$, irrespective of the Hamiltonian parameters. This means that antiparallel entanglement is more resilient to the effects of the spin environment. Using the analytical solutions given above we finally obtain the time evolution of the concurrence. For $h{=}0$ and in the fully homogeneous case of $J_0{=}J$, we find \begin{equation} C(t)=2\max\left\{ 0,\gamma^2(t)+\gamma(t)-\frac{1}{2}\right\} \end{equation} from which, by using the definition of $\gamma(t)$ in terms of Bessel functions, we infer that entanglement sudden death (ESD) occurs at $t_{\text{ESD}} \simeq 0.9037$. In fact, our results show that sudden death occurs also for $J_{0}\ne J$, exhibiting coupling-dependent characteristics. It is remarkable that disentanglement at finite time occurs, here, for pure initial states of the two qubits, a feature due to the specific form of qubit-environment coupling considered here. In the weak coupling regime $J_0 \ll J$, Fig.~\ref{f.weak} shows that the concurrence relaxation time grows as $J_0$ decreases. On the other hand, for strong coupling $J_0\gg J$, the non-Markovian character of the environments becomes evident [as shown in Fig.~\ref{f.strong}] and entanglement revivals occur due to the finite memory time of the spin chain. 
These revivals can be intuitively understood as the result of the strong coupling between the two-qubit system and the spins at the first site of each chain. A large $J_0$ gives rise to almost perfectly coherent interactions within such qubit-spin pair, only weakly damped by leakage into the rest of the chains. A revival time (the time after which the revival exhibits a maximum) of about $\pi/ J_0$ is inferred from Fig.~\ref{f.strong}, for $J_0/J \gg 1$. In Fig.~\ref{f.tESD} {\bf (a)} such findings are summarized in terms of the dependence of $\log_{10}[t_{\text{ESD}}]$ on $\log_{10}[J_0/J]$. The quasi-linear trend shown there reveals that the growing rate of $t_{\text{ESD}}$ for $J_0/J<1$ is slightly larger than the decreasing rate at $J_0/J>1$. In Fig.~\ref{f.tESD} {\bf (b)} we show the behavior of the revival time $\log_{10}[t_{\text{rev}}]$ against $\log_{10}[J_0/J]$, which also exhibits a quasi-linear trend. \begin{figure}[t] \includegraphics[width=.7\linewidth]{graficiesdMAURO2mod.eps} \caption{Concurrence between $Q_A$ and $Q_B$ for isotropic and weak coupling, $J_0/J =0.5,0.4,0.2$ and $0.125$ (going from left-most to right-most curve). No magnetic field is applied.} \label{f.weak} \end{figure} \begin{figure}[b] \includegraphics[width=0.65\linewidth]{graficirev2MAUROmod.eps} \caption{Concurrence between $Q_A$ and $Q_B$ for isotropic and strong coupling, $J_0/J=3.5,4,4.5$ and $5$ (solid, dashed, dot-dashed and dotted curve, respectively). No magnetic field is applied. We also show the case of $J_0/J=1$.} \label{f.strong} \end{figure} \begin{figure}[t] {\bf (a)}\hskip3.5cm{\bf (b)} \includegraphics[width=.5\linewidth]{esdloglog2N.eps}\includegraphics[width=.52\linewidth]{fileoriginerev2loglog10.eps} \caption{{\bf (a)} Time $t_{\rm ESD}$ after which the concurrence between $Q_A$ and $Q_B$ first exactly vanishes (entanglement sudden death), as a function of the coupling $J_0/J$ (double logarithmic scale). No magnetic field is applied. 
{\bf (b)} Entanglement revival time against $\log_{10}[J_0/J]$. No magnetic field is applied.} \label{f.tESD} \end{figure} \begin{figure}[b] \includegraphics[width=0.495\linewidth]{decouplenew.eps} \includegraphics[width=0.495\linewidth]{decocftnewN.eps} \caption {{\bf (a)} $C_{a}(t)$ versus $t$, for isotropic and homogeneous coupling ($J_0{=}J$) on the chains with $h_{\kappa}{=}0$ and $h_{0}/J{=}0.8, 0.9, 1, 1.1, 1.2, 1.5, 2$ (from bottom to top curve). {\bf (b)} Average parallel and anti-parallel concurrences and their difference plotted against $h_0$ with the same parameters as in panel {\bf (a)}. The lines joining the data points are simply a guide to the eye.} \label{f.iso-omo-fieldonqubits} \end{figure} \begin{figure}[hb] \center{\includegraphics[width=0.495\linewidth]{decoupleMauronew.eps} \includegraphics[width=0.495\linewidth]{cversush1newN.eps}} \caption {{\bf (a)} $C_{a}(t)$ versus $t$, for isotropic and homogeneous coupling ($J_0{=}J$), no field applied on the qubits ($h_{0_{\kappa}}{=}0$) and $h/J=0.8,0.9,1,1.1,1.5,2,$ and $5$ (from bottom to top curve). {\bf (b)} Average parallel and anti-parallel concurrences and their difference plotted against $h/J$ with the same parameters as in panel {\bf (a)}. The lines joining the data points are simply a guide to the eye. } \label{f.iso-omo-fieldonchains} \end{figure} The presence of finite magnetic fields significantly changes the dynamics of the entanglement. If $h_{0}>0$ and $h=0$, {\it i.e.} the field is only applied to the qubits, we expect an effective decoupling of each qubit from the dynamics of its environment, such that both $\hat{\bm s}_{0_\kappa}$ precess with a Larmor frequency that depends mostly on $h_{0}$, though it is subject to small quantitative corrections due to the interaction with $\Gamma_\kappa$ at rate $J_{0}$. In fact, in Fig.~\ref{f.iso-omo-fieldonqubits} we see that a larger amount of entanglement is maintained for considerably longer times as $h_0$ grows. 
Due to the above decoupling, as well as the condition $h_{0_A}=h_{0_B}$, correlations between $Q_A$ and $Q_B$ are preserved and so is their concurrence. In Fig.~\ref{f.iso-omo-fieldonqubits} {\bf (b)} the same effect is illustrated by the time-averaged concurrence ${\overline{C}_{a,p}= (1/\delta{t})\int_{\delta{t}}{C}_{a,p}(t')dt'}$ (the average is calculated over a time window $\delta{t}$ that excludes the oscillatory transients observed in Fig.~\ref{f.iso-omo-fieldonqubits} {\bf(a)} for $Jt\lesssim{10}$ and $Jt\gtrsim{45}$). The average entanglement grows with $h_0$. We further notice that parallel and antiparallel concurrences are almost identical (with $\overline{C}_a>\overline{C}_p$ as expected), though their difference vanishes only as $h_0\to\infty$. We have also found that for $h_{0_A}\neq h_{0_B}$ the phase relation between individual precessions is lost and no entanglement preservation is consequently observed. We now switch off the magnetic field on both qubits (that is, we take $h_0=0$) and apply a finite field $h>0$ on the environments. In this case a particularly interesting effect is observed as $h$ becomes larger than the saturation value $h=J$ and the dynamics of both chains slows down. As a consequence, after the transient, the dynamics of the correlations between the two qubits is considerably suppressed, due to the difficulty for the qubits to exchange excitations with saturated environments. A long-time entanglement memory effect results from this, which is evident in Fig.~\ref{f.iso-omo-fieldonchains} {\bf (a)}. There, we also notice a reduction of the wiggling, which further signals the freezing of the entanglement dynamics. It should be remarked that such an effect is profoundly different from the decoupling mechanism highlighted previously, where $\overline{C}_{a}{-}\overline{C}_{p}$ was a monotonic function of the magnetic field. 
Here, in fact, a peak occurs in the difference between time-averaged concurrence components when $h{=}J$, as shown in Fig.~\ref{f.iso-omo-fieldonchains} {\bf (b)}, revealing a drastic change in the entanglement behavior at the onset of an environmental QPT~\cite{Amico06}. Clearly, at the environmental critical point, the antiparallel entanglement is favored over the parallel one, which is at the origin of the peculiar behavior observed in Fig.~\ref{f.iso-omo-fieldonchains} {\bf (b)} for the dashed line. The drastic change in the behavior of the average concurrence observed at $h/J=1$ is unique to the mechanism discussed here and, as already stressed, clearly distinguished from the freezing effect due to mismatched frequencies at each qubit-environment subsystem. For $h{>}J$, the effect is fully established and the average concurrence increases, while $\overline{C}_{a}$ and $\overline{C}_{p}$ get closer to each other. Moreover, by defining ${\cal{Z}}(t)=(1/2t^2)\sum_{n=1}^N{(n+1)^2 {\cal J}^2_{n+1}(2t)}$ and using the exact analytical expressions (valid for ${h>J}$) \begin{equation} \begin{aligned} C_{a}(t)&{=2\max\left\{0 , \gamma(t){-} \sqrt{\frac{1}{16}{-}\frac{ \left[ \gamma^2(t){+}{\cal{Z}}^2(t)\right]}{2}+\left[\gamma^2(t){-}{\cal{Z}}^2(t)\right]^2}\right\}},\\ C_{p}(t)&= 2 \max \left\{ 0 , \gamma(t) + \gamma^2(t) + {\cal{Z}}^2(t)-\frac{1}{4}\right\}, \end{aligned} \end{equation} we see that when the environments are saturated ({\it i.e.} all the spins of the chains are aligned along the $z$ axis) the concurrence dynamics does not depend on the magnetic field. \subsection{Anisotropic coupling} \label{ss.anisocoupling} We finally consider the case of anisotropic coupling $J^x_{0}\neq J^y_0$ between each qubit and its respective environment (the chain). 
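For $h>J$ the analytic expressions above involve only Bessel functions and can be evaluated directly. A sketch (our own helpers; the truncation $N$ and the clipping of tiny negative round-off under the square root are implementation choices):

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind J_n(x)

def gamma(t):
    """gamma(t) = J_1(2t)^2 / (2 t^2), t > 0."""
    return jv(1, 2.0 * t) ** 2 / (2.0 * t ** 2)

def Zcal(t, N=200):
    """Z(t) = (1/2t^2) sum_{n=1}^{N} (n+1)^2 J_{n+1}(2t)^2."""
    n = np.arange(1, N + 1)
    return np.sum((n + 1) ** 2 * jv(n + 1, 2.0 * t) ** 2) / (2.0 * t ** 2)

def C_parallel(t):
    g, z = gamma(t), Zcal(t)
    return 2.0 * max(0.0, g + g ** 2 + z ** 2 - 0.25)

def C_antiparallel(t):
    g, z = gamma(t), Zcal(t)
    # clip round-off so the radicand stays non-negative
    rad = max(0.0, 1.0 / 16 - (g ** 2 + z ** 2) / 2 + (g ** 2 - z ** 2) ** 2)
    return 2.0 * max(0.0, g - np.sqrt(rad))
```

As a consistency check, both expressions tend to $1$ as $t\to 0^+$ (where $\gamma\to 1/2$ and ${\cal Z}\to 0$), as required for an initially maximally entangled pair, and decay at long times.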
In contrast with the case of isotropic coupling studied in Subsection~\ref{ss.isocoupling} and as a direct consequence of the fact that the total magnetization of $A$ and $B$ is not conserved, the off-diagonal elements $\tilde{\rho}_{23}$ and $\tilde{\rho}_{14}$ of the two-qubit reduced density matrix are not dynamically disjoint. This implies the possibility for the concurrence of the two-qubit state to switch from the parallel to the antiparallel type and vice versa. \begin{figure}[t] \includegraphics[width=0.7\linewidth]{saturationflip2new.eps} \caption{Concurrence between $Q_A$ and $Q_B$ for anisotropic coupling, $J_{0_A}^x=J_{0_B}^x=J$. The magnetic fields are set to zero everywhere but on the chain $\Gamma_A$, where the field is varied within the saturation region, $h_A=1,2,10$ (bottom to top). Solid (Dashed) lines are for $C_{\uparrow\uparrow}$ ($C_{\uparrow\downarrow}$). All quantities are dimensionless.} \label{f.aniso} \end{figure} For the sake of clarity, we consider extremely anisotropic conditions, setting ${J_{0_\kappa}^y=0}$. In the case of no magnetic field on both qubits, ${h_{0}=0}$, a very simple expression for the concurrence is found, due to the fact that $\mean{\hat s_{0_\kappa}^x(t)}$ is a constant of motion. In particular, if the two qubits are initially prepared in a combination of the two antiparallel Bell states, their concurrence will evolve as ${C_{\uparrow\downarrow}(t)=-C_{\uparrow\uparrow}(t)=\Pi^{y}_{0_A}(t)\Pi^{y}_{0_B}(t)}$. On the other hand, if parallel Bell states are used to build up the initial entangled state, it is ${C_{\uparrow\uparrow}(t)=-C_{\uparrow\downarrow}(t)=\Pi^{y}_{0_A}(t)\Pi^{y}_{0_B}(t)}$. As a consequence, if $\Pi^{y}_{0_A}(t)=\Pi^{y}_{0_B}(t)$ (holding when $\hat{\cal{H}}_A=\hat{\cal{H}}_B$), the two-qubit concurrence cannot switch between $C_{\uparrow\uparrow}$ and $C_{\uparrow\downarrow}$. 
On the contrary, if $\Pi^{y}_{0_A}(t)\neq{\Pi^{y}_{0_B}(t)}$, one can drive the two-qubit system from parallel-type to antiparallel entanglement and vice versa. In fact, the switching between parallel and antiparallel entanglement is observed when $\hat{\bm s}_{0_A}$ and $\hat{\bm s}_{0_B}$ are flipped, under the effect of the coupling with the first spin of their respective chain, at different frequencies (for instance when $J^x_A\neq J^x_B$), or when the dynamics of one subsystem is slowed down with respect to the other (for instance due to the fact that the field on one of the two chains is larger than the saturation value, as seen in Sec.~\ref{ss.isocoupling}). However, as discussed in Ref.~\cite{konrad}, a two-channel entanglement evolution has an upper bound given by the product of the one-channel dynamics. Therefore, for such an ``entanglement switching'' to occur, the Hamiltonian parameters of subsystems $A$ and $B$ should be set so as to retain high entanglement values. By fixing $J_B^x$, while setting $h_A\gg J^x_A$ in order to slow down the entanglement relaxation in the corresponding channel, the efficiency of the switching increases. This is clearly seen in Fig.~\ref{f.aniso}. On the other hand, we can decrease $J^x_A$ so that channel $B$ is far more responsible for the entanglement evolution. In this case too a very efficient switching mechanism is achieved, suggesting that the saturation region of the chain is associated with an effective decoupling of the qubit from its corresponding environment. Finally, we notice that, since the coefficients defined by Eqs.~(\ref{e.PiDeltaxyResum}) are regular oscillating functions of time, the concurrence can only vanish at a countable number of points on the time axis and cannot remain zero over finite intervals of time. Therefore, entanglement sudden death is not observed. 
\section{Conclusions} \label{s.conclusions} We have analyzed the dynamics of an entangled qubit-pair connected to two structured environments composed of open-ended and finite interacting spin chains. The intra-chain interaction has been modeled by an XX Heisenberg-like Hamiltonian, while the coupling between each qubit and its respective environment has been realized via an XY exchange term with the first spin of the chain. Application of uniform magnetic fields has also been considered. We have exactly determined the time-dependent two-qubit density matrix, starting from information gathered on the single-qubit dynamics. We have then provided analytical solutions both for the case of finite even $N$ and in the thermodynamic limit, thus getting access to a fully comprehensive and general analysis of entanglement evolution. Particular emphasis has been given to the relaxation-like dynamics implied by the specific type of coupling considered, which gives rise, under suitable conditions, to a sudden death of the entanglement that we have analyzed. Interestingly, we have unveiled the occurrence of ESD also when starting from initially pure two-qubit states, a peculiarity of our model that does not emerge under ``longitudinal" qubit-environment couplings. By manipulating the transverse magnetic field on the initially entangled qubits, we have shown the possibility of decoupling their dynamics from that of their respective environments, thereby allowing for a dynamical entanglement protection. On the other hand, when the magnetic fields applied to both chains are larger than the saturation value, the dynamics of the environments slows down and entanglement sudden death is not observed. Interesting features are observed when the environments undergo a quantum phase transition, which in this case happens when the field applied to the chain reaches the saturation value, in particular as far as the behaviour of parallel and antiparallel concurrence is concerned. 
Our work provides the analytical characterization of the transverse ({\it i.e.} energy exchanging) coupling between a simple and yet interesting out-of-equilibrium system (the two qubits) and non-trivial spin environments (the two chains). As spin models are now understood as effective descriptions of many different physical systems, our results hold promise for fertile application to a variety of cases. As a particularly interesting situation, one can think about the engineering of an effective spin environment by using one-dimensional arrays of small-capacitance superconducting Josephson junctions~\cite{haviland}, which show a sharp phase transition from Josephson-type behavior to Cooper-pair tunneling Coulomb blockade analogous to that of an $XY$ model. This implementation thus constitutes a nearly ideal test for our predictions, since the effective environmental parameters can be modified through the use of gate voltages and external magnetic fluxes. It would be very interesting to study the applicability and relevance of an analysis such as the one performed here to the investigation of the properties of intrinsically open systems in quantum chemistry and biology exposed to finite-sized environments. In this context, it is particularly significant that the mathematical model used in order to describe the radical pair mechanism~\cite{CaiEtAl10} bears some analogies with the central-qubit model. \section{Acknowledgements} We acknowledge discussions with L. Banchi and G. De Chiara. TJGA thanks Fondazione Carical for financial support. CDF is supported by the Irish Research Council for Science, Engineering and Technology. MP is supported by EPSRC (EP/G004579/1). MP and FP acknowledge support by the British Council/MIUR British-Italian Partnership Programme 2009-2010.
\section{Introduction} Giant planets on short period orbits (also called hot Jupiters) were the first planets to be discovered, and their numbers increased quickly during the first years of exoplanetary science. Their existence itself immediately posed a challenge to planet formation theories, which at the time were based on a single example, the Solar System. Despite almost three decades of discovery of hot Jupiters, there is still no consensus on their exact origin channel \citep{Dawson2018}. While it is still unclear whether hot Jupiters can form \emph{in situ} or not \citep{Batygin2016}, \emph{ex situ} formation processes require a mechanism responsible for transporting these giant planets from larger separations to the current close-in orbits. The two leading hypotheses for such large-scale migration that have been put forward are disc migration and high-eccentricity tidal migration. In the former scenario, the planets exchange angular momentum with the gas and dust particles in the circumstellar disc. As a result, the semi-major axis slowly shrinks, while the orbit remains circular \citep[e.g.,][]{Lin1996,Baruteau2014}. In contrast, the latter scenario could result in very eccentric and misaligned orbits, since it involves gravitational interactions with other bodies in the system \citep[e.g.,][]{Nagasawa2008,Chatterjee2008}. The advent of space-based transit search missions has led to the discovery of thousands of new exoplanet candidates \citep[see, e.g.,][]{Borucki2010,Huang2013,Livingston2018,Kruse2019}. Combining these discoveries with ground-based spectroscopic follow-up observations leads to a large sample of well characterised exoplanet systems, including the bulk density of the transiting planets, host star properties, orbital eccentricities, stellar obliquities and companionship of outer planets or stars \citep[see, e.g.,][]{Gandolfi2019,VanEylen2019, Carleo2020,Albrecht2021,Knudstrup2021,Smith2022}. 
Here we report on the discovery of three transiting hot Jupiters, TOI-1820b, TOI-2025b, and TOI-2158b. The transit-like features associated with these systems were detected by the Transiting Exoplanet Survey Satellite \citep[\tess;][]{Ricker15}. We have confirmed these as bona fide planets, and we have characterised the planets and their host systems in terms of masses and orbital eccentricities. For one system (TOI-2025) we additionally performed spectroscopic transit observations and used them to determine the sky-projected spin-orbit obliquity. During the preparation of this manuscript we became aware of the efforts of another team to announce the discovery of TOI-2025~b (Rodriguez in prep.). The results were determined independently, and the communication between the teams was strictly limited to the coordination of the manuscripts. In Section~\ref{sec:tessphot} we describe the \tess photometry and data extraction. We present our ground-based observations, which include both additional photometry and spectroscopic follow-up as well as speckle interferometry, in Section~\ref{sec:data}. \review{In Section~\ref{sec:stelpars} we explain how we obtain stellar parameters for the three systems.} The methodology behind our analysis is described in Section~\ref{sec:mcmc}. We discuss our results in Section~\ref{sec:results} before placing these planets in the context of the population from the literature and drawing our conclusions in Section~\ref{sec:conclusions}. 
\begin{figure*} \centering \includegraphics[width=\textwidth]{figures/lc_toi1820.pdf} \caption{{\bf Photometry for TOI-1820.} Our different photometric observations of TOI-1820 with the best-fitting transit model shown with a grey line, and the residuals following the subtraction of the best-fitting model shown below.} \label{fig:lc_toi1820} \end{figure*} \section{TESS photometry of candidate systems} \label{sec:tessphot} The transiting planet candidates, TOI-1820, TOI-2025, and TOI-2158 were identified by the Massachusetts Institute of Technology (MIT) Quick Look Pipeline \citep[QLP;][]{Huang2020} in a search of light curves extracted from the 30-minute cadence Full Frame Images (FFIs) using the box-least-squares \citep[BLS;][]{Kovacs2002,Hartman2016} algorithm. Transit signals were detected for all three systems, which were then alerted as TESS Objects of Interest (TOIs) by the TESS Science Office at MIT \citep{Guerrero21}. Two of the targets, namely TOI-2025 and TOI-2158, were subsequently put on the target list for 2 minute cadence. The 2 minute cadence data are processed by the Science Processing Operation Center \citep[SPOC;][]{SPOC} team at the NASA Ames Research Center, where light curves are extracted through simple aperture photometry \citep[SAP;][]{Twicken2010,Morris2020} and processed using the Presearch Data Conditioning \citep[PDC;][]{Smith2012,Stumpe2012,Stumpe2014} algorithm. \review{We downloaded and extracted all the \tess light curves from the target pixel files using the \texttt{lightkurve} \citep{lightkurve} package, where we use the implemented \texttt{RegressionCorrector} to correct for any background noise.} We also removed outliers. First we removed the transits from the light curve using a transit model with parameters from an initial fit; next we applied a Savitzky-Golay filter and identified outliers through $5\sigma$ clipping, which we then excluded from the unfiltered light curve with transits. 
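The outlier rejection described above can be sketched as follows (illustrative only; the window length, polynomial order and helper names are our assumptions, not the values used in the actual analysis):

```python
import numpy as np
from scipy.signal import savgol_filter

def clip_outliers(flux, transit_model, window=101, polyorder=3, nsigma=5.0):
    """Flag flux outliers: subtract an initial-fit transit model, smooth the
    detrended light curve with a Savitzky-Golay filter, and reject points
    deviating from the smooth trend by more than nsigma standard deviations.
    The returned boolean mask is applied to the unfiltered curve with transits."""
    detrended = flux - transit_model
    trend = savgol_filter(detrended, window, polyorder)
    resid = detrended - trend
    return np.abs(resid) < nsigma * np.std(resid)
```

On a synthetic light curve with slow stellar variability and a few injected cosmic-ray-like spikes, the mask removes the spikes while keeping the astrophysical signal untouched.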
For all three systems, we confirmed the presence of the transit-like features identified by \review{QLP} by performing an independent search using the BLS and Transit Least Squares \citep[TLS;][]{Hippke19b} algorithms. We furthermore searched for additional transits, without finding hints of any. \subsection{TOI-1820} TOI-1820 was observed in Sector 22 (February 18 2020 to March 18 2020), with CCD 1 in {\it TESS'} camera 1 with a cadence of 30~minutes. TOI-1820 was alerted on 17 April, 2020 with an SNR of 53. \review{In the top left of \fref{fig:lc_toi1820} we show the \tess light curve phase folded to the periodic transit signal occurring every $\sim$4.9~d with a depth of $\sim$0.6\%.} \subsection{TOI-2025} TOI-2025 was observed with a 30 minute cadence using {\it TESS'} camera 3 in Sector 14 (July 18 2019 to August 15 2019), Sectors 18-20 (November 2 2019 to January 21 2020), Sectors 24-26 (April 16 2020 to July 4 2020), as well as in 2~min. cadence in Sector 40 (June 24 2021 to July 23 2021) \review{and Sector 47 (December 30 2021 to January 28 2022)}, also with camera 3. \review{Since the \tess light curves of TOI-2025 display a periodic $\sim$8.9~d dip of $\sim$0.7\% with an SNR of 151, the candidate was announced as a TOI on 19 June, 2020. The two panels on the top left of \fref{fig:lc_toi2025} show the phase folded \tess light curves.} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/lc_toi2025.pdf} \caption{{\bf Photometry for TOI-2025.} Our different photometric observations of TOI-2025 with the best-fitting transit model shown with a grey line, and the residuals following the subtraction of the best-fitting model shown below.} \label{fig:lc_toi2025} \end{figure*} \subsection{TOI-2158} TOI-2158 was observed with {\it TESS'} camera 1 during Sector 26 (June 8 2020 to July 4 2020) with a cadence of 30 minutes, and in Sector 40 (June 24 2021 to July 23 2021) with a 2 min. cadence.
On 10 August, 2020 TOI-2158 was announced as a TOI with an SNR of 59. \review{The \tess light curve for TOI-2158 can be seen in the top of \fref{fig:lc_toi2158} phase folded onto the $\sim$8.6~d period, showing the $\sim$0.5\% decrease in flux.} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/lc_toi2158.pdf} \caption{{\bf Photometry for TOI-2158.} Our different photometric observations of TOI-2158 with the best-fitting transit model shown with a grey line, and the residuals following the subtraction of the best-fitting model shown below.} \label{fig:lc_toi2158} \end{figure*} \begin{table*}[t] \centering \caption{Parameters of the stellar hosts in the three systems of this study. } \begin{threeparttable} \begin{tabular}{l l c c c} \toprule & \tess Object of Interest &TOI-1820 & TOI-2025 & TOI-2158\\ & \tess Input Catalog & TIC 393831507 & TIC 394050135 & TIC 342642208 \\ & TYCHO-2 & TYC 1991-1863-1 & TYC 4595-797-1 & TYC 1577-691-1\\ \midrule $G$\tnote{a} & Gaia $G$ magnitude & 10.97 & 11.36 & 10.67 \\ $\alpha_\mathrm{J2000}$\tnote{a} & Right Ascension & 12:30:44.813 & 18:51:10.861 & 18:27:14.413 \\ $\delta_\mathrm{J2000}$\tnote{a} & Declination & 27:27:07.206 & 82:14:43.492 & 20:31:36.793 \\ $\mu_\alpha$\tnote{a} & Proper motion in R.A. (mas yr$^{-1}$) & 50.54$\pm$0.08 & 2.79$\pm$0.04 & -44.00$\pm$0.04 \\ $\mu_\delta$\tnote{a} & Proper motion in Dec.
(mas yr$^{-1}$) & -33.93$\pm$0.08 & -4.52$\pm$0.05 & 7.89$\pm$0.07 \\ $\varpi$\tnote{a} & Parallax (mas) & 4.00$\pm$0.06 & 2.95$\pm$0.02 & 5.01$\pm$0.04 \\ $d$\tnote{a} & Distance (pc) & 250$\pm$4 & 339$\pm$2 & 200$\pm$1 \\ $T_\mathrm{eff}$\tnote{b} & Effective temperature (K) & 5734$\pm$50 & 5880$\pm$53 & 5673$\pm$50 \\ $\log g$\tnote{b} & Surface gravity (dex) & 4.24$\pm$0.05 & 4.17$\pm$0.06 & 4.19$\pm$0.05 \\ $\rm [Fe/H]$\tnote{b} & Metallicity (dex) & 0.14$\pm$0.15 & 0.18$\pm$0.08 & 0.47$\pm$0.08 \\ $v\sin i_\star$\tnote{b} & Projected rotational velocity (km s$^{-1}$) & 4.5$\pm$0.8 & 6.0$\pm$0.3 & 3.7$\pm$0.5 \\ $A_\mathrm{V}$\tnote{c} & Extinction (mag) & 0.04$\pm$0.02 & 0.10$\pm$0.03 & 0.24$\pm$0.02 \\ $F_\mathrm{bol}$\tnote{c} & Bolometric flux (erg s$^{-1}$ cm$^{-2}$) & $(1.017 \pm 0.018)\times10^{-9}$ & $(7.02 \pm 0.16)\times10^{-10}$ & $(1.540 \pm 0.018)\times 10^{-9}$ \\ $R_\star$\tnote{c} & Radius (R$_\odot$) & 1.51$\pm$0.06 & 1.56$\pm$0.03 & 1.41$\pm$0.03 \\ $M_\star$\tnote{c} & Mass (M$_\odot$) & 1.04$\pm$0.13 & 1.32$\pm$0.14 & 1.12$\pm$0.12 \\ $P_\mathrm{rot}/\sin i$\tnote{c} & Projected rotation period (days) & 25$\pm$6 & 13.2$\pm$0.7 & 19$\pm$3 \\ $P_\mathrm{pred}$\tnote{c} & Predicted rotation period (days) & 40$\pm$2 & - & 43$\pm$3 \\ $\log$ R$^{\prime}_\mathrm{HK}$\tnote{c} & Activity & -5.37\tnote{d} & - & -5.06$\pm$0.05 \\ $\tau$\tnote{c} & Age (Gyr) & 11$\pm$2 & 1.7$\pm$0.2 & 8$\pm$1 \\ $\rho$\tnote{c} & Density (g cm$^{-3}$) & 0.43$\pm$0.07 & 0.49$\pm$0.06 & 0.56$\pm$0.07 \\ \bottomrule \end{tabular} \begin{tablenotes} \item[a] \gaia EDR3 \citep{GaiaEDR3}. \item[b] This work: SPC. \item[c] This work: SED. \item[d] This work: HIRES spectra.
\end{tablenotes} \end{threeparttable} \label{tab:targets} \end{table*} \section{Ground-based observations} \label{sec:data} In addition to \tess\ space-based photometry, we gathered ground-based photometry via the Las Cumbres Observatory Global Telescope \citep[LCOGT;][]{Brown:2013} as well as ground-based spectroscopic measurements from different telescopes. Reconnaissance spectroscopy was acquired with the High Resolution Echelle Spectrometer \citep[HIRES;][]{Vogt1994} located at the Keck Observatory, the Tillinghast Reflector Echelle Spectrograph \citep[TRES;][]{Furesz2008} situated at the Fred L. Whipple Observatory, Mt. Hopkins, AZ, USA, as well as the FIber-fed Echelle Spectrograph \citep[FIES;][]{Frandsen99,Telting14} at the Nordic Optical Telescope \citep[NOT;][]{NOT2010} of the Roque de los Muchachos observatory, La Palma, Spain. To confirm and characterize the systems in terms of masses, \review{bulk} densities, and orbital parameters, we monitored the systems with the FIES spectrograph, and the Tull Coude Spectrograph \citep{Tull1995} at the 2.7\,m Harlan J. Smith telescope at the McDonald Observatory, Texas, USA. The FIES and Tull spectrographs are both cross-dispersed spectrographs with resolving powers of 67,000 (in high-resolution mode) and 60,000, respectively. Finally, to investigate companionship in the systems we obtained speckle imaging using the 2.5-m reflector at the Caucasian Mountain Observatory of Sternberg Astronomical Institute \citep[CMO SAI;][]{Shatsky2020}. \subsection{Speckle interferometry with SPP} TOI-2158, TOI-2025, and TOI-1820 were observed using speckle interferometry with the SPeckle Polarimeter (SPP; \citealt{Safonov2017}) on the 2.5-m telescope at the Sternberg Astronomical Institute of Lomonosov Moscow State University (SAI MSU). The detector has a pixel scale of 20.6~mas px$^{-1}$, and the angular resolution was 83~mas.
The atmospheric dispersion compensation by two direct vision prisms allowed us to use the relatively broadband $I_c$ filter. For all targets, 4000 frames of 30~ms exposure were obtained. The detection limits are provided in \fref{fig:speckle}. For TOI-2158 and TOI-2025 we did not detect stellar companions, with limits for $\delta$mag for any potential companion of 6.5~mag and 7~mag at $1^{\prime\prime}$, respectively. \review{\subsubsection{Stellar companion to TOI-1820}} For TOI-1820 we detected a companion 4.0 magnitudes fainter than the primary on 2020-12-02 and 2021-07-15. The separation, position angle, and contrast were determined by approximating the average power spectrum with the model of a binary star \citep[see Eq. (9) in][]{Safonov2017}. As the weight for the approximation, we took the inverse squared uncertainty of the power spectrum determination. The results are presented in \tref{tab:spp_TOI1820}. All binarity parameters for the two dates coincide within the uncertainties. According to \gaia EDR3 \citep{GaiaEDR3}, the proper motion of TOI-1820 is relatively high, being $50.54\pm0.08$~mas\,yr$^{-1}$ and $-33.93\pm0.08$~mas\,yr$^{-1}$ along right ascension and declination, respectively. If the companion of TOI-1820 were a background star, its position with respect to TOI-1820\footnote{In the SIMBAD entry \url{http://simbad.u-strasbg.fr/simbad/sim-basic?Ident=TYC+1991-1863-1&submit=SIMBAD+search} TOI-1820 is listed as a member of the cluster Melotte 111, however, the proper motion ($\mu_\alpha\sim-12$~mas yr$^{-1}$, $\mu_\delta\sim-9$~mas yr$^{-1}$) and parallax ($\varpi\sim12$~mas) are significantly different from the \gaia EDR3 \citep{GaiaEDR3} values listed in \tref{tab:targets}.} would change by $37.694\pm0.051$~mas between the two epochs of our observations. Since the observed displacement is much smaller than this, we conclude that TOI-1820 and its companion are gravitationally bound.
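The expected background-star displacement follows directly from the quoted proper motions; a quick numerical check (the epoch difference is computed from the two observation dates only, ignoring the exact observation times, which accounts for the small difference from the quoted 37.694~mas):

```python
import math
from datetime import date

# Gaia EDR3 proper motions of TOI-1820 (mas/yr), as quoted in the text
pm_ra, pm_dec = 50.54, -33.93

# epoch difference between the two SPP observations
dt_yr = (date(2021, 7, 15) - date(2020, 12, 2)).days / 365.25

# total displacement a *background* star would show relative to TOI-1820
disp = math.hypot(pm_ra, pm_dec) * dt_yr   # ~37.5 mas
```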
With a \gaia parallax of 4~mas (see \tref{tab:targets}) we find a physical separation between the target and the companion of $\approx$110~AU. Furthermore, from our HIRES reconnaissance and using the algorithm from \citet{Kolbl2015}, we can constrain \review{this} secondary companion to only contribute 1\% in flux if the RV separation between the components in TOI-1820 is greater than 10~km~s$^{-1}$. If the RV separation is less than 10 km~s$^{-1}$, the flux of the secondary \review{would have been unconstrained without the speckle interferometry.} \begin{table} \caption{Results from the SPP speckle interferometry of TOI-1820: separation, position angle, and contrast.} \begin{tabular}{cccc} \hline Date & Separation & P.A. & $\Delta m$ \\ UT & mas & $^{\circ}$ & \\ \hline 2020-12-02 & $470\pm5$ & $102.6\pm0.3$ & $4.0\pm0.1$ \\ 2021-07-15 & $474\pm8$ & $101.7\pm0.9$ & $3.7\pm0.1$ \\ \hline \end{tabular} \label{tab:spp_TOI1820} \end{table} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/speckle_SAI_no_grid.png} \caption{{\bf Speckle interferometry.} SAI-2.5m speckle sensitivity curve and ACF for TOI-1820 (left panel), TOI-2025 (middle panel), and TOI-2158 (right panel). All images shown here were taken in the $I$-band.
Only the speckle image of TOI-1820 shows evidence of a nearby companion, seen as a bump in the ACF around 0.45~arcsec.} \label{fig:speckle} \end{figure*} \subsection{\review{Photometric Follow-up}} We acquired ground-based time-series follow-up photometry of TOI-1820, TOI-2025, and TOI-2158 as part of the \textit{TESS} Follow-up Observing Program \citep[TFOP;][]{collins:2019}\footnote{\url{https://tess.mit.edu/followup}} to attempt to (1) rule out or identify nearby eclipsing binaries (NEBs) as potential sources of the detection in the \textit{TESS} data, (2) detect the transit-like events on target to confirm the depth and thus the \textit{TESS} photometric deblending factor, (3) refine the \textit{TESS} ephemeris, and (4) place constraints on transit depth differences across optical filter bands. We used the {\tt TESS Transit Finder}, which is a customized version of the {\tt Tapir} software package \citep{Jensen:2013}, to schedule our transit observations. Unless otherwise noted, the images were calibrated and the photometric data were extracted using the {\tt AstroImageJ} ({\tt AIJ}) software package \citep{Collins:2017}. The observing facilities are described below, and the individual observations are detailed in Table \ref{table:transitfollowup}. The ground-based light curves for TOI-1820, TOI-2025, and TOI-2158 are shown in \fref{fig:lc_toi1820}, \fref{fig:lc_toi2025}, and \fref{fig:lc_toi2158}, respectively. We observed six transits using the Las Cumbres Observatory Global Telescope \citep[LCOGT;][]{Brown:2013} 1.0\,m and 0.4\,m networks. Three transits were observed in alternating filter mode, resulting in a total of nine light curves. The 1\,m telescopes are equipped with $4096\times4096$ pixel SINISTRO cameras having an image scale of $0\farcs389$ per pixel, resulting in a $26\arcmin\times26\arcmin$ field of view.
The 0.4\,m telescopes are equipped with $2048\times3072$ pixel SBIG STX6303 cameras having an image scale of 0$\farcs$57 pixel$^{-1}$, resulting in a $19\arcmin\times29\arcmin$ field of view. The images were calibrated by the standard LCOGT {\tt BANZAI} pipeline \citep{McCully:2018}. We observed a transit with KeplerCam on the 1.2\,m telescope at the Fred Lawrence Whipple Observatory using alternating filters, resulting in two light curves. The $4096\times4096$ Fairchild CCD 486 detector has an image scale of $0\farcs336$ per pixel, resulting in a $23\farcm1\times23\farcm1$ field of view. We observed one transit each with the Kotizarovci Private Observatory 0.3\,m telescope near Viskovo, Croatia, the C.R. Chambliss Astronomical Observatory (CRCAO) 0.6\,m telescope at Kutztown University near Kutztown, PA, and the Conti Private Observatory 0.3\,m telescope near Annapolis, MD. The Kotizarovci telescope is equipped with a $765\times510$ pixel SBIG ST7XME camera having an image scale of $1\farcs2$ per pixel, resulting in a $15\arcmin\times10\arcmin$ field of view. The CRCAO telescope is equipped with a $3072\times2048$ pixel SBIG STXL-6303E camera having an image scale of $0\farcs76$ after $2\times2$ pixel image binning, resulting in a $13\arcmin\times20\arcmin$ field of view. The Conti telescope is equipped with a $2750\times2200$ pixel StarlightXpress SX694M camera having an image scale of $1\farcs0$ after $2\times2$ pixel image binning, resulting in a $23\arcmin\times18\arcmin$ field of view. \subsection{RV Follow-up} Our NOT and McDonald Observatory monitoring was carried out from May 2020 to September 2021. In \tref{tab:rv_toi1820}, \tref{tab:rv_toi2025}, and \tref{tab:rv_toi2158} we list all epochs and RVs for TOI-1820, TOI-2025, and TOI-2158, respectively.
We reduced the FIES spectra using the methodology described in \citet{Buchhave2010} and \citet{Gandolfi2015}, which includes bias subtraction, flat fielding, order tracing and extraction, and wavelength calibration. We traced the RV drift of the instrument by acquiring long-exposed ThAr spectra ($\sim$80~s) immediately before and after each science observation. The science exposure time was set to 1800--2700~s, depending on the sky conditions and scheduling constraints. As our exposures were longer than 1200~s, we split each exposure into three sub-exposures to remove cosmic ray hits using a sigma clipping algorithm while combining the frames. RVs were derived via multi-order cross-correlations, using the first stellar spectrum as a template. For Tull we used 30 minute integrations to give an SNR of 60--70 per pixel. An $I_2$ gas absorption cell was used to provide the high-precision radial velocity metric. All Tull spectra were reduced and extracted using standard IRAF tasks. Radial velocities were extracted using the Austral code \citep{Endl2000}. To validate the planetary nature of the transiting signal in TOI-1820 and fully characterize the system, we acquired 18 spectra with FIES and 12 spectra with Tull, shown to the left in \fref{fig:rv_all}. \fref{fig:gls_all} displays the generalised Lomb-Scargle \citep[GLS;][]{Lomb76,Scargle82} periodograms with TOI-1820 to the left, in which the $\sim$4.9~d transiting signal has been overplotted as the dashed line. This periodicity corresponds to the peak we see in the GLS of the RVs. \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/rv_all.pdf} \caption{{\bf Radial velocities.} From left to right are our FIES (blue), FIES+ (orange), and Tull (green) RVs for TOI-1820, TOI-2025, and TOI-2158, respectively, where the black part of the error bars denote the jitter added in quadrature. The grey curves are the best-fitting models.
In the bottom row are the residuals after subtracting the best-fitting models.} \label{fig:rv_all} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/gls_all.pdf} \caption{{\bf Generalised Lomb-Scargle periodograms}. From left to right are the GLS periodograms for TOI-1820, TOI-2025, and TOI-2158, respectively. In the top row we show the GLS periodograms computed directly from the RVs, and in the bottom we have removed the orbit of the planet. The dashed vertical lines from left to right denote the 4.9~d, 8.9~d, and 8.6~d signals seen in the photometry for TOI-1820, TOI-2025, and TOI-2158, respectively. The solid lines are our baselines, i.e., $1/(t_\mathrm{last \ RV} - t_\mathrm{first \ RV})$ with $t_\mathrm{first \ RV}$ and $t_\mathrm{last \ RV}$ being the times for the first and last acquired RVs. \review{The horizontal dashed lines show the 1\% false alarm probability.}} \label{fig:gls_all} \end{figure*} We collected a total of 46 FIES RVs to validate the planetary nature of the signal as well as to characterise the TOI-2025 system. In the middle panel of \fref{fig:rv_all}, FIES+ refers to RVs collected after 1st of July, 2021 (see Section~\ref{sec:mcmc}). As before, the transiting signal coincides with the peak in the GLS periodogram in the middle panels of \fref{fig:gls_all}. For TOI-2158 we collected 30 FIES RVs and 23 Tull RVs shown in the right panel of \fref{fig:rv_all}. As for the other two systems, the peak associated with the $\sim$8.6~d period planet is detected in the GLS periodogram in \fref{fig:gls_all}, as it exceeds the 1\% false alarm probability. \review{\section{Stellar parameters}\label{sec:stelpars} We made use of the Stellar Parameter Classification \citep[SPC;][]{Buchhave2012,Buchhave2014,Bieryla2021} tool to obtain stellar parameters, where we reduced and extracted the spectra following the approach in \citet{Buchhave2010}.
For TOI-2025 and TOI-2158 we used the TRES spectra as reconnaissance, and for TOI-1820 we used our FIES spectra. The derived stellar parameters are tabulated in \tref{tab:targets}. In addition, for TOI-1820 we also used our HIRES spectra with \texttt{Specmatch-Synth} to derive stellar parameters as described in \citet{Petigura2017}. From the two HIRES spectra, we find $T_\mathrm{eff}=5695 \pm 100$~K, $\log g=4.1 \pm 0.1$, [Fe/H]$=0.01\pm0.06$, and $v \sin i = 3.07 \pm 0.77$~km~s$^{-1}$. We also estimated the R$^\prime_\mathrm{HK}$ activity indicator. As a result we obtained $\log $R$^{\prime}_\mathrm{HK}$ = -5.37, a hint that the star is inactive. } \subsection{SED} As an independent check on the derived stellar parameters, we performed an analysis of the broadband spectral energy distribution (SED) together with the {\it Gaia\/} EDR3 parallax in order to determine an empirical measurement of the stellar radius, following the procedures described in \citet{Stassun:2016,Stassun:2017,Stassun:2018}. In short, we pulled the $B_T V_T$ magnitudes from Tycho-2, the $BVgri$ magnitudes from APASS, the $JHK_S$ magnitudes from {\it 2MASS}, the W1--W4 magnitudes from {\it WISE}, and the $G G_{\rm BP} G_{\rm RP}$ magnitudes from {\it Gaia}. We also used the {\it GALEX} NUV flux when available. Together, the available photometry spans the stellar SED over the wavelength range 0.35--22~$\mu$m, and extends down to 0.2~$\mu$m when {\it GALEX} data are available (see \fref{fig:sed}). We performed a fit using Kurucz stellar atmosphere models, with the priors on effective temperature ($T_{\rm eff}$), surface gravity ($\log g$), and metallicity ([Fe/H]) from the spectroscopically determined values. The remaining free parameter is the extinction ($A_V$), which we restricted to the maximum line-of-sight value from the dust maps of \citet{Schlegel:1998}. 
\begin{figure*} \centering \includegraphics[width=\textwidth]{figures/seds.png} \caption{{\bf Spectral Energy Distribution.} The SEDs for TOI-1820 (left panel), TOI-2025 (middle panel), and TOI-2158 (right panel). Red symbols represent the observed photometric measurements, where the horizontal bars represent the effective width of the passband. Blue symbols are the model fluxes from the best-fit Kurucz atmosphere model (black).} \label{fig:sed} \end{figure*} \review{The resulting SED fits are shown in \fref{fig:sed}, with reduced $\chi^2$ values of 1.5, 1.2, and 1.2, respectively. The resulting best-fit parameters are summarized in \tref{tab:targets}. Integrating the (unreddened) model SED gives the bolometric flux at Earth, $F_{\rm bol}$, which with the $T_{\rm eff}$ and the {\it Gaia\/} EDR3 parallax \citep[with no systematic adjustment; see][]{StassunTorres:2021} gives the stellar radius. The stellar mass can then be determined empirically from the stellar radius and the spectroscopic $\log g$, and compared to the mass estimated from the empirical relations of \citet{Torres:2010}. Finally, we can estimate the age of the star from the spectroscopic $R'_{\rm HK}$ via the empirical relations of \citet{Mamajek:2008}, which we can also corroborate by comparing the stellar rotation period predicted at that age from the empirical gyrochronology relations of \citet{Mamajek:2008} against that determined from the stellar radius together with the spectroscopic $v\sin i$. These parameters are also summarised in \tref{tab:targets}.
The rather old ages inferred for TOI-1820 and TOI-2158 would predict slow stellar rotation periods of $P_{\rm rot} = 40 \pm 2$~d and $P_{\rm rot} = 43 \pm 3$~d, respectively, whereas the (projected) rotational periods estimated from the spectroscopic $v\sin i$ together with $R_\star$ give $P_{\rm rot} / \sin i = 24.9 \pm 6.3$~d and $P_{\rm rot} / \sin i = 19.3 \pm 3.2$~d, suggesting either somewhat younger ages, or else a process that kept the stars rotating faster than expected for their ages. } It is interesting that both TOI-1820 and TOI-2158 appear to be rotating faster than what would be expected given their ages, especially as both of these stars host a hot Jupiter. Discrepancy between ages inferred from isochrone fitting and gyrochronology among hot Jupiter hosts has been seen in studies by \citet{Brown2014} and \citet{Maxted2015}; both studies suggested tidal spin-up as a possible explanation. Further evidence for this has recently been found in \citet{Arevalo2021}. Tidal spin-up might therefore be the mechanism responsible for the discrepancy we are seeing in TOI-1820 and TOI-2158. Of course, this might also apply to the TOI-2025 system as this system also harbors a hot Jupiter, but as this system is younger, the effect might be less pronounced. \section{Joint analysis} \label{sec:mcmc} To estimate the planetary and orbital parameters, we fit the photometry and the RVs jointly, where we extract confidence intervals through Markov chain Monte Carlo (MCMC) sampling using the \texttt{emcee} package by \citet{Foreman}. We model the light curves using the \texttt{batman} package \citep{Kreidberg}, which utilizes the formalism by \citet{Mandel2002}. To account for any morphological light curve distortion \citep{Kipping2010} caused by the 30~min. sampling, we oversample our 30~min. cadence light curves to correspond to a sampling of 2~min.
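The idea behind this oversampling can be illustrated with a toy trapezoidal transit in pure \texttt{numpy} (the trapezoid shape and all numbers below are illustrative, not our actual \texttt{batman} model): the model is evaluated on a fine grid inside each 30-min exposure and then averaged, reproducing the smearing a finite integration introduces.

```python
import numpy as np

def trapezoid_transit(t, t0=0.0, depth=0.006, T14=2.77 / 24, T12=0.47 / 24):
    """Toy trapezoidal transit; depth and durations are roughly TOI-1820-like."""
    x = np.abs(np.asarray(t) - t0)
    flux = np.ones_like(x)
    flat = T14 / 2 - T12                     # half-width of the flat bottom
    ingress = (x > flat) & (x < T14 / 2)
    flux[x <= flat] = 1 - depth
    flux[ingress] = 1 - depth * (T14 / 2 - x[ingress]) / T12
    return flux

def binned_model(t_cadence, exp_time, oversample=15):
    """Average the model over a fine grid inside each exposure, mimicking
    the oversampling used to avoid smearing from the long-cadence data."""
    offsets = (np.arange(oversample) + 0.5) / oversample - 0.5
    fine = t_cadence[:, None] + offsets[None, :] * exp_time
    return trapezoid_transit(fine).mean(axis=1)

t = np.arange(-0.15, 0.15, 30 / 60 / 24)     # 30-min cadence [d]
smeared = binned_model(t, 30 / 60 / 24)      # exposure-averaged model
sharp = trapezoid_transit(t)                 # instantaneous model
```

The two models agree in the flat bottom but differ around ingress and egress, which is exactly where the 30-min integration distorts the light curve shape.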
In an attempt to mitigate correlated noise in the \tess photometry we make use of Gaussian Process (GP) Regression through the \texttt{celerite} package \citep{celerite}. We use the Matérn-3/2 kernel, which includes two hyperparameters: the amplitude of the noise, $A$, and the time scale, $\tau$. For our ground-based photometry we do not have long out-of-transit baselines. Therefore, we did not model the noise from these transits with GPs; instead we use a Savitzky-Golay filter to de-trend the data with each draw in our MCMC. To fit the RVs we use a Keplerian orbit, where we naturally have different systemic velocities, $\gamma$, for the RVs stemming from FIES and Tull, where relevant. Due to a refurbishment of the FIES spectrograph, an offset in RV was introduced between the RVs obtained before the 1st of July 2021 and those obtained after. We assign two independent systemic velocities and two independent jitter terms to RVs obtained before (FIES) and after (FIES+) this date. Our MCMC analysis for the three systems stepped in $\cos i$ instead of $i$, as well as in $\sqrt{e}\cos \omega$ and $\sqrt{e}\sin \omega$ instead of $e$ and $\omega$. Furthermore, the code stepped in the sum of the limb darkening parameters, i.e., $q_1 + q_2$, where we applied a Gaussian prior with a width of 0.1. We kept the difference, $q_1 - q_2$, fixed during the sampling. We retrieved the starting values of $q_1$ and $q_2$ for the \tess passband from the table in \citet{Claret17}, while we used the values from \citet{Claret2013} for the ground-based photometry. Furthermore, we used the $V$ band as a proxy for our spectroscopic transit observations of TOI-2025 with FIES. The initial and resulting values for the limb-darkening coefficients can be found in \tref{tab:ld_toi1820}, \tref{tab:ld_toi2025}, and \tref{tab:ld_toi2158}. We list all the adopted priors in \tref{tab:priors}, where a hyphen denotes that the associated parameter is not relevant for that run.
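The mapping from these stepping parameters back to the physical orbital elements is straightforward; a short sketch (the input values are chosen to be close to the TOI-2025 posterior medians, for illustration only):

```python
import math

def physical_from_stepping(cosi, secosw, sesinw):
    """Convert MCMC stepping parameters (cos i, sqrt(e)cos w, sqrt(e)sin w)
    to the physical parameters e, w, and i."""
    e = secosw**2 + sesinw**2                # (sqrt(e))^2 recovers e
    w = math.degrees(math.atan2(sesinw, secosw)) % 360.0
    i = math.degrees(math.acos(cosi))
    return e, w, i

# illustrative values near the TOI-2025 posterior medians
e, w, i = physical_from_stepping(0.025, -0.02, 0.661)
```

This parameterisation avoids the boundary pile-up at $e = 0$ and keeps an implicit uniform prior in $e$, which is why it is a common choice for eccentric-orbit sampling.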
We define our likelihood function as \begin{equation} \label{equ:likelihood} \log \mathcal{L} =-0.5 \sum_{i=1}^{N} \left [ \frac{(O_i - C_i)^2}{\sigma_i^2} + \log 2 \pi \sigma_i^2 \right] + \sum_{j=1}^{M} \log \mathcal{P}_{j}\, , \end{equation} where $N$ indicates the total number of data points from photometry and RVs. $C_i$ represents the model corresponding to the observed data point $O_i$. $\sigma_i$ represents the uncertainty for the $i$th data point, where we add a jitter term in quadrature and a penalty in the likelihood for the RVs. $\mathcal{P}_j$ is the prior on the $j$th parameter. \review{We run our MCMC until convergence, which we assess by looking at the rank-normalised $\hat{R}$ diagnostic test as implemented in the \texttt{rhat} module in \texttt{ArviZ} \citep{arviz_2019}.} \subsection{TOI-1820} Given the large separation of around 110~AU, the companion's orbital period must be rather long and its expected $K$-amplitude rather small, meaning that it will not measurably affect our RVs. The companion will, however, dilute the light curve. We therefore include a contaminating factor, where we write the total flux as a function of time as $F(t)=(F_1(t) + F_2)/(F_1 + F_2)$ with $F_1(t)$ and $F_1$ being the in- and out-of-transit flux from the planet-hosting star, respectively, and $F_2$ the (constant) flux from the contaminating source (or sources). Here, we parameterised the flux from the contaminating source relative to the host, $F_2/F_1$, through the magnitude difference, $\delta \rm M = -2.5 \log (F_2/F_1)$. \subsection{TOI-2025} As we have two sets of light curves with different cadences for TOI-2025 (2~min. and 30~min.), we apply two different oversampling factors, while using the same limb darkening coefficients for both. We observed a spectroscopic transit of TOI-2025 at the NOT (FIES+) on the night starting on the 8th of August, 2021, allowing us to determine the projected obliquity, $\lambda$, of the host star.
The RVs obtained during this transit night can be seen in \fref{fig:rm_toi2025}. We therefore also included a model for the RM effect using the algorithm by \citet{Hirano2011} for this fit. We used our SPC value in \tref{tab:targets} for $v \sin i_\star$ as a prior. For the macro- and micro-turbulence, we used priors stemming from the relations in \citet{Doyle2014} and \citet{Bruntt10}, respectively, along with the stellar parameters in \tref{tab:targets}. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/rm_TOI2025.pdf} \caption{{\bf The Rossiter-McLaughlin effect in TOI-2025.} Our in-transit observations of TOI-2025 with FIES+. {\it Top:} The Keplerian orbit and linear trend have been subtracted from the RVs to better show the RM effect, with the grey line being the best-fitting model. {\it Bottom:} Here we have further subtracted this best-fitting model from the RVs.} \label{fig:rm_toi2025} \end{figure} We carried out three MCMC runs for TOI-2025: one where we included an additional first-order acceleration parameter, $\dot{\gamma}$, one where we did not allow for any long-term drift (\fref{fig:rv_toi2025_drift}), and one where we fixed the eccentricity to 0 but allowed for a linear trend (\fref{fig:rv_toi2025_no_ecc}). \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/rv_TOI2025_drift.pdf} \caption{{\bf The long-term trend in TOI-2025.} The linear trend in TOI-2025. Symbols are the same as in \fref{fig:rv_all}, but here the RVs are plotted against time, and we have subtracted the planetary signal. {\it Top:} A fit where we only allow for a linear trend. {\it Bottom:} Here we do not include any long-term drift.} \label{fig:rv_toi2025_drift} \end{figure} \subsection{TOI-2158} Similarly to the case of TOI-2025, the RVs of TOI-2158 show a long-term trend. The Tull baseline is sufficiently long to reveal the curvature in the entire RV signal, \fref{fig:rv_toi2518_drift}.
We therefore performed three runs: 1) a run where we included both a first-order and a second-order, $\ddot{\gamma}$, acceleration parameter; 2) a run with a linear drift; and 3) a run without any long-term trend. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/rv_TOI2158_drift.pdf} \caption{{\bf The long-term trend in TOI-2158.} The quadratic trend in TOI-2158. Symbols are the same as in \fref{fig:rv_all}, but here the RVs are plotted against time, and we have subtracted the planetary signal. {\it Top:} A fit where we allow for a quadratic trend. {\it Middle:} Here we only allow for a linear trend. {\it Bottom:} Here we do not include any long-term drift.} \label{fig:rv_toi2518_drift} \end{figure} \section{Results} \label{sec:results} The results from the MCMC for our preferred orbital configuration for each of the systems are tabulated in \tref{tab:mcmc}. \begin{table*}[t] \centering \caption{{\bf Results from our MCMC analysis.}} \begin{threeparttable} \begin{tabular}{l l c c c} \toprule \multicolumn{2}{l}{Parameter} & TOI-1820 & TOI-2025 & TOI-2158 \\ \midrule $P$ & Period (days) & $4.860700\pm0.000010$ & $8.872086\pm0.000009$ & $8.60077\pm0.00003$ \\ $T_0$ & Mid-transit time (BJD$_\mathrm{TDB}$) & $2458903.0631_{-0.0006}^{+0.0007}$ & $2458690.2896\pm0.0005$ & $2459018.9224\pm0.0010$ \\ $R_\mathrm{p}/R_\star$ & Planet-to-star radius ratio & $0.0759_{-0.0014}^{+0.0013}$ & $0.0738_{-0.0005}^{+0.0004}$ & $0.0700\pm0.0009$ \\ $a/R_\star$ & Semi-major axis to star radius ratio & $9.8\pm0.6$ & $12.3_{-0.4}^{+0.5}$ & $11.4\pm0.5$ \\ $K$ & Velocity semi-amplitude (m s$^{-1}$) & $273\pm4$ & $402_{-15}^{+14}$ & $75\pm3$ \\ $\cos i$ & Cosine of inclination & $0.081\pm0.008$ & $0.025_{-0.025}^{+0.011}$ & $0.075_{-0.006}^{+0.005}$ \\ $\sqrt{e} \cos \omega$ & & $0.20\pm0.02$ & $-0.02\pm0.03$ & $0.10_{-0.07}^{+0.10}$ \\ $\sqrt{e} \sin \omega$ & & $0.032_{-0.032}^{+0.017}$ & $0.661_{-0.016}^{+0.019}$ & $0.10_{-0.10}^{+0.04}$ \\ $\gamma_1$ & Systemic
velocity FIES (m s$^{-1}$) & $227_{-4}^{+5}$ & $-372_{-17}^{+18}$ & $-9_{-12}^{+11}$ \\ $\gamma_2$ & Systemic velocity FIES+ (m s$^{-1}$) & - & $-98_{-52}^{+50}$ & $-55_{-16}^{+15}$ \\ $\gamma_3$ & Systemic velocity Tull (m s$^{-1}$) & $13947\pm4$ & - & $-64814_{-13}^{+11}$ \\ $\sigma_1$ & Jitter FIES (m s$^{-1}$) & $7_{-7}^{+3}$ & $44_{-14}^{+11}$ & $13_{-5}^{+4}$ \\ $\sigma_2$ & Jitter FIES+ (m s$^{-1}$) & - & $19\pm6$ & $10\pm3$ \\ $\sigma_3$ & Jitter Tull (m s$^{-1}$) & $6_{-6}^{+2}$ & - & $9_{-9}^{+4}$ \\ $\log A_1$ & GP amplitude \tess 30 min. & $-6.96_{-0.12}^{+0.11}$ & $-8.30\pm0.06$ & $-8.95_{-0.12}^{+0.13}$ \\ $\log \tau_1$ & GP time scale \tess 30 min. ($\log$ days) & $-0.50_{-0.15}^{+0.14}$ & $-0.31\pm0.11$ & $-1.7\pm0.4$ \\ $\log A_2$ & GP amplitude \tess 2 min. & - & $-7.81_{-0.14}^{+0.13}$ & $-7.271\pm0.017$ \\ $\log \tau_2$ & GP time scale \tess 2 min. ($\log$ days) & - & $-0.34_{-0.19}^{+0.16}$ & $-7.29_{-0.06}^{+0.08}$ \\ $\ddot{\gamma}$\tnote{a} & Quadratic trend (m s$^{-1}$ d$^{-2}$) & - & - & $-0.0043\pm0.0009$ \\ $\dot{\gamma}$\tnote{a,b} & Linear trend (m s$^{-1}$ d$^{-1}$) & - & $0.95\pm0.16$ & $1.4\pm0.2$ \\ $\rm \delta M$ & Dilution & $3.9\pm0.5$ & - & - \\ $\lambda$ & Projected obliquity ($^{\circ}$) & - & $9_{-34}^{+36}$ & - \\ $v \sin i_\star$ & Projected rotational velocity (km s$^{-1}$) & - & $6.0\pm0.3$ & - \\ $\zeta$ & Macro-turbulence (km s$^{-1}$) & - & $4.0_{-0.9}^{+1.0}$ & - \\ $\xi$ & Micro-turbulence (km s$^{-1}$) & - & $1.3_{-1.0}^{+0.6}$ & - \\ \hdashline $e$ & Eccentricity & $0.043_{-0.009}^{+0.008}$ & $0.44_{-0.02}^{+0.03}$ & $0.031_{-0.030}^{+0.013}$ \\ $\omega$ & Argument of periastron ($^\circ$) & $9_{-9}^{+5}$ & $92\pm2$ & $50_{-49}^{+19}$ \\ $i$ & Inclination ($^\circ$) & $85.4\pm0.5$ & $88.6_{-0.6}^{+1.4}$ & $85.7\pm0.3$ \\ $b$ & Impact parameter & $0.79\pm0.03$ & $0.31_{-0.31}^{+0.12}$ & $0.86_{-0.03}^{+0.02}$ \\ $T \rm _{4,1}$ & Total transit duration (hours) & $2.77_{-0.07}^{+0.06}$ & $3.620_{-0.023}^{+0.018}$ & 
$3.77\pm0.06$ \\ $T \rm _{2,1}$ & Time from 1st to 2nd contact (hours) & $0.47\pm0.07$ & $0.256_{-0.011}^{+0.009}$ & $0.75\pm0.07$ \\ $R_\mathrm{p}$ & Planet radius ($\rm R_J$) & $1.12\pm0.02$ & $1.120\pm0.009$ & $0.960\pm0.012$ \\ $M_\mathrm{p}$\tnote{c} & Planet mass ($\rm M_J$) & $2.3\pm0.2$ & $4.4\pm0.4$ & $0.82\pm0.08$ \\ $\rho_\mathrm{p}$ & Planet density (g~cm$^{-3}$) & $2.1\pm0.2$ & $3.9\pm0.3$ & $1.14\pm0.11$ \\ $T_\mathrm{eq}$\tnote{d} & Equilibrium temperature (K) & $1295\pm11$ & $1186\pm11$ & $1188\pm10$ \\ $a$ & Semi-major axis (AU) & $0.069\pm0.005$ & $0.089\pm0.004$ & $0.075\pm0.004$ \\ \bottomrule \end{tabular} \begin{tablenotes} \item The parameters above the dashed line are the stepping parameters, and below are the derived parameters. The value given is the median, and the uncertainty is the highest posterior density at a confidence level of 0.68. \item[a] Zero-point for TOI-2158 is 2459302.92570 BJD$_\mathrm{TDB}$. \item[b] Zero-point for TOI-2025 is 2459124.41436 BJD$_\mathrm{TDB}$. \item[c] Calculated from \eref{eq:mass}. \item[d] Following \citet{Kempton2018}. \end{tablenotes} \end{threeparttable} \label{tab:mcmc} \end{table*} We find that TOI-1820~b is a Jupiter-sized planet, $1.12 \pm 0.02$~R$\rm_J$\xspace, but significantly more massive, $2.3 \pm 0.2$~M$\rm_J$\xspace. With an orbital period of $4.860700 \pm 0.000010$~d\xspace, it is the shortest-period planet in our sample. TOI-2025~b has a similar size, $1.120 \pm 0.009$~R$\rm_J$\xspace, to TOI-1820~b, but about twice its mass, $4.4 \pm 0.4$~M$\rm_J$\xspace. At the other end of the mass spectrum we find TOI-2158~b with $0.82 \pm 0.08$~M$\rm_J$\xspace. TOI-2158~b is also somewhat smaller than the two other planets, with a radius of $0.960 \pm 0.012$~R$\rm_J$\xspace. For TOI-2025, we found evidence for a long-term RV trend, as can be seen in \fref{fig:rv_toi2025_drift}.
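The back-of-the-envelope companion constraints that such trends allow can be reproduced in a few lines of Python. This is an illustrative sketch: the $\sim$360~d RV baseline assumed here is a round number chosen for illustration, not a measured quantity, and the stellar masses are taken from the stellar-parameter table.

```python
import math

def msini_mjup(K, P_yr, m_star, e=0.0):
    """Companion minimum mass (M_Jup) from the standard RV mass function
    (the relation given as Eq. (eq:mass) in the text)."""
    return K * math.sqrt(1.0 - e**2) / 28.4 * P_yr**(1.0/3.0) * m_star**(2.0/3.0)

# TOI-2025: linear drift gamma_dot ~ 0.95 m/s/day over an assumed ~360 d baseline
gamma_dot, baseline_d = 0.95, 360.0
K_min = baseline_d * gamma_dot / 2.0        # lower limit, ~170 m/s

m_2yr  = msini_mjup(K_min, 2.0,  1.32)      # assumed 2 yr outer period
m_10yr = msini_mjup(K_min, 10.0, 1.32)      # assumed 10 yr outer period

# TOI-2158: quadratic trend -> order-of-magnitude period and K-amplitude
gdot, gddot = 1.4, -0.0043                  # m/s/d and m/s/d^2
P_out = -2.0 * gdot / gddot                 # ~650 d
K_out = abs(gddot) * P_out**2 / (4.0 * math.pi**2)

print(f"K_min ~ {K_min:.0f} m/s; M(2 yr) ~ {m_2yr:.1f} M_J, M(10 yr) ~ {m_10yr:.1f} M_J")
print(f"TOI-2158 outer companion: P ~ {P_out:.0f} d, K ~ {K_out:.0f} m/s")
```

With an assumed 2 or 10 yr outer period this gives roughly 9 or 15--16~M$_{\rm J}$ for a putative companion to TOI-2025, close to the numbers quoted in the text.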
We also find evidence for long-term RV changes in the TOI-2158 system, including evidence for curvature in the RVs, which we model with a quadratic term (\fref{fig:rv_toi2518_drift}). There is no significant evidence for long-term RV changes in TOI-1820. Assuming the long-term RV changes are due to companions farther out, we can glean information about their masses from some back-of-the-envelope calculations. For this we assume circular, edge-on orbits. If we happen to have caught the outer companion in TOI-2025 just after one quadrature and just before the next (phases 0.25 and 0.75), the peak-to-peak amplitude would be the rate of change multiplied by the difference in time between the first and last observation. Therefore, a lower limit for the $K$-amplitude can be estimated as $(t_\mathrm{last \ RV} - t_\mathrm{first \ RV})\times \dot{\gamma}/2$ \citep[see, e.g.,][]{Kane2019,Pepper2020}, resulting in some 170~m~s$^{-1}$. If we then assume a value for the period, we can use \begin{equation} \frac{M_{\rm p} \sin i}{\mathrm{M}_{\rm J}} = \frac{K \sqrt{1 - e^2}}{28.4~\mathrm{m}~\mathrm{s}^{-1}} \left ( \frac{P}{1~\mathrm{yr}} \right)^{1/3} \left ( \frac{M_\star}{\mathrm{M}_\odot} \right)^{2/3} \, \label{eq:mass} \end{equation} to get an estimate of the mass of the companion. As an illustrative example, assuming orbital periods of 2 or 10 years for such a companion would result in masses of $\approx9$~M$_{\rm J}$ or $\approx15$~M$_{\rm J}$, respectively. For TOI-2158 we found a quadratic trend. We can therefore obtain an order-of-magnitude estimate for the period of the outer companion as $P=-2\dot{\gamma}/\ddot{\gamma}$, which gives a period of around 650~d. Using the relation $K=|\ddot{\gamma}|P^2/4\pi^2$ derived in \citet{Kipping2011} with \eref{eq:mass} yields a mass of $\approx15$~M$_\mathrm{J}$. \subsection{The eccentricity and obliquity of TOI-2025~b} We find TOI-2025~b to travel on an eccentric orbit, $e=0.44^{+0.03}_{-0.02}$\xspace.
However, the argument of periastron is close to, and fully consistent with, $90^\circ$. This configuration can be deceptive when it comes to determining the eccentricity \citep[e.g.,][]{Laughlin2005}, because the RV curves are symmetric for values close to $|\omega|=90^\circ$, even for eccentric orbits. To further investigate the orbital eccentricity we carried out a few experiments. First, as mentioned, we ran an MCMC with $e$ fixed to 0. The best-fitting model from this run can be seen in \fref{fig:rv_toi2025_no_ecc}, where the residuals clearly show structure. The circular-orbit model apparently does not capture all the complexity present in the data. Consequently, the derived RV jitter terms for both FIES and FIES+ are significantly higher, with values of $108^{+18}_{-22}$~m~s$^{-1}$ and $66^{+10}_{-12}$~m~s$^{-1}$, respectively, as opposed to the values of $44^{+11}_{-14}$~m~s$^{-1}$\xspace and $19\pm6$~m~s$^{-1}$\xspace from the eccentric fit. As there might be stellar signals that are coherent on time scales of hours, but not days, and given that we have much higher sampling during the transit night, it is worthwhile to investigate whether, and to what extent, the eccentricity hinges on those measurements. Therefore, we performed a fit in which the eccentricity was allowed to vary, but where we only included the first and the last data point from the transit night. Here we naturally do not fit for the obliquity. From this we get values of $e=0.46^{+0.04}_{-0.03}$ and $\omega=92 \pm 4$~$^\circ$, consistent with the values from the run using all the RV data. \begin{figure*}[h!] \centering \includegraphics[width=\textwidth]{figures/bootstrap.png} \caption{{\bf Bootstrapping the orbit of TOI-2025~b.} {\it Left:} Bootstrap with 50,000 iterations (only 52 of these were omitted as the fit did not converge) displaying the eccentricity plotted against the argument of periastron.
The points are colour coded in terms of the resulting $\chi^{2}_\nu$ from the fit. {\it Right:} The average $\chi^2_{\nu}$ for a given data point, counted only when that point was drawn. We show the best-fitting eccentric model (i.e., from \tref{tab:mcmc}) on top, and the best-fitting model from our MCMC run with $e$ fixed to 0 in the bottom. Note that here we have omitted all but the first and last data point from the transit night (\fref{fig:rm_toi2025}), and we have not added the jitter term in quadrature.} \label{fig:bootstrap} \end{figure*} Next we performed a bootstrap experiment using the RV data only. In our bootstrap we used alternate realizations of the data in \tref{tab:rv_toi2025}, again excluding all but the first and last data point from the transit night. After redrawing a data set from the original data we fit for $e$, $\omega$, $\gamma_\mathrm{FIES}$, $\gamma_\mathrm{FIES+}$, $K$, and $\dot{\gamma}$. In \fref{fig:bootstrap} we plot the results for $e$ and $\omega$ for the 50,000 realizations. We have colour coded the points according to the reduced chi-squared, $\chi^{2}_{\nu}$, which shows that the lowest values for $\chi^{2}_{\nu}$ are found around $e\sim0.4$ and $\omega\sim90^\circ$. However, leaving out certain data points might result in a (more) circular orbit. Which data points ``drive'' the eccentricity can be seen in the right panel of \fref{fig:bootstrap}. We therefore conclude that our result for the eccentricity is significant, but hinges on relatively few data points. In addition to finding an eccentric orbit for the planet, we also measured the projected obliquity of TOI-2025, which we find to be consistent with \review{no misalignment}, $\lambda=9^{+36}_{-34}$~$^\circ$\xspace. The relevant transit RVs and our best-fitting model can be seen in \fref{fig:rm_toi2025}.
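For reference, the projected obliquity measured above relates to the true obliquity, $\psi$, through the standard geometric relation (with $i_\star$ the stellar inclination along the line of sight and $i$ the orbital inclination; see, e.g., \citealt{Albrecht2022}):
\[
\cos \psi = \cos i_\star \cos i + \sin i_\star \sin i \cos \lambda \, ,
\]
so that for $i_\star \simeq i \simeq 90^\circ$ one has $\psi \simeq |\lambda|$.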
Although we have only measured the projected obliquity, $\lambda$, we can argue that it is likely close to the true obliquity, $\psi$, which requires the stellar inclination along the line of sight, $i_\star$, to be close to $90^\circ$. That $i_\star$ is close to $90^\circ$ is supported by Figure~3 in \citet{Louden2021}, where a correlation between $T_\mathrm{eff}$ and $v \sin i_\star$ is plotted. From this plot we should not expect $v \sin i_\star$ to be markedly different from the measured value of $6.0 \pm 0.3$~km~s$^{-1}$, given the effective temperature of $\sim5900$~K that we have found for TOI-2025. This therefore suggests that the system is aligned. \section{Discussion and conclusions} \label{sec:conclusions} We validated and characterised three hot Jupiters discovered by \tess: TOI-1820~b, TOI-2025~b, and TOI-2158~b. Common to all three systems is that we see, in one way or another, evidence for companions. The outer companions may have played a role in the migration of the gas giants, thus shaping the final architecture of the systems. \citet{Ngo2016} argue that sites hosting outer stellar companions are either more favorable environments for gas giant formation at all separations, or that the presence of stellar companions might drive the inward migration, e.g., through Kozai-Lidov cycles \citep{Kozai1962,Lidov1962} or other dynamical processes. Through our speckle interferometry of TOI-1820, we detected a $\sim$4~mag fainter stellar companion at a distance of $\sim$110~AU from the bright host. It would be interesting to obtain good estimates of the stellar parameters for this companion in order to assess whether it would have been able to drive Kozai-Lidov cycles responsible for the migration. If the outer companions are planets within $\sim$1~AU of the stellar host, \citet{Becker2017} found that they should be coplanar with the inner hot Jupiters, suggesting that Kozai-Lidov migration would not be viable.
However, if these companions are found at greater distances (gas giants at $\gtrsim$5~AU or stellar companions at $\gtrsim$100~AU) they could still be inclined, and the formation of the hot Jupiter could take place through Kozai-Lidov migration \citep{Lai2018}. In the RVs of both TOI-2025 and TOI-2158 we see long-term trends: a linear trend in the case of TOI-2025 and a quadratic trend for the TOI-2158 system. In contrast to TOI-1820, the companions in TOI-2025 and TOI-2158 are likely of planetary, or at least substellar, nature and closer in (cf. the mass and period estimates in Section~\ref{sec:results}). As the companions in TOI-2025 and TOI-2158 are most likely found beyond 1~AU, given the (lower) estimates for their periods and the stellar masses, Kozai-Lidov migration could be a viable transport mechanism for TOI-2025~b and TOI-2158~b. \review{\tess might be able to shed more light on these outer companions as more sectors become available. According to the Web \tess Viewing Tool\footnote{\url{https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py}}, TOI-2025 should be observed again in Sectors 52, 53, and 58-60, and TOI-2158 is set to be observed in Sector 53.} In \fref{fig:tidal} we show the tidal diagram (left) and the modified tidal diagram (right) from \citet{Bonomo2017} with our measurements for TOI-1820~b, TOI-2025~b, and TOI-2158~b. We find that the orbital eccentricity of TOI-2158~b is consistent with $e=0$. This planet joins the small group of planets in \citet{Bonomo2017} with circular orbits and relatively large values of $a/a_{\rm R}$, $a_{\rm R}$ being the Roche limit. This would point to disc migration; however, given the age of $8 \pm 1$~Gyr for TOI-2158, the orbit of the planet might also have had sufficient time to circularise had the migration taken place through the high-eccentricity channel. For TOI-1820~b we find a modest eccentricity of $0.043^{+0.008}_{-0.009}$\xspace (about three times that of Earth).
In \fref{fig:tidal} the planets with modest eccentricities are found at various relative masses and distances. From the modified tidal diagram it appears that TOI-1820~b should have a circularisation time scale of around 1-2~Gyr, and with the age of $11 \pm 2$~Gyr for TOI-1820, this leaves plenty of time for the system to dampen the eccentricity in the case of high-eccentricity migration. However, this modest eccentricity is not irreconcilable with disc migration \citep{Dawson2018}. In contrast, TOI-2025~b belongs to the subgroup of systems with significant eccentricity. TOI-2025~b is too massive for the star to raise effective tides on the planet, meaning that the circularisation time scale is too long for the orbit to have been circularised \citep{Dawson2018}. The modified tidal diagram suggests that the circularisation time scale could be some $10$~Gyr, which is much longer than the age of $1.7 \pm 0.2$~Gyr for this system. By the same token, the planet seems to be massive enough to raise effective tides on the star, while the star is sufficiently cool for tidal dissipation to be efficient \citep{Winn2010,Albrecht12b}. The projected obliquity we find for TOI-2025 is in line with other massive planets on eccentric, aligned orbits, such as HD~147506b \citep{Winn2007}, HD~17156~b \citep{Narita2009}, and HAT-P-34~b \citep{Albrecht12b}. Contrary to these findings, \citet{RiceWangLaughlin2022} found that cool stars ($T_\mathrm{eff}<$6100~K) harboring eccentric planets tend to have higher obliquities, although, owing to the small sample size, it is still unclear whether misalignment is associated with orbital eccentricity. Given the orbital, stellar, and planetary parameters, the low projected obliquity in TOI-2025 might be the result of tidal alignment \citep{Albrecht2022}.
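These circularisation time scales can be put into rough numbers. As an illustrative sketch (not the calculation behind the tidal diagrams of \citealt{Bonomo2017}), we evaluate a Goldreich \& Soter-type, planet-tide-only circularisation time scale, $\tau_e = (4/63)\, Q^{\prime}_\mathrm{p} M_\mathrm{p} a^{13/2} / (\sqrt{G M_\star^3}\, R_\mathrm{p}^5)$, with $Q^{\prime}_\mathrm{p}=10^5$ as adopted for the isochrones; the prefactor and the neglect of stellar tides are assumptions of this sketch.

```python
import math

G, M_SUN, M_JUP, R_JUP, AU = 6.674e-11, 1.989e30, 1.898e27, 7.149e7, 1.496e11
GYR = 3.156e16  # seconds

def tau_circ_gyr(m_p_mj, r_p_rj, m_star_msun, a_au, qp=1e5):
    """Planet-tide-only circularisation time scale (Goldreich & Soter-type),
    tau_e = (4/63) Q'_p M_p a^(13/2) / (sqrt(G M_*^3) R_p^5), in Gyr."""
    m_p, r_p = m_p_mj * M_JUP, r_p_rj * R_JUP
    m_s, a = m_star_msun * M_SUN, a_au * AU
    tau = (4.0 / 63.0) * qp * m_p * a**6.5 / (math.sqrt(G * m_s**3) * r_p**5)
    return tau / GYR

# system parameters from this work
t1820 = tau_circ_gyr(2.3, 1.12, 1.04, 0.069)    # ~1-2 Gyr
t2025 = tau_circ_gyr(4.4, 1.12, 1.32, 0.089)    # ~10 Gyr
t2158 = tau_circ_gyr(0.82, 0.96, 1.12, 0.075)   # a few Gyr

print(f"tau_circ: TOI-1820b {t1820:.1f}, TOI-2025b {t2025:.1f}, TOI-2158b {t2158:.1f} Gyr")
```

The numbers land close to the 1--2~Gyr and $\sim$10~Gyr read off the modified tidal diagram, as expected for this family of expressions.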
If so, it would be interesting to further reduce the uncertainty of the obliquity measurement to test whether the system is aligned to within $1^\circ$, as recently observed in some systems \citep{Albrecht2022}. This would point to tidal alignment, as primordial alignment would presumably leave a certain spread, as it apparently has in the Solar System. For similar reasons, TOI-1820 and TOI-2158 would also be excellent RM targets; in addition, their higher impact parameters might allow for even higher accuracy. \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/tidal_diagram.pdf} \caption{{\bf Tidal diagrams.} Tidal diagrams for transiting giant planets from \citet{Bonomo2017}. Open circles denote planets on circular orbits with $\sigma_e < 0.05$. Plus signs denote planets with undetermined eccentricities, i.e., $\sigma_e > 0.05$; most of these are consistent with $e=0$. Triangles represent planets with significant but small eccentricities, $e<0.1$, and squares are eccentric systems with $e \geq 0.1$. Adhering to this notation we have shown the planets in our sample with the corresponding markers, colour coded for clarity. Created from the catalogue of \citet{Bonomo2017Cat}. {\it Left: Tidal diagram.} The solid and dashed lines show the position of a planet with a separation of $a=a_\mathrm{R}$ and $a=2a_\mathrm{R}$, respectively ($a_\mathrm{R}$ being the Roche limit), and radius $R_\mathrm{p}=1.2$~R$_\mathrm{J}$. The dotted line is a circularisation isochrone for a planet with $P=3$~d, $Q^{\prime}_\mathrm{p}=10^5$, and $e=0$. Note that in \citet{Bonomo2017} the planetary modified tidal quality factor, $Q^{\prime}_\mathrm{p}$, assumed for the isochrones is said to be $10^6$, while it seems that Fig.~8 (tidal diagram) and Fig.~9 (modified tidal diagram) were created using a value of $10^5$, which we have opted for here to make them comparable.
{\it Right: Modified tidal diagram.} The dotted, dashed, and solid line denote the 1, 7, and 14~Gyr circularisation time scales, respectively, assuming $e=0$ and $Q^{\prime}_\mathrm{p}=10^5$. } \label{fig:tidal} \end{figure*} \section{Acknowledgements} The authors would like to thank the staff at the Nordic Optical Telescope for their help and expertise. This paper includes data taken at the Nordic Optical Telescope under the programs IDs 59-210, 59-503, 61-510, 61-804, 62-506, and 63-505. This study is based on observations made with the Nordic Optical Telescope, owned in collaboration by the University of Turku and Aarhus University, and operated jointly by Aarhus University, the University of Turku and the University of Oslo, representing Denmark, Finland, and Norway, the University of Iceland and Stockholm University at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrofisica de Canarias. This paper includes data taken at The McDonald Observatory of The University of Texas at Austin. We acknowledge the use of public TESS data from pipelines at the TESS Science Office and at the TESS Science Processing Operations Center. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center for the production of the SPOC data products. Funding for the Stellar Astrophysics Centre is provided by The Danish National Research Foundation (Grant agreement no.: DNRF106). A.A.B., B.S.S., and I.A.S. acknowledge the support of Ministry of Science and Higher Education of the Russian Federation under the grant 075-15-2020-780(N13.1902.21.0039). The numerical results presented in this work were obtained at the Centre for Scientific Computing, Aarhus \url{http://phys.au.dk/forskning/cscaa/}. This work makes use of observations from the LCOGT network. 
Part of the LCOGT telescope time was granted by NOIRLab through the Mid-Scale Innovations Program (MSIP). MSIP is funded by NSF. P. R. and L. M. acknowledge support from National Science Foundation grant No. 1952545. This research made use of Astropy,\footnote{\url{http://www.astropy.org}} a community-developed core Python package for Astronomy \citep{art:astropy2013,art:astropy2018}. This research made use of matplotlib \citep{misc:hunter2007}. This research made use of TESScut \citep{art:brasseur2019}. This research made use of astroplan \citep{misc:morris2018}. This research made use of SciPy \citep{misc:scipy2020}. This research made use of corner \citep{corner}. \bibliographystyle{aa} \section{Introduction} Giant planets on short-period orbits (also called hot Jupiters) were among the first exoplanets to be discovered, and their numbers increased quickly during the first years of exoplanetary science. Their existence immediately posed a challenge to planet-formation theories, which at the time had only one example to build on, the Solar System. Despite almost three decades of hot-Jupiter discoveries, there is still no consensus on their exact origin channel \citep{Dawson2018}. While it is still unclear whether hot Jupiters can form \emph{in situ} or not \citep{Batygin2016}, \emph{ex situ} formation requires a mechanism responsible for transporting these giant planets from larger separations to their current close-in orbits. The two leading hypotheses for such large-scale migration are disc migration and high-eccentricity tidal migration. In the former scenario, the planets exchange angular momentum with the gas and dust particles in the circumstellar disc. As a result, the semi-major axis slowly shrinks, while the orbit remains circular \citep[e.g.,][]{Lin1996,Baruteau2014}.
In contrast, the latter scenario can result in very eccentric and misaligned orbits, since it involves gravitational interactions with other bodies in the system \citep[e.g.,][]{Nagasawa2008,Chatterjee2008}. The advent of space-based transit search missions has led to the discovery of thousands of new exoplanet candidates \citep[see, e.g.,][]{Borucki2010,Huang2013,Livingston2018,Kruse2019}. Combining these discoveries with ground-based spectroscopic follow-up observations yields a large sample of well-characterised exoplanet systems, including the bulk densities of the transiting planets, host star properties, orbital eccentricities, stellar obliquities, and the companionship of outer planets or stars \citep[see, e.g.,][]{Gandolfi2019,VanEylen2019,Carleo2020,Albrecht2021,Knudstrup2021,Smith2022}. Here we report on the discovery of three transiting hot Jupiters, TOI-1820~b, TOI-2025~b, and TOI-2158~b. The transit-like features associated with these systems were detected by the Transiting Exoplanet Survey Satellite \citep[\tess;][]{Ricker15}. We have confirmed these as bona fide planets, and we have characterised the planets and their host systems in terms of masses and orbital eccentricities. For one system (TOI-2025) we additionally performed spectroscopic transit observations and used them to determine the sky-projected spin-orbit obliquity. During the preparation of this manuscript we became aware of the efforts of another team to announce the discovery of TOI-2025~b (Rodriguez in prep.). The results were determined independently, and the communication between the teams was strictly limited to the coordination of the manuscripts. In Section~\ref{sec:tessphot} we describe the \tess photometry and data extraction. We present our ground-based observations, which include both additional photometry and spectroscopic follow-up as well as speckle interferometry, in Section~\ref{sec:data}.
\review{In Section~\ref{sec:stelpars} we explain how we obtain stellar parameters for the three systems.} The methodology behind our analysis is described in Section~\ref{sec:mcmc}. We present our results in Section~\ref{sec:results} before placing these planets in the context of the population from the literature and drawing our conclusions in Section~\ref{sec:conclusions}. \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/lc_toi1820.pdf} \caption{{\bf Photometry for TOI-1820.} Our different photometric observations of TOI-1820 with the best-fitting transit model shown with a grey line, and the residuals following the subtraction of the best-fitting model shown below.} \label{fig:lc_toi1820} \end{figure*} \section{TESS photometry of candidate systems} \label{sec:tessphot} The transiting planet candidates TOI-1820, TOI-2025, and TOI-2158 were identified by the Massachusetts Institute of Technology (MIT) Quick Look Pipeline \citep[QLP;][]{Huang2020} in a search of light curves extracted from the 30-minute cadence Full Frame Images (FFIs) using the box-least-squares \citep[BLS;][]{Kovacs2002,Hartman2016} algorithm. Transit signals were detected for all three systems, which were then alerted as TESS Objects of Interest (TOIs) by the TESS Science Office at MIT \citep{Guerrero21}. Two of the targets, namely TOI-2025 and TOI-2158, were subsequently put on the target list for 2-minute cadence observations. The 2-minute cadence data are processed by the Science Processing Operations Center \citep[SPOC;][]{SPOC} team at the NASA Ames Research Center, where light curves are extracted through simple aperture photometry \citep[SAP;][]{Twicken2010,Morris2020} and processed using the Presearch Data Conditioning \citep[PDC;][]{Smith2012,Stumpe2012,Stumpe2014} algorithm.
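The phase-fold-and-bin idea at the heart of a BLS-style transit search can be sketched in pure Python on synthetic data. This is a toy illustration, not the QLP implementation: the injected parameters, period grid, and binning are arbitrary choices, and the statistic is simply the depth of the deepest phase bin.

```python
import math, random

random.seed(1)

# synthetic 30-min-cadence light curve with box-shaped transits
# (period and depth loosely TOI-1820-like; all numbers are arbitrary)
P_true, t0, dur, depth, noise = 4.86, 1.3, 0.12, 0.006, 0.001
t = [0.0208 * i for i in range(1300)]              # ~27 d baseline

def box(ti):
    ph = (ti - t0) % P_true
    return 1.0 - depth if (ph < dur / 2 or ph > P_true - dur / 2) else 1.0

f = [box(ti) + random.gauss(0.0, noise) for ti in t]

def box_stat(period, nbins=50):
    """Fold on a trial period, bin in phase, and return the depth of the deepest bin."""
    s, c = [0.0] * nbins, [0] * nbins
    for ti, fi in zip(t, f):
        b = min(int(((ti / period) % 1.0) * nbins), nbins - 1)
        s[b] += fi
        c[b] += 1
    means = [s[i] / c[i] for i in range(nbins) if c[i]]
    return max(means) - min(means)

periods = [4.5 + 0.005 * k for k in range(141)]    # coarse grid around the signal
best_P = max(periods, key=box_stat)
print(f"recovered period: {best_P:.3f} d (injected {P_true} d)")
```

A real BLS additionally optimises over the transit epoch and duration and weights points by their uncertainties, but the folding logic is the same.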
\review{We downloaded and extracted all the \tess light curves from the target pixel files using the \texttt{lightkurve} \citep{lightkurve} package, where we used the implemented \texttt{RegressionCorrector} to correct for any background noise.} We also removed outliers: first we removed the transits from the light curve using a transit model with parameters from an initial fit; next we applied a Savitzky-Golay filter and identified outliers through $5\sigma$ clipping, which we then excluded from the unfiltered light curve with the transits. For all three systems, we confirmed the presence of the transit-like features identified by \review{QLP} by performing an independent search using the BLS and Transit Least Squares \citep[TLS;][]{Hippke19b} algorithms. We furthermore searched for additional transits, without finding hints of any. \subsection{TOI-1820} TOI-1820 was observed in Sector 22 (February 18 2020 to March 18 2020), with CCD 1 in {\it TESS'} camera 1 with a cadence of 30~minutes. TOI-1820 was alerted on 17 April, 2020 with an SNR of 53. \review{In the top left of \fref{fig:lc_toi1820} we show the \tess light curve phase folded to the periodic transit signal occurring every $\sim$4.9~d with a depth of $\sim$0.6\%.} \subsection{TOI-2025} TOI-2025 was observed with a 30-minute cadence using {\it TESS'} camera 3 in Sector 14 (July 18 2019 to August 15 2019), Sectors 18-20 (November 2 2019 to January 21 2020), and Sectors 24-26 (April 16 2020 to July 4 2020), as well as with a 2-minute cadence in Sector 40 (June 24 2021 to July 23 2021) \review{and Sector 47 (December 30 2021 to January 28 2022)}, also with camera 3. \review{Since the \tess light curves of TOI-2025 display a periodic $\sim$8.9~d dip of $\sim$0.7\% with an SNR of 151, the candidate was announced as a TOI on 19 June, 2020.
The two panels on the top left of \fref{fig:lc_toi2025} show the phase folded \tess light curves.} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/lc_toi2025.pdf} \caption{{\bf Photometry for TOI-2025.} Our different photometric observations of TOI-2025 with the best-fitting transit model shown with a grey line, and the residuals following the subtraction of the best-fitting model shown below.} \label{fig:lc_toi2025} \end{figure*} \subsection{TOI-2158} TOI-2158 was observed with {\it TESS'} camera 1 during Sector 26 (June 8 2020 to July 4 2020) with a cadence of 30 minutes, and in Sector 40 (June 24 2021 to July 23 2021) with a 2-minute cadence. On 10 August, 2020 TOI-2158 was announced as a TOI with an SNR of 59. \review{The \tess light curve for TOI-2158 can be seen in the top of \fref{fig:lc_toi2158}, phase folded onto the $\sim$8.6~d period and showing the $\sim$0.5\% decrease in flux.} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/lc_toi2158.pdf} \caption{{\bf Photometry for TOI-2158.} Our different photometric observations of TOI-2158 with the best-fitting transit model shown with a grey line, and the residuals following the subtraction of the best-fitting model shown below.} \label{fig:lc_toi2158} \end{figure*} \begin{table*}[t] \centering \caption{Parameters of the stellar hosts in the three systems of this study.} \begin{threeparttable} \begin{tabular}{l l c c c} \toprule & \tess Object of Interest & TOI-1820 & TOI-2025 & TOI-2158\\ & \tess Input Catalog & TIC 393831507 & TIC 394050135 & TIC 342642208 \\ & TYCHO-2 & TYC 1991-1863-1 & TYC 4595-797-1 & TYC 1577-691-1\\ \midrule $G$\tnote{a} & Gaia $G$ magnitude & 10.97 & 11.36 & 10.67 \\ $\alpha_\mathrm{J2000}$\tnote{a} & Right Ascension & 12:30:44.813 & 18:51:10.861 & 18:27:14.413 \\ $\delta_\mathrm{J2000}$\tnote{a} & Declination & 27:27:07.206 & 82:14:43.492 & 20:31:36.793 \\ $\mu_\alpha$\tnote{a} & Proper motion in R.A.
(mas yr$^{-1}$) & 50.54$\pm$0.08 & 2.79$\pm$0.04 & -44.00$\pm$0.04 \\ $\mu_\delta$\tnote{a} & Proper motion in Dec. (mas yr$^{-1}$) & -33.93$\pm$0.08 & -4.52$\pm$0.05 & 7.89$\pm$0.07 \\ $\varpi$\tnote{a} & Parallax (mas) & 4.00$\pm$0.06 & 2.95$\pm$0.02 & 5.01$\pm$0.04 \\ $\pi$\tnote{a} & Distance (pc) & 250$\pm$4 & 339$\pm$2 & 200$\pm$1 \\ $T_\mathrm{eff}$\tnote{b} & Effective temperature (K) & 5734$\pm$50 & 5880$\pm$53 & 5673$\pm$50 \\ $\log g$\tnote{b} & Surface gravity (dex) & 4.24$\pm$0.05 & 4.17$\pm$0.06 & 4.19$\pm$0.05 \\ $\rm [Fe/H]$\tnote{b} & Metallicity (dex) & 0.14$\pm$0.15 & 0.18$\pm$0.08 & 0.47$\pm$0.08 \\ $v\sin i_\star$\tnote{b} & Projected rotational velocity (km s$^{-1}$) & 4.5$\pm$0.8 & 6.0$\pm$0.3 & 3.7$\pm$0.5 \\ $A_\mathrm{V}$\tnote{c} & Extinction (mag) & 0.04$\pm$0.02 & 0.10$\pm$0.03 & 0.24$\pm$0.02 \\ $F_\mathrm{bol}$\tnote{c} & Bolometric flux (erg s$^{-1}$ cm$^{-2}$) & $(1.017 \pm 0.018)\times10^{-9}$ & $(7.02 \pm 0.16)\times10^{-10}$ & $(1.540 \pm 0.018)\times 10^{-9}$ \\ $R_\star$\tnote{c} & Radius (R$_\odot$) & 1.51$\pm$0.06 & 1.56$\pm$0.03 & 1.41$\pm$0.03 \\ $M_\star$\tnote{c} & Mass (M$_\odot$) & 1.04$\pm$0.13 & 1.32$\pm$0.14 & 1.12$\pm$0.12 \\ $P_\mathrm{rot}/\sin i$\tnote{c} & Rotation period (days) & 25$\pm$6 & 13.2$\pm$0.7 & 19$\pm$3 \\ $P_\mathrm{pred}$\tnote{c} & Predicted rotation period (days) & 40$\pm$2 & - & 43$\pm$3 \\ $\log$ R$^{\prime}_\mathrm{HK}$\tnote{c} & Activity & -5.37\tnote{d} & - & -5.06$\pm$0.05 \\ $\tau$\tnote{c} & Age (Gyr) & 11$\pm$2 & 1.7$\pm$0.2 & 8$\pm$1 \\ $\rho$\tnote{c} & Density (g cm$^{-3}$) & 0.43$\pm$0.07 & 0.49$\pm$0.06 & 0.56$\pm$0.07 \\ \bottomrule \end{tabular} \begin{tablenotes} \item[a] \gaia EDR3 \citep{GaiaEDR3}. \item[b] This work: SPC. \item[c] This work: SED. \item[d] This work: HIRES spectra. 
\end{tablenotes} \end{threeparttable} \label{tab:targets} \end{table*} \section{Ground-based observations} \label{sec:data} In addition to the \tess\ space-based photometry, we gathered ground-based photometry via the Las Cumbres Observatory Global Telescope \citep[LCOGT;][]{Brown:2013} network as well as ground-based spectroscopic measurements from different telescopes. Reconnaissance spectroscopy was acquired with the High Resolution Echelle Spectrometer \citep[HIRES;][]{Vogt1994} located at the Keck Observatory, the Tillinghast Reflector Echelle Spectrograph \citep[TRES;][]{Furesz2008} situated at the Fred L. Whipple Observatory, Mt. Hopkins, AZ, USA, as well as the FIber-fed Echelle Spectrograph \citep[FIES;][]{Frandsen99,Telting14} at the Nordic Optical Telescope \citep[NOT;][]{NOT2010} of the Roque de los Muchachos observatory, La Palma, Spain. To confirm and characterise the systems in terms of masses, \review{bulk} densities, and orbital parameters, we monitored the systems with the FIES spectrograph and the Tull Coud\'e Spectrograph \citep{Tull1995} at the 2.7\,m Harlan J. Smith telescope at the McDonald Observatory, Texas, USA. The FIES and Tull spectrographs are both cross-dispersed spectrographs with resolving powers of 67,000 (in high-resolution mode) and 60,000, respectively. Finally, to investigate companionship in the systems we obtained speckle imaging using the 2.5-m reflector at the Caucasian Mountain Observatory of Sternberg Astronomical Institute \citep[CMO SAI;][]{Shatsky2020}. \subsection{Speckle interferometry with SPP} TOI-1820, TOI-2025, and TOI-2158 were observed using speckle interferometry with the SPeckle Polarimeter (SPP; \citealt{Safonov2017}) on the 2.5-m telescope at the Sternberg Astronomical Institute of Lomonosov Moscow State University (SAI MSU). The detector has a pixel scale of 20.6~mas px$^{-1}$, and the angular resolution was 83~mas.
The atmospheric dispersion compensation by two direct-vision prisms allowed us to use the relatively broadband $I_c$ filter. For each target, 4000 frames of 30~ms were obtained. The detection limits are provided in \fref{fig:speckle}. For TOI-2158 and TOI-2025 we did not detect stellar companions, with contrast limits ($\Delta$mag) for any potential companion of 6.5~mag and 7~mag at $1^{\prime\prime}$, respectively. \review{\subsubsection{Stellar companion to TOI-1820}} For TOI-1820 we detected a companion 4.0 magnitudes fainter than the primary on 2020-12-02 and 2021-07-15. The separation, position angle, and contrast were determined by fitting the average power spectrum with the model of a binary star \citep[see Eq. (9) in][]{Safonov2017}. As weights for the fit, we used the inverse squared uncertainties of the power spectrum determination. The results are presented in \tref{tab:spp_TOI1820}. All binarity parameters for the two dates coincide within the uncertainties. According to \gaia EDR3 \citep{GaiaEDR3}, the proper motion of TOI-1820 is relatively high, being $50.54\pm0.08$~mas\,yr$^{-1}$ and $-33.93\pm0.08$~mas\,yr$^{-1}$ along right ascension and declination, respectively. If the companion of TOI-1820 were a background star, its position with respect to TOI-1820\footnote{In the SIMBAD entry \url{http://simbad.u-strasbg.fr/simbad/sim-basic?Ident=TYC+1991-1863-1&submit=SIMBAD+search} TOI-1820 is listed as a member of the cluster Melotte 111; however, the proper motion ($\mu_\alpha\sim-12$~mas yr$^{-1}$, $\mu_\delta\sim-9$~mas yr$^{-1}$) and parallax ($\varpi\sim12$~mas) are significantly different from the \gaia EDR3 \citep{GaiaEDR3} values listed in \tref{tab:targets}.} would change by $37.694\pm0.051$~mas between the two epochs of our observations. Since the observed displacement is much smaller than this, we conclude that TOI-1820 and its companion are gravitationally bound.
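This displacement argument is straightforward to reproduce as a back-of-the-envelope check; the calendar dates below are a stand-in for the exact observation epochs used in the text.

```python
import math
from datetime import date

# Gaia EDR3 proper motion of TOI-1820 (mas/yr)
mu_ra, mu_dec = 50.54, -33.93

# time between the two speckle epochs (calendar dates as a stand-in
# for the exact observation times)
dt_yr = (date(2021, 7, 15) - date(2020, 12, 2)).days / 365.25

# expected shift of a stationary background star relative to TOI-1820
shift_mas = math.hypot(mu_ra, mu_dec) * dt_yr   # ~37.5 mas, cf. 37.694 mas with exact epochs

# projected physical separation from the ~470 mas separation and ~4.00 mas parallax:
# sep[AU] = theta[arcsec] / parallax[arcsec]
sep_au = 0.470 / 0.00400                        # ~117 AU, quoted as ~110 AU in the text

print(f"expected background-star shift: {shift_mas:.1f} mas; separation: {sep_au:.0f} AU")
```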
With a \gaia parallax of 4~mas (see \tref{tab:targets}) we find a physical separation between the target and the companion of $\approx$110~AU. Furthermore, from our HIRES reconnaissance and using the algorithm from \citet{Kolbl2015}, we can constrain \review{this} secondary companion to only contribute 1\% in flux if the RV separation between the components in TOI-1820 is greater than 10~km~s$^{-1}$. If the RV separation is less than 10~km~s$^{-1}$, the flux of the secondary \review{would have been unconstrained without the speckle interferometry.} \begin{table} \caption{Results from the SPP speckle interferometry of TOI-1820: separation, position angle, and contrast.} \begin{tabular}{cccc} \hline Date & Separation & P.A. & $\Delta m$ \\ UT & mas & $^{\circ}$ & \\ \hline 2020-12-02 & $470\pm5$ & $102.6\pm0.3$ & $4.0\pm0.1$ \\ 2021-07-15 & $474\pm8$ & $101.7\pm0.9$ & $3.7\pm0.1$ \\ \hline \end{tabular} \label{tab:spp_TOI1820} \end{table} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/speckle_SAI_no_grid.png} \caption{{\bf Speckle interferometry.} SAI-2.5m speckle sensitivity curve and ACF for TOI-1820 (left panel), TOI-2025 (middle panel), and TOI-2158 (right panel). All images shown here were taken in the $I$-band.
Only the speckle image of TOI-1820 shows evidence of a nearby companion, visible as the bump in the ACF around 0.45~arcsec.} \label{fig:speckle} \end{figure*} \subsection{\review{Photometric Follow-up}} We acquired ground-based time-series follow-up photometry of TOI-1820, TOI-2025, and TOI-2158 as part of the \textit{TESS} Follow-up Observing Program \citep[TFOP;][]{collins:2019}\footnote{\url{https://tess.mit.edu/followup}} to attempt to (1) rule out or identify nearby eclipsing binaries (NEBs) as potential sources of the detection in the \textit{TESS} data, (2) detect the transit-like events on target to confirm the depth and thus the \textit{TESS} photometric deblending factor, (3) refine the \textit{TESS} ephemeris, and (4) place constraints on transit depth differences across optical filter bands. We used the {\tt TESS Transit Finder}, which is a customized version of the {\tt Tapir} software package \citep{Jensen:2013}, to schedule our transit observations. Unless otherwise noted, the images were calibrated and the photometric data were extracted using the {\tt AstroImageJ} ({\tt AIJ}) software package \citep{Collins:2017}. The observing facilities are described below, and the individual observations are detailed in Table \ref{table:transitfollowup}. The ground-based light curves for TOI-1820, TOI-2025, and TOI-2158 are shown in \fref{fig:lc_toi1820}, \fref{fig:lc_toi2025}, and \fref{fig:lc_toi2158}, respectively. We observed six transits using the Las Cumbres Observatory Global Telescope \citep[LCOGT;][]{Brown:2013} 1.0\,m and 0.4\,m networks. Three transits were observed in alternating filter mode, resulting in a total of nine light curves. The 1\,m telescopes are equipped with $4096\times4096$ pixel SINISTRO cameras having an image scale of $0\farcs389$ per pixel, resulting in a $26\arcmin\times26\arcmin$ field of view.
The 0.4\,m telescopes are equipped with $2048\times3072$ pixel SBIG STX6303 cameras having an image scale of $0\farcs57$ pixel$^{-1}$, resulting in a $19\arcmin\times29\arcmin$ field of view. The images were calibrated by the standard LCOGT {\tt BANZAI} pipeline \citep{McCully:2018}. We observed a transit with KeplerCam on the 1.2\,m telescope at the Fred Lawrence Whipple Observatory using alternating filters, resulting in two light curves. The $4096\times4096$ Fairchild CCD 486 detector has an image scale of $0\farcs336$ per pixel, resulting in a $23\farcm1\times23\farcm1$ field of view. We observed one transit each from the Kotizarovci Private Observatory 0.3\,m telescope near Viskovo, Croatia, the C.R. Chambliss Astronomical Observatory (CRCAO) 0.6\,m telescope at Kutztown University near Kutztown, PA, and the Conti Private Observatory 0.3\,m telescope near Annapolis, MD. The Kotizarovci telescope is equipped with a $765\times510$ pixel SBIG ST7XME camera having an image scale of $1\farcs2$ per pixel, resulting in a $15\arcmin\times10\arcmin$ field of view. The CRCAO telescope is equipped with a $3072\times2048$ pixel SBIG STXL-6303E camera having an image scale of $0\farcs76$ after $2\times2$ pixel image binning, resulting in a $13\arcmin\times20\arcmin$ field of view. The Conti telescope is equipped with a $2750\times2200$ pixel StarlightXpress SX694M camera having an image scale of $1\farcs0$ after $2\times2$ pixel image binning, resulting in a $23\arcmin\times18\arcmin$ field of view. \subsection{RV Follow-up} Our NOT and McDonald Observatory monitoring was carried out from May 2020 to September 2021. In \tref{tab:rv_toi1820}, \tref{tab:rv_toi2025}, and \tref{tab:rv_toi2158} we list all epochs and RVs for TOI-1820, TOI-2025, and TOI-2158, respectively.
We reduced the FIES spectra using the methodology described in \citet{Buchhave2010} and \citet{Gandolfi2015}, which includes bias subtraction, flat fielding, order tracing and extraction, and wavelength calibration. We traced the RV drift of the instrument acquiring long-exposed ThAr spectra ($\sim$80~s) immediately before and after each science observation. The science exposure time was set to 1800--2700~s, depending on the sky conditions and scheduling constraints. As our exposures were longer than 1200~s, we split each exposure into three sub-exposures to remove cosmic ray hits using a sigma clipping algorithm while combining the frames. RVs were derived via multi-order cross-correlations, using the first stellar spectrum as a template. For Tull we used 30-minute integrations to give an SNR of 60--70 per pixel. An $I_2$ gas absorption cell was used to provide the high-precision radial-velocity metric. All Tull spectra were reduced and extracted using standard IRAF tasks. Radial velocities were extracted using the Austral code \citep{Endl2000}. To validate the planetary nature of the transiting signal in TOI-1820 and fully characterize the system, we acquired 18 spectra with FIES and 12 spectra with Tull, shown to the left in \fref{fig:rv_all}. \fref{fig:gls_all} displays the generalised Lomb-Scargle \citep[GLS;][]{Lomb76,Scargle82} periodograms with TOI-1820 to the left, in which the $\sim$4.9~d transiting signal has been overplotted as the dashed line. This periodicity corresponds to the peak we see in the GLS of the RVs. \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/rv_all.pdf} \caption{{\bf Radial velocities.} From left to right are our FIES (blue), FIES+ (orange), and Tull (green) RVs for TOI-1820, TOI-2025, and TOI-2158, respectively, where the black part of the error bars denotes the jitter added in quadrature. The grey curves are the best-fitting models.
In the bottom row are the residuals after subtracting the best-fitting models.} \label{fig:rv_all} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/gls_all.pdf} \caption{{\bf Generalised Lomb-Scargle periodograms}. From left to right are the GLS periodograms for TOI-1820, TOI-2025, and TOI-2158, respectively. In the top row we show the GLS periodograms computed directly from the RVs, and in the bottom row we show them after removing the orbit of the planet. The dashed vertical lines from left to right denote the 4.9~d, 8.9~d, and 8.6~d signals seen in the photometry for TOI-1820, TOI-2025, and TOI-2158, respectively. The solid lines are our baselines, i.e., $1/(t_\mathrm{last \ RV} - t_\mathrm{first \ RV})$ with $t_\mathrm{first \ RV}$ and $t_\mathrm{last \ RV}$ being the times for the first and last acquired RVs. \review{The horizontal dashed lines show the 1\% false alarm probability.}} \label{fig:gls_all} \end{figure*} We collected a total of 46 FIES RVs to validate the planetary nature of the signal as well as to characterise the TOI-2025 system. In the middle panel of \fref{fig:rv_all}, FIES+ refers to RVs collected after the 1st of July, 2021 (see Section~\ref{sec:mcmc}). As before, the transiting signal coincides with the peak in the GLS periodogram in the middle panels of \fref{fig:gls_all}. For TOI-2158 we collected 30 FIES RVs and 23 Tull RVs shown in the right panel of \fref{fig:rv_all}. As for the other two systems, the peak associated with the $\sim$8.6~d period planet is detected in the GLS periodogram in \fref{fig:gls_all}, as it is stronger than the 1\% false alarm probability. \review{\section{Stellar parameters}\label{sec:stelpars} We made use of the Stellar Parameter Classification \citep[SPC;][]{Buchhave2012,Buchhave2014,Bieryla2021} tool to obtain stellar parameters, where we reduced and extracted the spectra following the approach in \citet{Buchhave2010}.
For TOI-2025 and TOI-2158 we used the TRES spectra as reconnaissance, and for TOI-1820 we used our FIES spectra. The derived stellar parameters are tabulated in \tref{tab:targets}. In addition, for TOI-1820 we also used our HIRES spectra with \texttt{Specmatch-Synth} to derive stellar parameters as described in \citet{Petigura2017}. From the two HIRES spectra, we find $T_\mathrm{eff}=5695 \pm 100$~K, $\log g=4.1 \pm 0.1$, [Fe/H]$=0.01\pm0.06$, and $v \sin i = 3.07 \pm 0.77$~km~s$^{-1}$. We also estimated the $R^\prime_\mathrm{HK}$ activity indicator, obtaining $\log R^{\prime}_\mathrm{HK} = -5.37$, a hint that the star is inactive. } \subsection{SED} As an independent check on the derived stellar parameters, we performed an analysis of the broadband spectral energy distribution (SED) together with the {\it Gaia\/} EDR3 parallax in order to determine an empirical measurement of the stellar radius, following the procedures described in \citet{Stassun:2016,Stassun:2017,Stassun:2018}. In short, we pulled the $B_T V_T$ magnitudes from Tycho-2, the $BVgri$ magnitudes from APASS, the $JHK_S$ magnitudes from {\it 2MASS}, the W1--W4 magnitudes from {\it WISE}, and the $G G_{\rm BP} G_{\rm RP}$ magnitudes from {\it Gaia}. We also used the {\it GALEX} NUV flux when available. Together, the available photometry spans the stellar SED over the wavelength range 0.35--22~$\mu$m, and extends down to 0.2~$\mu$m when {\it GALEX} data are available (see \fref{fig:sed}). We performed a fit using Kurucz stellar atmosphere models, with the priors on effective temperature ($T_{\rm eff}$), surface gravity ($\log g$), and metallicity ([Fe/H]) from the spectroscopically determined values. The remaining free parameter is the extinction ($A_V$), which we restricted to the maximum line-of-sight value from the dust maps of \citet{Schlegel:1998}.
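The final step of this procedure, turning the bolometric flux, effective temperature, and parallax into an empirical stellar radius via the Stefan--Boltzmann law, can be sketched numerically. This is a minimal illustration, not the actual fitting code; the solar test values below are assumptions used only as a sanity check:

```python
import math

SIGMA_SB = 5.670374e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
PC = 3.0857e16           # parsec in metres
R_SUN = 6.957e8          # solar radius in metres

def stellar_radius(fbol, teff, parallax_mas):
    """Radius (solar units) from bolometric flux (W m^-2),
    effective temperature (K), and parallax (mas)."""
    d = (1000.0 / parallax_mas) * PC             # distance in metres
    lum = 4.0 * math.pi * d**2 * fbol            # luminosity, W
    return math.sqrt(lum / (4.0 * math.pi * SIGMA_SB * teff**4)) / R_SUN

# sanity check: the Sun placed at 10 pc (parallax 100 mas) should return ~1 R_sun
L_SUN = 3.828e26  # W
fbol_sun_10pc = L_SUN / (4.0 * math.pi * (10.0 * PC)**2)
r = stellar_radius(fbol_sun_10pc, 5772.0, 100.0)
```

With the fitted $F_{\rm bol}$, the spectroscopic $T_{\rm eff}$, and the {\it Gaia\/} parallax, the same relation yields the radii reported in \tref{tab:targets}.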
\begin{figure*} \centering \includegraphics[width=\textwidth]{figures/seds.png} \caption{{\bf Spectral Energy Distribution.} The SEDs for TOI-1820 (left panel), TOI-2025 (middle panel), and TOI-2158 (right panel). Red symbols represent the observed photometric measurements, where the horizontal bars represent the effective width of the passband. Blue symbols are the model fluxes from the best-fit Kurucz atmosphere model (black).} \label{fig:sed} \end{figure*} \review{The resulting SED fits are shown in \fref{fig:sed}, with reduced $\chi^2$ values of 1.5, 1.2, and 1.2, respectively. The resulting best-fit parameters are summarized in \tref{tab:targets}. Integrating the (unreddened) model SED gives the bolometric flux at Earth, $F_{\rm bol}$, which with the $T_{\rm eff}$ and the {\it Gaia\/} EDR3 parallax \citep[with no systematic adjustment; see][]{StassunTorres:2021} gives the stellar radius. The stellar mass can then be determined empirically from the stellar radius and the spectroscopic $\log g$, and compared to the mass estimated from the empirical relations of \citet{Torres:2010}. Finally, we can estimate the age of the star from the spectroscopic $R'_{\rm HK}$ via the empirical relations of \citet{Mamajek:2008}, which we can also corroborate by comparing the stellar rotation period predicted at that age from the empirical gyrochronology relations of \citet{Mamajek:2008} against that determined from the stellar radius together with the spectroscopic $v\sin i$. These parameters are also summarised in \tref{tab:targets}.
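The rotation-period comparison amounts to $P_{\rm rot}/\sin i = 2\pi R_\star / (v \sin i)$. A quick sketch for TOI-1820, assuming an illustrative radius of $R_\star \approx 1.5\,R_\odot$ (the adopted value is in \tref{tab:targets}) together with the HIRES $v \sin i$:

```python
import math

R_SUN_KM = 6.957e5   # solar radius in km
DAY_S = 86400.0      # seconds per day

def prot_over_sini(r_star_rsun, vsini_kms):
    """Projected rotation period (days) from stellar radius and v sin i."""
    circumference_km = 2.0 * math.pi * r_star_rsun * R_SUN_KM
    return circumference_km / vsini_kms / DAY_S

# TOI-1820: v sin i = 3.07 km/s from HIRES; R_star ~ 1.5 R_sun assumed here
p = prot_over_sini(1.5, 3.07)   # ~25 d, compatible with the quoted 24.9 +/- 6.3 d
```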
The rather old ages inferred for TOI-1820 and TOI-2158 would predict slow stellar rotation periods of $P_{\rm rot} = 40 \pm 2$~d and $P_{\rm rot} = 43 \pm 3$~d, respectively, whereas the (projected) rotational periods estimated from the spectroscopic $v\sin i$ together with $R_\star$ gives $P_{\rm rot} / \sin i = 24.9 \pm 6.3$~d and $P_{\rm rot} / \sin i = 19.3 \pm 3.2$~d, suggesting either somewhat younger ages, or else a process that kept the stars rotating faster than expected for their ages. } It is interesting that both TOI-1820 and TOI-2158 appear to be rotating faster than what would be expected given their ages, especially as both of these stars host a hot Jupiter. Discrepancy between ages inferred from isochrone fitting and gyrochronology among hot Jupiter hosts has been seen in studies by \citet{Brown2014} and \citet{Maxted2015}; both studies suggested tidal spin-up as a possible explanation. Further evidence for this has recently been found in \citet{Arevalo2021}. Tidal spin-up might therefore be the mechanism responsible for the discrepancy we are seeing in TOI-1820 and TOI-2158. Of course, this might also apply to the TOI-2025 system, which also harbors a hot Jupiter, but as this system is younger, the effect might be less pronounced. \section{Joint analysis} \label{sec:mcmc} To estimate the planetary and orbital parameters, we fit the photometry and the RVs jointly, where we extract confidence intervals through Markov chain Monte Carlo (MCMC) sampling using the \texttt{emcee} package by \citet{Foreman}. We model the light curves using the \texttt{batman} package \citep{Kreidberg}, which utilizes the formalism by \citet{Mandel2002}. To account for any morphological light curve distortion \citep{Kipping2010} caused by the 30~min. sampling, we oversample our 30~min. cadence light curves to correspond to a sampling of 2~min.
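The oversampling step can be illustrated with a short, self-contained sketch (not the actual \texttt{batman} call, which provides this functionality through its exposure-time options): the model is evaluated at several points across each long-cadence exposure and averaged.

```python
import numpy as np

def supersample(model, t, exp_time=30.0 / 60.0 / 24.0, n_sub=15):
    """Average `model` over `n_sub` sub-exposures spanning each exposure.

    model:    callable returning flux for an array of times (days)
    t:        mid-exposure times (days)
    exp_time: exposure length in days (30 min here)
    """
    # symmetric offsets around each mid-exposure time
    offsets = (np.arange(n_sub) - (n_sub - 1) / 2.0) / n_sub * exp_time
    return np.mean([model(t + o) for o in offsets], axis=0)
```

For a locally linear model the average equals the mid-exposure value, while around sharp features such as ingress and egress the averaging smears the model in the same way the 30~min integration smears the data.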
In an attempt to mitigate correlated noise in the \tess photometry we make use of Gaussian Process (GP) Regression through the \texttt{celerite} package \citep{celerite}. We use the Matérn-3/2 kernel, which includes two hyperparameters: the amplitude of the noise, $A$, and the time scale, $\tau$. For our ground-based photometry we do not have long out-of-transit baselines. Therefore, we did not model the noise from these transits with GPs; instead we use a Savitzky--Golay filter to de-trend the data at each draw in our MCMC. To fit the RVs we use a Keplerian orbit, where we naturally have different systemic velocities, $\gamma$, for the RVs stemming from FIES and Tull, where relevant. Due to a refurbishment of the FIES spectrograph, an offset in RV was introduced between the RVs obtained before 1st of July 2021 and those obtained after. We assign two independent systemic velocities and two independent jitter terms to RVs obtained before (FIES) and after (FIES+) this date. Our MCMC analysis for the three systems stepped in $\cos i$ instead of $i$, as well as in $\sqrt{e}\cos \omega$ and $\sqrt{e}\sin \omega$ instead of $e$ and $\omega$. Furthermore, the code stepped in the sum of the limb darkening parameters, i.e., $q_1 + q_2$, where we applied a Gaussian prior with a width of 0.1. We instead kept the difference, $q_1 - q_2$, fixed during the sampling. We retrieved the starting values of $q_1$ and $q_2$ for the \tess passband from the table in \citet{Claret17}, while we used the values from \citet{Claret2013} for the ground-based photometry. Furthermore, we used the $V$ band as a proxy for our in-transit FIES observations of TOI-2025. The initial and resulting values for the limb-darkening coefficients can be found in \tref{tab:ld_toi1820}, \tref{tab:ld_toi2025}, and \tref{tab:ld_toi2158}. We list all the adopted priors in \tref{tab:priors}, where a hyphen denotes that the associated parameter is not relevant for that run.
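The mapping between these stepping parameters and the physical ones is a one-liner in each direction; a minimal sketch (an illustrative helper, not the fitting code itself):

```python
import numpy as np

def to_physical(secosw, sesinw, cosi):
    """Convert stepping parameters to e, omega (deg), and i (deg)."""
    e = secosw**2 + sesinw**2
    omega = np.degrees(np.arctan2(sesinw, secosw)) % 360.0
    inc = np.degrees(np.arccos(cosi))
    return e, omega, inc

# TOI-2025 median stepping values from the MCMC results table
e, omega, inc = to_physical(-0.02, 0.661, 0.025)
# recovers e ~ 0.44, omega ~ 92 deg, i ~ 88.6 deg, matching the derived values
```

Stepping in $\sqrt{e}\cos\omega$ and $\sqrt{e}\sin\omega$ keeps the prior uniform in $e$ and avoids the sampling pathologies at $e\to0$, where $\omega$ is undefined.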
We define our likelihood function as \begin{equation} \label{equ:likelihood} \log \mathcal{L} =-0.5 \sum_{i=1}^{N} \left [ \frac{(O_i - C_i)^2}{\sigma_i^2} + \log \left ( 2 \pi \sigma_i^2 \right ) \right] + \sum_{j=1}^{M} \log \mathcal{P}_{j}\, , \end{equation} where $N$ indicates the total number of data points from photometry and RVs. $C_i$ represents the model corresponding to the observed data point $O_i$. $\sigma_i$ represents the uncertainty for the $i$th data point, where we add a jitter term in quadrature and a penalty in the likelihood for the RVs. $\mathcal{P}_j$ is the prior on the $j$th parameter. \review{We run our MCMC until convergence, which we assess by looking at the rank-normalised $\hat{R}$ diagnostic test as implemented in the \texttt{rhat} module in \texttt{ArviZ} \citep{arviz_2019}.} \subsection{TOI-1820} Given the large separation of around 110~AU for the companion, the orbital period must be rather large and the expected $K$-amplitude must be rather small, meaning that, even if it is bound, it will not affect our RVs. The companion will, however, dilute the light curve. We therefore include a contaminating factor, where we write the total flux as a function of time as $F(t)=(F_1(t) + F_2)/(F_1 + F_2)$ with $F_1(t)$ and $F_1$ being the in-transit and out-of-transit flux, respectively, of the planet-hosting star, and $F_2$ is the (constant) flux from the contaminating source (or sources). Here, we parametrised the flux from the contaminating source relative to the host, $F_2/F_1$, through the magnitude difference, i.e., $\delta \rm M = -2.5 \log (F_2/F_1)$. \subsection{TOI-2025} As we have two sets of light curves with different cadences for TOI-2025 (2~min. and 30~min.), we apply two different oversampling factors, while using the same limb darkening coefficients for both. We observed a spectroscopic transit of TOI-2025 at the NOT (FIES+) on the night starting on the 8th of August, 2021, allowing us to determine the projected obliquity, $\lambda$, of the host star.
The RVs obtained during this transit night can be seen in \fref{fig:rm_toi2025}. We therefore also included a model for the RM effect using the algorithm by \citet{Hirano2011} for this fit. We used our SPC value in \tref{tab:targets} for $v \sin i_\star$ as a prior. For the macro- and micro-turbulence, we used priors stemming from the relations in \citet{Doyle2014} and \citet{Bruntt10}, respectively, along with the stellar parameters in \tref{tab:targets}. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/rm_TOI2025.pdf} \caption{{\bf The Rossiter-McLaughlin effect in TOI-2025.} Our in-transit observations of TOI-2025 with FIES+. {\it Top:} The Keplerian orbit and linear trend have been subtracted from the RVs to better show the RM effect, with the grey line being the best-fitting model. {\it Bottom:} Here we have further subtracted this best-fitting model from the RVs.} \label{fig:rm_toi2025} \end{figure} We carried out three MCMC runs for TOI-2025: one where we included an additional first-order acceleration parameter, $\dot{\gamma}$, and one where we did not allow for any long-term drift (\fref{fig:rv_toi2025_drift}). Furthermore, we performed a fit where we fixed the eccentricity to 0, but allowed for a linear trend (\fref{fig:rv_toi2025_no_ecc}). \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/rv_TOI2025_drift.pdf} \caption{{\bf The long-term trend in TOI-2025.} Symbols are the same as in \fref{fig:rv_all}, but here the RVs are plotted against time, and we have subtracted the planetary signal. {\it Top:} A fit where we only allow for a linear trend. {\it Bottom:} Here we do not include any long-term drift.} \label{fig:rv_toi2025_drift} \end{figure} \subsection{TOI-2158} Similarly to the case of TOI-2025, the RVs of TOI-2158 show a long-term trend. The Tull baseline is sufficiently long to reveal the curvature in the entire RV signal, \fref{fig:rv_toi2518_drift}.
We therefore performed three runs: 1) a run where we included both a first-order, $\dot{\gamma}$, and a second-order, $\ddot{\gamma}$, acceleration parameter, 2) a run with a linear drift, and 3) a run without any long-term trend. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/rv_TOI2158_drift.pdf} \caption{{\bf The long-term trend in TOI-2158.} Symbols are the same as in \fref{fig:rv_all}, but here the RVs are plotted against time, and we have subtracted the planetary signal. {\it Top:} A fit where we allow for a quadratic trend. {\it Middle:} Here we only allow for a linear trend. {\it Bottom:} Here we do not include any long-term drift.} \label{fig:rv_toi2518_drift} \end{figure} \section{Results} \label{sec:results} The results from the MCMC for our preferred orbital configuration for each of the systems are tabulated in \tref{tab:mcmc}. \begin{table*}[t] \centering \caption{{\bf Results from our MCMC analysis.}} \begin{threeparttable} \begin{tabular}{l l c c c} \toprule \multicolumn{2}{l}{Parameter} & TOI-1820 & TOI-2025 & TOI-2158 \\ \midrule $P$ & Period (days) & $4.860700\pm0.000010$ & $8.872086\pm0.000009$ & $8.60077\pm0.00003$ \\ $T_0$ & Mid-transit time (BJD$_\mathrm{TDB}$) & $2458903.0631_{-0.0006}^{+0.0007}$ & $2458690.2896\pm0.0005$ & $2459018.9224\pm0.0010$ \\ $R_\mathrm{p}/R_\star$ & Planet-to-star radius ratio & $0.0759_{-0.0014}^{+0.0013}$ & $0.0738_{-0.0005}^{+0.0004}$ & $0.0700\pm0.0009$ \\ $a/R_\star$ & Semi-major axis to star radius ratio & $9.8\pm0.6$ & $12.3_{-0.4}^{+0.5}$ & $11.4\pm0.5$ \\ $K$ & Velocity semi-amplitude (m s$^{-1}$) & $273\pm4$ & $402_{-15}^{+14}$ & $75\pm3$ \\ $\cos i$ & Cosine of inclination & $0.081\pm0.008$ & $0.025_{-0.025}^{+0.011}$ & $0.075_{-0.006}^{+0.005}$ \\ $\sqrt{e} \cos \omega$ & & $0.20\pm0.02$ & $-0.02\pm0.03$ & $0.10_{-0.07}^{+0.10}$ \\ $\sqrt{e} \sin \omega$ & & $0.032_{-0.032}^{+0.017}$ & $0.661_{-0.016}^{+0.019}$ & $0.10_{-0.10}^{+0.04}$ \\ $\gamma_1$ & Systemic
velocity FIES (m s$^{-1}$) & $227_{-4}^{+5}$ & $-372_{-17}^{+18}$ & $-9_{-12}^{+11}$ \\ $\gamma_2$ & Systemic velocity FIES+ (m s$^{-1}$) & - & $-98_{-52}^{+50}$ & $-55_{-16}^{+15}$ \\ $\gamma_3$ & Systemic velocity Tull (m s$^{-1}$) & $13947\pm4$ & - & $-64814_{-13}^{+11}$ \\ $\sigma_1$ & Jitter FIES (m s$^{-1}$) & $7_{-7}^{+3}$ & $44_{-14}^{+11}$ & $13_{-5}^{+4}$ \\ $\sigma_2$ & Jitter FIES+ (m s$^{-1}$) & - & $19\pm6$ & $10\pm3$ \\ $\sigma_3$ & Jitter Tull (m s$^{-1}$) & $6_{-6}^{+2}$ & - & $9_{-9}^{+4}$ \\ $\log A_1$ & GP amplitude \tess 30 min. & $-6.96_{-0.12}^{+0.11}$ & $-8.30\pm0.06$ & $-8.95_{-0.12}^{+0.13}$ \\ $\log \tau_1$ & GP time scale \tess 30 min. ($\log$ days) & $-0.50_{-0.15}^{+0.14}$ & $-0.31\pm0.11$ & $-1.7\pm0.4$ \\ $\log A_2$ & GP amplitude \tess 2 min. & - & $-7.81_{-0.14}^{+0.13}$ & $-7.271\pm0.017$ \\ $\log \tau_2$ & GP time scale \tess 2 min. ($\log$ days) & - & $-0.34_{-0.19}^{+0.16}$ & $-7.29_{-0.06}^{+0.08}$ \\ $\ddot{\gamma}$\tnote{a} & Quadratic trend (m s$^{-1}$ d$^{-2}$) & - & - & $-0.0043\pm0.0009$ \\ $\dot{\gamma}$\tnote{a,b} & Linear trend (m s$^{-1}$ d$^{-1}$) & - & $0.95\pm0.16$ & $1.4\pm0.2$ \\ $\rm \delta M$ & Dilution & $3.9\pm0.5$ & - & - \\ $\lambda$ & Projected obliquity ($^{\circ}$) & - & $9_{-34}^{+36}$ & - \\ $v \sin i_\star$ & Projected rotational velocity (km s$^{-1}$) & - & $6.0\pm0.3$ & - \\ $\zeta$ & Macro-turbulence (km s$^{-1}$) & - & $4.0_{-0.9}^{+1.0}$ & - \\ $\xi$ & Micro-turbulence (km s$^{-1}$) & - & $1.3_{-1.0}^{+0.6}$ & - \\ \hdashline $e$ & Eccentricity & $0.043_{-0.009}^{+0.008}$ & $0.44_{-0.02}^{+0.03}$ & $0.031_{-0.030}^{+0.013}$ \\ $\omega$ & Argument of periastron ($^\circ$) & $9_{-9}^{+5}$ & $92\pm2$ & $50_{-49}^{+19}$ \\ $i$ & Inclination ($^\circ$) & $85.4\pm0.5$ & $88.6_{-0.6}^{+1.4}$ & $85.7\pm0.3$ \\ $b$ & Impact parameter & $0.79\pm0.03$ & $0.31_{-0.31}^{+0.12}$ & $0.86_{-0.03}^{+0.02}$ \\ $T \rm _{4,1}$ & Total transit duration (hours) & $2.77_{-0.07}^{+0.06}$ & $3.620_{-0.023}^{+0.018}$ & 
$3.77\pm0.06$ \\ $T \rm _{2,1}$ & Time from 1st to 2nd contact (hours) & $0.47\pm0.07$ & $0.256_{-0.011}^{+0.009}$ & $0.75\pm0.07$ \\ $R_\mathrm{p}$ & Planet radius ($\rm R_J$) & $1.12\pm0.02$ & $1.120\pm0.009$ & $0.960\pm0.012$ \\ $M_\mathrm{p}$\tnote{c} & Planet mass ($\rm M_J$) & $2.3\pm0.2$ & $4.4\pm0.4$ & $0.82\pm0.08$ \\ $\rho_\mathrm{p}$ & Planet density (g~cm$^{-3}$) & $2.1\pm0.2$ & $3.9\pm0.3$ & $1.14\pm0.11$ \\ $T_\mathrm{eq}$\tnote{d} & Equilibrium temperature (K) & $1295\pm11$ & $1186\pm11$ & $1188\pm10$ \\ $a$ & Semi-major axis (AU) & $0.069\pm0.005$ & $0.089\pm0.004$ & $0.075\pm0.004$ \\ \bottomrule \end{tabular} \begin{tablenotes} \item The parameters above the dashed line are the stepping parameters, and below are the derived parameters. The value given is the median and the uncertainty is the highest posterior density at a confidence level of 0.68. \item[a] Zero-point for TOI-2158 is 2459302.92570 BJD$_\mathrm{TDB}$. \item[b] Zero-point for TOI-2025 is 2459124.41436 BJD$_\mathrm{TDB}$. \item[c] Calculated from \eref{eq:mass}. \item[d] Following \citet{Kempton2018}. \end{tablenotes} \end{threeparttable} \label{tab:mcmc} \end{table*} We find that TOI-1820~b is a Jupiter-sized planet, $1.12 \pm 0.02$~R$\rm_J$\xspace, but significantly more massive, $2.3 \pm 0.2$~M$\rm_J$\xspace. With an orbital period of $4.860700 \pm 0.000010$~d\xspace, it is the shortest period planet in our sample. TOI-2025~b has a similar size, $1.120 \pm 0.009$~R$\rm_J$\xspace, as TOI-1820~b, but about twice its mass, $4.4 \pm 0.4$~M$\rm_J$\xspace. On the other end of the mass spectrum we find TOI-2158~b with $0.82 \pm 0.08$~M$\rm_J$\xspace. TOI-2158~b is also somewhat smaller than the two other planets, with a radius of $0.960 \pm 0.012$~R$\rm_J$\xspace. For TOI-2025, we found evidence for a long-term RV trend, as can be seen in \fref{fig:rv_toi2025_drift}.
We also find evidence for long-term RV changes in the TOI-2158 system, including evidence for a curvature in the RVs, which we model with a quadratic term, \fref{fig:rv_toi2518_drift}. There is no significant evidence for long-term RV changes in TOI-1820. Assuming the long-term RV changes are due to a further-out companion, we can glean information about their masses from some back-of-the-envelope calculations. For this we assume circular, edge-on orbits. If we happen to have caught the outer companion in TOI-2025 just after and just before quadrature (phases 0.25 and 0.75), the peak-to-peak amplitude would be the rate of change multiplied by the difference in time between the first and last observation. Therefore, a lower limit for the $K$-amplitude can be estimated as $(t_\mathrm{last \ RV} - t_\mathrm{first \ RV})\times \dot{\gamma}/2$ \citep[see, e.g.,][]{Kane2019,Pepper2020}, resulting in some 170~m~s$^{-1}$. If we then assume values for the period we can use \begin{equation} \frac{M_{\rm p} \sin i}{\mathrm{M}_{\rm J}} = \frac{K \sqrt{1 - e^2}}{28.4~\mathrm{m}~\mathrm{s}^{-1}} \left ( \frac{P}{1~\mathrm{yr}} \right)^{1/3} \left ( \frac{M_\star}{\mathrm{M}_\odot} \right)^{2/3} \, \label{eq:mass} \end{equation} to get an estimate of the mass of the companion. As an illustrative example, assuming orbital periods of 2 or 10 years for such a companion would result in masses of $\approx9$~M$_{\rm J}$ or $\approx15$~M$_{\rm J}$, respectively. For TOI-2158 we found a quadratic trend. We can therefore obtain an order of magnitude estimate for the period of the outer companion as $P\approx-2 \dot{\gamma}/\ddot{\gamma}$, which gives a period of around 650~d. Using the relation $K=\ddot{\gamma}P^2/(4\pi^2)$ derived in \citet{Kipping2011} with \eref{eq:mass} yields a mass of $\approx15$~M$_\mathrm{J}$. \subsection{The eccentricity and obliquity of TOI-2025~b} We find TOI-2025~b to travel on an eccentric orbit, with $e = 0.44^{+0.03}_{-0.02}$\xspace.
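The order-of-magnitude companion estimates above are easy to reproduce. The sketch below assumes, for illustration only, a $\sim$1~yr RV baseline for TOI-2025 and a stellar mass of $\sim$1.3~M$_\odot$ (neither value is quoted exactly in the text), together with circular, edge-on outer orbits:

```python
import math

def companion_mass_mj(k_ms, p_yr, mstar_msun, e=0.0):
    """Minimum companion mass (M_J) from the standard RV mass relation."""
    return (k_ms * math.sqrt(1.0 - e**2) / 28.4
            * p_yr**(1.0 / 3.0) * mstar_msun**(2.0 / 3.0))

# TOI-2025: linear trend gamma_dot ~ 0.95 m/s/d over an assumed ~1 yr baseline
k_min = 365.0 * 0.95 / 2.0                    # ~170 m/s lower limit on K
m_2yr = companion_mass_mj(k_min, 2.0, 1.3)    # ~9 M_J for an assumed 2 yr period
m_10yr = companion_mass_mj(k_min, 10.0, 1.3)  # ~15 M_J for an assumed 10 yr period

# TOI-2158: quadratic trend, order-of-magnitude period estimate
gdot, gddot = 1.4, -0.0043                    # m/s/d and m/s/d^2
p_days = -2.0 * gdot / gddot                  # ~650 d
```

These illustrative inputs recover the $\sim$170~m~s$^{-1}$, $\approx$9--15~M$_{\rm J}$, and $\sim$650~d figures quoted above.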
However, the argument of periastron is close to and fully consistent with $90^\circ$. This configuration can be deceptive when it comes to determining the eccentricity \citep[e.g.,][]{Laughlin2005}. This is because the RV curves would be symmetric for values close to $|\omega|=90^\circ$, even for eccentric orbits. To further investigate the orbital eccentricity we carried out a few experiments. First, as mentioned, we ran an MCMC where we fixed $e$ to 0. The best-fitting model from this run can be seen in \fref{fig:rv_toi2025_no_ecc}, where the residuals clearly show structure. Our model involving a circular orbit apparently does not capture all the complexity present in the data. Consequently, the derived RV jitter terms for both FIES and FIES+ are significantly higher, with values of $108^{+18}_{-22}$~m~s$^{-1}$ and $66^{+10}_{-12}$~m~s$^{-1}$, respectively, as opposed to the values of $44^{+11}_{-14}$~m~s$^{-1}$\xspace and $19\pm6$~m~s$^{-1}$\xspace from the eccentric fit. As there might be stellar signals that are coherent on time scales of hours, but not days, and given that we have much higher sampling during the transit night, it is worthwhile to investigate whether the eccentricity hinges on those measurements and to what extent. Therefore, we performed a fit in which the eccentricity was allowed to vary, but where we only included the first and the last data point from the transit night. In this fit we naturally do not attempt to model the obliquity. From this we get values of $e=0.46^{+0.04}_{-0.03}$ and $\omega=92 \pm 4$~$^\circ$, consistent with the values from the run using all the RV data. \begin{figure*}[h!] \centering \includegraphics[width=\textwidth]{figures/bootstrap.png} \caption{{\bf Bootstrapping the orbit of TOI-2025~b.} {\it Left:} Bootstrap with 50,000 iterations (52 of these were omitted as the fit did not converge) displaying the eccentricity plotted against the argument of periastron.
The points are colour coded in terms of the resulting $\chi^{2}_\nu$ from the fit. {\it Right:} The average $\chi^2_{\nu}$ for a given data point, but only counted when that point was drawn. We show the best-fitting eccentric model (i.e., from \tref{tab:mcmc}) on top, and the best-fitting model from our MCMC run with $e$ fixed to 0 in the bottom. Note that here we have omitted all but the first and last data points from the transit night (\fref{fig:rm_toi2025}), and we have not added the jitter term in quadrature.} \label{fig:bootstrap} \end{figure*} Next, we performed a bootstrap experiment using the RV data only. In our bootstrap we used alternate realizations of the data in \tref{tab:rv_toi2025}, again excluding all but the first and last data point from the transit night. After redrawing a data set from the original data we fit for $e$, $\omega$, $\gamma_\mathrm{FIES}$, $\gamma_\mathrm{FIES+}$, $K$, and $\dot{\gamma}$. In \fref{fig:bootstrap} we plot the results for $e$ and $\omega$ for the 50,000 realizations. We have colour coded the points according to the reduced chi-squared, $\chi^{2}_{\nu}$, which shows that the lowest values for $\chi^{2}_{\nu}$ are found around $e\sim0.4$ and $\omega\sim90^\circ$. However, leaving out certain data points might result in a (more) circular orbit. Which data points `drive' the eccentricity can be seen in the right panel of \fref{fig:bootstrap}. Therefore, we conclude that our result for the eccentricity is significant, but hinges on relatively few data points. In addition to finding an eccentric orbit for the planet, we also measured the projected obliquity of TOI-2025. We find the projected obliquity to be consistent with \review{no misalignment}, $\lambda = 9^{+36}_{-34}$~$^\circ$\xspace. The relevant transit RVs and our best-fitting model can be seen in \fref{fig:rm_toi2025}.
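The bootstrap machinery described above can be sketched as follows, here on synthetic RVs with TOI-2025-like parameters (a simplified stand-in for the real analysis: fixed ephemeris, no drift term, and assumed noise values rather than the actual data):

```python
import numpy as np
from scipy.optimize import least_squares

def kepler_E(M, e, n_iter=30):
    """Solve Kepler's equation M = E - e sin E by Newton iteration."""
    E = M.copy()
    for _ in range(n_iter):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return E

def rv_model(p, t, per, t_peri):
    """Keplerian RV with stepping-style parameters h = sqrt(e) cos w, k = sqrt(e) sin w."""
    K, h, k, gamma = p
    e = min(h * h + k * k, 0.95)      # guard against unphysical trial values
    w = np.arctan2(k, h)
    M = 2.0 * np.pi * (((t - t_peri) / per) % 1.0)
    E = kepler_E(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))
    return gamma + K * (np.cos(nu + w) + e * np.cos(w))

rng = np.random.default_rng(42)
per, t_peri = 8.872, 0.0
t = np.sort(rng.uniform(0.0, 400.0, 40))
p_true = (402.0, np.sqrt(0.44) * np.cos(np.radians(92.0)),
          np.sqrt(0.44) * np.sin(np.radians(92.0)), 0.0)
y = rv_model(p_true, t, per, t_peri) + rng.normal(0.0, 20.0, t.size)

def fitted_e(ti, yi):
    res = least_squares(lambda p: rv_model(p, ti, per, t_peri) - yi,
                        x0=[350.0, 0.0, 0.5, 0.0])
    return min(res.x[1]**2 + res.x[2]**2, 0.95)

# resample the epochs with replacement and refit each realization
es = [fitted_e(t[idx], y[idx])
      for idx in (rng.integers(0, t.size, t.size) for _ in range(200))]
```

The scatter of the refitted eccentricities maps how strongly $e$ is pinned down by the data, which is exactly what the $e$--$\omega$ plane in \fref{fig:bootstrap} visualises for the real RVs.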
Despite having measured only the projected obliquity, $\lambda$, here, we can argue that it is likely close to the obliquity, $\psi$, which requires the stellar inclination along the line of sight, $i_\star$, to be close to $90^\circ$. That $i_\star$ is close to $90^\circ$ is supported by Figure~3 in \citet{Louden2021}, where a correlation between $T_\mathrm{eff}$ and $v \sin i_\star$ is plotted. From this plot we should not expect $v \sin i_\star$ to be markedly different from the value of $6.0 \pm 0.3$~km~s$^{-1}$ given the effective temperature for TOI-2025 of $\sim5900$~K that we have found. This therefore suggests that the system is aligned. \section{Discussion and conclusions} \label{sec:conclusions} We validated and characterised three hot Jupiters discovered by \tess: TOI-1820~b, TOI-2025~b, and TOI-2158~b. Common to all three systems is that we see evidence for outer companions in one way or another. The outer companions may have played a role in the migration of the gas giants, thus shaping the final architecture of the systems. \citet{Ngo2016} argue that sites hosting outer stellar companions are either more favorable environments for gas giant formation at all separations, or the presence of stellar companions might drive the inwards migration, e.g., through the Kozai-Lidov mechanism \citep{Kozai1962,Lidov1962} or other dynamical processes. Through our speckle interferometry of TOI-1820, we detected a $\sim$4~mag fainter stellar companion at a distance of $\sim$110~AU from the bright host. It would be interesting to get good estimates of the stellar parameters for this companion in order to assess whether it would have been able to drive Kozai-Lidov cycles responsible for the migration. If the outer companions are planets within $\sim$1~AU from the stellar host, \citet{Becker2017} found that they should be coplanar with the inner hot Jupiters, suggesting that Kozai-Lidov migration would not be viable.
However, if these companions are found at greater distances (gas giants at $\gtrsim$5~AU or stellar companions at $\gtrsim$100~AU), they could still be inclined, and the formation of the hot Jupiter could take place through Kozai-Lidov migration \citep{Lai2018}. In the RVs for both TOI-2025 and TOI-2158 we see long-term trends: a linear trend in the case of TOI-2025 and a quadratic trend for the TOI-2158 system. In contrast to TOI-1820, the companions in TOI-2025 and TOI-2158 are likely of planetary, or at least substellar, nature and closer in (cf. the mass and period estimates in Section~\ref{sec:results}). As the companions in TOI-2025 and TOI-2158 are most likely found beyond 1~AU given the (lower) estimates for their periods and the stellar masses, Kozai-Lidov migration could be a viable transport mechanism for TOI-2025~b and TOI-2158~b. \review{\tess might be able to shed more light on these outer companions as more sectors become available. According to the Web \tess Viewing Tool\footnote{\url{https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py}}, TOI-2025 should be observed again in Sectors 52, 53, and 58-60, and TOI-2158 is set to be observed in Sector 53.} In \fref{fig:tidal} we show the tidal diagram (left) and modified tidal diagram (right) from \citet{Bonomo2017} with our measurements for TOI-1820~b, TOI-2025~b, and TOI-2158~b. We find that the orbital eccentricity of TOI-2158~b is consistent with $e=0$. This planet joins the small group of planets in \citet{Bonomo2017} with circular orbits and relatively large values for $a/a_{\rm R}$, $a_{\rm R}$ being the Roche limit. This would point to disc migration; however, given the age of $8 \pm 1$~Gyr for TOI-2158, the orbit of the planet might also have had sufficient time to circularise had the migration taken place through high-eccentricity migration. For TOI-1820~b we find a modest eccentricity of $0.043^{+0.008}_{-0.009}$\xspace (about three times that of Earth). 
In \fref{fig:tidal} the planets with modest eccentricities are found at various relative masses and various relative distances. From the modified tidal diagram it appears that TOI-1820~b should have a circularisation time scale of around 1-2~Gyr; with the age of $11 \pm 2$~Gyr for TOI-1820, this leaves plenty of time for the system to dampen the eccentricity in the case of high-eccentricity migration. However, this modest eccentricity is not irreconcilable with disc migration \citep{Dawson2018}. In contrast, TOI-2025~b belongs to the subgroup of systems with significant eccentricity. The planet TOI-2025~b is too massive for the star to effectively raise tides on the planet in order to circularise the orbit, meaning that the circularisation time scale is too long for the orbit to have been circularised \citep{Dawson2018}. The modified tidal diagram suggests that the circularisation time scale could be some $10$~Gyr, which is much longer than the age of $1.7 \pm 0.2$~Gyr for this system. By the same token, the planet seems to be massive enough for it to effectively raise tides on the star, while the star is sufficiently cool for tidal dissipation to be efficient \citep{Winn2010,Albrecht12b}. The projected obliquity we find for TOI-2025 is in line with other massive planets on eccentric, aligned orbits, such as HD~147506~b \citep{Winn2007}, HD~17156~b \citep{Narita2009}, and HAT-P-34~b \citep{Albrecht12b}. Contrary to these findings, \citet{RiceWangLaughlin2022} have found that cool stars ($T_\mathrm{eff}<$6100~K) harboring eccentric planets tend to have higher obliquities. However, due to the small sample size it is still unclear whether misalignment is associated with orbital eccentricity. Given the orbital, stellar, and planetary parameters, the low projected obliquity in TOI-2025 might be the result of tidal alignment \citep{Albrecht2022}. 
If so, it would be interesting to further reduce the uncertainty of the obliquity measurement to test if the system is aligned to within $1^\circ$, as recently observed in some systems \citep{Albrecht2022}. This would suggest tidal alignment, as primordial alignment would presumably lead to a certain spread, as it has apparently done in the Solar System. TOI-1820 and TOI-2158 would for similar reasons be excellent RM targets as well. In addition, their higher impact parameters might lead to even higher accuracy. \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/tidal_diagram.pdf} \caption{{\bf Tidal diagrams.} Tidal diagrams for transiting giant planets from \citet{Bonomo2017}. Open circles denote planets on circular orbits with $\sigma_e < 0.05$. Markers shown with plusses are planets with undetermined eccentricities, i.e., $\sigma_e > 0.05$. Most of these are consistent with $e=0$. Triangles represent planets with significant but small eccentricities ($e<0.1$), and squares are eccentric systems with $e \geq 0.1$. Adhering to this notation we have shown the planets in our sample with the corresponding markers; however, we have colour coded them for clarity. Created from the catalogue of \citet{Bonomo2017Cat}. {\it Left: Tidal diagram.} The solid and dashed lines show the position of a planet with a separation of $a=a_\mathrm{R}$ and $a=2a_\mathrm{R}$, respectively ($a_\mathrm{R}$ being the Roche limit), and radius $R_\mathrm{p}=1.2$~R$_\mathrm{J}$. The dotted line is a circularisation isochrone for a planet with $P=3$~d, $Q^{\prime}_\mathrm{p}=10^5$, and $e=0$. Note that in \citet{Bonomo2017} the planetary modified tidal quality factor, $Q^{\prime}_\mathrm{p}$, assumed for the isochrones is said to be $10^6$, while it seems that Fig.~8 (tidal diagram) and Fig.~9 (modified tidal diagram) were created using a value of $10^5$, which we have opted for here to make them comparable. 
{\it Right: Modified tidal diagram.} The dotted, dashed, and solid line denote the 1, 7, and 14~Gyr circularisation time scales, respectively, assuming $e=0$ and $Q^{\prime}_\mathrm{p}=10^5$. } \label{fig:tidal} \end{figure*} \section{Acknowledgements} The authors would like to thank the staff at the Nordic Optical Telescope for their help and expertise. This paper includes data taken at the Nordic Optical Telescope under the program IDs 59-210, 59-503, 61-510, 61-804, 62-506, and 63-505. This study is based on observations made with the Nordic Optical Telescope, owned in collaboration by the University of Turku and Aarhus University, and operated jointly by Aarhus University, the University of Turku and the University of Oslo, representing Denmark, Finland, and Norway, the University of Iceland and Stockholm University at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrofisica de Canarias. This paper includes data taken at The McDonald Observatory of The University of Texas at Austin. We acknowledge the use of public TESS data from pipelines at the TESS Science Office and at the TESS Science Processing Operations Center. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center for the production of the SPOC data products. Funding for the Stellar Astrophysics Centre is provided by The Danish National Research Foundation (Grant agreement no.: DNRF106). A.A.B., B.S.S., and I.A.S. acknowledge the support of Ministry of Science and Higher Education of the Russian Federation under the grant 075-15-2020-780(N13.1902.21.0039). The numerical results presented in this work were obtained at the Centre for Scientific Computing, Aarhus \url{http://phys.au.dk/forskning/cscaa/}. This work makes use of observations from the LCOGT network. 
Part of the LCOGT telescope time was granted by NOIRLab through the Mid-Scale Innovations Program (MSIP). MSIP is funded by NSF. P. R. and L. M. acknowledge support from National Science Foundation grant No. 1952545. This research made use of Astropy,\footnote{http://www.astropy.org} a community-developed core Python package for Astronomy \citep{art:astropy2013,art:astropy2018}. This research made use of matplotlib \citep{misc:hunter2007}. This research made use of TESScut \citep{art:brasseur2019}. This research made use of astroplan \citep{misc:morris2018}. This research made use of SciPy \citep{misc:scipy2020}. This research made use of corner \citep{corner}. \bibliographystyle{aa}
\section{Introduction} Our visual world constantly presents situations that require us to forecast what will happen over time by observing one still image from a single moment. Studies in neuroscience show that this \emph{preplay} activity might constitute an automatic prediction mechanism in the human visual cortex~\cite{ekman2017time}. Given the great progress in artificial intelligence, researchers have also begun to let machines learn to perform such predictive activity for various applications. For example in Figure~\ref{fig:teaser}(top), from a snapshot taken by a surveillance camera, the system is expected to predict the man's next action, which could be used for safety precautions. Another application in computational photography is turning still images into vivid cinemagraphs for aesthetic effects, as shown in Figure~\ref{fig:teaser}(bottom). In this work, we mainly study how to generate pixel-level future frames in multiple time steps given one still image. A number of existing prediction models~\cite{mathieu-ICLR-2016,convLSTM-NIPS-2015,mcnet-ICLR-2017,drnet-NIPS-2017} assume that a short video sequence ($>$1 frame) is observed. Since multiple historical frames explicitly exhibit obvious motion cues, most of them use deterministic models to render a fixed future sequence. In contrast, our single-image based prediction task, without any motion information provided, implies obvious uncertainties in both the spatial and temporal domains. Therefore we propose a probabilistic model based on a conditional variational autoencoder (cVAE) to model the uncertainty. Our probabilistic model has two unique features. First, it is a 3D-cVAE model, i.e., the autoencoder is designed in a spatial-temporal architecture with 3D convolution layers. 
The 3D convolutional layer~\cite{c3d-ICCV-2015}, which takes a volume as input, is able to capture correlations between the spatial and temporal dimensions of signals, thereby rendering distinctive spatial-temporal features for better predictions. Second, the output of our model is optical flows, which characterize the spatial layout of how pixels are going to move step by step. Different from other methods that predict trajectories~\cite{walker2016uncertain}, frame differences~\cite{crossconv-NIPS-2016} or frame pixels~\cite{drnet-NIPS-2017}, the flow is a more natural and general representation of motions. It serves as a relatively low-dimensional reflection of high-level structures and can be obtained in an unsupervised manner. \input{figs/teaser.tex} With the predicted flows, we next formulate the full frame synthesis as a generation problem. Due to the existence of occlusions, flow-based pixel-copying operations (e.g., warping) are obviously ineffective here. The model should be capable of ``imagining'' the appearance of future frames and removing the unnecessary parts in the previous frame at the same time. Therefore we propose a generative model \emph{Flow2rgb} to generate pixel-level future frames. Such a model is non-trivial and is demonstrated to be effective in keeping the generated sequence close to the manifold of real sequences (Figure~\ref{fig:embedding_manifold}). Overall, we formulate the multi-frame prediction task as a multiple-time-step flow prediction phase followed by a flow-to-frame generation phase. Such a two-phase design prevents the model from directly looking at the high-dimensional pixel space of the frame sequence and is demonstrated to produce better predictions. During testing, by drawing different samples from the learned latent distribution, our approach can also predict diverse future sequences. 
The main contributions of this work are summarized as follows: \begin{itemize} \item We propose a spatial-temporal conditional VAE model (3D-cVAE) to predict future flows in multiple time steps. The diversity in predictions is realized by drawing different samples from the learned distribution. \item We present a generative model that learns to generate the pixel-level appearance of future frames based on predicted flows. \item We demonstrate the effectiveness of our method for predicting sequences that contain both articulated objects (e.g., humans) and dynamic textures (e.g., clouds). \end{itemize} \section{Related Work} \paragraph{\bf Action prediction.} The macroscopic analysis of prediction based on the given frame(s) can be predicting what event is going to happen~\cite{yuen2010data,lan2014hierarchical,hoai2014max}, trajectory paths~\cite{kitani2012activity}, or recognizing the type of human activities~\cite{vondrick2016anticipating,walker-ICCV-2015}. Some of the early methods are supervised, requiring labels (e.g., bounding boxes) of the moving object. Later approaches~\cite{walker-ICCV-2015} realize unsupervised prediction by relying on the context of scenes. However, these approaches usually only provide coarse predictions of how the future will evolve and are unable to provide richer information beyond an action (or event) label. \paragraph{\bf Pixel-level frame prediction.}~Recent prediction methods move to the microcosmic analysis of more detailed information about the future. This is directly reflected by requiring the pixel-level generation of future frames in multiple time steps. With the development of deep neural networks, especially the extensive use of recursive modules, deep models have come to dominate the prediction of realistic future frames. 
Much progress has been made in the quality of generated future outputs by designing different network structures~\cite{srivastava-ICML-2015,oh-NIPS-2015action,mathieu-ICLR-2016,babaeizadeh2017stochastic,finn2017deep} or using different learning techniques, including adversarial loss~\cite{videoGAN-NIPS-2016,liang-ICCV-2017}, motion/content separation~\cite{mcnet-ICLR-2017,mocoGAN-2017,drnet-NIPS-2017}, and transformation parameters~\cite{finn2016unsupervised,carl-CVPR-2017transformer}. Our work also aims at accurate frame predictions, but the specific setting is to model the uncertainties of multi-frame prediction given a single still image as input. In terms of multi-frame predictions conditioned on still images, the works closest to ours are~\cite{johnny-CVPR-2017,ruben-ICML-2017}. However,~\cite{johnny-CVPR-2017} only predicts pose information and the proposed model is deterministic. The work in \cite{ruben-ICML-2017} also estimates poses first and then uses an image-analogy strategy to generate frames. But their pose-generation step relies on observing multiple frames. \Yijun{ Moreover, both approaches employ recursive modules (e.g., recurrent neural networks) for consecutive predictions, which may overemphasize learning the temporal information only. Instead, we use the 3D convolutional layer~\cite{c3d-ICCV-2015} which takes a volume as input. Since both spatial and temporal information are encoded together, the 3D convolution can generally capture correlations between the spatial and temporal dimensions of signals, thereby rendering distinctive spatial-temporal features~\cite{c3d-ICCV-2015}. } In addition, both~\cite{johnny-CVPR-2017,ruben-ICML-2017} focus on human dynamics while our work targets both articulated objects and dynamic textures. In terms of modeling future uncertainties, two methods~\cite{crossconv-NIPS-2016,walker2016uncertain} are closely related. 
However, Xue et al.~\cite{crossconv-NIPS-2016} only model the uncertainty in the next one-step prediction. If we iteratively run the one-step prediction model for multi-step predictions, the frame quality degrades quickly through error accumulation, due to the lack of modeling of temporal relationships between frames. Though Walker et al.~\cite{walker2016uncertain} can keep forecasting over the course of one second, instead of predicting real future frames, their method only predicts the dense trajectories of pixels. Also, such trajectory-supervised modeling requires laborious human labeling. Different from these methods, our approach integrates multi-frame prediction and uncertainty modeling in one model. \paragraph{\bf Dynamic textures.}~The above-mentioned methods mainly focus on the movement of articulated objects (e.g., humans). In contrast, dynamic textures often exhibit more randomness in the movement of texture elements. Both traditional methods based on linear dynamical systems~\cite{doretto-IJCV-2003,yuan2004synthesizing} and neural network based methods \cite{xie-CVPR-2017} require learning a model for each sequence example. Different from those methods, we collect a large number of dynamic texture videos and aim at modeling the general distribution of their motions. Such a model can immediately serve as an editing tool when animating static texture examples. \section{Proposed Algorithm} We formulate the video prediction as two phases: flow prediction and flow-to-frame generation. The flow prediction phase, triggered by a noise sample, directly predicts a set of consecutive flow maps conditioned on the observed first frame. Then the flow-to-frame phase iteratively synthesizes future frames with the previous frame and the corresponding predicted flow map, starting from the first given frame and the first predicted flow map. 
\begin{figure}[t] \centering \begin{tabular}{c@{\hspace{0.005\linewidth}}c} \includegraphics[width = .96\linewidth]{figs/framework/Framework3.pdf} & \\ \end{tabular} \caption{Architecture of the proposed multi-step prediction network. It consists of a 3D-cVAE (left) for predicting consecutive flows and a \emph{Flow2rgb} model to generate future frame pixels (right). During testing, the encoder (blue rectangle) of the 3D-cVAE is no longer used and we directly sample points from the distribution for predictions.} \label{fig:framework} \end{figure} \subsection{Flow prediction} \label{sec:flow_prediction} Figure~\ref{fig:framework}(left) illustrates the architecture of our proposed model for predicting consecutive optical flows. Formally, our model is a conditional variational autoencoder~\cite{vae-ICLR-2014,cvae-sohn2015learning} with a spatial-temporal convolutional architecture (3D-cVAE). Given a sequence $X=\{x_i\}^{M}_{0}$ with $x_0$ as the starting frame, we denote the set of consecutive optical flows between adjacent frames in $X$ as $F=\{f_i\}^{M-1}_{0}$. The network is trained to map the observation $F$ (conditioned on $x_0$) to a latent variable $z$ that is likely to reproduce $F$. In order to avoid training a deterministic model, we produce a distribution over $z$ values, from which we sample before decoding. Such a variational distribution $q_{\phi}(z\vert x_0,F)$, known as the recognition model in~\cite{cvae-sohn2015learning}, is trained to follow a Gaussian distribution $p_{z}(z)$. Given a sampled $z$, the decoder decodes the flow $F$ from the conditional distribution $p_{\theta}(F\vert x_0,z)$. 
Therefore the whole objective of network training is to maximize the variational lower bound~\cite{vae-ICLR-2014} of the log-likelihood: \begin{equation} \label{eq:vae} \mathcal{L}(x_0, F;\theta,\phi) \approx -\mathcal{D}_{KL} (q_{\phi}(z\vert x_0,F) \vert \vert p_{z}(z)) + \frac{1}{L} \sum_{l=1}^{L} \log p_{\theta} (F \vert x_0, z^{(l)}), \end{equation} where $\mathcal{D}_{KL}$ is the Kullback-Leibler (K-L) divergence and $L$ is the number of samples. Maximizing the rightmost term in (\ref{eq:vae}) is equivalent to minimizing the L1 distance between the predicted flow and the observed flow. Hence the loss $\mathcal{L}$ consists of a flow reconstruction loss and a K-L divergence loss. \input{figs/flow_prediction.tex} Different from traditional cVAE models~\cite{cvae-sohn2015learning,crossconv-NIPS-2016,walker2016uncertain}, our 3D-cVAE model employs the 3D convolution (purple blocks in Figure~\ref{fig:framework}), which is demonstrated to be well-suited for spatial-temporal feature learning~\cite{c3d-ICCV-2015,videoGAN-NIPS-2016}. In terms of network architecture, the 3D convolutional network outputs multiple (a volume of) flow maps instead of one, which can be used to predict multiple future frames. More importantly, the spatial-temporal relationships between adjacent flows are implicitly modeled during training due to the 3D convolution operations, ensuring that the predicted motions are continuous and reasonable over time. In order to condition the variational distribution $q_{\phi}(z\vert x_0,F)$ on the starting frame, we stack $x_0$ with each flow map $f_i$ in $F$ as the encoder input. Meanwhile, learning the conditional distribution $p_{\theta}(F\vert x_0,z)$ for flow reconstruction also needs to be conditioned on $x_0$ in the latent space. Therefore, we propose an image encoder (pink blocks in Figure~\ref{fig:framework}) to first map $x_0$ to a latent vector that has the same dimension as $z$. 
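The two terms of the objective in (\ref{eq:vae}) can be sketched numerically as follows. This is a minimal numpy illustration with toy tensor sizes, assuming a diagonal-Gaussian posterior and a standard-normal prior; it is not the actual network code, and the reparameterized sampling shown is the standard VAE trick rather than something spelled out in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def cvae_loss(mu, logvar, flow_pred, flow_true):
    # KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian posterior
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    # L1 reconstruction term; minimizing it corresponds to maximizing
    # a (Laplace) log-likelihood log p(F | x0, z)
    rec = np.sum(np.abs(flow_pred - flow_true))
    return kl + rec

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
# so gradients could flow through mu and logvar during training.
mu = rng.normal(size=2000) * 0.1
logvar = rng.normal(size=2000) * 0.1
z = mu + np.exp(0.5 * logvar) * rng.normal(size=2000)

flow_true = rng.normal(size=(16, 2, 8, 8))   # M toy flow maps (2 channels)
loss = cvae_loss(mu, logvar, flow_true + 0.1, flow_true)
```

With `mu = 0` and `logvar = 0` the KL term vanishes, and with a perfect reconstruction the whole loss is zero, matching the two-term decomposition described above.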
Inspired by the image analogy work~\cite{reed2015deep}, we use a conditioning strategy that combines multiplication and addition operations, as shown in Figure~\ref{fig:framework}(left). After we obtain the flow sequence for the future, we proceed to generate the pixel-level full frames. \subsection{Frame generation} \label{sec:frame_generation} Given the flow information, a common way to obtain the next frame is warping or pixel copying~\cite{zhou2016view}. However, due to the existence of occlusions, the result is often left with unnecessary pixels inherited from the previous frame. The frame interpolation work~\cite{liu-ICCV-2017} predicts a mask indicating where to copy pixels from the previous and next frames. But they require at least two frames to infer the occluded parts. Since we only observe one image, it is straightforward to formulate this step as a generation process, meaning that this model can ``imagine'' the appearance of the next frame according to the flow and starting frame. A similar idea is also applied in the task of novel view synthesis~\cite{park2017transformation}. \input{figs/chair_manifold.tex} The architecture of the proposed frame generation model \emph{Flow2rgb} is shown in Figure~\ref{fig:framework}(right). Given the input $x_t$ and its optical flow $f_t$ that represents the motion of the next time step, the network is trained to generate the next frame $x_{t+1}$. Since two adjacent frames often share similar information (especially in the static background regions), in order to let the network focus on learning the difference between the two frames, we first warp $x_t$ based on the flow to get a coarse estimation $\tilde{x}_{t+1}$. Then we design a Siamese-like~\cite{chopra2005learning} network with the warped frame and the flow as two streams of input. 
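The warping step used to obtain the coarse estimation $\tilde{x}_{t+1}$ can be sketched as follows. This is a nearest-neighbour backward warp on a toy frame for illustration only; real implementations typically use bilinear sampling, and the flow convention (sample the source at the position displaced by the flow) is an assumption of this sketch.

```python
import numpy as np

def warp(frame, flow):
    """Backward-warp `frame` (H, W, C) by `flow` (H, W, 2): each output
    pixel samples the source at (x - u, y - v); nearest-neighbour for
    brevity (bilinear sampling is the usual choice in practice)."""
    H, W = frame.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, H - 1)
    return frame[src_y, src_x]

# A toy frame with a bright 2x2 patch and a uniform flow of (+1, 0):
# every pixel moves one step to the right.
frame = np.zeros((6, 6, 3)); frame[2:4, 2:4] = 1.0
flow = np.zeros((6, 6, 2)); flow[..., 0] = 1.0
coarse = warp(frame, flow)   # the bright patch now occupies columns 3:5
```

Occluded or disoccluded regions are exactly where such copying fails, which is what motivates letting \emph{Flow2rgb} generate, rather than copy, those pixels.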
The frame and flow encoders (blue and green blocks) borrow the architecture of VGG-19 up to the Relu\_4\_1 layer, and the decoder (yellow blocks) is designed to be symmetrical to the encoder, with nearest-neighbor upsampling layers used for enlarging feature maps. We train the model using a pixel reconstruction loss and a feature loss~\cite{johnson2016perceptual,Doso-NIPS2016-Generation} as shown below: \begin{equation}\label{reconstruction} \mathcal{L} = \|\hat{x}_{t+1}-x_{t+1}\|_{2} + \sum^{5}_{K=1} \lambda\|\Phi_{K} (\hat{x}_{t+1}) - \Phi_{K} (x_{t+1})\|_{2}~, \end{equation} where $\hat{x}_{t+1}$, $x_{t+1}$ are the network output and ground truth (GT), and $\Phi_K$ is the VGG-19~\cite{VGG-ICLR-2015} encoder that extracts the Relu\_K\_1 features. $\lambda$ is the weight to balance the two losses. This model is learned in an unsupervised manner without human labels. Note that this is a one-step flow-to-frame model. Since we predict multi-step flows in the flow prediction stage, starting with the first given frame, we iteratively run this model to generate the following frame based on the next flow and the previously generated frame. \begin{figure}[t] \centering \begin{tabular}{c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c} \includegraphics[width = .48\linewidth]{figs/manifold/manifold_pool5_tsne.pdf} & \includegraphics[width = .48\linewidth]{figs/manifold/manifold_fc6_tsne.pdf} & \\ {(a) VGG-19 pool5} & {(b) VGG-19 fc6} \\ \end{tabular} \caption{Visualization of sequence (a chair turning around) manifold in deep feature space. Starting from the same frame, each predicted frame of three sequences is visualized as a 2-D point by applying t-SNE~\cite{tsne} on its deep features. The moving average is shown as lines to imply the shape (or trend) of the manifold. 
For example in (a), the GT rotating chair (blue) follows an ``8''-like manifold in pool5 feature space, which our predicted sequence (yellow) follows closely but the warping sequence (green) deviates from much further. } \label{fig:embedding_manifold} \end{figure} We show the effectiveness of our \emph{Flow2rgb} model in Figure~\ref{fig:chair} with an example of a chair rotating sequence~\cite{chair-CVPR-2015}. To verify the frame generation phase alone, we assume that the flows are already available (computed by~\cite{spynet-CVPR-2017}). Then given the first frame and future flows, the second row of Figure~\ref{fig:chair} shows the iterative warping results, where the chair legs are repeatedly copied in future frames as warping is unable to depict the correct appearance of the chair in different views. In contrast, our model iteratively generates the occluded parts and removes unnecessary parts in the previous frame according to the flow at each time step. As claimed in~\cite{chair-CVPR-2015}, the deep embeddings of objects under consecutively changing views often follow a certain manifold in feature space. If we interpret this changing view as a type of rotating motion, our predicted results for different views also need to stay close to the manifold shape of the GT sequence. We demonstrate this by extracting the VGG-19~\cite{VGG-ICLR-2015} features of each predicted frame, mapping them to 2-D points through t-SNE~\cite{tsne}, and visualizing them in Figure~\ref{fig:embedding_manifold}. It clearly shows that our predictions follow the manifold of the GT sequence closely, while warping drives the predictions to deviate from the GT further and further. \section{Experimental Results} In this section, we first discuss the experimental settings and implementation details. We then present qualitative and quantitative comparisons between the proposed algorithm and several competing algorithms. Finally, we analyze the diversity issue in uncertainty modeling. 
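The manifold check just described (extract per-frame deep features, embed them to 2-D, and inspect the resulting trajectory) can be sketched as follows. To keep the sketch dependency-free we use synthetic circular features and a PCA projection in place of real VGG-19 features and t-SNE; both substitutions are assumptions of this illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for per-frame deep features of a rotating-object sequence:
# points on a circle embedded in a 512-D space, plus small noise.
angles = np.linspace(0, 2 * np.pi, 32, endpoint=False)
basis = rng.normal(size=(2, 512))
feats = np.column_stack([np.cos(angles), np.sin(angles)]) @ basis
feats += rng.normal(scale=0.01, size=feats.shape)

# 2-D embedding via PCA (the paper uses t-SNE); rows of `embed` are
# the per-frame 2-D points whose trajectory traces the manifold.
centered = feats - feats.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
embed = centered @ Vt[:2].T
```

Plotting `embed` for the GT, generated, and warped sequences side by side then reveals which prediction stays on the manifold of the real sequence.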
\paragraph{\bf Datasets.} We mainly evaluate our algorithm on three datasets. The first one is the KTH dataset~\cite{KTH-ICPR-2004}, which is a human action video dataset that consists of six types of action and 600 videos in total. It represents the movement of articulated objects. Same as in~\cite{mcnet-ICLR-2017,drnet-NIPS-2017}, we use persons 1-16 for training and 17-25 for testing. We also collect another two datasets from online websites, i.e., \emph{WavingFlag} and \emph{FloatingCloud}. These two datasets represent dynamic texture videos where motions may cause shape changes in dynamic patterns. The \emph{WavingFlag} dataset contains 341 videos of 80K+ frames and the \emph{FloatingCloud} dataset has 415 videos of 150K+ frames in total. In each dataset, we randomly split all videos into the training (4/5) and testing (1/5) set. \paragraph{\bf Implementation details.}~Given the starting frame $x_0$, our algorithm predicts the future in the next $M=16$ time steps. Each frame is resized to 128$\times$128 in experiments. Similar to~\cite{walker-ICCV-2015,gao-2017im2flow}, we employ an existing optical flow estimator SPyNet~\cite{spynet-CVPR-2017} to obtain flows between GT frames for training the 3D-cVAE. As described in Section~\ref{sec:flow_prediction}, we stack $x_0$ with each flow map $f_i$ in $F$. Thus during training, the input cube to the 3D-cVAE is of size $16\times 5 \times 128\times 128$ where $5=2+3$ (2-channel flow and 3-channel RGB). The dimension of the latent variable $z$ in the bottleneck is set as 2000. Another important factor for successful network training is to normalize the flow roughly to (0,1) before feeding it into the network, ensuring pixel values of both flows and RGB frames are within a similar range. Since the \emph{Flow2rgb} model can be an independent module for motion transfer with known flows, we train the 3D-cVAE and \emph{Flow2rgb} models separately in experiments. 
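The input-cube layout and flow normalization described in the implementation details can be sketched as follows. Random arrays stand in for real frames and SPyNet flows, and the global min-max normalization shown is one plausible reading of "normalize the flow roughly to (0,1)".

```python
import numpy as np

rng = np.random.default_rng(0)
M, H, W = 16, 128, 128

x0 = rng.uniform(size=(3, H, W))                    # starting RGB frame in [0,1]
flows = rng.normal(scale=20.0, size=(M, 2, H, W))   # raw flows in pixels

# Normalize flows roughly to (0, 1) so they share the RGB value range
fmin, fmax = flows.min(), flows.max()
flows_n = (flows - fmin) / (fmax - fmin)

# Stack x0 with each of the M flow maps: an M x 5 x H x W input cube,
# where 5 = 2 (flow channels) + 3 (RGB channels)
cube = np.concatenate([flows_n, np.broadcast_to(x0, (M, 3, H, W))], axis=1)
```

The resulting `cube` matches the $16\times 5\times 128\times 128$ training input stated above, with every channel in a comparable value range.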
\input{figs/visual_result.tex} \paragraph{\bf Evaluations.}~Different prediction algorithms have their unique settings and assumptions. For example, Mathieu et al.~\cite{mathieu-ICLR-2016} require four frames stacked together as the input. Villegas et al.~\cite{mcnet-ICLR-2017} require the image difference (i.e., at least two frames) as input. Their following work~\cite{ruben-ICML-2017}, though based on one frame, additionally needs multiple historical human pose maps to start the prediction. For fair comparisons, we mainly select prediction methods~\cite{drnet-NIPS-2017,crossconv-NIPS-2016} that accept one single image as the only input for comparison. The work of~\cite{drnet-NIPS-2017} represents the typical recursive prediction pipeline, which builds upon a fully-connected long short-term memory (FC-LSTM) layer for predictions. Their model is originally trained and tested by observing multiple frames. Here we change their setting to one-frame observation in order to be consistent with our setting. The work of~\cite{crossconv-NIPS-2016} is the typical one-step prediction method based on one given frame. To get multi-frame predictions, we train their model and iteratively test it to get the next prediction based on the previous prediction. In Figure~\ref{fig:visual_result}, we provide a visual comparison between the proposed algorithm and~\cite{drnet-NIPS-2017,crossconv-NIPS-2016}. In~\cite{drnet-NIPS-2017}, a pre-trained and disentangled \emph{pose} embedding is employed to keep predicting the pose of the next frame through a FC-LSTM module. \Yijun{ For articulated objects, the pose is often compact and in low dimensions, which is relatively easy to handle with a single LSTM module. However, for dynamic textures (e.g., flag, cloud) where all pixels are likely to move, the global pose becomes complex and is no longer a low-dimensional structure representation. 
Therefore the capacity of recursive models is not enough to capture the spatial and temporal variation trends at the same time. } The first two examples in Figure~\ref{fig:visual_result} show that the flag and cloud in predicted frames are nearly static. Meanwhile, the pose only describes the static structure of the object in the current frame and cannot convey as much information as the flow about the next-step motion. In the third example of Figure~\ref{fig:visual_result}, it is obvious that the human is walking to the right. But the results of~\cite{drnet-NIPS-2017} show the human walking in the reverse direction. Moreover, since they directly predict frame pixels and use the reconstruction loss only, their results are relatively blurry. In~\cite{crossconv-NIPS-2016}, as they only predict the next one frame, the motion is often clear in the second frame. But after we keep predicting the following frame using the previously predicted frame, the motion gradually disappears and the quality of results degrades quickly within a few steps. Moreover, they choose to predict the image difference, which only shows global image changes but does not capture how each pixel will move to its corresponding one in the next frame. In contrast, our results show more continuous and reasonable motion, reflected by better generated full frames. For example, in the first flag example, the starting frame indicates that the fold on the top right will disappear and the fold at the bottom left will develop into bigger folds. Our predicted sequence presents dynamics similar to those in the GT sequence, which makes it look more realistic. 
\begin{figure}[t] \centering \begin{tabular}{c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c} \includegraphics[width = .48\linewidth]{figs/quantitative/rmse.pdf} & \includegraphics[width = .48\linewidth]{figs/quantitative/flow_rmse.pdf} & \\ {(a) RMSE on frames} & {(b) RMSE on flows} \\ \includegraphics[width = .48\linewidth]{figs/quantitative/perceptual.pdf} & \includegraphics[width = .48\linewidth]{figs/quantitative/userstudy.pdf} & \\ {(c) Perceptual metric~\cite{zhang2018perceptual} on frames} & {(d) Which sequence looks more realistic? } \\ \end{tabular} \caption{Quantitative evaluations of different prediction algorithms. We start from the per-pixel metrics (e.g., RMSE) and gradually take human perception into consideration. Our method achieves the best performance under metrics (b)-(d).} \label{fig:quantitative} \end{figure} \Yijun{ We also quantitatively evaluate these prediction methods using three different metrics: the root-mean-square error (RMSE), perceptual similarity~\cite{zhang2018perceptual}, and user preference. The RMSE is the classic per-pixel metric, which measures spatial correspondence without considering any high-level semantics and tends to favor smooth results. Based on this observation, the recent work of~\cite{zhang2018perceptual} proposes a perceptual similarity metric based on deep network embeddings, which is demonstrated to agree better with human perception. Lastly, we directly solicit feedback through user studies to understand which predicted results users prefer. } We start with the traditional RMSE, computing the difference between the predicted and GT sequences frame-by-frame, and show the result in Figure~\ref{fig:quantitative}(a). To understand how effective these prediction methods are, we design a simple baseline that copies the given frame as the prediction for every step.
However, we do not observe an obvious difference among these methods. Since prediction from one single image is inherently ambiguous, the GT sequence can be regarded as just one possibility of the future. The trend of the motion may be similar, but the resulting images can differ significantly at the pixel level, and the RMSE metric is very sensitive to such spatial mismatches. Similar observations are reported in~\cite{drnet-NIPS-2017,zhang2018perceptual}. That is why all these methods, when compared with the GT sequence, show similar RMSE results. Therefore, instead of measuring the RMSE on frames, we measure the RMSE on optical flows, because the optical flow indicates whether the motion field is predicted similarly or not. We compute the flow maps between adjacent frames of the GT sequence and of each predicted sequence using SPyNet~\cite{spynet-CVPR-2017} and show the RMSE results in Figure~\ref{fig:quantitative}(b). Now the differences become clearer, and our method achieves the lowest RMSE, meaning that our prediction is the closest to the GT in terms of the predicted motion. However, the evaluation of prediction results still needs to take human perception into consideration in order to determine whether sequences look as realistic as the GT sequence. We therefore turn to the perceptual similarity metric~\cite{zhang2018perceptual}. We use AlexNet~\cite{krizhevsky2012imagenet} for feature extraction and measure the similarity between the predicted and GT sequences frame-by-frame. Since this metric is obtained by computing feature distances, we denote it as perceptual dissimilarity, so that smaller values indicate greater similarity. The results in Figure~\ref{fig:quantitative}(c) show that the proposed method outperforms the other algorithms by an even larger margin than in Figure~\ref{fig:quantitative}(b), which means that the predicted sequence of our method is perceptually more similar to the GT sequence.
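To make the frame-level evaluation concrete, the per-frame RMSE of Figure~\ref{fig:quantitative}(a) and the copy baseline can be sketched as follows (a minimal numpy sketch under our own naming; the flow RMSE of Figure~\ref{fig:quantitative}(b) would additionally require an optical-flow estimator such as SPyNet, which is omitted here):

```python
import numpy as np

def per_frame_rmse(pred, gt):
    """Frame-by-frame RMSE between two sequences.

    pred, gt: float arrays of shape (T, H, W, C) with values in [0, 1].
    Returns a length-T array, one RMSE value per time step."""
    diff = pred.astype(np.float64) - gt.astype(np.float64)
    return np.sqrt((diff ** 2).reshape(diff.shape[0], -1).mean(axis=1))

def copy_baseline(first_frame, length):
    """Naive baseline: repeat the given starting frame for every step."""
    return np.repeat(first_frame[None], length, axis=0)
```

Because RMSE penalizes any per-pixel mismatch, a static or blurred baseline can score on a par with a plausible but spatially shifted prediction, which is why the curves in Figure~\ref{fig:quantitative}(a) are hard to tell apart.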
Finally, we conduct a user study to obtain feedback from human subjects on the different predicted results. We prepare 30 starting frames (10 from each dataset) and generate 30 sequences (16 frames each) for each method. For each subject, we randomly select 15 sets of sequences predicted by the three methods. For each starting frame, the three predicted sequences are displayed side-by-side in random order. Each subject is asked to vote for the one sequence that looks most realistic for each starting frame. We collect 900 votes from 60 users and report the results (in percentage) in Figure~\ref{fig:quantitative}(d). The study results clearly show that the proposed method receives the most votes for more realistic predictions in all three categories. Both Figure~\ref{fig:quantitative}(c) and (d) indicate that the proposed method performs favorably against~\cite{drnet-NIPS-2017,crossconv-NIPS-2016} in terms of perceptual quality. \begin{figure}[t] \centering \begin{tabular}{c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c} \includegraphics[width = .59\linewidth]{figs/random/random.pdf} & \includegraphics[width = .37\linewidth]{figs/random/manifold_pool5_rebuttal_random.pdf} \\ {(a) Visual comparisons of an exemplary sequence} & {(b) VGG-19 pool5} \\ \end{tabular} \caption{Comparison with a naive baseline which transfers a random motion field. (b) The GT sequence follows a ``C''-like manifold in the pool5 feature space, which our prediction follows closely while the random prediction deviates much further. } \label{fig:random} \end{figure} \paragraph{\bf Random motion.}~We also compare with a naive approach which uses random flow maps (e.g., sampled from the Gaussian distribution $N(0,2)$ for each pixel). We apply the proposed \emph{flow2rgb} model to both the random motion and the motion learned by our method to generate frames. Figure~\ref{fig:random}(a) shows one example.
In Figure~\ref{fig:random}(b), we visualize the manifold of predicted sequences in the deep feature space using the t-SNE scheme (as in Figure~\ref{fig:embedding_manifold}). Both demonstrate that the learned motion generates much better results than the random motion, as the naive approach neither models the motion distribution nor considers the temporal relationship between frames. \begin{figure}[t] \centering \begin{tabular}{c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c} \includegraphics[width = .48\linewidth]{figs/quantitative/uncertainty.pdf} & \includegraphics[width = .48\linewidth]{figs/quantitative/uncertainty3.pdf} & \\ {(a) Sensitivity of the perceptual quality} & {(b) Visualized distribution } \\ {under different noise} & { of predictions under different noise} \\ \end{tabular} \caption{Comparisons between~\cite{crossconv-NIPS-2016} and the proposed algorithm on uncertainty modeling given the same starting frame. By drawing different samples, the predictions generated by our method exhibit more diversity while remaining more similar to the GT. } \label{fig:uncertainty} \end{figure} \input{figs/diversity.tex} \paragraph{\bf Diversity.}~Both~\cite{crossconv-NIPS-2016} and the proposed method model the uncertainty in predictions, but they differ in modeling one-step~\cite{crossconv-NIPS-2016} versus multi-step uncertainty. By drawing different samples, we evaluate how the quality of predictions is affected by the noise input and how diverse the predicted sequences are. Since~\cite{crossconv-NIPS-2016} uses a 3200-dimensional noise vector and we use a 2000-dimensional one, the noise inputs of the two models are not exactly the same, but both are sampled from $N(0,1)$. We sample 10 noise inputs for each method, ensuring that the two sets of noise inputs have similar mean and standard deviation. We then obtain 10 sequences for each method and compare them with the GT sequence.
Figure~\ref{fig:uncertainty}(a) shows the mean and standard deviation of the perceptual metric over each method's 10 predictions when compared with the GT frame-by-frame. Under different noise inputs, our method keeps generating better sequences that are more similar to the GT. Meanwhile, the results of our algorithm show a larger deviation, which implies that there is more diversity in our predictions. To further verify this, we show the embeddings of the generated sequences in Figure~\ref{fig:uncertainty}(b). For each sequence, we extract the VGG-19~\cite{VGG-ICLR-2015} features (e.g., the fc6 layer) of each frame, stack them as one vector, and map it to a 2-D point through t-SNE~\cite{tsne}. Figure~\ref{fig:uncertainty}(b) shows that our 10 predictions are much closer to the GT sequence while being scattered enough to differ from each other. In contrast, the 10 predictions of~\cite{crossconv-NIPS-2016} huddle together and are far from the GT. These comparisons demonstrate that the proposed algorithm generates more realistic and diverse future predictions. Figure~\ref{fig:diversity} shows an example of two predicted sequences. \input{figs/image2life.tex} \paragraph{\bf Bringing still images to life.}~Unlike previous video prediction methods~\cite{mcnet-ICLR-2017,ruben-ICML-2017,walker2016uncertain} that mainly focus on humans for action recognition, our algorithm aims more generally at bringing elements of a still image to life, i.e., turning a still image into a vivid GIF for aesthetic effects. It can be an effective tool for video editing. In Figure~\ref{fig:image2life}(a), we show an example of turning a photo into a vivid sequence. We mask out the sky region, apply our model trained on the \emph{FloatingCloud} dataset, and generate the effect of clouds floating in the sky. This could further benefit existing sky editing methods~\cite{tsai2016sky}.
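The stacking step behind Figure~\ref{fig:uncertainty}(b), together with a simple numerical proxy for the spread of the samples, can be sketched as below (the names are ours; in the actual evaluation the per-frame features are VGG-19 fc6 activations and the 2-D map is produced by t-SNE, both omitted here):

```python
import numpy as np

def sequence_embedding(frame_features):
    """Stack per-frame feature vectors (e.g., fc6 activations)
    into one long vector describing the whole sequence."""
    return np.concatenate([np.ravel(f) for f in frame_features])

def diversity_stats(sample_embeddings, gt_embedding):
    """Return (mean distance of the samples to the GT embedding,
    mean pairwise distance among the samples). A good model keeps
    the first value small while the second stays large."""
    samples = np.stack(sample_embeddings)
    to_gt = float(np.linalg.norm(samples - gt_embedding, axis=1).mean())
    n = len(samples)
    pairwise = [np.linalg.norm(samples[i] - samples[j])
                for i in range(n) for j in range(i + 1, n)]
    return to_gt, float(np.mean(pairwise))
```

In this picture, predictions that "huddle together" far from the GT give a large first value and a small second value, while diverse yet faithful predictions give the opposite.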
Moreover, if we replace our flow prediction with known flows from a reference sequence, our flow-to-frame model \emph{flow2rgb} becomes a global motion style transfer model. As the current random sampling strategy for flow prediction is uncontrollable, future work may include introducing more user interaction to control detailed motions. \section{Conclusions} In this work, we propose a video prediction algorithm that synthesizes a set of likely future frames in multiple time steps from one single still image. Instead of directly estimating the high-dimensional future frame space, we decompose this task into a flow prediction phase and a flow-grounded frame generation phase. The flow prediction models the future uncertainty and the spatial-temporal relationship in a 3D-cVAE model. The frame generation step helps prevent the manifold of predicted sequences from straying off the manifold of real sequences. We demonstrate the effectiveness of the proposed algorithm on both human action videos and dynamic texture videos. \paragraph{\bf Acknowledgement.}~This work is supported in part by the NSF CAREER Grant \#1149783 and gifts from Adobe and NVIDIA. YJL is supported by the Adobe and Snap Inc. Research Fellowships. \clearpage \bibliographystyle{splncs}
\section{Introduction} Experiments at the RHIC and LHC plan to study fluctuations of conserved quantities in heavy-ion collisions in different rapidity windows. With proper particle identification, one can measure in experiment both absolutely conserved quantities, like the baryon number ($B$) and the electrical charge ($Q$), as well as quantities which are conserved only under the strong interactions, such as the third component of isospin ($I_3$), the strangeness ($S$) and the hypercharge ($Y$). These observations can be used to extract fluctuations in the numbers of these quantities \cite{ahm,jk}. Such observations need to be compared to predictions of quark number susceptibilities (QNS) from lattice QCD. In this paper we report on lattice computations of a variety of diagonal QNS--- $\chi_B$, $\chi_Q$, $\chi_I$, $\chi_S$ and $\chi_Y$. One of the main results in this paper is the extraction of predictions for the ratios of these susceptibilities which survive the continuum limit. Our second important result is the investigation of the strange quark sector of the theory: we extract the Wroblewski parameter in a dynamical QCD computation for the first time, and also investigate the dynamics and kinematics of flavour symmetry breaking in QCD. Further, we present results on the cross correlations $\chi_{BQ}$, $\chi_{BY}$, $\chi_{BS}$ and $\chi_{QY}$. These cross correlations are used to explore the charge and baryon number of objects that carry flavour. We find that the baryon number of flavour carrying objects immediately above the QCD crossover temperature, $T_c$, is 1/3 and the charges are 1/3 or 2/3. We find, furthermore, that these objects are almost pure flavour--- anything carrying u flavour has only tiny admixtures of d and s flavours, {\sl etc.\/}. This is our third main result.
We have bypassed the necessity of numerically taking the continuum limit of the theory by restricting attention to the high temperature phase, where it is easy to define robust observables which have little, or no, lattice spacing dependence. We demonstrate the robustness of the observables in quenched QCD, and then compute these quantities in QCD with two flavours of light dynamical quarks. These are also good observables in the sense of \cite{ahm}--- \begin{equation} C_{K/L} \equiv \frac{\chi_K}{\chi_L} = \frac{\sigma^2_K}{\sigma^2_L} \,, \label{ratio}\end{equation} where $\chi_K$ and $\chi_L$ are QNS for the conserved quantum numbers $K$ and $L$, and $\sigma_K$ and $\sigma_L$ are the variances. The two variances must be obtained under identical experimental conditions, after removing counting (Poisson) fluctuations as suggested by \cite{pruneau}. Thus the robust lattice observables give predictions for robust experimental observables. Either $K$ or $L$ can also stand for a composite label $(M,N)$, where $M$ and $N$ are conserved quantum numbers--- in this case the susceptibility is an off-diagonal susceptibility, and the variance has to be replaced by the covariance of $M$ and $N$. Note the relation with the correlation coefficient--- \begin{equation} r_{MN} = \frac{\langle MN\rangle -\langle M\rangle \langle N\rangle}{\sigma_M\sigma_N} = \frac{\chi_{MN}}{\sqrt{\chi_M\chi_N}} = C_{(M,N)/M}\sqrt{C_{M/N}} = C_{(N,M)/N}\sqrt{C_{N/M}} \label{robcor}\end{equation} where again, the expressions are robust both on the lattice and in experiment. The study of these robust variables tells us about the relative magnitudes of fluctuations in different quantum numbers, and is one of the main results reported here. We further present investigations of the strange quark sector of the theory. The robust variable $C_{s/u}$ is closely related to the Wroblewski parameter, which can be extracted from experiments.
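As an illustration of eqs. (\ref{ratio}) and (\ref{robcor}), the robust ratio and the correlation coefficient can be estimated from event-by-event samples as follows (a minimal numpy sketch; the array names are ours, and experimental data would first need the Poisson-counting correction of \cite{pruneau}):

```python
import numpy as np

def robust_ratio(K, L):
    """C_{K/L} = sigma_K^2 / sigma_L^2, from event-by-event samples of
    two conserved quantum numbers measured under identical conditions."""
    return np.var(K) / np.var(L)

def correlation(M, N):
    """r_{MN} = (<MN> - <M><N>) / (sigma_M * sigma_N)."""
    M, N = np.asarray(M, float), np.asarray(N, float)
    cov = (M * N).mean() - M.mean() * N.mean()
    return cov / (M.std() * N.std())
```

Note that the population variance (numpy's default) is used throughout, matching the ensemble-average definitions in the text.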
This shows a strong dependence on the actual strange quark mass, $m_s$, in the vicinity of $T_c$. Since $m_s\simeq T_c$, it seems that part of this sensitivity could be attributed purely to kinematics. We investigate the dynamical matrix elements which are responsible for flavour symmetry breaking in QCD and compare the importance of kinematics and dynamics in the strange quark sector. This is our second major result. One outstanding question about the high temperature phase of QCD is the nature of flavoured excitations. There is ample evidence that quarks are liberated at sufficiently high temperature--- the continuum limit of lattice computations of screening masses is consistent with the existence of a Fermi gas of quarks for $T\ge2T_c$ \cite{mtc,valence}; quantitative agreement between weak coupling estimates of the susceptibilities \cite{bir,alexi} and the lattice data \cite{pushan,valence} also confirms this; the equation of state at very high temperature also testifies to this. However, comparisons of lattice results and weak coupling computations of these quantities fail for $T<2T_c$. Our third new result concerns the thermodynamically important single-particle excitations. We address this question in the most direct way possible--- create an excitation with one quantum number and observe what other quantum numbers it carries. Technically, this involves the measurement of robust ratios of off-diagonal QNS; the correlation between quantum numbers $K$ and $L$ can be studied through the ratio \begin{equation} C_{(KL)/L} = \frac{\langle KL\rangle-\langle K\rangle\langle L\rangle}{ \langle L^2\rangle-\langle L\rangle^2}. \label{example}\end{equation} We find that such measurements are feasible on the lattice, and are open to direct interpretation. We also suggest that they could be performed in heavy-ion experiments, as direct tests of whether quarks exist in the hot and dense matter inside the fireball.
A recent suggestion of \cite{koch} is the measurement of just such a variable: essentially $C_{(BS)/S}$. We find that, immediately above $T_c$, the baryon number, charge and other flavour quantum numbers are linked with each other in exactly the same way as they are in quarks. For example, excitations which carry unit strangeness carry baryon number of $-1/3$ and charge of $+1/3$. This, together with the fact that there is also a failure of weak coupling theory, would imply that the QCD plasma phase is a ``quark liquid'' in the sense that the quasi-particles carry the quantum numbers of quarks, but the interactions between them are too strong for the system to be treated in weak coupling theory. Extension of these measurements to finite chemical potential for $T>T_c$ and $\mu\gg T$ could allow us to check whether or not the system is a normal Fermi liquid \cite{landau}. Such an extension is feasible since the Taylor series expansion of the free energy in $\mu/T$ has a radius of convergence much higher than unity for $T>T_c$. This is an appropriate place to remark upon a few aspects of our computations. Having removed most of the lattice spacing uncertainties by using robust variables, we have to control only the quark masses. We do this partly by performing the computations in an approximation called partial quenching. In this approximation the valence quark masses in the theory are tuned keeping the sea quark masses fixed. We explore the dependence of the robust variables on the sea quark masses and find that the results are not very sensitive to these parameters. This is expected--- away from a phase transition there is no more than a 5\% change in the QNS in going from quenched to $N_f=2$ dynamical QCD, and one expects the change to be smaller in going from $N_f=2$ to $N_f=2+1$, as long as one avoids the vicinity of the phase transition. The ratios are even less sensitive to the sea quark content than the QNS. 
In this study we have concentrated on the numerically more important effect of the valence quark masses. We have used two flavours of dynamical sea quarks of bare mass $m=0.1T_c$ to study a temperature range up to about $2T_c$. These quark masses are such that $m_\rho/T_c=5.4$ and $m_\pi/m_\rho=0.3$--- which makes this the smallest quark mass used in a systematic study of fluctuations. We have taken the strange quark to be quenched and to have a bare mass in the range $m_s/T_c=0.75$--1. This gives the correct physical value of the ratio $m_K/m_\rho$. We have also investigated the effect of decreasing the valence light quark mass by a factor of three, in order to obtain at the same time the correct physical value of the ratio $m_\pi/m_\rho$, and of varying the strange quark mass about its physical value. Details of the simulations and the results are given in the next section, and a summary of the results in the final section. Details of the formalism, including expressions for the various QNS, are given in the appendix. \section{Simulations and results} \subsection{The simulations} In earlier papers \cite{first,endpt,nls} we have shown that finite volume effects on the QNS are negligible for lattices with $N_s\ge2N_t$ ($N_s$ is the spatial extent of the lattice and $N_t$ the temporal extent). The data we discuss here are obtained on $4\times16^3$ lattices. The setting of the scale, the parameters employed and the statistics are detailed in \cite{endpt}. To that set of data we have added two more sets--- 55 configurations separated by more than two autocorrelation times at $T/T_c=0.975\pm0.010$ ({\sl i.e.\/}, $\beta=5.2825$) and 86 configurations, similarly spaced, at $T/T_c=1.15\pm0.01$ ({\sl i.e.\/}, $\beta=5.325$). The configurations are generated with a bare sea quark mass $m=0.1T_c$, which gives $m_\pi=0.3 m_\rho$.
We have explored the dependence of the physics on the strange quark mass and on variations in the light quark mass through partially quenched computations, {\sl i.e.\/}, the approximation in which the number of valence quark flavours is different from the number of dynamical sea quark flavours, and their masses are also different. Errors in partial quenching are bounded by comparing results with the fully quenched theory. \subsection{Quark number susceptibilities} \begin{figure} \begin{center} \scalebox{0.7}{\includegraphics{chi1.eps}} \end{center} \caption{Some of the QNS, $\chi/T^2$, as functions of $T/T_c$ for $m_{ud}=0.1T_c$ and $m_s=T_c$.} \label{fg.qns}\end{figure} Our primary results for QNS are shown in Figure \ref{fg.qns}. These were obtained using the eqs. (\ref{qnsa}) and (\ref{qnsb}) in Appendix \ref{sc.qns}. The diagonal QNS and several of the off-diagonal ones show the characteristic crossover from small values in the low temperature phase to large values in the high temperature phase which gave rise to the original interpretation that the QCD phase transition liberates quarks \cite{milc,gavai}. Observe that $\chi_B < \chi_Q$ through the full temperature range explored. Both $\chi_I$ and $\chi_Y$ have values between the two others. In the low temperature phase one has $\chi_Y < \chi_I$, but for $T\ge1.5 T_c$ one obtains $\chi_Y > \chi_I$. We expect the crossover temperature between these two regimes to vary with quark masses. Our results are compatible with earlier results with staggered fermions at the same cutoff and quark mass which were obtained in the high temperature phase \cite{pushan}. They are not directly comparable to results obtained in \cite{milc2} at the same lattice spacing due to differences in the discretization. 
\subsubsection{Robust observables} \begin{figure} \begin{center} \scalebox{0.5}{\includegraphics{conti.eps}} \scalebox{0.5}{\includegraphics{conti2.eps}} \end{center} \caption{Ratios of QNS are robust observables--- being insensitive both to changes in the lattice spacing $a=1/(N_t T)$ at fixed $T=2T_c$, and to the sea quark content of QCD. The quenched results come from a reanalysis of data from \cite{conti}. In both cases the light valence quark mass is $0.03T_c$ and the strange quark mass is $T_c$.} \label{fg.ratio}\end{figure} \begin{figure} \begin{center} \scalebox{0.7}{\includegraphics{robust.eps}} \end{center} \caption{Some robust predictions of fluctuation measures from QCD: all the quantities shown are the ratio $C_{X/Q}$ for the $X$ indicated in the figure, except for $X=S$ which is $C_{S/Q}/2$.} \label{fg.robust}\end{figure} In the quenched theory it was found that the QNS depend quadratically on the lattice spacing \cite{conti}, {\sl i.e.\/}, $\chi(a)=\chi+{\cal O}(a^2)$. Since staggered fermions have order $a^2$ lattice artifacts, one expects the same behaviour in the theory with sea quarks. We are therefore forced to search for observables which are robust against changes in the lattice spacing, in the sense that $r(a)=r+{\cal O}(a^n)$ with $n>2$. We expect the ratios of QNS to have very good scaling properties in the high temperature phase, where the flavour off-diagonal QNS are much smaller than the flavour diagonal QNS. In the low-temperature phase we do not necessarily expect such behaviour to hold, since these two pieces are comparable, and the coefficients of the order $a^2$ corrections in the two parts depend on different physical quantities. As shown in Figure \ref{fg.ratio}, ratios of QNS in the high temperature phase have this property. The figure also shows another pleasant property--- these ratios have little statistically significant dependence on the sea quark content of the theory.
We have checked that these two aspects of robustness hold for all ratios in the high temperature phase of QCD. The dependence of such ratios on the valence quark masses can be determined using the quadratic response coefficients (QRC) defined in \cite{rajarshi} and applied to the study of $C_{B/S}$. In view of these results, the hierarchy of QNS shown in the previous subsection must be a robust feature of QCD. It is therefore useful to demonstrate this hierarchy by plotting $C_{X/Q}$ as a function of $T/T_c$ in Figure \ref{fg.robust}. Our results indicate that experimental studies of $C_{S/Q}$, $C_{B/Q}$ and $C_{Y/Q}$ are the most promising in terms of distinguishing between the two phases of QCD, because they exhibit the largest changes in going from one phase to the other. \subsection{Strange quarks} \begin{figure} \begin{center} \scalebox{0.7}{\includegraphics{lambda_s.eps}} \end{center} \caption{The robust variable $C_{s/u}=\lambda_s$ as a function of $T/T_c$ when the light quark masses are taken to be $m_{ud}=0.03T_c$, corresponding to a realistic pion mass, and the strange quark mass is set to $m_s=T_c$, which gives a realistic value of the ratio $m_K/m_\pi$.} \label{fg.wrob}\end{figure} The Wroblewski parameter, $\lambda_s$, as extracted from experiments, is the ratio of the numbers of primary produced strange and light quark pairs. It has been argued earlier \cite{valence} that under certain conditions, whose satisfaction can be verified by independent observations, one has $\lambda_s=C_{s/u}$. Our results for this robust quantity are shown in Figure \ref{fg.wrob} \cite{prelim}. In this computation we have taken the strange quark mass to be $m_s=T_c$ and the two light quark masses to be degenerate, $m_{ud}=0.03T_c$, which reproduces the correct value of $m_\pi/m_\rho$.
As can be seen from the figure, the value of the ratio at $T_c$ is $\lambda_s\approx0.4$, in agreement with the value of the Wroblewski parameter extracted from experiments when the freeze-out temperature is close to $T_c$ \cite{cleymans}. It is also a pleasant fact that at lower temperatures the ratio keeps decreasing. The dependence of this ratio on the valence quark masses was investigated in \cite{rajarshi}, where it was shown that, in the continuum limit, there is no dependence on the valence quark mass except near $T_c$. In the vicinity of $T_c$, and immediately below, we found $\chi_s$ to be strongly dependent on $m_s$. It increases as a function of $T/T_c$ and at large enough $T$ reaches the same value as $\chi_u$, but it does this slowly when $m_s/T_c$ is large, and faster when $m_s\ll T_c$. If the plasma contains strange quark quasi-particles, as we argue later, then this behaviour could be a kinematic effect, which measures the phase space for a thermal gluon to split into a strange quark-antiquark pair. That the first effect is dynamical and the second kinematical can be motivated by a study of quantities which vanish in the SU(3) flavour symmetric limit. \subsubsection{Flavour symmetry breaking} \begin{figure} \begin{center} \scalebox{0.5}{\includegraphics{cby.eps}} \scalebox{0.5}{\includegraphics{isobrk.eps}} \end{center} \caption{The first panel shows $\chi_{BY}/T^2$ as a function of $T/T_c$ for various patterns of SU(3) flavour symmetry breaking. Holding $m_{ud}=0.1T_c$ constant we vary $m_s$ in (A) $m_s=T_c$, (B) $m_s=0.75T_c$, and (C) $m_s=0.5T_c$. Holding $\Delta_{us}=0.25T_c$ constant, we vary all the quark masses in (D) $m_s=0.75T_c$ and (E) $m_s=T_c$. In (F) all the quark masses are small: $m_{ud}=0.01T_c$, $m_s=0.1T_c$.
The second panel shows $\chi_{IY}/T^2$ as a function of $T/T_c$ when $m_u=0.03T_c$, $m_d=0.1T_c$ and $m_s=T_c$.} \label{fg.peak}\end{figure} \begin{figure} \begin{center} \scalebox{0.7}{\includegraphics{nonpert.eps}} \end{center} \caption{The flavour symmetry breaking matrix elements (A) $\chi_{BY}/\Delta_{us}^2$ extracted with $m_{ud}=0.03T_c$ and $m_s=0.1T_c$, and (B) $A_{IY}= \chi_{IY}/\Delta_{ud}^2$ extracted using $m_u=0.03T_c$, $m_d=0.1T_c$ and $m_s=T_c$, as functions of $T/T_c$. The kinematic suppression for realistic strange quark masses is clear from the significantly smaller values of $\chi_{BY}/\Delta_{us}^2$ when (C) $m_s=T_c$ and (D) $m_s=0.75T_c$.} \label{fg.coeff}\end{figure} Two off-diagonal susceptibilities show an interesting pattern--- $\chi_{BQ}$ and $\chi_{BY}$ are both continuous through $T_c$, but peak in the vicinity of $T_c$. Since $\chi_{BQ}=\chi_{BY}/2$, as seen from eqs. (\ref{qnsa}) and (\ref{qnsb}), we show only the latter in Figure \ref{fg.peak}, for the various values of the quark masses explained in the caption. The figure also displays $\chi_{IY}$ for $m_u\ne m_d$. Direct computations also show that $\chi_{BQ}=\chi_{BY}=0$ when all three quark masses are equal, and that $\chi_{IY}=0$ in the SU(2) symmetric limit--- providing an explicit demonstration that non-zero values of these quantities are due to flavour symmetry breaking (see the discussion in Appendix \ref{sc.qns}). From a comparison of the cases (D) and (E) in Figure \ref{fg.peak} it is clear that $\chi_{BY}$ is not a function of $\Delta_{us}=m_s-m_u$ and $T$ alone for large values of this asymmetry, since the two curves are not coincident although they have equal $\Delta_{us}$. A careful look at the cases (A), (B) and (C) in the same figure shows that when $m_s$ is comparable to $T_c$, both the position and the value of the peak in these QNS depend on $m_s$.
Explicit dependence of the flavour symmetry breaking matrix elements on the actual value of $m_s$ (and not just the asymmetry parameter) can only come as a kinematic effect. We try to confirm the magnitude of this effect next. In Figure \ref{fg.coeff} we display the values of the dimensionless quantities $A_{IY}=\chi_{IY}/\Delta_{ud}^2$ and $A_{BY}=\chi_{BY}/\Delta_{us}^2$, extracted using the computations in which $\Delta_{us}$ and $\Delta_{ud}$ are much smaller than $T_c$. It would be interesting to check the temperature range in which these dimensionless quantities are computable in weak coupling theory. In the same figure we also show $\chi_{BY}/\Delta_{us}^2$ when $\Delta_{us}$ is comparable to $T_c$. Its strong suppression relative to the former case shows the kinematic effect which is responsible for the shape of $C_{s/u}$ shown in Figure \ref{fg.wrob}. The physics of the region just above $T_c$ is known to be complicated when observed through gluonic variables such as $\Delta/T^4 = (\epsilon-3P)/T^4$ (where $\epsilon$ is the energy density and $P$ the pressure), as well as through the ratio of the lowest lying screening masses in the CP-even and CP-odd sectors \cite{saumen}. The peaks in $A_{BY}$ and $A_{IY}$ are the first observations of interesting structures near $T_c$ in fermionic variables unconnected with the order parameter. It would be interesting to see over what temperature range this structure is explainable by weak coupling theory. \subsection{Flavour carrying degrees of freedom} \begin{figure} \begin{center} \scalebox{0.7}{\includegraphics{koch.eps}} \end{center} \caption{The robust variables $C_{BS}$ and $C_{QS}$, as functions of $T/T_c$.
The quark masses used are $m_{ud}=0.1T_c$ and $m_s=T_c$, although in the high temperature phase there is no statistically significant dependence on the quark masses.} \label{fg.bcs}\end{figure} The question of which are the thermodynamically relevant degrees of freedom in the QCD plasma is easier to answer in the quark sector than in the gluon sector. The reason is that the multitude of flavour quantum numbers allows us to look for ``linkage'' of flavour, {\sl i.e.\/}, exciting one quantum number and seeing the magnitude of another quantum number that is simultaneously excited. \subsubsection{Strangeness carriers} Robust variables involving off-diagonal QNS serve precisely this purpose. In \cite{koch} the robust variable \begin{equation} C_{BS} = -3C_{(BS)/S} = -3\,\frac{\chi_{BS}}{\chi_S} = 1 + \frac{\chi_{us}+\chi_{ds}}{\chi_s} = 1 + C_{(us)/s}+C_{(ds)/s} = 1 + 2 C_{(us)/s} \label{corr}\end{equation} is identified as one which can distinguish between bound state QCD \cite{bqcd} and the usual picture of the excitations in the plasma phase of QCD (in the last expression above we have used eq. (\ref{qnsc}) and flavour SU(2) symmetry to write $C_{(us)/s}=C_{(ds)/s}$). This is expected to have a value of unity if strangeness is carried by quarks ({\sl i.e.\/}, if $S=1$ always comes linked with $B=-1/3$). In \cite{koch} it was shown that a bound state QGP gives a value of $C_{BS}\approx2/3$ (for $T>T_c$). We present the first estimate of this quantity from lattice QCD in Figure \ref{fg.bcs}. In the low-temperature phase $C_{BS}$ is very different from unity, but immediately above $T_c$ the value is clamped to unity. There is no statistically significant change in $C_{BS}$ as $m_s/T_c$ is varied between 0.1 and 1. Since the statistical error bars are extremely small for $T\ge T_c$, this is a strong statement, which contrasts with the $m_s$ dependence of $\lambda_s$ and $\chi_{BY}$.
Another interesting measure is the correlation of charge and strangeness measured by the robust observable \begin{equation} C_{QS} = 3C_{(QS)/S} = 1 - \frac{2\chi_{us}-\chi_{ds}}{\chi_s}. \label{dorr}\end{equation} When strangeness is carried by quarks one would expect this to be unity (since $S=1$ comes with $Q=1/3$). In Figure \ref{fg.bcs} we have also shown the first measurement of $C_{QS}$. Immediately above $T_c$ it reaches close to unity with small errors. As a result, these two measurements together quite strongly indicate that unit strangeness is carried by objects with baryon number $-1/3$ and charge $+1/3$ in the high temperature phase of QCD, immediately above $T_c$. Furthermore, eqs.\ (\ref{corr}, \ref{dorr}) indicate that our observations imply that $\chi_{us}=0$, and hence strangeness carrying excitations do not carry u or d flavour. This is the most direct lattice evidence to date that strangeness is linked to other quantum numbers exactly as it would be for strange quarks, in the high temperature phase of QCD; and that these linkages are quite different below $T_c$. Later in this section we show that one should think of these as quasi-particles, dressed by the strong residual interactions, rather than as elementary quarks. Apart from the direct evidence of linkage between quantum numbers, we also draw attention to the cryptic evidence in the temperature and $m_s$ dependence of $C_{BS}$ and $C_{QS}$. The rapid change of $C_{BS}$ with $T$ (for $T<T_c$) has a natural explanation if the thermodynamics is controlled by a spectrum of strange baryons such that the amount of (anti-) strangeness per baryon increases with mass, and the masses are larger than $T$. The temperature independence of the two quantities above $T_c$ similarly implies that there is one excitation, which has mass less than $T_c$. 
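The linkage logic of eqs.\ (\ref{corr}) and (\ref{dorr}) can be checked with a few lines of code. The sketch below is illustrative only: the susceptibility values are placeholders, not lattice data; it simply verifies that both robust variables equal unity when the off-diagonal susceptibilities vanish, as expected for quark-like strangeness carriers.

```python
# Robust linkage variables built from quark-level susceptibilities.
# The chi values used below are illustrative placeholders, not lattice data.

def c_bs(chi_us, chi_ds, chi_s):
    # C_BS = 1 + (chi_us + chi_ds)/chi_s  (eq. corr)
    return 1.0 + (chi_us + chi_ds) / chi_s

def c_qs(chi_us, chi_ds, chi_s):
    # C_QS = 1 - (2*chi_us - chi_ds)/chi_s  (eq. dorr)
    return 1.0 - (2.0 * chi_us - chi_ds) / chi_s

# Quasi-quark limit: off-diagonal susceptibilities vanish, both variables -> 1.
print(c_bs(0.0, 0.0, 0.8), c_qs(0.0, 0.0, 0.8))

# Hadron-like illustration: a negative u-s covariance pulls C_BS below unity.
print(c_bs(-0.05, -0.05, 0.8))
```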
The fact that the values of these quantities do not depend on $m_s$ within errors, for $T>T_c$, further implies that the effective masses of these quasi-particles are less than $T_c$, so that the infrared cutoff on the Dirac operator spectrum is provided by $T$. The next heavier quark, the charm quark, does not affect the thermodynamics of the QCD plasma in the range of temperatures we investigate, since its mass is well beyond $T$. However, these heavier quarks do probe changes in other aspects of physics, such as screening, as is evident from \cite{jpsi}. \subsubsection{The light quark sector} \begin{figure}[bht] \begin{center} \scalebox{0.7}{\includegraphics{fermiliq.eps}} \end{center} \caption{The robust variable $-C_{(ud)/u}$ which measures the correlation between u and d flavours for $m_{ud}=0.1T_c$. It is positive in the low temperature phase since u quarks are found along with d antiquarks in charged pions, and vanishes in the high temperature phase, indicating that u and d are fully decorrelated in the plasma. There is no statistically significant dependence on $m_{ud}$ in the high temperature phase.} \label{fg.lightqk}\end{figure} In transplanting these methods to the light quark sector, we find that the composite QNS, $\chi_{BI}$ and $\chi_{QI}$, are not informative, since the quark of one flavour has the same isospin as the antiquark of the other flavour. One way to extract information on the degrees of freedom would be to consider QNS of G-parity. However, it is more transparent to turn to the flavoured QNS $\chi_{ud}\propto\langle{\cal N}_u{\cal N}_d\rangle$. We can then use the quantity \begin{equation} C_{(ud)/u} = \frac{\chi_{ud}}{\chi_u}, \label{light}\end{equation} which looks at the linkage between u and d flavours in the same way that $C_{(QS)/S}$ looked for linkage of strangeness and charge. Our results are plotted in the first panel of Figure \ref{fg.lightqk}.
In the hadronic phase it is non-vanishing because of charged pions, and negative because in these mesons each u comes with a $\overline{\rm d}$, and vice versa. In the QGP phase the vanishing of this normalized covariance implies that a particle with $u$ quantum number does not exhibit $d$ quantum numbers. Further tests come from investigating \begin{eqnarray} \nonumber C_{(BU)/U} &=& C_{(BD)/D} = \frac13(1+C_{(ud)/u}+C_{(us)/u}),\\ \nonumber C_{(QU)/U} &=& \frac13(2-C_{(ud)/u}-C_{(us)/u}),\\ C_{(QD)/D} &=& -\frac13(1-2C_{(ud)/u}+C_{(us)/u}). \label{crosscorr}\end{eqnarray} The vanishingly small values of $C_{(ud)/u}$ and $C_{(us)/u}$ imply that the u flavour is carried by excitations with baryon number $+1/3$ and charge $+2/3$, whereas the d flavour is carried by particles with baryon number $+1/3$ and charge $-1/3$. These are, therefore, quark quasi-particles. \subsubsection{Quasi-quarks} One might wonder why we talk of, for example, baryon number 1/3 when the measurements even at $2T_c$ differ from this number by a few parts in a thousand. What does this small but statistically significant deviation tell us? The answer is that it says something about the spatial structure of the quasi-particle. If flavour were carried by pointlike bare quarks, then $\chi_{ud}$ and $\chi_{us}$ would be precisely zero. However, interactions dress each quark into a spatially extended quasi-particle, and a thermodynamic average probes the spatial dimension of the charge with a resolution of $1/2\pi T$. When $T$ is sufficiently large, so that the gauge coupling is sufficiently small, this structure can be computed in weak coupling theory. As the coupling grows, the perturbative computation fails quantitatively, but as long as the correction to the charge or baryon number remains small, one can fruitfully talk of quasi-quarks. 
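As a consistency check on eqs.\ (\ref{crosscorr}), one can verify that vanishing covariances reproduce the bare quark quantum numbers ($B=1/3$ and $Q=2/3$ for u, $Q=-1/3$ for d). A minimal sketch, with illustrative rather than measured inputs:

```python
# Cross-correlations of eq. (crosscorr) from the two normalized
# covariances C_(ud)/u and C_(us)/u; the inputs are illustrative.

def cross_correlations(c_ud_u, c_us_u):
    c_bu_u = (1.0 + c_ud_u + c_us_u) / 3.0        # C_(BU)/U = C_(BD)/D
    c_qu_u = (2.0 - c_ud_u - c_us_u) / 3.0        # C_(QU)/U
    c_qd_d = -(1.0 - 2.0 * c_ud_u + c_us_u) / 3.0  # C_(QD)/D
    return c_bu_u, c_qu_u, c_qd_d

# Vanishing covariances: bare u (B=1/3, Q=2/3) and d (Q=-1/3) quantum numbers.
print(cross_correlations(0.0, 0.0))
```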
\begin{figure}[bht] \begin{center} \scalebox{0.7}{\includegraphics{chiud.eps}} \end{center} \caption{The off-diagonal QNS $\chi_{ud}/T^2$ for two different quark masses compared to weak coupling theory. The band includes uncertainties due to the neglect of higher loop effects, the effect of changing from a one-loop to a two-loop computation of the running coupling, and statistical uncertainties in the determination of $T_c/\Lambda_{\overline{\scriptscriptstyle MS}}$.} \label{fg.chiud}\end{figure} In Figure \ref{fg.chiud} we show the flavour off-diagonal QNS $\chi_{ud}/T^2$ for two different quark masses, along with the prediction of weak coupling perturbation theory \cite{bir}--- \begin{equation} \frac{\chi_{ud}}{T^2} = -\frac{10}{27\pi^3}\alpha_s^3 \log\left(\frac c{\alpha_s}\right), \label{bir}\end{equation} where $c$ is a constant whose evaluation requires a larger number of loops in the perturbation theory. The strong coupling, $\alpha_s$, has been evaluated to two-loop accuracy at the scale $2\pi T$ with the estimate $T_c/\Lambda_{\overline{\scriptscriptstyle MS}}=0.49\pm0.05$ \cite{precise}. This variation in $T_c/\Lambda_{\overline{\scriptscriptstyle MS}}$, a variation of $c$ by two orders of magnitude, $0.1\le c\le10$, and the variation in $\alpha_s$ in going from the one-loop to the two-loop expression are included in the band in the figure. We find that in this range of temperature the prediction is somewhat smaller than the lattice data. Since this is not a robust variable, it is possible that taking the continuum limit will improve the agreement between the two. However, it is clear that as one comes closer to $T_c$ the disagreement increases, although the magnitude of $C_{(ud)/u}$ remains small. Thus, it seems that a Fermi gas picture, which may be valid at large $T/T_c$, gives way to something more complicated as one approaches $T_c$, although the quantum numbers are linked in exactly the same way as for the elementary quarks. This is the meaning of quasi-quarks.
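The weak coupling band of Figure \ref{fg.chiud} can be reproduced schematically from eq.\ (\ref{bir}). The sketch below scans the undetermined constant $c$ over the quoted range $0.1\le c\le10$; the value of $\alpha_s$ is an illustrative input, since its actual determination (two-loop running at the scale $2\pi T$) is not reproduced here.

```python
import math

# Weak coupling estimate of chi_ud/T^2, eq. (bir):
#   chi_ud/T^2 = -(10 / (27 pi^3)) * alpha_s^3 * log(c / alpha_s)
# alpha_s below is an illustrative input value.

def chi_ud_over_T2(alpha_s, c):
    return -(10.0 / (27.0 * math.pi ** 3)) * alpha_s ** 3 * math.log(c / alpha_s)

# Band generated by the undetermined constant c in [0.1, 10].
alpha_s = 0.3
band = [chi_ud_over_T2(alpha_s, c) for c in (0.1, 1.0, 10.0)]
print(min(band), max(band))
```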
\section{Summary} We have presented an extensive computation of many different quark number susceptibilities (see Appendix \ref{sc.qns} for the definitions). All the diagonal QNS, and some of the off-diagonal QNS, track the phase structure of QCD--- being small in the confined phase and crossing over to larger values in the high temperature phase of QCD, as shown in Figure \ref{fg.qns}. An important observation was that ratios of QNS, $C_{A/B}$, defined in eq.\ (\ref{ratio}), are robust variables which depend weakly on the lattice spacing and the sea quark content of QCD in the high temperature phase, as shown in Figure \ref{fg.ratio}. These ratios can be compared to experimentally determined ratios of variances (or covariances) in event-to-event fluctuations of conserved quantum numbers. The relative magnitudes of the diagonal QNS are among these robust observables, and we found the ordering $\chi_S > \chi_Q > \chi_Y > \chi_I > \chi_B$, shown in Figure \ref{fg.robust}. A second set of results concerns the thermal production rate of strange quarks. It has been argued \cite{conti} that under certain (testable) conditions the Wroblewski parameter is the robust observable $C_{s/u}$. While it is insensitive to the sea quark content of QCD, it is known to depend sensitively on the valence quark masses \cite{rajarshi}. Here we have determined this quantity for realistic values of the strange and light quark masses (see Figure \ref{fg.wrob}). We attributed this dependence on $m_s$ to kinematic effects visible when $m_s\simeq{\cal O}(T_c)$. However, kinematic effects should manifest themselves in other quantities as well. We tested this hypothesis by examining certain QNS which vanish in the flavour symmetric limit. We extracted the matrix elements which are quadratic in the flavour symmetry breaking mass differences, $\Delta_{us}$, when $\Delta_{us}\ll T_c$. 
By comparing these (in Figure \ref{fg.coeff}) to the corresponding quantities when $\Delta_{us}\approx{\cal O}(T_c)$, we demonstrated the presence of such kinematic effects in other quantities as well. The flavour symmetry breaking matrix elements themselves (Figure \ref{fg.coeff}) peak at $T$ slightly larger than $T_c$, and are the first known example of observables in the quark sector of QCD which parallel similar structures seen in the gluon sector. Our final result is that the high temperature phase of QCD essentially consists of quasi-quarks. We demonstrated this by observing that unit strangeness is carried by something which has baryon number $-1/3$ and charge $1/3$, as shown in Figure \ref{fg.bcs}. Part of the argument is that this correlation does not depend on the strange quark mass even when it is as large as $T_c$. Similarly, in the light quark sector one finds that u and d quantum numbers are not produced together (Figure \ref{fg.lightqk}). Through eqs.\ (\ref{crosscorr}) we found that this implies that the u flavour is carried by excitations with baryon number $+1/3$ and charge $+2/3$, whereas the d flavour is carried by particles with baryon number $+1/3$ and charge $-1/3$. We presented an argument that the carriers of these quantum numbers are not elementary quarks but their dressed counterparts, which are called quasi-quarks. This argument involved the comparison of $\chi_{ud}$ with a weak-coupling prediction, which is shown in Figure \ref{fg.chiud}. The key point is that this comparison fails badly as one approaches $T_c$, although the correlations of flavour quantum numbers remain as they would for quarks. A similar comparison of the weak coupling prediction with lattice results for the diagonal QNS $\chi_u$ also fails near $T_c$, leading us to the same conclusion. The argument about the existence of quasi-quarks in the high temperature phase of QCD depends on the examination of robust variables given in eq.\ (\ref{example}).
It is useful to note that their use is not restricted to the lattice. It is also possible to measure them in heavy-ion collisions and thereby deduce the nature of excitations in the fireball produced in these collisions. We end by pointing out that we have not studied the low-temperature phase of QCD in much detail here. This is an interesting problem, which we have touched upon very briefly in the discussion of $C_{BS}$ and $C_{QS}$, and has been left for the future. {\bf Acknowledgements}: This computation was carried out on the Indian Lattice Gauge Theory Initiative's CRAY X1 at the Tata Institute of Fundamental Research. It is a pleasure to thank Ajay Salve for his administrative support on the Cray. Part of this work was done during a visit under Indo-French (IFCPAR) project, 3104-3 to SPhT, Saclay. The hospitality of Saclay and IFCPAR's support is gratefully acknowledged.
\section{Introduction} The aim of this paper is to develop analytic tools in order to design a relevant mechanism for carbon markets, where relevance refers to emission reduction. For this purpose, we focus on electricity producers in a power market linked to a carbon market. In this context, where the number of agents is limited, a standard game theory approach applies. The producers are considered as players behaving on the two financial markets, represented here by carbon and electricity. We establish a Nash equilibrium for this non-cooperative $J$-player game through a coupling mechanism between the two markets. The original idea comes from the French electricity sector, where the spot electricity market is often used to satisfy peak demand. Producers' behavior is demand-driven and linked to the maximum level of electricity production. Each producer strives to maximize its market share. At the same time, it has to manage the environmental burden associated with its electricity production through a mechanism inspired by the EU ETS\footnote{European Emission Trading System} framework~: each producer's emission level must be balanced by a permit or through the payment of a penalty. Emission permit allocations are simulated through a carbon market that allows the producers to buy the allowances at an auction. Our focus on the electricity sector is motivated by its introduction in phase III of the EU ETS, and by its prevalence in the emission share. In the present paper, the design assumptions made on the carbon market are dedicated to fostering emission reductions in the entire electricity sector. Based on a static elastic demand curve (referring to the time stages in an organized electricity market, mainly day-ahead and intra-day), we solve the local problem of establishing a non-cooperative Nash equilibrium for the two coupled markets.
While the literature mainly addresses profit maximization, our share maximization approach deals with profit through specific assumptions: sale at no loss, and striking a balance between the purchase of allowances and the carbon footprint of the electricity generated. Here the market is driven by demand dynamics rather than by electricity spot price dynamics, as has been done in recent works (see \cite{carmona-coulon-schwarz-13a}\cite{carmona-coulon-schwarz-13} \cite{carmona-delarue-etal-13}). In Section \ref{sec:market-rules}, we formalize the market rules (carbon and electricity) and the associated admissible set of players' coupled strategies. We then first study the Nash equilibrium on the electricity market alone (see Proposition \ref{propo-Nash}). Section \ref{sec:nash} is devoted to our Nash equilibrium results. \section{Coupling markets mechanism}\label{sec:market-rules} \subsection{Electricity market} In the electricity market, the demand is aggregated and summarized by a function $p\mapsto D(p)$, where $D(p)$ is the quantity of electricity that buyers are ready to obtain at maximal unit price $p$. We assume the following~: \begin{ass}\label{hypo:demande} The demand function $D(\cdot):[0,+\infty)\rightarrow[0,+\infty)$ is decreasing, left continuous, and such that $D(0) >0$. \end{ass} Each producer $j \in \{1, \ldots, J \}$ is characterized by a finite production capacity $\kappa_j$ and a bounded and increasing function $ c_{j}: [0,\kappa_{j}] \longrightarrow \mathbb{R}^{+}$ that associates a marginal production cost to any quantity $q$ of electricity. These marginal production costs depend on several exogenous parameters reflecting the technical costs associated with electricity production, e.g.\ energy prices, O\&M costs, taxes, carbon penalties, \emph{etc}. This parameter dependency makes it possible to build different market coupling mechanisms. In the following we use it to link the carbon and the electricity markets.
The merit order ranking features marginal cost functions sorted according to their production costs. These are therefore increasing staircase functions, whereby each stair refers to the marginal production cost of a specific unit owned by the producer. The producers trade their electricity on a dedicated market. For a given producer $j$, the strategy consists of a function that makes it possible to establish an ask price on the electricity market, defined as \begin{align*} s_{j} : & \mathscr{C}_j \times \mathbb{R}^{+} \longrightarrow \mathbb{R}^{+} \\ & (c_{j}(\cdot), q) \longrightarrow s_{j}(c_{j}(\cdot), q), \end{align*} where $\mathscr{C}_j$, the set of marginal production cost functions, is given explicitly below (see \eqref{def:set-C_j}).\\ $s_{j}(c_{j}(\cdot), q)$ is the unit price at which the producer is ready to sell quantity $q$ of electricity. An admissible strategy fulfills the following sell-at-no-loss constraint \begin{equation}\label{contrainteStrategie} s_{j}(c_{j}(\cdot), q) \geq c_{j}(q), \quad \forall q \in \text{Dom}(c_{j}). \end{equation} For example, we can take $s_{j}(c_{j}(\cdot), q) = c_{j}(q)$ or $s_{j}(c_{j}(\cdot), q) = c_{j}(q)+ \lambda(q)$, where $\lambda(q)$ stands for any additional profit. \\ As mentioned in the introduction, the constraint \eqref{contrainteStrategie} guarantees profitable trade, inasmuch as an equilibrium established through this class of strategies will bring benefit to each producer. This establishes a link between the market share maximization and profit maximization paradigms. Let $\mathcal S$ denote the class of admissible strategy profiles on the electricity market.
We have \begin{equation}\label{classeStratAdmiss} \begin{aligned} \mathcal S = \left \{ \begin{array}{l} \begin{array}{rcl} {\bf s} = (s_1,\ldots,s_J); \; s_{j}: \mathscr{C}_j \times \mathbb{R}^{+} & \longrightarrow & \mathbb{R}^{+} \\ (c_{j}(\cdot), q) & \longrightarrow & s_{j}(c_{j}(\cdot), q) \end{array} \\ \begin{array}{l} \mbox{ such that }s_{j}(c_{j}(\cdot), q) \geq c_{j}(q), \quad \forall q \in \text{Dom}(c_{j}) \end{array} \end{array} \right\}. \end{aligned} \end{equation} As a function of $q$, $s_{j}(c_{j}(\cdot),q)$ is bounded on $\text{Dom}(c_{j})$. For the sake of clarity, we define for each $q \not \in \text{Dom}(c_{j}) $, $s_{j}(c_{j}(\cdot),q) = p_{\text{lolc}}$, where $p_{\text{lolc}}$ is the loss of load cost, chosen as any overestimation of the maximal production costs. For producer $j$'s strategy $s_{j}$, we define the associated ask size at price $p$ as \begin{equation}\label{defOffrej} \Of(c_{j}(\cdot),s_{j};p) := \sup\{q, \; s_{j} (c_{j}(\cdot), q) < p \}. \end{equation} Hence $\Of(c_{j}(\cdot),s_{j};p)$ is the maximum quantity of electricity at unit price $p$ supplied by producer $j$ on the market. \begin{remark}\label{property:offre croissante} (i) The ask size function $p\mapsto \Of(c_{j}(\cdot),s_{j};p)$ is, with respect to $p$, an increasing surjection from $[0,+\infty)$ to $[0,\kappa_j]$, right continuous and such that $\Of(c_{j}(\cdot),s_{j};0)=0$. For an increasing strategy $s_{j}$, $\Of(s_{j};\cdot)$ is its generalized inverse function with respect to $q$. (ii) Given two strategies $q\mapsto s_{j}(c_{j}(\cdot), q)$ and $q\mapsto s_j'(c_{j}(\cdot), q)$ such that $s_{j}(c_{j}(\cdot), q) \leq s_j'(c_{j}(\cdot), q)$ for all $q\in \text{Dom}(c_{j})$, we have for any positive $p$ \begin{equation*} \Of(c_{j}(\cdot),s_{j};p) \geq \Of(c_{j}(\cdot),s_j';p).
\end{equation*} Indeed, if $p_{1} \geq p_{2}$ then $\{ q, \; s_{j}(c_{j}(\cdot), q) \leq p_{2} \} \subset \{ q, \; s_{j}(c_{j}(\cdot), q) \leq p_{1} \}$, from which we deduce that $\Of(c_{j}(\cdot),s_j;\cdot)$ is increasing. Next, if $s_j(c_{j}(\cdot),\cdot) \leq s_j'(c_{j}(\cdot),\cdot)$, for any fixed $p$ we have $\{ q, \; s_{j}'(c_{j}(\cdot),q) \leq p \} \subset \{ q, \; s_{j}(c_{j}(\cdot), q) \leq p \}$, from which the reverse order follows for the asks. \end{remark} We now describe the electricity market clearing. Note that from the market viewpoint, the dependency of the offers with respect to the marginal cost does not need to be explicit. For the sake of clarity, we will write $s_{j}(q)$ and $\Of(s_j;p)$ instead of $s_{j}(c_{j}(\cdot),q)$, $\Of(c_{j}(\cdot),s_j;p)$. The dependency will be expressed explicitly whenever needed. By aggregating the $J$ ask size functions, we can define the overall supply function $p\mapsto {\mathcal O} \!\!\!\!{\mathcal O} ({\bf s};p)$ for the producers' strategy profile ${\bf s}= (s_{1}, \ldots, s_{J})$ as~: \begin{equation} {\mathcal O} \!\!\!\!{\mathcal O} ({\bf s}; p) = \sum_{j = 1}^{J} \Of(s_{j};p). \end{equation} Hence, for any producer strategy profile ${\bf s}$, ${\mathcal O} \!\!\!\!{\mathcal O} ({\bf s} ; p)$ is the quantity of electricity that can be sold on the market at unit price $p$. The overall supply function $p\mapsto {\mathcal O} \!\!\!\!{\mathcal O} ({\bf s}; p)$ is an increasing surjection defined from $[0,+\infty)$ to $[0,\sum_{j=1}^J\kappa_j]$, such that ${\mathcal O} \!\!\!\!{\mathcal O} ({\bf s};0)=0$. \subsubsection{Electricity market clearing} Given the producers' strategy profile ${\bf s}= (s_{1}(\cdot), \ldots, s_{J}(\cdot))$, the market sets the electricity market price ${p^{\text{elec}}}({\bf s})$ together with the quantities $(q_{1}({\bf s}), \ldots, q_J({\bf s}))$ of electricity sold by each producer.
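To make the ask-size construction \eqref{defOffrej} concrete, the sketch below builds a two-step marginal cost curve, an admissible no-loss strategy, and evaluates $\Of(s_j;p)$ as a grid approximation of the supremum. All numbers (capacity, costs, markup) are illustrative placeholders, not values taken from the paper.

```python
# Two-step marginal cost, a no-loss ask strategy, and the ask size
# Of(s_j; p) = sup{q : s_j(q) < p}, approximated on a quantity grid.

KAPPA = 100.0  # illustrative production capacity kappa_j

def marginal_cost(q):
    # cheap unit up to q = 50, expensive unit up to kappa (merit order stairs)
    return 20.0 if q <= 50.0 else 45.0

def strategy(q, markup=0.0):
    # admissible since s_j(q) = c_j(q) + markup >= c_j(q) for markup >= 0
    return marginal_cost(q) + markup

def ask_size(p, n=10000):
    # grid approximation of sup{q in (0, kappa] : s_j(q) < p}
    qs = [KAPPA * k / n for k in range(1, n + 1)]
    feasible = [q for q in qs if strategy(q) < p]
    return max(feasible) if feasible else 0.0

print(ask_size(30.0))  # -> 50.0 : only the cheap unit clears below p = 30
```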
The market clearing price ${p^{\text{elec}}}({\bf s})$ is the unit price paid to each producer for the quantities $q_{j}({\bf s})$ of electricity. The price ${p^{\text{elec}}}({\bf s})$ may be defined as a price at which the offer satisfies the demand. As we are working with a general non-increasing demand curve (possibly locally inelastic), the price that satisfies the demand is not necessarily unique. We thus define the clearing price generically with the following definition. \begin{definition}[The clearing electricity price.]\label{def:clearingElec} Let us define \begin{equation}\label{reacmarche-prix} \begin{aligned} &\underline{p}({\bf s}) = \inf \left\{ p > 0 ; \; {\mathcal O} \!\!\!\!{\mathcal O} ({\bf s} ; p) > D(p) \right\} \\ \mbox{ and }\quad&\\ &\overline{p}({\bf s}) = \sup \left\{ p\in [\underline{p}({\bf s}),p_{\text{lolc}}]; D(p) = D(\underline{p}({\bf s}))\right\} \end{aligned} \end{equation} with the convention that $\inf\emptyset = p_{\text{lolc}}$. The clearing price may then be established as any ${p^{\text{elec}}}({\bf s}) \in [\underline{p}({\bf s}), \overline{p}({\bf s})]$ as an output of a specific market clearing rule. For consistency of the price, the market rule must be such that for any two strategy profiles ${\bf s}$ and ${\bf s} '$, \begin{equation}\label{regleChoixPrix} \begin{aligned} \mbox{if } \underline{p}({{\bf s}}) < \underline{p}({{\bf s} '}) \mbox{ then } {p^{\text{elec}}}({{\bf s}}) < {p^{\text{elec}}}({{\bf s} '}), \\ \mbox{if } \underline{p}({{\bf s}}) = \underline{p}({{\bf s} '}) \mbox{ then } {p^{\text{elec}}}({{\bf s}}) = {p^{\text{elec}}}({{\bf s} '}). \end{aligned} \end{equation} \end{definition} Note that $\underline{p}({\bf s})\neq \overline{p}({\bf s})$ only if the demand curve $p\mapsto D(p)$ is constant on some interval $[\underline{p}({\bf s}),\underline{p}({\bf s})+ \epsilon]$.
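The lower clearing price $\underline{p}({\bf s})$ of Definition \ref{def:clearingElec} can be computed on a price grid. The staircase offer and demand curves below are illustrative placeholders; the convention $\inf\emptyset = p_{\text{lolc}}$ is kept.

```python
# Lower clearing price: underline{p}(s) = inf{p > 0 : O(s;p) > D(p)},
# with inf(emptyset) = p_lolc, scanned on a price grid.

P_LOLC = 100.0  # illustrative loss of load cost

def total_offer(p):
    # aggregated right-continuous offer staircase (illustrative)
    if p < 20.0:
        return 0.0
    if p < 45.0:
        return 50.0
    return 100.0

def demand(p):
    # non-increasing demand curve (illustrative)
    return 80.0 if p < 40.0 else 30.0

def clearing_price_lower(n=100000):
    for k in range(1, n + 1):
        p = P_LOLC * k / n
        if total_offer(p) > demand(p):
            return p
    return P_LOLC  # offer never exceeds demand

print(clearing_price_lower())  # offer first exceeds demand at p = 40
```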
\begin{figure}[ht] \begin{center} \begin{tikzpicture}[xscale=7,yscale=0.02]\footnotesize \newcommand{-.02}{-.02} \newcommand{ 1.04}{ 1.04} \newcommand{-.4}{-.4} \newcommand{185}{185} \begin{scope}<+->; \draw[black] (0,0) node[anchor=north east] {$0$}; \draw[black,thick,->] (-.02, 0) -- ( 1.04, 0); \draw[black,thick,] (0.95, -15) node[right] {price}; \draw[black,thick,->] (0, -.4) -- (0, 185)node[left] {quantity}; \end{scope} \begin{scope}[thick,blue] \draw (0,25) node {$\bullet$} ; \draw (0,25) -- (0.2,25); \filldraw[very thin,opacity=.2] (0,0.0) rectangle (0.2,25); \draw (0.2,40) node {$\bullet$} ; \draw (0.2,40) -- (0.4,40); \filldraw[very thin,opacity=.2] (0.2,0) rectangle (0.4,40); \draw (0.4,50) node {$\bullet$} ; \draw (0.4,50) -- (0.55,50); \filldraw[very thin,opacity=.2] (0.4,0) rectangle (0.55,50); \draw (0.55,100) node {$\bullet$} ; \draw (0.55,100) -- (0.65,100); \filldraw[very thin,opacity=.2] (0.55,0) rectangle (0.65,100); \draw (0.59,43) node[right] {Total offer $p\mapsto {\mathcal O} \!\!\!\!{\mathcal O} (p)$}; \draw (0.65,120) node {$\bullet$} ; \draw (0.65,120) -- (0.9,120); \filldraw[very thin,opacity=.2] (0.65,0) rectangle (0.9,120); \draw (0.9,150) node {$\bullet$} ; \draw (0.9,150) -- (1.0,150); \filldraw[very thin,opacity=.2] (0.9,0) rectangle (1.0,150); \end{scope} \begin{scope}[thick,red] \draw (0.17,100) node[right,above] {Demand $p \mapsto D(p)$} ; \draw (0.15,175) node {$\bullet$} ; \draw (0,175) -- (0.15,175); \filldraw[very thin,opacity=.2] (0.0,0.0) rectangle (0.15,175); \draw (0.27,143) node {$\bullet$} ; \draw (0.15,143) -- (0.27,143); \filldraw[very thin,opacity=.2] (0.15,0) rectangle (0.27,143); \draw (0.32,130) node {$\bullet$} ; \draw (0.27,130) -- (0.32,130); \filldraw[very thin,opacity=.2] (0.27,0) rectangle (0.32,130); \draw (0.7,65) node {$\bullet$} ; \draw (0.32,65) -- (0.7,65); \filldraw[very thin,opacity=.2] (0.32,0) rectangle (0.7,65); \draw (0.9,32) node {$\bullet$} ; \draw (0.7,32) -- (0.9,32); \filldraw[very 
thin,opacity=.2] (0.7,0) rectangle (0.9,32); \draw (0.9,19) -- (1,19); \filldraw[very thin,opacity=.2] (0.9,0) rectangle (1,19); \end{scope} \begin{scope}[black] \draw[dashed] (0.55,0.0) -- (0.55,150); \draw (0.55,0.0) node {$\bullet$} ; \draw[thin,<-] (0.55,-4) -- (0.45,-14) node[below] {$\underline{p}({\bf s})$}; \draw[dashed] (0.7,0.0) -- (0.7,150); \draw (0.7,0.0) node {$\bullet$} ; \draw[thin,<-] (0.7,-4) -- (0.80,-14) node[below] {$\overline{p}({\bf s})$}; \draw[dashed] (0.32,65) -- (-0.0,65) node {$\bullet$} node[left]{quantity sold}; \end{scope} \end{tikzpicture} \end{center} \caption{Clearing on the electricity market} \label{clearing} \end{figure} Note also that the price $\underline{p}({\bf s})$ is well defined in the case where demand does not strictly decrease. This includes the case where demand is constant. In such a case, $\underline{p}({\bf s})=p_{\text{lolc}}$ only if the demand curve never crosses the offer. Next, we define the quantity of electricity sold at price ${p^{\text{elec}}}({\bf s})$. When ${\mathcal O} \!\!\!\!{\mathcal O} ({\bf s};{p^{\text{elec}}}({\bf s})) \leq D({p^{\text{elec}}}({\bf s}))$, each producer sells $\Of(s_{j};{p^{\text{elec}}}({\bf s}))$, but cases where ${\mathcal O} \!\!\!\!{\mathcal O} ({\bf s};{p^{\text{elec}}}({\bf s})) > D({p^{\text{elec}}}({\bf s}))$ may occur, requiring the introduction of an auxiliary rule to share $D({p^{\text{elec}}}({\bf s}))$ among the producers that propose ${\mathcal O} \!\!\!\!{\mathcal O} ({\bf s};{p^{\text{elec}}}({\bf s}))$. In such a case, $\underline{p}({\bf s})$ is a discontinuity point of ${\mathcal O} \!\!\!\!{\mathcal O} ({\bf s};\cdot)$ and/or $\underline{p}({\bf s}) < {p^{\text{elec}}}({\bf s})$.
We can split the offer as follows: \begin{align*} {\mathcal O} \!\!\!\!{\mathcal O} ({\bf s} ; {p^{\text{elec}}}({\bf s})) = \sum_{j=1}^J \Of(s_j;\underline{p}({\bf s})^-) + \sum_{j=1}^J \Delta^- \Of(s_j;{p^{\text{elec}}}({\bf s})), \end{align*} where $ \Delta^- \Of(s_j;{p^{\text{elec}}}({\bf s})) := \Of(s_{j};{p^{\text{elec}}}({\bf s})) - \Of(s_{j}; \underline{p}({\bf s})^{-})$. The market's choice is to fully accept the ask size of producers whose ask size curve is continuous at $\underline{p}({\bf s})$. For producers with a discontinuous ask size curve at ${p^{\text{elec}}}({\bf s})$, a market rule based on proportionality, which favors abundance, is used to share the remaining part of the supply. More precisely, we define $\varphi_{j}({\bf s})$, the quantity of electricity sold by $j$, as \begin{equation}\label{reacmarche-qantite-elec} \begin{aligned} \quad \varphi_{j}({\bf s}) = \left\{ \begin{array}{l} \Of({\bf s}_{j}; {p^{\text{elec}}}({\bf s})),\\ \quad \mbox{ if }D({p^{\text{elec}}}({\bf s}))\geq {\mathcal O} \!\!\!\!{\mathcal O} ({\bf s} ; {p^{\text{elec}}}({\bf s})), \\ \\ \Of({\bf s}_{j}; \underline{p}({\bf s})^-)+\Delta^-\Of({\bf s}_{j};{p^{\text{elec}}}({\bf s}))\dfrac{D({p^{\text{elec}}}({\bf s})) - {\mathcal O} \!\!\!\!{\mathcal O} ({\bf s} ; \underline{p}({\bf s})^{-})}{\displaystyle \Delta^- {\mathcal O} \!\!\!\!{\mathcal O} ({\bf s} ; {p^{\text{elec}}}({\bf s}))},\\ \quad \mbox{ if }D({p^{\text{elec}}}({\bf s})) < {\mathcal O} \!\!\!\!{\mathcal O} ({\bf s} ; {p^{\text{elec}}}({\bf s})), \end{array}\right. \end{aligned} \end{equation} where $ \Delta^- {\mathcal O} \!\!\!\!{\mathcal O} ({\bf s} ; {p^{\text{elec}}}({\bf s})) := \sum_{j=1}^{J} \Delta^- \Of(s_j;{p^{\text{elec}}}({\bf s}))$. \\ Note that, when $D({p^{\text{elec}}}({\bf s})) < {\mathcal O} \!\!\!\!{\mathcal O} ({\bf s} ; {p^{\text{elec}}}({\bf s}))$ then $\Delta^- {\mathcal O} \!\!\!\!{\mathcal O} ({\bf s} ; {p^{\text{elec}}}({\bf s})) > 0$.
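The sharing rule \eqref{reacmarche-qantite-elec} is straightforward to implement once the left limits and jumps of the ask sizes at the clearing price are known. A hedged sketch with illustrative numbers (the proportional pattern is the same one reappearing in the allowance allocation rule of the carbon market):

```python
# Proportional sharing of D(p_elec) when the total offer exceeds demand.
# Inputs are illustrative:
#   of_left[j]  = Of(s_j; p-)   (part of the ask served in full)
#   delta_of[j] = jump of producer j's ask size at the clearing price

def allocate(demand_at_p, of_left, delta_of):
    total = sum(of_left) + sum(delta_of)
    if demand_at_p >= total:
        # no rationing: every producer sells its full ask
        return [a + d for a, d in zip(of_left, delta_of)]
    residual = demand_at_p - sum(of_left)
    jump = sum(delta_of)
    # jumps are scaled pro rata, which favors the larger (abundant) asks
    return [a + d * residual / jump for a, d in zip(of_left, delta_of)]

print(allocate(60.0, [30.0, 20.0], [10.0, 30.0]))  # -> [32.5, 27.5]
```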
Note also that we always have \begin{align} \sum_{j=1}^J \varphi_{j}({\bf s}) = D({p^{\text{elec}}}({\bf s}))\wedge {\mathcal O} \!\!\!\!{\mathcal O} ({\bf s} ; {p^{\text{elec}}}({\bf s})). \end{align} \subsection{Carbon market} Producers are penalized according to their emission level if they do not own allowances. Hence, independently from their position on the electricity market, producers buy $\text{CO}_{2}$ emission allowances on a $\text{CO}_{2}$ auction market. This market has a finite known quantity $\mathbb{W}$ of $\text{CO}_{2}$ emission allowances available. On this market, producers adopt a strategy that consists of a series of bids, which may be reorganized into a decreasing function $w \mapsto A_{j}(w)$ defined from $[0,+\infty)$ to $[0,+\infty)$. The quantity $A_{j}(w)$ is the unit price that producer $j$ is ready to pay for quantity $w$ of $\text{CO}_{2}$ allowance. $\mathscr{A}$ denotes the strategy profile set on the $\text{CO}_{2}$ market, \[ \mathscr{A} := \{ {\bf A} = (A_{1},\ldots ,A_{J}); \mbox{s.t. }A_k:[0,+\infty)\rightarrow[0,+\infty) \mbox{ is decreasing } \}. \] Strategy $ A_{j}$ is associated with an offer-to-buy function, denoted by $p \mapsto \Theta(A_j;p)$. The quantity $\Theta(A_j;p)$ is the maximum quantity that producer $j$ is ready to buy at price $p$. It is a decreasing left continuous function defined as \begin{align*} \Theta (A_j;p) = \sup\{w,\;A_j(w)\geq p\}. \end{align*} The $\text{CO}_{2}$ market reacts by aggregating the $J$ offers by $ {{\bf{\Theta}}}({\bf A}; p) = \sum_{j=1}^J \Theta (A_j;p), $ and the clearing market price is established following a {\it second item auction} as~: \begin{align}\label{reacmarche-prix-quotas} {p^{\text{CO}_{2}}}({\bf A}) := \inf\{p,\;{\bf{\Theta}}({\bf A}; p)< \mathbb{W}\}.
\end{align} \begin{figure} \begin{center} \begin{tikzpicture}[xscale=8,yscale=0.02]\footnotesize \newcommand{-.02}{-.02} \newcommand{ 1.04}{ 1.04} \newcommand{-.4}{-.4} \newcommand{185}{185} \begin{scope}<+->; \draw[black] (0,0) node[anchor=north east] {$0$}; \draw[black,thick,->] (-.02, 0) -- ( 1.04, 0); \draw[black,thick,] (0.95, -15) node[right] {$w$}; \draw[black,thick,->] (0, -.4) -- (0, 185)node[left] {price}; \end{scope} \begin{scope}[thick,Violet] \draw (0.20,100) node[right,above] {Aggregated bid curve} ; \draw (0.15,175) node {$\bullet$} ; \draw (0,175) -- (0.15,175); \filldraw[very thin,opacity=.2] (0.0,0.0) rectangle (0.15,175); \draw (0.27,143) node {$\bullet$} ; \draw (0.15,143) -- (0.27,143); \filldraw[very thin,opacity=.2] (0.15,0) rectangle (0.27,143); \draw (0.40,130) node {$\bullet$} ; \draw (0.27,130) -- (0.40,130); \filldraw[very thin,opacity=.2] (0.27,0) rectangle (0.40,130); \draw (0.7,65) node {$\bullet$} ; \draw (0.40,65) -- (0.7,65); \filldraw[very thin,opacity=.2] (0.40,0) rectangle (0.7,65); \draw (0.9,32) node {$\bullet$} ; \draw (0.7,32) -- (0.9,32); \filldraw[very thin,opacity=.2] (0.7,0) rectangle (0.9,32); \draw (0.9,19) -- (1,19); \filldraw[very thin,opacity=.2] (0.9,0) rectangle (1,19); \end{scope} \begin{scope}[black] \draw[dashed] (0.55,0.0) -- (0.55,150) node[above]{carbon clearing}; \draw (0.55,0.0) node {$\bullet$} ; \draw[thin] (0.55,-4) node[below] {$\mathbb{W}$}; \draw[dashed] (0.40,65) -- (-0.0,65) node {$\bullet$} node[left]{${p^{\text{CO}_{2}}}$}; \end{scope} \end{tikzpicture} \caption{Clearing on the allowances market} \end{center} \label{clercarb} \end{figure} Note that ${p^{\text{CO}_{2}}}({\bf A}) =0$ indicates that there are too many allowances. It is worth a reminder here, that the aim of allowances is to decrease emissions. In section \ref{sec:design}, we discuss a design hypothesis (assumption \ref{ass:hypoTW} ) that guarantees an equilibrium price ${p^{\text{CO}_{2}}}({\bf A}) >0$. 
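The second-item auction clearing \eqref{reacmarche-prix-quotas} can be sketched as a scan over a price grid. The aggregated bid curve below is an illustrative placeholder; note that a large $\mathbb{W}$ drives the clearing price to zero, the "too many allowances" case mentioned above.

```python
# Second-item auction clearing: p_CO2 = inf{p : Theta(A;p) < W},
# scanned on a price grid.

P_MAX = 100.0  # illustrative upper bound of the price grid

def theta_total(p):
    # decreasing aggregated offer-to-buy curve (illustrative)
    if p < 30.0:
        return 100.0
    if p < 60.0:
        return 40.0
    return 10.0

def co2_clearing_price(W, n=100000):
    for k in range(n + 1):
        p = P_MAX * k / n
        if theta_total(p) < W:
            return p
    return P_MAX

print(co2_clearing_price(50.0))   # -> 30.0
print(co2_clearing_price(150.0))  # -> 0.0 : too many allowances
```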
Therefore, in the following, we assume that the overall quantity $\mathbb{W}$ of allowances is such that ${p^{\text{CO}_{2}}}({\bf A}) >0$. By definition \eqref{reacmarche-prix-quotas}, we have ${{\bf{\Theta}}}({\bf A} ; {p^{\text{CO}_{2}}}({\bf A}))\geq \mathbb{W} ~\mbox{ and }~ {{\bf{\Theta}}}({\bf A} ; {{p^{\text{CO}_{2}}}}({\bf A})^{+})\leq \mathbb{W}.$\\ Producers with $\Theta(A_j;{p^{\text{CO}_{2}}}({\bf A}))>0$ each obtain the following quantity $\delta_j({\bf A})$ of allowances \begin{equation}\label{reacmarche-qantite-quotas} \delta_{j}({\bf A}) := \left\{ \begin{array}{l} \Theta(A_{j}; {p^{\text{CO}_{2}}}({\bf A})),\\ \quad \mbox{ if }\Delta^+\Theta(A_j;{p^{\text{CO}_{2}}}({\bf A})) = 0, \\ \\ \Theta(A_{j}; {{p^{\text{CO}_{2}}}({\bf A})^{+}}) + \Delta^+\Theta(A_j;{p^{\text{CO}_{2}}}({\bf A})) \dfrac{\left( \mathbb{W} - {\bf{\Theta}}({\bf A};{{p^{\text{CO}_{2}}}}({\bf A})^+)\right)}{\Delta^+ {\bf{\Theta}}({\bf A};{{p^{\text{CO}_{2}}}}({\bf A}))} ,\\ \quad \mbox{ otherwise} \end{array}\right. \end{equation} where $\Delta^+ f(x) := f(x) - f(x^+)$. \subsection{Carbon and electricity markets coupling} As mentioned earlier, for each producer, the marginal cost function is parametrized by the positions ${\bf A}$ of the producers on the carbon market. Indeed, producer $j$ can obtain $\text{CO}_{2}$ emission allowances on the market to avoid penalization for (some of) its emissions. Those emissions that are not covered by allowances are penalized at a unit rate ${\mathfrak{p}}$. A profile of offers to buy from the producers ${\bf A} = (A_1, \ldots, A_J)$, through the $\text{CO}_{2}$ market clearing, corresponds to a unit price ${p^{\text{CO}_{2}}}({\bf A})$ of the allowance and quantities $\delta_{j}({\bf A})$ of allowances bought by each producer (defined by the market rules \eqref{reacmarche-prix-quotas},\eqref{reacmarche-qantite-quotas}). We assume that the emission rate $e_{j}$ of each producer is constant.
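The allocation rule \eqref{reacmarche-qantite-quotas} can likewise be sketched: producers whose bids lie strictly above the clearing price are fully served, and the jump at the clearing price is shared pro rata so that exactly $\mathbb{W}$ allowances are distributed. The Python sketch below uses our own hypothetical helper names and the same step-bid representation as before.

```python
def theta(bid, p):
    """Theta(A_j; p): quantity bid at unit price >= p (left continuous)."""
    return sum(q for (q, price) in bid if price >= p)

def theta_right(bid, p):
    """Theta(A_j; p^+): quantity bid at unit price strictly above p."""
    return sum(q for (q, price) in bid if price > p)

def allocations(bids, W, p_co2):
    """Quantities delta_j of allowances obtained at clearing price p_co2."""
    served_above = sum(theta_right(b, p_co2) for b in bids)
    jump_total = sum(theta(b, p_co2) - theta_right(b, p_co2) for b in bids)
    deltas = []
    for b in bids:
        jump_j = theta(b, p_co2) - theta_right(b, p_co2)
        if jump_j == 0:
            deltas.append(theta(b, p_co2))        # no jump: fully served
        else:                                     # pro-rata share of the jump
            deltas.append(theta_right(b, p_co2)
                          + jump_j * (W - served_above) / jump_total)
    return deltas
```

With the toy bids used above and a clearing price of $65$, the producer whose step sits exactly at the clearing price receives only the residual quantity, and the allocations sum to $\mathbb{W}$.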
Then, the marginal production cost function $c_{j}^{\bf A}(\cdot)$, parametrized by the emission regulations, comes out as \begin{equation}\label{coutsRegulation} q\mapsto c_{j}^{\bf A}(q) = \left \{ \begin{array}{ll} c_j(q) + {e_j} {p^{\text{CO}_{2}}}({\bf A}),& \mbox{ for } 0 < q \leq \displaystyle\frac{\delta_{j}({\bf A})}{e_{j}} \\ c_j(q) + {e_j} {\mathfrak{p}}, &\mbox{ for } \displaystyle\frac{ \delta_{j}({\bf A}) }{e_{j}} < q \leq \kappa_{j},\\ \end{array} \right. \end{equation} where $c_j(\cdot)$ stands for the marginal production cost without any emission regulation. In this coupled market setting, the strategy of producer $j$ thus makes a pair $(A_{j}, s_{j})$. The set of admissible strategy profiles is defined as \begin{align*} {\Large \bf \Sigma} = \left \{ ({\bf A},{\bf s}); \;{\bf A}\in\mathscr{A}, {\bf s}\in \mathcal S \right\}, \end{align*} where in the definition \eqref{classeStratAdmiss}, we use \begin{align}\label{def:set-C_j} \mathscr{C}_j = \left\{ c^{\bf A}_j; \;{\bf A} \in \mathscr{A} \right\}. \end{align} To any strategy profile ${({\bf A},{\bf s})} \in {\Large \bf \Sigma}$, through the market mechanisms described, correspond the prices for allowances and electricity, ${p^{\text{CO}_{2}}}({({\bf A},{\bf s})})$ and ${p^{\text{elec}}}({({\bf A},{\bf s})})$, the quantities of allowances bought by each producer, $\delta_{j}({({\bf A},{\bf s})})$, and the electricity market shares $\varphi_{j}({({\bf A},{\bf s})})$ of each producer. \section{Nash Equilibrium}\label{sec:nash} \subsection{Definition} We suppose that the $J$ producers behave non-cooperatively, aiming at maximizing their individual market share on the electricity market.
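As a small numerical illustration of the two-regime cost \eqref{coutsRegulation} (all numbers below are hypothetical, chosen by us for illustration), the regulated marginal cost charges emissions at the allowance price up to the output level covered by the allowances bought, and at the penalty rate beyond it:

```python
def regulated_marginal_cost(q, c_j, e_j, delta_j, p_co2, penalty, kappa_j):
    """Marginal cost c_j^A(q): emissions covered by allowances are charged
    at the allowance price p_co2, uncovered emissions at the penalty rate."""
    if not 0.0 < q <= kappa_j:
        raise ValueError("q must lie in (0, kappa_j]")
    covered_output = delta_j / e_j            # output covered by allowances
    emission_charge = p_co2 if q <= covered_output else penalty
    return c_j + e_j * emission_charge        # constant base cost c_j

# Producer with base cost 10, emission rate 0.8, 40 allowances bought,
# allowance price 5, penalty 20, capacity 100 (all illustrative numbers):
print(regulated_marginal_cost(30.0, 10.0, 0.8, 40.0, 5.0, 20.0, 100.0))  # 14.0
print(regulated_marginal_cost(80.0, 10.0, 0.8, 40.0, 5.0, 20.0, 100.0))  # 26.0
```

The kink at $q = \delta_j({\bf A})/e_j$ (here $40/0.8 = 50$) is where the producer runs out of allowances.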
For a strategy profile ${({\bf A},{\bf s})} \in {\Large \bf \Sigma}$, the market share of a producer $j$ depends upon its strategy $(A_{j},s_{j}(\cdot))$ but also on the strategies $({\bf A}_{-j},{\bf s}_{-j})$ of the other producers \footnote{Here ${\bf v}_{-j}$ stands for the profile $(v_{1},\cdots, v_{j-1},v_{j+1},\cdots, v_{J})$.}. In this set-up the natural solution concept is the Nash equilibrium (see e.g. \cite{basar-olsder-98}). More precisely, we are looking for a strategy profile \[ {({\bf A}^{\ast},{\bf s}^{\ast})} = ( (A_{1}^{\ast},s_{1}^{\ast}), \cdots, (A_{J}^{\ast},s_{J}^{\ast}) ) \in {\Large \bf \Sigma} \] that satisfies the Nash equilibrium conditions: none of the producers would strictly benefit, that is, strictly increase its market share, from a unilateral deviation. Namely, for any producer $j$ and any strategy $(A_j,s_j)$ such that $({({\bf A}^{\ast}_{-j},{\bf s}^{\ast}_{-j})}; {(A_j,s_j)}) \in {\Large \bf \Sigma} $, we have \footnote{$({\bf v}_{-j} ; v)$ stands for $(v_{1},\cdots v_{j-1}, v, v_{j+1}, \cdots v_{J})$} \begin{align}\label{NashGlob} \varphi_{j}({{({\bf A}^{\ast},{\bf s}^{\ast})}}) \geq \varphi_{j}({{({\bf A}^{\ast}_{-j},{\bf s}^{\ast}_{-j})}};{(A_j,s_j)}). \end{align} Note that the dependency on ${\bf A}$ enters through the marginal cost $c_{j}^{\bf A}$. Condition \eqref{NashGlob} has to be satisfied for any unilateral deviation of any producer $j$. In particular, \eqref{NashGlob} has to be satisfied for any admissible deviation $(A_{j}^{\ast}, s_{j})$, with $({({\bf A}^{\ast}_{-j},{\bf s}^{\ast}_{-j})}; {(A_j^{\ast},s_j)}) \in {\Large \bf \Sigma} $, by which producer $j$ changes its behavior only on the electricity market.
Consequently, the electricity component ${\bf s}^{\ast}$ of the Nash equilibrium is also a Nash equilibrium for the game where producers only act on an electricity market with marginal production costs $c_{j}^{{\bf A}^{\ast}}(\cdot)$, $j=1, \cdots J$. The Nash equilibrium for the game restricted to the electricity market thus characterizes the ${\bf s}^{\ast}$ component of the coupled market game equilibrium. Note that, if ${\bf s}^{\ast}$ is the producers' behavior on the electricity market at the Nash equilibrium, any behavior ${\bf A}$ on the $\text{CO}_{2}$ market such that the strategy profile $({\bf A},{\bf s}^{\ast})$ is admissible yields the same market share for each producer. The next section focuses on determining a Nash equilibrium of the game restricted to the electricity market. \subsection{Equilibrium on Power market} In this restricted set-up, we consider that the marginal costs $\{c_j,j=1\ldots,J\}$ are known data, possibly fixed through the position ${\bf A}$ on the $\text{CO}_{2}$ market. In this section, we refer to $\mathcal S$ as the set of admissible strategy profiles, in the particular case where $\mathscr{C}_j=\{c_j\}$ for each $j=1,\ldots,J$. The Nash equilibrium problem is as follows: find a strategy profile ${\bf s}^{\ast} = (s^{\ast}_{1}, \ldots, s^{\ast}_{J}) \in \mathcal S$ such that \begin{align}\label{nashQuantiteElec} \begin{aligned} \forall j, \forall \;s_{j}\neq s^{\ast}_{j}, \quad \varphi_{j}({\bf s}^{\ast}) \geq \varphi_{j}({\bf s}^{\ast}_{-j}; s_{j}). \end{aligned} \end{align} The following proposition exhibits a Nash equilibrium, whereby each producer chooses the strategy denoted by $C_{j}$ and referred to as the {\it marginal production cost strategy}. It is defined by \begin{align}\label{stratCoutMarg} C_{j}(q) = \left \{ \begin{array}{l} c_{j}(q), \mbox{ for } q \in \text{Dom}(c_{j}) \\ p_{\text{lolc}} , \mbox{ for } q \not \in \text{Dom}(c_{j}) . \end{array} \right.
\end{align} \begin{proposition}\label{propo-Nash} \item[(i)] For any strategy profile ${{\bf s}} = (s_1,\ldots,s_J)$, no producer $j\in\{1,\ldots,J\}$ can be penalized by deviating from strategy $s_j$ to its marginal production cost strategy $C_j$, namely, \begin{equation} \label{propoPti} \varphi_j({\bf s})\leq \varphi_{j}({\bf{s}}_{-j};C_j). \end{equation} In other words, $C_{j}$ is a dominant strategy for any producer $j$. \item[(ii)] The strategy profile ${\bf C} =(C_{1},\dots C_{J})$ is a Nash equilibrium. \item[(iii)] If the strategy profile ${\bf s} \in \mathcal S$ is a Nash equilibrium, then we have ${p^{\text{elec}}}({\bf s}) = {p^{\text{elec}}}({\bf C})$ and, for any producer $j$, $\varphi_{j}({\bf s}) = \varphi_{j}({\bf C})$. \end{proposition} Point (ii) of the proposition is a direct consequence of the dominance property (i); the proofs of (i) and (iii) can be found in \cite{preprint-BMP}. Point (ii) thus exhibits a Nash equilibrium strategy profile. Clearly this equilibrium is not unique, since we can easily show that a producer's given supply can follow from countless different strategies. Nevertheless, point (iii) shows that for any Nash equilibrium the associated electricity price is the same, and the quantity of electricity sold by any producer $j$ is the same for all equilibrium profiles. \subsection{Coupled markets design through Nash equilibrium}\label{sec:design} From this point on, we restrict our attention to a particular design of the market. In the following, the scope of the analysis applies to a special class of producers, a specific electricity market price clearing (satisfying Definition \ref{def:clearingElec}), and a range of quantities of allowances available on the $\text{CO}_{2}$ market. Although not necessary, the following restrictions simplify the development.
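For intuition, with constant marginal costs and an inelastic demand level, the dominant-strategy outcome of Proposition \ref{propo-Nash} amounts to a merit-order dispatch: producers bid their marginal costs, units are called in increasing cost order, and the marginal unit sets the price. The Python sketch below is an illustrative simplification (inelastic demand, no cost ties), not the general clearing rule of Definition \ref{def:clearingElec}.

```python
def merit_order_dispatch(costs, capacities, demand):
    """Dispatch under marginal-cost bidding (the dominant strategies C_j).

    costs: constant marginal cost c_j per producer; capacities: kappa_j.
    Returns (clearing_price, quantities sold per producer)."""
    order = sorted(range(len(costs)), key=lambda j: costs[j])
    sold = [0.0] * len(costs)
    remaining = demand
    price = 0.0
    for j in order:
        if remaining <= 0.0:
            break
        sold[j] = min(capacities[j], remaining)
        remaining -= sold[j]
        price = costs[j]              # the marginal unit sets the price
    return price, sold

# Three producers with illustrative costs/capacities, demand of 70:
price, sold = merit_order_dispatch([12.0, 30.0, 18.0], [50.0, 80.0, 40.0], 70.0)
print(price, sold)                    # 18.0 [50.0, 0.0, 20.0]
```

The cheapest unit is fully dispatched, the marginal unit is partially dispatched and sets the price, and the most expensive unit wins no market share.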
\begin{ass}{\bf On the producers.}\label{ass:producers} Each producer $j$ operates a single production unit (with emission rate $e_{j}$), for which \begin{itemize} \item[(i)] The marginal production cost is as in Equation (\ref{coutsRegulation}), where the contribution that does not depend on the producer positions ${\bf A}$ on the $\text{CO}_{2}$ market is constant, $c_{j}(q) = c_{j} \ind_{\{q \in [0,\kappa_{j}]\}}$. \item[(ii)] The producers are pairwise distinct: $\forall i \neq j \in \{1, \cdots J \}, (c_{i},e_{i}) \neq (c_{j},e_{j})$. \end{itemize} \end{ass} For a given strategy profile on the electricity market, Definition \ref{def:clearingElec} gives a range of possible determinations for the electricity price. The preceding analysis of the Nash equilibrium restricted to the electricity market did not require a precise clearing price determination. Nevertheless, to extend our analysis to the coupled markets, we need to make this determination explicit and assume the following: \begin{ass}{\bf On the electricity market.}\label{ass:ElecClearingPrice} For a given strategy profile ${\bf s}$ of the producers, the clearing price of electricity is $p({\bf s}) = \overline{p}({\bf s})$, where $\overline{p}({\bf s})$ is defined in Definition \ref{def:clearingElec} by Equation (\ref{reacmarche-prix}). \end{ass} As previously noted, this choice of electricity price ensures that, for any strategy profile ${\bf s}$ and any positive $\epsilon$, we have $D(p({\bf s})+\epsilon) < D(p({\bf s}))$. This property is necessary for the proof of the main Theorem \ref{propo:Nashcoupled}. The quantity $\mathbb{W}$ of $\text{CO}_{2}$ allowances available on the market plays a crucial role in the market design. As a matter of fact, if this quantity is too large, its market price will drop to zero, leaving the market incapable of fulfilling its role of decreasing $\text{CO}_{2}$ emissions.
Therefore we clearly need an assumption that restricts the quantity of allowances available. Capping the maximum quantity of allowances available requires information about the producers' willingness to obtain allowances. This is the objective of the following paragraph, where we define {\it willing to buy} functions that play a central part in the construction of the Nash equilibrium. \subsection*{Willing to buy functions} In this paragraph, we aim at guessing a Nash equilibrium candidate. We base our reasoning on the dominant strategy on the electricity market alone (see Proposition \ref{propo-Nash}). For the moment, we consider an exogenous $\text{CO}_{2}$ cost $\tau$. The producers' marginal costs then become, for any $\tau \in [0,{\mathfrak{p}}]$, $c_{j}^{\tau}(q) = c_{j} + \tau e_{j}$ for $q \in [0,\kappa_{j}]$, $j=1,\cdots J$. In this framework, the dominant strategy is also parametrized by $\tau$, as $C^{\tau}_j(\cdot)$ defined as in \eqref{stratCoutMarg}. In the same way, we define the clearing electricity price and quantities in terms of $\tau$ only by \begin{align*} {p^{\text{elec}}}(\tau) &= {p^{\text{elec}}} (\{C_j^\tau(\cdot),j=1\ldots,J\})\\ \varphi_j(\tau) &= \varphi_j(\{C_j^\tau(\cdot),j=1\ldots,J\}). \end{align*} We determine two {\it willing-to-buy-allowances functions} ${\mathcal{W} }(\cdot)$ and ${\overline{\mathcal{W} }}(\cdot)$, in the spirit of a Dutch auction mechanism, as follows: \begin{align}\label{WillingQuota} &{\mathcal{W} } (\tau) = \sum_{j=1}^J e_{j} \varphi_j(\tau) \quad\mbox{ and } \quad {\overline{\mathcal{W} }}(\tau) = \sum_{j=1}^J e_{j} \kappa_{j} \ind_{\{\varphi_j(\tau)>0\}}. \end{align} Given the $\text{CO}_{2}$ cost $\tau$, the amount ${\mathcal{W} } (\tau)$ represents the allowances needed to cover the total emissions generated by the producers who won market shares on the electricity market.
${\overline{\mathcal{W} }}(\tau)$ represents the allowances needed in the case where producers wish to cover their full production capacities $\kappa_{j}$. Obviously we have ${\mathcal{W} } (\tau) \leq {\overline{\mathcal{W} }}(\tau)$. We can now state our last design assumption. \begin{ass}{\bf On carbon market design.}\label{ass:hypoTW} The quantity $\mathbb{W}$ of allowances available on the auction $\text{CO}_{2}$ market satisfies \[{\mathcal{W} }(0) > \mathbb{W} > {\overline{\mathcal{W} }}({\mathfrak{p}}).\] \end{ass} \begin{proposition}\label{pelecCroissante} As functions of $\tau$, ${p^{\text{elec}}}(\cdot)$ and $\sum_{j} q_{j}(\cdot)$ are respectively increasing and decreasing. \end{proposition} This proposition is a consequence of Remark \ref{property:offre croissante}, through the cost parameter $\tau$. Assumption \ref{ass:hypoTW} allows us to define two prices of particular interest for the construction of the equilibrium strategy: \begin{align}\label{eq:prix_ante_carbon} & {\tau^{\text{guess}}} = \sup \{\tau\in[0,{\mathfrak{p}}]~\text{s.t.}~{\mathcal{W} }(\tau) > \mathbb{W} \} \; \mbox{ and } \;\; \overline{\tau}^{\text{guess}} = \sup \{\tau\in[0,{\mathfrak{p}}]~\text{s.t.}~{\overline{\mathcal{W} }}(\tau) > \mathbb{W}\}. \end{align} \begin{lemma}\label{lem:couplage-continuite} \item{(i)} We have ${\mathcal{W} }({\tau^{\text{guess}}}) = \mathbb{W}$. \item{(ii)} ${\overline{\mathcal{W} }}$ is a staircase function valued in the finite set $\{ \sum_{j\in {\mathcal I} } \kappa_{j} e_{j}; {\mathcal I}\subset \{ 1,\cdots J\}\}$. \end{lemma} Let us define $\mathcal I(\tau) := \{j;\;q_j(\tau) \neq 0\}$. \begin{proposition} At $\overline{\tau}^{\text{guess}}$, only one of the following two cases may occur: \item{\bf Case A.} There exists a unique producer, denoted $\bar{i}$, such that $\mathcal I(\overline{\tau}^{\text{guess}}) = \mathcal I({\overline{\tau}^{\text{guess}}}^+) \cup \{\bar{i}\}$.
For $\bar{i}$ we have $\frac{1}{e_{\bar{i}}}\left({p^{\text{elec}}}(\overline{\tau}^{\text{guess}}) -c_{\bar{i}} \right) = \overline{\tau}^{\text{guess}}$ and $ \frac{1}{e_{\bar{i}}}\left({p^{\text{elec}}}({\tau^{\text{guess}}}) - c_{\bar{i}} \right) = {\tau^{\text{guess}}}. $ \item{\bf Case B.} There exist two producers, denoted $i_{l}$ and $i_{r}$, such that \[ \mathcal I(\overline{\tau}^{\text{guess}}) = \mathcal I({\overline{\tau}^{\text{guess}}}^+) \cap \mathcal I(\overline{\tau}^{\text{guess}})\cup \{i_l\}\quad \mbox{and} \quad \mathcal I({\overline{\tau}^{\text{guess}}}^+) = \mathcal I({\overline{\tau}^{\text{guess}}}^+) \cap \mathcal I(\overline{\tau}^{\text{guess}})\cup \{i_r\}. \] \end{proposition} \begin{proof} This follows directly from the fact that ${\overline{\mathcal{W} }}(\cdot)$ is a staircase function, and from the fact that the producers are pairwise distinct. \end{proof} We now define a strategy profile on the coupled market in each of the two cases. \begin{definition} We define the strategy profile ${({\bf A},{\bf s})}^{\ast} = ( (s_{1}^{\ast},A_{1}^{\ast}), \cdots (s_{J}^{\ast},A_{J}^{\ast}))$, where $s^{\ast}_{j} := C_{j}$ is the {\it marginal production cost strategy} and $A_{j}^{\ast}$ is defined as follows (depending on which of the two cases occurs): \item{\bf Case A.} \begin{equation} \begin{aligned} &\mbox{For } {\bar{i}},& \quad w\mapsto A^\ast_{{\bar{i}}}(w) := & \displaystyle \left(\overline{\tau}^{\text{guess}} + \delta \right) \ind_{\displaystyle\{w < e_{{\bar{i}}} \varphi_{{\bar{i}}}(\overline{\tau}^{\text{guess}}) - \epsilon\}} \\ & & & + \displaystyle {\tau^{\text{guess}}} \ind_{\displaystyle\{ e_{{\bar{i}}} \varphi_{{\bar{i}}}(\overline{\tau}^{\text{guess}}) - \epsilon < w \leq e_{{\bar{i}}} \kappa_{{\bar{i}}} \}}\\ &\mbox{For } k \neq {\bar{i}},& \quad w\mapsto A^\ast_k(w) := & \frac{1}{e_k}\left( {p^{\text{elec}}}(\overline{\tau}^{\text{guess}}) - c_k\right) \ind_{\{w \leq e_k \kappa_{k} \}} \end{aligned} \end{equation} \item{\bf
Case B.} \begin{equation*} \begin{aligned} &\mbox{For } i_l,& w\mapsto A^\ast_{i_{l}}(w) := & \displaystyle \left(\overline{\tau}^{\text{guess}} + \delta\right) \ind_{\displaystyle\{0 < w \leq \mathbb{W}-{\overline{\mathcal{W} }}({\overline{\tau}^{\text{guess}}}^+)-\epsilon \} } \\ & & &+ \overline{\tau}^{\text{guess}} \ind_{\displaystyle\{\mathbb{W}-{\overline{\mathcal{W} }}({\overline{\tau}^{\text{guess}}}^+) - \epsilon < w \leq e_{i_l}\kappa_{i_l} \}} \\ ~\\ &\mbox{For } i_r,& w\mapsto A^\ast_{i_r}(w) := & \left(\overline{\tau}^{\text{guess}} + \delta \right) \ind_{\{w \leq e_{i_r}\kappa_{i_r}\}} \\ ~\\ &\mbox{For } k \not \in \{i_l, i_r\},& w\mapsto A^\ast_k(w) := & \frac{1}{e_k}\left({p^{\text{elec}}}(\overline{\tau}^{\text{guess}}) - c_k\right)\ind_{\{w\leq e_k\kappa_k\}}. \end{aligned} \end{equation*} \end{definition} Now we can state our main result: \begin{theorem}\label{propo:Nashcoupled} Under Assumptions \ref{ass:producers}, \ref{ass:ElecClearingPrice} and \ref{ass:hypoTW}, one can identify $(\epsilon, \delta)$ such that $({\bf A}^\ast, {\bf s}^\ast )$ is a Nash equilibrium. For this equilibrium the following applies: \item{(i)} the carbon price is ${\tau^{\text{guess}}}$; \item{(ii)} the electricity price is ${p^{\text{elec}}}({\tau^{\text{guess}}})$; \item{(iii)} for any producer, the quantity of allowances bought is null if the quantity of electricity sold is null. \end{theorem} The proof of this theorem relies on the analysis of all possible deviations. It can be found in \cite{preprint-BMP}. Note that the expression of the equilibrium is explicit, which makes it possible to compute the resulting emissions explicitly. This allows an analysis of the impact of such markets on the overall emissions. \section{Conclusion} Once emitted into the atmosphere, $\text{CO}_{2}$ will remain there for more than a century. Estimating its value is therefore essential for efficiently defining policy.
Therefore, carbon valuation remains a central issue in designing markets that foster emission reductions. In this paper, we established the links between an electricity market and a carbon auction market through the analysis of electricity producers' strategies. These strategies were shown to lead to a Nash equilibrium, enabling the computation of equilibrium prices on both markets. This equilibrium determines, for each producer, a level of electricity produced and of $\text{CO}_{2}$ emissions covered. Beyond the analysis of the Nash equilibrium, we envisage the analysis of the electricity production mix, with a particular focus on renewable shares, which do not contribute to emissions.
\makeatletter \newcommand*\reset[1]{%
\renewcommand\AB@affillist{}%
\global\let\AB@authlist\@empty
\renewcommand\AB@authlist{}%
\setcounter{affil}{0}%
\setcounter{authors}{0}%
\emptythanks }
\makeatother \begin{document} \newbibliography{main} \bibliographystyle{main}{References/mycustombib} \newbibliography{app} \bibliographystyle{app}{References/mycustombibSI} \date{} \title{ Absorption Enhancement for Ultra-Thin Solar Fuel Devices with Plasmonic Gratings} \author[1,2,3]{Phillip Manley\thanks{[email protected]}} \author[4]{Fatwa F. Abdi} \author[4]{Sean Berglund} \author[5]{A.T.M. Nazmul Islam} \author[3]{Sven Burger} \author[4]{Roel van de Krol} \author[1,6]{Martina Schmid} \affil[1]{Nanooptical Concepts for Photovoltaics, Helmholtz-Zentrum Berlin f\"{u}r Materialien und Energie GmbH, Hahn-Meitner-Platz 1, 14109 Berlin, Germany} \affil[2]{Nanostructured Silicon for Photonic and Photovoltaic Implementations, Helmholtz-Zentrum Berlin f\"{u}r Materialien und Energie GmbH, Kekul\'{e}str. 5, 12489 Berlin, Germany} \affil[3]{Zuse Institute Berlin, Takustr. 7, 14195 Berlin, Germany} \affil[4]{Institute for Solar Fuels, Helmholtz-Zentrum Berlin f\"{u}r Materialien und Energie GmbH, Hahn-Meitner-Platz 1, 14109 Berlin, Germany} \affil[5]{Institute for Quantum Phenomena in Novel Materials, Helmholtz-Zentrum Berlin f\"{u}r Materialien und Energie GmbH, Hahn-Meitner-Platz 1, 14109 Berlin, Germany} \affil[6]{University of Duisburg-Essen and CENIDE, Lotharstr. 1, 47057 Duisburg, Germany} \AB@maketitle This paper was published in ACS Applied Energy Materials \textbf{1} p.5810-5815 (2018) doi: \href{dx.doi.org/10.1021/acsaem.8b01070}{10.1021/acsaem.8b01070} and is made available as an electronic preprint with permission from the American Chemical Society. \begin{abstract} We present a concept for an ultra-thin solar fuel device with a nanostructured back contact.
Using rigorous simulations we show that the nanostructuring significantly increases the absorption in the semiconductor, CuBi$_2$O$_4$ in this case, by 47\% (5.2~mAcm$^{-2}$) through the excitation of plasmonic modes. We are able to attribute the resonances in the device to metal-insulator-metal plasmons coupled to either localised surface plasmon resonances or surface plasmon polaritons. Rounding applied to the metallic corners leads to a blueshift in the resonance wavelength while maintaining absorption enhancement, thus supporting the possibility for a successful realization of the device. For a 2D array, the tolerance of the polarization-dependent absorption enhancement is investigated and compared to a planar structure. The device maintains an absorption enhancement up to incident angles of 75$^{\circ}$. The study highlights the high potential for plasmonics in ultra-thin opto-electronic devices such as in solar fuel generation. \end{abstract} \section{Introduction} The conversion of sunlight to storable fuel is a challenge of paramount importance to modern society. Photoelectrochemical devices, consisting of semiconductor photoelectrodes immersed in aqueous solution, are a particularly interesting way to achieve this solar-to-fuel conversion. In recent years many different device designs based on various materials have seen intensive research \cite{main}{Montoya2017,Park2006}. Among these, metal oxides are particularly interesting since they possess general aqueous stability and are relatively inexpensive \cite{main}{Sivula2013,Abdi2013_2}. However, they share a common drawback, that of the discrepancy between carrier transport and light absorption. Due to the poor transport properties, the carrier diffusion length in oxides is typically less than 100 nm \cite{main}{Joly2006, Cherepy1998, Kennedy1978, Paracchino2012, Abdi2013, Berglund2016, Abdi2017}. 
On the other hand, they normally have an indirect band gap, so relatively thick films ($>$500~nm) are needed to absorb enough light. This mismatch often severely limits the performance of a metal oxide photoelectrode. In the present work we focus on the metal-oxide semiconductor CuBi$_2$O$_4$, which is an emerging p-type semiconductor for solar fuel applications. It has a band gap of around 1.8 eV, which is ideal for a top absorber in a tandem configuration \cite{main}{Arai2007, Hahn2012, Berglund2013, Berglund2016}. It also has a suitable band position; the conduction and valence band edges straddle both the water reduction and oxidation potentials. As a result, the photocurrent onset potential for CuBi$_2$O$_4$ has been reported to be $\sim$1 V vs the reversible hydrogen electrode (RHE), which is beneficial for a tandem configuration \cite{main}{Arai2007, Hahn2012, Berglund2013}. Previous reports of CuBi$_2$O$_4$ synthesis and deposition have shown highly porous and irregular surface structures \cite{main}{Berglund2016,Hahn2012,Kang2016}. Alternative synthesis methods such as spray pyrolysis have been used to make dense, homogeneous CuBi$_2$O$_4$ thin films \cite{main}{Wang2017}, and newer methods such as pulsed laser deposition (PLD) could be used to obtain a highly uniform CuBi$_2$O$_4$ ultra-thin film. PLD has been demonstrated as a viable option for other metal oxide photoelectrode materials, including resonant-light-trapping Ti-doped $\alpha$-Fe$_2$O$_3$ \cite{main}{Dotan2013}. However, the material is limited by the optical absorption vs. carrier transport mismatch mentioned above. The diffusion length is in the range of $\sim$50 nm, while a thickness of more than 500~nm is needed to absorb 90\% of the incident light \cite{main}{Berglund2016}. This is especially true for the longer wavelengths ($>$ 450~nm), where the quantum efficiency has been reported to be very low.
In order to combat this challenge, we propose to reduce the semiconductor thickness to 100~nm, which should ensure efficient carrier collection by shortening the length photo-generated carriers have to travel. Simultaneously, we use light management to obtain a sufficient absorption of incident sunlight. Various light management strategies applied to the field of solar fuels have been presented in the literature. Photonic crystal structuring of the active layer has been used to enhance absorption \cite{main}{Jeremy2016}. Other approaches use dielectric particles and gratings to localize light inside the active layer \cite{main}{Kim2014,Cheng2018}. Structuring the active material into nanorods can also increase absorption while maintaining short carrier diffusion lengths \cite{main}{Pihosh2015}. A further proposed light management strategy is that of plasmonics \cite{main}{Thomann2011,Abdi2014}. Plasmonic metallic nanoparticles act as optical antennas, allowing light to be concentrated in the vicinity of the semiconductor material, thereby enhancing absorption \cite{main}{Atwater2010,Schmid2016}. Furthermore, metallic particles themselves may have beneficial catalytic properties \cite{main}{Berglund2013,Li2013}. Despite this, certain challenges are present for particles, such as quenching the photocatalysis process through recombination \cite{main}{DiVece2012,Govorov2006}. In this paper we circumvent the challenges of using particles by considering a 100~nm thick CuBi$_2$O$_4$ layer on a grating consisting of laterally alternating Ag and SiO$_2$ on top of an Ag layer. Ag and SiO$_2$ are less positive than CuBi$_2$O$_4$ vs. RHE. In order to circumvent a Schottky barrier at the rear interface, an additional back contact layer or a heavily doped layer of the CuBi$_2$O$_4$ may be necessary \cite{main}{Hudait2001}. Since we focus on the optical device design, such a layer is not taken into account in the current work.
The unit cell of this periodic structure is shown in the inset of figure \ref{fig:RTA}(b). We will refer to the SiO$_2$ region as a nanoslot since it forms a slot in the Ag. Plasmonic gratings have been realized for multiple applications including photovoltaic absorption enhancement \cite{main}{Paetzold2011} and biosensing \cite{main}{Iqbal2017}. For the current application, the nanoslot grating serves as the metallic back contact to transport photo-generated holes to the anode side as well as to enable better light management. A similar structure has been applied to infrared absorption enhancement in photovoltaics \cite{main}{Wang2013}. Through careful device design, we are able to shift the operational frequency to visible wavelengths. \section{Results and Discussion} \subsection{1D Grating} \begin{figure} \begin{center} \includegraphics[width=\textwidth]{Final/Planar_Nanostructure_Comparison_Verarbeitet_V2.pdf} \end{center} \caption{The absorption in the 100 nm thick CuBi$_2$O$_4$ layer (divided into bulk [first 95~nm] and interface [last 5~nm] contributions), and the losses in Ag and Reflection for a solar fuel device with a flat Ag back contact (a) and a nanostructured Ag back contact (b). The dotted lines indicate resonance wavelengths shown in figure \ref{fig:NearFields}. Insets show schematic drawings of the periodic unit cell for each case.} \label{fig:RTA} \end{figure} Figure \ref{fig:RTA} shows the absorption and losses for the ultra-thin photoelectrode with a planar Ag back reflector with no nanoslots (a) and a nanostructured back reflector (b). The inset shows a schematic of each structure. Both of the devices have a 100~nm thick CuBi$_{2}$O$_{4}$ layer. The catalytic reaction, in this case water reduction, occurs at the interface between H$_{2}$O and the inorganic semiconductor photocathode, CuBi$_{2}$O$_{4}$. Light is incident through the H$_{2}$O. 
The nanostructured device has a unit cell width (pitch) of 112~nm; the SiO$_{2}$ slot is 70~nm wide and 60~nm deep and extends infinitely in the plane perpendicular to the page, thereby defining a 1D grating. These values were obtained from an optimization (figure \ref{fig:Opt}). We split the semiconductor into two regions. Firstly, the upper 95~nm of the semiconductor material (dark green), which is in contact with the water. Secondly, the final 5~nm of the semiconductor material (light green), which is in contact with the Ag back contact. These regions will be referred to as the 'bulk' and 'interface' regions, respectively. Analogously, the absorption in each region will be referred to as 'bulk' and 'interface' absorption, respectively. For the first case of the simple back reflector (no SiO$_2$ nanoslots), the interface absorption of the semiconductor is very small compared to the bulk. This is due to the exponential damping of light while traversing the first 95~nm of the material (Lambert-Beer law) and also due to the volume of the region in question being much smaller than that of the bulk region. In order to quantify this absorption for solar fuel applications, we calculate the photocurrent density ($J_{abs}$) from the absorption curve and the solar spectrum. This assumes that all absorbed photons can contribute to water reduction. This assumption should be interpreted as the upper limit on photocurrent density for the proposed photoelectrode architecture. When modeling practical devices, the various loss mechanisms (e.g. recombination) have to be taken into account through the absorbed photon-to-current efficiency (APCE). We note that there have been reports for nanostructured metal-oxides with APCE values close to 100\% \cite{main}{Pihosh2015}.
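The photocurrent density $J_{abs}$ is obtained by weighting the absorption spectrum with the AM1.5G photon flux and integrating over wavelength, $J_{abs} = q \int A(\lambda)\,\Phi(\lambda)\,d\lambda$. A minimal sketch of this bookkeeping is given below; the flat toy spectra are placeholders of our own, not the simulated absorption or the tabulated solar spectrum.

```python
Q_E = 1.602e-19  # elementary charge [C]

def photocurrent_density(wavelengths_nm, absorption, photon_flux):
    """Trapezoidal integration of q * A(lambda) * Phi(lambda) d(lambda).

    photon_flux in photons m^-2 s^-1 nm^-1; returns J_abs in mA cm^-2."""
    j = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dl = wavelengths_nm[i + 1] - wavelengths_nm[i]
        mean = 0.5 * (absorption[i] * photon_flux[i]
                      + absorption[i + 1] * photon_flux[i + 1])
        j += Q_E * mean * dl          # accumulate in A m^-2
    return j * 0.1                    # convert A m^-2 -> mA cm^-2

# Perfect absorption and a flat toy flux of 4e18 photons m^-2 s^-1 nm^-1
# between 300 and 700 nm (illustrative numbers only):
j_abs = photocurrent_density([300.0, 700.0], [1.0, 1.0], [4.0e18, 4.0e18])
print(round(j_abs, 3))                # 25.632 mA cm^-2 for this toy input
```

In practice the integration runs up to the 700~nm cutoff discussed above, with $A(\lambda)$ taken from the simulation and $\Phi(\lambda)$ from the tabulated AM1.5G spectrum.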
The proposed structure is able to reach a theoretical photocurrent density of 11.1 mAcm$^{-2}$ which is 54\% of the maximum achievable short circuit current density for a material with a band-gap of 1.8~eV (20.5 mAcm$^{-2}$). Due to the low losses provided by Ag, the absorption in the back reflector is minimal, meaning that the main loss mechanism is reflection. This loss mechanism is eliminated entirely at the wavelength of 440~nm due to the presence of a Fabry-Perot resonance. These kinds of resonances are clearly beneficial to absorption; however, they cannot provide an arbitrarily broad absorption enhancement, since the only free parameter for tuning such a resonance is the film thickness. Absorption is seemingly still present at wavelengths up to 700~nm. The complex refractive index used for CuBi$_{2}$O$_{4}$ also contains parasitic absorption at and below the band gap (figure \ref{fig:nk_data}). However, analysis of the absorption coefficient has shown that the band gap for CuBi$_{2}$O$_{4}$ lies around 700~nm. Therefore we use this wavelength as the cutoff for absorption contributing to the photocurrent density. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{Final/NearFields_New.pdf} \end{center} \caption{The electric field in the nanostructured solar fuel device for four different wavelengths taken from the peaks in figure~\ref{fig:RTA}. Wavelengths are 380, 410, 470, 650~nm for parts (a-d), respectively. Light is incident normally from above and polarized in the x-z plane.} \label{fig:NearFields} \end{figure} Figure \ref{fig:RTA}(b) shows the reflection and absorption for the nanostructured back contact (Ag with SiO$_2$ nanoslots). The polarization is oriented in the $x$~$-$~$z$ plane so that the plasmonic effects can be studied. In this case the total absorption is over 80\% for most of the visible spectrum, particularly between 500 and 650~nm where the solar spectrum has peak intensity.
The overall absorption increase leads to a maximum short circuit current density of 16.3~mAcm$^{-2}$ (80\% of the maximum achievable). This large increase in absorption can be attributed to the excitation of resonant modes that exist at the Ag~/~CuBi$_2$O$_4$ interface, as evidenced by the large contribution of the interface absorption (light green). In this case the bulk absorption contributes 11.1~mAcm$^{-2}$ to $J_{abs}$, while the interface absorption contributes 5.2~mAcm$^{-2}$. In addition, the parasitic absorption in the Ag back contact also increases due to the increased interaction with the Ag arising from the resonant modes. In order to investigate the mechanism of the absorption enhancement further, we show in figure \ref{fig:NearFields} the electric near field strength of the nanostructured back contact for the peak wavelengths shown in figure~\ref{fig:RTA}(b), namely 380, 410, 470 and 650~nm. The presence of a resonant mode is clearly shown in each case by the strong localization of the electric field. All four of the modes can be associated with metal-insulator-metal (MIM) plasmon resonances of the slot \cite{Maier2007,Kurokawa2007}. This can be seen in the variation of the resonance wavelength with slot length, which oscillates with a period close to the analytical MIM mode wavelength (figures \ref{fig:Length_Variation} and \ref{fig:Length_Variation_Near_Fields}). The four modes can be grouped into two categories. At the shorter wavelengths of 380 and 410~nm (figures \ref{fig:NearFields}(a) and (b)), the resonance is mainly confined to a localized mode at the bottom corners of the slot. In contrast, the resonances at the longer wavelengths of 470 and 650~nm are mainly located at the upper corners of the slot and at the Ag / CuBi$_2$O$_4$ interface.
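The standing-wave picture behind this classification can be sketched as follows: a slot of length $L$ resonates when roughly an integer number of MIM half-wavelengths fits inside it, $L \approx m\,\lambda_{\rm MIM}/2$ with $\lambda_{\rm MIM} = \lambda_0/n_{\rm eff}$. The effective index must come from solving the full MIM dispersion relation for the actual gap width and materials; the value used below is purely illustrative:

```python
def mim_resonance_wavelengths(slot_length_nm, n_eff, orders=(1, 2, 3)):
    """Free-space wavelengths satisfying L ~ m * lambda_MIM / 2.

    lambda_MIM = lambda_0 / n_eff is the gap-plasmon wavelength.
    End-phase corrections at the slot terminations are neglected, and
    n_eff (strongly gap- and wavelength-dependent in reality) is an
    input rather than being solved for here.
    """
    return {m: 2.0 * n_eff * slot_length_nm / m for m in orders}

# 60 nm slot with an assumed n_eff = 5.0 (illustrative only)
print(mim_resonance_wavelengths(60.0, 5.0))
```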
An analysis of the variation of these modes with the device pitch (keeping the slot size constant) reveals that at wavelengths between 450 and 700~nm, the field localized at the upper corners acts as a source of surface plasmon polariton (SPP) modes (figures \ref{fig:Pitch_Variation} and \ref{fig:Pitch_Variation_Near_Fields}). Since the SPP modes have significant field strength in the CuBi$_2$O$_4$, they are able to increase the absorption there. In contrast, the shorter wavelength modes remain localized in the bottom of the slot and therefore contribute mainly to losses in the Ag. The maximum absorption enhancement is observed when an antinode of the MIM mode resonance is at the slot opening and the excited SPP mode constructively interferes with itself. This holds true for the device at a wavelength of 650~nm. Slot length variations (figure \ref{fig:Length_Variation}) and pitch variations (figure \ref{fig:Pitch_Variation}), which affect the MIM and SPP resonances, respectively, show a peak for 60~nm slot length and 112~nm pitch. For the peak at 470~nm wavelength, the MIM mode is at maximum enhancement while the SPP is slightly off resonance. Conversely, at 575~nm wavelength, the MIM mode is off resonance while the SPP mode shows maximum constructive interference, which causes the increase in interface absorption visible at this wavelength. Due to the coupling to SPP interface modes, the total absorber layer thickness can, in principle, be reduced while maintaining a high absorption. However, the total absorption obtained will still drop with decreasing layer thickness, as the short wavelength light ($<$ 350~nm) still needs to be absorbed conventionally, since the inter-band transition losses in Ag prevent any beneficial resonances from forming at these wavelengths.
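For the SPP channel, the relevant length scale is the SPP wavelength at a flat Ag/CuBi$_2$O$_4$ interface, $\lambda_{\rm SPP} = \lambda_0 / \mathrm{Re}\sqrt{\varepsilon_m\varepsilon_d/(\varepsilon_m+\varepsilon_d)}$, which sets the pitch at which launched SPPs interfere constructively. The permittivity values in this sketch are rough assumptions, not the tabulated data used in the simulations:

```python
import numpy as np

def spp_wavelength(wl0_nm, eps_metal, eps_dielectric):
    """SPP wavelength at a flat metal/dielectric interface.

    The SPP effective index is n_spp = sqrt(eps_m*eps_d/(eps_m+eps_d));
    the SPP wavelength is lambda_0 / Re(n_spp). Complex permittivities
    fold propagation loss into Im(n_spp), which is ignored here.
    """
    n_spp = np.sqrt(eps_metal * eps_dielectric / (eps_metal + eps_dielectric))
    return wl0_nm / n_spp.real

# Rough values at 650 nm: eps_Ag ~ -17 + 1.2j (Johnson & Christy-like),
# eps_CuBi2O4 ~ 7.3 (n ~ 2.7); both are assumptions for illustration.
print(spp_wavelength(650.0, -17.0 + 1.2j, 7.3 + 0.0j))
```

The strongly sub-wavelength $\lambda_{\rm SPP}$ that results illustrates why a pitch of order 100~nm can already host constructive SPP interference.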
Therefore, although we have presented absorption curves for the device with a 100~nm thick absorbing layer, we stress that the presented nanostructured device could also conceivably provide absorption enhancements for much thinner layers. \subsection{Effect of Corner Rounding} A further consideration for the implementation of these nanostructures in realistic devices is the ability to fabricate precise geometries. As the near field pictures in figure \ref{fig:NearFields} show, there is a strong localization of the electric field in the vicinity of the sharp edges of the Ag grating. Such perfectly sharp edges are difficult to fabricate, so a certain amount of rounding of the edges should be expected. In figure \ref{fig:CornerRounding} we show the absorption in the 100~nm of CuBi$_{2}$O$_{4}$ for the nanostructured 1D grating for three values of the corner rounding radius ($R_{c}$) at both the upper and lower grating corners. The definition of $R_{c}$ is shown in the inset of figure \ref{fig:CornerRounding}. The values of $R_{c}$ presented are 0 (the same absorption curve as shown in figure~\ref{fig:RTA}(b)), 2~nm and 10~nm. It can be seen that even for $R_{c}$~=~2~nm the resonances are blueshifted, and this becomes more pronounced with increasing corner rounding. The blueshift is a combination of multiple factors. The MIM mode resonance wavelength tends to decrease with decreasing slot width. This may be more relevant to the shorter wavelength resonances since they are localized at the bottom of the slot. Furthermore, due to the inhomogeneous width of the slot, the length of slot which has the width necessary for supporting the MIM resonance will be effectively shorter. Since the MIM mode provides an absorption enhancement when an antinode is present at the slot opening, the wavelength of the MIM mode necessary for this may be shifted to shorter wavelengths.
Finally, as the corner rounding increases, more of the SPP resonance will be located inside the slot, which has a lower refractive index than CuBi$_{2}$O$_{4}$. This lower refractive index will tend to redshift the SPP resonance. Due to these competing factors, the effect of corner rounding on the exact resonance position is difficult to predict, necessitating further numerical study for an optimum to be found. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{Final/CornerRounding.pdf} \end{center} \caption{The absorption inside the absorbing semiconductor (CuBi$_2$O$_4$) as a function of the wavelength for three different values of corner rounding $R_{c}$. The first three structures have the same geometrical parameters as in figure \ref{fig:RTA}(b), while the fourth has the geometrical parameters optimized for the corner rounding. Inset shows the definition of $R_{c}$. All other aspects of the geometry are the same as in figure \ref{fig:RTA}(b). Light is incident normally from above and polarized in the $x$-$z$ plane.} \label{fig:CornerRounding} \end{figure} Despite the blueshifting, the core resonant modes are all still present. We can conclude that the resonant absorption enhancement is not reliant on an unphysical singularity at the corners. This is important for the physical realization of the device. The maximum photocurrent density obtained for the device with no corner rounding was previously shown to be 16.3~mAcm$^{-2}$. When a corner rounding of 2~nm is imposed, the photocurrent density lowers slightly to 15.8~mAcm$^{-2}$. As the corner rounding is increased to 10~nm, the photocurrent density further decreases to 14.6~mAcm$^{-2}$. All of these values show a significant improvement over the planar photocurrent density of 11.1~mAcm$^{-2}$. It should further be noted that the geometry can be reoptimized with corner rounding.
If the corner rounding is 10~nm, a new optimum absorption enhancement can be found for a pitch of 200~nm, a slot width of 100~nm and a slot length of 50~nm. In this case the photocurrent density is increased to 16.5~mAcm$^{-2}$. The absorption profile for the optimized structure with corner rounding is shown in figure \ref{fig:CornerRounding}. \subsection{2D Grating} For integration into a solar fuel device, the proposed nanostructured grating has to continue to provide a strong enhancement in the presence of unpolarized light at different angles of incidence. In order to enhance the unpolarized response, the 1D grating can be extended to a 2D grating while keeping the same dimensions as the 1D grating. The resonance conditions found previously can be maintained for either a Ag grating with SiO$_{2}$ nanoslots (1), or a SiO$_{2}$ matrix containing Ag cubic nanoparticles connected to a Ag back contact (2). A schematic of configuration (2) is shown in figure \ref{fig:2DSchema}. Configuration (2) was found to be optically more beneficial than configuration (1) and was therefore chosen for the results presented in this section. For reasons of computational efficiency, corner rounding has not been used for the 2D grating. \begin{figure} \centering \includegraphics[width=\textwidth]{Final/AngularDependenceAll.pdf} \caption{The absorbed photocurrent density as a function of the incident polar angle $\theta$ (a). The absorption in the semiconductor (CuBi$_2$O$_4$) for a 2D grating for polarization s (b) and p (c) as a function of both wavelength and polar incidence angle.} \label{fig:2DGrating} \end{figure} The periodic unit cell of the 2D grating has three mirror planes, which lie along the $x$ axis, the $y$ axis and at 45$^{\circ}$ to the $x$ and $y$ axes. It is therefore sufficient to use azimuthal angles $0^{\circ}<\phi<45^{\circ}$ to obtain the azimuthally averaged response.
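The symmetry-reduced azimuthal average can be sketched as follows; the cosine response in the example is synthetic, standing in for the simulated $J_{abs}(\phi)$:

```python
import numpy as np

def azimuthal_average(response, n_angles=5, phi_max_deg=45.0):
    """Average a response over azimuth using the mirror symmetry.

    With mirror planes along x, y and the diagonal, sampling
    0 <= phi <= 45 deg covers all inequivalent azimuths; `response`
    is any callable of the azimuthal angle in degrees.
    """
    phis = np.linspace(0.0, phi_max_deg, n_angles)
    return float(np.mean([response(p) for p in phis]))

# Synthetic stand-in for J_abs(phi): compare coarse and finer sampling
# to check convergence of the average.
j = lambda phi_deg: 16.0 + 0.3 * np.cos(np.radians(8.0 * phi_deg))
print(abs(azimuthal_average(j, 3) - azimuthal_average(j, 5)))
```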
We chose five azimuthal angles equally spaced between 0$^{\circ}$ and 45$^{\circ}$. The difference in absorption between averaging over three and five angles was smaller than $10^{-3}$; we therefore concluded that five angles are sufficient to obtain an averaged response. For each polar angle of incidence $\theta > 0$ (see figure \ref{fig:2DSchema}) we can define two orthogonal polarizations: p polarization, where the electric field lies in the scattering plane, and s polarization, where the electric field is perpendicular to the scattering plane, as shown in figure \ref{fig:2DSchema}. Figure \ref{fig:2DGrating}(a) shows how the maximum photocurrent density $J_{abs}$ for the structure depends on the incident polar angle. For oblique incidence, p polarization provides a higher current density than s polarization. Taking the unpolarized response into account, the grating outperforms a planar stack for angles up to 75$^{\circ}$, which is highly beneficial to solar fuel applications. Figure \ref{fig:2DGrating}(b-c) shows the wavelength-resolved polar angular response for s and p polarization. The 2D grating, even at normal incidence, does not show the same behavior as the 1D grating. This is to be expected as they are physically different systems; however, the main resonances present in figure~\ref{fig:RTA}(b) are also present in figure~\ref{fig:2DGrating}, namely a strong absorption enhancement between 500 and 600~nm and a slightly weaker enhancement between 600 and 700~nm. In general, the absorption for p polarization is higher as we move towards higher angles. This is partly due to reflection at the initial H$_{2}$O/CuBi$_{2}$O$_{4}$ interface being lower for p polarization. The dependence of the plasmonic resonances on incident angle differs for the p and s polarizations. In figure \ref{fig:2DGrating}(b) we see that for s polarization, the resonance wavelengths remain constant with increasing angle.
The resonance positions do not change because the electric field remains normal to the vertical sides of the Ag cuboids for all incident angles. For the case of p polarization shown in figure \ref{fig:2DGrating}(c), the resonances broaden and strengthen at higher angles. This is due to the electric field no longer being purely normal to the vertical sides of the Ag cuboids, thereby changing the resonance condition for excitation of MIM modes. Broader resonances are beneficial to the broadband functioning of the device. \section{Conclusion} We have presented a nanostructured back contact for use in ultra-thin solar fuel devices. The nanostructuring was shown to significantly increase the absorption in the absorbing semiconductor, CuBi$_{2}$O$_{4}$ in this case, by 47\% (5.2~mAcm$^{-2}$) through the excitation of plasmonic modes. By varying the length of the SiO$_2$ nanoslot and the device pitch, the resonances could be classified as MIM modes which either remain isolated in the nanoslot or couple to SPP modes, depending on the wavelength. By simulating the effect of corner rounding, it could be confirmed that the presented results do not rely on unphysical singularities at material interfaces. This means that translation of the simulated results to an experimental reality is promising. Moreover, we demonstrated that the detrimental effect of corner rounding, which is unavoidable in practical systems, can be fully compensated by adjusting the pitch and the slot dimensions of the SiO$_2$. For a 2D grating based on the optimal 1D nanoslot grating, the angular tolerance of the absorption enhancement was investigated for two different polarizations. A greater angular tolerance was shown for p polarization, with the absorption even increasing at higher angles. The unpolarized response was shown to outperform planar layers up to angles of 75$^{\circ}$.
These results provide a clear pathway to overcome the mismatch between the optical absorption length and the carrier diffusion length, which is present in many semiconducting photoelectrodes. We therefore expect that higher solar-to-fuel conversion efficiencies can be achieved with metal oxide photoelectrodes, especially the promising photocathode material CuBi$_{2}$O$_{4}$, using our proposed architecture. \section{Methods} All simulations were done using the commercial software JCMsuite \cite{Pomplun2007}, a finite element solver for Maxwell's equations. All simulations modelled the upper and lower half spaces with perfectly matched layers, while periodic boundary conditions were used in the $x-y$ plane. A plane wave source was incident from the upper half space. For H$_2$O and SiO$_2$, wavelength-independent refractive indices of 1.33 and 1.5 were used, respectively. For Ag the data of Johnson and Christy were used \cite{Johnson1972}. The complex refractive index of CuBi$_2$O$_4$ was obtained via spectroscopic ellipsometry of a single crystal using an M-2000D ellipsometer (193-1000 nm, J.A. Woollam Co., Inc.). Details of the crystal synthesis are given in the supporting information. By using a finite element degree of 3 and a mesh side length constraint smaller than or equal to one tenth of the wavelength in the material (for dielectrics) or a constant value of 5 nm (for metals), we were able to ensure an accuracy of better than $10^{-3}$ for the absorption and reflection. \section{ASSOCIATED CONTENT} Supporting Information Available: The complex refractive index of CuBi$_{2}$O$_{4}$ used in all simulations. Description of the procedure for obtaining absorption and photocurrent density. The effects of length and pitch variations of the nanoslot. Optimization of the nanoslot with respect to slot width, slot length and pitch ratio. The geometry of the 2D nanoslot array and the definition of the scattering angles.
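As a side note on the convergence setup described in the Methods, the mesh-size rule can be written compactly; the CuBi$_2$O$_4$ index in the example is a rough assumed value:

```python
def max_mesh_size_nm(wl_nm, n, is_metal, metal_cap_nm=5.0, fraction=10.0):
    """Mesh side-length constraint of the convergence setup.

    Dielectrics: h <= lambda_0 / (fraction * n), i.e. one tenth of the
    wavelength in the material; metals: a fixed cap, since the field
    there varies on the scale of the skin depth, not the bulk wavelength.
    """
    if is_metal:
        return metal_cap_nm
    return wl_nm / (fraction * n)

# 400 nm light in CuBi2O4 (n ~ 2.9, assumed) versus in Ag
print(max_mesh_size_nm(400.0, 2.9, False))
print(max_mesh_size_nm(400.0, None, True))
```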
\section{Acknowledgements} The authors would like to thank Dr. A. Bronneberg for ellipsometry measurements. P. Manley and M. Schmid would like to acknowledge funding and support from the Initiative and Networking fund of the Helmholtz Association for the Young Investigator Group VH-NG-928. Part of the work was done at the Berlin Joint Lab for Optical Simulations for Energy Research (BerOSE). P. Manley acknowledges funding from the Helmholtz Innovation Lab HySPRINT, which is financially supported by the Helmholtz Association.
\section{Introduction} \label{sec:introduction} Much has been learned about the physical properties of exoplanets in the nearly three decades following the discovery of the exoplanet candidate HD~114762\,b \citep{latham:1989}. As of 2018 September 27, the NASA Exoplanet Archive lists 3791 confirmed and validated exoplanets, the majority of which were found by the NASA {\em Kepler} mission via the transit method. Among the confirmed planets are 418 short-period gas giant planets ($P < 10$\,days, and $M_{p} > 0.2$\,\ensuremath{M_{\rm J}}\ or $R_{P} > 0.7$\,\ensuremath{R_{\rm J}}). These are the so-called hot Jupiters. Especially important are the 375 hot Jupiters which are known to transit their host stars. These objects are among the best-studied planets, providing a wealth of information about their physical properties. Among the 270 planets for which the mass and radius have both been determined with a precision of 20\% or better, 235 are hot Jupiters. Of the 133 planets for which the (sky projected) stellar obliquity has been measured, 117 are hot Jupiters \citep[TEPCat; ][]{southworth:2011}. Similarly, the majority of exoplanets with observational constraints on the properties of their atmospheres are hot Jupiters \citep[e.g.,][]{madhusudhan:2018}. All of these observations have been greatly facilitated by the frequently occurring and deep ($\sim1\%$) transits presented by these systems. All but twelve of the 418 hot Jupiters in the NASA Exoplanet Archive have been found around F, G or K-type host stars ($4000\,{\rm K} < T_{\rm eff} < 7300\,{\rm K}$, or $0.6\,M_{\odot} < M < 1.6\,M_{\odot}$ if $T_{\rm eff}$ is not given in the database). One of the hot Jupiters in this sample is around a B star, seven are around A stars, and only four have been found around M dwarf stars.
The hot Jupiters that have previously been discovered around M dwarf stars include Kepler-45\,b \citep[$M_{P} = 0.505 \pm 0.090$\,\ensuremath{M_{\rm J}}, $M_{S} = 0.59 \pm 0.06$\,\ensuremath{M_\sun}, $T_{\rm eff} = 3820 \pm 90$\,K;][]{johnson:2012}, HATS-6\,b \citep[$M_{P} = 0.319 \pm 0.070$\,\ensuremath{M_{\rm J}}, $M_{S} = 0.574^{+0.020}_{-0.027}$\,\ensuremath{M_\sun}, $T_{\rm eff} = 3724 \pm 18$\,K;][]{hartman:2015:hats6}, NGTS-1\,b \citep[$M_{P} = 0.812^{+0.066}_{-0.075}$\,\ensuremath{M_{\rm J}}, $M_{S} = 0.617^{+0.023}_{-0.062}$\,\ensuremath{M_\sun}, $T_{\rm eff} = 3916^{+71}_{-63}$\,K;][]{bayliss:2018:ngts1}, and HD~41004\,B\,b \citep[$M_{P}\sin i = 18.37\pm0.22$\,\ensuremath{M_{\rm J}}, $M_{S} \sim 0.4$\,\ensuremath{M_\sun};][]{zucker:2003}. The latter object was detected in the radial velocity (RV) observations of the M2V component of a K1V+M2V visual binary, and the inferred 19\,\ensuremath{M_{\rm J}}\ brown-dwarf companion mass is a lower limit. The other three objects are transiting systems. Theoretical models of planet formation and evolution have predicted that hot Jupiters should be less common around M dwarf stars than around solar-type stars \citep{mordasini:2012}. While there is some observational support for this prediction from RV surveys \citep{johnson:2010}, the number of M dwarfs that have been systematically surveyed for hot Jupiters is still too low to be certain of this conclusion \citep{obermeier:2016}. One of the main goals in current exoplanet research is to expand the sample of well-characterized hot Jupiters known around M dwarfs and A or earlier-type stars. This will allow the occurrence rate of hot Jupiters to be measured as a function of stellar mass, and will also enable the dependence of other planetary system properties on stellar mass to be studied.
Some of these properties that might be investigated include the orbital obliquities of the planets, the degree of inflation in the planetary radii, and the atmospheric properties of the planets. Giant planets transiting M dwarf stars also provide at least two observational advantages over similar-size planets transiting larger stars. They produce very deep transits. In principle, a giant planet could completely obscure a very low-mass star, although no such system has been discovered to date. The deep transits allow for observations with a higher signal-to-noise ratio (S/N), especially if conducted in the IR where the stars have a higher photon flux density. The stars themselves undergo very little evolution over the lifetime of the Galaxy, enabling a more precise constraint on the mass and radius of the star (and hence of the planet) from the available observations compared to what can be done for more massive stars \citep[e.g.,][]{hartman:2015:hats6}. The primary challenge in discovering transiting hot Jupiters around M dwarfs is the faintness of these stars. In order to survey a sufficient number of M dwarfs to detect the rare cases of transiting hot Jupiters, it is necessary to observe stars down to $V \sim 15$\,mag, which is fainter than the limits of many of the ground-based transit surveys that have been productive at discovering transiting hot Jupiters. The two ground-based surveys which have discovered transiting hot Jupiters around M dwarfs are the HATSouth survey \citep{bakos:2013:hatsouth} and the NGTS survey \citep{wheatley:2018}. Both of these projects use larger aperture telescopes compared to the other wide-field transit surveys (0.18\,m in the case of HATSouth and 0.20\,m in the case of NGTS) allowing for greater sensitivity to M dwarf stars. In this paper we present the discovery of HATS-71b{} by the HATSouth survey, the fifth hot Jupiter found around an M dwarf star, and the fourth transiting system of this type. 
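The depth advantage discussed above is purely geometric: for a dark planet crossing a uniform stellar disk, the fractional flux drop is $(R_P/R_\star)^2$. The radii in this sketch are illustrative round numbers, not measured values for any particular system:

```python
# Radii in solar units; the Jupiter-to-solar radius ratio below uses
# nominal values R_Jup = 7.1492e7 m, R_Sun = 6.957e8 m.
RJUP_IN_RSUN = 7.1492e7 / 6.957e8

def transit_depth(rp_rsun, rs_rsun):
    """Fractional flux drop for a dark planet on a uniform stellar disk."""
    return (rp_rsun / rs_rsun) ** 2

# A ~1 R_Jup planet around a ~0.49 R_Sun M dwarf versus a Sun-like star
print(transit_depth(RJUP_IN_RSUN, 0.49))  # a few percent
print(transit_depth(RJUP_IN_RSUN, 1.0))   # about 1 percent
```

Limb darkening and any planetary flux modify this simple estimate, but the order-of-magnitude gain for M-dwarf hosts is unchanged.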
With a spectroscopic effective temperature of \ifthenelse{\equal{\hatcurSMEversion}{i}}{\hatcurSMEiteff{}}{\hatcurSMEiiteff{}}\,K, and a spectral type of M3V, HATS-71{} is the coolest M dwarf known to host a transiting hot Jupiter. The 4.7\% deep transits are also the deepest of any transiting system discovered to date. The planet was first detected by HATSouth, and then confirmed using ground-based spectroscopic and photometric follow-up. It was also recently observed in Sector 1 of the NASA {\em Transiting Exoplanet Survey Satellite} mission ({\em TESS}, \citealp{ricker:2015}), and included in the first set of alerts released to the public. In this paper we present all of these data and analyze them to determine the physical properties of the planet HATS-71b{} and its host star HATS-71{}. We also present evidence, driven largely by observations from the Gaia~DR2 mission \citep{gaiamission,gaiadr2}, that the planet host may have an unresolved binary star companion with a current projected physical separation of less than $14$\,AU\@. If confirmed, the presence of this companion might be responsible for shrinking the orbit of the gas giant planet to its current short period orbit. In Section~\ref{sec:obs} we present the observations. We describe the analyses that we have performed to confirm the planetary system and determine its properties in Section~\ref{sec:analysis}. We conclude with a discussion of the results in Section~\ref{sec:discussion}. \section{Observations} \label{sec:obs} \subsection{Photometric detection} \label{sec:detection} HATS-71\ was initially detected as a transiting planet candidate based on observations by the HATSouth network. A total of 26,668 observations were gathered at 4\,min cadence between UT 2011 July 17 and UT 2012 October 25. 
The source was observed by the HS-1, HS-3 and HS-5 instruments (located in Chile, Namibia, and Australia, respectively) in HATSouth field G755, and by the HS-2, HS-4 and HS-6 instruments (located in Chile, Namibia, and Australia, respectively) in HATSouth field G756. Observations were carried out as described by \citet{bakos:2013:hatsouth}, and reduced to trend-filtered light curves \citep[filtered using the method of][]{kovacs:2005:TFA} and searched for transiting planet signals \citep[using the Box-fitting Least Squares or BLS method;][]{kovacs:2002:BLS} as described by \citet{penev:2013:hats1}. We identified a periodic box-shaped transit signal in the trend-filtered light curve of HATS-71{} with a period of $\hatcurLCPshort$\,days\ and a depth of \hatcurLCdip{}\,mmag. Based on this we selected the object as a candidate, assigning it the HATSouth candidate identifier \hatcurhtr. The trend-filtered HATSouth light curve has a residual RMS of 50\,mmag. The light curve is shown phase-folded in \reffigl{hatsouth}, while the data are made available in \reftabl{phfu}. We searched for additional periodic signals in the combined HATSouth light curve using both the Generalized Lomb-Scargle periodogram \citep{zechmeister:2009} and the BLS algorithm, in both cases applied to the light curve after subtracting the best-fit transit model for HATS-71b{}. We find a peak in the GLS periodogram at a period of \hatcurrotper\,days with a false alarm probability of $10^{-31}$ (\reffigl{gls}). This false alarm probability is estimated using the relations of \citet{zechmeister:2009} appropriate for Gaussian white noise, but calibrated to the observed sampling and magnitude distribution via bootstrap simulations.
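The period search just described can be illustrated with a bare classical Lomb-Scargle implementation. The generalized periodogram of \citet{zechmeister:2009} additionally floats the zero point and weights the points; the data below are synthetic, with a 39.4~day sinusoid injected at an amplitude comparable to the one found for HATS-71:

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classical Lomb-Scargle power for unevenly sampled data.

    This bare, mean-subtracted form is enough to show how a rotation
    signal is located; production analyses use the generalized,
    weighted version with floating mean.
    """
    y = y - y.mean()
    power = np.empty_like(freqs)
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        tau = np.arctan2(np.sum(np.sin(2.0 * w * t)),
                         np.sum(np.cos(2.0 * w * t))) / (2.0 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

# Synthetic light curve: 39.4 d sinusoid, 13 mmag semi-amplitude,
# 5 mmag Gaussian noise, irregular sampling over a 400 d baseline.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 400.0, 800))
y = 0.013 * np.sin(2.0 * np.pi * t / 39.4) + rng.normal(0.0, 0.005, t.size)
freqs = np.linspace(1.0 / 100.0, 1.0 / 10.0, 2000)
best_period = 1.0 / freqs[np.argmax(lomb_scargle(t, y, freqs))]
print(round(best_period, 1))
```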
The signal is independently detected in the G755 and G756 HATSouth light curves (with peak periods of $37.02$\,days and $41.86$\,days, and false alarm probabilities of $10^{-10}$ and $10^{-15}$, respectively), which have similar time coverage but were obtained with different instruments using different pointings on the sky. Fitting a sinusoid to the phase-folded data yields a semi-amplitude of $0.0134 \pm 0.0039$\,mag. We interpret this period as the photometric rotation period of the star. Given the measured rotation period and stellar radius, the spectroscopic \ensuremath{v \sin{i}}\ should be $<0.625\,\ensuremath{\rm km\,s^{-1}}$, i.e., undetectable even with the current high-resolution spectroscopy. Both the period and amplitude are typical values for a field M3 dwarf star. No additional significant transit signals are detected by BLS in the combined HATSouth light curve. The highest peak in the BLS spectrum has a period of $82.7$\,days, a transit depth of $8.5$\,mmag and a signal-to-pink-noise of only $4.5$. \ifthenelse{\boolean{emulateapj}}{ \begin{figure}[!ht] }{ \begin{figure}[!ht] } \plotone{HATS755-002-hs.pdf} \caption{ Phase-folded unbinned HATSouth light curve for HATS-71{}. {\em Top:} the full light curve. {\em Middle:} the light curve zoomed-in on the transit. {\em Bottom:} the residuals from the best-fit model zoomed-in on the transit. The solid line shows the model fit to the light curve. The dark filled circles show the light curve binned in phase with a bin size of 0.002. \label{fig:hatsouth}} \ifthenelse{\boolean{emulateapj}}{ \end{figure} }{ \end{figure} } \ifthenelse{\boolean{emulateapj}}{ \begin{figure}[!ht] }{ \begin{figure}[!ht] } \plotone{HATS755-002-GLS.pdf} \caption{ {\em Top:} Generalized Lomb-Scargle (GLS) periodogram of the combined HATSouth light curve after subtracting the best-fit transit model for HATS-71b{}. The horizontal dashed blue line shows the $10^{-5}$ false alarm probability level.
{\em Middle:} The HATSouth light curve phase-folded at the peak GLS period of \hatcurrotpershort\,days. The gray points show the individual photometric measurements, while the dark red filled squares show the observations binned in phase with a bin size of 0.02. {\em Bottom:} Same as the middle panel, but here we restrict the vertical range of the plot to better show the variation seen in the phase-binned measurements. \label{fig:gls}} \ifthenelse{\boolean{emulateapj}}{ \end{figure} }{ \end{figure} } \subsection{Spectroscopic Observations} \label{sec:obsspec} Spectroscopic follow-up observations of HATS-71{} were obtained with WiFeS on the ANU~2.3\,m \citep[][]{dopita:2007}, PFS on the Magellan~6.5\,m \citep[][]{crane:2006,crane:2008,crane:2010}, and ARCoIRIS on the Blanco~4\,m telescope \citep{abbott:2016}. The target was also observed with FEROS on the MPG~2.2\,m \citep[][]{kaufer:1998} between 2016 July 1 and 2016 September 16, but the spectra all had too low a S/N to be of use. \ifthenelse{\boolean{emulateapj}}{ \begin{figure*} [ht] }{ \begin{figure}[ht] } \plotone{HATS-71_wifes.pdf} \caption{ WiFeS/ANU~2.3\,m $R = 3000$ optical spectra of HATS-71{} (middle spectrum) and two other M dwarf standard stars for comparison. HATS-71{} has the optical spectrum of an M3 dwarf star. The relative fluxes are on an arbitrary scale, and the two standard stars have been shifted vertically for clarity. } \label{fig:wifes} \ifthenelse{\boolean{emulateapj}}{ \end{figure*} }{ \end{figure} } The WiFeS observations of HATS-71{}, which were reduced following \citet{bayliss:2013:hats3}, were used for reconnaissance of this faint M dwarf. We obtained a single spectrum at resolution $R \equiv \Delta\,\lambda\,/\,\lambda \approx 3000$ and S/N per resolution element of 18.9 on UT 2014 August 6 (\reffigl{wifes}). We used this observation to estimate the atmospheric parameters of the star.
The classification pipeline described by \citet{bayliss:2013:hats3} yielded parameters of \ensuremath{T_{\rm eff\star}}$ = 3500 \pm 300$\,K, \ensuremath{\log{g}}$ = 4.7 \pm 0.3$ (cgs), and \ensuremath{\rm [Fe/H]}$ = 0.0 \pm 0.5$\,dex; however, a comparison to M dwarf standards indicates a somewhat lower temperature (\reffigl{wifes}). Based on spectral matching to BT-Settl models \citep{allard:2011} we estimate a temperature of 3350\,K. The spectrum reveals this object to be a single-lined mid-M dwarf star with $\ensuremath{v \sin{i}} < 50$\,\ensuremath{\rm km\,s^{-1}}. We also obtained four spectra at a resolution of $R \approx 7000$ between 2014 August 6 and 9, which we used to check for any large-amplitude RV variations. The spectra have S/N values between 5.9 and 21.2. The resulting radial velocities have good phase coverage and an RMS scatter of 2.3\,\ensuremath{\rm km\,s^{-1}}, comparable to the median per-point uncertainty of 2.9\,\ensuremath{\rm km\,s^{-1}}. The resulting upper limit on the mass of the transiting companion is $\ensuremath{M_{p}} < 31$\,\ensuremath{M_{\rm J}}\ at $3\sigma$ confidence. A total of eight PFS observations were obtained for HATS-71{} between 2014 December 31 and 2017 January 13. These include seven observations through an I$_{2}$ absorption cell, and one observation without the cell, used to construct a template spectrum for use in the RV measurements. The observations were reduced to high-precision relative RV measurements following \citet{butler:1996}, while spectral line bisector spans (BSs) and their uncertainties were measured as described by \citet{jordan:2014:hats4} and \citet{brahm:2017:ceres}. To avoid excessive cosmic ray contamination and smearing due to the time-varying barycentric velocity correction, each observation was composed of two to four exposures which were independently reduced and then co-added.
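Mass limits like the one quoted above follow from the standard RV semi-amplitude relation $K = (2\pi G/P)^{1/3}\,M_p \sin i\,(M_\star+M_p)^{-2/3}(1-e^2)^{-1/2}$. The sketch below inverts it in the $M_p \ll M_\star$ limit; the input numbers (a $K$ at the level of the observed scatter, a $\sim$0.5\,\ensuremath{M_\sun}\ host, an assumed few-day orbital period) are illustrative round values, not the actual $3\sigma$ limit calculation:

```python
import numpy as np

# Physical constants and unit conversions (SI)
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
MSUN = 1.989e30      # solar mass [kg]
MJUP = 1.898e27      # Jupiter mass [kg]
DAY_S = 86400.0

def companion_mass_mjup(k_ms, period_days, mstar_msun, ecc=0.0):
    """Minimum companion mass (Mp sin i) implied by an RV semi-amplitude.

    Uses the Mp << Mstar limit of the semi-amplitude relation:
    Mp sin i = K sqrt(1-e^2) Mstar^(2/3) / (2 pi G / P)^(1/3).
    """
    p_s = period_days * DAY_S
    mp_kg = (k_ms * np.sqrt(1.0 - ecc**2)
             * (mstar_msun * MSUN) ** (2.0 / 3.0)
             / (2.0 * np.pi * G / p_s) ** (1.0 / 3.0))
    return mp_kg / MJUP

# Illustrative: K = 2.3 km/s, P = 3.8 d, 0.5 Msun host
print(companion_mass_mjup(2300.0, 3.8, 0.5))
```

A km/s-level scatter thus corresponds to a companion of order ten Jupiter masses, consistent in magnitude with the quoted brown-dwarf-regime upper limit once the confidence level is folded in.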
The high-precision RV and BS measurements are given in \reftabl{rvs}, and are shown phase-folded, together with the best-fit model, in \reffigl{rvbis}. Due to the faintness of the source, the RVs have a median per-point uncertainty of 17\,\ensuremath{\rm m\,s^{-1}}, which may be underestimated. The residuals from the best-fit model have an RMS of 89\,\ensuremath{\rm m\,s^{-1}}\ (the observations themselves have an RMS of 106\,\ensuremath{\rm m\,s^{-1}}). The BS measurements have an even larger scatter of 1.6\,\ensuremath{\rm km\,s^{-1}}, limiting their use in excluding blended eclipsing binary scenarios (such scenarios are considered and rejected in Section~\ref{sec:blend}). We checked the PFS observations for H$\alpha$ emission, indicative of chromospheric activity, and found no evidence for this. If anything, H$\alpha$ is seen in absorption in these spectra. The surface temperature of HATS-71{} is too low to apply ZASPE \citep{brahm:2017:zaspe}, a synthetic-template-cross-correlation-based method to determine precise stellar atmospheric parameters, which we have used in analyzing most of the other planetary hosts discovered by HATSouth. For this reason we obtained a near-infrared spectrum of HATS-71{} using the ``Astronomy Research using the Cornell Infra Red Imaging Spectrograph'' (ARCoIRIS) instrument on the Blanco~4\,m at CTIO \citep{arcoiris:2016}. This spectrum was used to determine \ensuremath{T_{\rm eff\star}}\ and \ensuremath{\rm [Fe/H]}. ARCoIRIS is a cross-dispersed, single-object, long-slit, near-infrared spectrograph covering most of the wavelength range from 0.8 to 2.47 \ensuremath{\mu {\rm m} }, at a resolution of roughly 3500. ARCoIRIS spectra can only be taken in a single setup with a fixed slit assembly of 1\farcs1 $\times$ 28\arcsec. We observed HATS-71{} using a pair of ABBA patterns (eight 100\,s exposures in total) interleaved with hollow cathode lamp spectra, and using HD~1860 as a telluric standard.
The observations were carried out on UT 2016 July 15, and were reduced to wavelength- and telluric-corrected spectra using the standard SPEX-tool package \citep{cushing:2004, vacca:2004}. We note that we did not attempt to flux calibrate our spectrum, as the observing conditions were not photometric. The data reduction resulted in six extracted orders, though we did not consider the sixth order in our analysis. Finally, we cut out regions strongly affected by telluric lines, normalized the spectra, and removed a second-order polynomial fit. In order to estimate $\ensuremath{T_{\rm eff\star}}$\ and \ensuremath{\rm [Fe/H]}\ from our NIR spectrum, we used the procedure described by \citet{newton:2015}. These relations were calibrated using IRTF/SpeX spectra with a resolution of R$\sim$2,000, whereas ARCoIRIS has a resolution of R$\sim$3,500; we therefore degraded our ARCoIRIS spectra to the IRTF/SpeX resolution. In these degraded spectra we measured the equivalent widths (EWs) of selected lines and applied the relations from \citet{newton:2015}. Based on this we measure $\ensuremath{T_{\rm eff\star}} = \ifthenelse{\equal{\hatcurSMEversion}{i}}{\hatcurSMEiteff{}}{\hatcurSMEiiteff{}}$\,K, and \ensuremath{\rm [Fe/H]}$= \ifthenelse{\equal{\hatcurSMEversion}{i}}{\hatcurSMEizfeh{}}{\hatcurSMEiizfeh{}}$. \ifthenelse{\boolean{emulateapj}}{ \begin{figure} [ht] }{ \begin{figure}[ht] } \plotone{HATS755-002-rv.pdf} \caption{ Phased high-precision RV measurements from PFS for \hbox{HATS-71{}{}}. {\em Top:} the phased measurements together with our best-fit model (see \reftabl{planetparam}). Zero-phase corresponds to the time of mid-transit. The center-of-mass velocity has been subtracted. {\em Middle:} the velocity $O\!-\!C$ residuals from the best fit. The error bars include the jitter term listed in \reftabl{planetparam} added in quadrature to the formal errors. {\em Bottom:} the phased bisector spans (BS). Note the different vertical scales of the panels.
} \label{fig:rvbis} \ifthenelse{\boolean{emulateapj}}{ \end{figure} }{ \end{figure} } \subsection{Ground-Based Photometric Follow-up Observations} \label{sec:phot} Follow-up higher-precision ground-based photometric transit observations were obtained for HATS-71{} using the Danish 1.54\,m telescope at La Silla Observatory in Chile \citep{andersen:1995}, 1\,m telescopes from the Las Cumbres Observatory (LCOGT) network \citep{brown:2013:lcogt}, a 0.32\,m telescope at Hazelwood Observatory in Victoria, Australia, and a 0.36\,m telescope at El Sauce Observatory in Chile. Three of the light curves were obtained through the {\em TESS} Follow-up Program (TFOP) following the independent detection of HATS-71{} as a candidate transiting planet system by the {\em TESS} team (see Section~\ref{sec:spacephot}). All of the ground-based follow-up light curves are shown in \reffigl{lc}, while the data are available in \reftabl{phfu}. An egress event was observed with the DFOSC camera on the DK~1.54\,m telescope on the night of UT 2014 Oct 5. A total of 51 images were collected at a median cadence of 225\,s. The observations were carried out and reduced to a relative light curve following \citet{rabus:2016:hats11hats12}. The residuals from the best-fit transit model have a point-to-point RMS of 2.4\,mmag. An ingress event was observed with the SBIG camera on one of the LCOGT~1\,m telescopes at the South African Astronomical Observatory (SAAO) on UT 2014 Oct 24. A total of 39 images were collected at a median cadence of 76\,s. We also observed a full transit with the Sinistro camera on one of the LCOGT~1\,m telescopes at Cerro Tololo Inter-American Observatory (CTIO) in Chile on UT 2014 Nov 9. A total of 56 images were collected at a median cadence of 227\,s. These observations were reduced to relative light curves as described in \citet{hartman:2015:hats6}.
A full transit was also observed through the TFOP program using the Sinistro camera on one of the LCOGT~1\,m telescopes at CTIO on UT 2018 Sep 17. A total of 44 images were collected at a median cadence of 163\,s. These data were reduced with aperture photometry using the AstroImageJ software package \citep[AIJ;][]{collins:2013,collins:2017}. The residuals from the best-fit transit model for the three LCOGT light curves have point-to-point RMS values of 15\,mmag, 3.4\,mmag, and 4.6\,mmag, respectively. An egress event was observed on UT 2018 Sep 13 at Hazelwood Observatory, a backyard observatory operated by Chris Stockdale in Victoria, Australia. The observations were carried out using a 0.32\,m Planewave CDK12 telescope and an SBIG STT-3200 CCD imager. The images had a pixel scale of $1\farcs1$, while the average estimated PSF FWHM on the night of the observations was $9\arcsec$. We include in the analysis the photometry measured from 28 images collected at a median cadence of 314\,s. Aperture photometry was performed using AIJ\@. The residuals from the best-fit transit model have a point-to-point RMS of 15\,mmag. A full transit was observed on UT 2018 Sep 17 at El Sauce Observatory in Chile by Phil Evans using a 0.36\,m Planewave CDK14 telescope and an SBIG STT1603-3 CCD imager. These images had a pixel scale of $1\farcs47$, while the average estimated PSF FWHM on the night of the observations was $8.2\arcsec$. A total of 90 images are included in the analysis. The median cadence was 185\,s. Aperture photometry was performed using AIJ\@. The residuals from the best-fit transit model have a point-to-point RMS of 11\,mmag. \begin{figure*}[!ht] \plotone{HATS755-002-lc.pdf} \caption{ Unbinned, de-trended, ground-based, follow-up transit light curves{} for HATS-71{}. The dates of the events, filters and instruments used are indicated. Light curves following the first are displaced vertically for clarity.
Our best fit from the global modeling described in \refsecl{globmod} is shown by the solid lines. The residuals from the best-fit model are shown on the right-hand side in the same order as the original light curves. The error bars represent the photon and background shot noise, plus the readout noise. } \label{fig:lc} \end{figure*} \subsection{Space-Based Photometric Follow-up Observations} \label{sec:spacephot} Photometric time-series observations of HATS-71{} were carried out by the NASA {\em TESS} mission between 2018 July 25 and 2018 August 22 (Sector 1 of the mission). The target (TIC~234523599{}) was selected for observations at 2-minute cadence through the {\em TESS} Guest Observer program\footnote{Program G011214, PI Bakos, ``TESS Observations Of Transiting Planet Candidates From HAT''}. The data were processed, and the source was identified as a candidate transiting planet system (denoted~TOI~127.01{}) by the {\em TESS} team following the methods described by \citet{huang:2018}. We note that the identification of this object as a candidate by the {\em TESS} team was made independently of the observations described in the previous sections. Here we make use of the preliminary de-trended light curve for HATS-71{} produced by the {\em TESS} Science Processing Operations Center pipeline \citep[based on][]{jenkins:2016} which was included in the set of {\em TESS} alerts released to the public on 2018 September 5. Note that these Presearch Data Conditioning (PDC) light curves have not been arbitrarily detrended, but rather have had instrumental systematic signatures identified and removed using a multi-scale, Maximum A Posteriori (msMAP) approach \citep{stumpe:2014,smith:2012}. A total of 8 consecutive primary transits and 6 epochs of secondary eclipse are included in the light curve. The residuals from the best-fit model have a point-to-point RMS of 16.5\,mmag.
The light curve is shown, together with the best-fit model, in \reffigl{tess}, while the time-series data are included in \reftabl{phfu}. We searched for additional periodic signals in the {\em TESS} light curve in the same manner as we did for the HATSouth data (Section~\ref{sec:detection}). No significant signals were found with either GLS or BLS in the {\em TESS} light curve after subtracting the best-fit transit model for HATS-71b{}. No evidence for the \hatcurrotper\,day photometric rotation period seen with HATSouth is observed in the {\em TESS} data, though this is hardly surprising as this period exceeds the duration of the {\em TESS} observations, and a long-term linear or quadratic trend could have been filtered out by the PDC pipeline. The highest peak in the BLS spectrum of the {\em TESS} residuals has a period of $9.06$\,days, a depth of $3.4$\,mmag and a signal-to-pink-noise ratio of only $5.4$. \ifthenelse{\boolean{emulateapj}}{ \begin{figure*}[!ht] }{ \begin{figure}[!ht] } \plotone{HATS755-002-TESS.pdf} \caption{ {\em TESS} unbinned light curve for HATS-71{}. We show the full un-phased light curve as a function of time ({\em top}), the full phase-folded light curve ({\em middle left}), the phase-folded light curve zoomed-in on the primary transit ({\em middle right}), the phase-folded light curve zoomed-in on the secondary eclipse ({\em bottom left}), and the residuals from the best-fit model, phase-folded and zoomed-in on the primary transit ({\em bottom right}). The solid line in each panel shows the model fit to the light curve. The dark filled circles show the light curve binned in phase with a bin size of 0.002. \label{fig:tess}} \ifthenelse{\boolean{emulateapj}}{ \end{figure*} }{ \end{figure} } \ifthenelse{\boolean{emulateapj}}{ \begin{deluxetable*}{lrrrrl} }{ \begin{deluxetable}{lrrrrl} } \tablewidth{0pc} \tablecaption{ Light curve data for HATS-71\label{tab:phfu}. 
} \tablehead{ \colhead{BJD\tablenotemark{a}} & \colhead{Mag\tablenotemark{b}} & \colhead{\ensuremath{\sigma_{\rm Mag}}} & \colhead{Mag(orig)\tablenotemark{c}} & \colhead{Filter} & \colhead{Instrument} \\ \colhead{\hbox{~~~~(2,400,000$+$)~~~~}} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} } \startdata \input{phfu_tab_short.tex} \enddata \tablenotetext{a}{ Barycentric Julian Date computed on the TDB system with correction for leap seconds. } \tablenotetext{b}{ The out-of-transit level has been subtracted. For observations made with the HATSouth instruments (identified by ``HS'' in the ``Instrument'' column) these magnitudes have been corrected for trends using the EPD and TFA procedures applied {\em prior} to fitting the transit model. This procedure may lead to an artificial dilution in the transit depths when used in its plain mode, instead of the signal reconstruction mode \citep{kovacs:2005:TFA}. The blend factors for the HATSouth light curves are listed in Table~\ref{tab:planetparam}. For observations made with follow-up instruments (anything other than ``HS'' in the ``Instrument'' column), the magnitudes have been corrected for a quadratic trend in time, and for variations correlated with up to three PSF shape parameters, fit simultaneously with the transit. } \tablenotetext{c}{ Raw magnitude values without correction for the quadratic trend in time, or for trends correlated with the seeing. These are only reported for the follow-up observations. } \tablecomments{ This table is available in a machine-readable form in the online journal. A portion is shown here for guidance regarding its form and content. 
} \ifthenelse{\boolean{emulateapj}}{ \end{deluxetable*} }{ \end{deluxetable} } \subsection{Search for Resolved Stellar Companions} \label{sec:luckyimaging} In order to detect neighboring stellar companions we obtained $z^{\prime}$-band high-spatial-resolution lucky imaging observations with the Astralux Sur imager \citep{hippler:2009} on the New Technology Telescope (NTT) on the night of 2015 December 23. The observations were reduced as in \citet{espinoza:2016:hats25hats30}, and no neighbors were detected. The effective FWHM of the reduced image is $46.3 \pm 5.5$\,mas. Figure~\ref{fig:hatsastralux} shows the resulting $5\sigma$ contrast curve. We may exclude neighbors with $\Delta z^{\prime} < 2.5$\,mag at $0\farcs2$, and $\Delta z^{\prime} < 3.2$\,mag at 1\arcsec. We also note that there are no neighbors within 10\arcsec\ of HATS-71{} in the Gaia~DR2 catalog, based on which we rule out neighbors with $G \la 20$\,mag down to a limiting resolution of $\sim 1\arcsec$ \citep[e.g.,][]{ziegler:2018}. \ifthenelse{\boolean{emulateapj}}{ \begin{figure*}[!ht] }{ \begin{figure}[!ht] } \plottwo{HATS-71.pdf}{contrast_curve_HATS-71.pdf} \caption{ {\em Left:} Astralux Sur $z^{\prime}$ image of HATS-71{} showing no apparent neighbors. {\em Right:} $5\sigma$ contrast curve for HATS-71{} based on our Astralux Sur $z^{\prime}$ observation. The gray band shows the variation in the limit in azimuth at a given radius. \label{fig:hatsastralux}} \ifthenelse{\boolean{emulateapj}}{ \end{figure*} }{ \end{figure} } \section{Analysis} \label{sec:analysis} \subsection{Joint Modeling of Observations} \label{sec:globmod} \ifthenelse{\boolean{emulateapj}}{ \begin{figure}[!ht] }{ \begin{figure}[!ht] } \plotone{HATS755-002-iso-showgaia-bprp-gabs.pdf} \caption{ Hertzsprung-Russell diagram constructed from the Gaia DR2 photometry corrected for distance and extinction. 
The blue-filled circle shows HATS-71{} (the uncertainties are smaller than the size of the circle), while the gray-filled circles show other stars in Gaia~DR2 with $\varpi > 7$\,mas and within a $10^{\circ}\times10^{\circ}$ box centered on HATS-71{}. Overplotted are PARSEC model isochrones for metallicities of $-0.5$ (left set of cyan lines), $0.0$ (middle set of cyan lines), and $+0.4414$ (right set of cyan lines), together with the spectroscopically estimated metallicity of 0.26\,dex (black lines). At each metallicity we show models for ages of 1.0, 5.0, and 12.0\,Gyr, though the difference with age at fixed metallicity is negligible at the scale shown here. We also show the median main-sequence relation based on the Gaia~DR2 stars included in the plot (left red line) and the sequence shifted upward in magnitude by $0.753$\,mag (right red line; this corresponds to equal-mass binary stars with both components falling on the median main sequence). HATS-71{} lies near the upper red line, and above the $+0.4414$\,dex isochrones, hinting that it may be an unresolved binary system. } \label{fig:iso} \ifthenelse{\boolean{emulateapj}}{ \end{figure} }{ \end{figure} } We analyzed the photometric and spectroscopic observations of HATS-71{} following \citet{hartman:2018:hats6069}. In this case we make use of the empirical method for determining the masses and radii of the host stars described in that paper, which is similar to the method proposed by \citet{stassun:2018}. The method jointly fits all of the light curves, the RV observations, the Gaia DR2 parallax, the Gaia DR2 and 2MASS broad-band photometry, and the spectroscopically determined \ensuremath{T_{\rm eff\star}}\ and \ensuremath{\rm [Fe/H]}\ (here we use the values determined from the ARCoIRIS observations, Section~\ref{sec:obsspec}).
We adopt a Keplerian orbit to model the RV observations and \citet{mandel:2002} light curve models in fitting the light curves, and assume fixed quadratic limb darkening coefficients taken from \citet{claret:2004} for $\ensuremath{T_{\rm eff\star}} = 3500$\,K and $\ensuremath{\log{g}} = 4.5$ (for the {\em TESS} light curve we adopt the $I$-band coefficients). We used a Differential Evolution Markov Chain Monte Carlo (DEMCMC) procedure to explore the fitness landscape and to determine the posterior distribution of the parameters. This modeling allows us to directly determine the radius of the star (making use of bolometric corrections determined from the PARSEC stellar evolution models, \citealp{marigo:2017}; and using the MWDUST model of \citealp{bovy:2016} to place a prior on the extinction). Combining this with the density determined from the transits then allows us to directly measure the mass of the star as well. In \citet{hartman:2018:hats6069} we found that this empirical method, when applied to the planetary systems HATS-60 through HATS-69, failed to provide reasonably tight constraints on the stellar masses. In the case of HATS-71{}, however, the observational constraints on the stellar density are more stringent, allowing a significantly tighter constraint on the stellar mass. In carrying out the analysis we assumed a circular orbit. Note that if the orbit is eccentric, the stellar density inferred from the light curve would be systematically different from what we measured here, which would in turn affect the stellar mass measurement and the inferred planetary mass limits. A solution can be found, for example, with $e = 0.413$, which passes through the RV observations and is consistent with the host star having a mass and radius of $0.46$\,\ensuremath{M_\sun}\ and $0.45$\,\ensuremath{R_\sun}, respectively, and the planet having a mass and radius of $1.68$\,\ensuremath{M_{\rm J}}\ and $0.94$\,\ensuremath{R_{\rm J}}, respectively.
The limited number of RV observations gathered, however, prevents us from placing a meaningful constraint on the eccentricity from the data. Additional RV measurements are required, but are expensive due to the faintness of the host star. In fitting the DK~1.54\,m follow-up light curve we included the light curves for 10 neighboring stars as TFA templates to account for systematic drifts in the photometry shared by some of the comparison stars that were not well modeled by a simple function of time. For the other ground-based follow-up light curves, where systematic variations were less pronounced, we included only a quadratic function of time to account for trends. We also attempted to model the observations using the stellar isochrone-based analysis method described by \citet{hartman:2018:hats6069}. We found, however, that the PARSEC theoretical models do not reproduce the high-precision measurements of color, density and absolute magnitude that are available for HATS-71{}. In \reffigl{iso} we show the HR diagram using the extinction- and distance-corrected Gaia~DR2 BP$_{0}-$RP$_{0}$ and G$_{\rm abs}$ measurements. Here we show the measurements for HATS-71{} as well as for all stars in the Gaia~DR2 catalog in a $10^{\circ}\times10^{\circ}$ box centered on HATS-71{} with parallax $\varpi > 7$\,mas, $\sigma_{\varpi} < 0.2$\,mas, and BP, RP, and G all measured to greater than 10$\sigma$ confidence, and with $1.5 < $BP$_{0}-$RP$_{0} < 3.5$ and $7.0 < $G$_{\rm abs} < 12.0$. We also show theoretical PARSEC isochrones for a range of ages and metallicities, the median main-sequence relation based on the selected stars from the Gaia~DR2 sample, and the median main sequence shifted upward in magnitude by $0.753$\,mag (corresponding to equal-mass binary stars with both components falling on the median main sequence). As is apparent, HATS-71{} falls above the highest-metallicity theoretical relation calculated, and near the equal-mass binary sequence.
This provides suggestive evidence that HATS-71{} may be an unresolved binary star system, though we caution that there is no other spectroscopic or imaging evidence for such a companion. We consider how the inferred planetary and stellar parameters would change if there is an unresolved stellar companion in \refsecl{blend}. Previous work has shown that rapidly rotating, magnetically active M dwarfs often have cooler surface temperatures and larger radii than predicted by theoretical stellar evolution models (e.g., see the recent work by \citealp{jaehnig:2018} and \citealp{somers:2017} investigating the inflation of M dwarfs in the Hyades and Pleiades; see also references therein for a rich literature on this topic). HATS-71{}, however, does not exhibit H$\alpha$ emission typical of magnetically active M dwarfs, and its measured photometric rotation period of \hatcurrotper\,days (Section~\ref{sec:detection}) is substantially longer than the periods of M dwarf stars for which radius inflation is typically observed ($P_{\rm rot} \la 10$\,days). The measured astrometric, spectroscopic and photometric parameters of HATS-71{} are collected in \reftabl{stellarobserved}. \reftabl{stellarderived} gives the stellar parameters that are derived through the modeling discussed in this section, while \reftabl{planetparam} gives the planetary parameters derived through this modeling. The parameters listed under the ``Single Star'' columns in each table are those derived here under the assumption that HATS-71{} is a single star without a stellar binary companion. We find that, thanks to Gaia~DR2, the star HATS-71{} has a tightly constrained radius of \hatcurISOrlong\,\ensuremath{R_\sun}. This, combined with the measured bulk stellar density (from the transits) of \hatcurLCrho{}\,\ensuremath{\rm g\,cm^{-3}}, gives a stellar mass of \hatcurISOmlong\,\ensuremath{M_\sun}.
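Explicitly, the mean stellar density follows from the transit observables through Kepler's third law (neglecting the planetary mass), and the stellar mass then follows from the density and the Gaia-constrained radius:
\begin{equation}
\rho_{\star} \simeq \frac{3\pi}{G P^{2}} \left(\frac{a}{\ensuremath{R_\star}}\right)^{3}, \qquad M_{\star} = \frac{4\pi}{3}\, \rho_{\star} \ensuremath{R_\star}^{3}.
\end{equation}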
For comparison, using the \citet{delfosse:2000} mass--M$_{K}$ relation gives an estimated stellar mass of $0.455$\,\ensuremath{M_\sun}, while using the \citet{benedict:2016} mass--M$_{K}$ relation gives an estimated stellar mass of $0.50$\,\ensuremath{M_\sun}, consistent with the value coming from Gaia~DR2 and the mean density estimate. We find that the planet HATS-71b{} has a radius of \hatcurPPrlong\,\ensuremath{R_{\rm J}}. Due to the faintness of the source we are unable to determine the mass of the planet with greater than $2\sigma$ confidence. Our modeling yields a mass of \hatcurPPmlong\,\ensuremath{M_{\rm J}}, with a 95\% confidence upper limit of $\ensuremath{M_{p}} < 0.81$\,\ensuremath{M_{\rm J}}. The planet has an estimated equilibrium temperature (assuming full redistribution of heat and zero albedo) of \hatcurPPteff\,K. The 89\,\ensuremath{\rm m\,s^{-1}}\ scatter in the PFS RV residuals is significantly larger than the median per-point uncertainty of 17\,\ensuremath{\rm m\,s^{-1}}. Given the limited number of RVs obtained we cannot say whether this is due to the planet having a significant eccentricity, stellar activity, additional planets in the system, or our underestimating the uncertainties in these low S/N spectra. In modeling the data we incorporated a jitter term, which we added in quadrature to the formal uncertainties, and varied in the fit. We find a jitter of \hatcurRVjitter\,\ensuremath{\rm m\,s^{-1}}\ is needed to explain the excess scatter. If the orbit is eccentric, the jitter could be as low as $37$\,\ensuremath{\rm m\,s^{-1}}. \subsection{Blend Analysis} \label{sec:blend} In order to rule out the possibility that HATS-71{} is a blended stellar eclipsing binary system, we carried out a blend analysis of the photometric data following \citet{hartman:2018:hats6069}. 
In this analysis we model the photometric and spectroscopic observations of HATS-71{} under four different scenarios: a single star with a planet (referred to as the H-p model following the nomenclature from \citealp{hartman:2009:hat12}), a hierarchical triple star system where the two fainter stars form an eclipsing binary (referred to as the H,S-s model), a blend between a bright foreground star and a fainter background eclipsing binary star system (referred to as the H,S-s$_{\rm BGEB}$ model), and a bright star with a transiting planet and a fainter unresolved stellar companion (referred to as the H-p,s model). We find that the best-fitting model is the H-p,s model, which yields $\Delta \chi^2 = -345$, $-278$ and $-657$ compared to the best-fit H-p, H,S-s$_{\rm BGEB}$ and H,S-s models, respectively. The H,S-s model is strongly disfavored; however, the H,S-s$_{\rm BGEB}$ model provides a better fit to the data modeled in this analysis than the H-p model. As noted in Section~\ref{sec:globmod}, the PARSEC models do not reproduce the combined high-precision measurements of color, density and absolute magnitude that are available for HATS-71\ assuming a single star, so it is perhaps not surprising that the H,S-s$_{\rm BGEB}$ model can provide a better fit than the H-p model. The best-fit H,S-s$_{\rm BGEB}$ model consists of a 0.42\,\ensuremath{M_\sun}\ foreground star blended with a $0.44+0.12$\,\ensuremath{M_\sun}\ eclipsing binary at a distance modulus that is 0.65\,mag greater than that of the foreground star, and we find that the primary star in the background binary can be at most only 1\,mag fainter in apparent brightness than the foreground star. Based on the Astralux Sur imaging (Section~\ref{sec:luckyimaging}) the projected separation between the foreground star and the background binary would have to be $\lesssim 0\farcs05$.
This H,S-s$_{\rm BGEB}$ model still fails to fit the observations to within the uncertainties, yielding, for example, a predicted parallax of 6.93\,mas for the foreground star which differs from the measured value of \hatcurCCparallax\,mas by $4\sigma$. What is more, we find that all of the H,S-s$_{\rm BGEB}$ blend models which fit the observations as well as or better than the H-p model (i.e., have $\Delta \chi^2 < 25$ compared to the H-p model) predict a significantly larger RV variation measured from the composite spectrum than observed (with RMS ranging from 660\,\ensuremath{\rm m\,s^{-1}}\ to 1.2\,\ensuremath{\rm km\,s^{-1}}). Based on these factors we consider both the H,S-s and H,S-s$_{\rm BGEB}$ models excluded, and conclude that HATS-71{} is a confirmed transiting planet system. Because the H-p,s model provides a significantly better fit to the data than the H-p model, we also list in \reftabl{stellarderived} and \reftabl{planetparam} the stellar parameters (for both the primary and secondary stars) and the planetary parameters for the H-p,s model derived from a DEMCMC analysis. Based on this modeling, we find that the planetary host star HATS-71{}A has a mass of $\hatcurISOmlonghpsmodel{}$\,\ensuremath{M_\sun}, and a radius of $\hatcurISOrlonghpsmodel{}$\,\ensuremath{R_\sun}, while the unresolved binary star HATS-71{}B has a mass of $\hatcurISOmlongBhpsmodel{}$\,\ensuremath{M_\sun}\ and a radius of $\hatcurISOrlongBhpsmodel{}$\,\ensuremath{R_\sun}. The planet has a radius of $\hatcurPPrlonghpsmodel{}$\,\ensuremath{R_{\rm J}}\ and a poorly determined mass of $\hatcurPPmlonghpsmodel{}$\,\ensuremath{M_{\rm J}}\ (95\% confidence upper limit of $\hatcurPPmtwosiglimhpsmodel{}$\,\ensuremath{M_{\rm J}}). 
We do not incorporate the RV observations directly into the modeling in this case, but instead determine an approximate scaling factor of $1.16 \pm 0.23$, which we apply to the value of $K$ as determined in \refsecl{globmod} for the single-star modeling, to account for the effective dilution in the measured orbital variation of the primary star due to the non-varying spectral features contributed by the secondary star. This scaling factor is calculated by simulating blended spectral cross-correlation functions in the same manner as done in ruling out the H,S-s$_{\rm BGEB}$ model, and we conservatively assume a 20\% uncertainty. We then re-calculate all parameters that depend on $K$ after applying this scaling. We also find that HATS-71{}B would have $\Delta G = 2.05$\,mag and $\Delta z = 1.77$\,mag compared to HATS-71{}A. Based on the Astralux Sur observations (Section~\ref{sec:luckyimaging}) the two stars would have to be separated by less than $0\farcs1$, implying a projected physical separation of less than 14\,AU. We also checked whether the proper motion of HATS-71 is large enough for the star to have moved between archival images, but it is too small to reveal anything by blinking the UK Schmidt image (1997) against the DK~1.54\,m telescope image (2014). The slight over-luminosity of HATS-71{} could also be explained if it is a very young M dwarf. However, a query with BANYAN Sigma \citep{gagne:2018} yields no matches, so the star is unlikely to be a member of a young association. \section{Discussion} \label{sec:discussion} The discovery of HATS-71b{} demonstrates that, at least in some cases, Jupiter-sized planets are able to form and migrate around stars with masses as low as that of HATS-71{} ($\hatcurISOmlong{}$\,\ensuremath{M_\sun}).
It remains to be seen whether such planets occur with the same frequency as they do around solar-type stars (i.e., $0.43\pm0.05$\%; \citealp{fressin:2013}), or if giant planet formation is rarer around low-mass stars, as predicted by core accretion theory \citep[e.g.][]{laughlin:2004,liu:2016}. Figure~\ref{fig:massmass} shows giant planet masses as a function of host star mass, for systems with measured planetary masses. HATS-71b{} is the giant planet with the lowest host star mass that has been discovered to date. The sparsity of systems with host masses $<$0.5\,\ensuremath{M_\sun}\ is apparent from Figure~\ref{fig:massmass}, although this may simply be a reflection of the fact that most of the surveys contributing to the discoveries shown did not monitor sufficient numbers of low-mass stars. Over the next two years of HATSouth and \textit{TESS} discoveries, we should gain a better statistical understanding of these systems. The deep transits that these systems present make photometric detection relatively robust in both the HATSouth and \textit{TESS} survey data. Indeed, the 4.7\% transit of HATS-71b\ makes this the deepest transit known for a hot Jupiter (as defined in the Introduction). In \reffig{perdepth} we show the transit depths of these planets as a function of period, where the depths were calculated from the planetary radius $\ensuremath{R_{p}}$, stellar radius $\ensuremath{R_\star}$, impact parameter $b$, eccentricity $e$ and argument of periastron $\omega$ of the orbit (whenever available), also taking into account the grazing nature of some orbits. The second and third deepest transits are those of Qatar-4b \citep[3.4\%;][]{alsubai:2017:qatar4b} and HATS-6b \citep[3.3\%;][]{hartman:2015:hats6}. \ifthenelse{\boolean{emulateapj}}{ \begin{figure}[!ht] }{ \begin{figure}[!ht] } \plotone{per_depth.pdf} \caption{ Transit depth as a function of orbital period for hot Jupiters.
The depth was calculated from the planetary radius $\ensuremath{R_{p}}$, stellar radius $\ensuremath{R_\star}$, impact parameter $b$, eccentricity $e$ and argument of periastron $\omega$ of the orbit (whenever available), also taking into account the grazing nature of some orbits. Data were taken from \url{exoplanet.eu}. } \label{fig:perdepth} \ifthenelse{\boolean{emulateapj}}{ \end{figure} }{ \end{figure} } However, radial velocity follow-up is extremely challenging, since such stars are generally faint at visible wavelengths where most high-precision spectrographs operate. The spectra of these stars may also be less amenable to measuring precise radial velocity variations, as they are dominated by broad molecular absorption features rather than the narrow metal lines of solar-type stars (see Figure~\ref{fig:wifes}). A new generation of stable IR spectrographs will measure precise radial velocities in order to search for planets orbiting M dwarfs; these include CARMENES \citep{carmenes:2014}, SPIROU \citep{spirou:2014}, IRD \citep{ird:2014}, HPF \citep{hpf:2018}, NIRPS \citep{nirps:2017} and GIARPS \citep{claudi:2018}. This may provide another avenue for radial velocity follow-up of transiting giant planets orbiting M dwarfs. However, we note that for mid-M dwarfs such as HATS-71{}, optical spectroscopy will probably remain the best source of high-precision radial velocities. For the CARMENES spectrograph, which hosts both an optical and an IR arm, it appears that the radial velocity precision remains higher at optical wavelengths until spectral type M8 or later \citep{reiners:2018}. The deep transits will facilitate atmospheric characterization of the planet using transmission spectroscopy. We estimate that the transmission signature could be anywhere from 300\,ppm to 700\,ppm, assuming the cloud properties of hot Jupiters around M stars are similar to those around F, G and K stars.
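This range corresponds to the usual scale-height estimate for the amplitude of transmission features, which for a clear, hydrogen-dominated atmosphere is of order
\begin{equation}
\Delta\delta \approx \frac{2\,\ensuremath{R_{p}}\, H}{\ensuremath{R_\star}^{2}}, \qquad H = \frac{k_{\rm B}\, T_{\rm eq}}{\mu\, g_{p}},
\end{equation}
where $H$ is the atmospheric pressure scale height, $\mu$ the mean molecular weight, and $g_{p}$ the planetary surface gravity; significant cloud coverage would suppress the signal toward the lower end of the quoted range.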
Atmospheric characterization might be used instead of radial velocities to obtain the mass of the planet via MassSpec \citep{wit:2013}, although note the ambiguities detailed in \citet{batalha:2017}. HATS-71{} was observed by the \textit{TESS} spacecraft with 2-minute cadence as a candidate through the \textit{TESS} Guest Observer program (G011214; PI Bakos). Due to the high-precision ground-based light curves that had already been obtained in 2014 using 1\,m-class telescopes (see Section~\ref{sec:phot}), the addition of the \textit{TESS} light curve did not have a significant impact on parameters such as the planetary radius or the orbital ephemerides. However, the \textit{TESS} light curve did contain the best photometry available at phase 0.5, which allowed us to rule out a secondary eclipse with much higher confidence. With many hundreds of transiting planet candidates, follow-up photometry that covers both the primary transit and any possible secondary eclipse is a time-consuming and resource-intensive task. The use of \textit{TESS} light curves to help confirm existing candidates is therefore an obvious synergy between HATSouth and \textit{TESS}, and this method will continue to be adopted for future \textit{TESS} sectors. \ifthenelse{\boolean{emulateapj}}{ \begin{figure}[!ht] }{ \begin{figure}[!ht] } \plotone{mass-mass-crop.pdf} \caption{ Planet mass as a function of host star mass for all known giant (\ensuremath{M_{p}}$>$0.3\,\ensuremath{M_{\rm J}}) planets with measured masses and radii (blue circles) and for HATS-71b{} (red square with error bars). Data from the NASA Exoplanet Archive as of 2018 October 4. } \label{fig:massmass} \ifthenelse{\boolean{emulateapj}}{ \end{figure} }{ \end{figure} } \acknowledgements Development of the HATSouth project was funded by NSF MRI grant NSF/AST-0723074, operations have been supported by NASA grants NNX09AB29G, NNX12AH91H, and NNX17AB61G, and follow-up observations have received partial support from grant NSF/AST-1108686.
G.\'A.B.\ wishes to thank Konkoly Observatory of the Hungarian Academy of Sciences for their warm hospitality during numerous visits during the past years, in particular the Distinguished Guest Fellow program. A.J.\ acknowledges support from FONDECYT project 1171208, BASAL CATA AFB-170002, and project IC120009 ``Millennium Institute of Astrophysics (MAS)'' of the Millennium Science Initiative, Chilean Ministry of Economy. N.E.\ is supported by CONICYT-PCHA/Doctorado Nacional. R.B.\ acknowledges support from FONDECYT Post-doctoral Fellowship Project No. 3180246. N.E.\ acknowledges support from project IC120009 ``Millennium Institute of Astrophysics (MAS)'' of the Millennium Science Initiative, Chilean Ministry of Economy. L.M.\ acknowledges support from the Italian Ministry of Instruction, University and Research (MIUR) through FFABR 2017 fund. L.M.\ acknowledges support from the University of Rome Tor Vergata through ``Mission: Sustainability 2016'' fund. V.S.\ acknowledges support from BASAL CATA PFB-06. A.V.~is supported by the NSF Graduate Research Fellowship, Grant No. DGE 1144152. Based on observations at Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory (NOAO Prop.~ID 2016A/CN-615, 2016B-CN0908, 2017A-C79, 2017B-0909, 2018A-CN46/908; PI: Rabus), which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. M.R.~acknowledges support from CONICYT project Basal AFB-170002. This paper also makes use of observations from the LCOGT network. Some of this time was awarded by NOAO. We acknowledge the use of the AAVSO Photometric All-Sky Survey (APASS), funded by the Robert Martin Ayers Sciences Fund, and the SIMBAD database, operated at CDS, Strasbourg, France.
This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. We acknowledge the use of TESS Alert data, which is currently in a beta test phase, from pipelines at the TESS Science Office and at the TESS Science Processing Operations Center. This paper includes data collected by the TESS mission, which are publicly available from the Mikulski Archive for Space Telescopes (MAST). Finally, G.\'A.B.~wishes to thank Princeton's AST205 class for all the inspiration they gave during the fall semester of 2018. \facilities{HATSouth, ATT (WiFeS), Magellan:Clay (PFS), Blanco (ARCoIRIS), Danish 1.54m Telescope (DFOSC), LCOGT, NTT (Astralux Sur), TESS, Gaia, Exoplanet Archive} \software{FITSH \citep{pal:2012}, BLS \citep{kovacs:2002:BLS}, VARTOOLS \citep{hartman:2016:vartools}, CERES \citep{brahm:2017:ceres}, AstroImageJ \citep{collins:2013,collins:2017}, SPEX-tool \citep{cushing:2004,vacca:2004}, SExtractor \citep{bertin:1996}, Astrometry.net \citep{lang:2010}, MWDUST \citep{bovy:2016}} \bibliographystyle{aasjournal}
\section{Introduction} The black hole information paradox lies in the fact that a pure state seems to evolve into a thermal state through Hawking radiation, thus violating the unitarity of quantum mechanics. This paradox can be partially resolved if there exist black hole microstates, i.e., pure states that cannot be distinguished from the underlying thermal state. This resolution, however, calls for a complete theory of quantum gravity, which is beyond reach at this moment. However, with the help of the anti-de Sitter/conformal field theory (AdS/CFT) correspondence \cite{Maldacena:1997re} one may glimpse the answer to this quantum gravity problem from the viewpoint of the dual CFT. Recently, it was proposed in \cite{Bao:2017guc} to characterize the distinguishability of the black hole microstates from the underlying thermal state by the Holevo information, which one may call in short the distinguishability of black hole microstates. The thermal state of the whole system is described by \begin{equation} \label{thermalstate} \rho = \sum_i p_i \rho_i, ~~ \rho_i = |i\rangle \langle i|, \end{equation} with the orthonormal microstates $|i\rangle$ satisfying $\langle i|i'\rangle = \delta_{ii'}$. Note that $0 \leq p_i \leq 1$, $\sum_i p_i = 1$. One would like to distinguish the microstates from the thermal state by performing measurements in a subsystem $A$, whose complement is denoted by $B$. The first step is to consider the relative entropy, comparing the reduced density matrix $\rho_{A,i}=\mathrm{tr}_B \rho_i$ of each of the microstates with the reduced density matrix $\rho_{A}=\mathrm{tr}_B \rho$ of the corresponding thermal state, i.e., \begin{equation} S(\rho_{A,i}\|\rho_A) = \mathrm{tr}(\rho_{A,i}\log \rho_{A,i}) - \mathrm{tr}(\rho_{A,i}\log \rho_{A}). \end{equation} This quantity is a well-defined divergence and characterizes the difference between the two reduced density matrices.
The average relative entropy gives the Holevo information \begin{equation} \chi_A = \sum_i p_i S(\rho_{A,i}\|\rho_A) = S_A - \sum_i p_i S_{A,i}, \end{equation} with entanglement entropies (EEs) $S_A = -\mathrm{tr} (\rho_A \log \rho_A)$, $S_{A,i} = -\mathrm{tr} (\rho_{A,i} \log \rho_{A,i})$. It is just the difference between the thermal state EE and the average EE of the microstates. The Holevo information $\chi_A$ is the upper bound of the mutual information between the ensemble of microstates and the outcome of any measurement performed inside $A$ that aims to identify the state, and thus it characterizes the accessible information. By construction \begin{equation} 0 \leq \chi_A \leq S_{\rm thermal}, \end{equation} with $S_{\rm thermal}$ being the thermal entropy of the whole system \begin{equation} S_{\rm thermal} = - \sum_i p_i \log p_i. \end{equation} When $\chi_A=0$, we have $\rho_{A,i} = \rho_{A}$, so that the microstates are totally indistinguishable by measurements inside $A$. On the other hand, when $\chi_A=S_{\rm thermal}$, we have $\rho_{A,i} \rho_{A,i'} = 0$ for arbitrary $i \neq i'$, and thus the microstates are completely distinguishable. To investigate the information loss paradox of black holes in Einstein gravity in the AdS$_3$ background, i.e., of the Ba\~nados-Teitelboim-Zanelli (BTZ) black hole \cite{Banados:1992wn}, we calculate the Holevo information in a two-dimensional (2D) CFT. When gravity is weakly coupled, the CFT has a large central charge \cite{Brown:1986nw} \begin{equation} c=\frac{3 R}{2 G_N}, \end{equation} with $G_N$ being the Newton constant and $R$ being the AdS radius. The $1/c$ corrections on the CFT side correspond to quantum corrections on the gravity side. We consider a 2D large $c$ CFT in a thermal state on a cylinder with spatial period $L$. For an interval $A$ with length $\ell$, we denote the Holevo information by $\chi(\ell)$. The Holevo information $\chi(\ell)$ is monotonically increasing with respect to $\ell$.
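The identity used above, $\sum_i p_i S(\rho_{A,i}\|\rho_A) = S_A - \sum_i p_i S_{A,i}$, holds for any ensemble and is easy to verify numerically. The following sketch checks it for a toy ensemble of random two-qubit pure states with hypothetical weights $p_i$; it is an illustration of the definitions, not a CFT computation.

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(0)

def reduce_to_A(psi):
    """Reduced density matrix on qubit A of a two-qubit pure state (length-4 vector)."""
    M = psi.reshape(2, 2)                      # indices (a, b)
    return M @ M.conj().T

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

def rel_entropy(rho, sigma):
    # S(rho || sigma) = tr(rho log rho) - tr(rho log sigma)
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

# toy "microstates": random two-qubit pure states with weights p_i
p = np.array([0.5, 0.3, 0.2])
states = []
for _ in p:
    v = rng.normal(size=4) + 1j * rng.normal(size=4)
    states.append(v / np.linalg.norm(v))

rhoA_i = [reduce_to_A(v) for v in states]
rhoA = sum(pi * r for pi, r in zip(p, rhoA_i))  # reduced "thermal" (ensemble) state

chi_rel = sum(pi * rel_entropy(r, rhoA) for pi, r in zip(p, rhoA_i))
chi_ee = entropy(rhoA) - sum(pi * entropy(r) for pi, r in zip(p, rhoA_i))
print(chi_rel, chi_ee)                         # the two expressions agree
```

Because $\chi_A$ is an average of relative entropies it is manifestly non-negative, and since $\sum_i p_i S_{A,i}\geq 0$ it is bounded above by $S_A$; both properties can be read off from the numerical output.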
It is easy to see that \begin{equation} \lim_{\ell\to0}\chi(\ell)=0, ~~ \lim_{\ell\to L}\chi(\ell)=S(L). \end{equation} By using the holographic entanglement entropy (HEE) \cite{Ryu:2006bv,Hubeny:2007xt}, it was recently found in \cite{Bao:2017guc} that the holographic Holevo information shows plateau behaviors around both $\ell\to0$ and $\ell\to L$. This indicates that the microstates are totally indistinguishable until the interval reaches a non-vanishing critical length, and are perfectly distinguishable after the interval reaches another critical length that is shorter than the length of the whole system. However, the HEE is only the classical gravity result, and it is expected that quantum corrections to the HEE \cite{Headrick:2010zt,Barrella:2013wja,Faulkner:2013ana} would resolve both plateaus of the holographic Holevo information. On the dual CFT side, these correspond to $1/c$ corrections. The problem has been addressed in \cite{Michel:2018yta} for the 2D CFT dual to the zero mass BTZ black hole. In this letter, we consider more general thermal states, including the canonical ensemble thermal state at both high and low temperatures, as well as the microcanonical ensemble thermal state. This is not only technically challenging, as it requires performing the thermal average over all eigenstates, i.e., both primaries and their descendants, but also conceptually interesting, as one can check whether the peculiar non-thermal/non-geometric descendant states found in \cite{Guo:2018fnv} are thermally averaged out so that the microstates remain almost ultra-locally indistinguishable. We find that the Holevo information is non-vanishing as long as the length of the interval is non-vanishing, which indicates that the black hole microstates are distinguishable from the thermal state as long as the measuring region is non-vanishing. We also find that the Holevo information is smaller than the thermal entropy as long as the interval is shorter than the whole system.
For computational convenience we take the interval $A$ to be short, i.e., $\ell / L \ll 1$, so that its complement $B$ has a length $L-\ell$ comparable to $L$. Then we have \begin{eqnarray} && S_A = S(\ell), ~~ S_{A,i} = S_i(\ell), ~~ \chi_A = \chi(\ell),\\ && S_B = S(L-\ell), ~~ S_{B,i} = S_i(L-\ell), ~~ \chi_B = \chi(L-\ell).\nonumber \end{eqnarray} Note that $S_{A,i}=S_{B,i}$. To get the short and long interval Holevo information $\chi_A$ and $\chi_B$, we need to calculate the short and long interval EEs of the thermal state, i.e., $S_A$, $S_B$, and the average of the short interval EEs of the microstates, i.e., $\sum_i p_i S_{A,i}$. For the short interval, as in \cite{Chen:2016lbu,Lin:2016dxa,He:2017vyf,He:2017txy}, we use the operator product expansion (OPE) of twist operators \cite{Calabrese:2004eu,Cardy:2007mb,Headrick:2010zt,Calabrese:2010he,Rajabpour:2011pt,Chen:2013kpa,Bianchini:2015uea} to calculate the short interval expansion of the EE. This method also applies to the long interval case \cite{Chen:2014ehg,Chen:2015kua,Chen:2017ahf}. \section{Canonical ensemble thermal state with high temperature} For a canonical ensemble thermal state we have \begin{equation} p_i = \frac{\mathrm{e}^{-\beta E_i}}{Z(\beta)}, ~~ Z(\beta) = \sum_i \mathrm{e}^{-\beta E_i}, \end{equation} with $\beta$ being the inverse temperature. We consider the high temperature limit $\beta/L\ll1$ and omit the terms suppressed by the exponential factor $\mathrm{e}^{-2\pi L/\beta}$. The thermal entropy is \begin{equation} S(L) = \frac{\pi c L}{3\beta}, \end{equation} which is just the entropy of a non-rotating BTZ black hole. Using the HEE \cite{Ryu:2006bv,Hubeny:2007xt}, one can get the holographic Holevo information \cite{Bao:2017guc} \begin{equation} \label{chiholo} \chi_{\rm{holo}}(\ell) = \left\{ \begin{array}{cl} 0 & \ell < \frac{\beta}{2\pi}\log2 \\ \frac{\pi c L}{3\beta} & \ell> L - \frac{\beta}{2\pi}\log2 \end{array} \right.\!\!\!.
\end{equation} The holographic Holevo information $\chi_{\rm{holo}}(\ell)$ with $\frac{\beta}{2\pi}\log2 < \ell < L-\frac{\beta}{2\pi}\log2$ is unknown. The result is plotted in Fig.~\ref{holoCFT}. There are plateaus at both $\ell<\frac{\beta}{2\pi}\log2$ and $\ell>L-\frac{\beta}{2\pi}\log2$. We will resolve the plateaus in the CFT. \begin{figure*}[htpb] \centering \includegraphics[height=0.22\textwidth]{holoCFT.pdf} \caption{The holographic Holevo information $\chi_{\rm{holo}}$ (\ref{chiholo}), the short and long interval expansions of the CFT Holevo information $\chi_{\rm{CFT}}$ (\ref{chiA}) and (\ref{chiB}), i.e., (S14) and (S18) in the supplemental material, and the leading order $c$ Holevo information $\chi_{\rm{BO}}$ (\ref{chiBO}), for the high temperature thermal state with $\beta/L=0.1$ (Left), $\beta/L=0.2$ (Middle), and $\beta/L=0.5$ (Right), respectively. The unknown region of the holographic Holevo information $\chi_{\rm{holo}}$ is left blank. To draw the figures we have set $c=30$.}\label{holoCFT} \end{figure*} We consider only contributions from the vacuum conformal family, and will briefly discuss the contributions from non-vacuum conformal families at the end of the letter. For the short interval $A$ we have the EE \cite{Calabrese:2004eu} \begin{equation} \label{SAexact} S_A = \frac{c}{3}\log \Big( \frac{\beta}{\pi\epsilon}\sinh\frac{\pi\ell}{\beta} \Big).
\end{equation} Though we do not calculate $S_{A,i}$ for all the pure states, using the results in \cite{He:2017txy,Guo:2018pvi} we can get the average EE \begin{eqnarray} \label{SipiSi} && \sum_i p_i S_{A,i} = \frac{c}{3}\log\frac{\ell}{\epsilon} +\frac{\pi^2 c \ell^2}{18 \beta^2} -\frac{\pi^3\ell^4 (\pi c L +24 \beta)}{540 \beta^4 L}\\ && ~~ ~~ ~~ +\frac{\pi^4 \ell^6(\pi^2 c^2 L^2+72 \pi c \beta L+864 \beta^2)}{8505 c \beta^6 L^2} + \cdots + O(\ell^{12}).\nonumber \end{eqnarray} We have omitted some involved terms denoted by $\cdots$, and one can find the full form of the equation in (S13) of the supplemental material. There are technical issues in calculating the result to higher orders of $\ell$. See details in the supplemental material. Combining them, we obtain the short interval Holevo information \begin{equation}\label{chiA} \chi_A = \frac{2 \pi^3 \ell^4}{45 \beta^3 L} - \frac{8\pi^4\ell^6 (\pi c L + 12 \beta ) }{945 c \beta^5 L^2} + \cdots + O(\ell^{12}). \end{equation} See the full form of the equation in (S14) of the supplemental material. We find that to the order we consider it is vanishing in the thermodynamic limit \cite{Lashkari:2016vgj,Dymarsky:2016aqv}, i.e., the limit $L \rightarrow \infty$ with $\beta,\ell$ fixed. For the long interval $B$ we have the EE \cite{Chen:2017ahf} \begin{equation} S_B = \frac{c}{3} \log \Big( \frac{\beta}{\pi\epsilon} \sinh\frac{\pi\ell}{\beta} \Big) + \frac{\pi c L}{3\beta} - I(1-\mathrm{e}^{-\frac{2\pi\ell}{\beta}}). \end{equation} The function $I(x)$ is the mutual information of two intervals on a complex plane with cross ratio $x$. The small $x$ expansion of $I(x)$ was calculated to order $x^{8}$ in \cite{Barrella:2013wja,Chen:2013dxa} and to order $x^{10}$ in \cite{Beccaria:2014lqa,Li:2016pwu}. Note that nothing but tediousness prevents one from calculating the mutual information to even higher orders of $\ell$.
Combining with the fact $S_{B,i}=S_{A,i}$, we obtain the long interval Holevo information \begin{eqnarray} \label{chiB} && \chi_B = \frac{\pi c L}{3 \beta } -\frac{2 \pi^3 ( 4 \pi L - 7 \beta )\ell^4}{315 \beta^4 L} +\frac{32 \pi^5 \ell^5}{3465 \beta^5} \\ && \phantom{\chi_B =} +\frac{8 \pi^4 ( 32 \pi^2 L^2-143 \pi \beta L )\ell^6}{135135 \beta^6L^2} + \cdots +O(\ell^{11},1/c). \nonumber \end{eqnarray} One can find the full form of the equation in (S18) of the supplemental material. Note that $S(L)-\chi_B$ is non-vanishing in the thermodynamic limit. We denote the results (\ref{chiA}) and (\ref{chiB}) as the CFT Holevo information $\chi_{\rm{CFT}}(\ell)$ and $\chi_{\rm{CFT}}(L-\ell)$, respectively. Note that they are only valid for $\ell\ll\beta\ll L$. They are consistent with the holographic Holevo information $\chi_{\rm{holo}}$ (\ref{chiholo}) at the leading order of large $c$, while at the sub-leading orders we see the corrections. We plot them in Fig.~\ref{holoCFT}. We see that with $1/c$ corrections both the short and long interval plateaus are resolved. The leading order in $c$ of (\ref{SipiSi}) is consistent with the result \begin{equation} \label{SipiSic} \sum_i p_i S_{A,i} = \frac{c}{3}\log\Big( \frac{\beta}{\pi\epsilon}\sinh\frac{\pi\ell}{\beta} \Big) + O(c^0), \end{equation} which was obtained in \cite{Bao:2017guc} by assuming that the contributions from the primary excited states dominate the average. In fact, from the result in \cite{Kraus:2016nwo}, we can show that there are far more descendant states than primary states at high levels of a large $c$ CFT \cite{Guo:2018pvi}. It is intriguing to show explicitly why primary excited states dominate the average.
Supposing (\ref{SipiSic}) is valid as long as $\ell<L/2$, one gets the Holevo information of Bao and Ooguri \cite{Bao:2017guc} \begin{equation} \label{chiBO} \chi_{\rm{BO}}(\ell) = \left\{ \begin{array}{cl} 0 & \ell<L/2 \\ \frac{c}{3} \log \frac{\sinh\frac{\pi\ell}{\beta}}{\sinh\frac{\pi(L-\ell)}{\beta}} & L/2<\ell<L-\frac{\beta}{2\pi}\log2\\ \frac{\pi c L}{3\beta} & \ell > L-\frac{\beta}{2\pi}\log2 \end{array} \right.\!\!\!. \end{equation} It is a combination of the holographic and CFT results, and is the leading order $c$ Holevo information. For comparison, we also plot $\chi_{\rm{BO}}$ in Fig.~\ref{holoCFT}. \section{Canonical ensemble thermal state with low temperature} In the low temperature limit, we have $\beta\gg L$. The dual gravity background is the thermal AdS and the holographic thermal entropy vanishes, \begin{equation} S_{\rm{holo}}(L)=0. \end{equation} From $0\leq\chi(\ell)\leq S(L)$, we obtain \begin{equation} \chi_{\rm{holo}}(\ell)=0. \end{equation} In the CFT, the above total indistinguishability can be lifted by taking into account the finite-size effects exponentially suppressed by the factor $q=\mathrm{e}^{-2\pi\beta/L}$. Using the results in \cite{Chen:2017ahf} and considering only the contributions from the holomorphic sector of the vacuum conformal family, for the short interval we get \begin{eqnarray} \label{chiAlow} && \chi_A = \Big[ \frac{32 q^2}{15 c} +\frac{24 q^3}{5 c} +\frac{64 q^4}{5 c}+O(q^5) \Big] \Big( \frac{\pi\ell}{L} \Big)^4 \nonumber\\ && \phantom{\chi_A =} +\Big[ \frac{128 (c-16) q^2}{315 c^2} +\frac{32 (c-24) q^3}{35 c^2} \\ && \phantom{\chi_A =} +\frac{256 (c-40) q^4}{105 c^2}+O(q^5) \Big] \Big( \frac{\pi\ell}{L} \Big)^6 +O(\ell^8), \nonumber \end{eqnarray} and for the long interval we obtain \begin{eqnarray} \label{chiBlow} && \chi_B - S(L) = - \Big[ \frac{32\pi\beta(\beta^2+L^2)(4\beta^2+L^2)}{15L^5}q^2 \nonumber\\ && \phantom{\chi_B - S(L) =} + O(q^3) \Big] \Big( \frac{\pi\ell}{L} \Big)^4 + O(\ell^5).
\end{eqnarray} \section{Microcanonical ensemble thermal state} We now consider the microcanonical ensemble thermal state with fixed high energy $E$, with contributions from both the holomorphic and anti-holomorphic sectors. We have the thermal state (\ref{thermalstate}) with \begin{equation} \label{pime} p_i = \frac{\delta(E-E_i)}{\Omega(E)}. \end{equation} At energy $E$ the number of states $\Omega(E)$ is given by the Cardy formula \cite{Cardy:1986ie}, and it is an inverse Laplace transformation of the canonical ensemble partition function $Z(\beta)$. Beyond the saddle point approximation of \cite{Cardy:1986ie,Carlip:2000nv}, it turns out that \begin{equation} \label{OE} \Omega(E) = \sqrt{\frac{\pi c L}{6E}} I_1\Big(\sqrt{\frac{2\pi c L E}{3}}\Big), \end{equation} with $I_\nu$ being the modified Bessel function of the first kind. As in the case of the canonical ensemble thermal state with high temperature, we omit the exponentially suppressed terms of large $E$ but keep the power suppressed terms. The Cardy formula can be generalized to the cases of various multi-point correlation functions on a torus \cite{Kraus:2016nwo,Brehm:2018ipf,Romero-Bermudez:2018dim,Hikida:2018khg}, i.e., in the canonical ensemble thermal state. One can use the inverse Laplace transformation of a canonical ensemble average to obtain the corresponding microcanonical ensemble one. In this way, we can derive the one-point functions, and thus the short interval EE, of the microcanonical ensemble thermal state from the canonical ensemble one-point functions. Similarly, we can obtain the microcanonical ensemble average of the short interval EE from the corresponding canonical ensemble one.
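As a quick consistency check of (\ref{OE}), one can evaluate $\log\Omega(E)$ numerically and confirm that at large $E$ it approaches the familiar saddle point (Cardy) entropy $\sqrt{2\pi cLE/3}$, the square-root prefactor supplying the power suppressed corrections kept above. A minimal sketch, with illustrative values $c=30$ and $L=1$:

```python
# Omega(E) = sqrt(pi c L / (6 E)) * I_1( sqrt(2 pi c L E / 3) ).
# The exponentially scaled Bessel function ive(1, x) = exp(-x) * I_1(x)
# avoids overflow at large argument: log I_1(x) = log(ive(1, x)) + x.
import numpy as np
from scipy.special import ive

def log_omega(E, c=30.0, L=1.0):
    x = np.sqrt(2.0 * np.pi * c * L * E / 3.0)
    return 0.5 * np.log(np.pi * c * L / (6.0 * E)) + np.log(ive(1, x)) + x

for E in (1e2, 1e4, 1e6):
    cardy = np.sqrt(2.0 * np.pi * 30.0 * 1.0 * E / 3.0)   # saddle point entropy
    print(E, log_omega(E) / cardy)    # ratio approaches 1 from below as E grows
```

The logarithmic corrections to the Cardy entropy visible here are precisely the power suppressed terms that are retained in the microcanonical computation.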
Combining the short interval EE and average EE, we obtain the Holevo information \begin{equation} \label{chiAmc} \chi_A = \frac{\pi ^3 \ell ^4[ \pi c L ({I_3}-{I_1})+24 \lambda{I_2} ]}{540\lambda ^4 L{I_1}} + \cdots + O(\ell^{12}), \end{equation} with the definition $\lambda := \sqrt{\frac{\pi c L}{6E}}$, which is fixed in the thermodynamic limit, and $I_\nu$ being shorthand notation for $I_\nu(\frac{\pi c L}{3\lambda})$. The full form of the equation is presented in (S38) of the supplemental material. For the long interval case, we use the OPE of twist operators in \cite{Chen:2014ehg,Chen:2015kua,Chen:2017ahf} and obtain the following result, \begin{equation} \label{chiBmc} \chi_B - S(L) = O(\ell^{12}). \end{equation} However, we cannot get the term of order $\ell^{12}$ explicitly; it is possibly non-vanishing. See details in the supplemental material. \section{Contributions from a non-identity primary operator} Lastly, we consider the leading contribution to the Holevo information from a non-identity primary operator $\psi$ with normalization $\alpha_\psi$ and conformal weights $(h_\psi,\bar h_\psi)$. We have the scaling dimension $\Delta_\psi=h_\psi+\bar h_\psi$ and spin $s_\psi=h_\psi-\bar h_\psi$.
For a general thermal state with density matrix (\ref{thermalstate}), we use the OPE of twist operators \cite{Calabrese:2004eu,Cardy:2007mb,Headrick:2010zt,Calabrese:2010he,Rajabpour:2011pt,Chen:2013kpa,Chen:2014ehg,Bianchini:2015uea,Chen:2015kua,Chen:2017ahf} and get the short and long interval Holevo information \begin{widetext}\begin{eqnarray} \label{dpsichiAdpsichiB} && \delta_\psi \chi_A = \frac{\sqrt\pi\Gamma(\Delta_\psi+1)\ell^{2\Delta_\psi}}{2^{2\Delta_\psi+2}\Gamma(\Delta_\psi+\f32)} \frac{\mathrm{i}^{2s_\psi}}{\alpha_\psi} \Big[ \sum_i p_i \langle \psi \rangle_{\rho_i}^2 - \Big( \sum_i p_i \langle \psi \rangle_{\rho_i} \Big)^2 \Big] + o(\ell^{2\Delta_\psi}), \nonumber\\ && \delta_\psi \chi_B = \delta_\psi S(L) - \frac{\ell^{2\Delta_\psi}}{2^{2\Delta_\psi+1}} \frac{\mathrm{i}^{2s_\psi}}{\alpha_\psi} \sum_{i \neq i'} \langle i | \psi | i' \rangle \langle i' | \psi | i \rangle p_i \partial_n \Big[ \sum_{j=1}^{n-1} \frac{(p_{i'}/p_i)^j}{(\sin \frac{\pi j}{n})^{2\Delta_\psi}} \Big]_{n=1} + o(\ell^{2\Delta_\psi}). \end{eqnarray}\end{widetext} These forms are general and can be applied to both canonical ensemble and microcanonical ensemble thermal states. The results, however, are not universal in the sense that they depend on the structure constants, so that we cannot evaluate their explicit forms without knowing the details of the theory. See more details in the supplemental material. \section{Discussion} To conclude the letter, we would like to mention the implication of the almost vanishing short interval Holevo information for our recent finding of non-geometric states in \cite{Guo:2018fnv}. As shown in \cite{Guo:2018fnv}, some special descendant states are non-geometric, which indicates that they cannot look locally thermal. The ensemble average for obtaining the Holevo information is over all states, including these non-geometric descendant states.
However, we see that the resulting leading order $c$ short interval Holevo information is still consistent with thermality. Using the results in \cite{Kraus:2016nwo} we can show that there are far more descendant states than primary ones at high levels in a large $c$ CFT \cite{Guo:2018pvi}. This indicates that the contributions from the non-geometric descendant states are suppressed. It is intriguing to show how this happens explicitly. ~ \noindent\textit{We would like to thank Alice Bernamonti, Pasquale Calabrese, Federico Galli, Manuela Kulaxizi, Hong Liu, Andrei Parnachev, Tadashi Takayanagi, and Erik Tonni for helpful discussions. JZ would like to thank the Galileo Galilei Institute for Theoretical Physics and the organisers of the workshop ``Entanglement in Quantum Systems'' for hospitality and for the opportunity to present part of the results of this work, and to thank the participants of the workshop for helpful discussions. WZG is supported in part by the National Center of Theoretical Science (NCTS). FLL is supported by Taiwan Ministry of Science and Technology through Grant No.~103-2112-M-003-001-MY3. JZ is supported in part by Italian Ministero dell'Istruzione, Universit\`a e Ricerca (MIUR), and Istituto Nazionale di Fisica Nucleare (INFN) through the ``Gauge Theories, Strings, Supergravity'' (GSS) research project, and by Fondazione Cariplo and Regione Lombardia, Grant No.\ 2015-1253.}
\section{Introduction} The majority of ordered surface structures known today have been determined by Low Energy Electron Diffraction (LEED) \cite{NIST}. The strong elastic and inelastic interaction of electrons in the energy range 50--500\,eV entails a particular sensitivity to the atomic arrangement within the outermost layers. In many cases this permits a structure determination with a precision of a few hundredths of an {\AA}ngstr\"{o}m, making LEED one of the primary surface crystallography methods. Unfortunately, the strong multiple scattering implied by this type of interaction also tremendously complicates the theoretical analysis of the acquired data. In addition, the real space geometry cannot be deduced from the intensities directly, so that standard quantitative LEED structure determinations have to apply a trial-and-error method which is frequently supported by structural search procedures. Calculated diffraction intensities of a multitude of models have to be compared with the experimental data until eventually a sufficiently high agreement between both is achieved \cite{Pendry74,VanHove79,Heinz95}. Though more and more advanced experimental and theoretical developments have recently given access to rather complex surface structures \cite{Heinz95}, it is just this complexity which beyond a certain degree inhibits the successful application of quantitative LEED. The number of models resulting from the mere combination of all coordinates of the many atoms in a large unit cell structure becomes so huge that it is difficult, if not impossible, to handle. This applies even when using, for example, automated search algorithms in multi-parameter space \cite{Rous93}, as the latter has to include the correct model. However, for a large unit cell structure our structural imagination is frequently unable to even define the type of the correct model or the relevant part of the parameter space containing the real structure, in which a search could then be started.
Also, methods developed in LEED to determine the atomic positions directly still rely on a good initial guess of the real structure \cite{Pendry88}. The holographic approach represents a revival of the hope for the {\em direct} disclosure of structural information. The idea, first developed for the related Photoelectron Diffraction \cite{Szoke86,Barton88}, aims at the determination of at least partial features of the structure when, in addition to the multiple scattering problem, the complexity of the surface prohibits its full retrieval. In the present case, the information provided consists of the local environment around an elevated atom in the cell, which might be an adsorbate or an intrinsic adatom resulting from surface restructuring. Even though only some atomic positions at a rather coarse resolution are determined in this manner, the necessary consistency of the obtained structural unit with the complete surface geometry may rule out many models directly. Hence, the remaining parameter space can be reduced to an extent sufficient to allow the application of conventional surface crystallography methods. The first translation of holographic schemes to the field of LEED, as proposed by Saldin and De Andres \cite{Saldin90}, was restricted to surfaces on which atoms or molecules are adsorbed in lattice gas disorder. The lack of periodicity creates diffraction intensity also outside the sharp substrate Bragg spots and causes a diffuse intensity distribution on the screen (for a recent review on Diffuse LEED (DLEED) see e.g. ref. \onlinecite{Starke96}). This appeared as the natural input for the Fourier-like integral transform typical of holographic techniques. In the course of subsequent theoretical improvements a proper reconstruction algorithm could be established that made it possible to circumvent several problems complicating the holographic interpretation of LEED intensities (see section II).
In the present investigation we use the latest stage of this development, which allows the construction of a reliable image of the complete 3D atomic surrounding of the elevated atom from normal incidence data alone \cite{Saldin95,Saldin96,Saldin97}. However, these theoretical achievements were based on the use of diffuse intensity distributions emerging from disordered systems, while the majority of interesting surface structures are ordered phases, often with large superstructure unit cells. Certainly, it would be very advantageous to obtain partial, but {\em direct} information by holographic means for these {\em ordered} phases, too. As we briefly demonstrated recently \cite{Reuter97_2}, the diffraction intensities arising from this class of systems may be used as input to just the same holographic reconstruction algorithm as developed for the DLEED case. Two important restrictions for this type of application have to be mentioned: there must be only one elevated adatom per surface unit cell, and the unit cell must have a minimum size. The first condition arises from the necessity of a unique holographic reference wave, as we will outline in the third section. The second limit is based on the data density available and required. The approximate minimum size of the unit cell could recently be estimated as a p(2$\times$2) mesh \cite{Reuter98}. An upper size limit is set by experimental factors and the more likely appearance of several adatoms per unit cell with increasing unit cell area. A (7$\times$7) cell already appears to be too large, as discussed in section III. Still, a considerable number of ordered reconstructions remains open for a holographic analysis. In the present paper, our investigations are focused on the first successful application to an ordered and {\em a priori} unknown complex structure, the example of the SiC(111)-(3$\times$3) superstructure.
This surface phase is of considerable interest in current crystal growth investigations \cite{Tanaka94} of the promising semiconductor material SiC \cite{SiC-Band}. Previous STM work \cite{Kulakov96,Li96,Starke98_2} had revealed a single large protrusion per surface unit cell. Thus, this reconstruction seemed particularly suited for a first application of holographic LEED to ordered surfaces, meeting both requirements outlined above, i.e. sufficient unit cell size and the presence of a single elevated atom. By comparing the holographic images obtained from experimental data and from calculated intensities for fictitious models deviating from the real surface geometry, we illuminate new aspects of the validity and possible shortcomings of the new method. The paper is organized as follows: in the next section we recall the holographic reconstruction algorithm using diffuse LEED intensities. Thereafter we describe the relation between diffuse and discrete intensities and give arguments under which circumstances the holographic algorithm may readily be applied to conventional spot intensities. This is followed by the reconstruction of an atomically well resolved image from the experimental LEED intensities measured for the SiC(111)-(3$\times$3) phase. Section V shows that the spatial depth accessible by the method is rather large, which allows the determination of such important features as the stacking sequence of deeper layers. In section VI, we illuminate the role of periodic vacancies within the unit cell acting as additional holographic reference waves. Then we address the issue of intensities arising from substrate relaxations such as buckling, and finally discuss the use of the holographic information for a complete surface structure analysis in the case of SiC(111)-(3$\times$3), whose precise real space structure is described in more detail elsewhere \cite{Starke98_1,Schardt98}.
\section{The holographic reconstruction algorithm} \begin{figure} \epsfxsize=0.5\textwidth \epsfbox{pics/fig1.ps} \caption{Schematic display of the holographic interpretation of the adatom scattering: a) electrons finally scattering at the adatom form the {\em reference} wave {\em R}, those subsequently hitting one substrate atom represent the kinematic {\em object} wave {\em O} (solid lines). The dashed lines display possible multiple scattering events providing dynamic contributions to the reference and object wave (see text for details). b) Pronounced forward scattering at the beam-splitter indicated by the different lengths of the arrows in different directions.} \end{figure} The holographic approach in DLEED makes use of the fact that all measurable diffuse intensity outside the sharp substrate Bragg spots necessarily has been caused by at least one scattering event at one of the disordered adsorbates on top of the (unreconstructed) crystal: scattering exclusively within an ideal bulk-terminated substrate can only lead to diffraction intensities at Bragg spot positions. In that sense the adsorbate atom can be viewed as a prominent scatterer which, acting as a beam-splitter, provides a natural separation of all scattering paths as depicted by the solid lines in Fig.~1(a): electrons whose final scattering is by an adsorbate form the {\em reference} wave $R({\bf k})$, while those scattered subsequently by substrate atoms before reaching the detector provide the {\em object} wave $O({\bf k})$ \cite{Saldin90} (where ${\bf k} \equiv ({\bf k}_{\|}, k_{\perp})$ is the wavevector of the detected electron, with the components ${\bf k}_{\|}$ parallel to, and $k_{\perp}$ perpendicular to the surface). This allows the diffuse intensity to be interpreted as the interference pattern of these two contributions. Hence, the local surrounding of the beam-splitter atom should be extractable by a phased 2D Fourier transform of the data \cite{Barton88,Saldin90}.
However, the above interpretation does not account for the fact that multiple scattering adds unwanted contributions to both reference and object wave, as indicated in Fig.~1(a) by the dashed lines. A considerable improvement in the image quality could be achieved by combining several DLEED patterns measured at different electron energies \cite{Barton91}. The corresponding multi-energy reconstruction algorithms include a 3D integral transform and try to single out the contributions due to the kinematic object wave, suppressing the unwanted effects caused by the multiple scattering of the low energy electrons. Yet, the pronounced forward scattering of the beam-splitter led to only a selective appearance of the atoms in the reconstructed local adsorption geometries, depending on whether they were located within the forward scattering cone \cite{Wei92}, cf. Fig.~1(b). The implied necessity of combining several (at least two) data sets taken at different angles of incidence to deduce the complete 3D surrounding of the beam-splitter \cite{Wei94} could be overcome with the introduction of an improved reconstruction algorithm proposed by Saldin and Chen \cite{Saldin95}. This {\em Compensated Object and Reference wave Reconstruction by an Energy-dependent Cartesian Transform} (CORRECT) \cite{Saldin95} allows the calculation of the real space distribution around the adsorbate $\left| B({\bf r}) \right|^2$ (where ${\bf r} \equiv ({\bf r}_{\|}, z)$ is a position vector relative to the origin at the adsorbate with components ${\bf r}_{\|}$ parallel to, and $z$ perpendicular to the surface) via the following expression: \begin{eqnarray} B({\bf r}) &=& \int\!\!\!\int_{{\bf k}_{\parallel}} \left[ \int_{k_{\perp}} K({\bf k}_{\parallel}, k_{\perp};{\bf r}) \chi({\bf k}_{\parallel}, k_{\perp}) e^{-i(kr-k_{\perp}z)} dk_{\perp} \right] \nonumber \\ & & e^{i{\bf k}_{\parallel}\cdot{\bf r}_{\parallel}} d^2{\bf k}_{\parallel}.
\label{correct} \end{eqnarray} Note that, in contrast to previous reconstruction algorithms, which performed the involved 3D integral in a polar coordinate system (angle and energy), the data input is provided on a cartesian grid $({\bf k}_{\|},k_{\perp})$, which will be of importance when discussing the step towards ordered superstructures in the next section. The transform does not operate directly on the measured intensities $H$, but rather on a contrast-enhancing and normalizing function \begin{equation} \chi({\bf k}_{\parallel}, k_{\perp}) = \frac{H({\bf k}_{\parallel}, k_{\perp}) - H_{av}({\bf k}_{\parallel})}{H_{av}({\bf k}_{\parallel})} \label{chi} \end{equation} with \begin{equation} H_{av}({\bf k}_{\parallel}) = \frac{\int H({\bf k}_{\parallel}, k_{\perp}) dk_{\perp}}{\int dk_{\perp}}. \label{hav} \end{equation} It has been shown theoretically that the use of such a $\chi$-function helps to partially remove the self-interference terms $\left| R ({\bf k}) \right|^2$ and $\left| O ({\bf k}) \right|^2$ in the DLEED intensity, which give rise to spurious high values of the real space distribution $\left| B({\bf r}) \right|^2$ in the vicinity of the origin \cite{Saldin95}. Additionally, $\chi$ has been designed in such a way as to suppress modulations in the DLEED patterns that arise from some partial ordering among the adsorbates \cite{Saldin97}. The last part of the expression still to be described is the integral kernel, which corrects for the anisotropy of the reference wave. In a zeroth order approximation it can be written \begin{equation} K({\bf k}_{\parallel}, k_{\perp}; {\bf r}) = \left[ \frac{f_a({\bf k}_i\cdot{\bf \hat{r}}) + C}{r} \right]^{-1}.
\label{kernel} \end{equation} \noindent Here $f_a({\bf k}_i\cdot{\bf \hat{r}})$ is the atomic scattering factor of the adsorbate, ${\bf k}_i$ the wavevector of the incident electrons, and $C$ the so-called kernel constant (which we take to be real), representing an isotropic approximation to the backscattering by the substrate prior to scattering by the adsorbate. Optimizing the value of $C$ provides access to those atoms of the local adsorption geometry that lie outside the forward scattering direction of the beam-splitter \cite{Reuter97_1}. This allows the retrieval of the complete 3D surrounding of the latter from data of normal incidence alone. The algorithm in its present form has been shown to give reliable images using theoretical \cite{Saldin95,Reuter97_1}, as well as experimental DLEED data \cite{Saldin96,Saldin97}. \section{Spot intensities versus diffuse distributions} The original holographic reasoning \cite{Saldin90} was based on the assumption that only one beam-splitting adsorbate atom is present on the substrate surface. With several such adsorbates, each in the same local structure but without long-range order among them (lattice-gas disorder), intensities simply add up in the low coverage limit, leaving the resulting diffuse distribution practically unchanged \cite{Starke96,Pendry84}. A different situation emerges with the onset of order at higher coverages and/or upon thermal annealing: additional modulations in the DLEED pattern are created that eventually cause the breakdown of the holographic algorithm \cite{Saldin97}. For the case of a completely ordered superstructure of such adsorbates, the modulations caused by the lattice factor concentrate the diffuse intensities into a series of discrete superstructure -- or fractional-order -- spots when the unit mesh of the adsorbate layer is larger than that of the crystalline substrate.
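For concreteness, the multi-energy transform of Eqs.~(\ref{correct})--(\ref{kernel}) can be sketched as a short numerical procedure. The following is only a crude illustration with hypothetical array names, not the implementation used in this work: the integrals are replaced by plain sums over the discrete $({\bf k}_{\|},k_{\perp})$ grid, and the adsorbate scattering factor $f_a$ is taken as isotropic (equal to one), whereas the actual algorithm uses the angle-dependent scattering factor of the beam-splitter.

```python
import numpy as np

def chi(H):
    """Contrast function of Eqs. (2)-(3): chi = (H - H_av) / H_av,
    with H_av the average of H over the k_perp (energy) axis.
    H has shape (n_kpar, n_kperp) on a uniform k_perp grid."""
    H_av = H.mean(axis=1, keepdims=True)
    return (H - H_av) / H_av

def correct_transform(H, kpar, kperp, r, C=2.7):
    """Crude sketch of the CORRECT transform, Eq. (1), evaluated at a
    single real-space point r = (x, y, z).  Integrals become sums, and
    the kernel of Eq. (4), K = r / (f_a + C), is simplified by setting
    the scattering factor f_a = 1 (isotropic approximation)."""
    r_par = np.asarray(r[:2], dtype=float)
    z = float(r[2])
    rr = np.sqrt(r_par @ r_par + z * z)     # |r|
    X = chi(H)
    K = rr / (1.0 + C)                      # simplified zeroth-order kernel
    B = 0.0 + 0.0j
    for i, kp in enumerate(kpar):
        k = np.sqrt(kp @ kp + kperp ** 2)   # |k| for each k_perp value
        inner = np.sum(K * X[i] * np.exp(-1j * (k * rr - kperp * z)))
        B += inner * np.exp(1j * (kp @ r_par))
    return B                                 # |B|^2 maps out the local geometry
```

Scanning such a point evaluation over a real-space grid yields the $\left| B({\bf r}) \right|^2$ distributions displayed in the figures; note that for constant intensities the contrast function vanishes and so does $B$, which is the intended suppression of the self-interference background.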
However, the simultaneous extinction of diffuse intensity between the spots, as caused by destructive interference between the waves originating from different adsorbate-substrate clusters, does not remove the wanted crystallographic information: the energy dependence of the superstructure spot intensities is the same as that displayed at the corresponding ${\bf k}_{\|}$-positions in a diffuse distribution resulting from a disordered adlayer in the equivalent local adsorption geometry \cite{Heinz91}. The only restriction is that scattering between such clusters has to be negligible, a condition satisfied even for relatively small superstructures when using normal incidence data \cite{Quasi,Mendez92}. So, even though the perfect order among the adsorbates significantly reduces the amount of available data, the few remaining intensities are not masked by disturbing modulations that, as in the case of partial disorder, would inhibit the final ${\bf k}_{\|}$-integration in equation (\ref{correct}). The superstructure spots can be thought of as sampling the DLEED intensity distribution of the corresponding lattice gas on a finite grid. Therefore, as suggested earlier \cite{Mendez92}, a DLEED holographic algorithm may in principle be applied to such ordered superstructure systems, with the only difference of a reduced density of input data in ${\bf k}_{\|}$. This makes it more apparent why the CORRECT algorithm is particularly well suited for the extension to ordered phases: the data is provided on the appropriate cartesian grid and only normal incidence is required. Interestingly, earlier investigations on the information content of diffuse intensities \cite{Heinz91}, as well as on the minimum data base of the algorithm \cite{Reuter97_1}, showed that the continuous diffraction distribution resulting from disordered atomic adsorbates is already sufficiently described when using a (3$\times$3) sampling grid. Information on a denser grid is largely redundant.
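The finite sampling grid can be made concrete: for an (n$\times$n) superstructure the beams sit at positions $(h/n, k/n)$ in substrate reciprocal-lattice units, and the fractional-order beams are those not coinciding with the integer-order substrate spots. The following sketch (illustrative only; the symmetry reduction to inequivalent beams performed in the experiment is omitted) enumerates them:

```python
def fractional_order_beams(n, order_max=1):
    """Beam indices (h/n, k/n) of an (n x n) superstructure with
    |h/n|, |k/n| <= order_max.  Beams with both h and k divisible
    by n coincide with integer-order substrate spots and are
    excluded, leaving only the fractional-order spots."""
    beams = []
    for h in range(-n * order_max, n * order_max + 1):
        for k in range(-n * order_max, n * order_max + 1):
            if h % n != 0 or k % n != 0:    # skip substrate (integer) beams
                beams.append((h / n, k / n))
    return beams
```

For n = 3 and first order this yields 40 fractional-order positions, illustrating how the larger unit mesh densifies the ${\bf k}_{\|}$ sampling of the underlying lattice-gas intensity distribution.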
Hence, there are no drastic changes to be expected when making the transition from disordered systems to phases with large superstructure cells like the (3$\times$3) reconstruction of SiC(111) of the present paper. This allows us to apply the algorithm developed for DLEED without any modifications. However, it should be emphasized that a further reduced data base in connection with superstructures smaller than a (2$\times$2) can lead to aliasing effects in the Fourier-like transform due to insufficient sampling \cite{Reuter98}. The application of LEED holography to ordered surfaces involves several practical advantages. For the diffraction process it is irrelevant whether the beam-splitter is an externally adsorbed atom or intrinsically belongs to the surface. Thus, besides ordered adsorption systems, ordered substrate reconstructions can now also be investigated. Additionally, the measurement of discrete spot intensities is much less delicate than that of the diffuse intensities, which are comparatively weak. The high signal-to-noise ratio of the bright spots allows easy subtraction of contributions due to thermal diffuse scattering. Also, at higher energies fractional spot intensities are much less influenced by cross-talk from the bright substrate spots than diffuse intensities are \cite{Starke96,Mendez92}. Furthermore, holographic LEED also seems suitable for tackling larger unit cell reconstructions: the high number of fractional order spots generated in these cases provides a fine sampling grid and ensures the proper working of the integral transform. However, practical reasons also impose an upper limit on the unit cell size, as the increasing number of closely spaced spots impairs proper data acquisition, especially at higher energies where more and more spots appear and weak spots are disturbed by their bright neighbours.
A unit cell such as the (7$\times$7) on Si(111) \cite{Tong88,Takayanagi85} is probably already too large from an experimental point of view, as the accessible energy ranges become too small. In addition, it becomes more and more unlikely that such a large unit cell contains only one elevated adatom (the Si(111)-(7$\times$7) actually contains 12 adatoms). This would violate the strongest restriction of the technique at its current stage, i.e. the condition that only a single beam-splitter is allowed within each unit cell. Several such prominent atoms per unit cell would lead to intermixing of their respective contributions, as will be demonstrated further below. This is all the more problematic since the actual number of elevated adatoms is just one of the quantities sought in the structure analysis of an {\em a priori} unknown surface (even though STM might help, as in the present case). Future efforts in methodological improvements should hence be directed to overcoming the multiple beam-splitter problem, which did not occur in the previous applications to simpler diffuse or ordered systems. As a consequence, until there is a proper theoretical description of the detailed influences on the reconstructed images, the systems to which holographic LEED is to be applied have to be chosen with considerable care. \section{Reconstruction using experimental data} SiC is a material whose electronic properties have made it a promising candidate for high-power and high-frequency devices. Particularly, the (3$\times$3) phase of the SiC(111) surface has drawn considerable interest in recent years, owing to the observed crystal growth improvement \cite{Kong88} that is achieved when this reconstruction is stabilized under highly Si-rich conditions \cite{Tanaka94}.
Its complexity, which can already be deduced from an extensive debate in the literature \cite{Kulakov96,Li96,Starke98_2,Kaplan89}, had hitherto prevented a detailed structure analysis using trial-and-error methods. However, the high number of fractional order spots caused by such a comparatively large surface unit cell makes this phase an ideal candidate for a holographic investigation in view of the reasoning outlined above. The cubic 3C-SiC polytype was chosen, since its (111) oriented surface exposes only one definite stacking sequence \cite{Starke97}, i.e. there is no coexistence of domains of different orientation, which would have to be expected in the case of hexagonal polytypes with different layer stackings possible at the surface \cite{Starke97}, and which would certainly complicate if not inhibit the interpretation of the reconstructed images. Additionally, there is strong evidence from comparison of experimental LEED intensities \cite{Schardt98}, as well as from DFT test calculations \cite{Bechstedt97}, that the atomic structure of the (3$\times$3) surface phase itself is rather independent of the sample polytype. So, results obtained for 3C-SiC(111) can be expected to hold also for other polytypes. LEED I(V)-curves of the sharp diffraction pattern were measured in the energy range 50-300~eV using normal electron incidence. Details on the data acquisition and sample preparation will be published elsewhere \cite{Schardt98,Bernhardt98}. The low diffuse background and noise level allowed the recording of 14 fractional order beams closest to specular reflection, which are symmetry-inequivalent at normal incidence. 
Providing the measured intensities as input to expression (\ref{correct}) resulted in the 3D image displayed in Fig.~2: the real-space distribution $\left| B({\bf r}) \right|^2$ is calculated on a grid of 0.2~{\AA} resolution inside a cylinder of depth 6.0~{\AA} and a lateral radius 3.0~{\AA}, which is consistent with estimates on the lateral validity of the algorithm \cite{Reuter98}. Small spheres are drawn at the grid points, indicating the reconstructed real-space intensity by their diameter which scales linearly with the intensity. As pointed out in previous holographic investigations \cite{Saldin95,Saldin96,Saldin97,Reuter97_1}, this type of display permits a quick understanding of the essential features of the structural unit determined holo\-graphically and will therefore be used in all figures included in the present paper. \begin{figure} \epsfxsize=0.5\textwidth \epsfbox{pics/fig2.ps} \caption{Recovered local geometry of the SiC(111)-(3$\times$3) structure using experimental data in the energy range 50-300~eV and kernel constant $C = 2.7$~{\AA}. The maximum noise level in the image is 48~\% of the maxima denoting the atom positions (noise cut-off: 25~\%). For details on the display procedure, see Section IV. The inset displays a schematic of the retrieved adcluster geometry including chemical bonds and the approximate layer distances as determined by holographic LEED.} \end{figure} The origin of the coordinate system is defined by the beam-splitter, which is artificially added in the image as a black sphere to facilitate understanding. The highly Si-rich conditions under which the (3$\times$3) phase is observed suggest this beam-splitter to be Si, the scattering factor of which is consequently used for the computation of the integral kernel (\ref{kernel}). However, the zeroth order approximation of the latter is most sensitive only to the essential form of the atomic scattering factor, which is very similar for most elements. 
Using carbon as a beam-splitter in the computation consequently did not change the resulting images considerably. The kernel constant $C$ in this expression is optimized such that all atoms in the geometry appear with approximately equal brightness \cite{Reuter97_1}. The highest disturbing intensity at non-atomic positions (henceforth referred to as the noise level) is, at 48~\% of the overall maximum value, at an unprecedentedly low level. This has to be attributed to the much better quality of conventional LEED I(V) data in comparison to the DLEED case, and to the increased energy range available. The image allows the unambiguous identification of the local adcluster geometry formed by an adatom-supporting trimer and two further atoms vertically below the beam-splitter (see inset in Fig.~2). The rough layer distances of 1.3~{\AA} (adatom-trimer), 1.3~{\AA} (trimer and first lower atom) and 2.0~{\AA} (between lower atoms) correspond surprisingly well to those of the (7$\times$7) DAS model of the Si(111) surface \cite{Takayanagi85}. This already indicates that probably the complete retrieved geometry corresponds to Si atoms on top of the SiC substrate. Note that the distorted form of the trimer atoms is an effect of the scattering factor in connection with the zeroth order approximation of this property in the integral kernel (\ref{kernel}) \cite{Saldin95}. However, neither the obtained spatial resolution nor the exact positions of the atoms inside the geometry are the primary object or strength of the holographic analysis: it is rather the direct and quick idea of a structural unit belonging to the investigated surface. The obtained tetramer formed by the adatom and the supporting trimer is typical for hexagonal semiconductor surfaces. Its unambiguous determination in the holographic image proves that only one of the two possible orientations rotated by 60$^{\circ}$ with respect to each other is present on the surface.
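The display convention used for images like Fig.~2 (sphere diameter scaling linearly with $\left| B({\bf r}) \right|^2$, with grid points below the noise cut-off of 25~\% suppressed) can be summarized in a few lines. This is a hypothetical helper written for illustration, not part of the analysis code used here:

```python
import numpy as np

def sphere_diameters(B2, cutoff=0.25, d_max=1.0):
    """Normalise the reconstructed intensities |B(r)|^2 to their
    maximum, suppress grid points below the noise cut-off, and
    return sphere diameters scaling linearly with intensity."""
    I = np.asarray(B2, dtype=float)
    I = I / I.max()                          # normalise to overall maximum
    return np.where(I >= cutoff, d_max * I, 0.0)
```

With this convention the quoted noise levels are directly comparable between images: a value of 48~\% means the largest spurious sphere has roughly half the diameter of the spheres marking atom positions.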
This already excludes domains of differently rotated, i.e. coordinated clusters, as can also directly be deduced from the pronounced threefold symmetry of the measured fractional order spot intensities \cite{Schardt98,Bernhardt98}. It further rules out the model proposed first by Kaplan \cite{Kaplan89} of a (3$\times$3) mesh which in close analogy to the (7$\times$7) DAS model \cite{Takayanagi85} contains two such tetramers per surface unit cell, which in turn would necessarily be differently oriented. It should be noted that this model was also inconsistent with STM investigations, which clearly revealed only one elevated protrusion per unit cell, thus strongly favouring models including a single tetramer \cite{Kulakov96,Li96,Starke98_2}. Since the atomic beam-splitter has to be identified with the top adatom of this tetramer, exactly these results ultimately enabled the application of LEED holography to this structure: the obligatory uniqueness of the beam-splitter excludes DAS-like models with two tetramers per surface unit cell from the class of systems accessible under the current state of theory. We should recall now that we are dealing with an {\em a priori} unknown structure. Although the low noise level in the image may appear very convincing, it has to be recognized that the strong multiple scattering combined with the anisotropic scattering factors for low energy electrons may lead to serious artefacts in the images that would not easily be distinguishable from real atoms. Since the multi-energy algorithms developed for holographic LEED can only suppress, but not completely eliminate, these effects, it is often advisable to vary the used input energy range. In view of the fact that scattering factor anomalies may sometimes even lead to an increased image quality when reducing the number of included energies, the stability of the obtained result under such variations can significantly increase the confidence in the deduced local geometry.
Dividing the experimental data into various subsets always resulted in equivalent images, which strongly confirms the structural unit determined. In general, one has to admit that the weak scattering power of light elements like Si and C might provide a favourable case for holography, as multiple scattering contributions are expected to be smaller than, for example, for transition metal crystals. However, recent results for the system O/Ni(001)-p(2$\times$2) \cite{Reuter98} make us believe that the developed algorithm also works for stronger scattering materials. Now, although the experimental result appears convincing, a test of the validity and sensitivity of the method seems appropriate; it is presented in the next sections using simulated intensities from various fictitious models. \section{Vertical sensitivity and stacking of deeper layers} The simplest model consistent with the atomic positions obtained from the holographic reconstruction would be just an adatom on top of a SiC bilayer. Yet, such a model appears improbable since it would not account for the strong silicon enrichment at the surface as detected by earlier Electron Energy Loss Spectroscopy (EELS) and Auger Electron Spectroscopy (AES) results \cite{Kaplan89}, which we discuss in detail elsewhere \cite{Schardt98}. Assuming therefore the tetramer as the essential structural unit -- presumably formed by Si atoms in view of the EELS and AES results -- the first question to be verified is its position on the underlying substrate. The simplest possible solution would be to directly place it somehow on top of the SiC sample, to which the two further atoms showing up in the holographic reconstruction would then belong. Given the bilayer stacking sequence of this material in the [111] direction it is, however, impossible to find any location consistent with two atoms directly on top of each other as predicted by Fig.~2.
Even though the holographic method is most reliable in just this direction vertically below the beam-splitter, in which atoms show up already without the scattering factor compensation by the integral kernel (\ref{kernel}), there is as yet no definite certainty about the limit of the algorithm's validity for deeper layers: all previous investigations with DLEED data had dealt with rather simple test structures, which were already completely determined by the atomic positions in the first two layers. Hence, the calculation of $\left| B({\bf r}) \right|^2$ now performed up to a depth of 6.0~{\AA} raises concerns as to whether the lowest lying atom identified at 4.6~{\AA} might already be outside such a limit. \begin{figure} \epsfxsize=0.5\textwidth \epsfbox{pics/fig3.ps} \caption{Same as Fig.~2, but using simulated I(V) curves of a simplified Li/Tsong model as described in section V. The electron energy range was 146-300~eV, the kernel constant $C=5.0$~{\AA} and the maximum noise level is at 46~\% of the maxima at the atom positions (noise cut-off: 25~\%). The inset displays the atomic arrangement in the model assumed to calculate the intensities used for the holographic reconstruction.} \end{figure} Consequently, for the moment we focus on only the tetramer and the third layer atom of the holographic image. This arrangement suggests a cluster position in which each of the three trimer atoms is located in a hollow site of the topmost hexagonal bilayer of the SiC substrate. Depending on whether this position is fourfold coordinated, i.e. on top of a carbon atom in the substrate bilayer (T4 site), or truly threefold coordinated (H3 site), the tetramer would have to be oriented as in the SiC substrate bilayers or rotated by 60$^{\circ}$, respectively. The adatom would then reside on top of a Si atom in the bilayer, which corresponds to the third layer atom of the holographic image. No further atom would be present 2~{\AA} below the latter in either site geometry.
Note that this geometry corresponds to the model proposed by Li and Tsong on the basis of their STM work \cite{Li96}, which assumed such coordinated tetramers in a (3$\times$3) periodicity directly on top of the SiC. In order to verify whether the atom additionally appearing in Fig.~2 is an algorithmic artefact or not, we simulated theoretical LEED I(V) curves of an adcluster model for the identical number of fractional order beams and the same energy range as in the experiment (details of these calculations will be published elsewhere \cite{Schardt98}). In order to focus on the depth information available from LEED holography, we chose the fourfold coordinated trimer atom positions, yet artificially expanded the distance between adcluster and substrate to push the third layer atom to a position 3.2~{\AA} below the beam-splitter. A schematic view of this geometry can be seen in the inset of Fig.~3. The resulting holographic image is displayed in Fig.~3, showing the expected tetramer unit plus the third layer atom, but also consistently not indicating any sign of possible artefacts vertically below the adatom. Only the carbon atoms of the substrate SiC bilayer are still within the reconstruction volume, but do not appear in the reconstructed image (Fig.~3). However, one has to consider that in the geometry chosen they are 4.3~{\AA} apart from the beam-splitter and 1.9~{\AA} off the vertical axis, and in addition represent comparatively weak scatterers. This distance -- not in the forward scattering direction -- probably represents the detection limit, at least for a weak scatterer. However, in view of the absence of artefacts in the holographic image obtained for our test model, the presence of two atoms vertically below the adatom, as indicated in the real space reconstruction from the experimental data, has to be assumed correct.
Furthermore, the pronounced appearance of the lowest atom might indicate that it is silicon, since a weakly scattering C atom should not be detectable at that depth. Our test case resulting in the image shown in Fig.~3 thus clearly rules out the possibility of the Li/Tsong model, whose bilayer stacking sequence results in fourth layer atoms off the vertical axis and which is thus incompatible with the lowest atom in the image obtained from the experimental data. This further emphasizes the importance of the retrieval of the two deepest atoms in the local geometry. Since only small deviations from the bulk positions are usually to be expected in such deep layers, the location of each of these atoms uniquely determines the complete stacking sequence of the corresponding entire layer. Hence, even though the obtained structural unit itself may contain only a small number of atoms, its consistent embedding into the surface unit cell can subsequently reveal a quite important further fraction of the investigated surface. \section{Can a vacancy act as a beam-splitter?} The confirmation of the lowest atom inside the revealed structural unit, whose on-top stacking is inconsistent with a SiC bilayer at the very surface, necessitates the inclusion of an additional Si adlayer in the crystallographic model. Such an adlayer between tetramer and substrate had already been included in the original DAS-like model, since the EELS and AES results indicated the strong presence of Si-Si bonding in the highly Si-rich surface \cite{Kaplan89}. In order to bring this otherwise very reasonable model into accordance with the STM data described above, Kulakov {\em et al.} proposed the absence of one of the two tetramers per surface unit cell \cite{Kulakov96}. In the language of the DAS-type models the top atom of the trimer represents an adatom on top of a Si bilayer. In the DAS unit cell one of the two adatoms is located on a piece of bilayer in faulted orientation as indicated in Fig.~4(a).
One would expect an energetic difference between the adcluster which follows the substrate stacking direction and the one which introduces a local stacking fault in the adlayer. Thus it is plausible that in the end exclusively the more favourable type would be present on the surface. Such a model, hence including only a single tetramer with definite orientation in each (3$\times$3) unit cell, could explain not only the single protrusion in the STM images, but also the threefold symmetry of the LEED pattern. What remains is the question of which of the two orientations is actually realized. Note that the orientation of the adcluster can be deduced from a comparison with a previous LEED analysis of the (1$\times$1) phase on the same sample \cite{Starke98_1,Starke97,Starke97_2}. However, we demonstrate a verification of this assignment using test calculations, a method that could generally be applied in cases where no independent analysis of the substrate is available. \begin{figure} \epsfxsize=0.5\textwidth \epsfbox{pics/fig4.ps} \caption{Schematic side view of different models for the SiC(111)-(3$\times$3) reconstruction in a projection parallel to the [1$\bar{1}$0]-plane (Si atoms are depicted by large spheres, C atoms by small darker spheres. Bonds within the projection plane are drawn as single lines, double lines represent two bonds pointing out of the projection plane by +60$^{\circ}$ and -60$^{\circ}$, respectively.) a) DAS model containing two adatoms and one cornerhole per unit cell as proposed by Kaplan \cite{Kaplan89}. b) Single adatom model containing two cornerholes with a local stacking fault in the Si-bilayer fragment underneath the adatom; derived from the model by Kulakov {\em et al.} without the stacking fault \cite{Kulakov96}. c) Single adatom-trimer cluster residing on a complete monolayer in (1$\times$1) periodicity with all cornerholes filled.
The bilayer fragment underneath the adatom again represents a local stacking fault.} \end{figure} Both Kulakov-type models, with the adcluster either introducing or not introducing a local stacking fault, comply with the holographic image from the experimental data, when identifying the upper of the bottom two atoms as belonging to the Si adlayer and the other one as a Si atom of the substrate's topmost bilayer. Yet, the subsequent interplay between LEED holography and conventional LEED can do better than that: when reconstructing images using theoretically simulated data, the orientation of the substrate inside the given coordinate system is known. Depending on the resulting orientation of the adcluster in the image -- or to be exact, the tetramer of four atoms representing the Si bilayer underneath the adatom -- when choosing one of the two equivalent beam assignments in the LEED pattern, its orientation with respect to the bulk can be deduced even without seeing the latter in the reconstructed image itself. From the result shown in Fig.~3, we therefore know the orientation of an unrotated tetramer. Comparing this with the holographic image obtained from the experimental data (cf. Fig.~2) we find that the adcluster geometry involves a local stacking fault of the Si bilayer as shown in Fig.~4(b). Hence, of all the previously existing models of the SiC(111)-(3$\times$3) phase, LEED holography would only be fully consistent with the Kulakov model in the local stacking fault version. \begin{figure} \epsfxsize=0.5\textwidth \epsfbox{pics/fig5.ps} \caption{Same as Fig.~2, but using simulated I(V) curves of a simplified Kulakov model as described in section VI. The electron energy range was 146-300~eV and the kernel constant $C=5.0$~{\AA}. a) geometry including cornerholes in the Si adlayer, cf. Fig.~4(b): maximum noise level at 76~\%, b) geometry with filled cornerholes in the Si adlayer, cf. 
Fig.~4(c): maximum noise level is at 29~\% (noise cut-off: 25~\%).} \end{figure} In order to further verify this conclusion, we simulated LEED I(V) curves for this model, although small deviations from bulk-like positions, as for example induced by dimerization, were not considered. Surprisingly, the corresponding holographic image displayed in Fig.~5(a) is of considerably worse quality than the previous results. Although all five atoms of the expected structural unit show up, their overall configuration is badly distorted and high noise in the form of three concentrated artefacts prevents the unambiguous distinction between real atoms and false contributions. Since the image from ideal theoretical data should only be better than the one from the experiment, the situation in the latter somehow has to be more favourable for LEED holography than so far assumed, which, of course, demands clarification. As mentioned above, a severe source of an algorithmic breakdown at its current level of development is given by the multiple beam-splitter problem. Therefore, we reconsidered the atomic arrangement of the Kulakov model under this point of view. Since it was derived in close analogy to the DAS model, its Si adlayer is not completely closed but contains vacancies to relax the stress induced by the lattice mismatch \cite{Kulakov96}. These so-called cornerholes appear just like in the Si(111)-(7$\times$7) structure \cite{Takayanagi85} and break the (1$\times$1) periodicity as much as the identified beam-splitting atom on top of the tetramer. Consequently, one might also concede them the {\em same} holographic interpretation. In the sketch of the Kulakov model in Fig.~4(b) it can be seen that the vacancy is even enlarged by the removal of one adcluster from the DAS model, cf. Fig.~4(a).
It may appear difficult at first glance to imagine a vacancy as a possible beam-splitter, but speaking in terms of missing wave contributions to achieve the destructive interference corresponding to a perfect (1$\times$1) adlayer helps to understand its influence on the fractional order beams. This would be equivalent to replacing the vacancy by a pseudo-adatom with the same dynamic scattering behaviour. From this point of view, we would have to conclude that the Kulakov model contains various distinct beam-splitters per surface unit cell, whose respective wave contributions could consequently interfere and completely prohibit a holographic interpretation. However, the strong damping of the low energy electrons makes us hope that the dominant contribution in the reconstructed image is due to the most elevated, periodicity breaking atom in the surface unit cell and that the existence of further extra atoms or vacancies in the superstructure unit cell leads ``only'' to image disturbances, although they might be considerable. This interpretation would help to understand why Fig.~5(a) basically shows the local environment of the top tetramer atom plus artefacts and some distortions that may then be due to the cornerhole vacancies. To test this line of thought, we simulated LEED I(V) data of exactly the same model as before, but filling the vacancies with Si atoms at bulk-like positions in the Si adlayer as shown in Fig.~4(c). The resulting image in Fig.~5(b) is of impressive clarity and contains all essential features of the result obtained with the experimental data. We take this as a strong indication of the correctness of our reasoning, although we want to stress again that there is as yet no {\em proper} theoretical treatment of the multiple beam-splitter difficulty in LEED holography. It should further be noted that in our test model, cf. Fig.~4(c), we only filled the Si monolayer.
The adatom-supporting trimer atoms are still not repeated with the bulk periodicity and thus break the (1$\times$1) periodicity, too. \section{Influence of substrate reconstructions} What had started as a pure necessity to ensure the correct working of the holographic algorithm subsequently turned out to be the last required piece for the solution of the (3$\times$3) puzzle. Since LEED holography seemed only fully consistent with a Kulakov-derived model containing one tetramer in local stacking fault orientation on a closed Si adlayer without cornerholes, a careful reconsideration then showed that indeed there had been no other reason for including the cornerholes in the first place than the sole analogy to the DAS model. The situation for SiC(111)-(3$\times$3), where the Si adlayer shows an intrinsic lattice mismatch of 20~\% with respect to the underlying substrate, might however require a different form of relaxing the lattice strain under simultaneous dangling bond saturation than the cornerhole and dimerization principle underlying the homoepitactic Si(111)-(7$\times$7). As a further hint, the obtained STM images of the (3$\times$3) phase \cite{Starke98_2,Starke98_1} never showed cornerhole depressions as strong as those visible on the silicon surface \cite{Becker85}. Consequently, the thus most favoured model with filled cornerholes was input to a refining LEED and Density Functional Theory (DFT) analysis. Even though the holographic results had considerably reduced the multi-parameter space for the trial-and-error search of both methods, it should, however, be emphasized that the remaining structure determination was still a tremendous task: there is a qualitative difference between a coarse local beam-splitter surrounding that depicts a small fraction of the huge surface unit cell and a detailed variation of all involved atomic positions on a dense grid in steps of a few hundredths of an {\AA}ngstr{\"o}m.
It was in this respect most gratifying that both analyses independently yielded the same full (3$\times$3) structure using the holographically recovered cluster as input: the resulting final {\em twist} model \cite{Starke98_1} can indeed essentially be described as a SiC substrate with a strongly buckled Si adlayer without any cornerholes plus one tetramer per surface unit cell consisting of a trimer and one adatom. These trimer atoms and the Si adlayer below locally resemble a Si bilayer in stacking fault orientation. A more detailed description of the exact model and both analyses involved will be given elsewhere \cite{Schardt98,Furthmuller98}. \begin{figure} \epsfxsize=0.5\textwidth \epsfbox{pics/fig6.ps} \caption{Same as Fig.~2, but using simulated I(V) curves of the final twist model as described in section VII. The electron energy range was 65-300~eV, the kernel constant $C=0.75$~{\AA} and the maximum noise level is at 51~\% (noise cut-off: 25~\%).} \end{figure} Yet, we have to realize that the bond-optimizing relaxations inside the adlayer also contribute to the superstructure spot intensities and might consequently affect the working of the holographic reconstruction algorithm: each atom that has left its (1$\times$1)-like position has, in principle, to be regarded as a possible additional beam-splitter in view of the discussion in section VI. Therefore, as a final test, we used the I(V) curves calculated for the exact geometry of the optimized model as input to the holographic algorithm. The resulting image is displayed in Fig.~6 and shows exactly the same local adatom environment as the experimental data, which we take, even in the presence of buckling, as a final proof of the validity of our new method.
In accordance with recent experience with other, simpler structures \cite{Reuter98}, the effect of the slight deviations from bulk-like positions on the reconstructed image is apparently negligible and leads only to an increased overall noise level, as can be seen by comparing Fig.~5(b) and Fig.~6, whose underlying structures differ exclusively by just these substrate relaxations. All this becomes more understandable when recalling that it is again only the {\em difference} in the outgoing wavelets arising from buckled atoms that can act as a conduit for diffraction intensity in the fractional order beams. The (additionally damped) contributions due to these shifts can hence be regarded as small with respect to the major rupture of periodicity caused by the introduction of a completely new and elevated atom such as the adatom in the present structure. In this context, it is also important to notice that the majority of such shifts is far below the resolution capability of LEED holography at its present stage, which can typically be stated as $\approx$ 0.6~{\AA}. \section{Conclusion} In the present paper we described in detail the contribution that holographic LEED can provide when applied to a complex superstructure. Using a holographic interpretation of fractional order spot intensities, a 3D image of the local geometry around an elevated, periodicity breaking adatom can be retrieved. The structural unit thus obtained has to be consistent with the real space atomic structure and can be used to considerably reduce the multi-parameter space and possibly enable the trial-and-error search of geometry optimizing methods like quantitative LEED and DFT energy minimization.
We exemplified this for the case of the SiC(111)-(3$\times$3) phase, where the holographically derived adcluster {\em directly} rules out the majority of all previously existing models of this surface and whose geometry has now been fully confirmed by the final {\em twist} model obtained independently by conventional LEED and DFT. This application additionally marks the first example in which a holographic inversion of LEED data actually played a crucial part in the determination of a complex and {\em a priori} unknown structure. We have illuminated the power and the limits of this new holographic LEED method for ordered surfaces. Even in the best of all cases the obtained image is restricted to the local geometry around the prominent beam-splitter. Only for very simple surfaces does this uniquely determine all atomic positions inside the unit cell. Furthermore, only for such simple test surfaces has this still-developing method been thoroughly tested so far. There, some severe problems like the multiple beam-splitter problem encountered in the present investigation usually do not arise and have therefore not yet been theoretically treated. Consequently, the systems to which holographic LEED is applied have, for the moment, to be chosen with great care. Nevertheless, regarding the immense problems that quantitative LEED and DFT face with complex, large superstructure systems, any directly obtainable pre-information is highly welcome. In this view, the possibility of a holographic approach raises hopes that a new ally in the structure analysis of these ordered phases has been found. It is particularly its {\em direct} simplicity that already now renders holographic LEED such an ideal supplement to its established trial-and-error brethren, even though the young method still has a long way to go. \section*{Acknowledgements} The authors are indebted to Prof. D.K. Saldin and Dr. P.L. de Andres for many helpful discussions.
This work was supported by the Deutsche Forschungsgemeinschaft (DFG), in particular through SFB 292.
\section{Introduction} Boris Dubrovin introduced the notion of a Frobenius manifold as a geometric realization of a potential $\mathbb F$ which satisfies a system of partial differential equations known in topological field theory as the Witten-Dijkgraaf-Verlinde-Verlinde (WDVV) equations. More precisely, a Frobenius algebra is a commutative associative algebra with an identity $e$ and a nondegenerate bilinear form $\Pi$ compatible with the product, i.e., $\Pi(a\circ b,c)=\Pi(a,b\circ c)$. A Frobenius manifold is a manifold with a smooth structure of a Frobenius algebra on the tangent space at any point with certain compatibility conditions. Globally, we require the metric $\Pi$ to be flat and the identity vector field $e$ to be covariantly constant with respect to the corresponding Levi-Civita connection. Detailed information about Frobenius manifolds and related topics can be found in \cite{DuRev}. Let $M$ be a Frobenius manifold. In flat coordinates $(t^1,\ldots,t^r)$ for $\Pi$ where $e= \partial_{t^{r}}$, the compatibility conditions imply that there exists a function $\mathbb{F}(t^1,\ldots,t^r)$ which encodes the Frobenius structure, i.e., the flat metric is given by \begin{equation} \label{flat metric} \Pi_{ij}(t)=\Pi(\partial_{t^i},\partial_{t^j})= \partial_{t^{r}} \partial_{t^i} \partial_{t^j} \mathbb{F}(t)\end{equation} and, setting $\Omega_1(t)$ to be the inverse of the matrix $\Pi(t)$, the structure constants of the Frobenius algebra are given by \[ C_{ij}^k(t)=\Omega_1^{kp}(t) \partial_{t^p}\partial_{t^i}\partial_{t^j} \mathbb{F}(t).\] Here, and in what follows, summation with respect to repeated upper and lower indices is assumed.
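As a minimal illustration of the conventions just fixed (our own remark, kept deliberately simple): in two dimensions, $r=2$, with the normalization $\Pi_{ij}=\delta_{i+j}^{3}$, any potential of the form below reproduces the constant antidiagonal metric \eqref{flat metric}, since only the mixed third derivative survives.

```latex
% Two-dimensional sketch (r=2): e = \partial_{t^2} and \Pi_{ij}=\delta_{i+j}^{3}.
% Any function G of t^1 alone is admissible at this stage.
\[
\mathbb F(t^1,t^2)=\tfrac{1}{2}\,t^1\,(t^2)^2+G(t^1),
\]
\[
\Pi_{11}=\partial_{t^2}\partial_{t^1}\partial_{t^1}\mathbb F=0,\qquad
\Pi_{12}=\partial_{t^2}\partial_{t^1}\partial_{t^2}\mathbb F=1,\qquad
\Pi_{22}=\partial_{t^2}\partial_{t^2}\partial_{t^2}\mathbb F=0.
\]
```

In this dimension the associativity constraints are vacuous, so the function $G$ is restricted only by the quasihomogeneity condition discussed next.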
The definition includes the existence of a vector field $E$ of the form $E=(a_i^j t^i+b^j)\partial_{t^j}$ satisfying \begin{equation} \label{quasihomog1} E\mathbb F(t)= \left(3-d \right) \mathbb{F}(t)+ \frac{1}{2}A_{ij} t^i t^j+B_i t^i+c \end{equation} where $a_i^j$, $b_j$, $c$, $A_{ij}$, $B_i$ and $d$ are constants with $a_r^r=1$. The vector field $E$ is called the Euler vector field and the number $d$ is called the charge of the Frobenius manifold. The associativity of the Frobenius algebra implies that the potential $\mathbb{F}(t)$ satisfies the WDVV equations \begin{equation} \label{frob} \partial_{t^i} \partial_{t^j} \partial_{t^k} \mathbb{F}(t)~ \Omega_1^{kp} ~\partial_{t^p} \partial_{t^q} \partial_{t^n} \mathbb{F}(t) = \partial_{t^n} \partial_{t^j} \partial_{t^k} \mathbb{F}(t) ~\Omega_1^{kp}~\partial_{t^p} \partial_{t^q} \partial_{t^i} \mathbb{F}(t),~~ \forall i,j,q,n. \end{equation} Conversely, an arbitrary potential $\mathbb F(t^1,\ldots, t^r)$ satisfying equations \eqref{frob} and \eqref{quasihomog1} with \eqref{flat metric} determines a Frobenius manifold structure on its domain \cite{DuRev}. Moreover, there exists a quasihomogeneous flat pencil of metrics (QFPM) of degree $d$ associated to the Frobenius structure on $M$ which consists of the intersection form ${\Omega}_2$ and the flat metric $\O_1$ with the function $\tau=\Pi_{i1}t^i$ (see definition \ref{FPM} below). Here \begin{equation} \label{intersection form} \O_2^{ij}(t):=E(dt^i\circ dt^j) \end{equation} where the product $dt^i\circ dt^j$ is defined by lifting the product on $TM$ to $T^*M$ using the flat metric $\O_1$. In this article we prove that, when $d\neq 1$, $e(\tau)=0$ and $E(\tau)=(1-d)\tau$, we can construct another QFPM of degree $2-d$ on $M$ consisting of the intersection form $\O_2$ and a different flat metric $\widetilde\O_1$. We call it the conjugate QFPM.
In particular, under a specific regularity condition, we get a conjugation between a certain type of Frobenius manifold structures on a given manifold. Precisely, we prove the following theorem. \begin{theorem} \label{dual Frob manif} Let $M$ be a Frobenius manifold with the Euler vector field $E$ and the identity vector field $e$. Suppose the associated QFPM is regular of degree $d$ with a function $\tau$. Assume that $e(\tau)=0$ and $E(\tau)=(1-d)\tau$. Then we can construct another Frobenius manifold structure on $M\backslash \{\tau=0\}$ of degree $2-d$. Moreover, we can apply the same method to the new Frobenius manifold structure and it leads back to the original Frobenius manifold structure. \end{theorem} For a fixed Frobenius manifold, the new structure that can be obtained using Theorem \ref{dual Frob manif} will be called the conjugate Frobenius manifold structure. Let us assume $\Pi_{ij}=\delta_{i+j}^{r+1}$, i.e., the potential $\mathbb F$ has the standard form \begin{equation} \label{norm potential} \mathbb F(t) = \frac{1}{2} (t^r)^2 t^1 + \frac{1}{2} t^r \sum_{i=2}^{r-1} t^i t^{r-i+1} + G(t^1,...,t^{r-1}) \end{equation} and the quasihomogeneity condition \eqref{quasihomog1} takes the form \begin{equation}\label{quasihomog} E= d_i t^i\partial_{t^i},~ E\mathbb F(t)= \left(3-d \right) \mathbb{F}(t);~~d_{r}=1.\end{equation} Here, the numbers $d_i$ are called the degrees of the Frobenius manifold. Recall that a symmetry of the WDVV equations is a transformation of the form \[ t^i\mapsto z^i, ~ {\Pi}\mapsto \widetilde{\Pi},~ \mathbb F\mapsto \widetilde{\mathbb F} \] such that $\widetilde{\mathbb F}$ satisfies the WDVV equations. The inversion symmetry (\cite{DuRev}, Appendix B) is an involutive symmetry given by setting \begin{equation}\label{Dob coord} z^1=-\frac{1}{t^1},~z^r=\Pi_{ij}(t)\frac{t^i t^j}{2 t^1},~ z^k=\frac{t^k}{t^1}, ~2\leq k< r.
\end{equation} Then \begin{equation}\label{inv potential} \widetilde{\mathbb F}(z) :=(t^1)^{-2}\left( \mathbb F(t)-\frac{1}{2}t^r \Pi_{ij}t^i t^j\right) \end{equation} is another solution to the WDVV equations with the flat metric $\widetilde\Pi_{ij}(z)=\delta_{i+j}^{r+1}$. The charge of the corresponding Frobenius manifold structure is $2-d$ and the degrees are \begin{equation} \label{degrees inv} \widetilde{d}_1=-d_1,\ \ \widetilde{d}_r=1, \ \ \widetilde{d}_i = d_i-d_1 \ \ for \ \ 1< i < r.\end{equation} These degrees can be checked directly: applying $E=d_i t^i\partial_{t^i}$ to \eqref{Dob coord} gives $E z^1=-d_1 z^1$ and $E z^k=(d_k-d_1) z^k$ for $1<k<r$, while the standard form \eqref{norm potential} together with \eqref{quasihomog} forces $d_i+d_{r-i+1}=2-d$, so that $E z^r=(2-d-d_1)z^r=z^r$. The inversion symmetry is obtained from a special Schlesinger transformation of the system of linear ODEs with rational coefficients associated to the WDVV equations. A geometric relation between the Frobenius manifold structures corresponding to $\mathbb F(t)$ and $\widetilde{\mathbb F}(z)$ was outlined through the sophisticated notion of Givental groups in \cite{givental}. In this article, we obtain a simple geometric interpretation and we report that $\widetilde{\mathbb F}(z)$ is the potential of the conjugate Frobenius manifold structure. In other words, we prove the following theorem. \begin{theorem} \label{main thm} Let $M$ be a Frobenius manifold with charge $d\neq 1$. Suppose in the flat coordinates $(t^1,\ldots,t^r)$, the potential $\mathbb F(t)$ has the standard form \eqref{norm potential} and the quasihomogeneity condition takes the form \eqref{quasihomog} with $d_i\neq \dfrac{d_1}{2}$ for every $i$. Then we can construct the conjugate Frobenius manifold structure on $M\backslash\{t^1=0\}$. Moreover, flat coordinates for the conjugate Frobenius manifold are \begin{equation} \label{change coord} s^1= -t^1 , \ \ s^i= t^i (t^1)^{\frac{d_1-2d_i}{d_1}}\ \ for \ \ 1<i< r, \ \ s^r= \frac{1}{2} \sum_{i=1}^{r} t^i t^{r-i+1} (t^1)^{\frac{-2}{d_1}-1}.
\end{equation} In addition, the corresponding potential equals the potential obtained by applying the inversion symmetry to $\mathbb F(t)$ and it is given by \begin{equation} \widetilde{\mathbb F}(s) = (t^1)^{ \frac{-4}{d_1}} \left(\mathbb F(t^1,\ldots,t^r) -\frac{1}{2} t^r \sum_{i=1}^{r} t^i t^{r-i+1}\right). \end{equation} \end{theorem} Examples of Frobenius manifolds satisfying the hypotheses of Theorem \ref{main thm} include the Frobenius manifold structures constructed on orbit spaces of standard reflection representations of irreducible Coxeter groups in \cite{DCG} and \cite{polyZuo} and the algebraic Frobenius manifolds constructed using classical $W$-algebras \cite{mypaper5}. In fact, the result presented in this article is a consequence of the work \cite{dicy} and \cite{nonref}. There, we investigated the existence of Frobenius manifold structures on orbit spaces of some non-reflection representations of finite groups and we noticed that certain structures appear in pairs. Analyzing such pairs led us to the notion of the conjugate Frobenius manifold. This article is organized as follows. In section \ref{flat frob}, we review the relation between Frobenius manifolds, flat pencils of metrics and compatible Poisson brackets of hydrodynamic type. Then we introduce a conjugacy relation between a certain class of quasihomogeneous flat pencils of metrics in section \ref{dualityFrob}. It can be interpreted as a conjugacy relation between a certain class of compatible Poisson brackets of hydrodynamic type. We prove Theorem \ref{dual Frob manif} in section \ref{dualityFrob} and Theorem \ref{main thm} in section \ref{relation F and new F}. In section \ref{first section}, we discuss the findings of this article on polynomial Frobenius manifolds. We end the article with some remarks. \section{Background} \label{flat frob} We review in this section the relation between flat pencils of metrics, compatible Poisson brackets of hydrodynamic type and Frobenius manifolds.
More details can be found in \cite{Du98}. Let $M$ be a smooth manifold of dimension $r$ and fix local coordinates $(u^1,\ldots,u^r)$ on $M$. \begin{definition} \label{contra metric} A symmetric bilinear form $(. ,. )$ on $T^*M$ is called a contravariant metric if it is invertible on an open dense subset $M_0 \subseteq M$. We define the contravariant Christoffel symbols $\Gamma^{ij}_k$ for a contravariant metric $(. ,. )$ by \[ \Gamma^{ij}_k:=-\Omega^{im} \Gamma_{mk}^j \] where $\Gamma_{mk}^j$ are the Christoffel symbols of the metric $<. ,. >$ defined on $TM_0$ by the inverse of the matrix $\Omega^{ij}(u)=(du^i, du^j)$. We say the metric $(.,.)$ is flat if $<. ,. >$ is flat. \end{definition} Let $(. ,. )$ be a contravariant metric on $M$ and set $\O^{ij}(u)=(du^i, du^j)$. Then we will use $\Omega$ to refer to the metric and $\Omega(u)$ to refer to its matrix in the coordinates. In particular, the Lie derivative of $(. ,. )$ along a vector field $X$ will be written $\mathfrak{L}_X \Omega$ while $X\Omega^{ij}$ means the vector field $X$ acting on the entry $\Omega^{ij}$. The Christoffel symbols given in definition \ref{contra metric} determine for $\O$ the contravariant (resp. covariant) derivative $\nabla^{i}$ (resp. $\nabla_{i}$) along the covector $du^i$ (resp. the vector field $\partial_{u^i}$). They are related by the identity $\nabla^{i}=\O^{ij}(u) \nabla_{j}$. \begin{definition} A flat pencil of metrics (FPM) on $M$ is a pair $(\Omega_2,\Omega_1)$ of two flat contravariant metrics $\O_2$ and $\O_1$ on $M$ satisfying \begin{enumerate} \item $\O_2+\lambda \O_1$ defines a flat metric on $T^*M$ for a generic constant $\lambda$, \item the Christoffel symbols of $\O_2+\lambda \O_1$ are $\Gamma_{2k}^{ij}+\lambda \Gamma_{1k}^{ij}$, where $\Gamma_{2k}^{ij}$ and $ \Gamma_{1k}^{ij}$ are the Christoffel symbols of $\O_2$ and $\O_1$, respectively.
\end{enumerate} \end{definition} \begin{definition} \label{FPM} A flat pencil of metrics $(\O_2,\O_1)$ on $M$ is called a quasihomogeneous flat pencil of metrics (QFPM) of degree $d$ if there exists a function $\tau$ on $M$ such that the vector fields $E$ and $e$ defined by \begin{eqnarray} \label{tau flat pencil} E&=& \nabla_2 \tau, ~~E^i =\O_2^{ij}(u)\partial_{u^j}\tau \\\nonumber e&=&\nabla_1 \tau, ~~e^i = \O_1^{ij}(u)\partial_{u^j}\tau \end{eqnarray} satisfy \begin{equation} \label{vector fields} [e,E]=e,~~ \mathfrak{L}_E \O_2 =(d-1) \O_2,~~ \mathfrak{L}_e \O_2 = \O_1~~\mathrm{and}~~ \mathfrak{L}_e\O_1 =0. \end{equation} Such a QFPM is \textbf{regular} if the (1,1)-tensor \begin{equation}\label{regcond} R_i^j = \frac{d-1}{2}\delta_i^j + {\nabla_1}_i E^j \end{equation} is nondegenerate on $M$. \end{definition} Let $(\O_2,\O_1)$ be a QFPM of degree $d$. Then according to \cite{Du98}, we can fix flat coordinates $(t^1,t^2,\ldots,t^r)$ for $\O_1$ such that \begin{equation} \label{gamma id} \tau=t^1, \ E^i= \Omega_2^{i1}, \ e^i= \Omega_1^{i1}, \ \ \Gamma_{1,k}^{ij}=0, \ \ \Gamma_{2,k}^{i1}= \frac{1-d}{2} \delta_k^i, \ \ \Gamma_{2,k}^{1j}= \frac{d-1}{2}\delta_k^j+ \partial_{t^k} E^j,\ \ \partial_{t^1} E^1=1-d. \end{equation} Moreover, if $(\O_2,\O_1)$ is regular then $d\neq 1$. Consider the loop space $\lop M$ of $M$, i.e., the space of smooth maps from the circle $S^1$ to $M$. A local Poisson bracket on $\lop M$ is a Lie algebra structure on the space of local functionals on $\lop M$. Let $\{.,.\}$ be a local Poisson bracket of hydrodynamic type (PBHT), i.e., it has the following form in local coordinates \cite{Du98} \begin{equation}\label{poisson} \{u^i(x),u^j(y)\}= \Omega^{ij}(u(x))\delta' (x - y) + \Gamma_k^{ij} (u(x)) u_x^k \delta (x-y), \, i,j=1,\ldots,r\end{equation} where $\delta(x-y)$ is the Dirac delta function defined by $\int_{S^1} f(y) \delta(x-y) dy=f(x)$.
Then we say $\{.,.\}$ is nondegenerate if $\det \Omega^{ij}\neq 0$ and the Lie derivative of $\{.,.\}$ along a vector field $X:=X^i\partial_{u^i}$ reads \begin{align*} \mathfrak{L}_X\{.,.\}(u^i(x),u^j(y))&= (X^s\partial_{u^s}\Omega^{ij}- \Omega^{s j}\partial_{u^s}X^{i}-\Omega^{is}\partial_{u^s} X^{j})\delta'(x-y)\\\nonumber &+(X^s \partial_{u^s}\Gamma_k^{ij}-\Gamma_k^{sj} \partial_{u^s}X^i-\Gamma^{i s}_k\partial_{u^s}X^j+\Gamma_s^{i j} \partial_{u^k}X^s-\Omega^{i s}\partial_{u^s}\partial_{u^k} X^j)u_x^k\delta(x-y). \end{align*} We will use the following two theorems. \begin{theorem}\label{serg}\cite{serg} Let $X$ be a vector field on $M$ and $\{ . , .\}$ be a PBHT on $\lop M$. If $\mathfrak{L}_X^2 \{.,.\}=0$, then $\mathfrak{L}_X \{.,.\}$ is a PBHT and it is compatible with $\{.,.\}$, i.e., $\{.,.\} + \lambda \mathfrak{L}_X \{. , .\}$ is a PBHT for every constant $\lambda$. \end{theorem} \begin{theorem}\label{Du and Nov}\cite{Du and Nov} The form \eqref{poisson} defines a nondegenerate PBHT $\{ . , .\}$ if and only if the matrix $\Omega^{i j}(u)$ defines a flat contravariant metric on $M$ and $\Gamma_k^{i j}(u)$ are its Christoffel symbols. \end{theorem} From Theorem \ref{Du and Nov} and Theorem \ref{serg}, we get the following corollary: \begin{corollary} \label{FPM and PBHT} Let $\{.,.\}_2$ and $\{.,.\}_1$ be two nondegenerate compatible PBHT on $\lop M$ having the form \[ \{u^i(x),u^j(y)\}_\alpha = \Omega_\alpha^{ij} (u(x))\delta' (x-y) + \Gamma_{\alpha,k}^{ij} (u(x)) u_x^k \delta (x-y),~ \alpha=1,2.\] Suppose $\{.,.\}_2+\lambda \{.,.\}_1$ is a nondegenerate PBHT for a generic constant $\lambda$. Then $(\Omega_2,\Omega_1)$ is a FPM on $M$. Conversely, a FPM on $M$ determines nondegenerate compatible Poisson brackets of hydrodynamic type on $\lop M$.
\end{corollary} As mentioned in the introduction, if $M$ is a Frobenius manifold of charge $d$ then there is an associated QFPM $(\O_2,\O_1)$ of degree $d$ on $M$, where $\O_2$ is the intersection form and $\O_1$ is the flat metric. In the flat coordinates $(t^1,\ldots,t^r)$ we have $\tau= \Pi_{i 1} t^i$. Then the Euler vector field $E$ and the identity vector field $e$ of the Frobenius manifold have the form \eqref{tau flat pencil} and satisfy equations \eqref{vector fields}. The following theorem gives a converse statement. \begin{theorem}\cite{Du98}\label{dub flat pencil} Let $M$ be a manifold carrying a regular QFPM $(\Omega_2,\Omega_1)$ of degree $d$. Then there exists a unique Frobenius manifold structure on $M$ of charge $d$ where $(\Omega_2,\Omega_1)$ is the associated QFPM. \end{theorem} \section{Conjugate Frobenius manifold} \label{dualityFrob} We fix a manifold $M$ with a QFPM $T=(\Omega_2,\Omega_1)$ of degree $d\neq 1$. We fix a function $\tau$ for $T$ which determines the vector fields $E$ and $e$ (see definition \ref{FPM}). We suppose \begin{equation}\label{new condition} e(\tau)=0 \ \ \textrm{and} \ \ E(\tau)= (1-d) \tau.\end{equation} We introduce the function $f(\tau):=\tau^{\frac{2}{1-d}}$ and the vector field $ \widetilde{e} := f(\tau) e$. We define \begin{equation} \widetilde{\Omega}_1: = \mathrm {Lie}_{\widetilde{e}} \Omega_2 = f \Omega_1 -f'(E \otimes e + e \otimes E). \end{equation} Then \begin{align}\label{new o} \mathrm {Lie}^2_{ \widetilde{e}} {\Omega}_2 & = f^2 (\mathrm {Lie}_e^2 \Omega_2) + (2(f')^2 E(\tau) -4f f')e\otimes e +f f'e(\tau) \Omega_1\\ \nonumber &+ ((f')^2- f f'') e(\tau)(E\otimes e+e\otimes E)=0. \end{align} Here the first term vanishes since $\mathrm {Lie}_e^2 \Omega_2=\mathfrak{L}_e\Omega_1=0$, the terms containing $e(\tau)$ vanish by \eqref{new condition}, and the remaining term vanishes because $\tau f'=\frac{2}{1-d} f$ and $E(\tau)=(1-d)\tau$ give $2(f')^2 E(\tau)=4 f f'$. We fix flat coordinates $(t^1,\ldots,t^r)$ leading to the identities \eqref{gamma id}. Considering the condition \eqref{new condition}, we will further assume that $e=\partial_{t^r}$. Thus \begin{equation} \Omega_1^{i1}=\delta^{i}_r, \ \ \partial_{t^r} \Omega_2^{i1}=\partial_{t^r}E^i=\delta^{i}_r.
\end{equation} Let $\{.,.\}$ denote the nondegenerate PBHT associated to $\Omega_2$. Then by Corollary \ref{FPM and PBHT}, $\mathrm {Lie}_{e}\{.,.\}$ is the PBHT associated to $\O_1$ and $\mathrm {Lie}_{e}^2\{.,.\}=0$. We have a similar statement for $\widetilde e$. \begin{proposition}\label{PBHT} $\mathrm {Lie}_{\widetilde{e}}^2 \{.,.\}=0$. In particular, $\mathrm {Lie}_{\widetilde{e}} \{.,.\}$ is a PBHT compatible with $\{.,.\}$. \end{proposition} \begin{proof} The PBHT associated to $\Omega_2$ has the form \[ \{t^\alpha(x),t^\beta(y)\} =\Omega_2^{\alpha \beta} \delta' (x-y) + \Gamma_{2,\gamma}^{\alpha \beta} t^\gamma_x \delta(x-y).\] Here and in what follows, it is to be understood that all functions on the right hand side depend on $t(x)$. Note that \[ \mathrm {Lie}_{\widetilde e}\{.,.\}(t^\alpha(x),t^\beta(y))=\widetilde \O_1^{\alpha\beta}\delta'(x-y)+\widetilde\Gamma^{\alpha \beta}_{2,\gamma} t_x^\gamma \delta(x-y) \] where \begin{align*} \widetilde \Gamma_{2,\gamma}^{\alpha \beta}& = { \widetilde{e}}^\varepsilon \partial_\varepsilon \Gamma_{2,\gamma}^{\alpha \beta}- \Gamma_{2,\gamma}^{\varepsilon \beta} \partial_\varepsilon { \widetilde{e}}^\alpha - \Gamma_{2,\gamma}^{\alpha \varepsilon} \partial_\varepsilon { \widetilde{e}}^\beta + \Gamma_{2,\varepsilon}^{\alpha \beta} \partial_\gamma { \widetilde{e}}^\varepsilon - \Omega^{\alpha \varepsilon}_2 \partial_{\varepsilon \gamma}^2 { \widetilde{e}}^\beta\\ & = - \Gamma_{2,\gamma}^{\varepsilon \beta} \delta_r^\alpha \delta_\varepsilon^1 f' - \Gamma_{2,\gamma}^{\alpha \varepsilon} \delta_r^\beta \delta_\varepsilon^1 f' + \Gamma_{2,\varepsilon}^{\alpha \beta} \delta_r^\varepsilon \delta_\gamma^1 f' - \Omega^{\alpha \varepsilon}_2 \delta_r^\beta \delta_\gamma^1 \delta_\varepsilon^1 f''.
\end{align*} From equation \eqref{new o}, the coefficients of $\delta'(x-y)$ of $\mathrm {Lie}_{ \widetilde{e}}^2 \{.,.\}$ vanish while the coefficients $\widetilde{\widetilde{\Gamma}}_{2,\gamma}^{\alpha \beta}$ of $\delta(x-y)$ have the form \begin{align*} \widetilde{\widetilde{\Gamma}}_{2,\gamma}^{\alpha \beta}=& -f f'' \partial_r \Omega_2^{\alpha \varepsilon} \delta_r^\beta \delta_\gamma^1 \delta_\varepsilon^1 + f'^2 \delta_r^\alpha \delta_r^\beta \delta_m^1 \delta_\varepsilon^1 \Gamma_{2,\gamma}^{m \varepsilon} - f'^2 \delta_r^\beta \delta_r^m \delta_\gamma^1 \delta_\varepsilon^1 \Gamma_{2,m}^{\alpha \varepsilon } \\ & + f'^2 \delta_r^\beta \delta_r^\alpha \delta_\varepsilon^1 \delta_m^1 \Gamma_{2,\gamma}^{\varepsilon m }- f'^2 \delta_r^\alpha \delta_r^m \delta_\gamma^1 \delta_\varepsilon^1 \Gamma_{2,m}^{\varepsilon \beta}+ f' f'' \Omega_2^{\varepsilon m}\delta_\varepsilon^1 \delta_r^\alpha \delta_r^\beta \delta_\gamma^1 \delta_m^1\\ & - f'^2 \delta_\gamma^1 \delta_r^\beta \delta_m^1 \delta^\varepsilon_r \Gamma_{2,\varepsilon}^{\alpha m} - f'^2 \delta_\gamma^1 \delta_r^\alpha \delta_m^1 \delta^\varepsilon_r \Gamma_{2,\varepsilon}^{ m \beta} -\widetilde{\Omega}_1^{\alpha \varepsilon} \delta_r^\beta \delta_\gamma^1 \delta_\varepsilon^1 f''. \end{align*} Then from the identities \eqref{gamma id} and the definition of $f(\tau)$, it follows that $\widetilde{\widetilde{\Gamma}}_{2,\gamma}^{\alpha \beta}=0$.
For example, \begin{align*} \widetilde{\widetilde{\Gamma}}_{2,1}^{r r}& = - f f'' \partial_r \Omega_2^{r 1} + f'^2 \Gamma_{2,1}^{1 1}- f'^2 \Gamma_{2,r}^{1 r}+ \Omega_2^{1 1} f'' f'+ f'^2 \Gamma_{2,1}^{1 1}- f'^2 \Gamma_{2,r}^{r 1}- f'^2 \Gamma_{2,r}^{r 1} - f'^2 \Gamma_{2,r}^{r 1}- \widetilde{\Omega}_1^{r 1} f''\\ &= -(d+1) f'^2 +(1-d) \tau f'f'' -f f'' -(-f) f''=0 \end{align*} and when $\gamma =1$, $\alpha=r$ and $\beta \neq r$, \begin{align*} \widetilde{\widetilde{\Gamma}}_{2,1}^{r \beta}&= - 2 f'^2 \Gamma_{2,r}^{1 \beta}= - 2 f'^2 (\frac{d-1}{2}\delta^\beta_r+ \partial_{t^r} E^\beta)=0.\end{align*}\end{proof} \begin{lemma}\label{new flat metrics} The pair $\widetilde T=(\O_2,\widetilde{\Omega}_1)$ forms a QFPM of degree $\widetilde{d}=2-d$. Moreover, if $T$ is regular then $\widetilde T$ is regular. \end{lemma} \begin{proof} The second term of the identity \[\widetilde{\Omega}_1(t)= f \Omega_1 -f' E^i (\partial_{t^i} \otimes \partial_{t^r} + \partial_{t^r} \otimes \partial_{t^i})\] contributes only to entries of the last row and last column of $ \widetilde{\Omega}_1(t)$. From the normalization of $\Omega_1$, we get \[\widetilde{ \Omega}_1^{i1}(t)=(f-f' E(\tau)) \delta^i_r= (f- (1-d) \tau f') \delta^i_r= (- f) \delta^i_r. \] Therefore, \[ \det \widetilde{\Omega}_1(t)= f^r \det \Omega_1(t) \neq 0. \] Hence, using Proposition \ref{PBHT} and Corollary \ref{FPM and PBHT}, $\widetilde T$ is a FPM. Let $\widetilde \nabla$ denote the contravariant (and also the covariant) derivative of $\widetilde \O_1$ and set $ \widetilde{\tau}:=-\tau=-t^1$. Then the vector fields \[\widetilde{e}:=\widetilde{\nabla}_1 \widetilde{\tau} \ \ \text{and} \ \ \widetilde{E}:={\nabla}_2 \widetilde{\tau}=-E\] satisfy equations \eqref{vector fields} and \begin{equation} \mathrm {Lie}_{\widetilde{E}} \Omega_2 = \mathrm {Lie}_{-E} \Omega_2= -(d-1)\Omega_2= (\widetilde{d}-1) \Omega_2.\end{equation} Hence, $\widetilde T$ is a QFPM of degree $\widetilde{d}=2-d$.
For the regularity condition \eqref{regcond}, we have \begin{equation} \widetilde R_i^j(t) = \frac{\widetilde{d}-1}{2}\delta_i^j + \widetilde{\nabla}_{1i} (-E^j)=\frac{1-d}{ 2} \delta_i^j-{\nabla}_{1 i} (E^j) =- R_i^j(t). \end{equation} Therefore, $\det (\widetilde R_i^j) \neq 0$ if and only if $\det (R_i^j) \neq 0$. \end{proof} We keep the definitions $\widetilde \tau=-\tau$ and $\widetilde E=-E$ given in the proof of Lemma \ref{new flat metrics} and we call $\widetilde T=(\O_2,\widetilde{\Omega}_1)$ the conjugate QFPM of $T$. The name is motivated by the following corollary. \begin{corollary}\label{cor on duality} $\widetilde T$ has a conjugate and it equals $T$. \end{corollary} \begin{proof} We observe that $\widetilde d=2-d\neq 1$ and the function $\widetilde{\tau}=-\tau$ satisfies the requirements \eqref{new condition} as \begin{equation} \widetilde{e} (\widetilde{\tau})=0 \ \ \text{and} \ \ \widetilde{E} ( \widetilde{\tau})= -E (-t^1)=(1-d) t^1= (1- \widetilde{d}) \widetilde{\tau}.\end{equation} However, applying Lemma \ref{new flat metrics} to $\widetilde T$, we get a QFPM $(\O_2,\mathrm {Lie}_{\widetilde{\widetilde{e}}}\O_2)$ where \[ \widetilde{ \widetilde{e}}=f( \widetilde{\tau}) \widetilde{e}= \widetilde{\tau}^{\frac{2}{1- \widetilde{d}}} \,\widetilde{e}= (t^1)^{\frac{2}{1- \widetilde{d}}}.(t^1)^{\frac{2}{1-{d}}} \partial_{t^r}=e. \] \end{proof} Now we can prove theorem \ref{dual Frob manif}. \begin{proof}[Proof of Theorem \ref{dual Frob manif}] From the work in \cite{Du98}, regularity of the associated QFPM implies that the charge $d\neq 1$. Then the proof follows from applying Lemma \ref{new flat metrics}, Corollary \ref{cor on duality} and Theorem \ref{dub flat pencil} to the associated regular QFPM. \end{proof} For a fixed Frobenius manifold, the new Frobenius manifold structure constructed using Theorem \ref{dual Frob manif} will be called the conjugate Frobenius manifold structure. 
\begin{example} \label{example dim 2} We consider the Frobenius manifold structure of charge $-1$ defined by the following solution to the WDVV equations. \[ \mathbb F=\frac{1}{2} t_2^2 t_1 + t_1^2\log t_1 \] In the examples, we use subscript indices instead of superscript indices for convenience. Here, the identity vector field $e=\partial_{t_2}$ and the Euler vector field $E=2 t_1 \partial_{t_1}+t_2 \partial_{t_2}$. Note that $E\mathbb F=(3-d) \mathbb F + 2 t_1^2$. The corresponding regular QFPM consists of \begin{equation} \Omega_2(t) =\left( \begin{array}{cc} 2 t_1 & t_2 \\ t_2 & 4 \\ \end{array} \right),~\Omega_1(t) =\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \\ \end{array} \right). \end{equation} The conjugate QFPM $\widetilde T=(\O_2,\widetilde \O_1)$ is of degree $\widetilde d=3$. In the coordinates \[ s_1=-t_1,\ \ s_2=\frac{t_2}{t_1} \] we have \[\O_2(s)=\left( \begin{array}{cc} -2s_1 & s_2 \\ s_2 & \frac{4}{s_1^2}\\ \end{array} \right), ~\widetilde\O_1(s)=\left( \begin{array}{cc} 0 & 1 \\ 1 & 0\\ \end{array} \right)\] and the potential of the conjugate Frobenius manifold structure has the form \[ \widetilde{\mathbb F}=\frac{1}{2} s_1 s_2^2- \log s_1. \] Note that the Euler vector field $\widetilde{E}=-E(s)=-2s_1 \partial_{s_1}+s_2 \partial_{s_2}$ and $\widetilde E \widetilde{\mathbb F}=(3-\widetilde d)\widetilde{\mathbb F}+2$. We observe that applying the inversion symmetry to the potential $\mathbb F(t)$, we get \[\widehat{\mathbb F}(z)=\frac{1}{2} z_1 z_2^2- \log z_1+~ \text{constant} \] and $\widehat{\mathbb F}(z)$ defines the same conjugate Frobenius manifold structure. We prove this for a certain type of Frobenius manifold in the next section. \end{example} Let us assume $E$ has the form $E=d_i t^i \partial_{t^i}$. Then $d_1=1-d$ and we have the following standard results. \begin{corollary} \label{reg bcz E} $T$ is a regular QFPM if and only if $d_i \neq \frac{d_1}{2}$ for all $i$. 
\end{corollary} \begin{proof} The statement follows by applying Definition \ref{FPM} to the matrix $R_i^j(t)=(\frac{d-1}{2}+d_i) \delta_i^j=(-\frac{d_1}{ 2}+d_i) \delta_i^j$. \end{proof} \begin{lemma}\label{degrees} If $\Omega_1^{i j} \neq 0$, then $d_i+d_j =2-d$. Thus, if the numbers $d_i$ are all distinct then we can choose the coordinates $(t^1,\ldots,t^r)$ such that $\Omega_1^{i j}=\delta^{i+j}_{r+1}$. \end{lemma} \begin{proof} Notice that using $[e,E]=e$, we get $\mathrm {Lie}_E \Omega_1 = (d-2) \Omega_1$. Then the statement follows from the equation \[ (d-2) \Omega_1^{ij}(t)=\mathrm {Lie}_E \Omega_1 (dt^i,dt^j) = -d_i \Omega_1 (dt^i,dt^j)-d_j \Omega_1 (dt^i,dt^j).\] \end{proof} \section{Relation with inversion symmetry} \label{relation F and new F} We continue using the notation and assumptions of the previous section, but we suppose that $T$ is regular. Consider the Frobenius manifold structure defined on $M$ by Theorem \ref{dub flat pencil} and let $\mathbb F(t)$ be the corresponding potential. We assume $\Omega^{ij}_1(t)=\delta^{i+j}_{r+1}$, which is equivalent to requiring that $\mathbb F(t)$ has the standard form \eqref{norm potential}. We suppose further that the quasihomogeneity condition for $\mathbb F(t)$ takes the form \eqref{quasihomog}. In this case the intersection form $\O_2$ satisfies \cite{Du98} \begin{equation} {\Omega}_2^{ij}(t)=(d-1+d_i+d_j)\Omega^{i\alpha}_1\Omega^{j\beta}_1 \partial_{t^\alpha} \partial_{t^\beta} \mathbb{F}. \end{equation} Note that at this stage we are working under the hypothesis of Theorem \ref{main thm}. Let us consider the coordinates \eqref{change coord} on $M\backslash \{t^1=0\}$. 
Then the nonzero entries of the Jacobian matrix are \begin{align*}\label{Jvalue} \frac{\partial s^{i}}{\partial t^{1}}&=\frac{d_1-2d_i}{d_1} t^i (t^1)^{\frac{-2d_i}{d_1}},\ \ \frac{\partial s^{r}}{\partial t^{1}}= (\frac{-2 -d_1}{2 d_1}) \sum_2^{r-1} t^{i} t^{r-i+1} (t^1)^{\frac{-2 }{d_1}-2}-\frac{2}{d_1} t^r (t^1)^{\frac{-2}{d_1}-1}, \\ \nonumber \frac{\partial s^{i}}{\partial t^{i}}&= (t^1)^{\frac{d_1-2d_i}{d_1}}, \ \ \frac{\partial s^{r}}{\partial t^{i}}= t^{r-i+1} (t^1)^{\frac{-2 }{d_1}-1},\ \ \frac{\partial s^{r}}{\partial t^{r}}= (t^1)^{\frac{-2 }{d_1}}. \end{align*} \begin{proposition}\label{new flat coord} Consider the conjugate QFPM $\widetilde T=(\O_2,\widetilde \O_1)$. Then $\widetilde\tau=s^1$, $\widetilde \Omega_1^{ij}(s)=\delta^{i+j}_{r+1}$, $\widetilde{e}=\partial_{s^r}$ and $\widetilde E=\widetilde d_i s^i\partial_{s^i}$ where the numbers $\widetilde d_i$ are given in \eqref{degrees inv}. \end{proposition} \begin{proof} Using the duality between the degrees outlined in Lemma \ref{degrees}, we calculate the entries $\widetilde\Omega_1^{ij}(s)$ as follows. 
\begin{enumerate} \item[I)] For $i=1$ \[\widetilde{\Omega}_1^{1j}(s)= - \frac{\partial s^{j}}{\partial t^{\alpha}} \widetilde{\Omega}_1^{1\alpha}= - \frac{\partial s^{j}}{\partial t^{r}}\widetilde{\Omega}_1^{1r} =- \frac{\partial s^{r}}{\partial t^{r}}(-(t^1)^{\frac{2}{d_1}}) \delta^{j}_{r}=\delta^{j}_{r}.\] \item[II)] For $1 < i < r $ and $1< j< r $ \begin{align*} \widetilde{\Omega}_1^{ij}(s)&=\frac{\partial s^{i}}{\partial t^{k}} \frac{\partial s^{j}}{\partial t^{l}} \widetilde{\Omega}_1^{k l}\\ &=\frac{\partial s^{i}}{\partial t^{1}} \frac{\partial s^{j}}{\partial t^{1}} \widetilde{\Omega}_1^{11}+ \frac{\partial s^{i}}{\partial t^{i}} \frac{\partial s^{j}}{\partial t^{1}} \widetilde{\Omega}_1^{i1}+ \frac{\partial s^{i}}{\partial t^{1}} \frac{\partial s^{j}}{\partial t^{j}} \widetilde{\Omega}_1^{1j}+ \frac{\partial s^{i}}{\partial t^{i}} \frac{\partial s^{j}}{\partial t^{j}} \widetilde{\Omega}_1^{ij}\\ &= \frac{\partial s^{i}}{\partial t^{i}} \frac{\partial s^{j}}{\partial t^{j}} \widetilde{\Omega}_1^{ij} \delta^{i+j,r+1}\\ &= (t^1)^{\frac{2d_1-2d_i-2d_{r-i+1}+2}{d_1}} \delta^{i+j,r+1}\\ &= \delta^{i+j,r+1}. 
\end{align*} \item[III)] For $1< i < r$ \begin{align*}\widetilde{\Omega}_1^{i r}(s)&= (t^1)^{\frac{2}{d_1}} \frac{\partial s^{i}}{\partial t^{i}}\frac{\partial s^{r}}{\partial t^{r-i+1}}+ \left(-(t^1)^{\frac{2}{d_1}} \frac{\partial s^{i}}{\partial t^{1}}+\frac{-2 d_i}{d_1} t^i (t^1)^{\frac{2}{d_1}-1} \frac{\partial s^{i}}{\partial t^{i}}\right).\frac{\partial s^{r}}{\partial t^{r}}\\ &=(t^1)^{\frac{2}{d_1}} (t^1)^{\frac{d_1-2d_i}{d_1}}.t^{i} (t^1)^{\frac{-2 }{d_1}-1}+ \left(-\frac{d_1-2d_i}{d_1} (t^1)^{\frac{2}{d_1}} t^i (t^1)^{\frac{-2d_i}{d_1}}+\frac{-2 d_i}{d_1} t^i (t^1)^{\frac{2}{d_1}-1} (t^1)^{\frac{d_1-2d_i}{d_1}}\right) (t^1)^{\frac{-2 }{d_1}}\\ &= (t^1)^{\frac{-2d_i }{d_1}}t^{i} + \left(-\frac{d_1-2d_i}{d_1} (t^1)^{\frac{-2d_i}{d_1}} t^i +\frac{-2 d_i}{d_1} (t^1)^{\frac{-2d_i}{d_1}}t^i \right)\\ &= (t^1)^{\frac{-2d_i }{d_1}}t^{i} - (t^1)^{\frac{-2d_i}{d_1}}t^{i} \\ &=0. \end{align*} \item[IV)] Finally, \begin{align*}\widetilde{\Omega}_1^{rr}(s)&= -(t^1)^{\frac{2}{d_1}} \frac{\partial s^{r}}{\partial t^{r}}.\frac{\partial s^{r}}{\partial t^{1}}+\sum_{i=2}^{r-1} \left( (t^1)^{\frac{2}{d_1}} \frac{\partial s^{r}}{\partial t^{r-i+1}} -\frac{2 d_i}{d_1} t^i (t^1)^{\frac{2}{d_1}-1} \frac{\partial s^{r}}{\partial t^{r}}\right).\frac{\partial s^{r}}{\partial t^{i}}\\ &+\left( -(t^1)^{\frac{2}{d_1}} \frac{\partial s^{r}}{\partial t^{1}}+\sum_{i=2}^{r-1}-\frac{2 d_i}{d_1} t^i (t^1)^{\frac{2}{d_1}-1} \frac{\partial s^{r}}{\partial t^{i}} + \frac{-4}{d_1} t^r (t^1)^{\frac{2}{d_1}-1} \frac{\partial s^{r}}{\partial t^{r}}\right).\frac{\partial s^{r}}{\partial t^{r}}\\ &=(\frac{2}{d_1}+1) \sum_2^{r-1} t^i t^{r-i+1} (t^1)^{\frac{-2}{d_1}-2} +\frac{4}{d_1} t^r (t^1)^{\frac{-2}{d_1}-1}+\sum_2^{r-1} t^i t^{r-i+1} (t^1)^{\frac{-2}{d_1}-2}\\ &-\sum_2^{r-1} \frac{2d_i}{d_1}t^i t^{r-i+1} (t^1)^{\frac{-2}{d_1}-2}-\sum_2^{r-1}\frac{2d_{r-i+1}}{d_1} t^i t^{r-i+1} (t^1)^{\frac{-2}{d_1}-2}-\frac{4}{d_1} t^r (t^1)^{\frac{-2}{d_1}-1}\\ &=\sum_2^{r-1} \left( \frac{2}{d_1}+2 
-\frac{2d_i}{d_1}-\frac{2d_{r-i+1}}{d_1} \right) t^i t^{r-i+1} (t^1)^{\frac{-2}{d_1}-2}\\ &=0. \end{align*} \end{enumerate} It is straightforward to show that $\widetilde{e}=\partial_{s^r}$. The vector field $\widetilde E=\Omega_2^{1j}(s)\partial_{s^j}$ while \begin{align} \nonumber {\Omega}_2^{1j}(s)&=\begin{pmatrix} d_1 t^1& -d_1 t^1 \frac{\partial s^2}{\partial t^1} -d_2 t^2 \frac{\partial s^2}{\partial t^2}& &-d_1 t^1 \frac{\partial s^3}{\partial t^1} -d_3 t^3 \frac{\partial s^3}{\partial t^3}&&\cdots & -d_1 t^1\frac{\partial s^r}{\partial t^1} -d_2 t^2\frac{\partial s^r}{\partial t^2}+\cdots -t^r \frac{\partial s^r}{\partial t^r} \end{pmatrix} \\ \nonumber &= \begin{pmatrix} d_1 t^1& (d_2-d_1) t^2 (t^1)^{\frac{d_1-2d_2}{d_1}}&(d_3-d_1) t^3 (t^1)^{\frac{d_1-2d_3}{d_1}}&\cdots& \sum_{i=1}^{r} (-d_1 (\frac{-2-d_1}{2 d_1}) -d_i )t^i t^{r-i+1} (t^1)^{\frac{-2}{d_1}-1} \end{pmatrix} \\ \label{g(1j)} &= \begin{pmatrix} d_1 t^1& (d_2-d_1) t^2 (t^1)^{\frac{d_1-2d_2}{d_1}}&(d_3-d_1) t^3 (t^1)^{\frac{d_1-2d_3}{d_1}}&\cdots& \frac{1}{2}\sum_{i=1}^{r} t^i t^{r-i+1} (t^1)^{\frac{-2}{d_1}-1}\end{pmatrix}\\ \nonumber &= \begin{pmatrix} - d_1 s^1 & \ \ \ (d_2-d_1) s^2 & \ \ \ \ \ \ \ \ \ \ \ (d_3-d_1) s^3 & & \ \ \ \ \ \ \cdots & &\ \ s^r \ \ \ \ \ \ \end{pmatrix}. 
\end{align} \end{proof} We observe that the inverse transformation of the inversion symmetry \eqref{Dob coord} is given by \[ t^1=\frac{-1}{z^1},\ \ t^r=z^r + \frac{1}{2} \sum_2^{r-1} \frac{ z^i z^{r-i+1}}{z^1},\ \ t^k= \frac{-z^k}{z^1}, ~2\leq k\leq r.\] Thus, the potential \eqref{inv potential} obtained from applying the inversion symmetry to $\mathbb F(t)$ has the form \[\widetilde{\mathbb F}(z) = (z^1)^{2} \mathbb F\left(\frac{-1}{z^1},\frac{-z^2}{z^1},\ldots,\frac{-z^{r-1}}{z^1},\frac{1}{2} \sum_1^r \frac{z^i z^{r-i+1}}{z^1}\right) +\frac{1}{2} z^r \sum_{1}^{r} z^i z^{r-i+1}.\] \begin{lemma}\label{potential in inv coord} The potential $\widetilde\mathbb F(z)$ has the form \begin{equation}\label{F in 3 coord} \widetilde{\mathbb F}(s) = (t^1)^{ \frac{-4}{d_1}} \left(\mathbb F(t^1,\ldots,t^r) -\frac{1}{2} t^r \sum_{1}^{r} t^i t^{r-i+1}\right), z^i\leftrightarrow s^i. \end{equation} \end{lemma} \begin{proof} We use the identities \[ t^1=- s^1= (s^1)^2 (\frac{-1}{s^1}),~ t^r= (s^1)^{\frac{2}{d_1}} \left(\frac{1}{2} \sum_1^r \frac{s^i s^{r-i+1}}{s^1} \right) ,~ t^i=(s^1)^{\frac{2 d_i}{d_1}} (\frac{-s^i}{s^1}), 1<i<r,\] and the quasihomogeneity of the potential $\mathbb F(t)$, i.e., \[ (\frac{2}{d_1} E) \mathbb F(t)= \frac{2(3-d)}{d_1} \mathbb F(t) =(\frac{4}{d_1}+2) \mathbb F(t). \] Then \begin{align*} &(t^1)^{ \frac{-4}{d_1}} \left[\mathbb F(t^1,\ldots,t^r) -\frac{1}{2} t^r \sum_{1}^{r} t^i t^{r-i+1}\right]\\ \nonumber &=(t^1)^{ \frac{-4}{d_1}} \left[\mathbb F(t^1,\ldots,t^r)+\big( -t^1 (t^r)^2\big)-\frac{1}{2} t^r \sum_{2}^{r-1} t^i t^{r-i+1}\right]\\ \nonumber &=(s^1)^{ \frac{-4}{d_1}} \left[\mathbb F \left( (s^1)^2 (\frac{-1}{s^1}),(s^1)^{\frac{2 d_2}{d_1}} (\frac{-s^2}{s^1}),\ldots,(s^1)^{\frac{2 d_{r-1}}{d_1}} (\frac{-s^{r-1}}{s^1}),(s^1)^{\frac{2}{d_1}} (\frac{1}{2} \sum_1^r \frac{s^i s^{r-i+1}}{s^1}) \right) +\big((s^r)^2 (s^1)^{\frac{4}{d_1}+1}\right. \big.\\ \nonumber & \left.\big. 
+s^r \sum_2^{r-1} s^i s^{r-i+1} (s^1)^{\frac{4}{d_1}} + s^1 \left( \frac{1}{2}\sum_{2}^{r-1} (s^1)^{\frac{2}{d_1}-1} s^i s^{r-i+1}\right)^2\big)- \frac{1}{2}s^r (s^1)^\frac{4}{d_1} \sum_{2}^{r-1} s^i s^{r-i+1} - s^1 \left( \frac{1}{2}\sum_{2}^{r-1} (s^1)^{\frac{2}{d_1}-1} s^i s^{r-i+1}\right)^2\right]\\ \nonumber &=(s^1)^{\frac{-4}{d_1}} \left[(s^1)^{\frac{4}{d_1}+2} \mathbb F\left(\frac{-1}{s^1},- \frac{s^2}{s^1} ,-\frac{s^3}{s^1},\ldots, \frac{1}{2} \sum_{i=1}^{r} \frac{-s^i s^{r-i+1}}{s^1} \right) +(s^r)^2 (s^1)^{\frac{4}{d_1}+1}+ \frac{1}{2} s^r \sum_2^{r-1} s^i s^{r-i+1} (s^1)^{\frac{4}{d_1}} \right] \\ \nonumber &=(s^1)^{2} \mathbb F\left(\frac{-1}{s^1},\frac{-s^2}{s^1},\ldots,\frac{-s^{r-1}}{s^1},\frac{1}{2} \sum_1^r \frac{s^i s^{r-i+1}}{s^1}\right)+ \frac{1}{2} s^r \sum_1^{r} s^i s^{r-i+1} \end{align*} which is the potential of the inversion symmetry by setting $s^i=z^i$. \end{proof} Now we prove Theorem \ref{main thm} stated in the introduction. \begin{proof}[Proof of Theorem \ref{main thm}] Using Corollary \ref{reg bcz E} and Theorem \ref{dual Frob manif}, we keep the above notation and assume $T=(\O_2,\O_1)$ is the associated QFPM. We need to show that the conjugate QFPM $\widetilde T=(\O_2,\widetilde\O_1)$ equals the QFPM associated to the potential $\widetilde{\mathbb F}(s)$ given in \eqref{F in 3 coord}. This leads to verifying that $\O_2 (s)$ equals the intersection form $\widehat{\O}_2(s)$ defined by $\widetilde{\mathbb F}(s)$. It is straightforward to show that $\widetilde{\mathbb F}(s)$ is a quasihomogeneous function, i.e., $\widetilde E \widetilde{\mathbb F}=(3-\widetilde d)\widetilde{\mathbb F}$. Hence \[ {\widehat\Omega}_2^{ij}(s):=(\widetilde d-1+\widetilde d_i+\widetilde d_j)\Omega^{i\alpha}_1\Omega^{j\beta}_1 \partial_{s^\alpha} \partial_{s^\beta} \widetilde{\mathbb F}. \] After long calculations we find that ${\Omega}_2^{ij}(s) =\widehat{\Omega}_2^{ij}(s)$. 
For example, we obtained the first row of $ {\Omega}_2^{ij}(s)$ in \eqref{g(1j)}, and for even $r$ and $1< i,j< r$, writing $G_{i,j}$ for $\partial_{t^i}\partial_{t^j} G(t)$, we get \begin{align}\nonumber {\Omega}_2^{ij} (s) &= \frac{\partial s^i}{\partial t^1} \frac{\partial s^j}{\partial t^1} \Omega_2^{1,1} +\frac{\partial s^i}{\partial t^i} \frac{\partial s^j}{\partial t^1} \Omega_2^{i,1} + \frac{\partial s^i}{\partial t^1} \frac{\partial s^j}{\partial t^j} \Omega_2^{1,j} +\frac{\partial s^i}{\partial t^i} \frac{\partial s^j}{\partial t^j} \Omega_2^{i,j} \\ \nonumber &=d_1(1-\frac{2d_i}{d_1})(1-\frac{2d_j}{d_1}) t^i t^{j} (t^1)^{1-\frac{2d_i}{d_1}-\frac{2d_j}{d_1}}+ d_i (1-\frac{2d_j}{d_1}) t^i t^j (t^1)^{1-\frac{2d_i}{d_1}-\frac{2d_j}{d_1}}\\\nonumber &+ d_j (1-\frac{2d_i}{d_1}) t^i t^j (t^1)^{1-\frac{2d_i}{d_1}-\frac{2d_j}{d_1}}+ (d-1+d_i+d_j) (t^1)^{2-\frac{2d_i}{d_1}-\frac{2d_j}{d_1}} (G_{r-i+1,r-j+1} + t^r \delta^{r,i+j})\\\nonumber &=(d_1-d_i-d_j) t^i t^j (t^1)^{1-\frac{2d_i}{d_1}-\frac{2d_j}{d_1}} + (-d_1+d_i+d_j) (t^1)^{2-\frac{2d_i}{d_1}-\frac{2d_j}{d_1}} \left( G_{r-i+1,r-j+1} + t^r \delta^{r,i+j}\right)\\ \label{ex dual} &= (d_1-d_i-d_j) (t^1)^{1-\frac{2 d_i}{d_1}-\frac{2 d_j}{d_1}} \left( t^i t^j- t^1 G_{r-i+1,r-j+1} -t^1 t^r \delta^{r, i+j} \right). 
\end{align} On the other hand \begin{align} \frac{\partial^2 \widetilde{\mathbb F}}{\partial {s^{r-i+1}} \partial {s^{r-j+1}}}&= \left( t^r \delta_{r,i+j} (t^1)^{1-\frac{2}{d_1}-\frac{2d_{r-i+1}}{d_1}}+ G_{r-i+1,r-j+1} (t^1)^{-1-\frac{4}{d_1}+\frac{2d_{r-i+1}}{d_1}} \right) \left( -(s^1)^{\frac{2d_{r-j+1}}{d_1}-1} \right) \\ \nonumber &+ \left( t^i (t^1)^{1-\frac{2}{d_1}-\frac{2d_i}{d_1}} \right) \left( s^i (s^1)^{\frac{2}{d_1}-1} \right)\\ \nonumber &=\left( t^r \delta_{r,i+j} (t^1)^{2-\frac{2d_i}{d_1}-\frac{2d_j}{d_1}}+ G_{r-i+1,r-j+1} (t^1)^{-2-\frac{4}{d_1}+\frac{2d_{r-i+1}}{d_1}+\frac{2d_{r-j+1}}{d_1}} \right)- \left( t^i t^j (t^1)^{2-\frac{2d_i}{d_1}-\frac{2d_j}{d_1}} \right) \\ \nonumber &=(t^1)^{1-\frac{2d_i}{d_1}-\frac{2d_j}{d_1}} \left( t^r \delta^{r,i+j} t^1 + G_{r-i+1,r-j+1} t^1- t^i t^j \right). \end{align} Therefore, \begin{equation} \label{exx dual} \widehat{\Omega}_2^{ij}(s)= (d_i+d_j-d_1)(t^1)^{1-\frac{2d_i}{d_1}-\frac{2d_j}{d_1}} \left( t^r t^1 \delta^{r,i+j} + G_{r-i+1,r-j+1} t^1- t^i t^j \right) =\O_2^{ij}(s). \end{equation} \end{proof} \begin{example} Consider the following solution to WDVV equations \begin{equation} \mathbb F=\frac{t_1^3}{6}-\frac{1}{2} t_2^2 t_1+\frac{1}{2} t_2^2 t_3+\frac{1}{2} t_1 t_3^2. \end{equation} It corresponds to a trivial Frobenius manifold structure, i.e., Frobenius algebra structure does not depend on the point. Here the charge $d=0$, the Euler vector field $E=\sum t_i \partial_{t_i}$ and identity vector field $e=\partial_{t_3}$. 
The intersection form is \[ \Omega_2(t) =\left( \begin{array}{ccc} t_1 & t_2 & t_3 \\ t_2 & t_3-t_1 & -t_2 \\ t_3 & -t_2 & t_1 \\ \end{array} \right) \] Setting \[ s_1=-t_1,\ \ s_2=\frac{t_2}{t_1},\ \ s_3=\frac{t_2^2}{2 t_1^3}+\frac{t_3}{t_1^2} \] the conjugate QFPM has $\widetilde \O_1^{ij}(s)=\delta^{i+j}_3$ and \[\O_2(s)=\left( \begin{array}{ccc} -s_1 & 0 & s_3 \\ 0 & s_3+\frac{3 s_2^2}{2 s_1}+\frac{1}{s_1} & -\frac{s_2^3}{s_1^2}-\frac{2 s_2}{s_1^2} \\ s_3 & -\frac{s_2^3}{s_1^2}-\frac{2 s_2}{s_1^2} & \frac{3 s_2^4}{4 s_1^3}+\frac{3 s_2^2}{s_1^3}-\frac{1}{s_1^3} \\ \end{array} \right)\] The potential of the conjugate Frobenius manifold structure reads \[\widetilde{\mathbb F}(s)=\frac{-1}{6 s_1} +\frac{s_2^2}{2 s_1} +\frac{s_2^4}{8 s_1}+\frac{1}{2} s_2^2 s_3+\frac{1}{2} s_1 s_3^2.\] One can check that this is the same potential obtained by applying the inversion symmetry to $\mathbb F(t)$. Note that ${\widetilde{E}}=-s_1 \partial_{s_1}+s_3 \partial_{s_3}$ and $\widetilde{E} \widetilde{\mathbb F}= \widetilde{\mathbb F}$. \end{example} \section{The conjugate of a polynomial Frobenius manifold} \label{first section} In this section, we recall the construction of Frobenius manifolds on the space of orbits of Coxeter groups given in \cite{DCG} and we apply the results of this article. We fix an irreducible Coxeter group ${\mathcal W}$ of rank $r$. We consider the standard real reflection representation $\psi: {\mathcal W}\to GL(V)$, where $V$ is a complex vector space of dimension $r$. Then the orbits space $M=V/{\mathcal W}$ is a variety whose coordinate ring is the ring of invariant polynomials $\mathbb C [V]^{\mathcal W}$. Using the Shephard-Todd-Chevalley theorem, the ring $\mathbb C [V]^{\mathcal W}$ is generated by $r$ algebraically independent homogeneous polynomials. Moreover, the degrees of a complete set of generators are uniquely specified by the group \cite{Hum}. 
We fix a complete set of homogeneous generators $u^1,u^2,\ldots,u^r$ for $\mathbb C [V]^{\mathcal W}$. Let $\eta_i$ be the degree of $u^i$. Here, we have \[2=\eta_1<\eta_2\leq\eta_3\leq \ldots \leq\eta_{r-1}< \eta_r.\] It is known that $\eta_i+\eta_{r-i+1}=\eta_r+\eta_1$. Consider the invariant bilinear form on $V$ under the action of ${\mathcal W}$. Then it defines a contravariant flat metric $\Omega_2$ on $M$ and we let $u^1$ equal its quadratic form. We fix the vector field $e:=\partial_{u^r}$. There is another flat contravariant metric $\Omega_1:=\mathrm {Lie}_{e}\O_2$ on $M$, which was initially studied by K. Saito (\cite{Saito}, \cite{Saito1}) and is called the Saito flat metric. Then $T:=(\O_2,\O_1)$ is a FPM and Dubrovin proved the following theorem. \begin{theorem}\cite{Du98} $T=(\Omega_2,\O_1)$ is a regular QFPM of charge $\frac{\eta_r-2}{\eta_r}$ and leads to a polynomial Frobenius manifold structure on $M$, i.e., the corresponding potential is a polynomial function in the flat coordinates. \end{theorem} We observe that the polynomial Frobenius structure defined by $T$ has $\tau=\frac{1}{\eta_r} u^1$, the Euler vector field $E=\frac{1}{\eta_r}\sum_i\eta_i u^i \partial_{u^i}$, the identity vector field $e$ and degrees $\frac{\eta_i}{\eta_r}$. Note that $E$ is independent of the choice of generators but $e$ is defined up to a constant factor. Thus, changing the set of generators will lead to an equivalent Frobenius manifold structure \cite{DCG}. The following theorem was conjectured by Dubrovin and proved by C. Hertling. \begin{theorem} \cite{Hert} \label{polyFrob2} Any semisimple polynomial Frobenius manifold with positive degrees is isomorphic to a polynomial Frobenius structure constructed on the orbits space of the standard real reflection representation of a finite irreducible Coxeter group. 
\end{theorem} Clearly, $T$ satisfies the hypotheses of Theorem \ref{dual Frob manif} and we have a conjugate regular QFPM $\widetilde T:=(\O_2, \mathrm {Lie}_{\widetilde e} \O_2)$, where $\widetilde{e}= (\tau)^{\eta_r} e$. Moreover, from the work of K. Saito and his collaborators (see also \cite{DCG}), we can fix $u^1,\ldots, u^r$ to be flat with respect to $\Omega_1$ and the potential of the polynomial Frobenius manifold will have the standard form \eqref{norm potential}. In particular, $\widetilde T$ is the regular QFPM of the Frobenius manifold structure obtained by applying inversion symmetry to the polynomial Frobenius manifold on $M$. Considering Theorem \ref{polyFrob2}, we wonder what the intrinsic description of the conjugate Frobenius manifold is, as this may help in the classification of Frobenius manifolds. In \cite{nonref}, we give a similar discussion for the $r$ Frobenius manifold structures constructed in \cite{polyZuo} on the orbits space $M$ when ${\mathcal W}$ is of type $B_r$ or $D_r$. \section{Remarks} Let $M$ be a Frobenius manifold of charge $d$ and $T=(\O_2,\O_1)$ be the associated QFPM. If $T$ is regular then $d\neq 1$. We did not succeed in finding an example in which $d\neq 1$ and $T$ satisfies conditions \eqref{new condition} but is not regular. If such an example exists, then $T$ has a conjugate QFPM $\widetilde T$, and it will be interesting to determine whether $\widetilde T$ defines another Frobenius manifold structure on $M$. We mention that the inversion symmetry of the WDVV equation can be applied to a solution $\mathbb F(t)$ in the standard form \eqref{norm potential} under a more general quasihomogeneity condition than \eqref{quasihomog}. In this case, if the conjugate Frobenius manifold structure exists, we believe that it will be equivalent to the Frobenius manifold structure obtained by applying the inversion symmetry, but the corresponding potentials are not equal. We confirm this by Example \ref{example dim 2}. 
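The two-dimensional computations in Example \ref{example dim 2} are small enough to check by computer algebra. The following script (an illustrative aid, not part of the paper's argument; it assumes the SymPy library) verifies the transformed intersection form $\O_2(s)$ and the modified quasihomogeneity relation $\widetilde E \widetilde{\mathbb F}=(3-\widetilde d)\widetilde{\mathbb F}+2$ quoted in that example.

```python
# Symbolic check of Example "example dim 2" (illustrative only; assumes SymPy).
import sympy as sp

t1, t2, s1, s2 = sp.symbols('t1 t2 s1 s2', positive=True)

# Intersection form Omega_2(t) of F = t2^2*t1/2 + t1^2*log(t1).
Omega2_t = sp.Matrix([[2*t1, t2], [t2, 4]])

# Coordinate change s1 = -t1, s2 = t2/t1 and its Jacobian ds/dt.
J = sp.Matrix([-t1, t2/t1]).jacobian([t1, t2])

# Contravariant transformation law: Omega_2(s) = J * Omega_2(t) * J^T,
# expressed in the s-coordinates via t1 = -s1, t2 = -s1*s2.
Omega2_s = sp.simplify((J * Omega2_t * J.T).subs({t1: -s1, t2: -s1*s2}))
expected = sp.Matrix([[-2*s1, s2], [s2, 4/s1**2]])
assert sp.simplify(Omega2_s - expected) == sp.zeros(2, 2)

# Conjugate potential and Euler vector field E~ = -2 s1 d_s1 + s2 d_s2;
# since d~ = 3, the claim E~ F~ = (3 - d~) F~ + 2 reduces to E~ F~ = 2.
Fc = sp.Rational(1, 2)*s1*s2**2 - sp.log(s1)
EF = -2*s1*sp.diff(Fc, s1) + s2*sp.diff(Fc, s2)
assert sp.simplify(EF) == 2
```

Both assertions pass, confirming the matrix $\O_2(s)$ and the constant term $2$ displayed in the example.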
Note that Frobenius manifold structures which are invariant under inversion symmetry were studied in \cite{morison}. We did not consider these cases as the charge will equal 1. It will be interesting to study the consequences of Theorem \ref{main thm} on the interpretation of the inversion symmetry in terms of the action of the Givental groups obtained in \cite{givental} and the relation found in \cite{zang} between the principal hierarchies and tau functions of the two solutions to the WDVV equations related by the inversion symmetry. We also believe that the findings in this article can be generalized to the theory of bi-flat $F$-manifolds \cite{ArLor}. It is known that the leading term of a certain class of compatible local Poisson structures leads to a regular QFPM and thus to a Frobenius structure \cite{DZ}, \cite{Du98}. Polynomial Frobenius manifolds obtained in \cite{mypaper1} are constructed by fixing the regular nilpotent orbit in a simple Lie algebra and use compatible local Poisson brackets obtained by Drinfeld-Sokolov reduction. In these cases, the Poisson brackets form an exact Poisson pencil, and thus their central invariants are constants \cite{FalLor}. If the Lie algebra is simply-laced, then the central invariants are equal \cite{DLZ}, which means the Poisson structures are consistent with the principal hierarchy associated with the Frobenius manifold \cite{DZ}. Fix one of these polynomial Frobenius structures and denote the associated local Poisson brackets by $\mathbb B_2$ and $\mathbb B_1$ (here $\mathbb B_2$ is the classical $W$-algebra). In the flat coordinates, these local Poisson brackets form an exact Poisson pencil under the identity vector field $e$, i.e., $\mathfrak{L}_e\mathbb B_2=\mathbb B_1$ and $\mathfrak{L}_e\mathbb B_1=0$. Let us denote the leading term of $\mathbb B_2$ by $B_2$, and let $\widetilde e$ be the vector field associated with the conjugate Frobenius manifold structure. 
We proved in this article that $\mathfrak{L}_{\widetilde e}^2 B_2=0$. It is then natural to ask whether $\widetilde e$ also leads to an exact Poisson pencil, i.e., whether $\mathfrak{L}_{\widetilde e}^2\mathbb B_2=0$. Our calculations for the simple Lie algebra of type $A_3$ show that this is not true. \vspace{0.1cm} \noindent{\bf Acknowledgments.} The authors thank the anonymous reviewers whose comments and suggestions helped to clarify and improve the results, in particular by directing us to the coordinate-free condition \eqref{new condition}. This work is funded by the internal grant of Sultan Qaboos University (IG/SCI/DOMS/19/08).
\section{Introduction} Stellar population synthesis models provide a framework through which observational data of stellar clusters, galaxies and galaxy populations can be interpreted \citep{1976ApJ...203...52T}. Identifying the properties of the observed population relies on matching the data to predictions determined by the age, mass, metallicity and other properties of the best-fitting model. Those predictions are sensitive to the assumed evolution of individual stars included in the synthesis model, which in turn depends on assumptions including the fraction of stars affected by binary evolution pathways. While the majority of stellar population and spectral synthesis models currently in use neglect the role of stellar multiplicity \citep[e.g.][]{2003MNRAS.344.1000B,2005MNRAS.362..799M,2004A&A...425..881L}, there is an increasing recognition that its effects are important, particularly when interpreting young and distant stellar populations, or in determining the rates of transient objects \citep[e.g.][]{1991A&A...249..411V,1992ApJ...386..197T,1998A&A...333..557D,1998NewA....3..443V,2013MNRAS.433.1039Z,2014MNRAS.444.3466S,2016MNRAS.462.3302E,2019MNRAS.482..870E,2016MNRAS.456..485S,2016MNRAS.458L...6W,2016MNRAS.459.3614M,2016ApJ...826..159S,2018ApJ...869..123S,2020MNRAS.491.3479C,2020A&A...634A.134G,2020arXiv200207230Z}. The fraction of massive stars affected by a binary companion during their evolution is clearly substantial, and cannot be entirely neglected \citep{2012Sci...337..444S,2013A&A...550A.107S}. Nonetheless, implementing binary evolution pathways is both technically challenging and involves introducing additional assumptions for the binary fraction, and the distribution of initial binary parameters in the population, as well as the initial mass function (IMF). 
Constraints on these parameters have improved significantly in recent years \citep{2017ApJS..230...15M,2019ApJ...875...61M,2020arXiv200500014T}, but remain poor at low metallicities and outside the local Universe. In \citet{2019A&A...621A.105S} we began a programme to explore the impact of these uncertainties on stellar population predictions, by varying the initial mass function parameters assumed by the Binary Population and Spectral Synthesis \citep[BPASS, ][hereafter E17]{2017PASA...34...58E} model framework, while keeping the binary parameters fixed. In \citet[][hereafter S20]{2020MNRAS.tmp.1299S} we instead explored the impact of stellar binary population parameter uncertainties on the integrated light of stellar populations for a fixed IMF. In that work we considered both observational uncertainties on the binary parameters in the current v2.2 of BPASS, which are based on the analysis of \citet[][ hereafter MS17]{2017ApJS..230...15M}, and an extended grid of models in which the binary fraction as a function of mass is varied by an arbitrary amount. In parallel, recent work by \citet{2018ApJ...867..125D,2020arXiv200413040D} has explored the effect of both binary fraction and rotation on predictions for resolved stellar populations, using a custom set of models in which stars of all masses are assumed to share a common binary fraction. They identified the ratio of certain massive stellar types, and in particular the ratio of stripped-envelope, strong-wind, helium-atmosphere Wolf-Rayet (WR) stars to red supergiant (RSG) stars, as being sensitive to the binary fraction (and indeed rotational mixing) assumed. Here we explore the impact of a mass-dependent binary fraction on both stellar type ratios and supernova type ratios using a grid of models with a wide range of possible initial mass-dependent binary fractions and metallicities. 
We explore whether binary fractions might be recovered from observations of resolved stellar populations in the local Universe, or of bright transients at cosmological distances. We also explore the impact on these interpretations of recent proposals that the minimum luminosity of WR stars identified spectroscopically may show a strong metallicity dependence. The structure of this paper is as follows: In section \ref{sec:method} we introduce the model grid used here and discuss the alternate definitions of WR stars. In section \ref{sec:metal} we present the predictions of our models for continuously star forming populations as a function of metallicity. In section \ref{sec:redshift} we consider the binary fraction influence on supernova rates and the ratio between supernova types, assuming appropriate redshift histories for both star formation and its metallicity distribution. We evaluate the impact of WR definition and of binary fraction on these predictions, and consider whether upcoming projects will enable binary fraction to be evaluated observationally in future, in section \ref{sec:discussion}. Finally, we summarise our main conclusions in section \ref{sec:conc}. \section{Method}\label{sec:method} \subsection{Standard Models}\label{sec:bpass} All models presented here are based on the Binary Population and Spectral Synthesis (BPASS) stellar population synthesis models \citep{2009MNRAS.400.1019E,2012MNRAS.419..479E,2016MNRAS.456..485S,2017PASA...34...58E}, specifically their v2.2.1 implementation \citep{2018MNRAS.479...75S}. This framework generates an evolving simple (i.e. coeval) stellar population in which the initial stellar masses are distributed according to a broken power law, and the binary fraction, initial period distribution and initial mass ratio distribution of stars are based on the distributions determined by \citetalias{2017ApJS..230...15M}. 
These were initially determined empirically for stars in five mass ranges and four initial period bins, and are interpolated onto the BPASS mass and period grid. Here we keep the initial mass function, initial period distribution and mass ratio distributions fixed in line with the BPASS v2.2 default, but vary the binary fraction with the logarithm of the mass of the primary star. As in \citetalias{2020MNRAS.tmp.1299S}, where the unresolved stellar populations derived from the same models are discussed, we define two sets of variant models. In set 1, the high mass binary star fraction (above 20\,M$_\odot$) is fixed at unity and the low mass binary fraction is permitted to vary from about 40 per cent at Solar mass up to unity. In set 2, the Solar mass binary star fraction is held fixed at about 40 per cent, but the high mass binary fraction is permitted to vary from its current estimate (near unity) down to 40 per cent. These sets of varying binary fractions are defined in Fig. \ref{fig:fbase} and discussed in detail in \citetalias{2020MNRAS.tmp.1299S}. We note that this approach differs from and is complementary to that of \citet{2018ApJ...867..125D,2020arXiv200413040D} in which stars of all masses are deemed to share a common binary fraction, in conflict with the observed distributions in the local Universe. Since those papers addressed the relative numbers of massive stars, derived from a relatively narrow range of initial masses in young populations, their assumption of a constant binary fraction over that mass range is likely reasonable. However we expect the dependence on initial mass to affect any comparison with populations arising from lower mass stars - for example in the ratios of different supernova types as a function of metallicity or age, or their cosmic evolution (as discussed in section \ref{sec:redshift}). 
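As a schematic illustration of the two variant sets (our own sketch, not the BPASS implementation; the function name, anchor masses and linear interpolation in $\log_{10} M$ are illustrative assumptions), the mass-dependent binary fraction can be parametrised between a Solar-mass value and a high-mass ($>20$\,M$_\odot$) value:

```python
# Schematic mass-dependent binary fraction (illustrative; not the BPASS
# code): linear in log10(primary mass) between a Solar-mass anchor and a
# high-mass anchor, clipped outside the anchor range.
import numpy as np

def binary_fraction(mass, f_low=0.4, f_high=1.0, m_low=1.0, m_high=20.0):
    """Binary fraction for a primary of the given mass (in solar masses)."""
    logm = np.log10(np.asarray(mass, dtype=float))
    slope = (f_high - f_low) / (np.log10(m_high) - np.log10(m_low))
    frac = f_low + slope * (logm - np.log10(m_low))
    return np.clip(frac, min(f_low, f_high), max(f_low, f_high))

# Set 1: fraction above 20 Msun pinned at unity, Solar-mass value stepped
# upward from ~40 per cent towards unity.
set1 = [lambda m, f=f: binary_fraction(m, f_low=f) for f in (0.4, 0.6, 0.8, 1.0)]

# Set 2: Solar-mass value pinned at ~40 per cent, high-mass value stepped
# downward from unity to 40 per cent.
set2 = [lambda m, f=f: binary_fraction(m, f_high=f) for f in (1.0, 0.8, 0.6, 0.4)]
```

In set 1 every variant reaches unity at the high-mass anchor, while in set 2 every variant keeps the observed $\sim$40 per cent value at Solar mass, mirroring the two grids described above.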
The models presented here do not vary the distribution of initial binary separation and mass ratio due to computational constraints, but focus on the total binary fraction as a function of primary star mass. The effects of varying these parameters independently were explored for unresolved stellar populations by \citetalias{2020MNRAS.tmp.1299S}, and it is clear that the current observational constraints on separation and mass ratio permit a large range of possible models. In the context of the work on resolved populations in this paper, the key question to be addressed is whether binary interactions alter the evolution of a system, thus changing its stellar type or supernova type at death. A system is more likely to interact if the stars begin their lives in a close binary or if the mass ratio between primary and companion is near unity. Thus an increase in the total binary fraction has a similar effect to biasing the initial period distribution towards shorter periods, or to biasing the mass ratio towards twin systems. The default BPASS prescription for these is fixed based on observational constraints derived as a function of stellar mass by \citet{2017ApJS..230...15M}, and for massive stars already includes a bias towards twin systems and short periods. Hence varying the overall binary fraction captures the majority of the behaviour for massive stars. For lower mass (e.g. Solar-type) stars, the distributions are broader and the observational constraints weaker, and so models in set 1 will be degenerate with models with larger mean separations or smaller mass ratios. For each variant binary fraction versus mass distribution function, we calculate time-evolving stellar number counts for populations with an initial total stellar mass of $10^6$\,M$_\odot$ at 13 metallicities and 42 age steps, spaced logarithmically such that log(age/years)=$6.0+i\times\Delta$(age) ($i=0-41$) and the increment $\Delta$(age)=0.1.
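The age grid described above can be written out in a few lines. This is an illustrative sketch in plain Python/NumPy, not BPASS code itself:

```python
import numpy as np

# BPASS-style age grid: log10(age/yr) = 6.0 + 0.1 * i for i = 0..41,
# i.e. 42 logarithmically spaced steps from 1 Myr to ~12.6 Gyr.
i = np.arange(42)
log_age = 6.0 + 0.1 * i
age_yr = 10.0 ** log_age
# log_age runs from 6.0 to 10.1; age_yr from 1e6 to ~1.26e10 years.
```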
For each of these age steps, we assign each stellar model a type by luminosity, temperature and surface composition. Similarly we assign a type to each supernova identified based on the state of its progenitor at the end of its evolution. These classifications are described in \citet{2017PASA...34...58E}. Briefly, a star is considered to undergo a core-collapse supernova if it has undergone core carbon burning and has a CO-core mass $>1.38$\,M$_\odot$ at the end of its life. Its type is then determined by the chemical composition of the surface layers which will be ejected, and the remnant (if any) determined from the core mass after accounting for the supernova energy injection. The survival or disruption of the binary is determined probabilistically, given an assumed kick distribution. For stars with insufficient mass to undergo core collapse, the end state is deemed to be a white dwarf with the mass of the progenitor star's helium core at the end of its life. Binary systems which survive to this point can show an increase in the white dwarf mass through mass transfer from a companion, or a merger of double white dwarfs through angular momentum loss due to gravitational wave radiation. Where either of these pathways results in a white dwarf with a total mass exceeding the Chandrasekhar limit, a thermonuclear, type Ia supernova is deemed to occur. The rates and delay time distributions of such explosive transients, as modelled in BPASS, are discussed in detail in \citet{2019MNRAS.482..870E} and are shown to be consistent with observational constraints. \begin{figure} \includegraphics[width=\columnwidth]{plot_supply_base_newcol.eps} \caption{The grid of binary fraction distributions tested here to examine possible observable signatures of binary populations. Each line indicates a model binary fraction distribution which either raises the binary fraction at low stellar mass (set 1, dashed lines) or lowers it at high mass (set 2, solid lines).
Data points are drawn from \citetalias{2017ApJS..230...15M} and the thick red line indicates the fiducial model applied in BPASS v2.2.} \label{fig:fbase} \end{figure} \subsection{Wolf-Rayet definition}\label{sec:WRdef} In the standard models described above, we have used the WR definitions laid out in \citetalias{2017PASA...34...58E} in which stars are identified as WR based primarily on their surface compositions. Stars are assumed to be identifiable as strong-wind-driving Wolf-Rayet stars, rather than lower mass helium stars, if they have a luminosity log(L/L$_\odot$)$>4.9$. Recent work \citep{2020A&A...634A..79S} has argued on both observational and theoretical grounds that this simple constraint is insufficient. Instead, the luminosity constraint above which a star shows the spectral features classically identified with a Wolf-Rayet star may be metallicity dependent, scaling as $L^{WR}_\mathrm{spec}\propto Z^{-1}$. Stars below this threshold would show a blue, stripped star spectrum, but produce narrow line emission, rather than the strongly line-broadened emission associated with classical Wolf-Rayets. To evaluate the impact of this proposal on the predicted number counts of stars by type, we recalculate the classification of stars in our models based on the relationship: \[ \log_{10}\left(L^{WR}_\mathrm{spec}\right) = 4.9 - \log_{10}(Z/0.014).\] Only stars above this luminosity threshold are classified as WR. These models are shown on figures with dotted lines, where appropriate. We do not expect this change to affect supernova rates, since these are determined by the structure and composition of the progenitor star, which is only weakly related to its stellar classification \citep[e.g.][]{2018PASA...35...49E}.
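The two classification criteria can be sketched as follows. The function names are ours for illustration; the threshold implements the relation above with a reference metallicity of $Z=0.014$:

```python
import numpy as np

def wr_luminosity_limit(Z, Z_ref=0.014):
    """Metallicity-dependent luminosity limit (in log10 L/Lsun) above which a
    stripped star is classed as a spectroscopic Wolf-Rayet, following the
    L ~ Z^-1 scaling discussed in the text."""
    return 4.9 - np.log10(Z / Z_ref)

def is_wr(log_L, Z, metallicity_dependent=True):
    """Classify a stripped star as WR under either definition."""
    limit = wr_luminosity_limit(Z) if metallicity_dependent else 4.9
    return log_L > limit

# At Z = 0.014 the two definitions coincide (limit 4.9); at Z = 0.0014
# the revised limit rises to 5.9, removing fainter stripped stars.
```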
\section{Results}\label{sec:results} \subsection{Trends with Metallicity}\label{sec:metal} \subsubsection{Resolved stellar populations} The metallicity of stars affects their wind strengths, radii, surface gravity and hence probability of undergoing binary interactions while on the main sequence or giant branch. Such interactions can lead to surface hydrogen stripping, rejuvenation and other processes which will change the classification of the stellar model. As a result, we expect (and observe) the ratio of different stellar types to depend on both binary fraction and metallicity. We calculate trends in stellar type number counts with metallicity for star forming stellar populations. In each case we assume that the composite stellar population (CSP) has been forming stars at a constant rate of 1\,M$_\odot$\,yr$^{-1}$ for 100 Myr, such that the number counts of most stellar types have stabilised, with the rate of stellar birth balanced by the rate of stellar death for massive stars. The long-lived low mass stellar population will continue to build up to much later ages, so we focus on the relatively massive stars which may be resolvable as individual stars beyond our immediate environs, and in particular on the Wolf-Rayet (WR) population of stripped-atmosphere stars. In Fig.\,\ref{fig:wr_o} we show the dependence of the WR to O-star ratio in such a population on metallicity and binary fraction. Unsurprisingly this ratio shows effectively no sensitivity to the binary fraction at low masses, with the models in set 1 indistinguishable at Solar metallicity. By contrast, the ratio is moderately dependent on the high mass binary fraction for our standard WR definition. Number count ratios yielded by the revised \citet{2020A&A...634A..79S} definition for WR stars show less dependence on binary fraction, but a stronger metallicity dependence than those using a uniform luminosity definition. 
For context, we also show a compilation of observational data points reported for this ratio \citep{1994A&A...287..803M,2012MNRAS.420.3091B,2016A&A...592A.105M,2007MNRAS.381..418H,2007A&A...469L..31C}. In each case we use the values reported by the original authors without modification. Where authors give metallicity in the form of 12+log(O/H) we assume $Z=0.020$ corresponds to 12+log(O/H)=8.93 as appropriate for BPASS stellar evolution models \citep{2018MNRAS.477..904X,2017PASA...34...58E}. We note that this observational dataset is likely highly incomplete due to the difficulty of resolving large samples of massive stars, determining their metallicity and classifying them reliably, and we discuss this further in Section \ref{sec:data_lims}. As a result of these uncertainties, the observational data show a large scatter and it is difficult to draw firm conclusions from the data. Nonetheless the models demonstrate that precision on the WR fraction significantly better than one per cent is needed to distinguish between binary fraction models at metallicities near Solar, where the ratio ranges from 0.078 at a massive star binary fraction of unity to 0.058 at a fraction of 40 per cent. \begin{figure} \includegraphics[width=\columnwidth]{plot_numsn_wro_zall-wr.eps} \caption{Wolf Rayet (WN + WC + WNH) to O star (O + Of, log(L/L$_\odot$)>4.9) ratio, as a function of metallicity and a range of binary fractions. Models are as colour coded in Fig \ref{fig:fbase}. Solid lines indicate a WR definition cut at log(L/L$_\odot$)>4.9, dotted lines are for a metallicity-dependent luminosity limit as discussed in section \ref{sec:WRdef}. Data points are from the references labelled \citep{1994A&A...287..803M,2017A&A...603A.130M,2012MNRAS.420.3091B,2007A&A...469L..31C,2007MNRAS.381..418H,2016A&A...592A.105M}. Filled symbols for \citet{2016A&A...592A.105M} indicate corrected values as discussed in Section \ref{sec:data_lims}. 
} \label{fig:wr_o} \end{figure} A similar dependence on metallicity is seen in the Wolf-Rayet subtype ratios shown in Fig. \ref{fig:wc_wn}. The fraction of carbon-rich WC stars in the population (relative to nitrogen-rich WN stars and partially stripped WNH stars) declines sharply with either decreasing metallicity or increasing binary fraction when a uniform luminosity cut for WR stars is used. Introducing a metallicity dependence to the WR luminosity threshold has the effect of strongly reducing the dependence on both metallicity and binary fraction in this ratio. For comparison we show number counts for Galactic and Magellanic WR stars spanning a range of metallicities including the recent compilation from \citet{2015MNRAS.447.2322R}. While the uncertainties on these measurements are still very large, they also appear to disfavour the revised \citet{2020A&A...634A..79S} WR star definition. \begin{figure} \includegraphics[width=\columnwidth]{plot_numsn_wcwn_zall-wr.eps} \caption{Wolf Rayet WN to WC ratio as a function of metallicity for binary fractions as colour coded in Fig \ref{fig:fbase}. Dotted lines show the results for the revised WR definition. Data points are drawn from the literature \citep{2015MNRAS.447.2322R,2019Galax...7...74N,2012MNRAS.420.3091B,2017A&A...603A.130M,2007A&A...469L..31C,2007MNRAS.381..418H}. } \label{fig:wc_wn} \end{figure} Another observation that has been suggested as a sensitive probe of massive star populations \citep[e.g.][]{1980A&A....90L..17M,2019Galax...7...74N,2016AJ....152...62M,2018ApJ...867..125D} is the WR to red supergiant (RSG, defined in our models as K or M type stars with log(L/L$_\odot$)$>$4.9) ratio. We show the metallicity dependence of this ratio in our models in Fig. \ref{fig:wr_rsg}.
Interestingly, and unlike the previous two ratios considered, this quantity is only mildly dependent on metallicity when using our standard WR definition, but very strongly dependent on massive star binary fraction \citep[as also noted by][]{2018ApJ...867..125D}. This is a useful trait: the precise metallicity of stellar populations is often difficult to determine, particularly for more distant objects. Given the \citet{2020A&A...634A..79S} WR definition, the binary sensitivity remains but the ratio is now also metallicity dependent. Since the ratio is close to 1:1, small differences in the population ratio can be determined with relative ease - although the low number of objects in both classes still presents a problem. For comparison, we plot the ratio for M33 from \citet{2016AJ....152...62M} and estimates for the SMC and LMC for which RSG data is drawn from \citet{2003AJ....126.2867M} and WR numbers from \citet{2018ApJ...863..181N}. As demonstrated by \citet{2018ApJ...867..125D}, this number ratio is also dependent on the age of a simple stellar population, and so comparisons of Fig.~\ref{fig:wr_rsg} to data are not recommended for small starbursts or single-aged stellar clusters, but are likely to be robust in larger populations, such as galaxies like M33 which have been forming stars at a constant or slowly varying rate over $10^8$\,year timescales. \begin{figure} \includegraphics[width=\columnwidth]{plot_numsn_rsg_zall-wr.eps} \caption{Wolf Rayet (WN + WC) to RSG (K+M, log(L/L$_\odot$)$>4.9$) ratio as a function of metallicity for binary fractions as colour coded in Fig \ref{fig:fbase}. Dotted lines show the ratio for the revised WR luminosity limit.
Data points are drawn from the literature \citep{2003AJ....126.2867M,2016AJ....152...62M,2018ApJ...863..181N}.} \label{fig:wr_rsg} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{plot_numsn_sn_zall-wr.eps} \caption{SN type II to SN type Ib/c ratio as a function of metallicity for binary fractions as colour coded in Fig \ref{fig:fbase}. Metallicity differences or uncertainties swamp binary fraction ones. For comparison, we show a compilation of data from the literature with representative uncertainties \citep{2008ApJ...673..999P,2009A&A...503..137B,2012ApJ...759..107K,2015MNRAS.452.2597X,2017ApJ...837..120G,2018A&A...613A..35K}.} \label{fig:supernovae} \end{figure} \subsubsection{Relative rates of supernovae} While resolved stellar number counts such as those discussed above are promising binary fraction diagnostics, an alternative diagnostic can be derived from the manner in which these stars end their lives \citep[e.g.][]{2008MNRAS.384.1109E}. Stars which have been stripped or gained mass through binary interactions may produce explosions which are classified differently, shifting between hydrogen-rich (type II) and hydrogen-poor (type I) classes. Amongst these transients, the ratio of stripped-envelope to hydrogen-rich core-collapse supernovae shows promise as a diagnostic of binary fraction. As Fig.\,\ref{fig:supernovae} demonstrates, this ratio declines with decreasing metallicity, tracking the fraction of stripped envelope massive stars in the population. As before, we overplot these models with a representative sample of observational data, showing both the vast range of estimates in the literature, and the large uncertainties on current measurements. \subsection{Cosmic Evolution}\label{sec:redshift} The probes discussed above are sensitive to the massive star binary properties but relatively insensitive to the binary fraction amongst intermediate mass and Solar-type stars. 
To probe these, we need to identify sources or transients with low mass progenitors, and take into account the longer evolutionary lifetime of these stars. Hence we need to account for both a star formation and metallicity history over gigayear timescales. This is challenging for any one galaxy, but plausible on a volume-averaged scale where extensive work has gone into determining both the star formation rate (SFR) density evolution \citep{2014ARA&A..52..415M} and the global metallicity evolution \citep{2006ApJ...638L..63L}\footnote{While other metallicity distribution estimates exist in the literature, the metallicity distribution of high redshift star formation remains very uncertain, and we retain this prescription for comparison with earlier work. As \citet{2020MNRAS.493L...6T} explored, this prescription allows the correct recovery of local transient rates.}. In this context we consider the cosmic evolution of supernova rates, including both core collapse events (with massive progenitors) and thermonuclear detonations (with lower mass progenitors). We adopt the same cosmic evolution prescription for SFR and Z as \citet{2019MNRAS.482..870E} to calculate the star formation rate density distributed between different metallicities as a function of redshift. Using delay time distributions and event rates from our models, we calculate the resultant cosmic evolution of supernova rate per unit volume for each variant binary fraction distribution\footnote{We assume $\Omega_M=0.286$, $\Omega_\Lambda=0.714$, $h=0.696$.}. The results are shown in Fig. \ref{fig:cosmic}. The upper panel gives the evolution in the mean volumetric rate of each supernova type between $z=0$ and $z=6$. In the lower panels, the evolution in the ratio of different types is shown out to $z=2$ and compared to a compilation of observational data as described below.
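The rate calculation described above can be illustrated schematically: the volumetric event rate is the star formation rate density history convolved with a delay time distribution (DTD), evaluated per metallicity bin. The SFR history and DTD below are toy placeholders, not the BPASS tables or the \citet{2014ARA&A..52..415M} fit:

```python
import numpy as np

dt = 0.05                                   # time step in Gyr
t = np.arange(0.0, 13.0, dt)                # cosmic time grid
sfrd = np.exp(-((t - 3.5) / 2.5) ** 2)      # toy SFR density history
tau = (np.arange(len(t) - 1) + 1) * dt      # delay times tau > 0
dtd = tau ** -1.1                           # toy ~t^-1 Ia-like delay time distribution
dtd /= dtd.sum() * dt                       # normalise: one event per unit formed mass

def rate_at(i):
    """R(t_i) = sum_j SFRD(t_i - tau_j) * DTD(tau_j) * dt, for tau_j <= t_i."""
    total = 0.0
    for j in range(len(tau)):
        k = i - (j + 1)                     # grid index of t_i - tau_j
        if k < 0:
            break
        total += sfrd[k] * dtd[j] * dt
    return total

rate = np.array([rate_at(i) for i in range(len(t))])
# The event rate peaks later than the star formation history by an amount set
# by the DTD -- the origin of the delayed decline in the type Ia fraction.
```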
We note that the lines indicating Long Gamma Ray Bursts (LGRBs) include only the chemically homogeneous evolution pathway which dominates at the lowest metallicities, and neglect pathways which operate at higher metallicity \citep[these may be included in later BPASS releases, see ][]{2020MNRAS.491.3479C}. To constrain the observed ratio of thermonuclear type Ia rates to core collapse supernova rates, SN\,Ia \citep{2014PASJ...66...49O,2015A&A...584A..62C,2014AJ....148...13R,2012A&A...545A..96M,2014ApJ...783...28G,2012AJ....144...59P} and CCSN \citep[][and data compiled therein]{2014ApJ...792..135T,2012A&A...545A..96M,2009A&A...499..653B,2016A&A...594A..54P} volumetric rate data have been sorted into $\Delta z=0.2$ bins, and where one or more rate estimates for both types exist in the same redshift bin, their ratio is taken. For the stripped envelope supernova fraction we show the local rate ratio estimated from the LOSS survey \citep{2017PASP..129e4201S} for galaxies at $z<0.05$. At low redshifts, a binary fraction close to unity is preferred for resolved studies of high mass stars, with some indication that a high binary fraction is also preferred for Solar-type stars at very low metallicity \citep{2017ApJS..230...15M,2019ApJ...875...61M}. In each case, however, the observational uncertainties on current survey data are too large to distinguish between binary fractions with any degree of reliability, or to evaluate the redshift evolution of these rates. As Fig.\,\ref{fig:cosmic} demonstrates, the stripped envelope fraction amongst core collapse supernovae evolves linearly with redshift, reflecting the slow evolution in the metallicity of the underlying stellar population. By contrast, the fraction of thermonuclear type Ia SNe relative to core collapse events remains near constant out to $z\sim0.7$ before declining sharply. This results primarily from the much longer delay time distribution of the type Ia events.
These require the evolution of relatively low mass stars into white dwarfs, which then grow through binary interactions until the Chandrasekhar mass limit is reached. \begin{figure} \includegraphics[width=\columnwidth]{redshift_plot.eps} \includegraphics[width=\columnwidth]{redshift_plot3.eps} \includegraphics[width=\columnwidth]{redshift_plot4.eps} \caption{Cosmic Evolution in volume-averaged supernova rates by type and type ratios, as a function of binary fraction model, assuming the redshift evolution prescriptions for SFR and Z adopted in \citet{2019MNRAS.482..870E}. Overplotted points show the current state of observational constraints, compiled as described in section \ref{sec:redshift}.} \label{fig:cosmic} \end{figure} \section{Discussion}\label{sec:discussion} As we have demonstrated, both the types of massive stars and their eventual supernovae are sensitive to the presence of binary evolution pathways in the population. So are we approaching the point where resolved studies of massive stars may directly constrain the binary fraction of their underlying populations? \subsection{Observations of stellar type ratios} \label{sec:data_lims} The models presented in section \ref{sec:metal} are broadly consistent with the compilation of observational data shown, in terms of order of magnitude in number count ratios and underlying trends with metallicity. However Figures \ref{fig:wr_o}-\ref{fig:wr_rsg} also demonstrate that there are large variations in observational estimates of stellar type ratios. They also clearly indicate the very small number of measurements for which estimates of metallicity and massive star number count ratios are available. Nonetheless, in certain ratios, and in particular the WR/RSG ratio, the uncertainties quoted on the data are already sufficiently small to interpret as binary fraction measurements. Given these factors, it is important to assess the robustness and appropriateness of the samples against which we are comparing.
In their recent comprehensive survey of resolved massive stars in M31 and M33, \citet{2016AJ....152...62M} estimated that they were almost complete for Wolf-Rayet stars but were incomplete for RSGs in M31 and had identified only a few percent of the O stars present in the galaxies. In many of the observational samples reported, the completeness is still lower. The O star population is difficult to quantify due to confusion in star forming regions and the typical brightness of individual stars. As a result, the number of O stars is often inferred from the ionizing photon flux of a population, while the number and type of the Wolf-Rayet stars is inferred from fitting of mass-scaled templates to diagnostic spectral features\footnote{This approach is taken by all the data shown on Fig. \ref{fig:wr_o}, with individual WR stars only resolved in the very closest objects such as in parts of the LMC and SMC, and O star numbers always derived indirectly.}. As a result, dusty stars may be undercounted, as may the hottest stars which radiate primarily in the ultraviolet. It is also an inconvenient fact that known Wolf-Rayet stars have luminosities that scatter over two orders of magnitude \citep{2006A&A...449..711C} and so determining whether any individual ionized region has been irradiated by one star or many is challenging \citep[see e.g.][]{2015MNRAS.447.2322R}. This leads to a large scatter in the WR/O star number ratios reported, ranging from those which rely on clear identification of individual stars (incomplete) to those based entirely on inference from unresolved populations (heavily model and metallicity dependent).
To illustrate the scale of these effects, in Fig \ref{fig:wr_o} we show two sets of points for the data of \citet{2016A&A...592A.105M}: open circles indicate the values given by the original authors as inferred from fitting unresolved stellar populations, filled circles indicate values using the original WR numbers but modifying the inferred O star count to account for the generally lower ionizing flux to O star number ratio in the BPASS models. As the figure demonstrates, this increases the number of O stars inferred and brings this estimate closer into line with other estimates at similar metallicity. Nonetheless ratios inferred from this data set remain high compared to other data. Each data set presents its own challenges of interpretation. In several cases, no uncertainty is given on the published number ratios; where possible, error bars on Fig. \ref{fig:wr_o} are instead derived from Poisson number count uncertainties on the inferred population. These have decreased with publication date as the number of detected sources per galaxy has risen. However Poisson uncertainties do not account for systematic uncertainties in the underlying models used to infer the numbers, which can easily be of order a few tenths of a dex and thus span the model parameter space here. A fully consistent comparison between models and data would require the model completeness and calibration calculations to be undertaken using BPASS or a comparable code which incorporates binary evolution pathways. Where Wolf-Rayet stars are identified, either individually or through spectral fitting, they are typically classified into WC or WN types by the strength of carbon features in the spectrum. Thus many of the uncertainties which affect the data in Fig. \ref{fig:wr_o} also affect Fig. \ref{fig:wc_wn}, with the added challenge that subdividing the small Wolf-Rayet population adds to the Poisson uncertainties.
Again, it is not always clear whether systematic modelling uncertainties are incorporated in the reported error bars for these data, and it is likely that the true uncertainty on most of the data encompasses the full span of the models. In this context, it is interesting to note that above a metallicity of about half Solar, the data appear to favour models with low fractions of massive binaries, which are inconsistent with those observed in the local Universe \citep{2012Sci...337..444S,2014ApJS..215...15S,2017ApJS..230...15M}. This may indicate that the number of WN stars in local galaxies is being underestimated using current template fitting techniques. In comparison to the ratios discussed above, data for the WR/RSG ratio shown in Fig. \ref{fig:wr_rsg} is very sparse in the literature: while RSG and WR populations have been studied separately in Local Group galaxies, it is rarely possible to evaluate whether the same regions have been surveyed in each case, the metallicity of the regions being considered, or the relative levels of completeness in the samples. In the figure, we have shown estimates for the SMC and LMC for which RSG data is drawn from \citet{2003AJ....126.2867M} and WR numbers from \citet{2018ApJ...863..181N}. While these works originate from the same team, they are derived from very different imaging surveys, with different spatial coverage. As a result the ratio can be compromised by the inclusion or omission of bright star forming regions, or particularly young regions in one survey which may be omitted from the other, or conversely by a more extended, more mature stellar population. The third data point on Fig. \ref{fig:wr_rsg} is that for M33 in which \citet{2016AJ....152...62M} identified and spectroscopically confirmed 211 resolved WR stars and 220 RSGs and estimated that the survey was near complete for WR stars, and may also be complete for RSGs.
This data point (at $\sim$0.5 Solar metallicity) is entirely consistent with the high binary fractions inferred for massive stars elsewhere in the local Universe. Unfortunately, the metallicity of this system is rather uncertain, with the 1\,$\sigma$ error range admitting models with binary fractions of about 70 per cent or higher at 30\,M$_\odot$. This point resulted from a substantial, multi-year campaign, but demonstrates the potential for constraints on the stellar binary fraction from large nearby galaxies. In short, where data based on counting of individual stars is available (primarily in the SMC, LMC and perhaps M33), the data may be used with caution. Where number counts are inferred from unresolved populations, stellar population model dependence and completeness must be carefully considered. \subsection{Constraints from star number counts} The extant observational data cannot distinguish between WR definitions in either the WR/O or the WR/RSG ratio, but hints that the revised luminosity limit suggested by \citet{2020A&A...634A..79S} cannot reproduce the trend in WC/WN ratio with luminosity, for which our original log(L/L$_\odot$)=4.9 luminosity limit, independent of metallicity, provides a good match. If there is indeed a strong metallicity dependence in the luminosity limit for Wolf-Rayet spectroscopic identification, then the apparent discrepancy between the data and these predictions would suggest that the mass-loss rates, and especially their scaling with metallicity, in the BPASS stellar evolution models need to be revised. This question will be revisited in future work, since there is growing evidence that the mass-loss rates for WR stars and RSGs may need to be revised generally \citep[e.g.][]{2015PASA...32...15Y,2017MNRAS.470.3970Y,2020MNRAS.492.5994B,2020ApJ...889...44N}. 
Setting aside the definition question, and focussing on our standard fixed-luminosity selection, the WR-to-O star ratio ranges from almost 8 per cent at a massive star binary fraction of unity to 6 per cent at a fraction of 40 per cent. As a result, distinguishing these populations with any reasonable degree of confidence would require an observed Wolf-Rayet population of well over ten thousand objects - far more than the total number of currently known WR stars in the Milky Way and its satellites. Thus it is unlikely that this ratio will be determined to sufficient precision in any given galaxy to act as a strong constraint on the binary population. Since binary processes are, at least in part, responsible for stripping the envelopes of stars which might otherwise evolve into WR stars, the WR/RSG ratio shows promise for evaluating the binary fraction in local galaxies in the near future. As \citet{2016AJ....152...62M} demonstrated, this ratio can be determined in large nearby galaxies with a high degree of precision, given sufficient observational time and effort. The ratio is relatively insensitive to metallicity, mitigating an often-substantial degeneracy in the fitting of any data, and shows a strong sensitivity to the binary fraction in massive stars. \subsection{Future prospects for star count observations} Given the model-dependence of indirectly-inferred number counts, there is a clear preference for sensitive observations of resolved stars that allow counting of sources down to a luminosity limit of at least log(L/L$_\odot$)=4.9. In this context, it is worth considering what observations future instrumentation may enable in this area. Science cases for the upcoming class of Extremely Large Telescopes (ELTs) include the detailed study of resolved stellar populations beyond the local group.
The MICADO instrument on the European ELT\footnote{EELT, https://www.eso.org/sci/facilities/eelt/}, for example, would expect to resolve and detect stars down to the horizontal branch at the distance of the Centaurus group ($\sim$4.6\,Mpc) in five hours of integration, and so should produce complete catalogues for red supergiants \citep{2012PASP..124..653G}. The fields of view of ELT instruments are expected to be less than a square arcminute (in some cases, significantly less) and while this is suitable for mapping distant galaxies, they will require large mosaics to map Local Group objects. However, like many of the planned ELT instruments, MICADO is optimised to operate in the near-infrared, where adaptive optics can be most effectively deployed. As a result, it is unlikely to provide any information on Wolf-Rayet and other luminous blue supergiant stars, for which near-ultraviolet imaging is preferred. Optical spectroscopy provides an alternate method for identifying Wolf-Rayet stars, as described above, but the first-light spectrograph on the ELT is not expected to be sufficiently blue-sensitive. In the nearer term, resolved stellar populations may also be accessible to the {\it James Webb Space Telescope} ({\it JWST}) and an early release science programme in this area has been approved in Cycle 0 \citep{2017jwst.prop.1334W}. As is the case for the ELTs, JWST is a near-infrared optimised observatory with a small field of view. It will reach comparable sensitivities to the ELTs due to lying above the atmosphere, but suffers from a larger point spread function. As a result, confusion is likely to be an issue for observations at significant distances, while large mosaics will be necessary to map nearby galaxies. An optimal application for JWST may be study of individual star forming regions or complexes, for which the metallicity, age and binary fraction can be determined simultaneously, in contrast to the constant star formation case considered here.
The effort to identify and map Wolf-Rayet stars, however, is unlikely to benefit significantly from either JWST or the ELTs due to their near-infrared optimisation. For these, the current and ongoing effort to identify these sources from integral field spectroscopy and optical photometry is unlikely to be improved upon before the construction of a blue-sensitive, large aperture observatory such as the proposed LUVOIR\footnote{https://asd.gsfc.nasa.gov/luvoir/}. Continuing this work, with a goal of highly complete spectroscopic follow-up, wherever possible of individually resolved sources, is essential if constraints on the binary fraction are to be obtained from stellar type number count ratios. It should also be noted that while these instruments are not optimised for mapping the large angular scales subtended by Local Group galaxies, analysis of the resolved stellar populations in more compact and distant objects may allow average ratios to be derived for larger samples of galaxies as a function of metallicity, which will shed light on these populations. As with any observation, it will be crucial to map different stellar populations, fit any spectra and determine metallicities self-consistently and for stars drawn from the same spatial regions, before comparison can be made to model predictions such as those presented here. \subsection{Constraints from supernova observations} All the number count ratios involving WR stars are, however, relatively insensitive to the binary fraction amongst the low mass stars in the population, as might be expected. The strongest diagnostic of low mass binaries studied here is the ratio of SN\,Ia to core-collapse supernovae. As Fig.~\ref{fig:cosmic} demonstrates, distinguishing between high mass star binary fractions requires precision on the SN\,Ia or SN Ibc fraction of about 1 per cent at $z=0$ and becomes progressively more difficult at higher redshifts.
A similar precision is needed to constrain binary fraction as a function of metallicity, as seen in Fig.~\ref{fig:supernovae}, in which the data uncertainties are dominated by corrections for completeness in calibration or follow up. Since stripped envelope supernovae are often harder to classify from lightcurves than hydrogen-rich SN\,II, many of the estimates shown are likely to be lower limits. While demanding, the required precision promises to be eminently achievable with the upcoming Legacy Survey of Space and Time (LSST) at the Vera Rubin Observatory. LSST will carry out a deep, high cadence survey of the transient sky, expecting to find of order $10^5$ type Ia supernovae per year, and a comparable number of core collapse events \citep[][see chapter 11]{2009arXiv0912.0201L}. The majority of these will lie in the range $z=0.2-1$, an interval over which the ratio of event types is expected to change significantly - as Fig.~\ref{fig:cosmic} shows. Given the expected rate of events, if all could be accurately typed, measurements would be possible of the supernova type ratios in ten redshift bins at about 1 per cent precision - sufficient to distinguish between high and low binary fractions at both ends of the mass function. With lower numbers, of only about 1000 SNe per $\Delta z=0.1$ bin, the numbers of measured SNe\,Ia, SNe\,Ibc and SNe\,II are expected to be about 200, 240 and 560 respectively, giving 7, 6 and 4 per cent uncertainties on the measured rates from simple Poisson statistics - these then need to be corrected for observational biases. With 10,000 SNe per bin, the Poisson uncertainties drop to 2, 2 and 1 per cent, sufficient to identify the binary fraction to within $\pm1$ model on our current grid. This will be true for CCSN out to $z=0.5$ in 1 year \citep{2009JCAP...01..047L}. 
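The Poisson arithmetic quoted above is easy to reproduce. The sketch below is our addition (the helper name `poisson_frac_uncertainty` is ours); it uses the bin counts from the text and, like the text, ignores the observational-bias corrections that would follow.

```python
import math

def poisson_frac_uncertainty(n_events: int) -> float:
    """Relative 1-sigma uncertainty sqrt(N)/N on a Poisson count of N events."""
    return math.sqrt(n_events) / n_events

# Bin counts quoted in the text for ~1000 SNe per Delta z = 0.1 bin.
for sn_type, n in {"Ia": 200, "Ibc": 240, "II": 560}.items():
    print(sn_type, round(100 * poisson_frac_uncertainty(n)), "per cent")
# -> roughly 7, 6 and 4 per cent; with 10,000 SNe per bin each count grows
#    tenfold, so every uncertainty shrinks by sqrt(10), to about 2, 2 and 1.
```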
Higher redshifts may be accessible through wider redshift bins, while extended data as the survey continues will enable narrower bins to be used, probing finer details such as the metallicity history of galaxy evolution. We note that this assumes redshift uncertainties are smaller than the bin size. In this redshift range, this should be possible in the majority of host galaxies through photometric redshift determination. It also assumes that supernovae can be accurately typed by their lightcurves in the absence of large-scale spectroscopy \citep[expected to be true, ][]{2009arXiv0912.0201L}. We have also assumed that the same binary fraction applies at all metallicities, and that the same distribution of period and mass ratio applies at all binary fractions. These assumptions are more difficult to quantify or justify, and further studies with a more extensive suite of models will be required to evaluate the extent to which the joint posterior probability distribution of these parameters can be determined. Intriguingly, the wide area and deep limits of the LSST data will enable lensed supernovae to be observed at much higher redshifts. \citet{2020MNRAS.491.2447R} estimated that up to 120 lensed supernovae at $z\sim5-7$ could be detected by the LSST Wide-Fast-Deep survey, with more sources at intermediate redshift. While the uncertainty in any type ratio derived from this higher redshift population would necessarily be large, it will provide an important test of the metallicity distribution assumed for high redshift star formation in this model. In very local examples, identified in LSST or other survey data on well studied local galaxies, it might be possible to determine both the supernova type ratio and the WR/RSG ratio, at least for large galaxies. A simultaneous analysis of the SN type ratios and WR/RSG ratios for the same sample of galaxies would be a powerful diagnostic tool. 
This combination yields a diagnostic grid in binary fraction vs metallicity for $Z>0.002$. Again a precision of about 1 per cent is required to distinguish between models in the SN type ratio, while a lower precision (about 10 per cent) is sufficient in the harder-to-measure stellar type ratio, and this is still likely to be challenging for the current and next generation of facilities. \section{Conclusions}\label{sec:conc} Analysis of the type statistics of massive stars has the potential to constrain the fraction of binary stars in stellar populations. However the degree of precision required is significantly higher than that obtained by current surveys. Adopting the metallicity dependence suggested by \citet{2020A&A...634A..79S} for the minimum luminosity of classical Wolf-Rayet stars significantly changes both the metallicity and binary fraction dependence of Wolf-Rayet number type ratios. Both the WR/O and WR/RSG ratios become more strongly metallicity dependent, while the WC/WN ratio becomes less so, in mild conflict with recent observational evidence. More data on these number ratios (drawn from large, complete samples of resolved stars, or potentially from the integrated light of well-aged stellar clusters) are needed before the new WR definition is adopted. We note that \citeauthor{2020A&A...634A..79S} do not argue that stripped helium stars at luminosities between log(L/L$_\odot$)=4.9 and their metallicity dependent limit do not exist or do not affect their surroundings, but rather that they would not show the characteristic spectral features indicative of strong stellar winds. The synergy between the capabilities of upcoming telescopes in the fields of resolved stellar populations (e.g. JWST, ELTs) and supernova rates (e.g. LSST) has the capacity to constrain the binary fraction as a function of metallicity and even redshift. 
LSST's vast dataset will likely allow both the high and low mass binary fractions to be determined to a high degree of precision, with some constraints on its metallicity evolution if the cosmic evolution of supernova type ratios can be measured with sufficient precision. This relies on reliable typing of supernovae, either photometrically or spectroscopically. We have focussed here on the effect of varying the total binary fraction at a given mass. Since stars in wide binaries (log(initial period/days)$>$4) are unlikely to interact in a Hubble time, and are treated as single stars in BPASS, this variation is degenerate with fixing the binary fraction but instead biasing its period distribution towards closer binaries. Distinguishing between these scenarios is likely to be far harder, in the absence of spectroscopic period determinations for large numbers of distant stellar populations - beyond the capabilities of even planned telescopes. Constraining the period and mass ratio distributions based on very local stars is likely to remain necessary for some time to come. \section*{Acknowledgements} ERS received support from United Kingdom Science and Technology Facilities Council (STFC) grant numbers ST/P000495/1 and ST/T000406/1. AAC was supported by STFC studentship 1763016. JJE acknowledges support from the University of Auckland and the Royal Society Te Ap\={a}rangi of New Zealand under the Marsden Fund. BPASS would not be possible without the computational resources of the University of Auckland's NZ eScience Infrastructure (NeSI) Pan Cluster and the University of Warwick's Scientific Computing Research Technology Platform (SCRTP). \section*{Data Availability} The model data reported here are tabulated in the appendix and will be made available via the BPASS websites - bpass.auckland.ac.uk or warwick.ac.uk/bpass.
\section{Introduction} Let $F$ be a distribution supported on $I=\mathbb{R}^{+}$ or $\mathbb{R}$ such that $$\int_I x^n dF(x) < \infty \quad \text{for all } n\geq 1.$$ Under this assumption we say that $F$ has a finite moment sequence on $I$. A distribution $F$ with finite moment sequence on $I$ is called $M$-indeterminate if there are other distributions supported on $I$ having the same moments as $F$. \\ In 1945 Krein proved that if $F$ is an absolutely continuous distribution on $\mathbb{R}$ with finite moment sequence whose density $f$ has finite logarithmic integral, i.e. \begin{equation}\label{logint} \int_{\mathbb{R}}-\frac{\log f\left( x\right) }{1+x^{2}}dx<\infty , \end{equation} then $F$ is $M$-indeterminate. This is the so-called Krein criterion.\\ About the Krein criterion, in \cite{Ostro} the authors say that it ``is a qualitative result; there is no indication of how to write other distributions with the same moments as $F$''. In \cite[Theorem 1]{Lin} the author used the theory of the Hardy space $H^1$ on the upper half plane to get a simple proof of the Krein criterion. In fact, if $f$ is a density satisfying the Krein condition (\ref{logint}), the author proved the existence of a density $g$ having the same moment sequence as $f$. In this work we go a step further: we combine the ideas in the proof of Theorem 1 in \cite{Lin} with some results on the Hilbert transform and the space $H^1$ to obtain an explicit description of the latter density $g$.\\ Actually, in this setting, we get a family of densities having the same moment sequence as $f$. To do this, we consider a construction introduced in \cite{Stoyanov1} to exhibit some densities with the same moment sequence. \\ Let $f$ be a density with finite moment sequence on $I$. 
Assume that there exists a bounded measurable function $h$ with $\sup_{x\in I}\left| h\left( x\right) \right| \leq 1$ such that $$\int_{I}x^{n}f\left( x\right) h\left( x\right) dx=0 \quad\text{ for all }n\geq 0$$ and such that the function $fh$ is not identically zero. Then the Stieltjes class $S_{I}\left( f,h\right) $ with center at $f$ and perturbation $h$ is given by \begin{equation*} S_{I}\left( f,h\right) =\left\{ f\left(x\right) \left[ 1+\varepsilon h\left( x\right) \right] :x\in I,\text{ }\varepsilon \in \left[ -1,1\right] \right\} . \end{equation*} Clearly, $S_{I}\left( f,h\right) $ is an infinite family of densities all having the same moment sequence as $f$. \\ Thus, our main results can be written in terms of Stieltjes classes involving the Hilbert transform of $\ln f$. \begin{theorem}\label{ham} Let $f$ be a density on $\mathbb{R}$ with finite moment sequence. If $f$ has finite logarithmic integral, then $S_{\mathbb{R}}\left( f,\cos( \mathcal{H}\ln f) \right) $ and $S_{\mathbb{R}}\left( f,\sin ( \mathcal{H}\ln f) \right) $ are Stieltjes classes, where $$\mathcal{H}u(t)=\frac{1}{\pi} P \int_{-\infty}^{\infty} \left(\frac{1}{t-x}+\frac{x}{1+x^2} \right)u(x)dx, \quad t\in \mathbb{R}.$$ \end{theorem} When $I=\mathbb{R}^+$ we have a similar result. \begin{theorem}\label{sti} Let $f$ be a density on $\mathbb{R}^+$ with finite moment sequence. If $f$ satisfies the condition \begin{equation}\label{parasti} \int_0^\infty -\frac{\ln f(x^2)}{1+x^2}dx <\infty, \end{equation} then $S_{\mathbb{R}^+}( f,\sin (\mathcal{H}_e\ln f) ) $ is a Stieltjes class, where $$\mathcal{H}_eu(t)=\frac{2t^{1/2}}{\pi}P \int_0^\infty \frac{u(x^2)}{t-x^2}dx,\quad t>0.$$ \end{theorem} This work is organized as follows. In the next section we give some facts about the Hilbert transform and compute it in two important cases. In the last section we prove the results and analyze two examples to show the usefulness of our approach. 
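For orientation, the oldest instance of this phenomenon is worth recording: in Stieltjes' classical example (our addition here, not part of the theorems above) the density proportional to $e^{-x^{1/4}}$ on $\mathbb{R}^+$ admits the perturbation $h(x)=\sin(x^{1/4})$. The moments of $fh$ can be checked in closed form via the substitution $u=x^{1/4}$ and the identity $\int_0^\infty u^m e^{-u}\sin u\,du=\Im\,\Gamma(m+1)/(1-i)^{m+1}$:

```python
import math

# Stieltjes' classical M-indeterminate example: f(x) proportional to
# exp(-x**0.25) on (0, inf), perturbation h(x) = sin(x**0.25).
# Substituting u = x**(1/4), the n-th moment of f*h becomes
# 4 * Im[ Gamma(4n+4) / (1-1j)**(4n+4) ], and (1-1j)**(4n+4) is real,
# so every moment of f*h vanishes.
def perturbed_moment(n: int) -> float:
    m = 4 * n + 3
    return 4 * (math.gamma(m + 1) / (1 - 1j) ** (m + 1)).imag

for n in range(6):
    # zero up to floating-point noise, measured relative to Gamma(4n+4)
    assert abs(perturbed_moment(n)) < 1e-9 * math.gamma(4 * n + 4)
```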
\section{Preliminaries}\label{pre} The following results can be found in \cite[pages 60-65]{Koosis}. Suppose that the function $u:\mathbb{R} \rightarrow \mathbb{R}$ satisfies \begin{equation}\label{krein} \int_{-\infty}^{\infty}\frac{|u(t)|}{1+t^2}dt < \infty. \end{equation} Then the integral \begin{eqnarray*} U(z)+i\widetilde{U}(z)&:=&\frac{i}{\pi} \int_{-\infty}^{\infty} \left(\frac{1}{z-t} +\frac{t}{1+t^2}\right)u(t)dt\\ &=&\frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\Im z}{|z-t|^2}u(t)dt+i\frac{1}{\pi}\int_{-\infty}^{\infty} \left(\frac{\Re z-t}{|z-t|^2}+ \frac{t}{1+t^2} \right)u(t)dt \end{eqnarray*} converges absolutely on $\mathbb{H}:=\{z\in \mathbb{C}: \Im z>0\}$ and defines an analytic function on $\mathbb{H}$. Notice that $U$ is the Poisson integral of $u$ and is the unique harmonic extension of $u$ to $\mathbb{H}$. Moreover, $\widetilde{U}$ is the unique conjugate harmonic function of $U$ such that $\widetilde{U}(i)=0$.\\ It is known that the non-tangential limits of $U$ and $\widetilde{U}$ exist at almost every $t\in \mathbb{R}$; the non-tangential limit of $U$ is $u$, and the non-tangential limit of $\widetilde{U}$ is called the Hilbert transform of $u$, denoted by $\mathcal{H}u$. The Hilbert transform of $u$ can be written as the principal value of a singular integral: \begin{equation*} \mathcal{H}u(t)=\frac{1}{\pi} \lim_{\varepsilon \rightarrow 0}\int_{|x-t|>\varepsilon}\left(\frac{1}{t-x}+\frac{x}{1+x^2} \right)u(x)dx, \quad a.e. \,\, t\in \mathbb{R}. \end{equation*} \begin{remark}\label{even} a) If $u$ is an even function satisfying (\ref{krein}) then $$\mathcal{H}u(t)= \frac{2t}{\pi}P \int_0^\infty \frac{u(x)}{t^2-x^2}dx, \quad a.e. \,\, t\in \mathbb{R}.$$ In particular, $\mathcal{H}u$ is an odd function on $\mathbb{R}$. 
We can also see that $\mathcal{H}c=0$ where $c$ is a constant function.\\ b) Let $u:\mathbb{R}^+\rightarrow \mathbb{R}$ be such that $u\in L^1(dt/(1+t^2))$. Then $\mathcal{H}_eu(t)=\mathcal{H}u^*(t^{1/2}),$ $t>0$, where $u^*(x)=u(x^2),$ $x\neq 0$. \end{remark} \begin{lemma}\label{hilpower} Let $0<|\mu |<1$. The function $h_\mu(x)=|x|^\mu$ satisfies $(\ref{krein})$ and \begin{equation}\label{Mu} \mathcal{H}h_\mu(t)=-\tan(\mu\pi/2) \text{sgn}( t) |t|^\mu,\quad t\neq 0. \end{equation} In particular, $\mathcal{H}_e(x^{\mu})(t)=-\tan(\mu\pi)t^\mu$, $t>0$, for $0<|\mu|<1/2$. \end{lemma} \begin{proof} Let $\mu \in (-1,0)$. From \cite[Table 1.2, page 464]{King} and Remark \ref{even} we have \begin{eqnarray*} -\tan(\mu\pi/2) \text{sgn}(t) |t|^{\mu}&=&\frac{1}{\pi}P\int_{-\infty}^{\infty}\frac{|x|^{\mu}}{t-x}dx\\ &=&\frac{2t}{\pi}P\int_0^\infty\frac{x^{\mu}}{t^2-x^2}dx=\mathcal{H}h_{\mu}(t), \quad t\neq 0. \end{eqnarray*} Let $\mu \in (0,1)$. From Remark \ref{even} and the previous case we obtain \begin{eqnarray*} \mathcal{H}h_\mu(t)&=&\frac{2t}{\pi}P \int_0^\infty \frac{x^{\mu}}{t^2-x^2}dx\\ &=&-\frac{2}{\pi t}P \int_0^\infty \frac{x^{-\mu}}{t^{-2}-x^2}dx\\ &=&-\tan(\mu\pi/2)\text{sgn}(t) |t|^\mu, \quad t \neq 0. \end{eqnarray*} \end{proof} \begin{lemma}\label{hilon} $\mathcal{H}(\ln|x|)(t)= -(\pi/2)\,\text{sgn}(t)$ for $t\neq 0$. In particular, $\mathcal{H}_e(\ln x)\equiv -\pi.$ \end{lemma} \begin{proof} From Remark \ref{even} we have $$\mathcal{H}(\ln|x|)(t)=\frac{2t}{\pi}P \int_0^\infty \frac{\ln x}{t^2-x^2}dx,$$ and it is sufficient to consider $t>0$. Now, from the identity $$ \int x^a \ln xdx=\frac{x^{a+1} \ln x}{a+1}-\frac{x^{a+1}}{(a+1)^2},$$ we get for $\varepsilon>0$ small enough that \begin{eqnarray*} \frac{1}{t^2}\int_0^{t-\varepsilon}\frac{\ln x}{1-(x/t)^2}dx&=&\sum_{n=0}^{\infty}\frac{1}{t^{2n+2}}\int_0^{t-\varepsilon}x^{2n}\ln xdx\\ &=&\left. 
\sum_{n=0}^{\infty}\frac{1}{t^{2n+2}}\left[\frac{x^{2n+1} \ln x}{2n+1}-\frac{x^{2n+1}}{(2n+1)^2}\right|_{x=0}^{x=t-\varepsilon}\right]\\ &=& \sum_{n=0}^{\infty}\frac{(t-\varepsilon)^{2n+1} \ln(t-\varepsilon)}{(2n+1)t^{2n+2}}-\frac{(t-\varepsilon)^{2n+1}}{(2n+1)^2t^{2n+2}}\\ &=& t^{-1}\ln(t-\varepsilon)\arctanh((t-\varepsilon)/t)-\sum_{n=0}^{\infty}\frac{(t-\varepsilon)^{2n+1}}{(2n+1)^2t^{2n+2}}. \end{eqnarray*} Similarly, we get \begin{eqnarray*} -\int^\infty_{|t|+\varepsilon}\frac{1}{x^2}\frac{\ln x}{1-(t/x)^2}dx&=& \sum_{n=0}^{\infty}\frac{-t^{2n} \ln(t+\varepsilon)}{(2n+1)(t+\varepsilon)^{2n+1}}-\frac{t^{2n}}{(2n+1)^2(t+\varepsilon)^{2n+1}}\\ &=&-t^{-1} \ln(t+\varepsilon)\arctanh(t/(t+\varepsilon))-\sum_{n=0}^{\infty}\frac{t^{2n}/(t+\varepsilon)^{2n+1}}{(2n+1)^2}. \end{eqnarray*} Then we use that $\arctanh(x)=2^{-1}\ln\frac{1+x}{1-x},$ $|x|<1$, and apply the Weierstrass M-test, considering $\varepsilon\in[0,\varepsilon_0)$ with $\varepsilon_0$ small enough, to obtain \begin{eqnarray*} \mathcal{H}(\ln|x|)(t)&=&\frac{1}{\pi}\lim_{\varepsilon\rightarrow 0^+} \ln\frac{(t-\varepsilon)(2t-\varepsilon)}{(t+\varepsilon)(2t+\varepsilon)} \\ &&-\frac{2}{\pi}\lim_{\varepsilon\rightarrow 0^+}\sum_{n=0}^{\infty}\frac{1}{(2n+1)^2}\left( \left((t-\varepsilon)/t\right)^{2n+1}+(t/(t+\varepsilon))^{2n+1}\right)\\ &=&-\frac{4}{\pi}\sum_{n=0}^{\infty}\frac{1}{(2n+1)^2}=-\frac{\pi}{2},\quad t>0. \end{eqnarray*} \end{proof} \section{Proof of the results} \begin{proof}[\textbf{Proof of Theorem \ref{ham}}.] 
Since $\ln f\leq f$, condition (\ref{logint})\ is equivalent to $\ln f\in L^{1}( dt/( 1+t^{2}))$, so we can set $u=\ln f$ and proceed as at the beginning of Section \ref{pre}: consider the holomorphic function $F(z)=U(z)+i\widetilde{U}(z)$ on $\mathbb{H}$, where $U$ is the harmonic extension of $u$\ to $\mathbb{H}$, and $\widetilde{U}$ is the unique conjugate harmonic function of $U$ satisfying $\widetilde{U}(i)=0.$\\ Now we introduce the function $G\in hol(\mathbb{H})$ given by \begin{equation*} G\left( z\right) =\exp \circ F (z),\quad z\in \mathbb{H}. \end{equation*} By Jensen's inequality we get \begin{equation*} \left| G\left( z\right) \right| =\exp U\left( z\right) \leq \frac{1}{\pi } \int_{-\infty}^{\infty} \frac{\Im z}{|z-t|^2} f\left( t\right) dt, \quad z\in\mathbb{H}, \end{equation*} therefore \begin{equation*} \int_{-\infty}^{\infty}\left| G\left( x,y\right) \right| dx\leq \int_{-\infty}^{\infty} f\left( t\right) dt=1\text{ for all }y>0. \end{equation*} Thus $G \in H^{1}$ and Theorem 3.1 in \cite[page 55]{Garnett} implies that there exists a function $g\in L^{1}( \mathbb{R}) $ such that \begin{equation*} g( x) =\lim_{y\rightarrow 0^+}G\left( x,y\right) ,\text{\ }a.e.\text{ }x\in \mathbb{R} . \end{equation*} On the other hand, we have \begin{eqnarray*} \lim_{y\rightarrow 0^+} G\left( x,y\right) &=&\exp\left(\lim_{y\rightarrow 0^+}U(x,y)\right) \exp\left(i\lim_{y\rightarrow 0^+}\widetilde{U}(x,y)\right)\\ & =& f\left( x\right) \exp\left(i(\mathcal{H}\ln f )(x)\right),\quad a.e.\text{ }x\in \mathbb{R}, \end{eqnarray*} therefore \begin{equation} \label{fuente} g(x)=f\left( x\right) \exp\left(i(\mathcal{H}\ln f )(x)\right) \quad \text{a.e. on } \mathbb{R}. 
\end{equation} In particular, we notice that $g$ has finite moments of all nonnegative orders.\\ By Lemma 3.7 in \cite[page 59]{Garnett} we have $$\int_{-\infty}^{\infty} g(x)e^{itx}dx =0\quad \text{ for all } t\geq 0, $$ which implies that $$\int_{-\infty}^{\infty} (ix)^k g(x)e^{itx}dx =0\quad \text{ for all } t\geq 0. $$ We set $t=0$ to get \begin{equation}\label{vanish} \int_{-\infty}^{\infty} x^k \Re g(x) dx= \int_{-\infty}^{\infty} x^k \Im g(x) dx= 0\quad \text{ for all } k\geq 0. \end{equation} Since $f$ is a density and $|g|=f$ a.e. on $\mathbb{R}$, it follows that at least one of the functions $\Re g, \Im g$ is a nonzero function. From (\ref{fuente}) and (\ref{vanish}) we get that $\cos(\mathcal{H}\ln f)$ and $\sin(\mathcal{H}\ln f)$ are perturbations for Stieltjes classes with center at $f$. \end{proof} \begin{example} Odd powers of the normal distribution. Let $X$ be a random variable with $X \sim N(0,\frac{1}{2})$; then $X^{2n+1}$, $n\geq 1$, has the density $$f_{n}(x):=\frac{1}{(2n+1)\sqrt{\pi}}|x|^{-2n/(2n+1)}\exp(-|x|^{2/(2n+1)}), \quad x\in \mathbb{R}.$$ Clearly $f_n$ has a finite moment sequence. In \cite{Stoyanov4} it was shown that $f_{n}$ has finite logarithmic integral for all $n \geq 1$. 
Lemmas \ref{hilpower} and \ref{hilon} imply that $$\mathcal{H}\ln f_n(t)=\text{sgn}(t)\left( \pi n/(2n+1)+\tan\left( \pi /(2n+1)\right)|t|^{2/(2n+1)} \right), \, t\neq 0,$$ therefore $S_{\mathbb{R}}\left( f_n,h_c^n \right) $ and $S_{\mathbb{R}}\left( f_n,h_s^n \right) $ are Stieltjes classes for all $n\geq 1$, where \begin{eqnarray*} h_c^n(t)&=&\cos(\mathcal{H}\ln f_n(t))=\cos\left( \pi n/(2n+1)+\tan\left( \pi /(2n+1)\right)|t|^{2/(2n+1)} \right)\\ &=& \sin(\pi/(4n+2))\cos(\beta_n|t|^{2/(2n+1)})- \cos(\pi/(4n+2))\sin(\beta_n|t|^{2/(2n+1)}), \end{eqnarray*} and \begin{eqnarray*} h_s^n(t)&=&\sin(\mathcal{H}\ln f_n(t))=\text{sgn}(t)\sin\left( \pi n/(2n+1)+\tan\left( \pi /(2n+1)\right)|t|^{2/(2n+1)} \right)\\ &=&\text{sgn}(t) \left(\sin(\pi/(4n+2))\sin(\beta_n|t|^{2/(2n+1)})+ \cos(\pi/(4n+2))\cos(\beta_n|t|^{2/(2n+1)})\right), \end{eqnarray*} with $\beta_n=\tan\left( \pi /(2n+1)\right)$, $t\neq 0$. The perturbation $h_c^n$ was obtained for the first time in \cite{Berg}. As far as we know $h_s^n$ is a new perturbation; one can proceed as in \cite{Berg} to verify directly that $f_n h_s^n$ has vanishing moments. \end{example} \begin{proof}[\textbf{Proof of Theorem \ref{sti}}] We set $f^*(x)=|x|f(x^2)$, $x \neq 0$. Clearly $f^*$ is a density on $\mathbb{R}$ and verifies $$\int_{-\infty}^{\infty}-\frac{\ln f^*\left( x\right) }{1+x^{2}}dx=-2\int_0^{\infty}\frac{\ln x }{1+x^{2}}dx-2\int_0^{\infty}\frac{\ln f\left( x^2\right) }{1+x^{2}}dx<\infty .$$ The hypotheses on $f$ imply that $f^*$ has a finite moment sequence. Actually, since $f^*$ is an even function, the moments of odd order of $f^*$ vanish.\\ Hence $f^*$ satisfies the hypotheses of Theorem \ref{ham} and we can proceed as in (\ref{vanish}) to get \begin{equation*} \int_{-\infty}^{\infty} x^{2k} f^*(x) \cos\left( \mathcal{H}\ln f^* (x)\right) dx= \int_{-\infty}^{\infty} x^{2k+1} f^*(x) \sin\left( \mathcal{H}\ln f^* (x)\right) dx= 0 \end{equation*} for all $k\geq 0$. 
Remark \ref{even} implies that \begin{eqnarray*} \int_{-\infty}^{\infty} x^{2k} f^*(x) \cos\left( \mathcal{H}\ln f^* (x)\right) dx&=&2\int_0^\infty x^{2k+1}f(x^2)\cos\left( \mathcal{H}\ln f^* (x)\right) dx \\ &=& \int_0^\infty x^k f(x) \cos\left( \mathcal{H}\ln f^* (x^{1/2})\right) dx=0 \end{eqnarray*} for all $k\geq 0$. Similarly, $$\int_{-\infty}^{\infty} x^{2k+1} f^*(x) \sin\left( \mathcal{H}\ln f^* (x)\right) dx=\int_0^\infty x^{k+1/2} f(x) \sin\left( \mathcal{H}\ln f^* (x^{1/2})\right) dx=0$$ for all $k\geq 0$. Unfortunately, the function $x^{1/2}\sin\left( \mathcal{H}\ln f^* (x^{1/2})\right)$ is not bounded on $\mathbb{R}^+$, so it cannot serve as a perturbation.\\ Remark \ref{even} and Lemma \ref{hilon} imply that \begin{eqnarray*} \mathcal{H}\ln f^* (t)&=&\mathcal{H}(\ln|x|)(t)+\mathcal{H}\left(\ln f(x^2)\right)(t)\\ &=& -\frac{\pi}{2} \text{sgn}(t)+\frac{2t}{\pi}P \int_0^\infty \frac{\ln f(x^2)}{t^2-x^2}dx. \end{eqnarray*} Finally, for $t>0$ we have $$\mathcal{H}\ln f^* (t^{1/2})=-\frac{\pi}{2}+\frac{2t^{1/2}}{\pi}P \int_0^\infty \frac{\ln f(x^2)}{t-x^2}dx,$$ therefore $\cos\left( \mathcal{H}\ln f^*(t^{1/2})\right)=\sin(\mathcal{H}_e\ln f(t))$ is a perturbation for the Stieltjes class with center at $f$. \end{proof} \begin{example} Let $X\sim N(0,\frac{1}{2})$. For $r>0$ the random variable $|X|^r$ has a density supported on $\mathbb{R}^+$ given by $$f_r(x):=\frac{2}{r\sqrt{\pi}}x^{1/r-1}\exp(-x^{2/r}), \quad x>0.$$ Clearly $f_r$ has a finite moment sequence. In \cite{Stoyanov4} it was shown that $f_r$ satisfies condition (\ref{parasti}) iff $r>4$. In this case, Lemmas \ref{hilpower} and \ref{hilon} imply that \begin{eqnarray*} \mathcal{H}_e\left(\ln f_r\right)(t)&=&\left(1/r-1\right)\mathcal{H}_e (\ln x)(t)-\mathcal{H}_e\left(x^{2/r}\right)(t)\\ &=& (1-1/r)\pi+\tan(2\pi/r)t^{2/r}, \quad t> 0. 
\end{eqnarray*} Therefore $S_{\mathbb{R}^+}\left( f_r,h_r \right) $ is a Stieltjes class for all $r>4$, where \begin{eqnarray*} h_r(t)&=&\sin(\mathcal{H}_e\left(\ln f_r\right)(t))=\sin((1-1/r)\pi+\tan(2\pi/r)t^{2/r})\\ &=& \sin(\pi/r)\cos(\tan(2\pi/r)t^{2/r})-\cos(\pi/r)\sin(\tan(2\pi/r) t^{2/r}), \quad t>0. \end{eqnarray*} This perturbation was also obtained in \cite{Berg}. \end{example} \section*{Conclusion} The Krein condition is no longer just a qualitative criterion for the $M$-indeterminacy of a density $f$: it now provides explicit families of densities all having the same moment sequence as $f$.
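As a small numerical appendix (our addition, not part of the argument above): the constant $-\pi/2$ of Lemma \ref{hilon} can be double-checked by folding the principal-value integral onto $(0,1)$. The substitution $x=ts$ followed by $s\mapsto 1/s$ on $(1,\infty)$ gives $\mathcal{H}(\ln|x|)(t)=\frac{4}{\pi}\int_0^1\frac{\ln s}{1-s^2}\,ds$ for every $t>0$, the $\ln t$ term dropping out because $\mathrm{PV}\int_0^\infty\frac{ds}{1-s^2}=0$:

```python
import math

# Midpoint rule for I = int_0^1 ln(s)/(1-s^2) ds; the integrand has an
# integrable log singularity at 0 and extends continuously (value -1/2) at 1.
N = 200_000
I = sum(math.log((k + 0.5) / N) / (1.0 - ((k + 0.5) / N) ** 2) / N
        for k in range(N))

# Term-by-term integration gives I = -sum 1/(2n+1)^2 = -pi^2/8,
# hence H(ln|x|)(t) = (4/pi) * I = -pi/2 for every t > 0, as in the lemma.
assert abs(I + math.pi**2 / 8) < 1e-4
assert abs((4 / math.pi) * I + math.pi / 2) < 1e-3
```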
\section{Introduction} The \emph{Catalan monoid} $C_n$ is the monoid of all order-preserving, weakly increasing self-maps $f$ of $[n]=\{1,\ldots, n\}$. It is well known that the cardinality of $C_n$ is the $n$-th Catalan number. See for example, \cite[Ex.6.19(s)]{Stanley2}. $C_n$ has appeared in many guises in combinatorics and combinatorial semigroup theory under different names. For example, it has also been called the monoid of non-decreasing parking functions or, simply, the monoid of order-decreasing and order-preserving functions \cite{Catalancombo, Catalanmonoid, GM}. It was first observed by Hivert and Thi\'ery \cite{HivThiery}, via an indirect proof, that $\Bbbk C_n$ is isomorphic to the incidence algebra of a certain poset $P_{n-1}$ for any field $\Bbbk$; a different indirect proof using the representation theory of $\mathscr J$-trivial monoids can be found in the second author's book~\cite{BenBook}. Grensing gave a direct isomorphism from the incidence algebra of $P_{n-1}$ to $\Bbbk C_n$ for any base commutative ring with unit $\Bbbk$, but her proof is quite involved and long because of the technical recursive construction of a complete set of orthogonal idempotents. Here we show that there is a straightforward direct isomorphism from $\Bbbk C_n$ to the incidence algebra of $P_{n-1}$ over any base commutative ring with unit whose details are trivial to check. The proof is similar to that used by the second author for inverse monoids~\cite{mobius1} and Stein for more general monoids \cite{steinpartial, Ehresmann, EhresmannErrata}. The complication in previous approaches is avoided here as we show that the isomorphism is given by a unipotent upper triangular $0/1$-matrix. \section{Combinatorics of the Catalan monoid} If $n\geq 0$ is an integer, let $P_n$ be the poset consisting of subsets of $[n]$ ordered by $X\leq Y$ if and only if $|X|=|Y|$ and if $X=\{x_1<\cdots<x_k\}$ and $Y=\{y_1<\cdots<y_k\}$, then $x_i\leq y_i$ for $i=1,\ldots, k$. 
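The counting fact from the introduction is easy to confirm by brute force. The sketch below is our addition; it reads ``weakly increasing'' as $f(i)\ge i$, the convention matching the bijection used below, and the function names are ours.

```python
from itertools import product
from math import comb

def catalan(n: int) -> int:
    """The n-th Catalan number, binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

def catalan_monoid(n: int):
    """All order-preserving self-maps f of {1,...,n} with f(i) >= i,
    encoded as tuples (f(1), ..., f(n))."""
    return [f for f in product(range(1, n + 1), repeat=n)
            if all(f[i] <= f[i + 1] for i in range(n - 1))
            and all(f[i] >= i + 1 for i in range(n))]

# |C_n| equals the n-th Catalan number: 1, 2, 5, 14, 42, 132, 429, ...
for n in range(1, 8):
    assert len(catalan_monoid(n)) == catalan(n)
```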
We shall also need the following refinement of this order: $X\preceq Y$ if $|X|<|Y|$, or if $|X|=|Y|$ and $X\leq Y$. There is a well-known bijection between the Catalan monoid $C_{n+1}$ and the set of ordered pairs $(X,Y)$ of elements of $P_n$ with $X\leq Y$ given as follows. If $X\leq Y$, define $f_{X,Y}\colon [n+1]\to [n+1]$ in $C_{n+1}$ by \[f_{X,Y}(i) = \begin{cases}y_1, &\text{if}\ 1\leq i\leq x_1\\ y_j, &\text{if}\ x_{j-1}<i\leq x_j\\ n+1, &\text{if}\ i>x_k \end{cases}\] where we have retained the notation of the previous paragraph for $X$ and $Y$. The condition that $X\leq Y$ guarantees that $f_{X,Y}$ is order preserving and weakly increasing. Conversely, if $f\in C_{n+1}$, let $Y=f([n+1])\setminus \{n+1\}$ and let $X=\{i\in f^{-1}(Y)\mid i=\max f^{-1}(f(i))\}$; so $X$ consists of the maximum element of each partition block of $f$ except the block of $n+1$. Then it is straightforward to check from $f$ being order-preserving and weakly increasing that $X\leq Y$ and that $f=f_{X,Y}$. See~\cite[Ch.~17, Sec.~5]{BenBook} for details; note that a slightly different convention was used there. Let us say that $S\subseteq [n]$ is a \emph{partial cross-section} for $f\in C_{n+1}$ if $n+1\notin f(S)$ and $f|_S$ is injective, i.e., $|S|=|f(S)|$. We denote by $\mathsf{PCS}(f)$ the set of partial cross-sections of $f$. The following proposition is straightforward from the definitions and so we omit the proof. \begin{Prop}\label{p:basic.facts} Let $S$ be a partial cross-section for $f=f_{X,Y}$ in $C_{n+1}$. \begin{enumerate} \item $S\preceq X$. \item $S\leq f(S)$. \item $f(S)\subseteq Y$, hence $f(S)\preceq Y$. \item $X\in \mathsf{PCS}(f)$. \end{enumerate} \end{Prop} The following lemma is key to our proof. \begin{Lemma}\label{l:product} Let $f,g\in C_{n+1}$. Then $S\in \mathsf{PCS}(fg)$ if and only if $S\in \mathsf{PCS}(g)$ and $g(S)\in \mathsf{PCS}(f)$. \end{Lemma} \begin{proof} Suppose first that $S\in \mathsf{PCS}(fg)$. 
Since $f(n+1)=n+1$ and $n+1\notin fg(S)$, it follows that $n+1\notin g(S)$. Also, $fg|_S$ is injective and hence $g|_S$ is injective. Thus $S$ is a partial cross-section for $g$. Since $|fg(S)|=|S|=|g(S)|$ we also have that $f|_{g(S)}$ is injective. As $n+1\notin fg(S)$, we see that $g(S)$ is a partial cross-section for $f$. Conversely, assume that $S$ is a partial cross-section for $g$ and $g(S)$ is a partial cross-section for $f$. Then $n+1\notin fg(S)$ and $|fg(S)|=|g(S)|=|S|$. Thus $S$ is a partial cross-section for $fg$. \end{proof} \section{The isomorphism of algebras} Let $\Bbbk$ be a commutative ring with unit and let $n\geq 0$. Let $I(P_n,\Bbbk)$ be the incidence algebra of $P_n$ over $\Bbbk$. It can be viewed as the $\Bbbk$-algebra with basis all ordered pairs $(X,Y)$ with $X\leq Y$ in $P_n$ and where the product is defined on basis elements by \[(U,V)(X,Y) = \begin{cases}(X,V), & \text{if}\ Y=U\\ 0, & \text{else.}\end{cases}\] In other words, this is the algebra of the category corresponding to the poset $P_n$. We partially order the basis $C_{n+1}$ of $\Bbbk C_{n+1}$ by saying $f_{U,V}$ comes before $f_{X,Y}$ if $U\preceq X$ and $V\preceq Y$ and, similarly, we order the basis of $I(P_n,\Bbbk)$ by saying that $(U,V)$ comes before $(X,Y)$ if $U\preceq X$ and $V\preceq Y$. We can now prove the main result. \begin{Thm}\label{t:main} There is an isomorphism $\p\colon \Bbbk C_{n+1}\to I(P_n,\Bbbk)$ of $\Bbbk$-algebras given by \[\p(f) = \sum_{S\in \mathsf{PCS}(f)} (S,f(S))\] for $f\in C_{n+1}$. \end{Thm} \begin{proof} It is immediate from Proposition~\ref{p:basic.facts} that $\p$ is well defined and that the matrix of $\p$ as a homomorphism of free $\Bbbk$-modules with respect to our preferred bases and orderings is unipotent upper triangular. Indeed, $\p(f_{X,Y}) = (X,Y)+a$ where $a$ is a sum of certain terms $(U,V)$ with $U\prec X$ and $V\preceq Y$. It follows that $\p$ is an isomorphism of $\Bbbk$-modules. 
It remains to show that it is a ring homomorphism. Indeed, \begin{align*} \p(f)\p(g) &= \sum_{T\in \mathsf{PCS}(f)}(T,f(T))\cdot \sum_{S\in \mathsf{PCS}(g)}(S,g(S))\\ &= \sum_{S\in \mathsf{PCS}(g), g(S)\in \mathsf{PCS}(f)}(S,f(g(S)))\\ &=\p(fg) \end{align*} where the last equality is by Lemma~\ref{l:product}. This completes the proof. \end{proof}
\section{Introduction} In this paper we will consider the operator $\mathcal{L}$ (called here the Heckman--Opdam Laplacian) on $\mathbb{R}^n$ defined, for $f$ a $C^2$ function, by \begin{eqnarray} \label{explicitlaplacian} \mathcal{L} f(x) &=& \Delta f(x)+ \sum_{\alpha \in \mathcal{R}^+}k_\alpha \coth \frac{\langle\alpha,x\rangle}{2}\partial_\alpha f(x) \\ &\quad& \nonumber - \sum_{\alpha \in \mathcal{R}^+}k_\alpha \frac{|\alpha|^2}{4\sinh^2 \frac{\langle\alpha,x\rangle}{2}} \{f(x)-f(r_\alpha x)\}. \end{eqnarray} Here $\Delta$ is the usual Euclidean Laplacian, $\mathcal{R}$ is a root system, $\mathcal{R}^+$ its positive part, the $r_\alpha $'s are the orthogonal reflections associated to the roots, and $k$ is a positive function invariant under the action of the $r_\alpha$'s (see the next section). We denote by $W$ the Weyl group, i.e. the finite group generated by the $r_\alpha$'s. We denote by $L$ the restriction of $\mathcal{L}$ to the set of $W$-invariant functions. A simpler formula for $L$ is given by \begin{eqnarray} \label{radiallaplacian2} L f(x) = \Delta f(x)+ \sum_{\alpha \in \mathcal{R}^+}k_\alpha \coth \frac{\langle\alpha,x\rangle}{2}\partial_\alpha f(x). \end{eqnarray} Our main results are the following two theorems: \begin{theo} \label{theoradial} Assume that $k\ge 1/2$. Then the set of bounded $W$-invariant harmonic functions for the Heckman--Opdam Laplacian is exactly the set of constant functions. In other words the Poisson boundary of $L$ is trivial. \end{theo} \begin{theo} \label{theononradial} Assume that $k\ge 1/2$. Then the set of bounded harmonic functions for the Heckman--Opdam Laplacian is a vector space of dimension $|W|$. In other words the Poisson boundary of $\mathcal{L}$ is $W$. \end{theo} In the next section we will give a precise definition of the terminology ``harmonic function''. 
We shall also discuss some consequences of our results in terms of the Heckman--Opdam hypergeometric functions, which are particular eigenfunctions of the operator $\mathcal{L}$. \vspace{0.2cm} \noindent The first result (Theorem \ref{theoradial}) was already known for values of $k$ corresponding to the case of symmetric spaces of the noncompact type $G/K$. The second result (Theorem \ref{theononradial}) is new even for these particular values of $k$, but should also be compared to the situation on symmetric spaces. There, according to the fundamental work of Furstenberg \cite{F} (see also \cite{GJT}), the Poisson boundary of the Laplace--Beltrami operator (but also of a large class of random walks) is $K/M$. But it was already observed that in the Heckman--Opdam (also called trigonometric Dunkl) theory the group $W$ often plays the same role as $K$ or $K/M$. First geometrically, since there is a kind of Cartan decomposition: any $x\in \mathbb{R}^n$ can be uniquely decomposed as $w\cdot x^W$, with $x^W$ the radial part of $x$ (lying in the positive Weyl chamber) and $w\in W$. In representation theory also \cite{O}: briefly, if $\mathcal{H}$ is the graded Hecke algebra generated by $W$ and the Dunkl--Cherednik operators (see the next section), then $(\mathcal{H},W)$ shares some properties of the Gelfand pair $(G,K)$, like the fact that in any irreducible finite-dimensional $\mathcal{H}$-module the subspace of $W$-invariant vectors is at most $1$-dimensional. So in some sense Theorem \ref{theononradial} is another manifestation (say at an analytical or probabilistic level) of the strong analogy between $W$ and $K$. We should add that the hypothesis $k>0$ is probably sufficient to get the results of Theorems \ref{theoradial} and \ref{theononradial}. Here we restrict ourselves to the case $k\ge 1/2$, because then the stochastic process associated with $L$ (or $\mathcal{L}$) a.s. 
never hits the walls (the hyperplanes orthogonal to the roots, which correspond to the singularities of $L$), and we need this to ensure that the coupling we use is well defined. The paper is organized as follows. In the next section we recall all necessary definitions. In section \ref{secradial} we prove Theorem \ref{theoradial}, by using the probabilistic technique of mirror coupling. In section \ref{secnonradial} we prove Theorem \ref{theononradial}, by extending the coupling to the non-radial process. Our main tool for this is the skew-product representation from Chybiryakov \cite{Chy}, which we have to adapt to our setting. \vspace{0.2cm} \noindent \textit{Acknowledgments: I warmly thank Marc Arnaudon for having explained to me the technique of mirror coupling, and Alano Ancona for enlightening discussions about the regularity of harmonic functions. } \section{Preliminaries} Let $\mathfrak{a}$ be a Euclidean vector space of dimension $n$, equipped with an inner product $\langle\cdot ,\cdot \rangle$, and denote by $\mathfrak{h}:=\mathfrak{a} +i\mathfrak{a}$ its complexification. We consider $\mathcal{R} \subset \mathfrak{a}$ an integral root system (see \cite{Bou}). We choose a subset of positive roots $\mathcal{R}^+$. Let $\alpha^\vee=2\alpha/|\alpha|^2$ be the coroot associated to a root $\alpha$ and let $$r_\alpha(x)=x-\langle\alpha^\vee,x\rangle\alpha,$$ be the corresponding orthogonal reflection. Remember that $W$ denotes the Weyl group associated to $\mathcal{R}$, i.e. the group generated by the $r_\alpha$'s. Let $k\ :\ \mathcal{R} \rightarrow [1/2,+\infty)$ be a multiplicity function, which by definition is $W$-invariant. We set $$\rho=\frac{1}{2}\sum_{\alpha \in \mathcal{R}^+}k_\alpha \alpha.$$ Let $$\mathfrak{a}_+ = \{x \mid \forall \alpha \in \mathcal{R}^+,\ \langle\alpha,x\rangle>0\},$$ be the positive Weyl chamber.
Let also $\overline{\mathfrak{a}_+}$ be its closure, $\partial \mathfrak{a}_+$ its boundary and $\mathfrak{a}_{\text{reg}}$ the subset of regular elements in $\mathfrak{a}$, i.e. those elements which belong to no hyperplane $\{\alpha=0\}$. As recalled in the introduction, any $x\in \mathfrak{a}$ can be uniquely decomposed as $x=w x^W$, with $x^W\in \overline{\mathfrak{a}_+}$ and $w\in W$. We call $x^W$ the radial part of $x$ and $w$ its angular part. For $\xi \in \mathfrak{a}$, let $T_\xi$ be the Dunkl--Cherednik operator \cite{C}. It is defined, for $f\in C^1(\mathfrak{a})$ and $x\in \mathfrak{a}_{\text{reg}}$, by $$T_\xi f(x)=\partial_\xi f(x) + \sum_{\alpha \in \mathcal{R}^+}k_\alpha \frac{\langle\alpha,\xi\rangle}{1-e^{-\langle\alpha,x\rangle}}\{f(x)-f(r_\alpha x) \}-\langle\rho,\xi\rangle f(x).$$ The Dunkl--Cherednik operators form a commutative family of differential-difference operators (see \cite{C} or \cite{O}). The Heckman--Opdam Laplacian $\mathcal{L}$ is also given by the formula $$\mathcal{L}+|\rho|^2=\sum_{i=1}^{n} T_{\xi_i}^2,$$ where $\{\xi_1,\dots,\xi_n\}$ is any orthonormal basis of $\mathfrak{a}$. Let $\la \in \mathfrak{h}$. We denote by $F_\la$ the unique (see \cite{HO}, \cite{O}) analytic $W$-invariant function on $\mathfrak{a}$, which satisfies the differential equations $$p(T_\xi)F_\la=p(\la)F_\la \text{ for all $W$-invariant polynomials } p$$ and which is normalized by $F_\lambda(0)=1$ (in particular $\mathcal{L} F_\la=(\langle\la,\la\rangle-|\rho|^2) F_\la$). We denote by $G_\lambda$ the unique analytic function on $\mathfrak{a}$, which satisfies the differential-difference equations (see \cite{O}) \begin{eqnarray} \label{equations} T_\xi G_\la = \langle\la,\xi\rangle G_\la \text{ for all }\xi \in \mathfrak{a}, \end{eqnarray} and which is normalized by $G_\lambda(0)=1$.
These functions are related by the formula: \begin{eqnarray} \label{FG} F_\la(x)=\frac{1}{|W|} \sum_{w\in W} G_\la(wx), \end{eqnarray} for all $x\in \mathfrak{a}$ and all $\la \in \mathfrak{h}$. It was shown in \cite{Sch2} that $\frac{1}{2}\mathcal{L}$ and $\frac{1}{2}L$ are generators of Feller semi-groups that we shall denote respectively by $(P_t,t\ge 0)$ and $(P^W_t,t\ge 0)$. We will use the following definition for harmonic functions: \begin{defin} \label{defharmonic} A bounded or nonnegative function $h:\mathfrak{a} \to \mathbb{R}$ is called harmonic if it is measurable and satisfies $P_th=h$ for all $t>0$. \end{defin} \begin{rem} \emph{It is well known that if $h$ is a $C^2$ function such that $\mathcal{L} h=0$, then $h$ is harmonic in the sense of Definition \ref{defharmonic}. Conversely, Corollary \ref{coroG} below shows that, when $k\ge 1/2$, any bounded harmonic function is regular and thus satisfies $\mathcal{L} h=0$. On the other hand, it is a general fact (which applies for any $k>0$), that bounded $W$-invariant harmonic functions are regular in $\mathfrak{a}_+$, but we will not use this fact here.} \end{rem} Observe that by definition $F_\rho$ is a $W$-invariant harmonic function. Moreover it is known (see \cite{Sch2} Remark 3.1) that it is bounded. So Theorem \ref{theoradial} shows that in fact $F_\rho$ is constant, equal to $1$. Similarly the functions $G_{w\rho}$, $w\in W$, are harmonic and also bounded. This last property follows from Formula \eqref{FG}, since the $G_{w\rho}$'s are real and positive (see \cite{Sch2} Lemma 3.1). In fact one has the following \begin{cor} \label{coroG} If $k\ge 1/2$, then any bounded harmonic function is a linear combination of the $G_{w\rho}$'s, $w\in W$. \end{cor} \begin{proof} The only thing to prove is that the $G_{w\rho}$'s are linearly independent. This results from the fact that they are all eigenfunctions of the Dunkl--Cherednik operators but for different eigenvalues.
More precisely, assume that for some real numbers $(c_w)_{w\in W}$, we have $$\sum_{w\in W} c_w G_{w\rho}=0.$$ Then, by applying the operators $p(T_\xi)$, with $p$ polynomial, we get $$\sum_{w\in W} c_w p(w\rho)G_{w\rho}=0 \quad \textrm{for all } p.$$ From this, and the fact that $G_{w\rho}(0)=1$ for all $w$, it is easily seen that we must have $c_w=0$ for all $w$. \end{proof} \section{The $W$-invariant case: proof of Theorem \ref{theoradial}} \label{secradial} In this section we shall prove Theorem \ref{theoradial}. For this we will use the stochastic process $(X^W_t,t\ge 0)$ associated with $L$, called radial HO-process, and the so-called mirror coupling technique. First it is known \cite{Sch1} that $X^W$ is a strong solution of the SDE: $$ X^W_t=x+B_t + V^1_t $$ where $(B_t,t\ge 0)$ is a Brownian motion on $\mathfrak{a}$ and $$ V^1_t:=\sum_{\alpha\in \mathcal{R}^+} k_\alpha \alpha \int_0^t \coth \langle \alpha, X^W_s\rangle \ d s. $$ Moreover when $k\ge 1/2$, $X^W$ a.s. takes values in $\mathfrak{a}_+$, or in other words it never reaches $\partial \mathfrak{a}_+$ (see \cite{Sch1}). Now if $x,y\in \mathfrak{a}_+$, we define the couple $((X^W_t,Y^W_t),t\ge 0)$ as follows. Set $T=\inf\{s \mid X^W_s=Y^W_s\}$. Then by definition $X^W$ is as above, and $(X^W,Y^W)$ is the unique solution of the SDE: \begin{eqnarray} \label{SDE} (X^W_t,Y^W_t)=(x,y) + (B_t,B'_t) + (V^1_t,V^2_t), \quad \textrm{for } t<T, \end{eqnarray} where $dB'_t=r_t dB_t$, with $r_t$ the orthogonal reflection with respect to the hyperplane orthogonal to the vector $Y^W_t-X^W_t$ (in particular Lévy's criterion shows that $B'$ is a Brownian motion), and $$ V^2_t:=\sum_{\alpha\in \mathcal{R}^+} k_\alpha \alpha \int_0^t \coth \langle \alpha, Y^W_s\rangle \ d s. $$ For $t\ge T$, we set $Y^W_t=X^W_t$. The existence of this coupling is guaranteed by the fact that the SDE \eqref{SDE} has locally regular coefficients. We define also $Z^W$ by $$Z^W_t:=Y^W_t-X^W_t,$$ and set $z^W_t= |Z^W_t|$.
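To make the coupling mechanism concrete, the following Python sketch runs a crude Euler--Maruyama discretization of the mirror coupling in a rank-one toy model ($n=1$, one positive root with $\langle\alpha,x\rangle=x$), where the mirror reflection $r_t$ reduces to flipping the sign of the Brownian increment. The drift $k\coth(x/2)$, the step size and the starting points are illustrative choices only; the scheme does not enforce positivity and is not meant as a faithful simulation of the radial HO-process.

```python
import math
import random

def mirror_coupling(x, y, k=0.5, dt=1e-3, horizon=200.0, seed=0):
    """Euler-Maruyama sketch of the mirror coupling for a rank-one
    diffusion dX = dB + k*coth(X/2) dt.  In rank one the reflection
    r_t simply flips the sign of dB, so the mirrored process is
    dY = -dB + k*coth(Y/2) dt until X and Y meet."""
    rng = random.Random(seed)
    t = 0.0
    while t < horizon:
        # declare the processes coupled once they meet at this resolution
        if abs(y - x) < math.sqrt(dt):
            return t                      # approximate coupling time T
        db = rng.gauss(0.0, math.sqrt(dt))
        x += db + k / math.tanh(x / 2.0) * dt    # coth = 1/tanh
        y += -db + k / math.tanh(y / 2.0) * dt   # mirrored increment
        t += dt
    return None                            # not coupled within the horizon

T = mirror_coupling(2.0, 4.0)
```

Because the difference $Y-X$ behaves like a Brownian motion (with a contracting drift, since $\coth$ is decreasing), the two paths meet in finite time, after which they are run together.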
It is known \cite{Sch1} that a.s. $X^W_t/t \to \rho$, and thus that $\langle\alpha,X^W_t\rangle \sim \langle\rho,\alpha\rangle t$, for all $\alpha \in \mathcal{R}^+$. From this we see that a.s. $\sup_{t\ge 0}|V^2_t-V^1_t|<+\infty$. Then Tanaka's formula (\cite{RY} p.222) shows that $$ z^W_t =\gamma_t + v_t, \quad \textrm{for } t<T, $$ with $\gamma$ a one-dimensional Brownian motion and a.s. $\sup_{t\ge 0}|v_t|<+\infty$. In particular $T$ is a.s. finite. The end of the proof is routine now. Assume that $h$ is a bounded $W$-invariant harmonic function. Then it is well known, and not difficult to show, that $(h(X^W_t),t\ge 0)$ as well as $(h(Y^W_t),t\ge 0)$ are bounded martingales. Thus they a.s. converge toward some limiting (random) values, respectively $l$ and $l'$. Since a.s. $X^W_t=Y^W_t$ for $t$ large enough, we have a.s. $l=l'$. Then usual properties of bounded martingales show that $$ h(x) = \mathbb{E}[l] = \mathbb{E}[l']=h(y). $$ Since this holds for any $x,y \in \mathfrak{a}_+$, this proves that $h$ is constant. \hfill $\square$ \section{The non $W$-invariant case: proof of Theorem \ref{theononradial}} \label{secnonradial} In order to prove Theorem \ref{theononradial}, the first idea is to extend the previous coupling to the full process $(X_t,t\ge 0)$ with semi-group $(P_t,t\ge 0)$. For this our tool will be the skew-product representation due to Chybiryakov \cite{Chy} (see \cite{GaY} and \cite{Chy2} for the one-dimensional case). Actually Chybiryakov dealt with Dunkl processes, so we shall first mention the changes needed to adapt his proof to the present setting, and then explain how to combine this representation with the coupling from the previous section. \subsection{Skew-product representation and extension of the coupling} \label{sprod} The skew-product representation gives a constructive way to define $X$ starting from $X^W$, by successively adding jumps in the direction of the roots.
Let us sketch the main steps of the construction (for more details see \cite{Chy}). First one fixes arbitrarily an order for the positive roots: $\alpha_1,\dots,\alpha_{|\mathcal{R}^+|}$. Then for each $j \in [1,|\mathcal{R}^+|]$, set $$ \mathcal{L}^jf(x):= Lf(x) - \sum_{i\le j}c_{\alpha_i}(x) \{f(x)-f(r_{\alpha_i} x)\}, $$ where for any root $\alpha$, $$ c_\alpha(x):= k_{\alpha} \frac{|\alpha|^2}{4\sinh^2 \frac{\langle\alpha,x\rangle}{2}}. $$ Set also $\mathcal{L}^0=L$. Set $$ \widetilde{\mathcal{L}}^j f(x) : = c_{\alpha_j}^{-1}(x) \mathcal{L}^j f(x), $$ and $$ \mathcal{L}^{j,j+1}f(x) := c_{\alpha_{j+1}}^{-1}(x) \mathcal{L}^j f(x). $$ The goal is to define inductively a sequence of processes $(X^j(t),t\ge 0)$, $j=0,\dots,|\mathcal{R}^+|$, associated to the operators $\mathcal{L}^j$'s. First $X^0$ is just the radial HO-process considered in the previous section. Next assume that $\mathcal{L}^j$ is the generator of a Markov process $(X^j(t),t\ge 0)$. Then set $$ A_t^j=\int_0^t c_{\alpha_{j+1}}(X^j_s)\ ds, $$ and $$ \tau_t^j= \inf\{s\ge 0\mid A_s^j>t\}. $$ Using the martingale problem characterization one can see that the radial part of $X^j$ is a radial HO-process. Thus for all $\alpha\in \mathcal{R}^+$, $|\langle \alpha,X^j_t\rangle|\ge c t$, for $t$ large enough and some constant $c>0$. In particular the increasing process $A^j$ is bounded. Set $T^j=\lim_{t\to +\infty} A^j_t$. Then observe that $\tau_t^j=+\infty$, when $t\ge T^j$. This is essentially the only difference with the Dunkl case considered in \cite{Chy} (where $A^j$ was not bounded and $\tau_t^j$ finite for all $t$). But one can still see that if $$X^{j,j+1}(t):= X^j(\tau_t^j) \quad t< T^j,$$ then $X^{j,j+1}$, killed at time $T^j$, is a solution of the martingale problem associated with $\mathcal{L}^{j,j+1}$ (see for instance \cite{EK} exercise 15 p.263 and section 6 p.306). The next step is to add jumps to $X^{j,j+1}$ in the direction of the root $\alpha_{j+1}$.
Namely one defines a new process $\widetilde{X}^j$, also denoted by $X^{j,j+1}*_{\alpha_{j+1}} N$ in \cite{Chy} section 2.5, which is a solution of the martingale problem associated with $\widetilde{\mathcal{L}}^{j+1}$. Roughly $\widetilde{X}^j$ is constructed by gluing several paths, all with law $X^{j,j+1}$ or $r_{\alpha_{j+1}}X^{j,j+1}$, such that for any two consecutive paths the starting point of the second is the image of the end point of the first under the reflection $r_{\alpha_{j+1}}$. The lengths of the paths are determined by independent exponentially distributed random variables. Here the only minor change is that $\widetilde{X}^j$ explodes at some time, say $\widetilde{T}^j$. A change of variables shows that $$ \lim_{t\to \widetilde{T}^j} \int_0^t c_{\alpha_{j+1}}^{-1}(\widetilde{X}^j(s))\ ds=+\infty. $$ So for any $t\ge 0$, one can define $\widetilde{A}^j(t)$ as the solution of the equation $$ t = \int_0^{\widetilde{A}^j(t)}c_{\alpha_{j+1}}^{-1}(\widetilde{X}^j(s))\ ds. $$ Differentiating this equation one gets $$ \frac{d}{dt} \widetilde{A}^j(t) = c_{\alpha_{j+1}}(\widetilde{X}^j(\widetilde{A}^j(t))). $$ Then set $X^{j+1}(t)=\widetilde{X}^j(\widetilde{A}^j(t))$, for all $t\ge 0$. The preceding equation gives $$ \widetilde{A}^j(t)= \int_0^t c_{\alpha_{j+1}}(X^{j+1}(s))\ ds, $$ which in turn shows that $X^{j+1}$ is a solution of the martingale problem associated with $\mathcal{L}^{j+1}$, as wanted. The point now is to combine this construction of $X=X^{|\mathcal{R}^+|}$ with the coupling of the radial process from section \ref{secradial}. We first take $(X^0,Y^0)$ with law given by this coupling. Then we define the sequence $((X^j(t),Y^j(t)),t\ge 0)$, $j= 1,\dots,|\mathcal{R}^+|$, simply by following the previous construction for the two coordinates. Actually this coupling is interesting only when $X=X^{|\mathcal{R}^+|}$ and $Y=Y^{|\mathcal{R}^+|}$ never jump, but this is precisely what we need.
Indeed in this case we have $X_t=X^0(t)$ and $Y_t=Y^0(t)$, for all $t\ge 0$, so they coincide a.s. after some finite time. \subsection{End of the proof} For any $x\in \mathfrak{a}$, we denote by $\mathbb{P}_x$ the law of $(X_t,t\ge 0)$ starting from $x$. For $\epsilon \in (0,1)$, set $$A_\epsilon := \{z\in \mathfrak{a} \mid \mathbb{P}_z[X \textrm{ never jumps}] \ge 1-\epsilon\}.$$ We know that the process $(X_t,t\ge 0)$ can jump, so a priori $A_\epsilon \subsetneq \mathfrak{a}$. But we also know \cite{Sch1} that a.s. $X$ eventually stops jumping after some finite random time. This implies that \begin{eqnarray} \label{eqjumps} \lim_{t\to +\infty} \mathbb{P}_x[X \textrm{ never jumps after time } t]=1, \end{eqnarray} for all $x\in \mathfrak{a}$. But by using the Markov property, we have for all $t>0$, \begin{eqnarray} \label{eqjump2} \nonumber \mathbb{P}_x[X \textrm{ never jumps after time } t] &=& \mathbb{E}_x\left[ \mathbb{P}_{X_t}[X \textrm{ never jumps}] \right]\\ &=& \int_\mathfrak{a} \mathbb{P}_z[X \textrm{ never jumps}]\ d\mu^x_t(z), \end{eqnarray} where $\mu^x_t$ is the law of $X_t$ under $\mathbb{P}_x$. So \eqref{eqjumps} and \eqref{eqjump2} imply that for all $x\in \mathfrak{a}$, $\mu_t^x(A_\epsilon) \to 1$, when $t\to +\infty$. In particular $A_\epsilon$ is nonempty. Moreover, by invariance of $\mathcal{L}$ under $W$, we know that for any $w\in W$, the law of $(wX_t,t\ge 0)$ under $\mathbb{P}_x$ is $\mathbb{P}_{wx}$. In particular, for any $w\in W$ and any $\epsilon\in (0,1)$, we have $w(A_\epsilon\cap \mathfrak{a}_+)=A_\epsilon \cap w\mathfrak{a}_+$. Thus all these subsets of $A_\epsilon$ are nonempty as well. Now let $h$ be a bounded harmonic function. Fix $w\in W$, and take $x,y \in A_\epsilon \cap w\mathfrak{a}_+$. Consider the coupling $((X_t,Y_t),t\ge 0)$ as defined above. Since $(h(X_t),t\ge 0)$ and $(h(Y_t),t\ge 0)$ are bounded martingales, they converge a.s. toward some limits, respectively $l$ and $l'$. We already saw that $X^W$ and $Y^W$ a.s.
coincide after some time. So if both processes $X$ and $Y$ never jump, they must also coincide after some time, and in this case we have $l=l'$. Since $x,y \in A_\epsilon$, this shows that $$|h(x)-h(y)|=|\mathbb{E}[l]-\mathbb{E}[l']| \le 2C\epsilon,$$ where $C=\sup |h|$. In particular, by completeness of $\mathbb{R}$, for any sequence $(x_\epsilon)_{\epsilon \in (0,1)}$, such that $x_\epsilon \in A_\epsilon\cap w\mathfrak{a}_+$ for all $\epsilon \in (0,1)$, the limit of $h(x_\epsilon)$ when $\epsilon$ tends to $0$ exists, and is independent of the chosen sequence. Call $l_w$ this limit. For all $t\ge 0$, we denote by $w_t$ the angular part of $X_t$. Since $X$ eventually stops jumping, $(w_t,t\ge 0)$ a.s. converges, i.e. becomes stationary. Then for any $w\in W$, define the function $h_w$ on $\mathfrak{a}$ by $$h_w(x)=\mathbb{P}_x\left[\lim_{t\to +\infty}w_t=w\right].$$ By standard properties of Markov processes, we know that these functions are measurable, and actually it is not difficult to see that they are harmonic. Moreover the above convergence result for harmonic functions shows that these functions $h_w$, $w\in W$, are linearly independent. Then set $$\tilde{h}(x):= \sum_{w\in W} l_w h_w(x),$$ for all $x\in \mathfrak{a}$. All that remains to do now is to prove that $\tilde{h}=h$. Indeed if this were true, this would prove that the vector space of bounded harmonic functions has dimension $|W|$ as wanted. By using the martingale property, we have for any $t>0$ \begin{eqnarray} \label{hhtilde} |h(x)-\tilde{h}(x)|= |\mathbb{E}_x[h(X_t)-\tilde{h}(X_t)]| \le \int_\mathfrak{a} |h(z)-\tilde{h}(z)|\ d\mu_t^x(z). \end{eqnarray} We have seen that for all $\epsilon \in (0,1)$, \begin{eqnarray} \label{Aepsilon} \mu_t^x(A_\epsilon) \to 1 \end{eqnarray} when $t\to +\infty$. But it is not difficult to see (by using the definition of the $l_w$'s), that for any $\epsilon'>0$, there exists $\epsilon>0$ such that $$|h(z)-\tilde{h}(z)|\le \epsilon' \quad \forall z\in A_\epsilon.
$$ Since this holds for any $\epsilon'>0$, \eqref{hhtilde} and \eqref{Aepsilon} show that $h=\tilde{h}$ as wanted. \hfill $\square$ \begin{rem} \emph{We have seen in the previous proof that the family $(h_w)_{w\in W}$ is a basis of the space of bounded harmonic functions. Since the family $(G_{w\rho})_{w\in W}$ is another basis, it would be interesting to know the coefficients relating these two bases. } \end{rem}
\section{Introduction} To fully exploit the potential of future $\mathrm{e^+ e^-}$ collider experiments, a precise reconstruction of all final states is necessary. To achieve this for hadronic final states, a jet energy resolution significantly beyond the current state-of-the-art, of the order of 3\%--4\% over a wide energy range, is required. Such a resolution, which would provide the capability to separate hadronically decaying $\mathrm{W^{\pm}}$ and Z bosons, requires specialised detectors and reconstruction algorithms. In contrast to the traditional energy measurement, which sums up the energy depositions in all sub-detectors, the Particle Flow (PF) \cite{Sefkow:2015hna} approach is designed to measure each type of particle in a jet in the best-suited sub-detector. For charged particles, the tracker typically offers the best energy resolution. Photon energies are measured in the electromagnetic calorimeter (ECAL) and the energy of neutral hadrons is measured in the hadronic calorimeter (HCAL). Since charged particles and photons carry about 90\% of the energy in a typical jet, particle flow intrinsically improves the jet energy resolution. Only the remaining 10\% of the total jet energy, carried by neutral hadrons, is measured in the HCAL, which typically offers a worse energy resolution. To reduce double counting of charged particles in the tracker and calorimeter, it is vital to separate particle showers in the calorimeters and assign them to tracks, which calls for high spatial granularity. This article introduces the beam test data sets taken with the large technological prototype of the CALICE Analog Hadronic Calorimeter (AHCAL) \cite{Sefkow_2019}, a detector designed to complement the particle flow paradigm, and reports on the ongoing analyses.
\section{Data taking at the SPS} During two dedicated beam test periods in May and June 2018, the prototype recorded minimum ionizing particle (MIP) tracks of muons as well as particle showers from electrons and pions with beam energies ranging from \SI{10}{\giga\electronvolt} to \SI{200}{\giga\electronvolt}. Additionally, the calorimeter was moved relative to the beam to evaluate the performance over the full front face area and volume. Several $10^6$ events were recorded per particle type and energy, and the detector was operated in both continuous acquisition and power-pulsing\footnote{The power-pulsing mode is designed for the application in a linear collider where the particles arrive in pulsed bunch trains. In the time between two bunch trains the acquisition of the calorimeter is switched off to save power and reduce the dissipated heat. } mode. The MIP tracks, characterized by their well-defined energy deposition throughout the calorimeter, are used for calibration and analysis of the timing performance (\cref{sec:time}). The shower data is used to evaluate the response to hadronically and electromagnetically interacting particles (\cref{sec:shape}) and to study the particle identification performance of the system (\cref{sec:pid}). By artificially superimposing two or more particle showers, the separation power, vital for applications in particle flow calorimetry, is studied (\cref{sec:pflow}). All analyses, except the timing performance, are presented using simulated data sets of the full prototype since they are still work in progress. \section{Particle Identification}\label{sec:pid} Efficient identification of particles is a key ingredient to successful particle flow algorithms to reduce the danger of confusion and double counting of particles. The left image in \cref{nhvsz} shows the distribution of the center of gravity along the beam direction versus the number of hits in the calorimeter for a mixture of particles.
As indicated in the picture, three major accumulations are visible, corresponding to three types of particles. \begin{figure} \centering \subfigure{\includegraphics[width=6cm]{nhitsvszcog.pdf}} \hspace{1.5 cm} \subfigure{\includegraphics[width=6cm]{AUC_vs_energy_3.pdf}} \caption{Left: Center of gravity of the deposited energy in beam direction versus the recorded number of hits per event. The three accumulations are caused by three particle types recorded in the beam test campaign. Right: Particle identification performance of the three classifiers (electron, hadron and muon) in terms of area under the ROC curve for simulated data. An excellent separation is achieved over the full energy range \cite{PrivCommBoch}.}\label{nhvsz} \end{figure} Since these accumulations overlap, additional observables have to be used to achieve sufficient particle identification, giving rise to a multivariate approach. A boosted decision tree (BDT) is implemented for every particle type and trained on labeled events obtained with a simulation of the full prototype. The chosen features contain information on the center of gravity, the longitudinal and transversal development of the particle showers, the longitudinal position of the shower start, the fraction of deposited energy contained in showers and tracks and the number of hits contained in showers and tracks. Each BDT is trained on a simulated sample containing only its respective particle type and tested on a simulated data set containing a mixture of all three particle types. The right image in \cref{nhvsz} summarizes the separation power of the classifiers over the simulated energy range, omitting the case of electron versus muon because the potential for confusion is larger in the shown cases.
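The separation power quoted in the right panel of \cref{nhvsz} is the area under the ROC curve, which can be computed directly from classifier scores via the Mann--Whitney rank statistic. The following self-contained Python sketch illustrates this; the scores below are made-up toy values, not taken from the analysis.

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) score pairs ranked in the
    correct order, counting ties as half a correct ordering."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# toy classifier scores: 3 of the 4 (pos, neg) pairs are ordered correctly
auc = roc_auc([0.9, 0.4], [0.1, 0.7])  # -> 0.75
```

An AUC of 1 thus corresponds to a perfectly ordering classifier, and 0.5 to random guessing.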
The measure of separation power is the area under the receiver operating characteristic (ROC)\footnote{A ROC curve describes the performance of a classification model at all classification thresholds by plotting the true positive rate against the false positive rate. The integral of this curve measures the identification power of the model, where 0 characterizes a 100\% wrong model and 1 characterizes a 100\% correct model.} curve showing excellent performance over the studied energy range. \section{Shower Shape Analysis}\label{sec:shape} The high spatial granularity and longitudinal segmentation of this prototype offer a unique possibility to study the shape of hadronic showers. By investigating the longitudinal and radial energy density, the shower can be separated into the electromagnetic core and the halo part. The longitudinal parameterization is based on the assumption that the evolution of the core part is governed by the radiation length $\mathrm{X_0}$ and the halo is governed by the nuclear interaction length $\mathrm{\lambda_l}$. It is composed of a sum of incomplete gamma functions and reads \begin{equation} \Delta E(Z)=E\cdot\left( \frac{f}{\Gamma(\alpha_s)}\cdot\left( \frac{Z[X_0]}{\beta_s}\right)^{\alpha_s -1} \cdot \frac{e^{-\frac{Z[X_0]}{\beta_s}}}{\beta_s} + \frac{1-f}{\Gamma(\alpha_l)} \left( \frac{Z[\lambda_l]}{\beta_l} \right)^{\alpha_l - 1} \cdot \frac{e^{-\frac{Z[\lambda_l]}{\beta_l}}}{\beta_l} \right) . \end{equation} Here $\mathrm{\alpha_s}$ and $\mathrm{\beta_s}$ are shape parameters of the core component, $\mathrm{\alpha_l}$ and $\mathrm{\beta_l}$ are the shape parameters of the halo component, $\mathrm{f}$ is the relative weight of core to halo component, $\mathrm{Z[\lambda_l]}$ is the depth in the calorimeter in terms of the nuclear interaction length $\mathrm{\lambda_l}$ and $\mathrm{Z[X_0]}$ is the depth in the calorimeter in terms of the radiation length $\mathrm{X_0}$.
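Each term of parameterization (1) is a Gamma density in its own depth variable, so the profile integrates to the total energy $E$, with $f$ and $1-f$ giving the core and halo energy fractions. The short Python check below verifies this normalization numerically for arbitrary, made-up shape parameters; it integrates each component in its own depth unit and ignores the conversion between $Z[X_0]$ and $Z[\lambda_l]$.

```python
import math

def gamma_component(z, alpha, beta):
    """One term of the longitudinal profile: a Gamma(alpha, beta)
    density in the dimensionless depth variable z."""
    return (z / beta) ** (alpha - 1.0) * math.exp(-z / beta) / (beta * math.gamma(alpha))

def integrated_energy(E, f, a_s, b_s, a_l, b_l, zmax=200.0, dz=0.01):
    """Riemann-sum integral of Delta E(Z) over depth; should recover ~E."""
    total = 0.0
    z = dz  # start just above 0 to stay clear of z**(alpha-1) at z=0
    while z < zmax:
        core = f * gamma_component(z, a_s, b_s)
        halo = (1.0 - f) * gamma_component(z, a_l, b_l)
        total += E * (core + halo) * dz
        z += dz
    return total

# illustrative shape parameters only, not the fitted values from the analysis
E_rec = integrated_energy(E=80.0, f=0.7, a_s=2.0, b_s=1.5, a_l=1.5, b_l=3.0)
```

Since both Gamma densities integrate to one, the recovered energy matches the input $E$ up to the discretization error.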
The radial profile is parameterized by an exponentially decreasing energy density in concentric rings of area $\mathrm{\Delta S = 2\pi r \Delta r}$ and reads \begin{equation} \frac{\Delta E}{\Delta S}\left(r\right) = \frac{E}{2\pi}\left( f\cdot \frac{e^{\frac{-r}{\beta_c}}}{\beta^2_c} + (1-f) \cdot\frac{e^{\frac{-r}{\beta_h}}}{\beta^2_h} \right), \end{equation} where $r$ is the radius from the shower center and $\mathrm{\beta_c}$ and $\mathrm{\beta_h}$ are the shape parameters of the core and halo part. The simulated shower shapes including the fitted parameterizations are shown in \cref{shapes}. \begin{figure} \centering \subfigure{\includegraphics[width=6.3cm]{lognshape.pdf}} \hspace{1.5 cm} \subfigure{\includegraphics[width=6.3cm]{transshape.pdf}} \caption{Left: Longitudinal shower development of simulated \SI{80}{\giga\electronvolt} pions with fitted parameterization (1). Right: Radial development of simulated \SI{80}{\giga\electronvolt} pions with fitted parameterization (2) \cite{PrivCommPinto}.}\label{shapes} \end{figure} From the relative weight $f$ of the core and halo components, the fraction of the total deposited energy contained in each part of the shower can be calculated. This study was also performed for the previous AHCAL physics prototype \cite{Eigen_2016} and is currently repeated for the large technological prototype to exploit its higher spatial granularity and the reduced noise. \section{Particle Flow}\label{sec:pflow} The main task of a hadronic calorimeter in the particle flow paradigm is the reconstruction of neutral hadrons within particle showers. In reality, charged hadrons will also reach the calorimeter, so a reconstruction algorithm capable of separating the showers and associating them to their respective mother particles has to be implemented. To reproduce and study this scenario with the AHCAL, two hadron events have to be overlaid.
Since neutral hadrons do not leave a primary track in the calorimeter before the shower starts to develop, this track has to be removed from the event to generate a fake neutral hadron. The overlaid events are presented to the full Pandora particle flow algorithm chain \cite{Thomson2009pandora} to identify the two showers. An example of the resulting events is shown in \cref{pflow} on the left. \begin{figure} \centering \subfigure{\includegraphics[width=6cm]{separation_good.pdf}} \hspace{1.5 cm} \subfigure{\includegraphics[width=6.3cm]{separation_neutexcess.pdf}} \caption{Left: Overlaid event containing a charged hadron (h$\mathrm{\pm}$) and an artificial neutral hadron (h0) generated by removing the primary track. Pandora correctly identified the neutral shower (cyan) and the charged one (magenta). Right: Partial misclassification of a single particle event. The algorithm produced an excess of neutral energy (cyan) although only one charged hadron entered the calorimeter \cite{PrivCommHeuchel}.}\label{pflow} \end{figure} The algorithm correctly separates the majority of the studied events, also under variations of the relative particle energies and proximity. The right image in \cref{pflow} shows a rare case of misclassification in which only a single particle was presented to Pandora, but the resulting event shows an excess of neutral energy. Events of this kind are currently under investigation. This study was previously performed using data from the AHCAL physics prototype \cite{Adloff:2011ha} and is currently repeated to exploit the higher spatial granularity and reduced noise of the large technological prototype. \section{Analysis of the Timing Performance for MIPs}\label{sec:time} In addition to the hit energy measurement, the AHCAL is also capable of performing single-channel hit time measurements, enabling the separation of particles on the basis of their arrival time at the calorimeter.
Furthermore, since the time of the interaction in a particle collider is well defined, this information can be used to reject out-of-time background, providing clean events for high precision physics. The design time resolution of the AHCAL is \SI{1}{\nano\second} or below for MIP tracks penetrating the calorimeter and causing one hit in every layer. The data for this analysis was recorded at the DESY test beam facility \cite{DIENER2019265} in 2019. The timing performance is investigated by calibrating the hit time against an external trigger time and obtaining the time difference of two subsequent channels. The resulting distribution is shown in \cref{time} on the left. In order to obtain the single-channel time resolution from this distribution, its width has to be divided by $\sqrt{2}$, resulting in a resolution of \SI{780}{\pico\second}. Since the time measurement in the AHCAL is affected by the involved electronics and calibration procedure, a dedicated beam test setup was designed to measure the intrinsic time resolution of the SiPM-on-Tile configuration as it is used in the prototype. It consists of four individual channels using the same scintillating tiles and SiPMs but with simplified electronics and a fast digitizer to record the full analog waveform at a sampling rate of \SI{2.5}{\giga\hertz}. The tiles are arranged in a beam telescope-like setup: the outer two tiles serve as coincidence triggers, while the inner tiles are used to perform the time measurement. This setup was tested at DESY in 2020. Similar to the analysis performed for the AHCAL, the hit times of the inner tiles are obtained relative to the coincidence trigger time and subtracted from each other. The resulting distribution is shown in \cref{time} on the right; the intrinsic time resolution of the SiPM-on-Tile setup was determined to be \SI{507}{\pico\second}.
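Assuming the intrinsic SiPM-on-Tile resolution and the front-end electronics contribution are independent and add in quadrature, the electronics part can be estimated from the two measured widths. A quick Python check of the arithmetic:

```python
import math

def quadrature_subtract(total_ps, intrinsic_ps):
    """Remaining contribution when two independent Gaussian terms
    add in quadrature: total^2 = intrinsic^2 + rest^2."""
    return math.sqrt(total_ps ** 2 - intrinsic_ps ** 2)

# 780 ps full-chain AHCAL channel resolution vs. 507 ps intrinsic resolution
electronics_ps = quadrature_subtract(780.0, 507.0)  # ~593 ps
```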
This value implies that the front-end electronics of the AHCAL contribute about $\sqrt{(\SI{780}{\pico\second})^2 - (\SI{507}{\pico\second})^2} \approx \SI{593}{\pico\second}$ to its single-channel time resolution\footnote{The measurements leading to these results have been performed at the Test Beam Facility at DESY Hamburg (Germany), a member of the Helmholtz Association (HGF).}. \begin{figure} \centering \subfigure{\includegraphics[width=6cm]{hittime_ILC.pdf}} \hspace{1.5 cm} \subfigure{\includegraphics[width=6.4cm]{hittime_STS.pdf}} \caption{Left: Distribution of the hit time difference in two subsequent channels of the AHCAL. Right: Distribution of the hit time difference obtained with the dedicated SiPM-on-Tile timing setup \cite{PrivCommLenz}.}\label{time} \end{figure} \section{Summary} This article summarizes the currently ongoing studies performed on the data taken with the large CALICE AHCAL technological prototype in 2018. It shows the excellent particle identification and separation power using multivariate classifiers and the Pandora particle flow algorithm. Studies investigating the calorimeter's response to the components of hadronic showers are ongoing and show promising results on simulated data. Dedicated data sets taken at DESY in 2019 and 2020 are used to investigate the timing performance of the prototype, showing that the design goal of \SI{1}{\nano\second} for minimum ionizing particles was reached. \section{Acknowledgment} I would like to thank the AHCAL group within the CALICE collaboration, in particular Vladimir Bocharnikov, Olin Lyod Pinto and Daniel Heuchel for contributing their research to this article. \printbibliography \end{document}
\section{Introduction} Neural machine translation (NMT) has revolutionised the field of MT by overcoming many of the weaknesses of the previous state-of-the-art phrase-based machine translation (PBSMT)~\citep{D16-1025,toral-sanchezcartagena:2017:EACLlong}. In only a few years since the first working models, this approach has led to a substantial improvement in translation quality, reported in terms of automatic metrics~\cite{bojar-EtAl:2016:WMT1,bojar-EtAl:2017:WMT1,sennrich-wmt16}. This has ignited higher levels of expectation, fuelled in part by hyperbolic claims from large MT developers. First, we saw in \citet{Google_NMT_16} that Google NMT was ``bridging the gap between human and machine translation [quality]''. This was amplified recently by the claim by \citet{achieving-human-parity-on-automatic-chinese-to-english-news-translation} that Microsoft had ``achieved human parity'' in terms of translation quality on news translation from Chinese to English, and more recently still by SDL who claimed to have ``cracked'' Russian-to-English NMT with ``near perfect'' translation quality.\footnote{https://www.sdl.com/about/news-media/press/2018/sdl-cracks-russian-to-english-neural-machine-translation.html} However, when human evaluation is used to compare NMT and SMT, the results do not always favour NMT \cite{castilho2017neural,Castil-MTSummit2017}. Accompanying the claims regarding the capability of the Microsoft Chinese-to-English NMT system, \citet{achieving-human-parity-on-automatic-chinese-to-english-news-translation} released their experimental data\footnote{http://aka.ms/Translator-HumanParityData} which permits replicability of their experiments. In this paper, we provide a detailed examination of Microsoft's claim to have reached {\it human parity} for the task of translating news from Chinese (ZH) to English (EN). They provide two definitions in this regard, namely: {\bf Definition 1}. 
{\em If a bilingual human judges the quality of a candidate translation produced by a human to be equivalent to one produced by a machine, then the machine has achieved human parity.} {\bf Definition 2}. {\em If there is no statistically significant difference between human quality scores for a test set of candidate translations from a machine translation system and the scores for the corresponding human translations, then the machine has achieved human parity.} The remainder of the paper is organised as follows. First, we identify and discuss three potential issues in Microsoft's human evaluation, concerning (i) the language in which the source text was originally written, (ii) the competence of the human evaluators with respect to translation, and (iii) the linguistic context available to these evaluators (Section \ref{s:issues}). We then conduct a new modified evaluation of their MT system on the same dataset taking these issues on board (Section \ref{s:evaluation}). In so doing, we reassess whether human parity has indeed been achieved following what we consider to be a fairer evaluation setting. We then take a closer look at the quality of Microsoft's dataset with the help of an English native speaker and a Chinese native speaker, and discover a number of problems in this regard (Section \ref{s:analyses}). Finally, we conclude the paper (Section~\ref{conc}) with a set of recommendations for future human evaluations, together with some remarks on the risks for the whole field of overhyping the capability of the systems we build. 
\section{Potential Issues}\label{s:issues} \subsection{Original Language of the Source Text} The test set used by~\newcite{achieving-human-parity-on-automatic-chinese-to-english-news-translation} (\texttt{newstest2017}) was the ZH reference from the news translation shared task at WMT 2017~\cite{bojar-EtAl:2017:WMT1},\footnote{{http://www.statmt.org/wmt17/translation-task.html}} which contains 2,001 sentence pairs, of which half were originally written in ZH and the remaining half were originally written in EN. Figure~\ref{fig:test set} represents the WMT test set and the respective translations. The organisers of WMT 2017 manually translated each of these two subsets (files A1 and B1 in Figure~\ref{fig:test set}) into the other language (B2 and A2, respectively) to produce the resulting parallel test set of 2,001 sentence pairs. Thus, \newcite{achieving-human-parity-on-automatic-chinese-to-english-news-translation} machine-translated 2,001 sentences from ZH into EN, but only half of them were originally written in ZH (file D1); the other half were originally written in EN, then they were translated by a human translator into ZH (as part of WMT's organisation), and this human translation was finally machine-translated by Microsoft into EN (file D2). Microsoft also human-translated the ZH reference file into EN to use as reference translations (file C - EN REF). Therefore, 50\% of their EN reference comprises EN translations direct from the original Chinese (file C1), while 50\% are EN translations from the human-translated file from EN into ZH (file C2), i.e. backtranslation of the original EN (A1). While their human evaluation is conducted on three different subsets (referred to as Subset-2, Subset-3, and Subset-4 in Tables 5d to 5f of their paper), since all three are randomly sampled from the whole test set, these subsets still contain around 50\% of sentences originally written in ZH and around 50\% originally written in EN. 
We hypothesize that the sentences originally written in EN are easier to translate than those originally written in ZH, due to the simplification principle of translationese, namely that translated sentences tend to be simpler than their original counterparts \citep{laviosa1998universals}. Two additional universal principles of translation, explicitation and normalisation, would also indicate that a ZH text originally written in EN would be easier to translate. Therefore, we explore whether the inclusion of source ZH sentences originally written in EN distorts the results, and unfairly favours MT. \begin{figure}[t] \includegraphics [width=.5\textwidth]{wmt.png} \caption{WMT test set and Microsoft Translation ZH-to-EN reference and MT output} \label{fig:test set} \end{figure} \subsection{Human Evaluators} The human evaluation described in~\newcite{achieving-human-parity-on-automatic-chinese-to-english-news-translation} was conducted by ``bilingual crowd workers''. While the authors implemented a set of quality controls to ``guarantee high quality results'', no further details are provided on the selection of evaluators and their linguistic expertise. In addition, no inter-annotator agreement (IAA) figures were provided. We acknowledge, however, that agreement cannot be measured using the conventional Kappa coefficient, since their human evaluation uses a continuous scale (range $[0-100]$). It has been argued that non-expert translators lack knowledge of translation and so might not notice subtle differences that make one translation better than another. This was observed in the human evaluation of the TraMOOC project\footnote{http://tramooc.eu/} in which authors compared the evaluation of MT output of professional translators against crowd workers \citep{TC39}. 
Results showed that for all language pairs (involving 11 languages), the crowd workers tended to be more accepting of the MT output, giving higher fluency and adequacy scores and performing very little post-editing. With that in mind, we attempt to replicate the results achieved in \citet{achieving-human-parity-on-automatic-chinese-to-english-news-translation} by redoing the manual evaluation with participants with different levels of translation proficiency, namely professional translators (henceforth referred to as experts) and bilingual speakers with no formal translation qualifications (henceforth referred to as non-experts). \subsection{Context} \newcite{achieving-human-parity-on-automatic-chinese-to-english-news-translation} evaluated the sentences in the test set in randomised order, meaning that sentences were evaluated in isolation. However, documents such as the news stories that make up the test set contain relations that go beyond the sentence level. To translate them correctly one needs to take this inter-sentential context into account~\cite{W12-2503,Wang:2017:EMNLP}. The MT system by~\newcite{achieving-human-parity-on-automatic-chinese-to-english-news-translation} translates sentences in isolation while humans naturally consider the wider context when conducting translation. Our hypothesis is that referential relations that go beyond the sentence level were ignored in the evaluation as its setup considered sentences in isolation (randomised). This probably resulted in the evaluation missing some errors by the MT system that might have been caused by its lack of inter-sentential contextual knowledge. In contrast, our revised human evaluation takes inter-sentential context into account. Sentences are not randomised but evaluated in the order they appear in the documents that make up the test set. 
In addition, when a sentence is evaluated, the evaluator can see both the previous and the next sentence, akin to how a professional translator works in practice. In the same spirit, concurrent work by \citet{laeubli2018parity} contrasts the evaluation of single sentences and entire documents in the dataset by~\citet{achieving-human-parity-on-automatic-chinese-to-english-news-translation}, and shows a stronger preference for human translation over MT when evaluating documents as compared to isolated sentences. \section{Evaluation}\label{s:evaluation} \subsection{Experimental Setup} We conduct a human evaluation in which evaluators are shown, at the same time, a source ZH sentence and three EN translations thereof: (i) the human translation produced by Microsoft (file C in Figure \ref{fig:test set}: henceforth referred to as HT), (ii) the output of Microsoft's MT system (file D: henceforth MS), and (iii) the output of a production system, Google Translate (henceforth GG).\footnote{We note that in the study by \citet{achieving-human-parity-on-automatic-chinese-to-english-news-translation}, 9 different translations were compared: 3 reference translations, and the output from six MT systems, 4 of which were Microsoft systems (including one online), plus Google Translate and the Sogou system \cite{sogou}, the best-performing system at WMT-2017. This, together with the fact that we use different methods, may affect the comparability of the results obtained to some degree.} We take these three translations from the data provided by~\citet{achieving-human-parity-on-automatic-chinese-to-english-news-translation}. Instead of giving evaluators randomly selected sentences, they see them in order. We randomised the documents in the test set (169) and prepared one evaluation task per document, for the first 49 documents (503 sentences). 
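The document-ordered presentation with neighbouring context can be sketched as follows. This is an illustrative Python sketch with synthetic data, not the actual Appraise task format; the function and field names are our own:

```python
# Sketch (not the actual Appraise configuration): build document-ordered
# evaluation items so that each source sentence is shown together with its
# neighbouring sentences and the three candidate translations (HT, MS, GG).
# All sentences below are synthetic placeholders.

def build_items(doc, translations):
    """doc: list of source sentences in document order.
    translations: dict mapping system name -> list of candidate sentences,
    aligned one-to-one with doc."""
    items = []
    for i, src in enumerate(doc):
        items.append({
            "prev": doc[i - 1] if i > 0 else None,            # preceding context
            "src": src,                                        # sentence under evaluation
            "next": doc[i + 1] if i < len(doc) - 1 else None,  # following context
            "candidates": {name: sents[i] for name, sents in translations.items()},
        })
    return items

doc = ["ZH sentence 1", "ZH sentence 2", "ZH sentence 3"]
translations = {
    "HT": ["ht 1", "ht 2", "ht 3"],
    "MS": ["ms 1", "ms 2", "ms 3"],
    "GG": ["gg 1", "gg 2", "gg 3"],
}
items = build_items(doc, translations)
print(items[1]["prev"], "|", items[1]["next"])  # neighbouring context is available
```

The first and last items carry `None` in place of the missing neighbour, mirroring the fact that document-initial and document-final sentences have only one side of context.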
Of these 49 documents, 41 were originally written in ZH (amounting to 299 sentences, with each document containing 7.3 sentences on average) and the remaining 8 were originally written in EN (204 sentences, average of 25.5 sentences per document). Evaluators were asked to annotate all the sentences of each document in one go, so that they can take inter-sentential context into account. \begin{figure*}[htbp] \includegraphics[width=\textwidth]{human_parity_appraise_snapshot_doc1_sent1.jpg} \caption{Snapshot from the human evaluation showing the first sentence from the first document, which contains 30 sentences.} \label{f:appraise_snapshot} \end{figure*} Rather than direct assessment (DA)~\cite{N15-1124}, as in~\citet{achieving-human-parity-on-automatic-chinese-to-english-news-translation}, we conduct a relative ranking evaluation. While DA has some advantages over ranking and has replaced the latter at the WMT shared task since 2017~\cite{bojar-EtAl:2017:WMT1}, ranking is more appropriate for our evaluation due to the fact that we evaluate sentences in consecutive order (rather than randomly). This can be accommodated in ranking as we can show all three translations for each source sentence together with the previous and next source sentences at the same time. In contrast, in DA only one translation is shown at a time, which is of course evaluated in isolation. An important advantage of DA is that the number of annotations required grows linearly with the number of translations to be evaluated (rather than quadratically, as with pairwise ranking); this is relevant for WMT's shared task as there may be many MT systems to be evaluated, but not for our research as we have only three translations (HT, MS and GG). In any case, both approaches have been found to lead to very similar outcomes as their results correlate very strongly ($R\geq 0.92$ in~\citet{bojar-EtAl:2016:WMT1}). 
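The annotation-cost contrast between DA and ranking can be made concrete: with $k$ candidate translations per source sentence, ranking induces $\binom{k}{2}$ pairwise comparisons, whereas DA needs only $k$ absolute judgments. A minimal sketch (the values $k=3$ and $k=9$ correspond to our setting and to the nine translations compared by Microsoft, respectively):

```python
# Annotation cost of relative ranking vs direct assessment (DA):
# ranking over k candidates yields C(k, 2) pairwise comparisons per source
# sentence, while DA needs only k absolute judgments.
from math import comb

pairs = {k: comb(k, 2) for k in (3, 6, 9)}
for k, p in pairs.items():
    print(f"k={k}: {k} DA judgments vs {p} pairwise comparisons")
```

For our three translations the two costs coincide (3 vs 3), which is why ranking remains affordable here, while at $k=9$ the gap is already 9 vs 36.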
Our human evaluation is performed with the Appraise tool~\citep{mtm12_appraise}.\footnote{https://github.com/cfedermann/Appraise} Figure~\ref{f:appraise_snapshot} shows a snapshot of the evaluation. Subsequently, we derive an overall score for each translation (HT, MS and GG) based on the rankings. To this end we use the TrueSkill method adapted to MT evaluation~\citep{sakaguchi-post-vandurme:2014:W14-33} following its usage at WMT15,\footnote{https://github.com/mjpost/wmt15} i.e. we run 1,000 iterations of the rankings recorded with Appraise followed by clustering (significance level $\alpha=0.05$). Five evaluators took part in our evaluation: two professional Chinese-to-English translators and three non-experts. Of the two professional translators, one is a native English speaker with a fluent level of Chinese, and the other is a Chinese native speaker with a fluent level of English. The three non-expert bilingual participants are Chinese native speakers with an advanced level of English. These bilingual participants are researchers in NLP, and so their profile is similar to some of the human evaluators of WMT, namely MT researchers.\footnote{It is an open question as to whether using bilingual NLP researchers may affect the results obtained. While we follow the practice of WMT here -- which differs from the approach taken by \citet{achieving-human-parity-on-automatic-chinese-to-english-news-translation}, who used bilingual crowd workers -- we intend in future work to investigate this further.} All evaluators completed all 49 documents, except the third non-expert, who completed the first 18. Similarly, all evaluators ranked all the sentences in the documents they evaluated, except the second professional translator, who skipped 3 sentences. In total we collected 6,675 pairwise judgements. 
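The idea of turning pairwise judgments into an overall per-system score can be illustrated with a much simpler Elo-style update; we stress that this is only an illustrative stand-in, not the TrueSkill adaptation of \citet{sakaguchi-post-vandurme:2014:W14-33} used in our evaluation, and the judgment counts below are synthetic:

```python
# Illustrative Elo-style stand-in for TrueSkill (NOT the actual method used):
# each pairwise judgment (winner, loser) nudges the winner's rating up and
# the loser's down, in proportion to how surprising the outcome was.
# The synthetic judgment counts merely mimic the ordering HT > MS > GG.
import random

def elo_update(ratings, winner, loser, k=4.0):
    # expected probability that `winner` beats `loser` under current ratings
    expected = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += k * (1.0 - expected)
    ratings[loser] -= k * (1.0 - expected)

random.seed(0)
ratings = {"HT": 0.0, "MS": 0.0, "GG": 0.0}
judgments = ([("HT", "MS")] * 60 + [("MS", "HT")] * 40 +
             [("HT", "GG")] * 90 + [("GG", "HT")] * 10 +
             [("MS", "GG")] * 85 + [("GG", "MS")] * 15)
random.shuffle(judgments)
for winner, loser in judgments:
    elo_update(ratings, winner, loser)

ranking = sorted(ratings, key=ratings.get, reverse=True)
print(ranking)  # with these synthetic counts: HT first, GG last
```

TrueSkill differs in that it additionally tracks a per-system uncertainty, which is what allows the clustering into significance groups reported in our tables.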
\subsection{Results} \subsubsection{Original Language} To find out whether the language in which the source sentence was originally written has any effect on the evaluation, we show the resulting Trueskill scores for each translation taking into account all the sentences in our test set versus considering the sentences in two groups according to the original language (ZH and EN). The results are shown in Table \ref{t:orig_lang}. \begin{table}[htbp] \begin{center} \begin{tabular}{|l|l|l|l|} \hline \bf Rank & \multicolumn{3}{c|}{\bf Original language}\\ \hline & \multicolumn{1}{c|}{\bf Both} & \multicolumn{1}{c|}{\bf ZH} & \multicolumn{1}{c|}{\bf EN}\\ & $n=6675$ & $n=3873$ & $n=2802$\\ \hline 1 & HT 1.587* & HT 1.939* & MS 1.059\\ 2 & MS 1.231* & MS 1.199* & HT 0.772*\\ 3 & GG -2.819 & GG -3.144 & GG -1.832\\ \hline \end{tabular} \end{center} \caption{\label{t:orig_lang} Ranks of the translations given the original language of the source side of the test set shown with their Trueskill score (the higher the better). An asterisk next to a translation indicates that this translation is significantly better than the one in the next rank. } \end{table} Regardless of the original language, GG is the lowest-ranked translation, thus providing an indication that the quality obtainable from the MS system is a notable improvement over state-of-the-art NMT systems used in production. We observe that HT significantly outperforms MS when the original language is ZH, but the difference between the two is not significant when the original language is EN. Hence, we confirm our hypothesis that the use of translationese as the source language distorts the results in favour of MS. Next, we check whether this effect of translationese is also present in the evaluation by~\citet{achieving-human-parity-on-automatic-chinese-to-english-news-translation}. To this end, we concatenate all their judgments and model them with mixed-effects beta regression. 
Our dependent variable is the score, scaled down from the original range $[0,100]$ to $[0,1]$, which we aim to predict with one continuous predictor -- sentence length -- and two factorial independent variables: translation (levels HT, MS and GG) and original language (levels ZH and EN). The identifiers of the sentence and the annotator are included as random effects. We plot the interaction between the translation and the original language of the resulting model (adjusted $R^2=0.32$) in Figure~\ref{f:systemid_origlang}. HT outperforms MS by around 0.05 absolute points ($p=0.06$) for sentences whose original language is ZH. However, this gap disappears for source sentences originally written in EN, where we see that the score for MS is actually slightly higher than that of HT, though the difference is not significant ($p=0.3$). We observe a clear effect of translationese (EN): compared to ZH, the scores of both MT systems increase substantially (GG over 10\% absolute and MS over 6\% absolute), while the HT score increases only very slightly. We then build the same regression model for the subset of judgments whose source text was originally written in ZH. The difference between HT and MS is significant ($p<0.05$) in favour of the former in the resulting model (adjusted $R^2=0.36$). \begin{figure}[htbp] \includegraphics[width=0.49\textwidth]{interaction_systemID_OrigLang.pdf} \caption{Interaction between the MT system (levels HT, MS and GG) and the original language of the source sentence (levels ZH and EN).} \label{f:systemid_origlang} \end{figure} Our hypothesis was theoretically supported by the simplification principle of translationese. Applied to the test data, this would mean that the portion originally written in ZH is more complex than the part originally written in EN. To check whether this is the case, we compare the two subsets of the test set using a measure of text complexity, type-token ratio (TTR). 
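The TTR comparison with bootstrap resampling can be sketched as follows. The corpora below are synthetic stand-ins (the real computation runs over equal-sized samples of the tokenised test-set halves); the sketch only illustrates that a richer vocabulary at the same token count yields a higher TTR:

```python
# Sketch: type-token ratio (TTR) with bootstrap resampling over two
# synthetic corpora. The "complex" corpus draws from a larger vocabulary
# than the "simplified" one, mimicking original text vs translationese.
import random
import statistics

def ttr(tokens):
    # types / tokens: higher values indicate a lexically richer text
    return len(set(tokens)) / len(tokens)

def bootstrap_ttr(tokens, n_resamples=100, seed=0):
    rng = random.Random(seed)
    vals = [ttr(rng.choices(tokens, k=len(tokens))) for _ in range(n_resamples)]
    return statistics.mean(vals), statistics.stdev(vals)

rng = random.Random(1)
complex_corpus = [f"w{rng.randrange(800)}" for _ in range(5000)]   # richer vocabulary
simple_corpus = [f"w{rng.randrange(400)}" for _ in range(5000)]    # simplified vocabulary

mean_c, sd_c = bootstrap_ttr(complex_corpus)
mean_s, sd_s = bootstrap_ttr(simple_corpus)
print(round(mean_c, 3), round(mean_s, 3))  # the complex corpus has the higher TTR
```

On the real data this procedure yields the means, standard deviations and confidence intervals reported below.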
While both subsets contain a similar number of sentences (1,001 and 1,000), the ZH subset contains more tokens (26,468) than its EN counterpart (22,279). We thus take a subset of the ZH (840 sentences) containing a similar number of words to the EN data (22,271 words). We then calculate the TTR for these two subsets using bootstrap resampling. The TTR for ZH ($M=0.1927$, $SD=0.0026$, 95\% confidence interval $[0.1925,0.1928]$) is 13\% higher than that for EN ($M=0.1710$, $SD=0.0025$, 95\% confidence interval $[0.1708,0.1711]$). Given the findings of this experiment, in the remainder of the paper we use only the subset of the test set whose original language is ZH. \subsubsection{Evaluators} To find out whether the translation expertise of the evaluator has any effect on the evaluation, we show the resulting Trueskill scores for each translation resulting from the evaluations by non-expert versus expert translators. The results are shown in Table \ref{t:evaluators}. The gap between HT and MS is considerably wider for experts (2.2 vs 1.2) than for non-experts (1.3 vs 0.9). We link this to our expectation, based on the previous finding by \citet{TC39}, that non-experts are more lenient regarding MT errors. In other words, non-experts disregard translation subtleties in their evaluation, which leads to the gap between different translations -- in this case HT and MS -- being smaller. In Section \ref{s:analyses} we explore this further by means of a qualitative analysis. 
\begin{table}[htbp] \begin{center} \begin{tabular}{|l|l|l|l|} \hline \bf Rank & \multicolumn{3}{c|}{\bf Translators}\\ \hline & \multicolumn{1}{c|}{\bf All} & \multicolumn{1}{c|}{\bf Experts} & \multicolumn{1}{c|}{\bf Non-experts}\\ & $n=3873$ & $n=1785$ & $n=2088$\\ \hline 1 & HT 1.939* & HT 2.247* & HT 1.324\\ 2 & MS 1.199* & MS 1.197* & MS 0.94*\\ 3 & GG -3.144 & GG -3.461 & GG -2.268\\ \hline \end{tabular} \end{center} \caption{\label{t:evaluators} Ranks and Trueskill scores (the higher the better) of the three translations for evaluations carried out by expert versus non-expert translators. An asterisk next to a translation indicates that this translation is significantly better than the one in the next rank. } \end{table} Trueskill provides not only an overall score for each translation but also its confidence interval. We expect these to be wider for the annotations by non-experts than those annotations given by experts, which would indicate that there is more uncertainty in the rankings by non-experts. Figure~\ref{f:cis} shows the scores for each translation by experts and non-experts, i.e. the same values that were shown in Table~\ref{t:evaluators}, now enriched with their 95\% confidence intervals. The sum of the widths of the confidence intervals for the three translations is just 0.33\% higher for non-experts (3.076) than for experts (3.066). However, it is worth mentioning that, compared to the width of the intervals for experts, those for non-experts are considerably wider for HT (16\% relative difference) while they are similar or smaller for MT (1\% and -11\% relative differences for GG and MS, respectively). \begin{figure}[htbp] \includegraphics[width=0.5\textwidth]{cis_plot.pdf} \caption{Trueskill scores of the three translations by experts and non-experts together with their confidence intervals.} \label{f:cis} \end{figure} We now look at inter-annotator agreement (IAA) between experts and non-experts. 
We compute the Kappa ($\kappa$) coefficient~\cite{cohen_doi:10.1177/001316446002000104}, as done at WMT 2016~\cite[Section~3.3]{bojar-EtAl:2016:WMT1}:\footnote{\url{https://github.com/cfedermann/wmt16/blob/master/scripts/compute_agreement_scores.py}} \[ \kappa = \frac{P(A) - P(E)}{1 - P(E)} \] \noindent where $P(A)$ represents the proportion of times that the annotators agree, and $P(E)$ the proportion of times that the annotators are expected to agree by chance. As expected, the IAA between professional translators ($\kappa=0.254$) is notably higher, 95\% relative, than that between non-experts ($\kappa=0.130$).\footnote{Due to the fact that one non-expert evaluated only 18 out of the 49 documents, the IAA calculations consider only the first 18 documents. If we consider all 49 documents, the trend remains the same; the IAA for the two experts is higher than that for the two non-experts who evaluated all the documents: 0.265 vs 0.196.} As we have three non-experts, we can calculate the IAA not only among the three of them but also between all three pairs of non-expert annotators; all of the resulting coefficients (0.057, 0.135 and 0.195) are lower than that between experts (0.254). To the best of our knowledge, this is the first time that IAA of professional translators and non-experts has been compared for the human evaluation of MT. In related work, \citet{Callison-Burch:2009:FCC:1699510.1699548} compared the agreement level of two types of non-expert translators: MT developers (referred to in that paper as `experts') and crowd workers. He showed that crowd workers can reach the agreement level of MT researchers using multiple workers and weighting their judgments. That said, both types of non-experts conducted human evaluations for WMT13~\cite{bojar-EtAl:2013:WMT} and the IAA rates of the crowd were well below those of the researchers. 
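A direct implementation of this formula for two annotators' categorical pairwise judgments ($<$, $=$, $>$) can be sketched as follows; the labels here are synthetic:

```python
# Cohen's kappa for two annotators over categorical pairwise judgments,
# following kappa = (P(A) - P(E)) / (1 - P(E)): observed agreement corrected
# for the agreement expected by chance from each annotator's label marginals.
from collections import Counter

def cohen_kappa(a, b):
    assert len(a) == len(b)
    n = len(a)
    p_agree = sum(x == y for x, y in zip(a, b)) / n       # P(A)
    ca, cb = Counter(a), Counter(b)
    labels = set(ca) | set(cb)
    p_chance = sum((ca[l] / n) * (cb[l] / n) for l in labels)  # P(E)
    return (p_agree - p_chance) / (1 - p_chance)

# synthetic relative-ranking labels for one pair of systems
ann1 = ["<", "<", "=", ">", "<", "=", ">", "<"]
ann2 = ["<", "=", "=", ">", "<", "<", ">", "<"]
print(round(cohen_kappa(ann1, ann2), 3))  # -> 0.6
```

Here $P(A)=0.75$ and $P(E)=0.375$, giving $\kappa=0.6$; identical annotations yield $\kappa=1$.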
\section{Analyses}\label{s:analyses} As mentioned previously, we have examined the quality of the test sets, both originally written in ZH and originally written in EN, and their respective translations. An English native speaker analysed both the original EN version from the WMT set (file A1 in Figure \ref{fig:test set}) and the human translation of the set originally written in ZH performed by Microsoft (file C2). A Chinese native speaker, who is fluent in English and has experience with translation from EN into ZH, analysed the original ZH versions (file B1) as well as the human translation of the set originally written in EN performed by the WMT organisers (file B2). \subsection{Original English} Regarding the English original (file A1 in Figure \ref{fig:test set}), the analysis showed that apart from a few grammar errors, the test set appeared to be fluent and grammatical. Examples of grammatical errors in the original EN files are: \\ \noindent \textbf{i)} ``The idiot didn't realize they were still on the air'' \noindent \textbf{ii)} ``Soon after, Scott Russel who was hosting CBC's broadcast apologized on-air for MacDonald's comment, saying: `We apologize the comment on a swim performance made it to air.' '' \\ In example i) ``on air'' should be used instead of ``on the air'', while in example ii) a ``that'' is missing after ``apologize''. Nonetheless, these errors did not affect the ZH translation (file B2) or the following backtranslation (C2) into EN. Our hypothesis is that because the test set is news content, it also contains tweets (such as example i)) and quotes from speech interviews (such as example ii)), which are more likely to contain grammatical errors. \subsection{Chinese Translation} Regarding the human translation into ZH performed by WMT (file B2 in Figure \ref{fig:test set}), most of the sentences contained grammatical errors and/or mistranslations of proper nouns. 
Furthermore, although some translations were grammatically correct and accurate, they were not fluent. When the ZH-translated sentences were compared against the source (A1), the translations were mostly accurate. However, when analyzed on their own without the source, they sound disfluent: \\ \noindent \textbf{iii)} \noindent EN original (A1): A front-row seat to the stunning architecture of the Los Angeles Central Library \noindent ZH (B2):\begin{CJK}{UTF8}{gbsn}洛杉矶中央图书馆的惊艳结构先睹为快\end{CJK} \\ \noindent \textbf{iv)} \noindent EN original (A1): An open, industrial loft in DTLA gets a cozy makeover \noindent ZH (B2): \begin{CJK}{UTF8}{gbsn}DTLA的开放式工厂阁楼进行了一次舒适的改造。\end{CJK} \\ In example iii), although the ZH translation has fully transferred the meaning of the source text, it contains word-order errors which make the translation disfluent, since the verb phrase \begin{CJK}{UTF8}{gbsn}``先睹为快''\end{CJK} (take a look) is placed after the object (library). One possible translation for that is \begin{CJK}{UTF8}{gbsn}``抢先目睹洛杉矶中央图书馆的惊艳结构''\end{CJK} because the ZH language syntax requires the verb to be placed before the object. In example iv), the ZH translation contains a grammatical error in the word \begin{CJK}{UTF8}{gbsn}``进行''\end{CJK}, which would imply that the loft is carrying out a makeover. In addition, the adjective \begin{CJK}{UTF8}{gbsn}``舒适的''\end{CJK} (cosy) cannot be used to describe \begin{CJK}{UTF8}{gbsn}``改造''\end{CJK} (makeover). One possible translation for the English sentence is \begin{CJK}{UTF8}{gbsn} ``DTLA的开放式工业阁楼被改造的很舒适''. \end{CJK} Given this analysis, we speculate that the translation of the EN original files into ZH might not have been performed by an experienced translator, but rather exemplify either human translation performed by an inexperienced translator, or poorly post-edited MT. 
\subsection{English Translation} Regarding the EN reference files translated by Microsoft (file C2 in Figure \ref{fig:test set}), many of the sentences contained grammatical errors (such as word order, verb tense and missing prepositions) as well as mistranslations. \\ \noindent \textbf{v)} \noindent EN original (A1): A front-row seat to the stunning architecture of the Los Angeles Central Library \noindent ZH (B2):\begin{CJK}{UTF8}{gbsn}洛杉矶中央图书馆的惊艳结构先睹为快\end{CJK} \noindent EN (C2): Take a look of the astounding architecture of the Los Angeles Central Library.\\ \noindent GG: The stunning structure of the Los Angeles Central Library \noindent MS: A sneak peek at the stunning architecture of the Los Angeles Central Library\\ \noindent \textbf{vi)} \noindent EN original (A1): An open, industrial loft in DTLA gets a cozy makeover \noindent ZH (B2):\begin{CJK}{UTF8}{gbsn}DTLA的开放式工厂阁楼进行了一次舒适的改造。\end{CJK} \noindent EN (C2): A comfortable makeover was provided to the open factory building design of DTLA. \\ \noindent GG: DTLA's Open factory loft has a comfortable makeover. \noindent MS: DTLA's open-plan factory loft has undergone a comfortable makeover.\\ In example v), the EN translation of the ZH source\footnote{It is important to note that the translators did not have access to the original EN (A1) and so the ZH file (B2) was used as the source.} analyzed previously is translated with the wrong preposition, i.e. `look of' instead of `look at'. None of the professional translators considered the reference worse than the MS output; while one translator and one non-expert considered it `as good' as the MS output, the other considered it better than MS but worse than GG. 
Regarding the non-expert assessment, two of them considered the HT to be as good as MS and better than GG, and one considered the HT to be worse than MS but better than GG.\\ In example vi), the EN translation (C2) of the ZH source (B2) does not have all the information expressed in ZH as the word `loft' \begin{CJK}{UTF8}{gbsn}(阁楼)\end{CJK} is not translated. Moreover, the EN translation refers to an architectural design makeover of the building rather than an interior makeover of an attic. Both professional translators considered the EN reference to be worse than the MS output. As far as the non-experts are concerned, two of them considered the HT to be worse than MS, while one considered it to be `as good'. This provides qualitative evidence that non-experts may be more tolerant of translation errors than professional translators. Another example of such behaviour is the following:\\ \noindent \textbf{vii)} \noindent EN original (A1): Learn more about the history of downtown's Central Library as the Society of Architectural Historians/Southern California Chapter hosts a salon with Arnold Schwartzman and Stephen Gee, authors of the new book "Los Angeles Central Library: A History of its Art and Architecture \noindent ZH (B2): \begin{CJK}{UTF8}{gbsn}美国建筑史学家学会南加利福尼亚洲分会与新书《洛杉矶中央博物馆:其艺术与建筑历史》的作者阿诺·斯瓦茨曼和史蒂芬·吉举办了一场沙龙。观众们可通过此次活动进一步了解市中心中央图书馆的历史 \end{CJK} \noindent EN (C2):A salon will be hosted by Southern California Branch of Society of Architectural Historians and the co-authors of Los Angles Central Museum: Art and Architectural History, Arnold Schwarzman and Stephen Gee. It will deliver more knowledge of the Central Library to the participants\\ \noindent GG: The Southern California branch of the American Institute of Architectural Historians has held a salon with \begin{CJK}{UTF8}{gbsn}阿诺·斯瓦茨曼\end{CJK} and \begin{CJK}{UTF8}{gbsn}史蒂芬·吉\end{CJK}, author of the Los Angeles Central Museum: its art and architectural history. 
Through this event, viewers can learn more about the history of Central Library in the city centre\\ \noindent MS: The Southern California chapter of the American Society of Architectural Historians and the authors of a new book, "Los Angeles Central Museum: Its Art and Architectural History," Arnold Schwartzman and Steven Gee, hosted a salon at which viewers learned more about the history of the Central Library in the city center\\ In example vii), regarding the ZH source (B2), in addition to having the first sentence translated into past tense -- whereas the EN original (A1) shows the salon event is happening in the near future -- it also contains a typo \begin{CJK}{UTF8}{gbsn}`洲'\end{CJK} which means `continent' instead of `state' \begin{CJK}{UTF8}{gbsn}`州'\end{CJK}. Even though the typo does not affect the EN translation (C2), it shows that the quality of the ZH translation is not as high as would be expected of professional human translators. Regarding the EN translation (C2), while the first sentence is mostly fluent -- although it contains a typo in `Angles' (Angeles) and lacks the article `the' before the proper noun -- the second sentence lacks fluency and contains errors of omissions and mistranslations. For example, the words ``downtown'' and ``history'' \begin{CJK}{UTF8}{gbsn}(市中心\end{CJK} and \begin{CJK}{UTF8}{gbsn}历史,\end{CJK} respectively) were not transferred over to the EN reference (C2). Furthermore, the word `viewers' in the ZH translation \begin{CJK}{UTF8}{gbsn}(观众们)\end{CJK} was mistranslated as `participants'. Nonetheless, the EN translation (C2) is able to capture the correct tense of the sentence since the second sentence in the ZH translation (B2) is ambiguous regarding verbal tense. The MS translation does a better job in keeping the fluency throughout the sentence even though it mistranslates the tense of the source, rendering it in the past tense. 
Both professional translators assessed the HT as worse than MS, whereas two of the non-experts considered it to be as good as MS and better than GG. The third non-expert considered the HT to be worse than both MT systems. This example shows that the level of expertise of the evaluators may have an effect on the evaluation, given that non-experts appear to be unduly tolerant of MT errors. As with the ZH translation (B2) of the English original, we speculate that the EN translation (C2) of the ZH files is more likely a human translation performed by an inexperienced translator, or even a poorly post-edited machine translation; even if the translation was performed by an experienced translator, and the ZH source (B2) already contained errors or disfluencies, a professional translator would surely have been meticulous enough to fix such errors rather than rubber-stamp the translations. \section{Conclusions and Future Work} \label{conc} This paper has reassessed a recent study that claimed that MT has reached human parity for the translation of news from Chinese into English, considering three variables that were not taken into account in that previous study: (i) the language in which the source side of the test set was originally written, (ii) the translation proficiency of the evaluators, and (iii) the provision of inter-sentential context. The main findings of this paper are the following: \begin{itemize} \item If we consider the subset of the test set whose source side was originally written in ZH, there is evidence that human parity has not been achieved, i.e., the difference between the human and the machine translations is significant. This is the case both in our human evaluation and in Microsoft's. \item Having translationese (ZH translated from EN in our study) as input, compared to having original text, results in higher scores for MT systems in Microsoft's human evaluation.
\item Compared to judgments by non-experts, those by professional translators have a higher IAA and a wider gap between human and machine translations. \item We have identified issues in the human translations by both WMT and Microsoft. These indicate that those translations were produced by non-experts and were possibly post-edited MT output. \end{itemize} There is little doubt that human evaluation has played a very important role in MT research and development to date. As MT systems improve -- as exemplified by the progress made by \citet{achieving-human-parity-on-automatic-chinese-to-english-news-translation} over state-of-the-art production systems -- and thus the gap between them and human translators narrows, we believe that human evaluation, in order to remain useful, needs to be more discriminative. We suggest that a set of principles should be adhered to, partly based on our findings, which we outline here as recommendations: \begin{itemize} \item The original language in which the source side of the test sets is written should be the same as their source language. This will avoid spurious effects caused by having translationese as MT input. \item Human evaluations should be conducted by professional translators. This allows fine-grained nuances of translations to be taken into account in the evaluation and should result in higher inter-annotator agreement. \item Human evaluations should proceed taking the whole document into account rather than evaluating sentences in isolation. This allows inter-sentential phenomena to be considered as part of the evaluation. \item Test sets should be translated by experienced professional translators from scratch. \end{itemize} We are confident that adhering to these principles is sensible under any translation conditions.
Of course, if the test set is faulty, then a claim that one's MT system outperforms its competitors carries the risk of demonstrating the contrary: if automatic evaluation metrics yield a higher score, that may simply denote that one's output is closer to the faulty test set, rather than better in terms of improved translation quality {\em per se}. \textcolor{red}{Of course, this has consequences not just for the study in this paper, but for all shared tasks: past, present, and future.}\footnote{\textcolor{red}{Ideally, it would be great if multiple references were also available, but the point remains that if these are poor quality human translations, then this is likely to skew results still further.}} Should material be made available by Google, SDL or any other MT developers who claim `human parity' or the like, we would be very happy to apply these principles in subsequent rigorous evaluations of actual demonstrable improvements in translation quality. One thing is certain: as \citet{Way19} observes, ``those of us who have seen many paradigms come and go know that overgilding the lily does none of us any good, especially those of us who have been trying to build bridges between MT developers and the translation community for many years." We trust that our findings in this paper demonstrate that while MT quality does seem to be improving quite dramatically, human translators will continue to find gainful employment for many years to come, despite somewhat grandiose claims to the contrary. On a final note, we acknowledge that our conclusions and recommendations are somewhat limited in that they are derived from experiments on just one language direction and five evaluators. We therefore plan, as future work, to conduct similar experiments on additional language pairs with a higher number of evaluators.
In the spirit of \citet{achieving-human-parity-on-automatic-chinese-to-english-news-translation}, without which this paper would not have been possible, we too make publicly available our evaluation materials, the anonymised human judgments and the statistical analyses thereof.\footnote{\url{https://github.com/antot/human_parity_mt}} \section*{Acknowledgments} We would like to thank the five expert and non-expert translators who took part in this study. \textcolor{red}{We are also grateful for valuable comments on this paper from Hany Hassan, the lead author on the Microsoft paper.} This research was partially supported by the iADAATPA project funded by the CEF research action (2016-EU-IA-0132) under grant agreement No. 1331703. The ADAPT Centre for Digital Content Technology at Dublin City University is funded under the Science Foundation Ireland Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.
\section{Introduction} \label{sec:introduction} \noindent The literature on Bayesian variable selection in linear regression is vast and rich. Many priors and methods have been proposed. \citet{George:McCulloch:1993} propose the stochastic search variable selection which uses the Gaussian distribution with a zero mean and a small but fixed variance as the spike prior, and another Gaussian distribution with a large variance as the slab prior. \citet*{ishwaran2005} also use Gaussian spike and slab priors, but with continuous bimodal priors for the variance of the regression coefficient to alleviate the difficulty of choosing specific prior parameters. \citet*{Narisetty:He:2014} introduce shrinking and diffusing priors as spike and slab priors, and establish model selection consistency of the approach in a high-dimensional setting. The $g$-prior was introduced in \citep{Zeller:g}, and \citet{Liang:mixture:2008} further propose a variable selection method based on mixtures of $g$-priors and establish selection consistency. In recent years, the use of non-local priors in this context has generated a lot of interest. Non-local priors were first introduced, in the context of hypothesis testing, by \citet{John:Rossell:non-localtesting:2010} as densities that are identically zero whenever a model parameter is equal to its null value. Compared to local priors, which remain positive at null parameter values, non-local prior distributions have appealing properties for Bayesian model selection. In particular, non-local priors discard spurious covariates faster as the sample size $n$ grows, while preserving exponential learning rates to detect non-zero coefficients, as indicated in \citep{John:Rossell:non-localtesting:2010}. These priors were further extended to Bayesian model selection problems in \citep{Johnson:Rossell:2012} by imposing non-local prior densities on a vector of regression coefficients.
Posterior distributions on the model space based on non-local priors were found to be more tightly concentrated around the maximum a posteriori (MAP) model than posteriors based on, for example, $g$-priors, which tend to be more dispersed, implying that non-local priors yield a faster rate of posterior concentration, as indicated in \citep{Shin.M:2015}. In particular, let $\bm y_n$ denote a random vector of responses, $X_n$ an $n \times p$ design matrix of covariates, and $\bm \beta = (\beta_1, \beta_2, \ldots, \beta_p)$ a $p \times 1$ vector of regression coefficients. Under the linear regression model, $$ \bm y_n \sim N\left(X_n\bm \beta, \sigma^2I_n\right). $$ \noindent In \citep{Johnson:Rossell:2012}, the authors introduce the product moment (pMOM) non-local prior with density \begin{equation} \label{pmomdensity_introduction} d_p(2\pi)^{-\frac p 2}(\tau \sigma^2)^{-rp - \frac p 2} |A_p| ^{\frac 1 2} \exp \left\{- \frac{\bm \beta^\prime A_p \bm \beta}{2 \tau \sigma ^2}\right\}\prod_{i =1}^{p} \beta_{i}^{2r}. \end{equation} \noindent Here $A_p$ is a $p \times p$ nonsingular matrix, $r$ is a positive integer referred to as the order of the density, and $d_p$ is the normalizing constant, which does not depend on $\tau$ and $\sigma^2$. Variations of the density in (\ref{pmomdensity_introduction}), called the piMOM and peMOM densities, have also been developed in \citep{Johnson:Rossell:2012, RTJ:2013}. Clearly, the density in (\ref{pmomdensity_introduction}) is zero when any component of ${\bm \beta}$ is zero. Under appropriate regularity conditions, the authors in \citep{Johnson:Rossell:2012, Shin.M:2015} demonstrate that in high-dimensional settings, model selection procedures based on the pMOM and piMOM non-local prior densities can achieve strong model selection consistency, i.e., the posterior probability of the true model converges to $1$ as the sample size $n$ increases.
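To make the shape of the pMOM density concrete, the following sketch (our own numerical illustration, not part of the paper) evaluates the density for the simplest case $p = 1$, $r = 1$, $A_p = 1$, where the normalizing constant reduces to $d_1 = 1$; a trapezoidal integration confirms that the density integrates to one and that it vanishes at the null value $\beta = 0$, the defining non-local property.

```python
import numpy as np

def pmom_density_1d(beta, tau=1.0, sigma2=1.0):
    # pMOM density for p = 1, r = 1, A_p = 1.  Here d_1 = 1, since the
    # second moment of N(0, tau * sigma^2) equals tau * sigma^2.
    v = tau * sigma2
    return (2 * np.pi) ** -0.5 * v ** -1.5 * beta ** 2 * np.exp(-beta ** 2 / (2 * v))

grid = np.linspace(-30.0, 30.0, 200001)
vals = pmom_density_1d(grid)
# trapezoidal rule: total probability mass should be (numerically) one
mass = float(np.sum((vals[1:] + vals[:-1]) * np.diff(grid)) / 2)
```

The bimodal shape around zero (visible if `vals` is plotted against `grid`) is exactly what Figures 1 and 2 of the paper display for the marginal hyper-pMOM density.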
As noted in \citep{Johnson:Rossell:2012}, the scale parameter $\tau$ is of particular importance, as it reflects the dispersion of the non-local prior density around zero, and implicitly determines the size of the regression coefficients that will be shrunk to zero. \citet{John:Rossell:non-localtesting:2010, Johnson:Rossell:2012} treat $\tau$ as given and suggest a choice of $\tau$ which leads to a high prior probability for significant values of the regression coefficients. \citet{Shin.M:2015} again treat $\tau$ as given, and consider a setting where $p$ and $\tau$ vary with the sample size $n$. They show that high-dimensional model selection consistency is achieved under the peMOM prior (another variation of the priors above, introduced in \citep{RTJ:2013}), as long as $\tau$ is of a larger order than $\log p$ and smaller order than $n$. In the context of generalized linear models, similar to the development from the $g$-prior in \citep{Zeller:g} to the mixture of $g$-priors in \citep{Liang:mixture:2008}, \citet{PhDthesis:Wu} further extends the work in \citep{Johnson:Rossell:2012,Shin.M:2015} by proposing a fully Bayesian approach with the pMOM non-local prior and an appropriate Inverse-Gamma prior on the parameter $\tau$, referred to as the hyper-pMOM prior, following the nomenclature in \citep{PhDthesis:Wu}. In particular, \citet{PhDthesis:Wu} discusses the potential advantages of using hyper-pMOM priors and establishes Bayes factor rates. The primary goal and innovation of this paper is to investigate the underlying model selection consistency for the hyper-pMOM priors in the linear regression setting. The extra layer of prior, however, creates technical challenges for a high-dimensional theoretical consistency analysis.
Under standard regularity assumptions, which include restricting the prior over all models to those with model size less than an appropriate function of the sample size $n$, we establish {\it posterior ratio consistency} (Theorem \ref{thm1}), i.e., the ratio of the maximum marginal posterior probability assigned to a ``non-true'' model to the posterior probability assigned to the ``true'' model converges to zero in probability. In particular, this implies that the true model will be the mode of the posterior distribution with probability tending to $1$ as $n \rightarrow \infty$. Next, under the additional assumption that $p$ increases at a polynomial rate with $n$, we show {\it strong model selection consistency} (Theorem \ref{thm2}). Strong model selection consistency implies that the posterior probability of the true model converges in probability to $1$ as $n \rightarrow \infty$. The assumption of restricting the prior to models with appropriately bounded size (i.e., putting zero prior mass on unrealistically large models) has been used in both \citep{Narisetty:He:2014} and \citep{Shin.M:2015} for regression models. Based on reviewers' comments, we relax the polynomial rate restriction on $p$ to a sub-exponential rate by replacing the uniform type prior with a complexity prior on the model space to penalize larger models, and establish model selection consistency under the complexity prior in Theorem \ref{thm4}. For the hyper-piMOM priors, \citet*{BW:2017} establish model selection consistency in the framework of generalized linear models. While there are some connections between our model and the one in \citep{BW:2017}, there are fundamental differences between the two models and the corresponding analyses. A detailed explanation of this is provided in Remark \ref{BW_comparison}. The rest of the paper is structured as follows. In Section \ref{sec:model specification} we provide our hierarchical fully Bayesian model.
Model selection consistency results are stated in Section \ref{sec:model selection consistency}, and the proofs are provided in Section \ref{sec:modelselectionproofs}. Section \ref{sec:complexity} establishes the model selection consistency under the complexity prior. Details about how to approximate the posterior density for model selection are provided in Section \ref{sec:computation}. In Section \ref{sec:experiments} and Section \ref{sec:real}, via simulation studies and real data analysis, we illustrate the model selection consistency result, and demonstrate the benefits of model selection using the fully Bayesian approach as compared to approaches which treat $\tau$ as given, and to existing penalized likelihood approaches. We end our paper with a discussion in Section \ref{sec:discussion}. \section{Model specification} \label{sec:model specification} \noindent We start by considering the standard Gaussian linear regression model with $p$ coefficients and by introducing some required notation. Let $\bm y_n$ denote a random vector of responses, $X_n$ an $n \times p$ design matrix of covariates, and $\bm \beta$ a $p \times 1$ vector of regression coefficients. Our goal is variable selection, i.e., to correctly identify all the non-zero regression coefficients. In light of that, we denote a model by $\bm k = \left\{k_1, k_2, \ldots, k_m\right\}$ if and only if all the non-zero elements of $\bm \beta$ are $\beta_{k_1}, \beta_{k_2}, \ldots, \beta_{k_m}$, and denote $\bm \beta_k = \left(\beta_{k_1}, \beta_{k_2}, \ldots, \beta_{k_m}\right)^T.$ For any $p \times p$ matrix $A$, let $A_k$ represent the submatrix formed from the columns of $A$ corresponding to model $\bm k$. In particular, let $X_k$ denote the design matrix formed from the columns of $X_n$ corresponding to model $\bm k$. For the rest of the paper, we simply let $k = |\bm k|$ represent the cardinality of model ${\bm k}$ for notational convenience.
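As a small illustration of this index-set notation (ours, not the authors'), the snippet below recovers the model $\bm k$ from a coefficient vector as the set of indices of its non-zero entries, and extracts the corresponding design submatrix $X_k$:

```python
import numpy as np

def model_of(beta):
    # k = {k_1, ..., k_m}: 1-based indices of the non-zero coefficients,
    # matching the paper's convention
    return tuple(i + 1 for i, b in enumerate(beta) if b != 0.0)

def design_submatrix(X, k):
    # X_k: the columns of X corresponding to model k
    return X[:, [i - 1 for i in k]]

beta = np.array([1.5, 0.0, -0.7, 0.0])
k = model_of(beta)                      # the model (1, 3), with |k| = 2
X = np.arange(12.0).reshape(3, 4)
X_k = design_submatrix(X, k)            # a 3 x 2 submatrix of X
```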
The class of pMOM densities (\ref{pmomdensity_introduction}) can be used for model selection through the following hierarchical model. \begin{align} \label{modelspecification} &\bm{Y}_n \mid \bm \beta_k, \sigma^2, \bm k \sim N(X_k \bm \beta_k, \sigma^2 I_n),\\ &\pi\left(\bm \beta_k \mid \tau, \sigma^2, \bm k\right) = d_k(2\pi)^{-\frac k 2}(\tau \sigma^2)^{-rk - \frac k 2} |A_k| ^{\frac 1 2} \exp \left\{- \frac{\bm \beta_k ^ \prime A_k \bm \beta_k}{2 \tau \sigma ^2}\right\}\prod_{i =1}^{k} \beta_{k_i}^{2r}, \label{model:pmom}\\ &\pi(\tau) = \frac{\left(\frac n 2\right)^{\frac 1 2}}{\Gamma(\frac 1 2)} \tau^{- \frac 3 2} e^{-\frac{n}{2\tau}}, \label{model:tau}\\ &\pi\left(\sigma^2\right) = \frac{\left( \alpha_2 \right)^{\alpha_1}}{\Gamma(\alpha_1)} \left(\sigma^2\right)^{-(\alpha_1 + 1)} e^{-\frac{\alpha_2}{\sigma^2}}. \label{model:5} \end{align} Note that in the currently presented hierarchical model, no specific form/condition has yet been assigned to the prior over the space of models. Some standard regularity assumptions for this prior will be provided later in Section \ref{sec:model selection consistency}. \begin{figure}[h] \centering \includegraphics[width=95mm,height=50 mm]{hyper_pmom} \caption{Comparison: Hyper-pMOM and pMOM when $p = 1$.} \label{fig:one_dimension} \end{figure} \begin{figure} [h] \centering \begin{subfigure}[pMOM] {\includegraphics[width=45mm]{pmom_3D.pdf}} \end{subfigure} \begin{subfigure}[Hyper-pMOM] {\includegraphics[width=45mm]{hyper_pmom_3D.pdf}} \end{subfigure} \caption{Comparison: Hyper-pMOM and pMOM when $p = 2$.} \label{fig:two_dimension} \end{figure} Following the nomenclature in \citep{PhDthesis:Wu}, we refer to the mixture of priors in (\ref{model:pmom}) and (\ref{model:tau}) as the hyper-pMOM prior. 
In particular, one can show that the implied marginal density of $\bm \beta_k$ after integrating out $\tau$ has the following expression: \begin{equation} \label{marginal_beta} \pi\left(\bm \beta_k \mid \sigma^2, \bm k\right) = \frac{\left(\frac n 2\right)^{\frac 1 2}}{\Gamma(\frac 1 2)} \frac{\Gamma(rk+\frac k 2 +\frac 1 2)}{(\frac{n}{2} + \frac{\bm \beta_k ^ \prime A_k \bm \beta_k}{2\sigma^2})^{rk + \frac k 2 + \frac 1 2}}d_k(2\pi)^{-\frac k 2}\sigma^{-2rk - k} |A_k| ^{\frac 1 2} \prod_{i =1}^{k} \beta_{k_i}^{2r}. \end{equation} Note that compared to the pMOM density in (\ref{pmomdensity_introduction}) with given $\tau$, $\pi\left(\bm \beta_k \mid \sigma^2, \bm k\right)$ now possesses thicker tails and induces prior dependence among the components of $\bm \beta_k$. See Figure \ref{fig:one_dimension} and Figure \ref{fig:two_dimension}, where we plot the marginal density $\pi\left(\bm \beta_k \mid \sigma^2, \bm k\right)$ when $A_p = 1$, $\sigma^2 = 1$ and $n = 1$ for the univariate and bivariate case, respectively. In addition, the hyper-pMOM prior could achieve better model selection performance, especially for small samples. See, for example, \citep{Liang:mixture:2008}, which investigates the finite-sample performance of hyper-$g$ priors.
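As a sanity check on the closed-form marginal (our own numerical verification, not part of the paper), one can integrate $\pi(\bm \beta_k \mid \tau, \sigma^2, \bm k)\,\pi(\tau)$ over $\tau$ numerically for the special case $k = 1$, $r = 1$, $A_k = 1$, $\sigma^2 = 1$, $d_k = 1$ and compare against the displayed expression; the substitution $u = 1/\tau$ turns the $\tau$-integral into a well-behaved integral of $u\,e^{-(\beta^2+n)u/2}$ on $(0,\infty)$.

```python
import numpy as np
from math import gamma, pi, sqrt

N = 1.0  # the constant n appearing in the Inverse-Gamma prior pi(tau)

def marginal_closed_form(beta):
    # the marginal density specialised to k = 1, r = 1, A_k = 1, sigma^2 = 1
    s = 1 + 0.5 + 0.5  # exponent rk + k/2 + 1/2
    return (sqrt(N / 2) / gamma(0.5) * gamma(s) / (N / 2 + beta ** 2 / 2) ** s
            * (2 * pi) ** -0.5 * beta ** 2)

def marginal_numeric(beta):
    # integrate pi(beta | tau) pi(tau) d tau via u = 1/tau: the integrand
    # c * tau^{-3} exp(-(beta^2 + N)/(2 tau)) becomes c * u * exp(-(beta^2 + N) u / 2)
    u = np.linspace(0.0, 400.0, 400001)
    c = (2 * pi) ** -0.5 * beta ** 2 * sqrt(N / 2) / gamma(0.5)
    integrand = c * u * np.exp(-(beta ** 2 + N) * u / 2)
    return float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(u)) / 2)

betas = (0.5, 1.0, 2.0)
numeric = [marginal_numeric(b) for b in betas]
closed = [marginal_closed_form(b) for b in betas]
```

The two computations agree to high precision, which is consistent with the Gamma-integral identity $\int_0^\infty \tau^{-s-1} e^{-b/\tau}\, d\tau = \Gamma(s)/b^s$ used to derive the closed form.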
By (\ref{modelspecification}) and Bayes' rule, the resulting posterior probability for model $\bm k$ is given by \begin{align} \label{model posterior} \pi(\bm k | \bm y_n) = \frac{\pi(\bm k)}{\pi(\bm y_n)}m_{\bm k}(\bm y_n), \end{align} where $\pi(\bm y_n)$ is the marginal density of $\bm y_n$, and $m_{\bm k}(\bm y_n)$ is the marginal density of $\bm y_n$ under model $\bm k$ given by \begin{align} \label{marginal density} &m_{\bm k}(\bm y_n) \nonumber\\ =&\int_{0}^{\infty}\int_{0}^{\infty}\int_{\mathbb{R}^k}\pi\left(\bm{y}_n \mid \bm \beta_k, \sigma^2, \bm k\right)\pi\left(\bm \beta_k \mid \tau, \sigma^2, \bm k\right)\pi(\tau)\pi\left(\sigma^2\right)d\bm \beta_k d\sigma^2 d\tau \nonumber\\ =& \frac{\left(\frac n 2\right)^{\frac 1 2}}{\Gamma(\frac 1 2)} \frac{\left( \alpha_2 \right)^{\alpha_1}}{\Gamma(\alpha_1)}\int_{0}^{\infty} \int_{0}^{\infty} \int_{\mathbb{R}^k} d_k(2\pi)^{-\frac k 2}(\tau \sigma^2)^{-rk - \frac k 2} |A_k| ^{\frac 1 2} \exp \left[- \frac{\bm \beta_k ^ \prime A_k \bm \beta_k}{2 \tau \sigma ^2}\right]\prod_{i =1}^{k} \beta_{k_i}^{2r} \nonumber\\ &\times \frac{1}{\left(2\pi\sigma^2\right)^{\frac n 2}}\exp\left\{-\frac{(\bm y_n - X_k\bm \beta_k)^T(\bm y_n - X_k\bm \beta_k)} {2\sigma^2}\right\} \tau^{- \frac 3 2} e^{-\frac{n}{2\tau}} \left(\sigma^2\right)^{-(\alpha_1 + 1)} e^{-\frac{\alpha_2}{\sigma^2}} d\bm \beta_k d\sigma^2 d\tau \nonumber\\ =& d_k\frac{\frac{\left(\frac n 2\right)^{\frac 1 2}}{\Gamma(\frac 1 2)}}{(\sqrt{2\pi})^n}\frac{\left( \alpha_2 \right)^{\alpha_1}} {\Gamma(\alpha_1)} |A_k|^{\frac 12} \nonumber\\ &\times \int_{0}^{\infty}\int_{0}^{\infty} (\sigma^2)^{-\left(\frac n 2 + rk + \alpha_1 + 1\right)} \exp\left\{-\frac{R_k + 2\alpha_2} {2\sigma^2}\right\} \tau^{-rk - \frac k 2 - \frac 3 2}e^{-\frac{n}{2\tau}}\frac{E_k(\prod_{i=1}^{k}\beta_{k_i}^{2r})}{|C_k|^{\frac 1 2}} d\sigma^2 d\tau, \end{align} \noindent where $C_k = X_k^TX_k + \frac {A_k}{\tau}$, $R_k =\bm y_n^T(I_n - X_k C_k^{-1}X_k^T)\bm y_n$, and $E_k(\cdot)$ denotes the expectation with respect to a
multivariate normal distribution with mean $\tilde{\bm \beta_k} = C_k^{-1}X_k^T\bm y_n$, and covariance matrix $V = \sigma^2C_k^{-1}$. In particular, these posterior probabilities can be used to select a model by computing the posterior mode defined by \begin{equation} \label{a4} \hat{\bm k} = \argmax_{\bm k} \pi({\bm k}|\bm y_n). \end{equation} \section{Model selection consistency: main results} \label{sec:model selection consistency} \noindent In this section we will explore the high-dimensional asymptotic properties of the Bayesian model selection approach specified in Section \ref{sec:model specification}. In particular, we consider a setting where the number of regression coefficients $p = p_n$ increases with the sample size $n$. The true data generating mechanism is given by $\bm y_n = X_n \bm \beta_0 + \boldsymbol{\epsilon}_n$. Here $\bm \beta_0$ is the true $p_n$-dimensional vector of regression coefficients, whose dependence on $n$ is suppressed for notational convenience, and the entries of $\boldsymbol{\epsilon}_n$ are i.i.d.\ Gaussian with mean zero and variance $\sigma_0^2$. As in \citep{Johnson:Rossell:2012}, we assume that the true vector of regression coefficients is sparse, i.e., all the entries of $\bm \beta_0$ are zero except those corresponding to a subset ${\bm t} \subseteq \{1,2, \ldots, p_n\}$, and ${\bm t}, \bm \beta_{0,t}, \sigma_0^2$ do not vary with $n$. Our results can be easily extended to the case where $|{\bm t}|$ and the entries of $\bm \beta_{0,t}$ and $\sigma_0^2$ vary with $n$ but stay bounded. However, we assume these quantities stay fixed for ease of exposition. For any $p \times p$ symmetric matrix $A$, let $eig_1(A) \le eig_2(A) \le \ldots \le eig_p(A)$ be the ordered eigenvalues of $A$ and denote the $j$-th largest nonzero eigenvalue as $\nu_j(A)$. Let $\lambda_k^m = \min_{1 \le j \le \min(n,k)} \nu_j\left(\frac{X_k^TX_k}{n}\right)$ and $\lambda_k^M = \max_{1 \le j \le \min(n,k)} \nu_j\left(\frac{X_k^TX_k}{n}\right)$, respectively.
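The quantities $C_k$ and $R_k$ entering the marginal density are straightforward to compute. The toy sketch below (our own, using arbitrary simulated data with $A_k = I$ and $\tau = 1$, and only the residual term rather than the full marginal) evaluates $R_k$ for every candidate model when $p = 4$, illustrating that models containing the true support yield much smaller residual terms.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, p, tau = 50, 4, 1.0
X = rng.standard_normal((n, p))
beta0 = np.array([2.0, 0.0, -1.5, 0.0])     # true model t = (1, 3)
y = X @ beta0 + rng.standard_normal(n)

def residual_term(k):
    # R_k = y'(I - X_k C_k^{-1} X_k') y  with  C_k = X_k' X_k + A_k / tau,  A_k = I
    Xk = X[:, [i - 1 for i in k]]
    Ck = Xk.T @ Xk + np.eye(len(k)) / tau
    return float(y @ y - y @ (Xk @ np.linalg.solve(Ck, Xk.T @ y)))

models = [k for s in range(1, p + 1) for k in combinations(range(1, p + 1), s)]
R = {k: residual_term(k) for k in models}
# R_k equals the minimum of ||y - X_k b||^2 + b' A_k b / tau over b, so it is
# non-negative and non-increasing as the model grows.
```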
In order to establish our asymptotic results, we need the following mild regularity assumptions. \begin{assumption}\label{assumptiontruemodel} There exists $\epsilon_n < 1$, such that $0< \epsilon_n \le \lambda_k^m \le \lambda_k^M \le \epsilon_n^{-1}$, where $\epsilon_n^{-1} = O\left((\log n)^{\frac 1 8}\right)$. \end{assumption} \begin{assumption}\label{assumptionpvalue} $p = O\left(n^{\gamma}\right),$ where $\gamma < r.$ \end{assumption} \begin{assumption}\label{assumptionmodelsize} $\pi(\bm k) = 0$ for all $|\bm k| > q_n,$ where $q_n = O\left(n^\xi\right)$ and $\xi < 1$. \end{assumption} \begin{assumption}\label{assumptionprior} There exists a constant $\omega > 0$, such that $\frac{\pi(\bm t)}{\pi(\bm k)} > \omega$ for every ${\bm k}$ with $\pi({\bm k}) > 0$. \end{assumption} \begin{assumption}\label{assumptionhyper} For every $n \ge 1$, the hyper-parameter $A_p$ for the non-local pMOM prior in (\ref{model:pmom}) satisfies $0 < a_1 < eig_1(A_p) \le eig_2(A_p) \le \ldots \le eig_p(A_p) < a_2 < \infty$. Here $a_1, a_2$ are constants not depending on $n$. \end{assumption} \noindent \citet{Johnson:Rossell:2012} assume that all the eigenvalues of $\frac{X^TX}{n}$ are bounded by a constant, which is unrealistic in the high-dimensional setting. In our work, Assumption \ref{assumptiontruemodel} instead requires the non-zero eigenvalues of the sub-matrices of the design matrix to be uniformly bounded by a function of $n$, rather than by a constant. Assumption \ref{assumptionhyper} is standard, and requires the eigenvalues of the prior scale matrix $A_p$ to be uniformly bounded in $n$. Note that for the default value of $A_p = I_p$, Assumption \ref{assumptionhyper} is immediately satisfied. Assumption \ref{assumptionmodelsize} states that the prior on the space of the $2^{p_n}$ possible models places zero mass on unrealistically large models (identical to an assumption in \citep{Shin.M:2015}).
Assumption \ref{assumptionprior} states that the ratio of the prior probabilities assigned to the true model and any non-true model stays bounded below in $n$ (identical to an assumption in \citep{Johnson:Rossell:2012}). Priors of this type have also been considered in \citep{Liang:2015} and \citep{Shin.M:2015}. Assumption \ref{assumptionpvalue} states that $p$ can grow at an appropriate polynomial rate with $n$. In Section \ref{sec:complexity}, we also give the consistency results under the complexity priors on the model space, which penalize larger models, and consequently relax the assumption on the rate at which $p$ can grow. We now state and prove the main model selection consistency results. Our first result establishes what we refer to as posterior ratio consistency. This notion of consistency implies that the true model will be the mode of the posterior distribution among all the models with probability tending to $1$ as $n \rightarrow \infty.$ \begin{theorem}[Posterior ratio consistency for hyper-pMOM priors] \label{thm1} Under Assumptions \ref{assumptiontruemodel}, \ref{assumptionmodelsize}, \ref{assumptionprior} and \ref{assumptionhyper}, for the hierarchical model in (\ref{modelspecification}) to (\ref{model:5}) with hyper-pMOM priors, the following holds: $$ \max_{\bm k \ne {\bm t}} \frac{\pi(\bm k|\bm y_n)}{\pi(\bm t|\bm y_n)} \rightarrow 0, \quad \mbox{as } n \rightarrow \infty. $$ \noindent In particular, this implies that the probability that the posterior mode $\hat{\bm k}$ defined in (\ref{a4}) is equal to the true model $\bm t$ will converge to $1$, i.e., $$ P(\bm t = \argmax_{\bm k} \pi(\bm k|\bm y_n)) \rightarrow 1, \quad \mbox{as } n \rightarrow \infty. $$ \end{theorem} \noindent We would like to point out that posterior ratio consistency (Theorem \ref{thm1}) does not require any restriction on the number of predictors. This requirement is only needed for strong selection consistency (Theorem \ref{thm2}).
Next, we establish a stronger result which implies that the posterior mass assigned to the true model $\bm t$ converges to $1$ in probability. We refer to this notion of consistency as strong selection consistency. \begin{theorem}[Strong selection consistency for hyper-pMOM priors] \label{thm2} Under Assumptions 1-5, with $\xi < 1 - \frac {4\gamma} {3r}$ in Assumption \ref{assumptionmodelsize}, for the hierarchical model in (\ref{modelspecification}) to (\ref{model:5}) with hyper-pMOM priors, the following holds: $$ \pi(\bm t | \bm y_n) \rightarrow 1, \quad \mbox{as } n \rightarrow \infty. $$ \end{theorem} \noindent The above results have been established under the pMOM priors. Another class of non-local priors introduced in \citep{Johnson:Rossell:2012} is the class of piMOM priors on the regression coefficients, for which the density of the regression coefficients under the model ${\bm k} = \{k_1, k_2, \ldots, k_m\}$ is given by \begin{equation} \label{pimomdensity_introduction} \frac{(\tau\sigma^2)^{\frac{r|{\bm k}|}{2}}}{\Gamma(\frac r 2)^{|{\bm k}|}} \prod_{i=1}^{|{\bm k}|} |\beta_{k_i}|^{-(r+1)} \exp\left(-\frac{\tau\sigma^2}{\beta_{k_i}^2}\right), \end{equation} \noindent where $r$ is a positive integer and is referred to as the order of the density. The corollary below establishes strong model selection consistency under the hyper-piMOM priors, with piMOM priors on each linear regression coefficient (conditional on the hyper-parameter $\tau$) and an Inverse-Gamma prior on $\tau$. The consistency can be obtained immediately by combining Theorem \ref{thm2} with Eqs. (59) and (60) in the supplementary material for \citep{Johnson:Rossell:2012}. \begin{corollary}[Strong selection consistency for hyper-piMOM priors] \label{corollary1} Under the same conditions as in Theorem \ref{thm2}, when piMOM priors are imposed on $\bm \beta_k$ in model (\ref{model:pmom}), the following holds: $$ \pi(\bm t | \bm y_n) \rightarrow 1, \mbox{ as } n \rightarrow \infty.
$$ \end{corollary} \begin{remark} \label{BW_comparison} In the context of generalized linear regression, \citet{BW:2017} consider the hierarchical Bayesian model with the following hyper-piMOM priors on regression coefficients. \begin{eqnarray*} & & \bm \beta_k \mid \tau_1, \ldots, \tau_{|{\bm k}|} \sim \prod_{i=1}^{|{\bm k}|} \frac{(\tau_i\sigma^2)^{\frac r 2}}{\Gamma(\frac r 2)} |\beta_{k_i}|^{-(r+1)} \exp\left(-\frac{\tau_i\sigma^2}{\beta_{k_i}^2}\right),\\ & & \tau_i \sim \mbox{Inverse-Gamma}\left(\frac{r + 1}2 , \lambda\right). \end{eqnarray*} \noindent In particular, the authors put an independent piMOM prior on each linear regression coefficient (conditional on the hyper-parameter $\tau_i$), and an Inverse-Gamma prior on each $\tau_i$. In this setting, \citet{BW:2017} establish strong selection consistency for the regression coefficients (assuming the prior is constrained to leave out unrealistically large models). There are similarities between the models and the consistency analysis in \citep{BW:2017} and our work, such as the usage of non-local priors and the Inverse-Gamma distribution. However, despite these similarities, there are some fundamental differences in the two models and the corresponding analyses. Firstly, unlike the piMOM prior, the pMOM prior in our model does not in general correspond to assigning an independent prior to each entry of $\bm \beta_k$. In particular, pMOM distributions introduce correlations among the entries in $\bm \beta_k$ and create more theoretical challenges. Because of the correlation introduced, some properties, such as the ability to detect small coefficients, are not apparent in our case. Also, the pMOM prior imposes exact sparsity in $\bm \beta_k$, which is not the case in \citep{BW:2017}. Hence it is structurally different from the prior in \citep{BW:2017}. Secondly, in order to prove consistency results, \citet{BW:2017} assume that the product of the response variables and the entries of the design matrix are bounded by a constant.
The former assumption is rarely seen in the literature, and the latter assumption can be problematic in practice. See Assumption C1 in \citep{BW:2017}. In addition, Assumption C2 in \citep{BW:2017} imposes a lower bound on the true regression coefficients, which is not required in our analysis. Thirdly, in terms of proving posterior consistency, we bound the ratio of posterior probabilities for a non-true model and the true model by a `prior term' which results from the Inverse-Gamma prior on $\tau$, and a `data term'. The consistency proof is then a careful exercise in balancing these two terms against each other on a case-by-case basis, while \citet{BW:2017} directly follow the proof in \citep{Shin.M:2015} and require additional assumptions on the Hessian matrix. \end{remark} \section{Proof of Theorems \ref{thm1} and \ref{thm2}} \label{sec:modelselectionproofs} \noindent The proof of Theorems \ref{thm1} and \ref{thm2} will be broken up into several steps. First, for any model $\bm k$, denote $R_k^* = \bm y_n^T\left(I - X_k(X_k^TX_k)^{-1}X_k^T\right)\bm y_n$ and $P_k = X_k(X_k^TX_k)^{-1}X_k^T$. Our method of proving consistency involves approximating $R_t$ and $R_k$ (in (\ref{marginal density})) with $R_t^*$ and $R_k^*$, respectively. Fix a model ${\bm k} \neq {\bm t}$ arbitrarily, and let $\bm u = \bm k \cup \bm t$ and $u = |\bm u|$ be the cardinality of $\bm u$. Note that $\frac{R_t^*}{\sigma_0^2} \sim \chi^2_{n-t}$, $\frac{R_u^*}{\sigma_0^2} \sim \chi^2_{n-u}$, $\frac{R_{u \cap t^c}^*}{\sigma_0^2} \sim \chi_{u-t}^2$, and $\frac{\bm y_n^TP_u\bm y_n}{\sigma_0^2} \sim \chi^2_u\left(\frac{\bm \beta_0^TX_t^TX_t\bm \beta_0}{\sigma_0^2}\right)$. Next, we establish two tail probability bounds, for the $\chi^2$ distribution and the non-central $\chi^2$ distribution respectively, which will be useful in our analysis.
\begin{lemma} \label{chisquaredtail} For any $a > 0$, we have the following two inequalities, \begin{align} P\left(\lvert\chi_p^2 - p\rvert > a\right) &\le 2\exp\left(-\frac{a^2}{4p}\right),\\ P\left(\chi_p^2(\lambda) - (p + \lambda) > a\right) &\le \exp\left(-\frac p 2\left\{\frac a {p+\lambda} - \log\left(1+\frac a {p+\lambda}\right)\right\}\right). \end{align} \end{lemma} \noindent The proof for Lemma \ref{chisquaredtail} is provided in the supplemental document. The following result is immediate from Lemma \ref{chisquaredtail}. \begin{align} \label{chisquaredRt} \begin{split} P\left[\left\lvert\frac{R_t^*}{\sigma_0^2} - (n-t)\right\rvert > \sqrt{n-t}\log n \right] &\le P\left[\left\lvert\frac{R_t^*}{\sigma_0^2} - (n-t)\right\rvert > 4\sqrt{(n-t)\log n} \right] \\ &\le 2n^{-1} \rightarrow 0, \end{split} \end{align} as $n \rightarrow \infty.$ Similarly, we have \begin{align} \label{chisquaredRk} \begin{split} P\left[\left\lvert\frac{R_u^*}{\sigma_0^2} - (n-u)\right\rvert > \sqrt{n-u}\log n \right] \le 2n^{-1} \rightarrow 0, \end{split} \end{align} and \begin{align} \label{chisquaredRk-t} \begin{split} P\left[\left\lvert\frac{R_{u \cap t^c}^*}{\sigma_0^2} - (u-t)\right\rvert > \sqrt{u-t}\log n \right] \le 2n^{-1} \rightarrow 0, \end{split} \end{align} as $n \rightarrow \infty$. 
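The first inequality in Lemma \ref{chisquaredtail} is easy to sanity-check numerically. The following Python snippet (purely illustrative and not part of the proof; the values of $p$ and $a$ are arbitrary choices of ours) estimates the left-hand side by Monte Carlo, representing a $\chi^2_p$ variable as a sum of $p$ squared standard normals, and compares the estimate against the bound $2\exp(-a^2/(4p))$.

```python
import math
import random

def chi2_tail_bound(p, a):
    """Right-hand side 2*exp(-a^2/(4p)) of the lemma's first inequality."""
    return 2.0 * math.exp(-a * a / (4.0 * p))

def mc_tail_prob(p, a, n_draws=50000, seed=0):
    """Monte Carlo estimate of P(|chi2_p - p| > a), sampling a chi-squared
    variable with p degrees of freedom as a sum of p squared standard normals."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_draws):
        x = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(p))
        if abs(x - p) > a:
            hits += 1
    return hits / n_draws

p, a = 10, 10.0
est = mc_tail_prob(p, a)       # true tail probability is about 0.029
bound = chi2_tail_bound(p, a)  # about 0.164
```

As expected, the Monte Carlo estimate sits well below the exponential bound, which is loose but of the right qualitative shape.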
Next, by Lemma \ref{chisquaredtail}, since $\bm u \supset \bm t$, we have \begin{align} \label{chisquaredPk} \begin{split} &P\left[\frac{\bm y_n^TP_u\bm y_n}{\sigma_0^2} - \left(u + \frac1{\sigma_0^2}\bm \beta_0^TX_t^TX_t\bm \beta_0\right) > n\log n - u - \frac1{\sigma_0^2}\bm \beta_0^TX_t^TX_t\bm \beta_0 \right]\\ \le& \exp\left\{-\frac u 2 \left\{\frac{n\log n}{u + \frac1{\sigma_0^2}\bm \beta_0^TX_t^TX_t\bm \beta_0} - \log\left(1 + \frac{n\log n}{u + \frac1{\sigma_0^2}\bm \beta_0^TX_t^TX_t\bm \beta_0}\right)\right\}\right\} \\ \le& \exp\left\{-\frac u 4 \left\{\frac{\log n}{1 + \frac 1{\sigma_0^2\epsilon_n}\bm \beta_0^T\bm \beta_0} \right\}\right\}\\ \preceq & n^{-c^\prime u} \rightarrow 0, \end{split} \end{align} \noindent as $n \rightarrow \infty$, for some constant $c^\prime > 0$. Define the event $C_n$ as \begin{align} \label{largeprobset} \begin{split} C_n =& \left\{\left\lvert\frac{R_t^*}{\sigma_0^2} - (n-t)\right\rvert > \sqrt{n-t}\log n \right\} \cup \left\{\left\lvert\frac{R_u^*}{\sigma_0^2} - (n-u)\right\rvert > \sqrt{n-u}\log n\right\} \\ &\cup \left\{\left\lvert\frac{R_{u \cap t^c}^*}{\sigma_0^2} - (u-t)\right\rvert > \sqrt{u-t}\log n\right\} \cup \left\{\frac{\bm y_n^TP_u\bm y_n}{\sigma_0^2} > n\log n\right\}. \end{split} \end{align} \noindent It follows from (\ref{chisquaredRt}), (\ref{chisquaredRk}), (\ref{chisquaredRk-t}), and (\ref{chisquaredPk}) that $P(C_n) \rightarrow 0$ as $n \rightarrow \infty$. We now analyze the behavior of the posterior ratio under different scenarios in a sequence of lemmas. Recall that our goal is to find an upper bound for the posterior ratio, such that the upper bound converges to $0$ as $n \rightarrow \infty$. {\bf For the following lemmas, we will restrict ourselves to the event $C_n^c$}. The following lemma establishes the upper bound of the marginal posterior ratio for any ``non-true" model compared to the true model. 
\begin{lemma} \label{lm1} Under Assumption \ref{assumptiontruemodel} and Assumption \ref{assumptionhyper}, for all $\bm k \neq \bm t,$ there exists $N$ (not depending on $\bm k$), such that when $n > N$, \begin{align} \begin{split} \frac{m_{\bm k}(\bm y_n)}{m_{\bm t}(\bm y_n)} <& BA^k\left(\frac V{\epsilon_n^2}\right)^{rk}k^{k}n^{-(k-t)}\frac{\{R_t^* + 2\alpha_2\}^{\frac n 2 + rt + \alpha_1}}{\{R_k^* + 2\alpha_2\}^{\frac n 2 + rk + \alpha_1}}\\ &+ BA^kk^{(r+1)k}n^{-(r+1)(k-t)-\frac 3 4 rk -rt}\frac{\{R_t^* + 2\alpha_2\}^{\frac n 2 + rt + \alpha_1}}{\{R_k^* + 2\alpha_2\}^{\frac n 2 + \alpha_1}}, \end{split} \end{align} where $A, B$ are constants and $V = \epsilon_n^{-4}\hat{\bm \beta}_{u}^T \hat{\bm \beta}_{u}$, in which $\hat{\bm \beta}_{u} = (X_u^TX_u)^{-1}X_u^T\bm y_n$ with $\bm u = \bm k \cup \bm t$. \end{lemma} \noindent The proof for Lemma \ref{lm1} is provided in the supplemental document. The next two lemmas provide the upper bound of the marginal posterior ratio for $\bm y_n$ under different cases of the ``non-true" model $\bm k$, with proofs provided in the supplemental document. \begin{lemma} \label{lm3} Under Assumptions \ref{assumptiontruemodel}, \ref{assumptionmodelsize} and \ref{assumptionhyper}, for all $\bm k \nsupseteq \bm t$, there exists $N^\prime$ (not depending on $\bm k$), such that when $n > N^\prime$, \begin{align} \frac{m_{\bm k}(\bm y_n)}{m_{\bm t}(\bm y_n)} < K^{\prime}(L^{\prime})^kn^{-\frac 3 4 rk}, \end{align} where $K^{\prime}$ and $L^{\prime}$ are constants. \end{lemma} \begin{lemma} \label{lm5} Under Assumptions \ref{assumptiontruemodel}, \ref{assumptionmodelsize} and \ref{assumptionhyper}, for all $\bm k \supset \bm t$, there exists $N^{\prime\prime}$ (not depending on $\bm k$), such that when $n > N^{\prime\prime}$, \begin{align} \frac{m_{\bm k}(\bm y_n)}{m_{\bm t}(\bm y_n)} < S^{\prime}(T^{\prime})^{k-t}n^{-\min\left\{\frac 3 4, 1-\xi \right\} r(k-t)}, \end{align} where $S^{\prime}$ and $T^{\prime}$ are constants. 
\end{lemma} \begin{proof}[Proof of Theorems \ref{thm1} and \ref{thm2}] The proof of Theorem \ref{thm1} follows immediately from these two lemmas. By Lemma \ref{lm3}, if we restrict to $C_n^c$, then for any $\bm k \nsupseteq \bm t$, \begin{align*} \frac{m_{\bm k}(\bm y_n)}{m_{\bm t}(\bm y_n)} < K^{\prime}(L^{\prime})^kn^{-\frac 3 4 rk} \rightarrow 0, \mbox{ as } n \rightarrow \infty. \end{align*} Otherwise, when $\bm k \supset \bm t$, by Lemma \ref{lm5}, \begin{align*} \frac{m_{\bm k}(\bm y_n)}{m_{\bm t}(\bm y_n)} < S^{\prime}(T^{\prime})^{k-t}n^{-\min\left\{\frac 3 4, 1-\xi\right\} r(k-t)} \rightarrow 0, \mbox{ as } n \rightarrow \infty. \end{align*} Note that $P\left(C_n^c\right) \rightarrow 1$ as $n \rightarrow \infty$. It follows from (\ref{model posterior}) and Assumption \ref{assumptionprior} that when $\bm k \neq \bm t$, we have \begin{align} \label{thm1proof} \frac{\pi(\bm k|\bm y_n)}{\pi(\bm t|\bm y_n)} \le \frac 1 \omega \frac{m_{\bm k}(\bm y_n)}{m_{\bm t}(\bm y_n)} \rightarrow 0, \mbox{ as } n \rightarrow \infty. \end{align} We now move on to the proof of Theorem \ref{thm2}. First note that when $\xi < 1 - \frac {4\gamma} {3r},$ we have \begin{equation} \label{thm3proof} \min\left\{\frac 3 4,1-\xi\right\}r > \frac{3}{4}(1-\xi)r >\gamma. \end{equation} It follows from (\ref{thm1proof}) and Assumption \ref{assumptionpvalue} that if we restrict to $C_n^c$, then \begin{align*} \frac{1-\pi(\bm t|\bm y_n)}{\pi(\bm t|\bm y_n)} =& \sum_{\bm k \neq \bm t}\frac{\pi(\bm k)m_{\bm k}(\bm y_n)}{\pi(\bm t)m_{\bm t}(\bm y_n)}\\ \le & \frac 1 \omega\sum_{\bm k \nsupseteq \bm t}\frac{m_{\bm k}(\bm y_n)}{m_{\bm t}(\bm y_n)} + \frac 1 \omega\sum_{\bm k \supset \bm t}\frac{m_{\bm k}(\bm y_n)}{m_{\bm t}(\bm y_n)}\\ \le & \frac 1 \omega\sum_{k = 1}^{q_n} \binom pk K^\prime(L^\prime)^kp^{-\frac{ \frac 3 4 r}{\gamma} k} \\ &+ \frac 1 \omega\sum_{k-t = 1}^{q_n-t} \binom {p-t}{k-t} S^{\prime}(T^{\prime})^{k-t} p^{-\frac{\min\left\{\frac 3 4, 1-\xi \right\}r}{\gamma}(k-t)}. 
\end{align*} Using $\binom {p} {k} \le p^{k}$ and (\ref{thm3proof}), we get \begin{align*} \frac{1-\pi(\bm t|\bm y_n)}{\pi(\bm t|\bm y_n)} \rightarrow 0, \mbox{ i.e. } \pi(\bm t|\bm y_n) \rightarrow 1, \end{align*} as $n \rightarrow \infty.$ \end{proof} \section{Results for Complexity Priors} \label{sec:complexity} \noindent Note that under the model prior specified in Assumption \ref{assumptionprior}, to achieve strong selection consistency, we restrict $p$ to grow at a polynomial rate in $n$ (see Assumption \ref{assumptionpvalue}). To address this limitation, based on reviewers' comments, we investigate the theoretical properties under the complexity priors introduced in \citep{Castillo:2015}. The hierarchical model with complexity priors placed on the model space, adapted to our notation and framework, can be described as follows: \begin{eqnarray} \label{complexity} \begin{split} & \bm{Y}_n \mid \bm \beta_k, \sigma^2, \bm k \sim N(X_k \bm \beta_k, \sigma^2 I_n)\\ & \pi\left(\bm \beta_k \mid \tau, \sigma^2, \bm k\right) = d_k(2\pi)^{-\frac k 2}(\tau \sigma^2)^{-rk - \frac k 2} |A_k| ^{\frac 1 2} \exp \left\{- \frac{\bm \beta_k ^ \prime A_k \bm \beta_k}{2 \tau \sigma ^2}\right\}\prod_{i =1}^{k} \beta_{k_i}^{2r}\\ & \pi(k) \propto c_1^{-k}p^{-c_2k},\\ & \pi(\tau) = \frac{\left(\frac n 2\right)^{\frac 1 2}}{\Gamma(\frac 1 2)} \tau^{- \frac 3 2} e^{-\frac{n}{2\tau}},\\ & \pi\left(\sigma^2\right) = \frac{\left( \alpha_2 \right)^{\alpha_1}}{\Gamma(\alpha_1)} \left(\sigma^2\right)^{-(\alpha_1 + 1)} e^{-\frac{\alpha_2}{\sigma^2}}, \end{split} \end{eqnarray} \noindent where $c_1, c_2>0$ are fixed constants. As indicated in \citep{Castillo:2015}, the rate of decrease of $\pi(k)$ reflects the number $\binom{p}{k}$ of models of a given size $k$. Compared to the previous uniform-like prior, these complexity priors explicitly penalize larger models to accommodate larger dimensions. 
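To see how sharply the complexity prior penalizes model size relative to the uniform-like prior, one can compare the total prior mass each places on models of a given size $k$. The short Python sketch below does this for the illustrative values $p = 100$, $q_n = 10$, $c_1 = 2$ and $c_2 = 1.5$; these numbers are our own choices for the illustration, not values used elsewhere in the paper.

```python
import math

def complexity_prior(p, q, c1=2.0, c2=1.5):
    """Prior mass on model size k under pi(k) proportional to c1^{-k} p^{-c2 k},
    aggregated over the binom(p, k) models of each size k = 0..q and normalized."""
    w = [math.comb(p, k) * (c1 ** -k) * (p ** (-c2 * k)) for k in range(q + 1)]
    z = sum(w)
    return [x / z for x in w]

def uniform_like_prior(p, q):
    """Uniform prior over individual models of size at most q: the mass on size k
    is proportional to the number binom(p, k) of such models."""
    w = [math.comb(p, k) for k in range(q + 1)]
    z = sum(w)
    return [x / z for x in w]

p, q = 100, 10
comp = complexity_prior(p, q)   # strictly decreasing in k: small models favored
unif = uniform_like_prior(p, q) # mass piles up near k = q, since binom(p, k) grows
```

The comparison makes the penalization visible: the complexity prior mass decreases strictly in $k$, while the uniform-like prior implicitly favors the largest admissible sizes simply because there are more such models.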
In particular, to achieve model selection consistency in this setup, the dimension $p$ can be allowed to grow at a sub-exponential rate in $n$, as given in the following condition: \begin{condition} \label{assumptionp_new} There exists a constant $0 < \kappa < 1$, such that $\log p = O(n^\kappa)$. \end{condition} The next result establishes posterior ratio consistency for the complexity prior based approach in (\ref{complexity}). \begin{theorem}[Posterior ratio consistency for complexity priors] \label{thm4.1} Consider the complexity prior based model described in (\ref{complexity}). Under Assumptions \ref{assumptiontruemodel}, \ref{assumptionmodelsize}, \ref{assumptionhyper} and Condition \ref{assumptionp_new}, the following holds: $$ \max_{\bm k \ne {\bm t}} \frac{\pi(\bm k|\bm y_n)}{\pi(\bm t|\bm y_n)} \rightarrow 0, \quad \mbox{as } n \rightarrow \infty. $$ \end{theorem} Next, we establish the strong selection consistency result, which implies that the posterior mass assigned to the true model $\bm t$ converges to $1$ in probability. \begin{theorem}[Strong selection consistency for complexity priors] \label{thm4} Consider the complexity prior based model described in (\ref{complexity}). Under Assumptions \ref{assumptiontruemodel}, \ref{assumptionmodelsize}, \ref{assumptionhyper} and Condition \ref{assumptionp_new}, if we further assume $c_2 > 1$, the following holds: $$ {\pi(\bm t|\bm y_n)} \rightarrow 1, \quad \mbox{as } n \rightarrow \infty. $$ \end{theorem} \noindent The proofs of Theorems \ref{thm4.1} and \ref{thm4} will also be broken into several steps. The following three lemmas establish the upper bound for the posterior ratio between any ``non-true" model and the true model. 
\begin{lemma} \label{lm6} Under Assumptions \ref{assumptiontruemodel}, \ref{assumptionmodelsize}, \ref{assumptionhyper} and Condition \ref{assumptionp_new}, when $\bm k \subset \bm t$, for large enough $n > N_1^{\prime\prime}$ (not depending on $\bm k$), the following holds: \begin{align} \frac{\pi(\bm k |\bm y_n)}{\pi(\bm t | \bm y_n)} \le 2M_1 p^{-2c_2t}, \end{align} for some constant $M_1 > 0$. \end{lemma} \begin{lemma} \label{lm7} Under Assumptions \ref{assumptiontruemodel}, \ref{assumptionmodelsize}, \ref{assumptionhyper} and Condition \ref{assumptionp_new}, when $\bm k \supset \bm t$, for large enough $n > N_2^{\prime\prime}$ (not depending on $\bm k$), the following holds: \begin{align} \frac{\pi(\bm k |\bm y_n)}{\pi(\bm t | \bm y_n)} \le c_1^{-(k-t)}p^{-c_2(k-t)}. \end{align} \end{lemma} \begin{lemma} \label{lm8} Under Assumptions \ref{assumptiontruemodel}, \ref{assumptionmodelsize}, \ref{assumptionhyper} and Condition \ref{assumptionp_new}, when $\bm k \nsubseteq \bm t$ and $\bm k \nsupseteq \bm t$, denote $\bm u = \bm k \cup \bm t$. For large enough $n > N_3^{\prime \prime}$ (not depending on $\bm k$), the following holds: \begin{align} \frac{\pi(\bm k |\bm y_n)}{\pi(\bm t | \bm y_n)} \le c_3^{-(k-t)}p^{-c_2k}, \end{align} for some constant $c_3 > 0$. \end{lemma} \noindent \begin{proof}[Proof of Theorems \ref{thm4.1} and \ref{thm4}] Theorem \ref{thm4.1} follows immediately from Lemmas \ref{lm6}, \ref{lm7} and \ref{lm8}. We now move on to the proof of Theorem \ref{thm4}. 
It follows from Lemmas \ref{lm6}, \ref{lm7} and \ref{lm8} that if we restrict to $C_n^c$, then \begin{align*} \frac{1-\pi(\bm t|\bm y_n)}{\pi(\bm t|\bm y_n)} =& \sum_{\bm k \neq \bm t}\frac{\pi(\bm k|\bm y_n)}{\pi(\bm t|\bm y_n)}\\ \le & \sum_{k \le t}\frac{\pi(\bm k|\bm y_n)}{\pi(\bm t|\bm y_n)} + \sum_{k > t, \bm k \supset \bm t}\frac{\pi(\bm k|\bm y_n)}{\pi(\bm t|\bm y_n)} + \sum_{k > t, \bm k \nsupseteq \bm t}\frac{\pi(\bm k|\bm y_n)}{\pi(\bm t|\bm y_n)}\\ \le & \sum_{k = 1}^{t} \binom pk 2M_1p^{-2c_2t} + \sum_{k-t = 1}^{q_n-t} \binom {p-t}{k-t} c_1^{-(k-t)}p^{-c_2(k-t)}\\ &+ \sum_{k = 1}^{q_n} \binom {p}{k} c_3^{-(k-t)}p^{-c_2k}. \end{align*} By $\binom {p} {k} \le p^{k}$ and $c_2 > 1$, we get \begin{align*} \frac{1-\pi(\bm t|\bm y_n)}{\pi(\bm t|\bm y_n)} \rightarrow 0, \mbox{ i.e. } \pi(\bm t|\bm y_n) \rightarrow 1, \quad \mbox{as } n \rightarrow \infty, \end{align*} which completes our proof of Theorem \ref{thm4}. \end{proof} \begin{remark} Although the complexity priors relax the restriction on $p$ for the purpose of proving strong selection consistency, we find in our simulation studies that the model selection performance under the uniform-like prior is much better than that under the complexity priors. Hence, from a practical point of view, one would still prefer the hyper-pMOM with the uniform-like prior over the model space. As indicated in \citep{Shin.M:2015}, since the pMOM priors already induce a strong penalty on the model size, it is no longer necessary to penalize larger models through priors on the model space. \end{remark} \section{Computation} \label{sec:computation} \noindent The integral formulation in (\ref{model posterior}) is quite complicated, and hence the posterior probabilities cannot be obtained in closed form. We therefore use a Laplace approximation to compute $m_{\bm k}(\bm y_n)$ and $\pi(\bm k | \bm y_n)$. A similar approach to computing posterior probabilities has been used in \citep{Johnson:Rossell:2012} and \citep{Shin.M:2015}. 
Note that for any model $\bm k$, when $A_{k} = I_{k}$, the normalization constant $d_{k}$ in (\ref{modelspecification}) is given by $d_{k} = \left((2r-1)!!\right)^{-k}$. Up to an additive constant (coming from the normalizing constants of $\pi(\tau)$ and $\pi(\sigma^2)$, which depend on neither the parameters nor the model), let \begin{align} \label{laplacemarginal} \begin{split} f(\bm \beta_k,\tau,\sigma^2) =& \log\left(\pi(\bm y_n|\bm \beta_k,\sigma^2)\pi(\bm \beta_k|\tau,\sigma^2)\pi(\tau)\pi(\sigma^2)\right)\\ =& -k\log\left((2r-1)!!\right) - \frac {n+k} 2\log(2\pi) - \left(rk +\frac {n+k} 2 + \alpha_1 + 1\right)\log(\sigma^2) \\ &- \left(rk+\frac {k+3} 2\right)\log\tau -\left(\frac{(\bm y_n - X_k\bm \beta_k)^T(\bm y_n - X_k\bm \beta_k)}{2\sigma^2} \right)\\ &-\left(\frac{\bm \beta_k^T\bm \beta_k }{2\tau\sigma^2} + \frac{\alpha_2}{\sigma^2} + \frac{n}{2\tau}\right)+ \sum_{i=1}^k2r\log(\beta_{k_i}). \end{split} \end{align} For any model $\bm k$, the Laplace approximation of $m_{\bm k}(\bm y_n)$ is given by \begin{align} \label{laplace} (2\pi)^{\frac k 2 + 1}\exp\left\{f(\hat{\bm \beta_k},\hat{\tau},\hat{\sigma^2})\right\}|V(\hat{\bm \beta_k},\hat{\tau},\hat{\sigma^2})|^{-\frac 1 2}, \end{align} where $(\hat{\bm \beta_k},\hat{\tau},\hat{\sigma^2}) = \argmax_{(\bm \beta_k,\tau,\sigma^2)}f(\bm \beta_k,\tau,\sigma^2)$, obtained via the optimization function nlm in R using a Newton-type algorithm, and $V(\hat{\bm \beta_k},\hat{\tau},\hat{\sigma^2})$ is a $(k+2)\times(k+2)$ symmetric matrix with the following blocks: \begin{align} \begin{split} &V_{11} = \frac 1 {\tau\sigma^2}I_k + \frac 1 {\sigma^2}X_k^TX_k + diag\left\{\frac{2r}{\beta_{k_1}^2}, \ldots, \frac{2r}{\beta_{k_k}^2} \right\}, V_{12} = -\frac{\bm \beta_k}{\tau^2\sigma^2},\\ &V_{13} = -\frac{\bm \beta_k}{\tau\sigma^4} - \frac{X_k^TX_k\bm \beta_k - X_k^T\bm y_n}{\sigma^4}, V_{22} = -\frac{rk + \frac k 2 + \frac 3 2}{\tau^2} + \frac{\bm \beta_k^T\bm \beta_k}{\tau^3\sigma^2} + \frac{n}{\tau^3},V_{23} = \frac{\bm \beta_k^T\bm \beta_k}{2\tau^2\sigma^4},\\ &V_{33} = -\frac{rk+\frac k 2 + \frac n 2 + \alpha_1 + 1}{\sigma^4} +\frac{\bm \beta_k^T\bm \beta_k}{\tau\sigma^6} + \frac{(\bm y_n - X_k\bm \beta_k)^T(\bm y_n - X_k\bm \beta_k)}{\sigma^6} + \frac{2\alpha_2}{\sigma^6}. \end{split} \end{align} The above Laplace approximation can be used to compute the log of the posterior probability ratio between any given model $\bm k$ and the true model $\bm t$, and to select the model $\bm k$ with the highest probability. Based on a reviewer's comment, we would like to point out that the Laplace approximation has potential drawbacks. Firstly, as indicated in \cite{Rossell:Telesca:2017}, for non-local priors, Laplace approximations fail to consistently estimate the marginal likelihood for overfitted models. Secondly, the Newton-type algorithm used for optimizing (\ref{laplacemarginal}) can be quite time-consuming, especially when the size of the model and the dimension $p$ are large. For example, in Figure \ref{fig:runtime}, the runtime for the hyper-pMOM approach increases as $p$ grows. Despite these potential drawbacks, we note that in these high-dimensional settings, full posterior sampling using Markov chain Monte Carlo algorithms is highly inefficient and often not feasible from a practical perspective. Hence, the Laplace approximation remains a much more practical choice than MCMC. We then adopt the scalable stochastic search algorithm proposed by \cite{Shin.M:2015}, called Simplified Shotgun Stochastic Search with Screening (S5). Utilizing the Laplace approximations of the marginal probabilities in (\ref{laplace}), the S5 method aims at rapidly identifying regions of high posterior probability and finding the maximum a posteriori (MAP) model. Detailed algorithm steps can be found in \cite{Shin.M:2015} and the implementation can be found in the R package ``BayesS5". 
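To make the mechanics of the Laplace approximation concrete, the following Python sketch implements a one-dimensional version with numeric derivatives and checks it on a toy integral with a known value (the Gamma function, recovering Stirling's formula). This is only an illustration of the generic technique; the actual objective above is optimized jointly over $(\bm \beta_k, \tau, \sigma^2)$, which we do with the R function nlm.

```python
import math

def laplace_log_integral(f, x0, h=1e-4, newton_steps=25):
    """Laplace approximation to log of the integral of exp(f(x)) for a smooth
    unimodal f: log-integral is approximately
        f(x_hat) + 0.5*log(2*pi) - 0.5*log(-f''(x_hat)),
    where x_hat maximizes f. The mode is located by Newton's method using
    central finite differences for f' and f''."""
    x = x0
    for _ in range(newton_steps):
        d1 = (f(x + h) - f(x - h)) / (2.0 * h)
        d2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)
        x -= d1 / d2
    d2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)
    return f(x) + 0.5 * math.log(2.0 * math.pi) - 0.5 * math.log(-d2)

# Toy check with a known answer: Gamma(n+1) equals the integral over (0, inf)
# of exp(n*log(x) - x), so the Laplace value should be close to log(n!)
# for moderate n -- this is exactly Stirling's approximation.
n = 20
approx = laplace_log_integral(lambda x: n * math.log(x) - x, x0=float(n))
exact = math.lgamma(n + 1.0)  # log(20!)
```

The Laplace value slightly undershoots the exact log-integral here, consistent with the leading $1/(12n)$ correction in Stirling's series; the same quadratic-expansion idea, applied blockwise with the matrix $V$ above, underlies (\ref{laplace}).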
\section{Experiments} \label{sec:experiments} \noindent \subsection{Simulation I: Illustration of posterior ratio consistency} \label{sec:illustration:posterior:ratio} In this section, we illustrate the model selection consistency results in Theorems \ref{thm1} and \ref{thm2} using a simulation experiment. A similar simulation setting was considered by Cao, Khare and Ghosh \citep{CKG:2017}, in which the authors established posterior consistency in a graphical model setting. We generate our data according to a Gaussian linear model based on the following mechanism. First, we vary $p$ from $500$ to $3000$ and let $n = p/5$. Then, for each fixed $p$, ten covariates are taken as active in the true model with coefficients $\bm \beta_0 = \left(1.1,1.2,1.3,\ldots,1.9,2\right)^T$, and we set $\sigma = 1$. Also, the signs of the true regression coefficients are randomly switched with probability $0.5$. Next, we generate $n$ i.i.d. observations from the $N(\bm 0_p, \Sigma )$ distribution as rows of the covariate matrix $X$. We then examine posterior ratio consistency under three different cases of $\Sigma$ by computing the log posterior ratio of a ``non-true" model $\bm k$ and $\bm t$ as follows. \begin{enumerate} \item Case $1$: Isotropic design, where $\Sigma = I_p.$ \item Case $2$: Compound symmetry design, where $\Sigma_{ij} = 0.5$ if $i \neq j$ and $\Sigma_{ii} = 1,$ for all $1 \le i \le j \le p.$ \item Case $3$: Autoregressive correlated design, where $\Sigma_{ij} = 0.5^{|i-j|},$ for all $1 \le i \le j \le p$. \end{enumerate} \noindent Throughout this simulation study, we set the hyperparameters $r=2$ and $\alpha_1 = \alpha_2 = 0.01$. The log of the posterior probability ratio for various cases of $\Sigma$ is provided in Figure \ref{posterior_ratio_plot}. Note that for each of these cases, we compute the log ratio under four different scenarios of the ``non-true'' model $\bm k$. 
\begin{enumerate} \item Scenario $1$: $\bm k$ is a subset of $\bm t$ and $|\bm k| = \frac 1 2 |\bm t|.$ \item Scenario $2$: $\bm k$ is a superset of $\bm t$ and $|\bm k| = 2 |\bm t|.$ \item Scenario $3$: $\bm k$ is not necessarily a subset of $\bm t$, but $|\bm k| = \frac 1 2 |\bm t|.$ \item Scenario $4$: $\bm k$ is not necessarily a superset of $\bm t$, but $|\bm k| = 2 |\bm t|.$ \end{enumerate} As expected, the log of the posterior probability ratio for any ``non-true" model $\bm k$ compared to the true model $\bm t$ decreases to large negative values as $p$ increases, thereby providing a numerical illustration of Theorems \ref{thm1} and \ref{thm2}. \begin{figure}[htbp] \centering \begin{subfigure} {\includegraphics[width=35mm]{posterior_ratio.pdf}} \end{subfigure} \begin{subfigure} {\includegraphics[width=35mm]{posterior_ratio_1.pdf}} \end{subfigure} \begin{subfigure} {\includegraphics[width=49mm]{posterior_ratio_2.pdf}} \end{subfigure} \caption{Log of posterior probability ratio for $\bm k$ and $\bm t$ for various choices of the ``non-true" model $\bm k$. Left: case $1$; middle: case $2$; right: case $3$. } \label{posterior_ratio_plot} \end{figure} \subsection{Simulation II: Illustration of model selection} \label{sec:illustration:model:selection} \noindent In this section, we perform a simulation experiment to illustrate the potential advantages of using our Bayesian approach. Several different values of $p$ ranging from $200$ to $2500$ are considered, while $n = p/5$. For each fixed $p$, we construct two sets of $\bm \beta_0$. The first set is generated by the same mechanism as in Section \ref{sec:illustration:posterior:ratio}. The other set considered is $(0.3,0.35,0.4,0.45,0.5,1.1,1.2,1.3,1.4,1.5)^T$. Next, we generate $n$ i.i.d. observations from the $N(\bm 0_p, \Sigma)$ distribution as rows of the covariate matrix $X$ under the following three cases, similar to Section \ref{sec:illustration:posterior:ratio}. 
\begin{enumerate} \label{sigma_cases} \item Case $1$: Isotropic design, where $\Sigma = I_p.$ \item Case $2$: Compound symmetry design, where $\Sigma_{ij} = 0.5$ if $i \neq j$ and $\Sigma_{ii} = 1,$ for all $1 \le i \le j \le p.$ \item Case $3$: Autoregressive correlated design, where $\Sigma_{ij} = 0.5^{|i-j|},$ for all $1 \le i \le j \le p$. \end{enumerate} \noindent Then, we perform model selection using our hierarchical Bayesian approach. This is done by computing the posterior probabilities using the Laplace approximation in (\ref{laplace}), and exploring the model space using the simplified shotgun stochastic search algorithm in \citep{Shin.M:2015}. We would like to remind the readers that in our model, we do not need to specify a fixed value for $\tau$, but rather put a prior on the parameter $\tau$ (as opposed to \citep{Johnson:Rossell:2012} and \citep{Shin.M:2015}, where $\tau$ is treated as a fixed parameter). In Table \ref{model:selection:table:n200} and Table \ref{model:selection:table:beta2}, we also provide model selection performance results with $\tau$ fixed at $0.072$ (the default value for the second-order pMOM prior suggested in \cite{Johnson:Rossell:2012}), and with the numerical choice of fixed $\tau$ implemented in the R package BayesS5 (following the results in \citep{Shin.M:2015}). Additionally, we provide model selection performance results for the Lasso \citep{Lasso:1996} and SCAD \citep{SCAD:2001} penalized likelihood methods. The model selection performance of these five methods is then compared using several measures, namely positive predictive value, true positive rate and false positive rate (averaged over $20$ independent repetitions). Positive Predictive Value (PPV) represents the proportion of true model indexes among all the indexes detected by the given procedure. True Positive Rate (TPR) measures the proportion of true indexes detected by the given procedure among all the true indexes from the true model. 
False Positive Rate (FPR) represents the proportion of falsely identified indexes among all the non-true indexes from the true model. PPV, TPR and FPR are defined as $$\mbox{PPV} = \frac{\text{TP}}{\text{TP + FP}}, \quad \mbox{TPR} = \frac{\text{TP}}{\text{TP + FN}}, \quad \mbox{FPR} = \frac{\text{FP}}{\text{FP + TN}},$$ where TP, TN, FP and FN correspond to true positives, true negatives, false positives and false negatives, respectively. One would like the PPV and TPR values to be as close to $1$ as possible, and the FPR value to be as close to $0$ as possible. The results are summarized in Table \ref{model:selection:table:n200} and Table \ref{model:selection:table:beta2}. To better visualize the results, in Figure \ref{roc}, we provide the ROC curves when $|\bm \beta_0| = \left(1.1,1.2,1.3,\ldots,1.9,2\right)^T$ and $\Sigma$ for generating $\bm X$ corresponds to the compound symmetry design. We also include the complexity prior based approach described in Section \ref{sec:complexity}. As we can see, the complexity prior based approach captures fewer true indexes compared to the other approaches. 
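For reference, the three measures defined above can be computed from a selected index set as follows; this is a small self-contained Python helper, and the index sets in the example are made up purely for illustration.

```python
def selection_metrics(selected, true_model, p):
    """PPV, TPR and FPR of a set of selected variable indexes against the
    true model's index set, out of p candidate predictors in total."""
    selected, true_model = set(selected), set(true_model)
    tp = len(selected & true_model)   # true positives
    fp = len(selected - true_model)   # false positives
    fn = len(true_model - selected)   # false negatives
    tn = p - tp - fp - fn             # true negatives
    ppv = tp / (tp + fp) if tp + fp > 0 else 0.0
    tpr = tp / (tp + fn) if tp + fn > 0 else 0.0
    fpr = fp / (fp + tn) if fp + tn > 0 else 0.0
    return ppv, tpr, fpr

# Hypothetical example: true model {0, ..., 9}; the estimate misses index 9
# and falsely adds index 42, out of p = 500 predictors.
ppv, tpr, fpr = selection_metrics(set(range(9)) | {42}, set(range(10)), p=500)
# ppv = 0.9, tpr = 0.9, fpr = 1/490
```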
\begin{table} \centering \scalebox{0.78}{ \begin{tabular}{cccccccccccccccc} \hline \multicolumn{1}{c}{\multirow{1}{*}{}}& \multicolumn{3}{c}{Lasso} & \multicolumn{3}{c}{SCAD} & \multicolumn{3}{c}{BayesS5} & \multicolumn{3}{c}{$\tau = 0.072$} & \multicolumn{3}{c}{Hyper-pMOM} \\ p & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR \\ \hline 200 & 0.29 & 1 & 0.14 & 0.93 & 1 & 0.01 &1 &1 &0 & 0.96 & 1 & 0 & 1 & 1 & 0 \\ 500 & 0.19 & 1 & 0.09 & 0.90 & 1 & 0 & 1 &1 &0 & 0.88 & 1 & 0 & 1 & 1 & 0 \\ 1000 & 0.18 & 1 & 0.04 & 0.87 & 1 & 0 & 0.98 & 1 &0 & 0.69 & 1 & 0 & 1 & 1 & 0 \\ 1500 & 0.18 & 1 & 0.03 & 0.84 & 1 & 0 & 0.98 & 1 &0 & 0.74& 1 & 0 & 1 & 1 & 0 \\ 2000 & 0.17 & 1 & 0.02 & 0.82 & 1 & 0 & 0.98 & 1 & 0 & 0.59 & 1 & 0 & 1 & 1 & 0 \\ 2500 &0.13 & 1 & 0.02 & 0.90 & 1 & 0 & 0.97 & 1 & 0 &0.49 &1 & 0 & 1 & 1 & 0 \\ \hline \end{tabular} } \scalebox{0.78}{ \begin{tabular}{cccccccccccccccc} \hline \multicolumn{1}{c}{\multirow{1}{*}{}}& \multicolumn{3}{c}{Lasso} & \multicolumn{3}{c}{SCAD} & \multicolumn{3}{c}{BayesS5} & \multicolumn{3}{c}{$\tau = 0.072$} & \multicolumn{3}{c}{Hyper-pMOM} \\ p & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR \\ \hline 200 & 0.27 & 1 & 0.15 & 0.96 & 1 & 0 &0.94 & 1 & 0 &0.81 & 1 & 0.01 &1 & 1 & 0 \\ 500 & 0.21 & 1 & 0.09 & 0.94 & 1 & 0 &0.95 &1 &0 &0.59 & 1 & 0.02 &1 & 1 & 0 \\ 1000 & 0.17 & 1 & 0.05 & 0.92 & 1 & 0 &0.95 & 1 & 0 &0.46 & 1 & 0.01 &0.99 & 1 & 0 \\ 1500 & 0.19 & 1 & 0.03 & 0.90 & 1 & 0&0.94 &1 &0 &0.42 & 1 & 0.01 &1 & 1 & 0 \\ 2000 & 0.13 & 1 & 0.04 & 0.84 & 1 & 0 &0.87 &1 &0 &0.41 & 1 & 0.01 & 0.99 & 1 & 0 \\ 2500 &0.12 & 1 & 0.03 & 0.92 & 1 & 0&0.88 & 1 & 0 & 0.36 & 1 & 0.01&0.99 &1 & 0\\ \hline \end{tabular} } \scalebox{0.78}{ \begin{tabular}{cccccccccccccccc} \hline \multicolumn{1}{c}{\multirow{1}{*}{}}& \multicolumn{3}{c}{Lasso} & \multicolumn{3}{c}{SCAD} & \multicolumn{3}{c}{BayesS5} & \multicolumn{3}{c}{$\tau = 0.072$} & 
\multicolumn{3}{c}{Hyper-pMOM} \\ p & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR \\ \hline 200 & 0.25 & 1 & 0.17 & 0.91 & 1& 0 &1 & 1 & 0 &0.96 & 1 & 0& 1 & 1 & 0 \\ 500 & 0.20 & 1 & 0.10 & 0.91 & 1 & 0 & 0.98 &1 &0 &0.83 & 1 & 0& 1 & 1 & 0 \\ 1000 & 0.18 & 1 & 0.05 & 0.85 & 1 & 0 &0.97 &1 &0 &0.73 & 1 & 0 &1 & 1 & 0\\ 1500 & 0.16 & 1 & 0.04 & 0.83 & 1 & 0 &0.96 &1 &0 & 0.71 & 1 & 0 & 1 & 1 & 0 \\ 2000 & 0.17 & 1 & 0.04 & 0.83 & 1 & 0 &0.96 & 1 &0 &0.57 & 1 & 0 &0.99 & 1 & 0 \\ 2500 & 0.14 & 1 & 0.03 & 0.85 & 1 & 0 &0.96 & 1 & 0 & 0.56 & 1 & 0 & 1 & 1 & 0\\ \hline \end{tabular} } \caption{Model selection performance comparison table when $|\bm \beta_0| = \left(1.1,1.2,1.3,\ldots,1.9,2\right)^T$. Top: case $1$; middle: case $2$; bottom: case $3$.} \label{model:selection:table:n200} \end{table} \begin{figure}[htbp] \centering \begin{subfigure} {\includegraphics[width=35mm]{roc2001500.pdf}} \end{subfigure} \begin{subfigure} {\includegraphics[width=35mm]{roc2002000.pdf}} \end{subfigure} \begin{subfigure} {\includegraphics[width=52.3mm]{roc2002500.pdf}} \end{subfigure} \caption{ROC curves when $p = 1500$ (left), $p = 2000$ (middle), $p = 2500$ (right).} \label{roc} \end{figure} \begin{table} \centering \scalebox{0.78}{ \begin{tabular}{cccccccccccccccc} \hline \multicolumn{1}{c}{\multirow{1}{*}{}}& \multicolumn{3}{c}{Lasso} & \multicolumn{3}{c}{SCAD} & \multicolumn{3}{c}{BayesS5} & \multicolumn{3}{c}{$\tau = 0.072$} & \multicolumn{3}{c}{Hyper-pMOM} \\ p & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR \\ \hline 200 & 0.32 & 1 & 0.22 & 1 & 1 & 0 & 0.97 & 0.95 & 0 &0.95 &0.96 &0 &1 &0.86 & 0 \\ 500 & 0.27 & 1 & 0.06 & 0.48 & 1 & 0.02 & 0.97 & 0.93 & 0 &0.89 &0.93 &0 &1 &0.84 & 0 \\ 1000 & 0.13 & 1 & 0.07 & 0.59 & 1 & 0.01 & 0.95 & 0.95 & 0 &0.88 &0.89 &0 &1 &0.88 & 0 \\ 1500 & 0.23 & 1 & 0.02 & 0.61 & 0.89 & 0 &0.97 & 0.90 & 0 &0.84 &0.90 & 0 &1 &0.88 & 0 \\ 2000 & 0.19 & 1 & 0.03 & 0.63 
& 1 & 0 & 0.97 & 0.89 & 0 &0.74 &0.89 &0 &1 &0.84 & 0 \\ 2500 & 0.16 & 1 & 0.03 & 0.59 & 1 & 0.01 & 0.99 & 0.87 & 0 &0.77 &0.88 & 0 &1 &0.83 & 0\\ \hline \end{tabular} } \scalebox{0.78}{ \begin{tabular}{cccccccccccccccc} \hline \multicolumn{1}{c}{\multirow{1}{*}{}}& \multicolumn{3}{c}{Lasso} & \multicolumn{3}{c}{SCAD} & \multicolumn{3}{c}{BayesS5} & \multicolumn{3}{c}{$\tau = 0.072$} & \multicolumn{3}{c}{Hyper-pMOM} \\ p & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR \\ \hline 200 & 0.32 & 1 & 0.12 & 0.88 & 0.7 & 0.01 &0.99 &0.77 & 0 &0.79 &0.84 &0.01 & 1 & 0.81 & 0 \\ 500 & 0.26 & 1 & 0.06 & 1 & 0.83 & 0 &1 &0.72 & 0&0.74 &0.82 &0.01 & 1 & 0.83 & 0 \\ 1000 & 0.19 & 0.89 & 0.02 & 0.57 & 0.81 & 0.01 &1 &0.69 & 0 &0.60 &0.84 &0.01 & 1 & 0.79 & 0 \\ 1500 & 0.19 & 0.91 & 0.03 & 0.57 & 0.80 & 0.05 &0.99 &0.65 & 0 &0.70 &0.80 &0 & 1 & 0.79 & 0 \\ 2000 & 0.17 & 1 & 0.15 & 0.66 & 0.79 & 0.03 &0.95 &0.67 & 0 &0.62 &0.80 &0 & 0.94 & 0.74 & 0 \\ 2500 & 0.18 & 1 & 0.19 & 0.51 & 0.72 & 0.03 & 0.95 & 0.64 & 0 & 0.57 & 0.78 & 0 & 0.95 & 0.70 & 0\\ \hline \end{tabular} } \scalebox{0.78}{ \begin{tabular}{cccccccccccccccc} \hline \multicolumn{1}{c}{\multirow{1}{*}{}}& \multicolumn{3}{c}{Lasso} & \multicolumn{3}{c}{SCAD} & \multicolumn{3}{c}{BayesS5} & \multicolumn{3}{c}{$\tau = 0.072$} & \multicolumn{3}{c}{Hyper-pMOM} \\ p & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR & PPV & TPR & FPR \\ \hline 200 & 0.37 & 1 & 0.09 & 0.7 & 1 & 0.02 &0.99 &0.92 & 0 & 0.94 & 0.93 & 0 & 1 & 0.90 & 0 \\ 500 & 0.23 & 1 & 0.07 & 0.89 & 0.79 & 0 &0.96 &0.90 & 0 & 0.89 & 0.87 & 0 & 1 & 0.88 & 0 \\ 1000 & 0.13 & 1 & 0.07 & 0.48 & 0.95 & 0.01 &0.96 &0.88 & 0 & 0.77 & 0.86 & 0 & 1 & 0.84 & 0 \\ 1500 & 0.21 & 1 & 0.03 & 0.36 & 0.80 & 0.01 &0.97 &0.87 & 0 & 0.75 & 0.86 & 0 & 1 & 0.89 & 0 \\ 2000 & 0.16 & 0.9 & 0.03 & 0.35 & 0.71 & 0.01 &0.95 &0.88 & 0 &0.84 &0.82 & 0 & 1 & 0.86 & 0 \\ 2500 & 0.13 & 1 & 0.03 & 0.45 & 0.68 & 0 &0.95 &0.81 & 0 
&0.80 &0.82 &0 & 1 & 0.78 & 0\\ \hline \end{tabular} } \caption{Model selection performance comparison table when $\bm \beta_0 = (0.3,0.35,0.4,0.45,0.5,1.1,1.2,1.3,1.4,1.5)^T$. Top: case $1$; middle: case $2$; bottom: case $3$.} \label{model:selection:table:beta2} \end{table} Based on Table \ref{model:selection:table:n200} and \ref{model:selection:table:beta2}, it is clear that our Bayesian approach outperforms both the penalized likelihood approaches and the fixed $\tau$ settings on almost all measures and under all cases. The PPV values for our hyper-pMOM approach are all higher than those of the other four methods, which means our method identifies the true model more precisely. In addition, the FPR values for the Bayesian approach are all significantly smaller than the FPR values for the penalized approaches. It is also worth noting that, especially in lower dimensions, the numerical procedure for choosing $\tau$ implemented in BayesS5 requires additional run time, as shown in Figure \ref{fig:runtime}; in our approach this step is omitted, and we still obtain better model selection results. Overall, this experiment illustrates the fact that the Bayesian approach can lead to a significant improvement in model selection performance as compared to penalized likelihood methods. Also, the hierarchical Bayesian approach introduced in this paper can lead to a significant improvement in performance as compared to the fixed $\tau$ Bayesian approach when the sample size is much smaller than the number of predictors. \begin{figure} \centering \includegraphics[width=75mm]{run_time} \caption{Run time comparison in seconds.} \label{fig:runtime} \end{figure} \section{Real Data Analysis} \label{sec:real} In this section, we carry out a real data analysis based on the Boston housing dataset to examine the performance of the proposed method. 
The dataset contains the median value of owner-occupied homes in the Boston region as the response variable, together with several other possible predictor variables, including geographical characteristics. The total number of observations is $n = 506$, and 10 continuous variables (crim, indus, nox, rm, age, dis, tax, ptratio, b, and lstat) are considered as the predictor variables. Several approaches for variable selection have been demonstrated on this housing dataset; see, for example, \citep{yuan:lin:2005, Shin.M:2015}. We add 1000 noise variables generated independently from a standard normal distribution in order to perform model selection in a $p > n$ regression setting. The design matrix is standardized, and the dataset is divided into a training set of size $406$ and a test set of size $100$. We first obtain the model estimate based on the training set and then compare the proposed hyper-pMOM approach with the following four methods on the test set: pMOM with fixed $\tau = 0.072$, peMOM with simplified shotgun stochastic search, and two frequentist approaches, Lasso and SCAD. The results, averaged over 100 repetitions, are summarized in Table \ref{table:boston} based on the following five measures also adopted in \citep{Shin.M:2015}. MSPE represents the out-of-sample squared prediction error calculated by \begin{equation*} \mbox{MSPE} = \frac 1 {100} \sum_{i \in test}\left(y_i - X_i^T\hat {\bm \beta}_{\hat k}^{train}\right)^2, \end{equation*} where $\hat {\bm \beta}_{\hat k}^{train}$ is the least squares estimator based on the model estimate obtained from the training set. MS-O and MS-N refer to the average numbers of original variables and of falsely selected noise variables, respectively, over the 100 repetitions. FS-O is the number of original variables that are selected in at least 95 out of 100 repetitions. TS-O refers to the number of original variables that are selected at least once across the 100 repetitions.
As we see in Table \ref{table:boston}, our hyper-pMOM approach consistently identifies the same model and has the lowest prediction error among all five methods. In particular, three of the original variables are selected in at least 95 of the 100 repetitions. Across all 100 repetitions, our hyper-pMOM method successfully avoids selecting any noise variable, while all the other four methods falsely identify at least one noise variable. Overall, the real data application illustrates that our hyper-pMOM approach yields the most stable and accurate model selection among all five methods. \begin{table} \begin{tabular}{cccccc} \hline \text{ } &MSPE & MS-O & MS-N & FS-O & TS-O \\ \hline Hyper-pMOM & 17.53 & 3 & 0 & 3 & 3 \\ pMOM & 21.57 & 5 & 4 & 5 & 4 \\ peMOM & 18.08 & 5 & 1& 5 & 5 \\ Lasso & 23.40 & 5.58& 17.81& 4 & 6 \\ SCAD & 22.83 & 4.98& 12.89& 5 & 5\\ \hline \end{tabular} \caption{Model selection comparison based on the Boston housing data.} \label{table:boston} \end{table} \section{Discussion} \label{sec:discussion} This article describes and examines theoretical properties of the hyper-pMOM priors proposed in \citep{PhDthesis:Wu} for variable selection in high-dimensional linear model settings. Under standard regularity assumptions, which include restricting the prior over models to those with model size less than an appropriate function of the sample size $n$, we establish posterior ratio consistency (Theorem \ref{thm1}), i.e., the ratio of the maximum marginal posterior probability assigned to a ``non-true'' model to the posterior probability assigned to the ``true'' model converges to zero in probability. Next, under the additional assumption that $p$ increases at a polynomial rate with $n$, we show strong model selection consistency (Theorem \ref{thm2}). Strong model selection consistency implies that the posterior probability of the true model converges in probability to $1$ as $n \rightarrow \infty$.
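In symbols (using illustrative notation not fixed in this section: $t$ denotes the true model and $\pi(k \mid Y_n)$ the marginal posterior probability of model $k$ given the data), the two consistency notions read:

```latex
% Illustrative notation: t is the true model, \pi(k \mid Y_n) the
% marginal posterior probability of model k given the data Y_n.
\begin{equation*}
  \max_{k \neq t}\; \frac{\pi(k \mid Y_n)}{\pi(t \mid Y_n)}
  \;\xrightarrow{\;P\;}\; 0
  \qquad \text{(posterior ratio consistency)},
\end{equation*}
\begin{equation*}
  \pi(t \mid Y_n) \;\xrightarrow{\;P\;}\; 1
  \qquad \text{(strong model selection consistency)}.
\end{equation*}
```

Since the posterior probabilities over all models sum to one, the second statement implies the first.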
Based on the reviewers' comments, we realize that the polynomial-rate restriction on $p$ could be rather limiting. By carefully examining our theoretical analysis, in Section 5 we add another result in which we replace the uniform-like prior with the complexity prior on the model space to penalize larger models, and establish strong model selection consistency (Theorem \ref{thm4}) when $p$ is allowed to grow at a sub-exponential rate in $n$. However, through simulation studies, we find that the model selection performance under the uniform-like prior is much better than that under the complexity prior; hence, from a practical point of view, one would still prefer the hyper-pMOM with the uniform-like prior on the model space. In Section \ref{sec:computation}, we provide details about the application of the Laplace approximation to approximate the posterior density, and we illustrate the potential benefits of our hyper-pMOM based model selection procedure compared with other methods via simulation studies and real data analysis in Section \ref{sec:experiments} and Section \ref{sec:real}, respectively. \bibliographystyle{ba}
\section{Introduction} \label{one} The interest in the Abelian monopoles in the non--Abelian gauge theories is motivated by the central role of these objects in the dual superconductor mechanism~\cite{DualSuperconductor} of color confinement. The Abelian monopoles can be considered as particular configurations of gluon fields with magnetic quantum numbers. In pure non--Abelian gauge theories the Abelian monopoles do not exist at the classical level. However, these topological defects can successfully be identified given a dynamical configuration of the gluon fields in a particular Abelian gauge~\cite{AbelianProjections}. There are many Abelian gauges, among which the most popular one is the Maximal Abelian (MA) gauge~\cite{kronfeld}. In this gauge the off-diagonal gluon fields are suppressed and short--ranged, contrary to the diagonal (Abelian) gluon fields~\cite{ref:propagators}. There are many numerical experiments confirming that the Abelian degrees of freedom are responsible for the confinement of color (for a review, see Ref.~\cite{Reviews}). In particular, it was observed in Refs.~\cite{AbelianDominance,shiba:string} that the tension of the chromoelectric string is dominated by the Abelian monopole contributions. Moreover, the monopole condensate -- which guarantees the formation of the chromoelectric string between the quarks -- exists in the confinement phase and disappears in the deconfinement phase~\cite{shiba:condensation,MonopoleCondensation}. The trajectories of the Abelian monopoles form two different types of clusters. A typical configuration contains many finite-sized clusters and one large percolating cluster~\cite{ivanenko,ref:kitahara}. The percolating (or infrared (IR)) cluster occupies the whole lattice, while the sizes of the other clusters have an ultraviolet nature. The monopole condensate corresponds to this percolating (infrared) cluster of the monopole trajectories.
The tension of the confining string gets a dominant contribution from the IR monopole cluster~\cite{ref:kitahara}, while the finite-sized ultraviolet (UV) clusters do not play any role in the confinement. Various properties of the UV and IR monopole clusters were investigated previously in Refs.~\cite{ref:kitahara,ref:clusters,IshiguroSuzuki}. At high temperatures the Abelian monopoles become static. In the high temperature phase the IR monopole cluster disappears~\cite{ivanenko,ref:kitahara} and, consequently, the confinement of the static quarks is lost. Since the static currents do not play any role in confinement, we concentrate below on the spatial components of the IR monopole cluster. We investigate the action, the length distribution and the entropy of the spatial components of the infrared monopole clusters. We follow Ref.~\cite{IshiguroSuzuki}, where the energy and entropy of the monopole currents were studied at zero temperature. Our preliminary results were reported in Ref.~\cite{IshiguroSuzuki:spatial:Lattice}. The plan of the paper is as follows. In Section~\ref{sec:model} we describe the model and provide the description of the monopole currents. The details of the numerical simulations are also given in this Section. Section~\ref{sec:action} is devoted to the investigation of the Abelian monopole action obtained by the inverse Monte-Carlo method for the clusters of the spatially projected Abelian monopoles. In Section~\ref{sec:length} we study the length distributions of the infrared clusters of the spatially projected monopole currents. The knowledge of the monopole action and cluster distribution allows us to calculate the entropy of the spatial monopole currents, which is discussed in Section~\ref{sec:entropy}. Our conclusions are presented in the last Section.
\section{Model} \label{sec:model} We study pure SU(2) QCD with the standard Wilson lattice action for gluon fields, \begin{eqnarray} S(U) = - \frac{\beta}{2} {\mathrm{Tr}} \sum_P U_P\,, \end{eqnarray} where $\beta$ is the coupling constant, the sum goes over all plaquettes of the lattice, and $U_P \equiv U_{s,\mu\nu} = U_{s,\mu}U_{s+\hat\mu,\nu} U^\dagger_{s+\hat\nu,\mu} U^\dagger_{s,\nu}$ is the SU(2) plaquette constructed from link fields, $U_{s,\mu}$. We work in the MA gauge~\cite{kronfeld} defined by the maximization of the lattice functional \begin{eqnarray} R = \sum_{s,\hat\mu}{\mathrm{Tr}}\Big(\sigma_3 \widetilde{U}(s,\mu) \sigma_3 \widetilde{U}^{\dagger}(s,\mu)\Big)\,, \label{R} \end{eqnarray} with respect to the gauge transformations $U(s,\mu) \to \widetilde{U}(s,\mu)=\Omega(s)U(s,\mu)\Omega^\dagger(s+\hat\mu)$. In the continuum limit the local condition of maximization~\eq{R} can be written in terms of the differential equation, $(\partial_{\mu}+igA_{\mu}^3)(A_{\mu}^1-iA_{\mu}^2)=0$. Both this condition and the functional \eq{R} are invariant under residual U(1) gauge transformations, $\Omega^{\mathrm{Abel}}(\omega) = {\mathrm{diag} (e^{i \omega(s)},e^{- i \omega(s)})}$. After the Abelian gauge is fixed we perform the projection of the non-Abelian gauge fields, $U_{s,\mu}$, onto the Abelian ones, $u_{s,\mu}$: \begin{eqnarray} \widetilde{U}(s,\mu) = \left( \begin{array}{cc} (1-\vert c(s,\mu)\vert^2)^{1/2} & -c^*(s,\mu) \\ c(s,\mu) & (1-\vert c(s,\mu)\vert^2)^{1/2} \end{array} \right) \left( \begin{array}{cc} u(s,\mu) & 0 \\ 0 & u^*(s,\mu) \end{array} \right), \label{eq:field:decomposition} \end{eqnarray} where $c(s,\mu)$ corresponds to the charged (off-diagonal) matter fields. As we have discussed above, the dominant information about the confinement properties of the theory is located in the monopole configurations which are identified with the help of the Abelian phases of the diagonal fields, $\theta_{s,\mu}$. 
The Abelian field strength $\theta_{\mu\nu}(s)\in(-4\pi,4\pi)$ is defined on the lattice plaquettes by a link angle $\theta(s,\mu)\in[-\pi,\pi)$ as $\theta_{\mu\nu}(s)=\theta(s,\mu)+ \theta(s+\hat\mu,\nu)-\theta(s+\hat\nu,\mu)-\theta(s,\nu)$. The field strength $\theta_{\mu\nu}(s)$ can be decomposed into two parts, \begin{eqnarray} \theta_{\mu\nu}(s)= \bar{\theta}_{\mu\nu}(s) +2\pi m_{\mu\nu}(s)\,, \label{eq:field:separation} \end{eqnarray} where $\bar{\theta}_{\mu\nu}(s)\in [-\pi,\pi)$ is interpreted as the electromagnetic flux through the plaquette and $m_{\mu\nu}(s)$ can be regarded as the number of Dirac strings piercing the plaquette. The elementary monopole current is conventionally constructed using the DeGrand--Toussaint~\cite{degrand} definition: \begin{eqnarray} k_{\mu}(s) & = & \frac{1}{2}\epsilon_{\mu\nu\rho\sigma} \partial_{\nu}m_{\rho\sigma}(s+\hat{\mu}), \label{eq:monopole:definition} \end{eqnarray} where $\partial$ is the forward lattice derivative. The monopole current is defined on a link of the dual lattice and takes the values $0, \pm 1, \pm 2$. Moreover, the monopole current automatically satisfies the conservation law, \begin{eqnarray} \partial'_{\mu}k_{\mu}(s)=0\,, \end{eqnarray} where $\partial'$ is the backward derivative on the dual lattice. The monopole current~\eq{eq:monopole:definition} corresponds to the monopole charge defined at the scale of the elementary lattice spacing, $a$. Obviously, the scale $a$ becomes smaller as we approach the continuum limit. In order to study the properties of the monopoles at fixed {\it physical} scales we use the so--called extended monopoles~\cite{ivanenko}. The $n^3$ extended monopole is defined on a coarse sublattice with the lattice spacing $b=na$.
Thus the construction of the extended monopoles corresponds to a block--spin transformation of the monopole currents with the scale factor $n$, \begin{eqnarray} k_{\mu}^{(n)}(s) = \sum_{i,j,l=0}^{n-1}k_{\mu}(n s+(n-1)\hat{\mu}+i\hat{\nu} +j\hat{\rho}+l\hat{\sigma})\,. \label{eq:blocking} \end{eqnarray} Since the time--like monopole currents are not essential for the confinement properties, we concentrate on the spatial components of the currents. Namely, we investigate the spatially projected currents, \begin{eqnarray} K_i^{(n)}(\vec s) = \sum^{L_t-1}_{s_4=0}\, k_i^{(n)}(s,s_4)\,,\,\, i=1,2,3\,, \end{eqnarray} which are integer--valued and closed. Technically, we generate 2000--10000 configurations of the SU(2) gauge field, $U$, for $\beta=2.3$--$2.6$ on the lattices $L_s^3\times L_t$, with $L_s = 24,32,48,72$ and $L_t=4,6,8,12,16$. The number of generated configurations depends on the value of $\beta$ and on the lattice volume. We fix the gauge with the help of the usual iterative algorithm. In this paper we use the same methods as in the zero--temperature case studied in Ref.~\cite{IshiguroSuzuki}. Thus we refer the interested reader to Ref.~\cite{IshiguroSuzuki} for a more detailed description of the numerical procedures. Below we concentrate on the description of the numerical results. \section{Monopole action} \label{sec:action} In what follows we discuss an effective model of the monopole currents corresponding to pure SU(2) QCD. Formally, we obtain this effective model through the gauge fixing procedure applied to the original model. Then we integrate out all degrees of freedom but the monopole ones. The effective monopole action is related to the original non-Abelian action $S[U]$ as follows: \begin{eqnarray} Z &=& \int {\cal D} U \, \delta(X) \Delta_{FP}(U)\, e^{- S[U]} = \Bigl( \prod_{s, \mu} \sum_{k_\mu(s) = -\infty}^{\infty} \Bigr) \Bigl( \prod_s \delta_{ \partial_{\mu}^{\prime} k_\mu (s), 0} \Bigr) e^{-S^{{\mathrm{mon}}}_{{\mathrm{eff}}}[k]}\,.
\label{eq:Z1} \end{eqnarray} We omit irrelevant constant terms in front of the partition function. The term $\delta(X)$ represents the gauge-fixing condition and $\Delta_{FP}(U)$ is the corresponding Faddeev-Popov determinant. As we have discussed above, the MA gauge fixing condition is given by the maximization of the functional~\eq{R}, and therefore the local condition $X=0$, implied in Eq.~\eq{eq:Z1}, is used here as a formal simplified notation. Numerically, the monopole action of the $3D$ projected IR monopole clusters can be determined using the inverse Monte--Carlo method~\cite{shiba:condensation}. The action is represented in a truncated form~\cite{shiba:condensation,chernodub} as a sum of $m$--point ($m \ge 2$) operators~$S_i$: \begin{eqnarray} S_{{\mathrm{mon}}}[K] = \sum\nolimits_i f_i S_i [K]\,, \label{eq:monopole:action} \end{eqnarray} where $f_i$ are the coupling constants. Following Ref.~\cite{IshiguroSuzuki} we adopt only the two--point interactions in the monopole action, $S_i \sim K_{i}(s) K_{j}(s')$. Similarly to the $4D$ case, we find that the monopole action of the spatially projected currents is, to a good accuracy, proportional to the length $L[K]$ of the monopole loop $K$, \begin{eqnarray} S_{{\mathrm{mon}}}[K] \backsimeq f_0 L[K] + \mathrm{const}\,. \label{eq:length:proportionality} \end{eqnarray} An important property of the monopole action is that the couplings $f_i$ are functions of the scale $b=na$, Eq.~\eq{eq:blocking}, at which the monopole charge is defined. To illustrate this fact we show the dependence of the coupling constant $f_0$ on $b = n\, a(\beta)$ in Figure~\ref{fig:f0}.
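As a sketch (the two-point kernel $f_{ij}(s-s')$ below is illustrative notation, not the paper's exact parametrization), the truncated ansatz can be written as

```latex
% Illustrative form of the truncated two-point monopole action;
% f_{ij}(s-s') is a generic coupling kernel.
\begin{equation*}
  S_{\mathrm{mon}}[K] \;=\; \sum_{s,s'}\sum_{i,j}
  f_{ij}(s-s')\, K_i(s)\, K_j(s')\,,
\end{equation*}
```

whose diagonal self-interaction part reduces, for currents with $|K_i(s)| \le 1$, to $f_0 \sum_{s,i} K_i^2(s) = f_0 L[K]$, which is the leading length proportionality of Eq.~\eq{eq:length:proportionality}.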
\begin{figure}[!thb] \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.45,clip=false]{f0_t0.8.eps} \hspace{5mm} & \includegraphics[scale=0.45,clip=false]{f0_t0.96.eps} \\ (a) & (b) \end{tabular} \end{center} \vspace{-4mm} \caption{The coefficient $f_0$ of the monopole action~\eq{eq:length:proportionality} {\it vs.} the scale parameter $b$ for the lattice sizes $L_s^3\times 6$, $L_s=48,72$ and blocking factors, $n=1 \dots 9$, at temperatures (a) $T=0.8\,T_c$ and (b) $T=0.96\,T_c$.} \label{fig:f0} \end{figure} {}From Figure~\ref{fig:f0} one observes almost perfect scaling: the coupling $f_0$ does not depend on the parameters $n$ and $a$ separately, but only on the combination $b = na$. The action lies close to the renormalized trajectory which corresponds to the continuum effective action. Moreover, the result does not depend on the spatial extension of the lattice, $L_s$. Thus the action of the spatially projected monopole currents shows scaling similar to that of the action of the unprojected monopoles~\cite{shiba:condensation,chernodub}. \section{Length distribution} \label{sec:length} The length distribution of the spatially--projected monopole clusters is shown in Figure~\ref{fig:Distr}. \begin{figure}[!htb] \begin{center} \vspace{3mm} \begin{tabular}{cc} \includegraphics[scale=0.45,clip=true]{distr_n1.eps} \hspace{5mm} & \includegraphics[scale=0.45,clip=true]{distr_n2.eps} \\ (a) & (b) \end{tabular} \end{center} \vspace{-3mm} \caption{The distributions of the spatial monopole currents at various temperatures. For the low temperatures, $T < T_c$, the UV--part of the distributions is not shown.} \label{fig:Distr} \end{figure} In the confinement phase, $T<T_c$, only the infrared part of the distribution is shown. One can see that the length (in physical units) of the monopole trajectory belonging to the percolating cluster becomes shorter as the temperature increases.
This fact is expected because the monopole condensate is ``evaporating'' as the temperature increases towards the transition point, and, therefore, the infrared part of the monopole currents should become more and more diluted. At $T>T_c$ the percolating cluster of the spatially projected currents disappears and, consequently, the confinement of quarks is lost. The behavior of the elementary and blocked currents is qualitatively the same. According to Ref.~\cite{IshiguroSuzuki} the length of the $4D$ IR monopole currents in the finite volume $V$ is distributed with the Gaussian law, which in the finite-temperature case can be formulated as follows: \begin{eqnarray} D^{IR}(L) \propto \exp\{ - \alpha(b,V) L^2 + \gamma(b,T) L\}\,. \label{eq:IR:distr:two} \end{eqnarray} The length distribution function, $D(L)$, is proportional to the weight with which a particular trajectory of length $L$ contributes to the partition function. In Eq.~\eq{eq:IR:distr:two} we neglect a power-law prefactor, $1/L^\tau$ with $\tau \sim 3$, which is essential only for the distribution of the ultraviolet clusters. The Gaussian form of the distribution \eq{eq:IR:distr:two} means that the clusters have the typical length \begin{eqnarray} L_{max} = \frac{\gamma(b,T)}{2 \, \alpha(b,V)}\,, \label{eq:Lmax} \end{eqnarray} where $V$ is the three--dimensional volume. The coefficient $\alpha$ plays the role of an infrared cut--off which emerges due to the finite volume. In other words, the length of the monopole trajectory in an infrared cluster is restricted by the lattice boundary. However, since the cluster is infrared, the length of the monopole trajectory in this cluster must be proportional to the total volume, $L_{max} \propto V$. The linear part of the distribution~\eq{eq:IR:distr:two} gets contributions from the monopole action and the monopole entropy (we discuss this issue below). Therefore the coefficient $\gamma$ should not depend on the volume in the thermodynamic limit.
Thus, we expect \begin{eqnarray} \alpha(b,V) = A(b) \slash V\,, \label{eq:A} \end{eqnarray} where $A(b)$ is a certain function of the scale parameter $b$. One may expect that the parameter $A$ does not significantly depend on the temperature $T$, since this factor is kinematical rather than dynamical. The temperature influences the dynamical characteristics of the monopoles, such as the effective three-dimensional action. The effective monopole action contributes to the coefficient $\gamma$ and, as a consequence, the temperature influences the projected monopole density via the $\gamma$--coefficient. Using Eqs.~(\ref{eq:Lmax},\ref{eq:A}) one finds that the monopole density in the infrared cluster is finite in the thermodynamic limit and is given by the formula \begin{eqnarray} \rho_{IR} = \frac{L_{max}}{V} \equiv \frac{\gamma(b,T)}{2 \, A(b)}\,. \label{eq:density} \end{eqnarray} We fit the numerically obtained distributions of the $3D$ projected currents by the function~\eq{eq:IR:distr:two} and then use a bootstrap method\footnote{A detailed description of the corresponding bootstrap method is given in Ref.~\cite{IshiguroSuzuki}.} to estimate the statistical errors of the fitting parameters. In Figure~\ref{fig:gamma} \begin{figure}[!htb] \begin{center} \vspace{3mm} \begin{tabular}{cc} \includegraphics[scale=0.45,clip=true]{g_t0.8.eps} \hspace{5mm} & \includegraphics[scale=0.45,clip=true]{g_t0.96.eps} \\ (a) & (b) \end{tabular} \end{center} \vspace{-3mm} \caption{The same as in Figure~\ref{fig:f0} but for the coefficient $\gamma$ of the distribution~\eq{eq:IR:distr:two}.} \label{fig:gamma} \end{figure} we show the coupling constant $\gamma(b,T)$ as a function of the scale parameter, $b$, at temperatures $T=0.8 \, T_c$ and $T = 0.96 \, T_c$. Again, as in the case of the parameter $f_0$, Figure~\ref{fig:f0}, we observe the volume independence and the $b$--scaling of the results.
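As a short consistency check of the formulas above: maximizing the exponent of the Gaussian distribution~\eq{eq:IR:distr:two} and substituting Eq.~\eq{eq:A} gives

```latex
% Elementary check: the typical length follows from maximizing the
% exponent of the Gaussian ansatz, and Eq. (eq:A) removes the volume.
\begin{equation*}
  \frac{d}{d L}\Bigl(-\alpha L^2 + \gamma L\Bigr) = 0
  \;\;\Longrightarrow\;\;
  L_{max} = \frac{\gamma(b,T)}{2\,\alpha(b,V)}
          = \frac{\gamma(b,T)}{2\,A(b)}\, V\,,
\end{equation*}
```

so that $\rho_{IR} = L_{max}/V = \gamma/(2A)$ is indeed volume independent, in agreement with Eq.~\eq{eq:density}.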
In a small $b$--region we find that $\gamma \propto b^\eta$ with $\eta \sim 3$ for low temperatures, $T \sim 0.5 T_c$, whereas $\eta \sim 2$ for $T \to T_c$. The data show good $b$--scaling and are also independent of the volume, similarly to the monopole action. The numerical values of the parameter $A$ are shown in Figure~\ref{fig:A}. The parameter $A$ is independent of the lattice volume, indicating that in the thermodynamic limit the coefficient $\alpha$ in the Gaussian distribution~\eq{eq:IR:distr:two} vanishes. \begin{figure}[!htb] \begin{center} \vspace{3mm} \begin{tabular}{cc} \includegraphics[scale=0.45,clip=true]{a_t0.8.eps} \hspace{5mm} & \includegraphics[scale=0.45,clip=true]{a_t0.96.eps} \\ (a) & (b) \end{tabular} \end{center} \vspace{-3mm} \caption{The same as in Figure~\ref{fig:f0} but for the ratio~$A(b)$, Eq.~\eq{eq:A}.} \label{fig:A} \end{figure} Using Eq.~\eq{eq:density} we calculate the monopole density corresponding to the infrared cluster. The density is shown in Figure~\ref{fig:rho}. \begin{figure}[!htb] \begin{center} \vspace{3mm} \begin{tabular}{cc} \includegraphics[scale=0.45,clip=true]{rho_t0.8.eps} \hspace{5mm} & \includegraphics[scale=0.45,clip=true]{rho_t0.96.eps} \\ (a) & (b) \end{tabular} \end{center} \vspace{-3mm} \caption{The same as in Figure~\ref{fig:f0} but for the monopole density~$\rho$ corresponding to the infrared cluster, Eq.~\eq{eq:density}.} \label{fig:rho} \end{figure} The density diminishes as the scale factor $b$ increases, while at small $b$ it shows a plateau. As the temperature increases, the density (at a fixed value of $b$) becomes smaller. Note that the confining non--Abelian objects have a finite size (in physical units). These objects are identified as the Abelian monopoles in the Abelian gauge. The monopoles are detected using the Gauss theorem applied to the magnetic field coming out of a cube of size $b^3$.
The confining non--Abelian objects have a typical size which is associated with the size of the monopole core~\cite{ref:rmon}, $r_{mon}\approx 0.05$~fm. If $b < r_{mon}$ then the monopole cube is too small to detect the charge of the much larger monopole, and the monopole density -- measured using the Gauss law -- is vanishingly small. Indeed, one can see that the monopole density in Figure~\ref{fig:rho} has a tendency to diminish at smaller $b \sqrt{\sigma} \lesssim 0.1$. \begin{figure}[!thb] \begin{center} \vspace{3mm} \begin{tabular}{cc} \includegraphics[scale=0.45,clip=true]{gamma_l48_n1.eps} \hspace{5mm} & \includegraphics[scale=0.45,clip=true]{gamma_l48_n2.eps} \vspace{-2mm}\\ (a) & (b) \vspace{5mm} \\ \includegraphics[scale=0.45,clip=true]{gamma_l48_n3.eps} \hspace{5mm} & \includegraphics[scale=0.45,clip=true]{gamma_l48_n4.eps} \vspace{-2mm}\\ (c) & (d) \end{tabular} \end{center} \vspace{-3mm} \caption{The coefficient $\gamma$ {\it vs.} temperature $T$ for various temporal extensions of the lattice and for various blocking factors $n$.} \label{fig:gamma:T} \end{figure} \begin{figure}[!htb] \begin{center} \vspace{3mm} \begin{tabular}{cc} \includegraphics[scale=0.45,clip=true]{gamma.fits.48xx3x6.eps} \hspace{5mm} & \includegraphics[scale=0.45,clip=true]{gamma.fits.48xx3x8.eps} \\ (a) & (b) \end{tabular} \end{center} \vspace{-3mm} \caption{Examples of the fits of the $\gamma$ parameter as a function of the temperature.} \label{fig:gamma:fits} \end{figure} At the critical temperature, $T=T_c$, the $4D$ IR monopole cluster disappears and we expect a similar behavior for the $3D$ projected IR cluster. This implies that the parameter $\gamma(b,T)$ must become a (non-local) {\it order parameter}: it must vanish at the critical point.
One can reach this conclusion by noticing that the parameter $\gamma(b,T)$ is proportional to the monopole density~\eq{eq:density} (which vanishes at $T=T_c$), and that the factor $A$ is unlikely to be divergent at the critical temperature (as can also be deduced from Figure~\ref{fig:A}). We show that the quantity $\gamma$ is indeed an order parameter in Figure~\ref{fig:gamma:T}(a), which depicts $\gamma$ for elementary ($n=1$) monopoles as a function of temperature for various temporal extensions of the lattice. The behavior of the $\gamma$-parameter in the vicinity of the phase transition point depends on the value of the temporal extension $L_t$. However, the vanishing of the $\gamma$--parameter at the critical temperature, $\gamma(T \to T_c) \to 0$, is universal with respect to $L_t$. Moreover, one can observe in Figures~\ref{fig:gamma:T}(b,c,d) -- which correspond to the extensions $n=2,3,4$, respectively -- that the $\gamma$-coefficient for the extended monopoles also vanishes at $T=T_c$. To characterize the critical behavior of the parameter $\gamma$ in the vicinity of the phase transition we have fitted this parameter by the function \begin{eqnarray} \gamma^{\mathrm{fit}}(b,T) = C_\gamma \cdot \Bigl( 1 - \frac{T}{T_c}\Bigr)^\delta\,, \qquad T<T_c\,, \label{eq:delta:fit} \end{eqnarray} where $\delta$ and $C_\gamma$ are the fitting parameters. We performed the fits for various lattices $L_s^3\times L_t$ and extensions $n$. The results for the ``critical exponent'' $\delta$ are shown in Table~\ref{tbl:delta}.
\begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{$\delta$} \\ \hline $\ n\ $& $\ 48^3\times6\ $ & $\ 48^3\times8\ $ & $\ 72^3\times8\ $ \\ \hline 1 & 0.64(15) & 0.76(2) & -- \\ 2 & 0.62(8) & 0.70(16) & 0.48(3) \\ 3 & 0.34(6) & 0.55(7) & 0.30(2) \\ 4 & 0.22(2) & 0.36(6) & 0.18(3) \\ 6 & 0.11(2) & 0.20(2) & -- \\ \hline \end{tabular} \end{center} \caption{The ``critical exponents'' $\delta$ -- obtained with the help of the fit~\eq{eq:delta:fit} -- for various lattices $L_s^3\times L_t$ and extensions $n$.} \label{tbl:delta} \end{table} {}From this table one notices that the quantity $\delta$ is not universal: it depends not only on the extension of the monopole blocking but also on the lattice volume. Moreover, the larger the extension $n$, the steeper the behavior of $\gamma$ in the vicinity of the phase transition. One should add a word of caution here. The fit results shown in Table~\ref{tbl:delta} depend crucially on the $T/T_c = 0.98,0.99$ points (as one can see from Figures~\ref{fig:gamma:fits}(a),(b)), which are very close to the phase transition. Since the transition is of second order, the finite--volume effects must be strong and the fit results may be quantitatively inaccurate (although the results presented in Figures~\ref{fig:gamma:fits}(a),(b) must be qualitatively correct). \section{Monopole entropy} \label{sec:entropy} Apart from the finite--volume effect, the distribution~\eq{eq:IR:distr:two} has contributions from the energy and the entropy. As seen above, the action contribution is proportional to $e^{- f_0 L}$. The entropy contribution is proportional to $\mu^L$ (with $\mu>0$) for sufficiently large monopole lengths, $L$. Thus, the entropy factor, $\mu$, is \begin{eqnarray} \mu = \exp\{f_0 + \gamma\}\,. \label{eq:mu} \end{eqnarray} We determine the entropy using Eq.~\eq{eq:mu}.
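Equation~\eq{eq:mu} follows from matching the linear term of the measured distribution~\eq{eq:IR:distr:two} to the product of the Boltzmann and entropy factors (the quadratic, volume-suppressed term plays no role here):

```latex
% Matching the linear term of the length distribution to the
% energy-entropy balance exp(-f_0 L) * mu^L.
\begin{equation*}
  \mu^{L}\, e^{-f_0 L} \;\propto\; e^{\gamma L}
  \;\;\Longrightarrow\;\;
  \ln \mu - f_0 = \gamma
  \;\;\Longrightarrow\;\;
  \mu = \exp\{f_0 + \gamma\}\,.
\end{equation*}
```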
The numerical results for the entropy factor $\mu(b,T)$ are shown in Figure~\ref{fig:entropy} for various temperatures, lattice volumes and blocking factors. \begin{figure}[!htb] \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.47,clip=true]{entropy_t0.6.eps} \hspace{5mm} & \includegraphics[scale=0.47,clip=true]{entropy_t0.8.eps} \\ (a) & (b) \vspace{7mm} \\[2mm] \includegraphics[scale=0.47,clip=true]{entropy_t0.9.eps} \hspace{5mm} & \includegraphics[scale=0.47,clip=true]{entropy_t0.96.eps} \\ (c) & (d) \vspace{7mm} \\[2mm] \includegraphics[scale=0.47,clip=true]{entropy_t0.98.eps} \hspace{5mm} & \includegraphics[scale=0.47,clip=true]{entropy_t0.99.eps} \\ (e) & (f) \end{tabular} \end{center} \caption{Entropy factor of the spatially projected monopole currents as a function of the scale $b$ at various temperatures.} \label{fig:entropy} \end{figure} One can see that the entropy factor $\mu$ scales as a function of $b$, as expected. In order to understand the meaning of the data shown in Figure~\ref{fig:entropy}, we note that if the monopoles are randomly walking on a $3D$ hypercubic lattice, then we should get a definite value for the entropy factor, $\mu=5$. This is because at each site there exist five choices for the monopole current to go further (the current cannot immediately retrace the link it arrived along). One can see that far from the phase transition, $T \lesssim 0.96 T_c$, the entropy factor $\mu$ indeed tends to the $\mu=5$ plateau at moderately small values of $b \sim 0.4 \dots 1$. At yet smaller $b$ the entropy gets larger than the random-walk value, $\mu>5$, because in this region the inverse Monte-Carlo method with the truncated quadratic monopole action does not work well~\cite{chernodub}. Thus, the value of the constant $f_0$ -- defined in Eq.~\eq{eq:length:proportionality} -- cannot be obtained correctly. At large $b$ the entropy factor drops as $b$ increases. This feature is independent of the temperature.
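The random-walk value quoted above can be made explicit: on a $3D$ cubic lattice a current that never immediately retraces its last step has five continuations at each site, so the number of such trajectories of length $L$ grows as

```latex
% Non-backtracking random walk on a 3D cubic lattice: 6 choices for
% the first step, 5 for each subsequent one.
\begin{equation*}
  N(L) \;\sim\; 6 \cdot 5^{\,L-1} \;\sim\; \mu^{L}\,,
  \qquad \mu = 5\,,
\end{equation*}
```

which is the plateau value observed at $T \lesssim 0.96\,T_c$ and moderately small $b$.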
In the zero temperature case~\cite{IshiguroSuzuki} the entropy factor $\mu$ approaches unity in the $b \to \infty$ limit. This feature is difficult to observe in our data since information about the entropy at large values of the blocking size $b$ is not available. \section{Conclusion} The distributions of the spatially--projected infrared monopole currents of various blocking sizes, $n$, were studied on lattices with different spacings, $a$, and volumes, $L_s^3\times L_t$. We find that the distributions can be described by a Gaussian ansatz with good accuracy. The ansatz contains two important terms: (i) the linear term, which carries the information about the energy and entropy of the monopole currents; and (ii) the quadratic term, which appears due to the finite volume and which suppresses large infrared clusters. The linear term is independent of the lattice volume while the quadratic term is inversely proportional to the volume. Moreover, the linear term is a (non--local) order parameter for the deconfinement phase transition. To get the entropy of the spatially--projected currents we studied the action of the monopoles belonging to the infrared monopole clusters of the spatially--projected currents using an inverse Monte-Carlo method. We show that the entropy factor has a plateau at sufficiently small values of $b$ and at $T \lesssim 0.96 T_c$. A reason for the temperature restriction of our result is that our analysis may not be valid close to the second order phase transition point at $T \approx T_c$, because of the increase of the correlation length (and, consequently, because of strong finite-volume effects). At $b \gtrsim 1$ the entropy is a decreasing function of $b=n a$, indicating that the effective degrees of freedom of the projected and blocked monopoles become fewer as the blocking scale $b$ increases.
This effect is very similar to the zero temperature case, in which the monopole motion corresponds to the classical picture: the monopole with a large blocking size $b$ becomes a macroscopic object and the motion of such a monopole gets close to a straight line. \clearpage \begin{acknowledgments} M.N.Ch. is supported by grants RFBR 04-02-16079, MK-4019.2004.2 and by JSPS Grant-in-Aid for Scientific Research (B) No.15340073. T.S. is partially supported by JSPS Grant-in-Aid for Scientific Research on Priority Areas No.13135210 and (B) No.15340073. This work is also supported by the Supercomputer Project of the Institute of Physical and Chemical Research (RIKEN). A part of our numerical simulations has been done using NEC SX-5 at the Research Center for Nuclear Physics (RCNP) of Osaka University. M.N.Ch. is grateful to the members of the Institute for Theoretical Physics of Kanazawa University for the kind hospitality and stimulating environment. \end{acknowledgments}
\chapter{Absolutely cartesian squares}\label{ch:abscart} \bigskip Let $I$ be a small indexing category with initial object $\emptyset$ and final object 1. A diagram $\mathscr{D}$ in a category $C$ is a functor $I \rightarrow C$; we restrict ourselves here to $C$ being spaces. This diagram is cartesian \footnote{In keeping with conventions of Goodwillie's calculus of functors, we only deal with homotopy cartesian diagrams, so omit the ``homotopy'' modifier.} when $\mathscr{D} (\emptyset )$ is equivalent to the homotopy limit of $\mathscr{D}$ over $I$ with $\emptyset$ removed, denoted $\holim_{I_\emptyset} \mathscr{D}$ or $\holim_{\emptyset} \mathscr{D}$ when $I$ is clear from context. Similarly, $\mathscr{D}$ is cocartesian if $\mathscr{D}(1)$ is equivalent to the homotopy colimit over $I$ with the final object removed, denoted $\hocolim_{I^1} \mathscr{D}$; as in the cartesian case, the $I$ subscript is omitted if clear from context and we write $\hocolim_1$. A functor $F$ is a homotopy functor if it is weak-equivalence-preserving. We call a diagram $\mathscr{D}$ absolutely (co)cartesian if $F(\mathscr{D})$ is homotopy (co)cartesian for all homotopy functors $F$. Note that a diagram is an $(n+1)$-cube if it is indexed by $I=\mathscr{P}([n])$, the powerset on $[n]=\{0,1,\ldots, n\}$. \section{Statements of Results and Conjectures} We prove the following classification theorem for absolutely cartesian squares. \begin{thm}\label{thm:abscartsq} A square of spaces is absolutely cartesian if and only if it is a map of two absolutely cartesian 1-cubes.
That is, of the following form (the other two maps may also be equivalences): \[ \xymatrix{ A \ar[r]^{\sim}\ar[d] & B\ar[d]\\ C \ar[r]^{\sim} & D\\ } \] \end{thm} \medskip Theorem \ref{thm:abscartsq} is the base case of the following conjecture: \begin{conj}\label{conj1} An $n$-cube of spaces is absolutely cartesian if and only if it can be written as either a map of two absolutely cartesian $(n-1)$-cubes or a chain of compositions of $n$-cubes of these types. \footnote{There should be some way to express this as the cubes being ``generated by'' those built out of absolutely cartesian squares.} \end{conj} It should be clear that building up an $n$-cube inductively as maps of these absolutely cartesian squares and compositions of such cubes will yield an absolutely cartesian $n$-cube, which is the $\Leftarrow$ direction of the if and only if. To be clear, two cubes $\mathscr{C,D}$ may be composed if they can be written $\mathscr{C}: X\rightarrow Y$ and $\mathscr{D}:Y \rightarrow Z$; their composition is then $\mathscr{C}\circ \mathscr{D} :X \rightarrow Z$. Geometrically, this looks like ``gluing'' the cubes along their shared face. We give an example in the next section. By a chain of compositions, we mean compositions of possibly more than two cubes, e.g. $\mathscr{C}\circ \mathscr{D}\circ \mathscr{E}$ where $\mathscr{C,D,E}$ are all $n$-cubes built up inductively from maps of absolutely cartesian squares. It is not yet certain whether the other direction is true. We may observe that the absolutely cartesian squares are also absolutely cocartesian. Thus, we make an additional conjecture: \begin{conj}\label{conj2} An $n$-cube is absolutely cartesian if and only if it is absolutely cocartesian. \end{conj} If we include contravariant functors, we can show this conjecture for $n=2$, and we will comment on this after the proof for cartesian squares, which is in the following section.
We will present partial results towards Conjecture \ref{conj2} in section \ref{sec:conj2}; this includes a positive verification of the conjecture when restricting to functors which land in 1-connected spaces (including the identity functor implies that the spaces in the diagram must themselves be 1-connected). The section after that is about a family of 3-cubes which are absolutely cocartesian and cartesian, and which are not expressible as a map of two absolutely cartesian squares, but only as a composition of 3-cubes of that form. We end with applications and related work. \section{Proof of Classification for Squares} \begin{proof}[Proof of Theorem \ref{thm:abscartsq}]\footnote{The current form (and brevity) of this proof is influenced heavily by conversations between the author and Tom Goodwillie about developing a clearer route towards attacking the more general conjecture.} This relies on switching briefly to the setting of spectra and using this to deduce properties of the original diagram of spaces. We also point out that it suffices to prove that either $B \rightarrow D$ or $C \rightarrow D$ is an equivalence, since equivalences are stable under homotopy pullback. That is, it implies that the mirroring map, $A\rightarrow C$ or $A\rightarrow B$, is also an equivalence. Consider an absolutely cartesian square of spaces: \[ \xymatrix{ A \ar[r] \ar[d]& B \ar[d]\\ C \ar[r] & D\\ } \] Now apply the functor $\Sigma^{\infty} \text{Map}(D,-)$ to our square: \[ \xymatrix{ \Sigma^{\infty}\text{Map}(D, A) \ar[r] \ar[d]& \Sigma^{\infty}\text{Map}(D,B) \ar[d]\\ \Sigma^{\infty}\text{Map}(D, C) \ar[r] & \Sigma^{\infty}\text{Map}(D,D)\\ } \] By assumption, this resultant square is still cartesian. Since the square is in spectra, we know that it is also cocartesian. Recall that $\Sigma^{\infty}$ commutes with colimits.
We then have the following chain of equivalences: \medskip \[ \begin{array}{ccc} \pi_0 \Sigma^{\infty} \hocolim (\text{Map}(D,B) \leftarrow \text{Map}(D,A) \rightarrow \text{Map} (D,C)) &\simeq& \pi_0 \Sigma^{\infty} \text{Map}(D,D).\\ \parallel & & \parallel\\ H_0 (\hocolim (\text{Map}(D,B) \leftarrow \text{Map}(D,A) \rightarrow \text{Map} (D,C) )) & & H_0 (\Sigma^{\infty}\text{Map}(D,D))\\ \parallel & & \parallel\\ \mathbb{Z} [ \pi_0 (\hocolim (\text{Map}(D,B) \leftarrow \text{Map}(D,A) \rightarrow \text{Map} (D,C)) ) ] & & \mathbb{Z}[\pi_0 \text{Map}(D,D)]\\ \end{array} \] \medskip We can interpret this as telling us that $(\pi_0 \text{Map} (D,B) \cup \pi_0 \text{Map} (D,C))/\sim$ surjects onto $\pi_0 \text{Map} (D,D)$. Consider $id \in \text{Map}(D,D)$. This then has a preimage (up to homotopy) in $\text{Map}(D,B)$ and/or $\text{Map}(D,C)$; assume $\text{Map}(D,B)$. This gives a section $D \rightarrow B$. We can then rewrite our original diagram with our new map in the pre-image of the identity. This is Figure \ref{fig:setup}. \begin{figure}[h] \[ \xymatrix{ & D \ar[d] \ar@/^/[dd]^{id}\\ A \ar[r] \ar[d]& B \ar[d]\\ C \ar[r] & D\\ } \] \caption{New information included in diagram} \label{fig:setup} \end{figure} We can add the homotopy pullback of $(A \rightarrow B \leftarrow D)$ to the diagram. Then the whole diagram is a pullback, being a composition of pullback squares. This lets us pull back the identity map, as in Figure \ref{fig:pullbackid}. \begin{figure}[h!] \[ \xymatrix{ \ar@/_/[dd]_{id}C\ar[r]\ar[d] & D \ar[d] \ar@/^/[dd]^{id}\\ A \ar[r] \ar[d]& B \ar[d]\\ C \ar[r] & D\\ } \] \caption{Adding the pullback of the top punctured square and pulling back the identity} \label{fig:pullbackid} \end{figure} The whole diagram is itself absolutely cartesian (having two facing maps which are equivalences). Since the bottom and entire squares are both absolutely cartesian, so is the top square, shown again in Figure \ref{fig:top1}. \begin{figure}[h!]
\[ \xymatrix{ C\ar[r]\ar[d] & D \ar[d]\\% \ar@/^/[dd]^{id}\\ A \ar[r] & B\\% \ar[d]\\ } \] \caption{``top'' square} \label{fig:top1} \end{figure} Now that the top square is known to be absolutely cartesian, we can proceed in the same way as we did with the original square, and obtain a section from $B$ to $D$ or $A$. If the section is to $D$, we are done, as we already have a splitting from $D$ to $B$, and having another in the other direction gives us an equivalence between $B$ and $D$. Otherwise, we work in the other direction. We add our section $B \rightarrow A$ to our diagram, in Figure \ref{fig:wBsec}, shown without the other equivalences. \begin{figure}[h] \[ \xymatrix{ & C \ar[d]\ar[r] & D\ar[d]\\ B \ar@/_/[rr] \ar[r] & A\ar[d] \ar[r] & B\ar[d]\\ & C \ar[r] & D\\ } \] \caption{Adding the section $B\rightarrow A$} \label{fig:wBsec} \end{figure} Then we pull back the upper left square. The square comprised of the upper left and right squares together is then a cartesian square, with bottom map an equivalence. These are stable under pullback, meaning that the identity map $B\rightarrow B$ is pulled back, this time to the ``top''. Thus we know the pullback of the left square is equivalent to $D$, so the top two squares are as in Figure \ref{fig:pbBD}. This implies that the left square is also absolutely cartesian, as the entire and the right ones are. \begin{figure}[h] \[ \xymatrix{ D\ar[r]\ar[d]\ar@/^/[rr]^{id}& C \ar[d]\ar[r] & D\ar[d]\\ B \ar[r]\ar@/_/[rr]_{id}& A \ar[r] & B\\ } \] \caption{Pulled back identity to top of diagram} \label{fig:pbBD} \end{figure} Then we return to $A$, and the (now) absolutely cartesian square in the left of Figure \ref{fig:pbBD}. In the same way as before, we get a section $A \rightarrow C$ or $A\rightarrow B$.
If $A\rightarrow B$ is a section, we are done -- the one-sided inverse (our original section) has an inverse on the other side and $A\overset{\simeq}{\rightarrow} B$, which implies immediately that $D\simeq C$ since it occurs in the cartesian diagram on the left side of Figure \ref{fig:pbBD}. If $A \rightarrow C$ instead is a section, we are also done. This is because the map $C\rightarrow A$ was a section obtained earlier. We conclude $A\overset{\simeq}{\rightarrow} C$, which implies immediately that $D\simeq B$ since it occurs in the cartesian diagram on the left side of Figure \ref{fig:pbBD} and equivalences are pulled back. \if false \begin{figure}[h] \[ \xymatrix{ C \ar[r]\ar[d] & D\ar[d]\\ A \ar[r]^{\sim} & B\\ } \] \caption{Two splittings giving our equivalence} \label{fig:topcart} \end{figure} \begin{figure}[h!] \[ \xymatrix{ D \ar[r] \ar[d]& C\ar[d]^{\sim}\\ B \ar[r] & A } \] \caption{Upper Left cartesian square} \label{fig:ulcart} \end{figure} \fi \end{proof} \begin{rem} For absolutely \textbf{cocartesian} squares, if we allow our homotopy functors to be possibly contravariant, then we can establish that they are of the same form as absolutely cartesian squares. The proof is parallel to that for cartesian squares, with the functor $\Sigma^\infty \text{Map}(D,-)$ replaced by $\Sigma^\infty \text{Map}(-, A)$. \end{rem} \section{Partial results for Conjecture \ref{conj2}}\label{sec:conj2} As observed by the anonymous reviewer, the following weakened form of the conjecture already holds: \begin{prop}\label{prop:rev} If one restricts to $n$-cubes of 1-connected spaces and homotopy functors which take values in 1-connected spaces, Conjecture \ref{conj2} holds. The direction (absolutely cocartesian implies absolutely cartesian) holds under the weaker condition of nilpotent\footnote{A space $X$ is nilpotent when $\pi_1(X)$ is a nilpotent group. 1-connected spaces are trivially nilpotent.} spaces and functors taking values in nilpotent spaces.
\end{prop} \begin{proof}[Proof] This also follows the reviewer.\\ \begin{enumerate} \itemsep 5pt \item \textbf{Functors with nilpotent target, abs cocartesian $\Rightarrow$ abs cartesian}. Let $\mathcal{X}$ be absolutely cocartesian. The functor $\Sigma^\infty$ from Spaces to Spectra preserves cocartesianness, and in Spectra, diagrams are cocartesian iff cartesian. $\Omega^\infty$ from Spectra to Spaces preserves cartesianness, so $Q\mathcal{X} := \Omega^\infty \Sigma^\infty \mathcal{X}$ is cartesian, in addition to remaining cocartesian. Repeated applications of $Q$ will clearly retain this property; that is, $QQ\cdots Q\mathcal{X}$ will be cartesian and cocartesian. As $\Omega^\infty$ and $\Sigma^\infty$ are an adjoint pair and $Q$ the associated monad\footnote{Also referred to as a ``triple''.}, there is an associated cosimplicial ``$Q$-completion'' (a.k.a.\ $\mathbb{Z}$-nilpotent completion) for any space. For a space $X$, the $Q$-completion of $X$ is the homotopy limit of the cosimplicial space which arises naturally from iterating the monadic maps $X \rightarrow Q(X)$ and $QQX \rightarrow QX$. \[ \xymatrix{QX \ar@<3pt>[r]\ar@<-3pt>[r]& \ar[l] QQX \ar[r]\ar@<6pt>[r]\ar@<-6pt>[r]& \ar@<-3pt>[l]\ar@<3pt>[l]\cdots } \] The same line of reasoning holds with $\mathcal{X}$ replaced by $F(\mathcal{X})$ (since $F(\mathcal{X})$ is also absolutely cocartesian), so $\mathcal{X}$ is absolutely cartesian. \item \textbf{Functors with 1-connected target, absolutely cartesian $\Rightarrow$ absolutely cocartesian}. Let $\mathcal{X}$ be absolutely cartesian. Then $F(\mathcal{X})$ and $\Sigma^\infty F(\mathcal{X})$ for all hofunctors $F: \Top \rightarrow \Top$ are also cartesian; in particular, $\Sigma^\infty F(\mathcal{X})$ is also cocartesian. Since $F$ takes values in 1-connected spaces, this is sufficient to conclude that $F(\mathcal{X})$ itself is cocartesian. This holds for all hofunctors $F$, so $\mathcal{X}$ is absolutely cocartesian.
\end{enumerate} \end{proof} It was also pointed out by Goodwillie in a discussion with the author that \begin{prop}\label{prop:tom} If (absolutely cocartesian $\Rightarrow$ absolutely cartesian), then (absolutely cartesian $\Rightarrow$ absolutely cocartesian). \end{prop} \begin{proof}[\textbf{Proof Sketch:}] Let $\mathcal{X}$ be absolutely cartesian. Then for all hofunctors $F,G$ and $A, B$ in the appropriate categories, $\text{Map} (F(\text{Map} (G(\mathcal{X}), A)), B)$ is also cartesian. Unwrapping the dependencies and keeping in mind that $\text{Map}( -, Y)$ takes cocartesian to cartesian, we get that $\text{Map}(G(\mathcal{X}),A)$ is absolutely cocartesian. Apply our hypothesis and the fact that $\text{Map}( -, Y)$ takes cocartesian to cartesian to conclude that $\mathcal{X}$ is also absolutely cocartesian. \end{proof} \section{An Absolutely Cocartesian and Cartesian 3-cube} The original form of Conjecture \ref{conj1} was as follows: \begin{quote} \textit{An $(n+1)$-cube of spaces $\mathcal{X}$ is absolutely cartesian iff there are absolutely cartesian $n$-cubes $Y, Z$ such that $\mathcal{X}: Y \rightarrow Z$.} \end{quote} \medskip This was corrected to the current form of the conjecture due to the following illustrative example: a cube which may be expressed as the \textit{composition} of two cubes of the aforementioned type without being a map of two such absolutely (co)cartesian squares.
Given maps $A\rightarrow B\rightarrow D\rightarrow B\rightarrow C$ with the condition that $B\rightarrow D \rightarrow B$ is equivalent to the identity, the following 3-cube may be assembled:\footnote{We thank the referee for this example, which made it clear that we need to include not just \textit{maps} of $(n-1)$-cubes.} \[ \scalebox{.75}{$ \xymatrix{ &A \ar[rr] \ar[dd]& & C\ar@{=}[dd] \\ A \ar@{=}[ur] \ar[rr] \ar[dd]& & B \ar[dd]\ar[ur]\\ % &B \ar[rr]& & C \\ D \ar@{=}[rr]\ar[ur] & & D\ar[ur]\\ }$} \] Now that we have more complicated diagrams, we have chosen to denote equivalences by equality so that it is clear which maps are equivalences. It is possible to first establish absolute cartesianness and cocartesianness independently of the decomposition, but this is superfluous once we have the decomposition. We will therefore just provide the decomposition. \if false We will first establish absolute cartesianness and cocartesianness independent of the decomposition, and then give the decomposition. \subsection{Establishing Absolute (co)Cartesianness} While an $n$-cube is cartesian if and only if its $(n-1)$ cubes of fibers (for all choices of point to fiber over) are cartesian, there is only a partial dual to this statement. If we know an $n$-cube is cocartesian, then it will follow that its $(n-1)$ cofiber cubes will also be cocartesian. \textit{However}, there are cubes with cocartesian cofiber cubes which are not themselves cocartesian. This boils down to the difference between cartesian squares and cocartesian squares; the former induces a long exact sequence in homotopy groups and the latter only in (co)homology groups. We are not totally bereft of tools to check for cocartesianness. What is perhaps the most common is the ``Covering Lemma''\cite[Prop 0.2 and its dual]{GC2}, which we will state a restricted version of for cubical diagrams: \begin{lem}\label{lem:cover} Let $\mathscr{D}$ be an $n$-cubical diagram (of spaces or spectra).
It may be expressed as a map of $(n-1)$ cubes in $n$ ways. For any such expression, $\mathscr{D}: X \rightarrow Y$, $\mathscr{D}$ is cartesian precisely when \[ \xymatrix{ X_\emptyset \ar[r]\ar[d] & \holim_\emptyset X\ar[d]\\ Y_\emptyset \ar[r] & \holim_\emptyset Y } \] is cartesian; $X_\emptyset$ denotes the initial object of the diagram $X$ (and likewise for $Y$) . $\mathscr{D}$ is cocartesian precisely when \[ \xymatrix{ \hocolim_1 X \ar[r] \ar[d] & X_{[n-1]}\ar[d]\\ \hocolim_1 Y \ar[r] & Y_{[n-1]}\\ } \] is cocartesian; here, $X_{[n-1]}$ is the final element of the diagram $X$ (and similarly for $Y$). \end{lem} This lemma allows us to determine cartesianness (and cocartesianness) by considering it as a map of two squares $X \rightarrow Y$ (possibly in two ways, one for cartesianness and one for cocartesianness). \medskip For cartesianness, we consider the 3-cube as a map \[ \xymatrix{ A \ar[r] \ar[d] & B \ar[d]\\ D \ar[r]^{\simeq} & D\\ } \hspace{1 cm} \Rightarrow \hspace{1 cm} \xymatrix{ A \ar[r] \ar[d] & C \ar[d]^{\simeq}\\ B \ar[r]& C\\ }. \] Let $X$ denote the source square and $Y$ the target square. The 3-cube is cartesian then if the following \[ \xymatrix{ A=X_\emptyset \ar[r]\ar[d] & (\holim_\emptyset X) \simeq B \ar[d]\\ A=Y_\emptyset \ar[r] & (\holim_\emptyset Y) \simeq B\\ } \] is cartesian. The property that composition the maps $B \rightarrow D \rightarrow B$ is the identity is what gives us that the induced map $B\simeq(\holim_\emptyset X) \rightarrow (\holim_\emptyset Y) \simeq B$ is also equivalent to the identity. It is clear that this is true for any homotopy functor $F$ we apply, except the square will be of $F(A)$ and $F(B)$. \medskip For cocartesianness, we consider the 3-cube as a map \[ \xymatrix{ A \ar[r] \ar[d]^{\simeq} & C \ar[d]\\ A \ar[r]& B\\ } \hspace{1 cm} \Rightarrow \hspace{1 cm} \xymatrix{ B \ar[r] \ar[d] & C \ar[d]\\ D \ar[r]^{\simeq}& D\\ }. \] Let $W$ denote the source square and $Z$ the target square. 
The 3-cube is cocartesian then if the following \[ \xymatrix{ (\hocolim_1 W) \simeq B \ar[d]\ar[r] & W_{[1]}=C\ar[d]\\ (\hocolim_1 Z) \simeq B \ar[r]& Z_{[1]}=C\\ } \] is cocartesian. In this case, the maps $B \rightarrow D \rightarrow B$ composing to the identity is what gives us that the map $B\simeq (\hocolim_1 W)\rightarrow(\hocolim_1 Z) \simeq B$ is an equivalence. It is again clear that this is true for any homotopy functor $F$ we apply, except the square will be of $F(B)$ and $F(C)$. This 3-cube is not written as a map of two absolutely (co)cartesian squares, but it has just been shown to be absolutely cocartesian and cartesian. \fi \subsection{Factorization} Despite not being a map of two absolutely (co)cartesian squares, the 3-cube\footnote{This factorization is related to one pointed out by Tom Goodwillie.} may be expressed as a composition of two 3-cubes which \textit{are} of that form. This relies on the ability to express $B$ as a retract of $D$. We compose the cube \[ \scalebox{.75}{$ \xymatrix{ &A \ar[rr] \ar[dd]& & C\ar@{=}[dd] \\ A \ar@{=}[ur] \ar[rr] \ar[dd]& & B \ar@{=}[dd]\ar[ur]\\ % &B \ar[rr]& & C\\ B \ar@{=}[rr]\ar@{=}[ur] & & B\ar[ur]\\ % }$} \] with \[ \scalebox{.75}{$ \xymatrix{ % &B \ar[rr]\ar@{=}[dd]& & C\ar@{=}[dd] \\ B \ar@{=}[rr]\ar@{=}[ur]\ar[dd] & & B\ar[dd]\ar[ur]\\ % &B \ar[rr]& & C \\ D \ar@{=}[rr]\ar[ur] & & D\ar[ur]\\ }$} \] to get our original 3-cube as the total cube of the composition (`gluing' the first cube atop the second): \[ \scalebox{.75}{$ \xymatrix{ &A \ar[rr] \ar[dd]& & C\ar@{=}[dd] \\ A \ar@{=}[ur] \ar[rr] \ar[dd]& & B \ar@{=}[dd]\ar[ur]\\ % &B \ar[rr]\ar@{=}[dd]& & C\ar@{=}[dd] \\ B \ar@{=}[rr]\ar@{=}[ur]\ar[dd] & & B\ar[dd]\ar[ur]\\ % &B \ar[rr]& & C \\ D \ar@{=}[rr]\ar[ur] & & D\ar[ur]\\ }$} \] \if false and if the following is cartesian \[ \xymatrix{ } \] then the 3-cube is cartesian. If the following is cocartesian, \[ \] the 3-cube is cocartesian.
\fi \section{Applications and Related Work} We end with a few remarks on extending this and other approaches. Par\'{e}\cite{pare} studies strict colimits which are preserved by all functors, and calls such colimits absolute, a naming convention which we have chosen to follow by calling our \textit{homotopy} diagrams absolute when preserved by all homotopy functors. Street works in an enriched setting and states his results in terms of distributors\cite{street}. It is not clear at the moment how applicable their results are in this setting. The first step would be to switch to considering simplicial functors, which are (roughly) as good as homotopy functors, in order to work enriched. The original goal in classifying absolutely cartesian cubes was to get ``wrong way'' maps, from holims of cubes of one dimension to ones of a higher dimension, in a certain diagram related to the $E_1$ page of the spectral sequence associated to a cosimplicial space. These are going the wrong way inasmuch as natural maps between diagrams are usually from lower to higher dimension, which induces a map from the holim of the higher dimensional diagram to the holim of the lower dimensional diagram. A map of cubes \textit{of the same dimension}, $A \rightarrow B$, induces maps on the homotopy limits of the cubes $\holim A \rightarrow \holim B$ (also for the punctured homotopy limits, $\holim_\emptyset$). If $A$ and $B$ are $n$-cubes, with $B$ cartesian, then $\holim_\emptyset (A \rightarrow B) \simeq \holim_\emptyset A$. That is, a way to take an $n$-cube and produce an $(n+1)$-cube with equivalent (punctured) homotopy limit is to find a cartesian $n$-cube to which it maps naturally. We would also like to do these constructions only once for all homotopy functors, so the cube we are mapping to not only needs to be cartesian, but needs its cartesianness preserved by all homotopy functors.
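To spell out why a cartesian target cube gives the equivalence of punctured homotopy limits just used (a standard fact, recorded here for convenience): writing the $(n+1)$-cube as a map of $n$-cubes $A \rightarrow B$, one has
\[
\holim_{\emptyset}(A \rightarrow B) \;\simeq\; \holim\left( \holim_{\emptyset} A \longrightarrow \holim_{\emptyset} B \longleftarrow B_{\emptyset} \right),
\]
since a nonempty subset of the larger indexing poset either avoids the new coordinate (these contribute $\holim_{\emptyset} A$) or contains it (these contribute the holim of the full cube $B$, which is just its initial object $B_{\emptyset}$). When $B$ is cartesian, the right-hand leg $B_{\emptyset} \rightarrow \holim_{\emptyset} B$ is an equivalence, and pulling back an equivalence yields $\holim_{\emptyset}(A \rightarrow B) \simeq \holim_{\emptyset} A$.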
\bibliographystyle{amsplain} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{% \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\section{Introduction}\label{sec:introduction} Ranging from high-energy particle physics (Standard Model) \cite{Peskin_book, gauge2, Schwartz:2013pla} to low-temperature condensed matter physics (spin liquids, quantum Hall, high-$T_c$ superconductivity) \cite{fradkin_2013, RevModPhys.89.025003}, gauge theories constitute the baseline in our microscopic description of the universe and are a cornerstone of contemporary scientific research. Yet, capturing their many-body behavior beyond perturbative regimes, a mandatory step before experimentally validating these theories, often eludes us \cite{Review_challenges_QCD}. {A prime example is the quark-confinement mechanism in Quantum Chromodynamics (QCD), a founding pillar of the Standard Model which has been studied for almost half a century and is still at the center of current research efforts} \cite{PhysRevD.42.3520, PhysRevLett.91.171601, PhysRevB.77.045107, PhysRevX.9.021022, PhysRevD.10.2445, Alkofer_2007}. Indeed, a powerful numerical workhorse such as Monte Carlo simulation \cite{LGT3_Montecarlo,LGT4_Montecarlo, LGT5_Montecarlo}, capable of addressing discretized lattice formulations of gauge theories \cite{PhysRevD.10.2445, LGT2_PhysRevD.11.395,LGT1_RevModPhys.51.659,SusskindLatticeFermions}, struggles in highly interesting regimes, where matter fermions and an excess of charge are concerned, due to the infamous sign problem \cite{sign_problem}. In recent years, {a complementary numerical approach}, Tensor Network (TN) methods, has found increasing application in the study of low-dimensional Lattice Gauge Theories (LGT) in the Hamiltonian formulation~\cite{FrankIgnacioTNReview08,OrusTNReview}.
As tailored many-body quantum state ans\"atze, TNs are an efficient approximate entanglement-based representation of physical states, {capable of efficiently describing equilibrium properties and real-time dynamics of systems described by complex actions, where Monte Carlo simulations fail to efficiently converge}~\cite{JuthoTDVPUnified}. TN methods have achieved remarkable success in simulating LGTs in (1+1) dimensions \cite{PhysRevD.66.013002, banuls2013mass,PhysRevLett.112.201601, PhysRevLett.113.091601, PhysRevD.92.034519, kuhn2015non, PhysRevX.6.041040,PhysRevD.94.085018,PhysRevD.95.094509, PhysRevX.7.041046,PhysRevLett.118.071601,PhysRevD.96.114501,kull2017classification,2018slft.confE.230S,PhysRevD.98.074503,PhysRevD.99.014503,PhysRevLett.122.250401,Magnifico2020realtimedynamics,PhysRevD.101.054507}, and very recently they have shown potential in (2+1) dimensions \cite{PhysRevB.83.115127, TAGLIACOZZO2013160, PhysRevX.4.041024, ZOHAR2015385, PhysRevD.97.034510, 2D_QED_Felser, PhysRevD.102.074501, PhysRevX.5.011024, meurice2020tensor}. To date, due to the lack of efficient numerical algorithms to describe high-dimensional systems via TNs, no results are available regarding the realistic scenario of LGTs in three spatial dimensions. {\color{black} In this work}, we bridge this gap by numerically simulating, via TN ansatz states, an Abelian lattice gauge theory akin to (3+1) Quantum Electrodynamics (QED), at zero temperature. We show that, by using the quantum link formalism (QLM) of LGTs \cite{Horn_QLM, CHANDRASEKHARAN1997455} and an unconstrained Tree Tensor Network (TTN), we can access multiple equilibrium regimes of the model, including finite charge densities. Specifically, we analyze the ground state properties of quantum-link QED in (3+1)D for intermediate system sizes, up to 512 lattice sites.
The matter is discretized as a staggered spinless fermion field on a cubic lattice \cite{LGT2_PhysRevD.11.395}, while the electromagnetic gauge fields are represented on lattice links and truncated to a compact spin-$s$ representation. Here we present results for a non-trivial representation of the lattice gauge fields (the spin-$1$ case), with possible generalizations to higher spin requiring only a polynomial overhead in $s$. Our picture can be similarly adapted to embed non-Abelian gauge symmetries, such as those appearing in QCD \cite{PhysRevX.7.041046}. Finally, we stress that the truncation of the gauge field is a common step in quantum simulations and computations~\cite{WieseReview2013QSim, QSimZoharReznik2013, QS_LGT_1, QS_LGT_2, QS_LGT_3, QS_LGT_4, QS_LGT_5, kasper2017Qsim, Tavernelli_2020, Jordan1130}, making the presented numerical approach a landmark benchmarking and cross-verification tool for current and future experiments. By variationally approximating the lattice QED ground state with a TTN, we address a variety of regimes and questions {inaccessible before}. In the scenario with zero excess charge density, we observe that the transition between the vacuum phase and the charge-crystal phase is compatible with a second-order quantum phase transition~\cite{2D_QED_Felser}. In the limit of zero magnetic coupling, this transition occurs at negative bare masses $m_0$, but as the coupling is activated, the critical point shifts to larger, and even positive, $m_0$ values. To investigate field-screening properties, we also consider the case where two parallel charged plates are placed at a distance (a capacitor). By studying the polarization of the vacuum in the inner volume, we observe an equilibrium string-breaking effect akin to the Schwinger mechanism. Furthermore, we address the confinement problem by evaluating the binding energies of charged particle pairs pinned at specified distances.
Finally, we consider the scenario with a charge imbalance in the system, i.e.\ at finite charge density, and we characterize a regime where charges accumulate at the surface of our finite sample, analogously to a classical perfect conductor. \begin{figure}[t!] \includegraphics[width=\linewidth]{scheme_3D.pdf} \caption{ \label{fig:scheme_3D} {\bf Scheme of the three-dimensional LGT with three electric field levels (spin-$1$ compact representation).} Fermionic degrees of freedom are represented by staggered fermions on sites with different parity: on the even (odd) sites, a full red (blue) circle corresponds to a particle (antiparticle) with positive (negative) charge. As an illustrative example, a gauge-invariant configuration of matter and gauge fields with one particle and one antiparticle in the sector of zero total charge is shown. } \end{figure} \section{Results} \textbf{The model.} Hereafter, we numerically simulate, at zero temperature, the Hamiltonian of $U(1)$ quantum electrodynamics on a finite $L \times L \times L$ three-dimensional simple cubic lattice \cite{LGT2_PhysRevD.11.395,LGT1_RevModPhys.51.659,SusskindLatticeFermions}: \begin{subequations}\label{eq:H_lattice_QED} \begin{eqnarray} & \hat H & = - t \sum_{ {\color{black}\mathbf{x}}, {\color{black}\mathbf{\mu}}} \left(\hat \psi^{\dag}_{{\color{black}\mathbf{x}}} \, \hat U_{{\color{black}\mathbf{x}},{\color{black}\mathbf{\mu}}} \, \hat \psi_{{\color{black}\mathbf{x}}+{\color{black}\mathbf{\mu}}} + \text{H.c.} \right) \label{eq_line: kinetic} \\ &+& m \sum_{{\color{black}\mathbf{x}}}(-1)^{{\color{black}\mathbf{x}}} \hat \psi^{\dag}_{{\color{black}\mathbf{x}}}\hat \psi_{{\color{black}\mathbf{x}}} + \frac{{\color{black} g^2_\mathrm{e}}}{2} \sum_{ {\color{black}\mathbf{x}},{\color{black}\mathbf{\mu}}} \hat E_{{\color{black}\mathbf{x}},{\color{black}\mathbf{\mu}}}^{2} \label{eq_line: electric_and_mass} \\ &-& \frac{{\color{black} g^2_\mathrm{m}}}{2}\sum_{{\color{black}\mathbf{x}}} \left( \hat
\square_{{\color{black}\mathbf{\mu}_x}, {\color{black}\mathbf{\mu}_y}} + \hat \square_{{\color{black}\mathbf{\mu}_x}, {\color{black}\mathbf{\mu}_z}} + \hat \square_{{\color{black}\mathbf{\mu}_y}, {\color{black}\mathbf{\mu}_z}} + \text{H.c.} \right) \label{eq_line: magnetic_plaquette} \end{eqnarray} \end{subequations} \\ with ${\color{black}\mathbf{x}} \equiv \left ( i,j,k \right )$ for $0 \leq i,j,k \leq L-1$ labelling the sites of the lattice and $ \hat \square_{{\color{black} \mathbf{\mu}_\alpha}, {\color{black} \mathbf{\mu}_\beta}} = \hat U_{{\color{black}\mathbf{x}},{\color{black} \mathbf{\mu}_{\alpha}}} \hat U_{{\color{black}\mathbf{x}}+{\color{black} \mathbf{\mu}_{\alpha}},{\color{black}\mathbf{\mu}_\beta}} \hat U^{\dag}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}_\beta},{\color{black}\mathbf{\mu}_\alpha}} \hat U^{\dag}_{{\color{black} \mathbf{x}},{\color{black}\mathbf{\mu}_\beta}}$. Here we adopted the Kogut-Susskind formulation \cite{LGT2_PhysRevD.11.395}, representing fermionic degrees of freedom with a staggered spinless fermion field $\{\hat\psi_{{\color{black}\mathbf{x}}},\hat\psi^{\dag}_{{\color{black}\mathbf{x'}}}\} = \delta_{{\color{black}\mathbf{x}},{\color{black}\mathbf{x'}}}$ on lattice sites. Their bare mass $m_{\color{black}\mathbf{x}} = (-1)^{{\color{black}\mathbf{x}}}m$ is staggered, as tracked by the site parity $(-1)^{\color{black}\mathbf{x}} = (-1)^{i+j+k}$, so that fermions on even sites represent particles with positive electric charge $+q$, while holes on odd sites represent anti-particles with negative charge $-q$, as shown in Fig. \ref{fig:scheme_3D}. Charge $\hat Q$ conservation is thus expressed as global fermion number $\hat N$ conservation, since $\hat Q = \sum_{\color{black}\mathbf{x}} \left ( \hat \psi^{\dagger}_{\color{black}\mathbf{x}} \hat \psi_{\color{black}\mathbf{x}} - \frac{1-(-1)^{\color{black}\mathbf{x}}}{2} \right ) = \hat N - L^3/2$. 
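The identity $\hat Q = \hat N - L^3/2$ can be made concrete with a short counting script (an illustration only, acting on classical occupation numbers rather than the full Fock space): on an even-sided cube exactly half of the sites are odd, so the bare vacuum, with every odd site occupied, carries zero total charge, while adding a fermion on an even site (a particle) or removing one from an odd site (an antiparticle hole) shifts $Q$ by $\pm 1$.

```python
L = 4  # linear lattice size; L even, so the staggered assignment is balanced

sites = [(i, j, k) for i in range(L) for j in range(L) for k in range(L)]
parity = {x: (-1) ** sum(x) for x in sites}          # (-1)^{i+j+k}

def total_charge(occ):
    """Q = sum_x [ n_x - (1 - (-1)^x)/2 ], cf. the definition in the text."""
    return sum(occ[x] - (1 - parity[x]) // 2 for x in sites)

# Bare vacuum of staggered fermions: every odd site occupied, even sites empty.
vacuum = {x: (1 - parity[x]) // 2 for x in sites}
N = sum(vacuum.values())
assert N == L**3 // 2                     # exactly half filling
assert total_charge(vacuum) == N - L**3 // 2 == 0

# A fermion added on an even site is a particle of charge +1 ...
particle = dict(vacuum); particle[(0, 0, 0)] = 1
assert total_charge(particle) == +1

# ... and a hole on an odd site is an antiparticle of charge -1.
antiparticle = dict(vacuum); antiparticle[(1, 0, 0)] = 0
assert total_charge(antiparticle) == -1
```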
The links of the 3D lattice are uniquely identified by the pair of labels $({\color{black}\mathbf{x}},{\color{black}\mathbf{\mu}})$ where ${\color{black}\mathbf{x}}$ is any site and ${\color{black}\mathbf{\mu}}$ is one of the three positive lattice unit vectors ${\color{black}\mathbf{\mu}_x} \equiv \left (1,0,0 \right )$, ${\color{black}\mathbf{\mu}_y} \equiv \left (0,1,0 \right )$, ${\color{black}\mathbf{\mu}_z} \equiv \left (0,0,1 \right )$. The gauge fields are defined on lattice links through the pair of operators $\hat E_{{\color{black}\mathbf{x}},{\color{black}\mathbf{\mu}}}$ (electric field) and $\hat U_{{\color{black}\mathbf{x}},{\color{black}\mathbf{\mu}}}$ (unitary comparator) that satisfy the commutation relation \begin{eqnarray}\label{eq:commutation_E_U} [\hat E_{{\color{black}\mathbf{x}},{\color{black} \mathbf{\mu}}},\hat U_{{\color{black}\mathbf{x'}}, {\color{black} \mathbf{\mu'}}}] = \delta_{{\color{black}\mathbf{x}},{\color{black}\mathbf{x'}}}\delta_{{\color{black} \mathbf{\mu}},{\color{black} \mathbf{\mu'}}}\hat U_{{\color{black}\mathbf{x}}, {\color{black} \mathbf{\mu'}}}. \end{eqnarray} For convenience of notation, we can extend the definition to negative lattice unit vectors via $\hat E_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}},-{\color{black} \mathbf{\mu}}} = - \hat E_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}$ and $\hat U_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}},-{\color{black} \mathbf{\mu}}} = \hat U_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}^{\dag}$. The Hamiltonian of Eq. \eqref{eq:H_lattice_QED} consists of four terms: the parallel transporter \eqref{eq_line: kinetic} describes creation and annihilation of a particle-antiparticle pair, shifting the gauge field in-between to preserve local gauge symmetries. The staggered mass and the electric energy density \eqref{eq_line: electric_and_mass} are completely local.
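The sign convention for negative unit vectors can be encoded in a small lookup helper (a classical sketch over stored field values with hypothetical names, not the quantum operators): fields are stored only on positive links, and queries along negative directions are resolved through $\hat E_{{\color{black}\mathbf{x}}+{\color{black}\mathbf{\mu}},-{\color{black}\mathbf{\mu}}} = -\hat E_{{\color{black}\mathbf{x}},{\color{black}\mathbf{\mu}}}$.

```python
# Store electric-field values only on positive links (x, mu); resolve
# negative unit vectors via E_{x+mu, -mu} = -E_{x, mu}.
MU_X, MU_Y, MU_Z = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def E_seen_from(field, x, mu):
    """Electric field seen from site x along unit vector mu (mu may be negative)."""
    if all(c >= 0 for c in mu):                     # positive link: direct lookup
        return field[(x, mu)]
    pos_mu = tuple(-c for c in mu)                  # flip to the positive link
    base = tuple(xi + mi for xi, mi in zip(x, mu))  # x + mu = origin of that link
    return -field[(base, pos_mu)]

field = {((0, 0, 0), MU_X): 1.0}                    # one unit of flux on one link
assert E_seen_from(field, (0, 0, 0), MU_X) == 1.0
assert E_seen_from(field, (1, 0, 0), (-1, 0, 0)) == -1.0  # same link, opposite view
```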
Finally, the plaquette terms \eqref{eq_line: magnetic_plaquette} capture the magnetic energy density, and correspond to the smallest Wilson loops, i.e. the closed plaquettes in the three planes $x$-$y$, $x$-$z$, $y$-$z$ of the lattice. In dimensionless units ($\hbar = c = 1$), the couplings in Eq.~\eqref{eq:H_lattice_QED} are not independent: they can be expressed as $t=1/a$, $m=m_0$, ${\color{black} g^2_\mathrm{e}}=g^2/a$, ${\color{black} g^2_{\mathrm{m}}} = 8/(g^2a)$, where $a$ is the lattice spacing, $g$ is the coupling constant of QED and $m_0$ is the bare mass of particles/antiparticles. The numerical setup allows us to consider the couplings $( t, m, {\color{black} g_{\mathrm{e}}} , {\color{black} g_{\mathrm{m}}})$ as mutually independent. We then recover the physical regime of QED by enforcing $ {\color{black} g_{\mathrm{e}}} {\color{black} g_{\mathrm{m}}} = 2 \sqrt{2} t$ \cite{LGT2_PhysRevD.11.395}. We also fix the energy scale by setting $t=1$. The local $U(1)$ gauge symmetry of the theory is encoded in Gauss's law, whose generators \begin{eqnarray}\label{eq:Gauss'law} \hat G_{\color{black} \mathbf{x}} = \hat \psi_{\color{black} \mathbf{x}}^\dagger \hat \psi_{\color{black} \mathbf{x}} - \frac{1-(-1)^{\color{black} \mathbf{x}}}{2} - \sum_{{\color{black} \mathbf{\mu}}} \hat E_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}, \end{eqnarray} are defined around each lattice site ${\color{black} \mathbf{x}}$. The sum in Eq.~\eqref{eq:Gauss'law} involves the six electric field operators on the links identified by $\pm {\color{black} \mathbf{\mu}_x}$, $\pm {\color{black} \mathbf{\mu}_y}$, $\pm {\color{black} \mathbf{\mu}_z}$. Each $\hat G_{\color{black} \mathbf{x}}$ commutes with the Hamiltonian $\hat H$.
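The coupling relations above can be verified numerically (an illustrative sketch; `lattice_couplings` is a hypothetical helper, not part of the simulation code): for any $a$ and $g$, the mapping $t=1/a$, ${\color{black} g^2_\mathrm{e}}=g^2/a$, ${\color{black} g^2_{\mathrm{m}}}=8/(g^2a)$ lands exactly on the physical line ${\color{black} g_{\mathrm{e}}} {\color{black} g_{\mathrm{m}}} = 2\sqrt{2}\,t$.

```python
import math

def lattice_couplings(a, g, m0):
    """Map (lattice spacing a, QED coupling g, bare mass m0) to the
    Hamiltonian couplings (t, m, g_e^2, g_m^2), in units hbar = c = 1."""
    t = 1.0 / a
    ge2 = g ** 2 / a           # electric coupling g_e^2
    gm2 = 8.0 / (g ** 2 * a)   # magnetic coupling g_m^2
    return t, m0, ge2, gm2

for a in (0.5, 1.0, 2.0):
    for g in (0.5, 1.0, 2.0):
        t, m, ge2, gm2 = lattice_couplings(a, g, 0.1)
        # physical line: g_e * g_m = 2 * sqrt(2) * t, independently of a and g
        assert math.isclose(math.sqrt(ge2 * gm2), 2 * math.sqrt(2) * t)
```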
In the absence of static (background) charges, the gauge-invariant Hilbert space consists of physical many-body quantum states $\left | \Phi \right \rangle $ satisfying $\hat G_{\color{black} \mathbf{x}} \left | \Phi \right \rangle = 0$ at every site ${\color{black} \mathbf{x}}$. \begin{figure*}[t!] \includegraphics[width=\linewidth]{transition.pdf} \caption{ \label{fig:transition} {\bf Transition at zero total charge.} Ground state charge occupation and electric field on links for $m=-3.0$ (a) and $m=3.0$ (c) and ${\color{black} g^2_{\mathrm{m}}}=0$. (b) Particle density as a function of $m$, for different system sizes $L$ and ${\color{black} g^2_{\mathrm{m}}}=0$. Ground state charge occupation and electric field on links for $m=-3.0$ (d) and $m=3.0$ (f) in the presence of magnetic interactions with ${\color{black} g^2_{\mathrm{m}}} = 8/{\color{black} g^2_\mathrm{e}}=4$. (e) Particle density as a function of $m$, for different system sizes $L$ and ${\color{black} g^2_{\mathrm{m}}} = 8/{\color{black} g^2_\mathrm{e}}=4$. } \end{figure*} As stressed in the standard Wilson formulation of lattice QED \cite{PhysRevD.10.2445}, faithful representations of the $({\color{black} \mathbf{\hat{E}}}, {\color{black} \mathbf{\hat{U}}})$ algebra are infinite-dimensional. A truncation to a finite dimension therefore becomes necessary for numerical simulations with TN methods, which require a finite effective Hilbert space dimension at each lattice site. We use the quantum link model (QLM) approach in which the gauge field algebra is replaced by $SU(2)$ spin algebra, i.e. $\hat E_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \equiv \hat S^z_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}$ and $\hat U_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \equiv \hat S^+_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}/s$ for a spin-$s$ representation.
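The properties of this substitution can be checked directly on explicit $3\times3$ spin-$1$ matrices (an illustrative sketch, independent of the simulation code): the single-link commutation relation of Eq.~\eqref{eq:commutation_E_U} is preserved, while the truncated comparator loses unitarity.

```python
import math

def matmul(A, B):
    """Naive matrix product for small dense matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

s = 1
r2 = math.sqrt(2)
# Spin-1 matrices in the S^z eigenbasis {|+1>, |0>, |-1>}.
Sz = [[1, 0, 0], [0, 0, 0], [0, 0, -1]]
Sp = [[0, r2, 0], [0, 0, r2], [0, 0, 0]]
E = Sz                                       # QLM substitution E = S^z
U = [[x / s for x in row] for row in Sp]     # QLM substitution U = S^+ / s

# [E, U] = U survives the truncation ...
comm = [[a - b for a, b in zip(ra, rb)]
        for ra, rb in zip(matmul(E, U), matmul(U, E))]
assert all(math.isclose(comm[i][j], U[i][j], abs_tol=1e-12)
           for i in range(3) for j in range(3))
# ... but U is no longer unitary: U U† = diag(2, 2, 0), not the identity.
Udag = [[U[j][i] for j in range(3)] for i in range(3)]
UUdag = matmul(U, Udag)
assert UUdag[2][2] == 0 and math.isclose(UUdag[0][0], 2.0)
```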
This substitution keeps the electric field operator Hermitian and preserves Eq.~\eqref{eq:commutation_E_U}, but ${\color{black} \mathbf{\hat{U}}}$ is no longer unitary. Throughout this work, we select $s = 1$, the smallest representation ensuring a nontrivial contribution from all the terms in the Hamiltonian (see also Fig.~\ref{fig:scheme_3D}). This truncation introduces a local energy cutoff based on ${\color{black} g^2_\mathrm{e}}$, which in turn requires larger spin $s$ to accurately represent weaker coupling regimes, still potentially accessible via TNs \cite{PhysRevD.95.094509}. \\ \textbf{Transition at zero charge.} We focus on the zero-charge sector, i.e. $\sum_{\color{black} \mathbf{x}} \hat \psi^{\dagger}_{\color{black} \mathbf{x}} \hat \psi_{\color{black} \mathbf{x}} = \frac{L^3}{2}$, with periodic boundary conditions (PBC). As shown in Fig. \ref{fig:transition} (upper panel), for ${\color{black} g^2_{\mathrm{m}}} = 0$ the system undergoes a transition between two regimes, analogously to the (1+1)D and (2+1)D cases \cite{PhysRevLett.112.201601, PhysRevD.98.074503,2D_QED_Felser}: for large positive masses, the system approaches the bare vacuum, while for large negative masses, the system is arranged into a crystal of charges, a highly degenerate state in the semiclassical limit ($t \to 0$) due to the exponential number of electric field configurations allowed.
We track this transition by monitoring the average matter density $\rho = \frac{1}{L^3} \sum_{\color{black} \mathbf{x}} \left < {\color{black} \mathrm{GS}} \right | \hat n_{\color{black} \mathbf{x}} \left | {\color{black} \mathrm{GS}} \right >$ where $\hat n_{\color{black} \mathbf{x}} = \frac{1-(-1)^{\color{black} \mathbf{x}}}{2} + (-1)^{\color{black} \mathbf{x}} \hat \psi^{\dagger}_{\color{black} \mathbf{x}} \hat \psi_{\color{black} \mathbf{x}}$ is the matter occupation operator, counting particles on even sites and holes on odd sites, and the many-body ground state $ \left | {\color{black} \mathrm{GS}} \right > $ has been computed by the TTN algorithm ({\color{black} see the Methods section for details}). Fig. \ref{fig:transition}{\color{black} (b)} displays the result for different sizes $L$ (and ${\color{black} g^2_\mathrm{e}}/2 = t = 1$), portraying the transition. Panels {\color{black} (a)} and {\color{black} (c)} display local configurations of matter $ \left < \hat n_{\color{black} \mathbf{x}} \right >$ and gauge links $\langle \hat E_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \rangle$ for $m=-3.0$ and $m=+3.0$ respectively. In the former regime, the algorithm seems to favor a single allowed configuration of gauge fields rather than a superposition of many configurations: this is due to the fact that, when ${\color{black} g^2_{\mathrm{m}}} = 0$, the matrix element that rearranges the configurations occurs at very high perturbative order in $|t/m|$, and is numerically neglected. A finite-size scaling analysis of the transition ({\color{black} as detailed in the Methods' subsection ``Critical points: scaling analysis"}) yields results compatible with a second-order phase transition, with the critical point occurring at negative bare mass $m$. \begin{figure*}[t!] \includegraphics[width=\linewidth]{quantum_capacitor.pdf} \caption{ \label{fig:quantum_capacitor} {\bf Quantum capacitor properties.} (a) Ground state configuration of the quantum capacitor for $m=3.0$.
(b) Mean charge density on the sites along the transverse direction for different values of $m$. (c) Mean value of the electric field on the transverse links for different values of $m$. (d) Ground state configuration of the quantum capacitor for $m=-3.0$. (e) Illustration of the creation of a particle-antiparticle pair along the transverse direction, starting from the initial electric field string generated by the boundary charges. (f) Particle density as a function of $m$, with a comparison to the case with no boundary charges. } \end{figure*} The same transition appears to be more interesting when we {\color{black} activate} the magnetic coupling, by setting ${\color{black} g^2_{\mathrm{m}}} = 8 t^2/{\color{black} g^2_\mathrm{e}} = 4$ (physical line). The phase at large negative $m$ now appears to be a genuine superposition of many configurations of the electric field, as they are coupled by matrix elements of the order $\sim {\color{black} g^2_{\mathrm{m}}}$, which the algorithm retains as numerically relevant. Moreover, the transition is still compatible with a second-order phase transition, and the critical point is shifted to larger $m$ values. This can lead to a critical bare mass ${\color{black} m_{\mathrm{c}}}$ that is positive (as we observed ${\color{black} m_{\mathrm{c}}} \approx +0.22$ for the case ${\color{black} g^2_\mathrm{e}}/2 = t = 1$), ultimately making the transition physically relevant. \\ \textbf{Quantum capacitor.} To investigate field-screening and equilibrium string-breaking properties, we analyze the scenario where two charged plates (an electric capacitor) are placed at the opposite faces of a volume, with open boundary conditions (OBC). In our simulations, we achieve this regime by setting large local chemical potentials on the two boundaries.
We expect that for small positive masses $m$, the vacuum inside the plates will spontaneously polarize into an effective dielectric, creating particle-antiparticle pairs that screen the electric field from the plates in an energetically favorable configuration. We observe this phenomenon by monitoring the charge density function along the direction ${\color{black} \mathbf{\mu}_x}$ orthogonal to the plates ${\color{black} q_{\mathrm{c}}(d)} = \frac{2}{L^2} \sum_{j,k=1}^{L} \left < {\color{black} \mathrm{GS}} \right | (-1)^{\color{black} \mathbf{x}} \hat \psi^{\dagger}_{(d,j,k)} \hat \psi_{(d,j,k)} \left | {\color{black} \mathrm{GS}} \right >$ as well as the electric field amplitude along ${\color{black} \mathbf{\mu}_x}$, ${\color{black} E^{\mathrm{c}}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{x}}+\mathbf{\mu}_x}(d)} = \frac{2}{L^2} \sum_{j,k=1}^{L} \left < {\color{black} \mathrm{GS}} \right | \hat E_{(d,j,k),(d+1,j,k)} \left | {\color{black} \mathrm{GS}} \right >$, as presented in Fig.~\ref{fig:quantum_capacitor}. A transition from a string-breaking dielectric regime to a vacuum regime is observed when driving $m$ from negative to positive. However, here the critical point occurs at positive masses (${\color{black} m_{\mathrm{c}}} >0$) even at zero magnetic coupling ${\color{black} g^2_{\mathrm{m}}} = 0$, analogously to the (1+1)D case \cite{PhysRevLett.112.201601}. In conclusion, the charged capacitor can make the phase transition {\color{black} physical} even when $g$ cannot be tuned. The observed behaviour can be interpreted as an equilibrium counterpart to the {\it Schwinger mechanism}, a real-time dynamical phenomenon in which the spontaneous creation of electron-positron pairs out of the vacuum is stimulated by a strong external electric field \cite{PhysRev.82.664}. This could potentially be verified in experiments or quantum simulations by means of adiabatic quenches, ramping up the capacitor voltage.
\\ \textbf{Confinement properties.} The (3+1)-dimensional pure compact lattice QED predicts a confining phase at large coupling $g$ \cite{PhysRevD.10.2445, POLYAKOV197582, BANKS1977493, PhysRevD.19.619,JERSAK1983103}. This phase, where the magnetic coupling is negligible, is characterized by the presence of a linear potential between static test charges, and is expected to survive in the continuum limit. By decreasing $g$, the system undergoes a phase transition to the Coulomb phase where the magnetic terms are not negligible and the static charges interact through the $1/r$ Coulomb potential at distance $r$ \cite{PhysRevD.21.2291}. When the gauge field is coupled to dynamical matter ($t\neq0$ and finite $m$), new possible scenarios emerge, such as the string-breaking mechanism. Nevertheless, the transition between confined and deconfined phases is still expected to occur \cite{PhysRevD.58.085013}. We can investigate this specific scenario with our TN method: we consider a $16 \times 4 \times 4$ lattice and pin two opposite charges via large local chemical potentials at distance $r$ along direction ${\color{black} \mathbf{\mu}_x}$. The energy $E(r) = V(r) - V(\infty) + 2 \epsilon_1 + E_0$ of this ground state comprises: the work $V(r) - V(\infty)$ needed to bring two charges from infinity to distance $r$, plus twice the excitation energy $\epsilon_1$ of an isolated pinned charge, on top of the dressed-vacuum energy $E_0$. Therefore we can estimate the interaction potential as $V(r) = E(r) - E_0 + \xi$ where the additive constant $\xi$ does not scale with the volume (while $E(r)$ and $E_0$ separately do). The presence of dynamical matter heavily impacts the strong-coupling picture (${\color{black} g^2_{\mathrm{m}}} \sim 0$), as can be seen by extrapolating to the semiclassical limit ($t \sim 0$). Here, a particle-antiparticle pair at distance $r$, with a field-string between them, has an energy \begin{equation} E(r)-E_0 = 2m+\frac{g^{2}}{2a^2} \, r
\end{equation} that scales linearly with $r$ (here $g^2 = a \:{\color{black} g^2_\mathrm{e}}$). By contrast, two mesons (neighboring particle-antiparticle pairs) have a flat energy profile \begin{equation} E_{{\color{black}\mathrm{pairs}}}-E_0=4m+ \frac{g^{2}}{a}. \end{equation} Thus, for any mass $m$, there is a critical distance $r_0$ above which the string is broken, and the formation of two mesons becomes energetically favorable. We observe this transition at finite $t$, as shown in Fig. \ref{fig:potential} (lower panel, $g^2 = 4$). The crossover from the short-range to the long-range behavior is still relatively sharp, and the distance ${\color{black} r_\mathrm{c}}$ at which it occurs strongly depends on the bare mass $m$. This is in contrast to the weak-coupling regime (upper panel, $g^2 = 1/4$), where the potential profile $V(r)$ is smoothly increasing with $r$, and its slope at short distances disagrees with the string tension ansatz $r g^2/2 + \text{const.}$. Thus our simulations highlight visibly different features between confined and deconfined regimes, even with dynamical matter. \\ \begin{figure}[t!] \includegraphics[width=\linewidth]{potential.pdf} \caption{ \label{fig:potential} {\bf Confinement properties.} Interaction potential $V(r)$ between two charges of opposite sign as a function of their distance $r$ in the (upper panel) weak coupling regime $g\ll1$ and (lower panel) strong coupling regime $g\gg1$. } \end{figure} \textbf{Finite Density.} One of the most important features of our numerical approach is the possibility to tackle finite charge-density regimes. In fact, by exploiting the global $U(1)$ fermion-number symmetry, implemented in our TTN algorithms, we can inject any desired charge imbalance into the system, while working under OBC. Fig.~\ref{fig:finite_density} shows the results for charge density $\rho=Q/L^{3}=1/4$.
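The semiclassical string-breaking estimate above can be made explicit in lattice units ($a=1$): equating the string and two-meson energies gives $r_0 = 2 + 4m/g^2$. A minimal sketch (illustrative helper names, not the authors' code):

```python
def string_energy(r, m, g2):
    """E(r) - E_0 for an unbroken field-string of length r (lattice units, a = 1)."""
    return 2 * m + 0.5 * g2 * r

def two_meson_energy(m, g2):
    """E_pairs - E_0 for two well-separated mesons (lattice units, a = 1)."""
    return 4 * m + g2

def breaking_distance(m, g2):
    """Distance r_0 where the two profiles cross:
    2m + g2*r/2 = 4m + g2  =>  r_0 = 2 + 4m/g2."""
    return 2 + 4 * m / g2

m, g2 = 1.0, 4.0
r0 = breaking_distance(m, g2)
assert r0 == 3.0
assert string_energy(r0, m, g2) == two_meson_energy(m, g2)
# Beyond r_0 the broken-string (two-meson) configuration is favorable.
assert string_energy(r0 + 1, m, g2) > two_meson_energy(m, g2)
```

This crude estimate only fixes the crossing of the two semiclassical profiles; the distance ${\color{black} r_\mathrm{c}}$ observed at finite $t$ is renormalized by quantum fluctuations.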
In the {\color{black} vacuum phase} ($m \gg {\color{black} g^2_\mathrm{e}}/2 \approx t$), we obtain configurations as displayed in panel {\color{black} (a)}, where the charges are expelled from the bulk, and stick to the boundaries to minimize the energy of the outgoing electric field. To quantify this effect, which can also be interpreted as a field-screening phenomenon, we introduce the surface charge density \begin{equation}\label{eq:surface_density} \sigma(l)=\frac{1}{|A(l)|}\sum_{{\color{black} \mathbf{x}}\in A(l)}\left\langle \hat \psi_{{\color{black} \mathbf{x}}}^{\dagger} \hat \psi_{{\color{black} \mathbf{x}}}\right\rangle \end{equation} where $A(l)$ contains only sites sitting at lattice distance $l$ from the closest boundary. The deeper we are in the {\color{black} vacuum phase}, the faster the surface charge decays to zero away from the boundary ($l=1$). By contrast, close to the transition, the spontaneous creation of charge-anticharge pairs determines a finite charge density in the bulk. Finally, for large negative $m$, the charge distribution is roughly uniform. \begin{figure}[t!] \includegraphics[width= 0.9 \linewidth]{finite_density.pdf} \caption{ \label{fig:finite_density} {\bf Finite density analysis.} (a) Ground state configuration for $m=4.0$ at finite charge density $\rho = Q/L^{3} = 1/4$. The system is in the global symmetry sector with $Q=16$ positive charges on the lattice with linear size $L=4$. (b) Surface charge density $\sigma(l)$ on a cube whose faces are at distance $l$ from the boundaries of the lattice with linear size $L=8$. The system is in the global symmetry sector with $Q=128$ positive charges (finite density $\rho=1/4$). } \end{figure} \section{Discussion} We have shown that TN methods can simulate LGTs in three spatial dimensions, in the presence of matter and charge imbalance, ultimately exploring those regimes where other known numerical strategies struggle.
We have investigated collective phenomena of lattice QED which stand at the forefront of the current research efforts, including quantum phase diagrams, confinement issues, and the string breaking mechanism at equilibrium. We envision the possibility of including more sophisticated diagnostic tools, such as the 't Hooft operators \cite{THooft} which fit naturally into TN designs, to provide more quantitatively precise answers to the aforementioned open problems. From a theoretical standpoint, our work corroborates the long-term perspective to employ TN methods to efficiently tackle non-perturbative phenomena of LGTs, in high dimensions and in regimes that are out of reach for other numerical techniques. As variational ansatz states with a user-chosen refinement parameter, the bond dimension, TTNs are automatically equipped with a self-validation tool: convergence of each quantity with the bond dimension can be verified in polynomial time. However, while TTNs perform well for small and intermediate system sizes, as the ones considered in this work ($L=2,4,8$), the pathway to general LGT analysis with large $L$ is still a technical challenge. In particular, TTNs suffer from poor scalability for larger $L$, since they fail to explicitly capture the area law of entanglement for large systems, which represents a possible bottleneck for further investigations towards the study of the thermodynamic limit of Abelian and non-Abelian high-dimensional LGTs. As a promising perspective, Ref. \cite{2020arXiv201108200F} presents the {\it augmented} TTN ansatz which compensates for this drawback by offering better scalability. Further development in this direction will contribute to overcoming the current limitations of TTNs in high-dimensional systems, opening the pathway to investigating realistic physics with the TTN approach presented here.
Furthermore, we stress that our simulations have been performed on standard clusters by taking advantage only of OpenMP parallelization on single multi-core nodes. We have not yet exploited a full-scale parallelization on multi-node architectures. At a purely technical level, it is straightforward to upgrade our algorithms in this direction, in order to fully exploit the capabilities of High-Performance Computing. For instance, following the ideas presented in Ref. \cite{PhysRevB.87.155137}, each TTN variational sweep could be parallelized so as to optimize its tensors separately on different computing nodes, scaling the computational resources optimally with the system size. On top of this, the implementation of tensor contractions on GPUs could be used to speed up the low-level computations as well \cite{efthymiou2019tensornetwork}. In this work we consider the spin $s=1$ representation, which leads to a local basis dimension of 267, as described {\color{black} in the Methods' subsection ``Fermionic compact representation of local gauge-invariant site''}. Following the same theoretical construction for the local gauge-invariant sites, we estimate a local basis dimension of 1102 for the next representation of QED with $s=3/2$, whereas for the SU(2) Yang-Mills theory, by truncating after the first nontrivial irreducible representation and considering spin-$1/2$ fermionic matter, one finds a local basis of 178 states for the cubic lattice. TTN algorithms scale only polynomially with the local basis dimension but, given the aforementioned numbers, specific strategies for optimally truncating the local dimension as well (see for instance Ref. \cite{stolpp2020comparative}) could be required for studying higher representations of the gauge fields.
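The quoted dimension for $s=1$ can be reproduced by brute-force counting, anticipating the construction detailed in the Methods (a sketch, not the simulation code): a matter occupancy $n\in\{0,1\}$ plus six half-link rishon occupancies $k\in\{0,1,2\}$, each half-link contributing an outgoing field $E=1-k$, constrained by Gauss's law.

```python
from itertools import product

def local_dim(parity):
    """Count gauge-invariant local states: matter occupancy n in {0, 1}
    and six half-link rishon occupancies k in {0, 1, 2}, each carrying an
    outgoing field E = 1 - k, constrained by Gauss's law
    n - (1 - parity)/2 - sum_j (1 - k_j) = 0."""
    offset = (1 - parity) // 2            # staggered background charge
    count = 0
    for n in (0, 1):
        for ks in product((0, 1, 2), repeat=6):
            if n - offset - sum(1 - k for k in ks) == 0:
                count += 1
    return count

# Even and odd sites both yield 267 gauge-invariant configurations.
assert local_dim(+1) == 267
assert local_dim(-1) == 267
```

The counting is symmetric between the two sublattices, consistent with the single value of 267 quoted in the text; the analogous counts for $s=3/2$ or non-Abelian groups depend on the details of the corresponding rishon construction and are not reproduced by this simple sketch.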
In conclusion, the aforementioned technical steps will be fundamental to tackle the problem of the continuum limit of realistic Abelian and non-Abelian LGTs and we foresee that, although very challenging, they are only a few steps away along the path of the TTN developments presented here. In parallel, from an experimental point of view, quantum link model formulations are among the most studied pathways towards the simulation of LGTs on quantum hardware~\cite{Tavernelli_2020, Muschnik2021}. The recent developments in low-temperature physics and control techniques, for trapped ions, ultracold atoms and Rydberg atoms in optical lattices, have led to the first experimental quantum simulations of one-dimensional LGTs \cite{QS_LGT_1, QS_LGT_2, QS_LGT_3, QS_LGT_4, QS_LGT_5}. In this framework, numerical methods capable of accessing intermediate sizes, such as TNs, play a fundamental role as a cross-verification toolbox. \section{Methods} \textbf{Fermionic compact representation of local gauge-invariant site.} In describing a framework for LGT, a common requirement of TN numerical simulations \cite{Silvi_2014,SU3_LGT_TN,PhysRevX_LGT_Dynamics_TN,Silvi2017finitedensityphase}, as well as quantum simulations \cite{Feynman1982,QS_LGT_6,Review_SimulationLGT,paulson2020towards,ZoharBurrelloPRD2015,ZoharReview2015QSim,MarcelloSimoneReview2016QSim}, is working with finite-dimensional local degrees of freedom. This is a hard requirement when investigating both LGT descending from high-energy quantum field theories \cite{PhysRev.96.191, PhysRev.127.965, PhysRevLett.13.321, PhysRevLett.13.508}, and condensed matter models with emergent gauge fields \cite{PhysRevB.37.580,Kleinert_book}. While other pathways have been developed \cite{haase2020resource,KaplanStrykerDuality2020,BenderZoharDuality2020,QED2DGaussian2020}, in this work we adopted the well-known approach of truncating the gauge field space based on an energy density cutoff.
In this section we present the construction of the QED gauge-invariant configurations for the local sites, which we exploit as the computational basis in our TN algorithm. The use of the spin-$1$ representation implies that the gauge degrees of freedom on each link of the lattice are represented by three orthogonal eigenstates of the electric field operator: \begin{eqnarray}\label{eq:electric_states_E} \hat E_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} |\!\rightarrow\rangle = |\!\rightarrow\rangle, \; \hat E_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} |\text{\o}\rangle = 0, \; \hat E_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} |\!\leftarrow\rangle = - |\!\leftarrow\rangle. \;\;\; \end{eqnarray} The parallel transporter, which is proportional to the raising operator in the spin language, acts on these states as \begin{eqnarray}\label{eq:electric_states_U} \hat U_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} |\!\rightarrow\rangle = 0, \; \hat U_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} |\text{\o}\rangle = |\!\rightarrow\rangle, \; \hat U_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} |\!\leftarrow\rangle = |\text{\o}\rangle. \;\; \end{eqnarray} In the following, in order to obtain a representation of the gauge degrees of freedom that will be useful for constructing our TN ansatz, we employ the local mapping presented in Ref.~\cite{2D_QED_Felser} (see also \cite{Erezdefermion1,Erezdefermion2}), generalizing it to the case with three spatial dimensions. This technique is related to the standard rishon formulation of QLM \cite{PhysRevD.60.094502,BROWER2004149, PhysRevLett.110.125303} and allows us to encode Gauss's law while taking into account the anticommutation relations of the fermionic particles on the lattice.
Let us consider a generic link of the lattice $({\color{black} \mathbf{x}}, {\color{black} \mathbf{\mu}})$ between the two sites ${\color{black} \mathbf{x}}$ and ${\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}$: the starting point is the splitting of the gauge field of this link into a pair of {\it rishon} modes, so that each mode belongs to either one of the two sites. For the $s=1$ case, we can set each rishon mode (or half-link) to be a 3-hardcore fermionic field $\hat \eta_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}$. Such lattice quantum fields satisfy $\hat \eta^2_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \ne 0$ and $\hat \eta^3_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} = 0$. They mutually anticommute at different spatial positions, i.e. $\left \{ \hat \eta_{{\color{black} \mathbf{x}}, {\color{black} \mathbf{\mu}}}, \hat \eta_{{\color{black} \mathbf{x'}}, {\color{black} \mathbf{\mu'}}}^{(\dagger)} \right \} = 0$ for ${\color{black} \mathbf{x}} \ne {\color{black} \mathbf{x'}}$ or $ {\color{black} \mathbf{\mu}} \ne {\color{black} \mathbf{\mu'}}$, and also anticommute with the staggered matter fermionic fields $ \left\{ \hat \eta_{{\color{black} \mathbf{x}}, {\color{black} \mathbf{\mu}}}, \hat \psi_{{\color{black} \mathbf{x'}}}^{(\dagger)} \right\} = 0$ \cite{LGT1_RevModPhys.51.659,SusskindLatticeFermions}. Then, we express the comparator on the link as $\hat U_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} = \hat \eta_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \hat \eta^{\dagger}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}, -{\color{black} \mathbf{\mu}}}$. 
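In the occupation basis of a single half-link, a minimal matrix representation makes the 3-hardcore relations explicit (an illustrative single-mode sketch; the signs from fermionic ordering across different half-links are invisible at this level): $\hat\eta^{\dagger}$ raises the occupancy by one and annihilates the top state.

```python
def matmul(A, B):
    """Naive 3x3 matrix product for nested-list matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Occupation basis {|0>, |1>, |2>} of one half-link: eta† steps
# |0> -> |1> -> |2> -> 0 (columns act on basis vectors).
eta_dag = [[0, 0, 0],
           [1, 0, 0],
           [0, 1, 0]]

eta2 = matmul(eta_dag, eta_dag)
eta3 = matmul(eta2, eta_dag)
assert any(x != 0 for row in eta2 for x in row)   # eta†^2 != 0
assert all(x == 0 for row in eta3 for x in row)   # eta†^3 == 0 (3-hardcore)
```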
To explicitly build these 3-hardcore fermions for each half-link, we consider two species of standard Dirac fermions $\hat a_{{\color{black} \mathbf{x}}, {\color{black} \mathbf{\mu}}}$ and $\hat b_{{\color{black} \mathbf{x}}, {\color{black} \mathbf{\mu}}}$ and we use the following relation: \begin{eqnarray}\label{eq:3_hardcore_modes} \hat \eta^{\dagger}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} = \hat n^{a}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \hat b^{\dagger}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} + (1 - \hat n^{b}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}) \hat a^{\dagger}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \end{eqnarray} where $\hat n^{a}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}$ and $\hat n^{b}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}$ are the occupation number operators for each species, i.e., $\hat n^{a}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} = \hat a^{\dagger}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \hat a_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}$ and the same for $\hat n^b_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}$. For each 3-hardcore mode, these operators act on a three-dimensional local Hilbert space with basis $\left | 0 \right >_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}$, $\left | 1 \right >_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} = \hat a^{\dagger}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \left | 0 \right >_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}$, $\left | 2 \right >_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} = \hat b^{\dagger}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \hat a^{\dagger}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \left | 0 \right >_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}$. In fact, due to the definition in Eq. 
\eqref{eq:3_hardcore_modes}, the algebra of the operators $\hat \eta_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}$ never accesses the fourth state obtained as $\hat b^{\dagger}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \left | 0 \right >_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}$. By using the same representation on the other half-link through the Dirac operators $\hat a^{\dagger}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}},-{\color{black} \mathbf{\mu}}}$ and $\hat b^{\dagger}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}},-{\color{black} \mathbf{\mu}}}$, we would obtain a local space of dimension 9 for the complete link. However, the operator counting the total number of fermions on the complete link, \begin{eqnarray}\label{eq:total_number_fermion_link} \hat L_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} = \hat n^{a}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} + \hat n^{b}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} + \hat n^{a}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}},-{\color{black} \mathbf{\mu}}} + \hat n^{b}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}},-{\color{black} \mathbf{\mu}}}, \end{eqnarray} \\ defines a symmetry of the Hamiltonian since it commutes with the operators $\hat E_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}$ and $\hat U_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}$.
Thus, we can select the sector with $\hat L_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}=2$ (two rishons on each full link), reducing the link space to dimension 3 with the basis \begin{eqnarray}\label{eq:link_state} |\!\rightarrow\rangle &=&-|0, 2\rangle = \hat a^{\dagger}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}},-{\color{black} \mathbf{\mu}}} \hat b^{\dagger}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}},-{\color{black} \mathbf{\mu}}} |0\rangle_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} |0\rangle_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}},-{\color{black} \mathbf{\mu}}} ,\nonumber \\ |\text{\o}\rangle&=& |1, 1\rangle = \hat a^{\dagger}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \hat a^{\dagger}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}},-{\color{black} \mathbf{\mu}}} |0\rangle_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} |0\rangle_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}},-{\color{black} \mathbf{\mu}}} ,\\ |\!\leftarrow\rangle &=& |2, 0\rangle = \hat b^{\dagger}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \hat a^{\dagger}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} |0\rangle_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} |0\rangle_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}},-{\color{black} \mathbf{\mu}}}, \nonumber \end{eqnarray} \\ where the minus sign in the first element allows the operator $\hat U_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}$ to act correctly following the properties of Eq. \eqref{eq:electric_states_U}. 
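The reduction from the 9-dimensional two-rishon space to the three link states, and the resulting field eigenvalues, can be checked at the occupancy level (a sketch that ignores the fermionic sign convention in $|\!\rightarrow\rangle$): a link state is a pair of rishon occupancies $(k_{\mathrm{left}}, k_{\mathrm{right}})$, and the constraint fixes their sum to two.

```python
from itertools import product

# Two rishon occupancies (k_left, k_right), each in {0, 1, 2}: 9 states,
# reduced to 3 link states by the sector constraint k_left + k_right = 2.
sector = [ks for ks in product((0, 1, 2), repeat=2) if sum(ks) == 2]
assert len(sector) == 3

# Electric field as rishon imbalance: E = (k_right - k_left) / 2,
# which equals 1 - k_left inside the constrained sector.
for kl, kr in sector:
    E = (kr - kl) // 2
    assert E == 1 - kl
assert sorted((kr - kl) // 2 for kl, kr in sector) == [-1, 0, 1]
```

The three surviving states thus carry exactly the spin-$1$ electric field eigenvalues $\{+1, 0, -1\}$.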
By using this representation, the electric field finally corresponds to the imbalance of Dirac fermions between the two halves of the link, so that: \begin{eqnarray}\label{eq:electric_field_fermions} \hat E_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} = \frac{1}{2} \left ( \hat n^{a}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}},-{\color{black} \mathbf{\mu}}} + \hat n^{b}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}},-{\color{black} \mathbf{\mu}}} - \hat n^{a}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} - \hat n^{b}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \right ). \end{eqnarray} \\ This construction in terms of 3-hardcore fermions allows us to define, for each lattice site, a local basis that directly incorporates the Gauss's law, by constraining in this way the dynamics to the physical states only. This is a crucial point for both numerical and quantum simulations since non-physical states determine an exponential increase in the complexity of the problem. From the definition of the link basis states of Eq. \eqref{eq:link_state}, it follows that, within the sector with the link-symmetry constraint $\hat L_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}=2$, the electric field operator is uniquely identified by taking only the half-link fermionic configuration, namely: \begin{eqnarray}\label{eq:electric_field_fermions2} \hat E_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} = 1- \hat n^{a}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} - \hat n^{b}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}. \end{eqnarray} \\ In this way, the generators of the Gauss's law of Eq. 
\eqref{eq:Gauss'law} are transformed into completely local operators acting on the site ${\color{black} \mathbf{x}}$ only: \begin{eqnarray}\label{eq:Gauss's_law_with_constraint} \hat G_{\color{black} \mathbf{x}} = \hat \psi_{\color{black} \mathbf{x}}^\dagger \hat \psi_{\color{black} \mathbf{x}} - \frac{1-(-1)^{\color{black} \mathbf{x}}}{2} - \sum_{{\color{black} \mathbf{\mu}}} \left ( 1 - \hat n^{a}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} - \hat n^{b}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \right). \end{eqnarray} Taking into account this property, it is possible to construct the gauge-invariant basis for the local site ${\color{black} \mathbf{x}}$, which comprises the lattice site and the six half-links along the directions $\pm {\color{black} \mathbf{\mu}_x}$, $\pm {\color{black} \mathbf{\mu}_y}$, $ \pm {\color{black} \mathbf{\mu}_z}$ (see Fig. \ref{fig:gauge_site}): \begin{figure}[t!] \includegraphics[width=\columnwidth]{gauge_site.pdf} \caption{ \label{fig:gauge_site} {\bf Construction of the gauge-invariant configurations for the local sites.} (a) Representation of the gauge field in terms of two species of Dirac modes in the sector with a total number of fermions equal to two. (b) Generic state of the local site composed of the matter degrees of freedom and six half-links along the three spatial directions. On each half-link the coefficients $k_j \in \{ 0,1,2 \} $ define the fermionic modes. (c) Examples of gauge-invariant configurations for even and odd sites. Due to the use of staggered fermions, the presence/absence of a fermion on an even/odd site represents a charge/anti-charge. 
} \end{figure} \begin{eqnarray}\label{eq:local_gauge_invariant_basis} \left|\begin{array}{ccc} & k_{5}\\ k_{1} & \phi & k_{4}\\ & k_{2} \end{array}\right\rangle _{k_{3}}^{k_{6}} & = & \left(-1\right)^{\delta_{k_{1},2}+\delta_{k_{2},2}+\delta_{k_{3},2}}\left|\phi\right\rangle _{{\color{black} \mathbf{x}}} \\ & \times & \left|k_{1}\right\rangle _{{\color{black} \mathbf{x}},-{\color{black} \mathbf{\mu}}_{x}}\left|k_{2}\right\rangle _{{\color{black} \mathbf{x}},-{\color{black} \mathbf{\mu}}_{y}}\left|k_{3}\right\rangle _{{\color{black} \mathbf{x}},-{\color{black} \mathbf{\mu}}_{z}} \nonumber \\ & \times & \left|k_{4}\right\rangle _{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}_{x}}\left|k_{5}\right\rangle _{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}_{y}}\left|k_{6}\right\rangle _{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}_{z}} \nonumber \end{eqnarray} \\ where $\left| \phi \right\rangle_{{\color{black} \mathbf{x}}} = ( \hat \psi^{\dagger}_{{\color{black} \mathbf{x}}} )^{\phi} \left | 0 \right\rangle$ with $\phi=0,1$ describes the presence or the absence of the matter/antimatter particles. The indices $k_j$ run over \{0,1,2\} selecting a configuration of the 3-hardcore modes for each respective half-link. The presence of the factor $\left(-1\right)^{\delta_{k_{1},2}+\delta_{k_{2},2}+\delta_{k_{3},2}}$ allows us to satisfy the anticommutation relations of the fermionic representation recovering the correct signs of Eq. \eqref{eq:link_state}. The occupation numbers $\phi$ and $k_j$ are not independent due to the constraint imposed by the Gauss's law \begin{equation} \hat{G}_{{\color{black} \mathbf{x}}}\left|\begin{array}{ccc} & k_{5}\\ k_{1} & \phi & k_{4}\\ & k_{2} \end{array}\right\rangle _{k_{3}}^{k_{6}}=0. \end{equation} \\ This equation, in the new language of matter fermions and rishons, reads \begin{eqnarray}\label{eq:Gauss's_law_final} \phi + \sum^{6}_{j=1} k_j = 6 + \frac{1-(-1)^{\color{black} \mathbf{x}}}{2}. 
\end{eqnarray} \\ where the factor 6 is indeed the coordination number of the cubic lattice. Thus, the gauge-invariant configurations of the local basis are obtained by applying this constraint, effectively reducing the `dressed-site' (matter and 6 rishon modes) dimension from $2 \cdot 3^6 = 1458$ to merely 267. We encode these states as building blocks of our computational representation for the TN algorithms. In Fig. \ref{fig:gauge_site} we show some examples of gauge-invariant configurations for even and odd sites. The construction of the gauge-invariant local sites is particularly advantageous for our numerical purposes: in fact, it is now possible to express all the terms in the Hamiltonian of Eq. \eqref{eq:H_lattice_QED} of the main text as products of completely local operators that commute on different sites. Let us consider the kinetic term of the Hamiltonian and apply the representation of the gauge field in terms of the 3-hardcore fermionic modes: \begin{eqnarray}\label{eq:decomposition_kinetic_term} \hat \psi^\dagger_{\color{black} \mathbf{x}} \hat U_{{\color{black} \mathbf{x}}, {\color{black} \mathbf{\mu}}} \hat \psi_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}} &= & \hat \psi^\dagger_{\color{black} \mathbf{x}} \hat \eta_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \hat \eta^{\dagger}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}, -{\color{black} \mathbf{\mu}}} \hat \psi_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}} \nonumber \\ &=& \left ( \hat \eta^{\dagger}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \hat \psi_{\color{black} \mathbf{x}} \right )^{\dagger} \left( \hat \eta^{\dagger}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}, -{\color{black} \mathbf{\mu}}} \hat \psi_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}} \right) \nonumber \\ &=& M^{(\alpha) \dagger}_{{\color{black} \mathbf{x}}} M^{(\alpha ')}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}} 
\end{eqnarray} \\ where the indices $\alpha$ and $\alpha'$ select the right operators depending on the different directions in which the hopping process takes place. The operators $M^{\alpha}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}$ are genuinely local (i.e. they commute with operators acting elsewhere) as they are always quadratic in the fermionic operators ($\psi$ and/or $\eta$). The same argument applies to the magnetic (plaquette) terms in the Hamiltonian \begin{multline} \hat \square_{{\color{black} \mathbf{\mu}}_x,{\color{black} \mathbf{\mu}}_y} = \hat U_{{\color{black} \mathbf{x}}, {\color{black} \mathbf{\mu}}_x} \hat U_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}_x, {\color{black} \mathbf{\mu}}_y} \hat U^{\dagger}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}_y, {\color{black} \mathbf{\mu}}_x} \hat U^{\dagger}_{{\color{black} \mathbf{x}}, {\color{black} \mathbf{\mu}}_y} = \\ = \hat \eta_{{\color{black} \mathbf{x}}, {\color{black} \mathbf{\mu}}_x} \hat \eta^{\dagger}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}_x, -{\color{black} \mathbf{\mu}}_x} \hat \eta_{{\color{black} \mathbf{x}} + {\color{black} \mathbf{\mu}}_x, {\color{black} \mathbf{\mu}}_y} \hat \eta^{\dagger}_{{\color{black} \mathbf{x}} + {\color{black} \mathbf{\mu}}_x + {\color{black} \mathbf{\mu}}_y, -{\color{black} \mathbf{\mu}}_y} \\ \times \left(\hat \eta_{{\color{black} \mathbf{x}} + {\color{black} \mathbf{\mu}}_y, {\color{black} \mathbf{\mu}}_x} \hat \eta^{\dagger}_{{\color{black} \mathbf{x}} + {\color{black} \mathbf{\mu}}_x + {\color{black} \mathbf{\mu}}_y, -{\color{black} \mathbf{\mu}}_x} \right )^{\dagger} \left(\hat \eta_{{\color{black} \mathbf{x}}, {\color{black} \mathbf{\mu}}_y} \hat \eta^{\dagger}_{{\color{black} \mathbf{x}} + {\color{black} \mathbf{\mu}}_y, -{\color{black} \mathbf{\mu}}_y} \right)^{\dagger} \\ = - \left( \hat \eta^{\dagger}_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}_y} \hat \eta_{{\color{black} 
\mathbf{x}},{\color{black} \mathbf{\mu}}_x} \right) \left( \hat \eta^{\dagger}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}_x, -{\color{black} \mathbf{\mu}}_x} \hat \eta_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}_x, {\color{black} \mathbf{\mu}}_y} \right) \\ \times \left( \hat \eta^{\dagger}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}_x+{\color{black} \mathbf{\mu}}_y, -{\color{black} \mathbf{\mu}}_y} \hat \eta_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}_x+{\color{black} \mathbf{\mu}}_y, -{\color{black} \mathbf{\mu}}_x} \right) \left( \hat \eta^{\dagger}_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}_y,{\color{black} \mathbf{\mu}}_x} \hat \eta_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}_y, -{\color{black} \mathbf{\mu}}_y} \right) \\ \equiv - C_{\color{black} \mathbf{x}}^{(\alpha)} C_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}_x}^{(\alpha')} C_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}_x+{\color{black} \mathbf{\mu}}_y}^{(\alpha'')} C_{{\color{black} \mathbf{x}}+{\color{black} \mathbf{\mu}}_y}^{(\alpha''')}, \end{multline} where the indices $\alpha$, $\alpha'$, $\alpha''$, $\alpha'''$ depend on the plane of the plaquette (in this case $x-y$) and the links involved in the loop. The operators $C^{(\alpha)}_{\color{black} \mathbf{x}}$ are genuinely local and act on the four sites at the corners of the plaquette. The decomposition is the same for the other plaquettes in the planes $x-z$ and $y-z$. The present construction ensures that they can be treated as spin (or bosonic) operators \cite{Erezdefermion1,Erezdefermion2}, so we can exploit standard TN algorithms without the need to explicitly implement the fermionic parity at each site \cite{PhysRevA.81.052338, PhysRevB.80.165129, PhysRevA.80.042333}. The mass term and the electric field energy in the Hamiltonian of Eq. 
\eqref{eq:H_lattice_QED} of the main text are diagonal in the gauge-invariant basis with the rishon representation, and so it is trivial to express them as local operators. These operators include the local chemical potential terms, which we use to pin charges in order to study confinement properties \cite{POLYAKOV1977429, Greensite:2011zz, PhysRevResearch.2.043145}. In conclusion, {\color{black} all} the operators we employ in the TTN algorithms (see {\color{black} the Methods' subsection ``Tensor Networks''}) are genuinely local. In order to get an idea of the numerical complexity, we emphasize that the dimension of these matrices acting on the local gauge-invariant basis is $267 \times 267$. \\ \begin{figure*}[t!] \includegraphics[width=\linewidth]{TTNs.pdf} \caption{ \label{fig:TTNs} {\bf TTN ans\"atze.} TTN representations for (a) a 1D lattice and (b) a 2D square lattice. Green circles indicate the sites of the lattice connected to the physical indices of the tree, whereas the yellow circles are the tensors making up the TTN. In (c) we show our generalization to the 3D cubic lattice that we use for the numerical simulations of the LGT. The different colours of the bond indices are just for a better visualization of the tree structure.} \end{figure*} \textbf{Tensor Networks.} In this section we present the main concepts of Tensor Networks (TNs), with a particular focus on the Tree Tensor Network (TTN) ansatz that we exploit in this work \cite{PhysRevB.90.125154}. For a detailed and exhaustive description of the subject, please see the technical reviews and textbooks \cite{Montangero_book, TN_Anthology, OrusTNReview}. Let us consider a generic quantum system composed of $N$ lattice sites, each described by a local Hilbert space $\mathcal{H}_k$ of finite dimension $d$ and equipped with a local basis $ \left \{ \left | i \right >_k \right \}_{1 \leq i \leq d} $. 
The whole Hilbert space of the system is generated by the tensor product of the local Hilbert spaces, that is, $\mathcal{H} = \mathcal{H}_1\otimes \mathcal{H}_2\otimes \cdots \otimes \mathcal{H}_N$, with a resulting dimension equal to $d^N$. Thus, a generic pure quantum state of the system $\left | \psi \right >$ can be expressed as a linear combination of the basis elements of $\mathcal{H} $, i.e., \begin{eqnarray}\label{eq:TN_state} | \psi \rangle = \sum_{i_1,..., i_N = 1}^{d} {c_{i_1,...,i_N} |i_1\rangle_1 \otimes |i_2\rangle_2 \otimes ...\otimes |i_N\rangle_N}. \; \end{eqnarray} In principle, the coefficients $c_{i_1,...,i_N}$ are $d^N$ complex numbers. As a consequence, this exact representation of the quantum state is completely inefficient from a computational point of view, since it scales exponentially with the system size $N$. In other words, the amount of information that we would need to store in memory for a computational representation of a generic quantum state of the system is exponentially large in the number of degrees of freedom. However, if we are concerned with {\color{black} local} Hamiltonians, in which a lattice site interacts only with a finite set of neighboring sites and not with all sites of the lattice, it is possible to exploit rigorous results on the scaling of entanglement under a bipartition ({\it area law}) \cite{RevModPhys.80.517, RevModPhys.82.277} in order to obtain an efficient representation of the states in the low-energy sectors of such Hamiltonians, e.g. ground states and first excited states. Tensor Networks provide a natural language for this representation \cite{PhysRevB.73.094423, PhysRevLett.96.220601} by decomposing the complete rank-$N$ tensor $c_{i_1,...,i_N}$ in Eq. \eqref{eq:TN_state} into a network of smaller-rank local tensors interconnected with auxiliary indices (bond indices). 
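To make the counting concrete, one can compare the number of coefficients of the exact representation with the generic polynomial count for the parameters relevant to this work ($N = 512$ sites, local dimension $d = 267$, and the bond dimension $\chi$ introduced below set to 450); the MPS-like estimate $N d \chi^2$ below is only an order-of-magnitude stand-in for the TTN parameter count:

```python
# Dressed-site dimension, number of sites (8x8x8), and bond dimension;
# the TN count is an order-of-magnitude estimate, not the exact TTN count.
d, N, chi = 267, 512, 450

exact = d ** N            # coefficients c_{i_1,...,i_N} of the full state
tn = N * d * chi ** 2     # O(poly(N) poly(chi)) parameters of a TN ansatz

assert tn < exact
print(len(str(exact)))    # the exact count has over a thousand digits
```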
If we control the dimension of the bond indices with a parameter $\chi$, called the bond dimension, the number of coefficients in the TN is of the order ${\color{black} O(\mathrm{poly}(N)\mathrm{poly}(\chi))}$, allowing an efficient representation of the information encoded in the quantum state. Furthermore, the bond dimension $\chi$ is a quantitative estimate of the amount of quantum correlations and entanglement present in the TN. In fact, by varying $\chi$, TNs interpolate between a product state ($\chi=1$) and the exact, inefficient representation of the considered quantum state ($\chi \approx d^N $). Matrix Product States (MPS) for 1D systems \cite{1992CMaPh.144.443F, 1991JPhA.24L.955K, 1993EL.24.293K}, Projected Entangled Pair States (PEPS) for 2D and 3D systems \cite{2004cond.mat.7066V, PhysRevLett.96.220601, tepaske2020threedimensional}, the Multiscale Entanglement Renormalization Ansatz (MERA) \cite{PhysRevLett.99.220405, PhysRevLett.102.180406} and Tree Tensor Networks (TTN), which can be defined in any dimension \cite{PhysRevB.90.125154, PhysRevA.74.022320, PhysRevA.81.062335}, are all important examples of efficient representations based on TNs. MPS algorithms, such as the Density Matrix Renormalization Group (DMRG) \cite{SCHOLLWOCK201196}, represent the state-of-the-art technique for the numerical simulation of many-body systems in 1D. MPS satisfy the area law and are extremely powerful since they allow one to compute scalar products between two wave functions and local observables in an exact and efficient way. This property does not hold for higher-dimensional generalizations, such as PEPS, and the development of TN algorithms for accurate and efficiently scalable computations is at the center of current research efforts. In particular, one of the main problems is related to the choice of the TN geometry for simulating higher-dimensional systems. 
PEPS intuitively reproduce the structure of the lattice, with one tensor for each physical site and bond indices that directly follow the lattice grid. The resulting TN follows the area law of entanglement, but it contains {\color{black} loops}, making the contractions for computing expectation values exponentially hard \cite{PhysRevResearch.2.013010}. Furthermore, the computational cost of the variational optimization of PEPS, for instance in the ground-state search, scales as $O(\chi^{10})$ as a function of the bond dimension. This severely limits the possibility of reaching high values of $\chi$, especially for large system sizes (typical values are $\chi \approx 10$ for spin systems). For our purpose of simulating LGTs in three spatial dimensions this represents a crucial problem, since the local dimension of our model is extremely high, i.e., $d=267$, and so it becomes necessary to be able to handle high values of $\chi$ in order to reach numerical convergence. Alternative ans\"atze for simulating quantum many-body systems are TTNs, which decompose the wave function into a network of tensors without loops, allowing efficient contraction algorithms with a polynomial scaling as a function of the system size. In Fig. \ref{fig:TTNs}, we show the typical TTN ans\"atze for 1D and 2D systems and our generalization to the 3D lattice. TTNs offer more tractable computational costs since the complete contraction and the variational optimization algorithms scale as $O(\chi^4)$, making it easier to reach high values of the bond dimension (up to $\chi \approx 1000$). The price to pay for the loopless structure is that TTNs may not explicitly reproduce the area law in dimensions higher than one \cite{PhysRevB.87.125139}. Nevertheless, we use the TTN ansatz in a variational optimization, so we can improve the precision by using increasing values of $\chi$, providing in this way a careful control over the convergence of our numerical results. 
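The local dimension $d=267$ can be verified by brute-force enumeration of the Gauss-law constraint $\phi + \sum_j k_j = 6 + [1-(-1)^{\mathbf{x}}]/2$ from the previous subsection (a consistency check, not part of the production code):

```python
from itertools import product

def dressed_site_dim(parity):
    """Count gauge-invariant dressed-site states: matter occupation phi in
    {0, 1} and six half-link occupations k_j in {0, 1, 2}, constrained by
    Gauss's law to phi + sum(k_j) = 6 + parity (0 = even site, 1 = odd)."""
    return sum(
        1
        for phi in (0, 1)
        for ks in product(range(3), repeat=6)
        if phi + sum(ks) == 6 + parity
    )

# Both sublattices give the same dimension, 267, to be compared with the
# unconstrained dressed-site dimension 2 * 3**6 = 1458.
assert dressed_site_dim(0) == dressed_site_dim(1) == 267
```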
{\color{black} Ground-state computation of our LGT model employs the TTN algorithm for variational ground-state search, including the exploitation of Abelian symmetries and the Krylov subspace expansion \cite{TN_Anthology}. The algorithm is implemented to conserve the total charge through the definition of global $U(1)$ symmetry sectors encoded in the TTN. Thus, we can easily access finite charge-density regimes, with arbitrary imbalance between charges and anticharges.} \begin{figure*}[t!] \includegraphics[width=\linewidth]{convergence.pdf} \caption{ \label{fig:numerical_convergence} {\bf Numerical convergence.} (a) Driven optimization (in three steps: linear, quadratic, constant) of the penalty coefficient $\nu$ (red) and behavior of the energy (blue) as a function of the iterations for a representative simulation. The energy is reported as the difference with respect to the lowest final energy that we reach. (b) Driven optimization of the penalty coefficient $\nu$ (red) and global error $\delta L$ (green) with respect to the link symmetry during the optimization steps. (c) Scaling of the energy density as a function of the inverse of the bond dimension $1 / \chi$. The bond dimension $\chi$ is in the range $\left [ 100, 450 \right ]$. } \end{figure*} Our TTN for the 3D lattice is composed entirely of tensors with three links (this structure is usually called a {\it binary tree}). The construction of the TTN starts by merging the physical indices at the bottom, which represent two neighboring lattice sites along the $x$-direction, into one tensor. Then, these tensors are connected along the $y$-direction through new tensors in an upper layer. The tensors in this layer are then connected along the $z$-direction through a new layer of tensors. This procedure is iteratively repeated by properly setting the connections along the three spatial directions in the upper layers of the tree. 
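The resulting layer structure can be sketched as follows (a minimal illustration that only tracks how many tensors each layer contains; the actual implementation also stores the tensor data and symmetry sectors):

```python
def ttn_layers(L):
    """Layer structure of a binary TTN over an L x L x L lattice (L a power
    of two): pairs of blocks are merged along x, then y, then z, cyclically."""
    dims, layers, axis = [L, L, L], [], 0
    while dims != [1, 1, 1]:
        dims[axis] //= 2                       # merge pairs along this axis
        layers.append((("x", "y", "z")[axis], dims[0] * dims[1] * dims[2]))
        axis = (axis + 1) % 3
    return layers

layers = ttn_layers(8)
# A binary tree over N = 8**3 = 512 physical sites contains N - 1 tensors,
# distributed over log2(N) = 9 layers.
assert sum(n for _, n in layers) == 8 ** 3 - 1
assert len(layers) == 9
```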
At the beginning of the simulation, we randomly initialize all the tensors in the network and the distribution of the global symmetry sectors. During the variational optimization stage, in order to improve the convergence, we perform the single-tensor optimization with the subspace-expansion technique, i.e., allowing a dynamical increase of the local bond dimension and adapting the symmetry sectors \cite{TN_Anthology}. This scheme has a global computational cost of the order $O(\chi^4)$. The single-tensor optimization is implemented in three steps: (i) the effective Hamiltonian $H_{{\color{black}\mathrm{eff}}}$ for the tensor is obtained by contracting the complete Hamiltonian of the system with all the remaining tensors of the tree; (ii) the local eigenvalue problem for $H_{{\color{black}\mathrm{eff}}}$ is solved by using the Arnoldi method of the ARPACK library; (iii) the tensor is updated with the eigenvector of $H_{{\color{black}\mathrm{eff}}}$ corresponding to the lowest eigenvalue. This procedure is iterated by sweeping through the TTN from the lowest to the highest layers, gradually reducing the energy expectation value. After completing a whole sweep, the procedure is repeated until the desired convergence in the energy is reached. The precision of the Arnoldi algorithm is increased in each sweep, to gain more accuracy in solving the local eigenvalue problems as we approach the final convergence. The TTN computations presented in this work are extremely challenging due to the complexity of LGTs in the three-dimensional scenario. They were performed on different HPC clusters (CloudVeneto, CINECA, BwUniCluster and ATOS Bull): a single simulation for the maximum size that we reached, an $8 \times 8 \times 8$ lattice, can last up to five weeks until final convergence, depending on the different regimes of the model and the control parameters of the algorithms. 
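Steps (ii)-(iii) of the single-tensor update can be illustrated with a dense stand-in for the iterative Arnoldi solver (in the actual code the effective Hamiltonian is applied iteratively and never built as a dense matrix; everything below is a toy example):

```python
import numpy as np

def update_tensor(h_eff, shape):
    """Return the lowest eigenpair of the effective Hamiltonian, with the
    eigenvector reshaped back into a tensor of the given shape (dense
    stand-in for the ARPACK/Arnoldi solver used in the real algorithm)."""
    evals, evecs = np.linalg.eigh(h_eff)
    return evals[0], evecs[:, 0].reshape(shape)

rng = np.random.default_rng(7)
dim = 2 * 3 * 4                       # toy tensor with three links
a = rng.normal(size=(dim, dim))
h_eff = (a + a.T) / 2                 # symmetric "effective Hamiltonian"

e0, tensor = update_tensor(h_eff, (2, 3, 4))
assert np.isclose(np.linalg.norm(tensor), 1.0)          # normalized update
assert np.allclose(h_eff @ tensor.ravel(), e0 * tensor.ravel())
```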
\\ \textbf{Numerical Convergence.} With our numerical simulations we characterize the properties of the ground state of the system as a function of the parameters in the Hamiltonian of Eq. \eqref{eq:H_lattice_QED} of the main text. We fix the energy scale by setting the hopping coefficient $t=1$ and we access several regimes of the mass $m$, the electric coupling ${\color{black} g_\mathrm{e}}$ and the magnetic coupling ${\color{black}g_\mathrm{m}}$. We consider simple cubic lattices $L \times L \times L$ with the linear size $L$ being a power of two; in particular, we simulate the cases with $L = 2, 4, 8$, that is, up to $512$ lattice sites. As explained {\color{black} in the Methods' subsection ``Fermionic compact representation of local gauge-invariant site''}, in order to obtain the right representation of the electric field operators, we have to enforce the extra link-symmetry constraint $\hat L_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}=2$ on every link. For this reason, we include in the Hamiltonian additional terms that energetically penalize all the states with a number of hardcore fermions per link different from two, namely: \begin{eqnarray} \label{eq:Ham_penalties} \hat H_{{{\color{black}\mathrm{pen}}}} = \nu \sum_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \left( 1 - \hat \delta_{2, \hat L_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}} \right ) \end{eqnarray} \\ where $\nu > 0$ is the penalty coefficient and $\hat \delta_{2, \hat L_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}}$ is the projector onto the states that satisfy the extra link constraint. In this way, the penalty terms vanish when the link symmetry is satisfied and raise the energy of the states violating the constraint. In principle, the link symmetry is rigorously satisfied only for $\nu \rightarrow \infty$. 
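On the unconstrained nine-dimensional full-link space, the penalty term of Eq. \eqref{eq:Ham_penalties} is diagonal; a minimal check (in Python, for illustration only) that it vanishes exactly on the $\hat L_{\mathbf{x},\mathbf{\mu}}=2$ sector and costs $\nu$ per violated link:

```python
import numpy as np
from itertools import product

# Unconstrained full-link basis: occupations (n_left, n_right), n in {0, 1, 2}.
basis = list(product(range(3), repeat=2))

nu = 10.0                                              # penalty coefficient
delta2 = np.diag([1.0 if sum(s) == 2 else 0.0 for s in basis])
penalty = nu * (np.eye(len(basis)) - delta2)           # nu * (1 - delta_{2,L})

# The penalty is zero on the three constrained states and nu elsewhere.
for (nl, nr), p in zip(basis, np.diag(penalty)):
    assert p == (0.0 if nl + nr == 2 else nu)
```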
At the numerical level, this limit translates into choosing $\nu$ much larger than the other simulation parameters of the Hamiltonian, i.e., ${\color{black} \nu \gg \mathrm{max} \left \{ |t|,|m|, |g_{\mathrm{e}}|, |g_\mathrm{m}| \right \} }$. However, setting $\nu$ too large in the first optimization steps could lead to local minima or non-physical states, since the variational algorithm would focus on the penalty terms rather than the physical ones. In order to avoid this problem and reach convergence, we adopt a {\color{black} driven optimization}, varying the penalty coefficient $\nu$ in three steps: (i) starting from a very small value of $\nu$ and from a random state of the TTN, which in general does not respect the extra link symmetry, we drive the penalty term with a linear growth of $\nu$ during the first optimization sweeps. In this stage, the optimization focuses mainly on the physical quantities, until we notice a slight rise of the energy: this effect signals that the global optimization procedure of the TTN has become significantly sensitive to the penalty terms. (ii) We then impose a quadratic growth of $\nu$ so that, in the immediately following sweeps, the penalty is increased at a slower rate with respect to the linear regime. (iii) After reaching the maximum desired value of $\nu$, which is an input parameter of the simulation, we keep it fixed, performing the last sweeps in order to ensure the convergence of the energy. This driven-optimization strategy is summarized in Fig. \ref{fig:numerical_convergence}{\color{black} (a)}, where we show the three stages of the penalty coefficient $\nu$ and the typical behavior of the energy difference $\delta e$, computed with respect to the lowest final energy that we reach, as a function of the iterations. 
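The three-stage schedule can be sketched as below; the sweep counts and the values $\nu_{\mathrm{lin}}$, $\nu_{\mathrm{max}}$ are illustrative placeholders (in the simulations they are tuned at run time), and the quadratic stage is parametrized so that it starts with a smaller slope than the linear one:

```python
def nu_schedule(sweep, n_lin=20, n_quad=20, nu_lin=5.0, nu_max=50.0):
    """Driven-optimization penalty coefficient: (i) linear ramp up to nu_lin,
    (ii) quadratic ramp up to nu_max, (iii) constant at nu_max."""
    if sweep <= n_lin:
        return nu_lin * sweep / n_lin
    if sweep <= n_lin + n_quad:
        s = (sweep - n_lin) / n_quad          # slope starts at zero here
        return nu_lin + (nu_max - nu_lin) * s ** 2
    return nu_max

schedule = [nu_schedule(s) for s in range(61)]
assert all(b >= a for a, b in zip(schedule, schedule[1:]))  # nondecreasing
assert schedule[0] == 0.0 and schedule[-1] == 50.0
```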
We can also quantify the global error with respect to the link symmetry during the driven-optimization sweeps, by defining: \begin{eqnarray} \delta L = \sum_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} \left | \left < {\color{black} \mathrm{GS}} \right | ( \hat L_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}} -2 ) \left | {\color{black} \mathrm{GS}} \right > \right | \end{eqnarray} i.e., the sum of the deviations from the exact link constraint $\hat L_{{\color{black} \mathbf{x}},{\color{black} \mathbf{\mu}}}=2$, computed over all the links of the lattice on the ground state. The typical behavior of this quantity is shown in Fig. \ref{fig:numerical_convergence}{\color{black} (b)}: at the end of the optimization procedure, the global error is of the order of $10^{-6}$. We also check the convergence of our TTN algorithms as a function of the bond dimension $\chi$, using up to $\chi = 450$ to ensure the stability of our findings. Depending on the different system sizes and regimes of physical parameters, we estimate the relative error of the energy to be in the range $\left [10^{-2}, 10^{-4} \right ]$. A typical scaling of the energy density as a function of the inverse of the bond dimension $1 / \chi$ is shown in Fig. \ref{fig:numerical_convergence}{\color{black} (c)}. \\ \begin{figure*}[t!] \includegraphics[width=\linewidth]{scaling.pdf} \caption{ \label{fig:scaling} {\bf Finite-size scaling analysis.} (a) Particle density as a function of $m$, for $t=0$, ${\color{black} g^2_{\mathrm{m}}}=0$ and $L=4$. (b) Universal scaling function $\lambda(x)$ close to the transition point ${\color{black} m_{\mathrm{c}}} \approx -0.39$ for ${\color{black} g^2_{\mathrm{m}}}=0$ with critical exponents $\beta \approx 0.16$ and $\nu \approx 1.22$. 
The inset shows the same universal behavior close to the transition point ${\color{black} m_{\mathrm{c}}} \approx 0.22$ in the presence of magnetic interactions with ${\color{black} g^2_{\mathrm{m}}} = 8/{\color{black} g^2_\mathrm{e}}=4$ and the same critical exponents $\beta \approx 0.16$ and $\nu \approx 1.22$. (c) Contour plot of the square root of the residual sum of squares in the $(\nu,\beta)$ plane for the best-fitting values of the critical exponents. } \end{figure*} \textbf{Critical points: scaling analysis.} In this section we show the finite-size scaling analysis for detecting the phase transition separating the {\color{black} charge-crystal phase} and the {\color{black} vacuum phase} and the related location of the critical points. At $t=0$ and neglecting the magnetic interactions, i.e., for ${\color{black} g^2_{\mathrm{m}}}=0$, the Hamiltonian of Eq. \eqref{eq:H_lattice_QED} is diagonal in the local basis described {\color{black} in the Methods' subsection ``Fermionic compact representation of local gauge-invariant site''}, and it is straightforward to prove that the system undergoes a first-order phase transition between the {\color{black} bare vacuum}, with energy $E_{{{\color{black}\mathrm{v}}}}= -m\frac{L^3}{2}$, and the {\color{black} charge-crystal phase}, with energy $E_{{{\color{black}\mathrm{ch}}}} = (m+\frac{g^2_{e}}{2}) \frac{L^3}{2}$. The ground state exhibits a level crossing at the critical value $m^{(0)}_\mathrm{c} = -\frac{{\color{black} g^2_\mathrm{e}}}{4}=-\frac{1}{2}$, obtained by imposing $E_{{{\color{black}\mathrm{v}}}} = E_{{{\color{black}\mathrm{ch}}}}$. This behavior is clearly seen in Fig. \ref{fig:scaling}{\color{black} (a)}, showing a discontinuous transition between the two configurations. In order to understand the behavior of the system for finite hopping $t=1$ and ${\color{black} g^2_{\mathrm{m}}} = 0$, we observe that the density, plotted in Fig. 
\ref{fig:transition}{\color{black} (a)} of the main text, changes continuously as a function of the mass parameter, suggesting a second-order phase transition. Finite-size scaling theory \cite{cardy_1996} implies that the behavior of the system close to a critical point, i.e. for $m \approx {\color{black} m_{\mathrm{c}}}$, can be described in terms of a universal function $\lambda(x)$ such that, for our observable: \begin{eqnarray} \label{eq:scaling_equation} \rho L^{\frac{\beta}{\nu}} = \lambda \left ( L^{\frac{1}{\nu}} (m-{\color{black} m_{\mathrm{c}}}) \right ) \end{eqnarray} \\ where $\beta$ and $\nu$ are critical exponents. In particular, this relation implies that for $m \approx {\color{black} m_{\mathrm{c}}}$, the value of $\rho L^{\frac{\beta}{\nu}}$ is independent of the size of the system. We use this property to estimate the values of ${\color{black} m_{\mathrm{c}}}$, $\beta$, $\nu$. In particular, we consider a grid of values for these parameters and, for each set of values, we fit our points $\rho L^{\frac{\beta}{\nu}}$ with a high-degree polynomial $f \left( L^{\frac{1}{\nu}} (m-{\color{black} m_{\mathrm{c}}}) \right )$. We compute the residual sum of squares (RSS) and select the set of values that minimizes this quantity, producing the best data collapse. We obtain the critical point ${\color{black} m_{\mathrm{c}}} \approx -0.39$ and the critical exponents $\beta \approx 0.16$ and $\nu \approx 1.22$. In Fig. \ref{fig:scaling}{\color{black} (b)} we show the collapse of our numerical results onto the same universal function $\lambda(x)$, and in Fig. \ref{fig:scaling}{\color{black} (c)} a contour plot of the square root of the residual sum of squares in the $(\nu,\beta)$ plane for the best-fitting values. 
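The grid-search collapse can be sketched on synthetic data generated directly from the scaling form of Eq. \eqref{eq:scaling_equation} (the scaling function $\lambda$, the sizes and the grids below are illustrative; only the fitting logic mirrors the analysis):

```python
import numpy as np

beta, nu, mc = 0.16, 1.22, -0.39            # values quoted in the text
Ls = (4, 8)
ms = np.linspace(mc - 0.3, mc + 0.3, 25)

def rho(m, L):
    """Synthetic density obeying the scaling form, with an arbitrary
    smooth scaling function lambda(x) = 1.5 + tanh(x)."""
    return L ** (-beta / nu) * (1.5 + np.tanh(L ** (1 / nu) * (m - mc)))

def rss(mc_t, beta_t, nu_t, deg=5):
    """Residual sum of squares of a polynomial fit to the collapsed data."""
    x = np.concatenate([L ** (1 / nu_t) * (ms - mc_t) for L in Ls])
    y = np.concatenate([rho(ms, L) * L ** (beta_t / nu_t) for L in Ls])
    p = np.polyfit(x, y, deg)
    return float(np.sum((y - np.polyval(p, x)) ** 2))

# The true parameters collapse both sizes onto a single curve, so their
# RSS beats that of a mislocated critical point.
assert rss(mc, beta, nu) < rss(mc + 0.2, beta, nu)
```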
By extending the previous considerations and the finite-size scaling analysis to the case with magnetic interactions with ${\color{black} g^2_{\mathrm{m}}} = 8/{\color{black} g^2_\mathrm{e}}=4$, we check again for the presence of a critical point and the values of the critical exponents through the formula of Eq. \eqref{eq:scaling_equation}. We obtain a universal scaling function for ${\color{black} m_{\mathrm{c}}} \approx 0.22$ and the same critical exponents $\beta \approx 0.16$, $\nu \approx 1.22$, as reported in the inset of Fig. \ref{fig:scaling}{\color{black} (b)}. Thus, while the transition and its universality remain unchanged in the presence of the magnetic coupling, the critical point is shifted toward positive values of the mass parameter, signaling that the magnetic interactions lead to a visible enhancement of the production of charges and anticharges out of the vacuum. Although a more precise determination of the numerical values of the critical exponents would require additional extensive analysis that is beyond the scope of this paper, our findings strongly indicate the presence of a phase transition at finite $m$ for the three-dimensional lattice model of QED (compare with other previously investigated transitions in lattice QED, e.g. Ref.~\cite{GOCKELER1990527}). \section{Data Availability} The data that support the findings of this study are available from the corresponding author upon reasonable request. Source data of the figures are attached to the manuscript. \section{Code Availability} The authors are available for discussing details of the implementation of the computer codes developed and used in this study upon reasonable request. 
\section{Acknowledgments} The authors kindly acknowledge support from the Italian PRIN2017 and Fondazione CARIPARO, the Horizon 2020 research and innovation programme under grant agreement No 817482 (Quantum Flagship - PASQuanS), the QuantERA projects QTFLAG and QuantHEP, the DFG project TWITTER, the INFN project QUANTUM, and the Austrian Research Promotion Agency (FFG) via QFTE project AutomatiQ. We acknowledge computational resources provided by Cloud Veneto, CINECA, the BwUniCluster and ATOS Bull. \section{Author Contributions} G.M. and T.F. implemented the TTN algorithms for the three-dimensional model, building on a code previously developed by T.F. for 2D LGT; G.M. performed the numerical simulations and analyzed the results; P.S. provided the basis for the theoretical strategy of mapping the lattice gauge theory into the proper computational framework. The interpretation of the results was mainly done by G.M., P.S. and S.M.; G.M. and P.S. wrote the paper; S.M. conceived, supervised and directed the project. All authors discussed the results, contributed to refining the manuscript and approved it. \section{Competing Interests} The authors declare no competing interests.
\section{Introduction} A \emph{neighbor} of a vertex $v$ in $G$ is a vertex that is adjacent to $v$. A vertex \emph{dominates} itself and its neighbors. A \emph{dominating set} of a graph $G$ is a set $S$ of vertices of $G$ such that every vertex in $G$ is dominated by a vertex in $S$. The \emph{domination number} of $G$, denoted $\gamma(G)$, is the minimum cardinality of a dominating set in $G$, while the \emph{upper domination number} of $G$, denoted $\Gamma(G)$, is the maximum cardinality of a minimal dominating set in $G$. A minimal dominating set of cardinality~$\Gamma(G)$ we call a $\Gamma$-\emph{set of $G$}. The \emph{open neighborhood} of a vertex $v$ in $G$ is the set of neighbors of $v$, denoted $N_G(v)$. Thus, $N_G(v) = \{u \in V(G) \, | \, uv \in E(G)\}$. The \emph{closed neighborhood of $v$} is the set $N_G[v] = \{v\} \cup N_G(v)$. If the graph $G$ is clear from context, we simply write $N(v)$ and $N[v]$ rather than $N_G(v)$ and $N_G[v]$, respectively. As defined by Alan Goldman and introduced by Slater~\cite{Sl77}, for a subset $S$ of vertices in a graph $G$, a vertex $v \in S$ is an \emph{enclave} of $S$ if it and all of its neighbors are also in $S$; that is, if $N[v] \subseteq S$. A set $S$ is \emph{enclaveless} if it does not contain any enclaves. We note that a set $S$ is a dominating set of a graph $G$ if and only if the set $V(G) \setminus S$ is enclaveless. The \emph{enclaveless number} of $G$, denoted $\Psi(G)$, is the maximum cardinality of an enclaveless set in $G$, and the \emph{lower enclaveless number} of $G$, denoted by $\psi(G)$, is the minimum cardinality of a maximal enclaveless set. The domination and enclaveless numbers of a graph $G$ are related by the following equations. \begin{obser} \label{ob:relate} If $G$ is a graph of order~$n$, then $\gamma(G) + \Psi(G) = n = \Gamma(G) + \psi(G)$. \end{obser} The domination game on a graph $G$ consists of two players, \emph{Dominator} and \emph{Staller}, who take turns choosing a vertex from $G$.
Each vertex chosen must dominate at least one vertex not dominated by the vertices previously chosen. Upon completion of the game, the set of chosen (played) vertices is a dominating set in $G$. The goal of Dominator is to end the game with a minimum number of vertices chosen, while Staller has the opposite goal and wishes to end the game with as many vertices chosen as possible. The Dominator-start domination game and the Staller-start domination game are the domination games in which Dominator and Staller, respectively, choose the first vertex. We refer to these simply as the D-game and S-game, respectively. The \emph{D-game domination number}, $\dstart(G)$, of $G$ is the minimum possible number of moves in a D-game when both players play optimally. The \emph{S-game domination number}, $\sstart(G)$, of $G$ is defined analogously for the S-game. The domination game was introduced by Bre{\v{s}}ar, Klav{\v{z}}ar, and Rall~\cite{BrKlRa10} and has subsequently been studied extensively in the literature (see, for example,~\cite{Bu2015a,Bu2015b,HeKi14,HeLo17,KiWeZa13,ko-2017,Sc17}). Phillips and Slater~\cite{PhSl01,PhSl02} introduced what they called the \emph{competition}-\emph{enclaveless game}. The game is played by two players, Maximizer and Minimizer, on some graph $G$. They take turns in constructing a maximal enclaveless set $S$ of $G$. That is, in each turn a player plays a vertex $v$ that is not in the set $S$ of the vertices already chosen and such that $S \cup \{v\}$ does not contain an enclave, until there is no such vertex. We call such a vertex a \emph{playable vertex}. The goal of Maximizer is to make the final set $S$ as large as possible, while the goal of Minimizer is to make the final set $S$ as small as possible. The \emph{competition}-\emph{enclaveless game number}, or simply the \emph{enclaveless game number}, $\Psi_g^+(G)$ of $G$ is the number of vertices chosen when Maximizer starts the game and both players play an optimal strategy according to the rules.
The \emph{Minimizer-start competition}-\emph{enclaveless game number}, or simply the \emph{Minimizer-start enclaveless game number}, $\Psi_g^-(G)$, of $G$ is the number of vertices chosen when Minimizer starts the game and both players play an optimal strategy according to the rules. The competition-enclaveless game, which has been studied for example in~\cite{GoHe18,He18,PhSl01,PhSl02,SeSl07}, has not yet been explored in as much depth as the domination game. In this paper we continue the study of the competition-enclaveless game. Our study is mainly motivated by the following conjectures, which have yet to be settled, where an isolate-free graph is a graph that does not contain an isolated vertex. \begin{conj} \label{conj1} If $G$ is an isolate-free graph of order $n$, then $\Psi_g^+(G) \ge \frac{1}{2}n$. \end{conj} Conjecture~\ref{conj1} was first posed as a question by Slater~\cite{Slater} to the second author on 8th May 2015, and subsequently posed as a conjecture in~\cite{He18}. We refer to Conjecture~\ref{conj1} for general isolate-free graphs as the $\mathbf{\frac{1}{2}}$-\textbf{Enclaveless Game Conjecture}. We also pose the following conjecture for the Minimizer-start enclaveless game, where $\delta(G)$ denotes the minimum degree of the graph $G$. \begin{conj} \label{conj3} If $G$ is a graph of order $n$ with $\delta(G) \ge 2$, then $\Psi_g^-(G) \ge \frac{1}{2}n$. \end{conj} We proceed as follows. In Section~\ref{S:compare}, we discuss the domination game versus the enclaveless game, and show that these two games are very different and are not related. In Section~\ref{S:Fbounds}, we present fundamental bounds on the enclaveless game number and the Minimizer-start enclaveless game number. In Sections~\ref{S:regular} and~\ref{S:clawfree}, we show that the $\frac{1}{2}$-Enclaveless Game Conjecture holds for regular graphs and claw-free graphs, respectively. We use the standard notation $[k] = \{1,\ldots,k\}$.
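The static quantities $\gamma$, $\Gamma$, $\psi$ and $\Psi$ are all computable by brute force on small graphs, which gives a quick sanity check of Observation~\ref{ob:relate}. The sketch below is purely illustrative and not part of the paper; the adjacency-dictionary representation and all helper names are our own.

```python
from itertools import combinations

def is_dominating(adj, S):
    # every vertex is in S or has a neighbor in S
    return all(v in S or adj[v] & S for v in adj)

def is_enclaveless(adj, S):
    # no v in S with N[v] contained in S (i.e. no enclave)
    return not any(adj[v] <= S for v in S)

def all_subsets(adj):
    V = sorted(adj)
    for k in range(len(V) + 1):
        for C in combinations(V, k):
            yield set(C)

def domination_numbers(adj):
    """(gamma, Gamma): min/max cardinality of a minimal dominating set."""
    sizes = [len(S) for S in all_subsets(adj)
             if is_dominating(adj, S)
             and all(not is_dominating(adj, S - {v}) for v in S)]
    return min(sizes), max(sizes)

def enclaveless_numbers(adj):
    """(psi, Psi): min/max cardinality of a maximal enclaveless set."""
    V = set(adj)
    sizes = [len(S) for S in all_subsets(adj)
             if is_enclaveless(adj, S)
             and all(not is_enclaveless(adj, S | {v}) for v in V - S)]
    return min(sizes), max(sizes)

# The star K_{1,3}: gamma = 1, Gamma = 3, psi = 1, Psi = 3.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
gamma, Gamma = domination_numbers(star)
psi, Psi = enclaveless_numbers(star)
assert gamma + Psi == len(star) == Gamma + psi
```

On the star $K_{1,3}$ the four invariants are $(1,3,1,3)$, so both equalities of Observation~\ref{ob:relate} hold with $n = 4$.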
\section{Game domination versus enclaveless game} \label{S:compare} Although the domination and enclaveless numbers of a graph $G$ are related by the equation $\gamma(G) + \Psi(G) = n$ (see Observation~\ref{ob:relate}), as remarked in~\cite{He18} the competition-enclaveless game is very different from the domination game. For example, if $k \ge 3$ and $G$ is a tree with exactly two non-leaf vertices both of which have $k$ leaf neighbors, that is, if $G$ is a double star $S(k,k)$, then $\Psi_g^+(G) = \Psi_g^-(G) = k+1$, while $\dstart(G) = 3$ and $\sstart(G) = 4$. If $n \ge 1$, then Ko\v{s}mrlj~\cite{ko-2017} showed that $\sstart(P_n) = \left\lceil \frac{n}{2} \right\rceil$ and that $\dstart(P_n) = \left\lceil\frac{n}{2}\right\rceil-1$ if $n \equiv 3 \, (\modo \, 4)$ and $\dstart(P_n) = \left\lceil \frac{n}{2} \right\rceil$ otherwise. This is in contrast to the enclaveless game numbers of a path $P_n$ on $n \ge 2$ vertices determined by Phillips and Slater~\cite{PhSl02}. \begin{theorem}{\rm (\cite{PhSl02})} \label{enclave-path} If $n \ge 2$, then $\Psi_g^+(P_n) = \lfloor \frac{3n+1}{5} \rfloor$ and $\Psi_g^-(P_n) = \lfloor \frac{3n}{5} \rfloor$. \end{theorem} We remark that for the competition-enclaveless game the numbers $\Psi_g^+(G)$ and $\Psi_g^-(G)$ can vary greatly. For example, if $n \ge 1$ and $G$ is a star $K_{1,n}$, then $\Psi_g^+(G) = n$ while $\Psi_g^-(G) = 1$. However, for the domination game the Dominator-start game domination number and the Staller-start game domination number can differ by at most~$1$. The most significant difference between the domination game and the competition-enclaveless game is that the so-called Continuation Principle holds for the domination game but does not hold for the competition-enclaveless game.
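The path values of Theorem~\ref{enclave-path}, as well as the star gap just mentioned, can be reproduced for small orders by solving the game exactly. The following memoized minimax sketch is our own illustration: it simply plays the game by definition and is not the proof technique of~\cite{PhSl02}; all names are ours.

```python
from functools import lru_cache

def enclaveless_game(adj, maximizer_starts):
    """Number of moves in the competition-enclaveless game under optimal play."""
    V = frozenset(adj)

    def has_enclave(S):
        # an enclave is a played vertex all of whose neighbors are played
        return any(adj[v] <= S for v in S)

    @lru_cache(maxsize=None)
    def value(S, max_turn):
        moves = [v for v in V - S if not has_enclave(S | {v})]
        if not moves:                    # S is now a maximal enclaveless set
            return len(S)
        best = max if max_turn else min
        return best(value(S | {v}, not max_turn) for v in moves)

    return value(frozenset(), maximizer_starts)

def path(n):
    return {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}
```

For $2 \le n \le 7$ the solver agrees with Theorem~\ref{enclave-path}, and on the star $K_{1,3}$ it returns $3$ for the Maximizer-start game and $1$ for the Minimizer-start game.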
Another significant difference between the domination game and the competition-enclaveless game is that upon completion of the domination game, the set of played vertices is a dominating set although not necessarily a minimal dominating set, while upon completion of the competition-enclaveless game, the set of played vertices is always a maximal enclaveless set. Thus, the enclaveless game numbers of a graph $G$ are always squeezed between the lower enclaveless number $\psi(G)$ of $G$ and the enclaveless number $\Psi(G)$ of $G$. We state this formally as follows. \begin{obser} \label{ob:bound} If $G$ is a graph of order~$n$, then \[ \psi(G) \le \Psi_g^-(G) \le \Psi(G) \hspace*{0.5cm} \mbox{and} \hspace*{0.5cm} \psi(G) \le \Psi_g^+(G) \le \Psi(G). \] \end{obser} A graph $G$ is \emph{well}-\emph{dominated} if all the minimal dominating sets of $G$ have the same cardinality. Examples of well-dominated graphs include the complete graph $K_n$, $C_7$, $P_{10}$, the corona of any graph, and the graph formed from two vertex disjoint cycles of order~$5$ joined by a single edge. Finbow, Hartnell and Nowakowski~\cite{FiHaNo88} characterized the well-dominated graphs with no $3$-cycles or $4$-cycles. As observed earlier, upon completion of the enclaveless game, the set of played vertices is always a maximal enclaveless set. Hence, if $G$ is a well-dominated graph of order~$n$, then any sequence of legal moves by Maximizer and Minimizer (regardless of strategy) in the enclaveless game played on $G$ ends in $n - \gamma(G)$ moves. Thus as a consequence of Observation~\ref{ob:bound}, we have the following interesting connection between the enclaveless game and the class of well-dominated graphs. \begin{obser} \label{ob:well-dom} If $G$ is a well-dominated graph of order~$n$, then $\Psi_g^-(G) = \Psi_g^+(G) = n - \gamma(G)$.
\end{obser} It is well-known that if $G$ is an isolate-free graph of order $n$, then $\gamma(G) \le \frac{1}{2}n$, implying by Observation~\ref{ob:relate} that $\Psi(G) = n - \gamma(G) \ge \frac{1}{2}n$. Hence one might think that $\gamma_g(G) \le \Psi_g^+(G)$ for such a graph $G$ with no isolated vertex. We now provide an infinite class of graphs to show that the ratio $\gamma_g/\Psi_g^+$ of these two graphical invariants can be strictly larger than, and bounded away from,~$1$. The \emph{corona} $\coro(G)$ of a graph $G$, also denoted $G \circ K_1$ in the literature, is the graph obtained from $G$ by adding for each vertex $v$ of $G$ a new vertex $v'$ and the edge $vv'$ (and so, the vertex $v'$ has degree~$1$ in $\coro(G)$). The edge $vv'$ is called a \emph{pendant edge}. \begin{theorem} \label{t:ratio} If $n \ge 2$ is an integer and $\cG_n$ denotes the class of all isolate-free graphs $G$ of order~$n$, then \[ \sup_{n} \, \frac{ \gamma_g(G) }{\Psi_g^+(G)} \ge \frac{11}{10} \] where the supremum is taken over all graphs $G \in \cG_n$. \end{theorem} \proof Let $n=10q$ for some positive integer $q$ and $G_n$ be the corona of the path $P_n$. That is, the vertex set of $G_n$ is $X_n \cup Y_n$ where $X_n = \{x_i : i \in [n]\}$ and $Y_n=\{y_i : i \in [n]\}$. The edge set of $G_n$ is $\{x_ix_{i+1} : i \in [n-1] \} \cup \{x_iy_i : i \in [n]\}$. For each $k$ such that $0 \le k \le q-1$ we let $B_k$ be the subgraph of $G_n$ induced by $\cup_{i=1}^{10} \{x_{10k+i}, y_{10k+i}\}$. The D-game is played on $G_n$. At any point in this game we say that $B_i$ is open if no vertex in $B_i$ has been played by either player; otherwise we say $B_i$ is not open. By the Continuation Principle we may assume that any vertex played by Dominator belongs to $X_n$. We denote by $d_1, d_2, \ldots $ and $s_1, s_2, \ldots $ the sequence of moves played by Dominator and Staller in the domination game. We now provide a strategy for Staller to show that $\gamma_g(G_n) \ge 11q$. 
\begin{enumerate} \item If Dominator plays $d_1=x_i$ where $10k+1 \le i \le 10k+5$ for some $k$ such that $0 \le k \le q-1$, then Staller plays $s_1=y_{10k+8}$. \item If Dominator plays $d_1=x_i$ where $10k+6 \le i \le 10k+10$ for some $k$ such that $0 \le k \le q-1$, then Staller plays $s_1=y_{10k+3}$. \end{enumerate} If Dominator plays a vertex $d_j$ in an open $B_i$ for some $j \ge 1$ and $i$ with $0 \le i \le q-1$, then Staller plays $s_j$ also in $B_i$ as described in (a) and (b) above. On the other hand, suppose that Dominator plays a vertex $d_j$ in a $B_i$ that is not open. If this move of Dominator is his second move played in $B_i$, then Staller plays the support vertex that is adjacent to the leaf she played earlier in the game when $B_i$ changed from being open to being not open. This support vertex was a legal move for Staller because of the structure of the graph $G_n$. When the game ends, at least one vertex from each pair $x_k,y_k$ must have been played by one of the players. Furthermore, the above strategy for Staller shows that she can ensure that at least eleven vertices are played from each of $B_0, \ldots, B_{q-1}$. Therefore, $\gamma_g(G_n) \ge 11q$. Now, observe that every minimal dominating set of $G_n$ has cardinality $n$, which implies by Observation~\ref{ob:relate} that every maximal enclaveless set of $G_n$ also has cardinality $n$; that is, $\psi(G_n) = \Psi(G_n) = n$, where we recall that $\psi(G)$ denotes the cardinality of the smallest maximal enclaveless set in $G$ and $\Psi(G)$ is the cardinality of a largest enclaveless set in $G$. Hence by Observation~\ref{ob:bound}, $\Psi_g^+(G_n) = n$. Consequently, we have shown \[ \sup_n \frac{\gamma_g(G_n)}{\Psi_g^+(G_n)} \ge \frac{11}{10}\,.
\] The desired result follows by noting that $G_n \in \cG_{2n}$.~\QED \section{Fundamental bounds} \label{S:Fbounds} In this section, we establish some fundamental bounds on the (Maximizer-start) enclaveless game number and the Minimizer-start enclaveless game number. We establish next lower and upper bounds on the enclaveless numbers of a graph in terms of the maximum degree and order of the graph. \begin{prop} \label{p:bound1} If $G$ is an isolate-free graph of order~$n$ with maximum degree $\Delta(G) = \Delta$, then \[ \left( \frac{1}{\Delta+1} \right) n \le \psi(G) \le \Psi(G) \le \left( \frac{\Delta}{\Delta+1} \right) n. \] \end{prop} \proof If $G$ is any isolate-free graph of order $n$ and maximum degree $\Delta$, then $\gamma(G) \ge \frac{n}{\Delta + 1}$, with equality precisely when $G$ has a minimum dominating set consisting of vertices of degree~$\Delta$ that is a $2$-packing, where a $2$-packing is a set $S$ of vertices that are pairwise at distance at least~$3$ apart. Hence, by Observation~\ref{ob:relate}, \[ \Psi(G) = n - \gamma(G) \le n - \frac{n}{\Delta+1}= \left( \frac{\Delta}{\Delta+1} \right) n. \] On the other hand, let $D$ be a minimal dominating set of maximum cardinality, and so $|D| = \Gamma(G)$. Let $\barD = V(G) \setminus D$, and so $|\barD| = n - |D|$. Let $\ell$ be the number of edges between $D$ and $\barD$. Since $D$ is a minimal dominating set, every vertex in $D$ has at least one neighbor in $\barD$, and so $\ell \ge |D|$. Since $G$ has maximum degree~$\Delta$, every vertex in $\barD$ has at most $\Delta$ neighbors in $D$, and so $\ell \le \Delta \cdot |\barD| = \Delta (n - |D|)$. Hence, $|D| \le \Delta (n - |D|)$, implying that $\Gamma(G) = |D| \le \Delta n /(\Delta+1)$. Thus by Observation~\ref{ob:relate}, \[ \psi(G) = n - \Gamma(G) \ge n - \left( \frac{\Delta}{\Delta+1} \right) n = \left( \frac{1}{\Delta+1} \right) n.
\] This completes the proof of Proposition~\ref{p:bound1}.~\QED \medskip By Observation~\ref{ob:bound}, the set of played vertices in either the Maximizer-start enclaveless game or the Minimizer-start enclaveless game is an enclaveless set of $G$. Thus as an immediate consequence of Proposition~\ref{p:bound1}, we have the following result. \begin{prop} \label{prop:trivialupperbound} If $G$ is an isolate-free graph of order~$n$ with maximum degree $\Delta(G) = \Delta$, then \[ \left( \frac{1}{\Delta+1} \right) n \le \Psi_g^-(G) \le \left( \frac{\Delta}{\Delta+1} \right) n \hspace*{0.5cm} \mbox{and} \hspace*{0.5cm} \left( \frac{1}{\Delta+1} \right) n \le \Psi_g^+(G) \le \left( \frac{\Delta}{\Delta+1} \right) n. \] \end{prop} \medskip The lower bound in Proposition~\ref{prop:trivialupperbound} on $\Psi_g^-(G)$ is achieved, for example, by taking $G = K_{1,\Delta}$ for any given $\Delta \ge 1$ in which case $\Psi_g^-(G) = 1 = ( \frac{1}{\Delta+1} ) n$ where $n = n(G) = \Delta + 1$. We show next that the upper bounds in Proposition~\ref{prop:trivialupperbound} are realized for infinitely many connected graphs. \begin{prop} \label{prop:achieve1} There exist infinitely many positive integers $n$ along with a connected graph $G$ of order~$n$ satisfying \[ \Psi_g^-(G) = \Psi_g^+(G) = \left( \frac{\Delta(G)}{\Delta(G)+1} \right) n. \] \end{prop} \proof Let $r$ be an integer such that $r \ge 4$ and let $m$ be any positive integer. For each $i \in [m]$, let $H_i$ be a graph obtained from a complete graph of order $r+1$ by removing the edge $x_iy_i$ for two distinguished vertices $x_i$ and $y_i$. The graph $F_m$ is obtained from the disjoint union of $H_1,\ldots,H_m$ by adding the edges $y_ix_{i+1}$ for each $i \in [m]$ where the subscripts are computed modulo $m$. The vertices $x_i$ and $y_i$ are called connectors in $F_m$, and each of the $r-1$ vertices in the set $V(H_i) \setminus \{x_i,y_i\}$ is called a hidden vertex of $H_i$. 
Note that $F_m$ is $r$-regular and has order $n=m(r+1)$. We first show that $\Psi_g^-(F_m) = ( \frac{r}{r+1} ) n$. Suppose the Minimizer-start enclaveless game is played on $F_m$. We provide a strategy for Maximizer that forces exactly $rm$ vertices to be played. Maximizer's strategy is to make sure that all the connector vertices in the graph are played. If he can accomplish this, then exactly $rm$ vertices will be played when the game ends because of the structure of $F_m$. Suppose that at some point in the game Minimizer plays a vertex from some $H_j$. If one of the connector vertices, say $x_j$, is playable, then Maximizer responds by playing $x_j$. If both connector vertices have already been played and some hidden vertex, say $w$, in $H_j$ is playable, then Maximizer plays $w$. If no vertex of $H_j$ is playable, then Maximizer plays a connector vertex from $H_i$ for some $i \neq j$ if one is playable and otherwise plays any playable vertex. Since $H_k$ contains at least $3$ hidden vertices for each $k \in [m]$, it follows that Maximizer can guarantee that all the connector vertices are played by following this strategy. This implies that for each $i \in [m]$, exactly one hidden vertex of $H_i$ is not played during the course of the game. That is, the set of played vertices has cardinality \[ rm = \left( \frac{r}{r+1} \right) m(r+1) = \left( \frac{\Delta(F_m)}{\Delta(F_m)+1} \right) n\,, \] where we recall that $\Delta(F_m) = r$. Thus, \[ \Psi_g^-(F_m) \ge \left( \frac{\Delta(F_m)}{\Delta(F_m)+1} \right) n. \] By Proposition~\ref{prop:trivialupperbound}, \[ \Psi_g^-((F_m)) \le \left( \frac{\Delta(F_m)}{\Delta(F_m)+1} \right) n. \] Consequently, $\Psi_g^-(F_m) = ( \frac{\Delta(F_m)}{\Delta(F_m)+1} ) n$. If the Maximizer-start enclaveless game is played on $F_m$, then the same strategy as above for Maximizer forces $rm$ vertices to be played (even with the relaxed condition that $r$ be an integer larger than $2$). 
Thus as before, $\Psi_g^+(F_m) = ( \frac{\Delta(F_m)}{\Delta(F_m)+1} ) n$.~\QED \section{Regular graphs} \label{S:regular} In this section, we show that the $\frac{1}{2}$-Enclaveless Game Conjecture (see Conjecture~\ref{conj1}) holds for the class of regular graphs, as does Conjecture~\ref{conj3} for the Minimizer-start enclaveless game. For a set $S \subset V(G)$ of vertices in a graph $G$ and a vertex $v \in S$, we define the \emph{$S$-external private neighborhood} of a vertex $v$, abbreviated $\epn_G(v,S)$, as the set of all vertices outside $S$ that are adjacent to $v$ but to no other vertex of $S$; that is, \[ \epn_G(v,S) = \{w \in V(G) \setminus S \mid N_G(w) \cap S = \{v\}\}. \] We define an \emph{$S$-external private neighbor} of $v$ to be a vertex in $\epn_G(v,S)$. \begin{theorem}\label{t:regular} If $G$ is a $k$-regular graph of order $n$, where $k \ge 1$, then $\Psi_g^+(G) \ge \frac{1}{2}n$ and $\Psi_g^-(G) \ge \frac{1}{2}n$. \end{theorem} \proof Suppose the Maximizer-start enclaveless game is played on $G$. Let $S$ denote the set of all vertices played when the game ends. By definition of the game, the set $S$ is a maximal enclaveless set in $G$. By Observations~\ref{ob:relate} and~\ref{ob:bound}, we have $|S| = \Psi_g^+(G) \ge \psi(G) = n-\Gamma(G)$. It therefore suffices to establish the theorem by proving that $\Gamma(G) \le \frac{1}{2}n$. This inequality is proved in~\cite{SoHe13}, but we prove it here for the sake of completeness. Let $D$ be an arbitrary minimal dominating set of $G$. Denote by $D_1$ the set of vertices in $D$ that have a $D$-external private neighbor. That is, $D_1 = \{x \in D : \epn_G(x,D) \ne \emptyset\}$. In addition, let $D_2 = D \setminus D_1$. Since $D$ is a minimal dominating set, the set $D_2$ consists of those vertices in $D$ that are isolated in the subgraph $G[D]$ of $G$ induced by $D$. Let \[ C_1=\bigcup_{x\in D_1} \epn_G(x,D) \hspace*{0.5cm} \mbox{and} \hspace*{0.5cm} C_2 = V(G) \setminus (D \cup C_1).
\] We note that by definition, there are no edges in $G$ joining a vertex in $D_2$ and a vertex in $C_1$. That is, each vertex in $D_2$ has $k$ neighbors in $C_2$. Since every vertex has degree $k$, each vertex of $C_2$ has at most $k$ neighbors in $D_2$. Denote by $\ell$ the number of edges of the form $uv$ where $u\in D_2$ and $v \in C_2$. It now follows that $k|D_2|=\ell \le k|C_2|$. That is, $|D_2| \le |C_2|$. Now \[ |D|=|D_1|+|D_2| \le |C_1|+|C_2|=n-|D|\,, \] which shows that $\Gamma(G)\le |D| \le \frac{1}{2}n$. Similarly, $\Psi_g^-(G) \ge \psi(G) = n-\Gamma(G) \ge \frac{1}{2}n$.~\QED \medskip We remark that the lower bound in Theorem~\ref{t:regular} is achieved for $k=1$ and $k=2$ as shown by $K_2$ and $C_4$, respectively. However, it remains an open problem to characterize the graphs achieving equality in Theorem~\ref{t:regular} for each value of $k \ge 1$. A similar proof to that of Theorem~\ref{t:regular} will establish the same lower bounds by restricting the minimum degree and forbidding induced stars of a certain size. \begin{prop} \label{prop:nolargestars} Let $k$ be a positive integer. If $G$ is a graph of order $n$ with minimum degree at least $k$ and with no induced $K_{1,k+1}$, then both $\Psi_g^+(G)$ and $\Psi_g^-(G)$ are at least $\frac{1}{2}n$. \end{prop} \proof Let $D$ be a minimal dominating set of $G$. The sets $D_1,D_2,C_1$ and $C_2$ as well as $\ell$ are defined as in the proof of Theorem~\ref{t:regular}. In this case we get $k|D_2| \le \ell$ and $\ell \le k|C_2|$. The first of these inequalities follows since $\delta(G) \ge k$ and the second inequality follows from the fact that $D_2$ is independent and the assumption that $G$ is $K_{1,k+1}$-free. Once again we conclude that $|D_2| \le |C_2|$, and the result follows.~\QED \section{Claw-free graphs} \label{S:clawfree} A graph is \emph{claw}-\emph{free} if it does not contain the star $K_{1,3}$ as an induced subgraph. 
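This definition is easy to test mechanically: a graph contains an induced $K_{1,3}$ precisely when some vertex has three pairwise non-adjacent neighbors. The helper below is our own illustrative sketch, not part of the paper; graphs are assumed given as adjacency dictionaries.

```python
from itertools import combinations

def is_claw_free(adj):
    """True iff no vertex has three pairwise non-adjacent neighbors,
    i.e. the graph has no induced K_{1,3}."""
    for v in adj:
        for a, b, c in combinations(sorted(adj[v]), 3):
            if b not in adj[a] and c not in adj[a] and c not in adj[b]:
                return False
    return True
```

For instance, every cycle and every path passes this test, while the star $K_{1,3}$ itself fails it.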
In this section, we show that the $\frac{1}{2}$-Enclaveless Game Conjecture (see Conjecture~\ref{conj1}) holds for the class of claw-free graphs with no isolated vertex, as does Conjecture~\ref{conj3} for the Minimizer-start enclaveless game. For this purpose, we recall the definition of an irredundant set. For a set $S$ of vertices in a graph $G$ and a vertex $v \in S$, the \emph{$S$-private neighborhood} of $v$ is the set \[ \pn_G[v,S] = \{w \in V \mid N_G[w] \cap S = \{v\}\}. \] If the graph $G$ is clear from context, we simply write $\pn[v,S]$ rather than $\pn_G[v,S]$. We note that if the vertex $v$ is isolated in $G[S]$, then $v \in \pn[v,S]$. A vertex in the set $\pn[v,S]$ is called an $S$-\emph{private neighbor} of $v$. The set $S$ is an \emph{irredundant set} if every vertex of $S$ has an $S$-private neighbor. The \emph{upper irredundance number} $\IR(G)$ is the maximum cardinality of an irredundant set in $G$. The \emph{independence number} $\alpha(G)$ of $G$ is the maximum cardinality of an independent set of vertices in $G$. An independent set of vertices of $G$ of cardinality $\alpha(G)$ we call an $\alpha$-\emph{set of $G$}. Every maximum independent set in a graph is minimal dominating, and every minimal dominating set is irredundant. Hence we have the following inequality chain. \begin{obser}{\rm (\cite{CoHeMi78})} \label{ob:dom_chain} For every graph $G$, we have $\alpha(G) \le \Gamma(G) \le \IR(G)$. \end{obser} The inequality chain in Observation~\ref{ob:dom_chain} is part of the canonical domination chain which was first observed by Cockayne, Hedetniemi, and Miller~\cite{CoHeMi78} in 1978. We shall need the following upper bounds on the independence number of a claw-free graph. \begin{theorem} \label{t:indep} If $G$ is a connected claw-free graph of order~$n$ and minimum degree~$\delta \ge 1$, then the following hold. \\ [-26pt] \begin{enumerate} \item {\rm (\cite{Ga99,RySc95})} If $\delta = 1$, then $\alpha(G) \le \frac{1}{2}(n+1)$.
\item {\rm (\cite{Faudree92,LiVi90})} If $\delta \ge 2$, then $\alpha(G) \le \frac{2n}{\delta + 2}$. \end{enumerate} \end{theorem} In 2004, Favaron~\cite{Fa03} established the following upper bound on the irredundance number of a claw-free graph. \begin{theorem}{\rm (\cite{Fa03})} \label{t:bound_IR1} If $G$ is a connected, claw-free graph of order~$n$, then $\IR(G) \le \frac{1}{2}(n+1)$. Moreover, if $\IR(G) = \frac{1}{2}(n+1)$, then $\alpha(G) = \Gamma(G) = \IR(G)$. \end{theorem} If $G$ is a connected, claw-free graph of order~$n$ and minimum degree~$\delta \ge 2$, then by Theorem~\ref{t:indep}(b) we have $\alpha(G) \le \frac{1}{2}n$. In this case when $\delta \ge 2$, if $\IR(G) = \frac{1}{2}(n+1)$ holds, then by Theorem~\ref{t:bound_IR1} we have $\alpha(G) = \frac{1}{2}(n+1)$, a contradiction. Hence when $\delta \ge 2$, we must have $\IR(G) \le \frac{1}{2}n$. We state this formally as follows. \begin{cor}{\rm (\cite{Fa03})} \label{c:bound_IR1} If $G$ is a connected, claw-free graph of order~$n$ and minimum degree at least~$2$, then $\IR(G) \le \frac{1}{2}n$. \end{cor} We are now in a position to prove the following result. \begin{theorem}\label{t:clawfree1} If $G$ is a connected claw-free graph of order $n$ and $\delta(G) \ge 2$, then \[ \Psi_g^+(G) \ge \frac{1}{2}n \hspace{0.5cm} \mbox{and} \hspace{0.5cm} \Psi_g^-(G) \ge \frac{1}{2}n. \] \end{theorem} \proof Suppose the Minimizer-start enclaveless game is played on $G$. Let $S$ denote the set of all vertices played when the game ends. By definition of the game, the set $S$ is a maximal enclaveless set in $G$. By Observations~\ref{ob:relate},~\ref{ob:bound} and~\ref{ob:dom_chain} and Corollary~\ref{c:bound_IR1}, we have \[ |S| = \Psi_g^-(G) \ge \psi(G) = n - \Gamma(G) \ge n - \IR(G) \ge n - \frac{1}{2}n = \frac{1}{2}n, \] as desired.
Similarly, $\Psi_g^+(G) \ge \psi(G) \ge n - \IR(G) \ge \frac{1}{2}n$.~\QED \medskip By Theorem~\ref{t:clawfree1}, we note that Conjecture~\ref{conj3} holds for connected claw-free graphs. In order to prove that Conjecture~\ref{conj1} holds for connected claw-free graphs, we need the characterization due to Favaron~\cite{Fa03} of the graphs achieving equality in the bound of Theorem~\ref{t:bound_IR1}. For this purpose, we recall that a vertex $v$ of a graph $G$ is a \emph{simplicial vertex} if its neighborhood $N_G(v)$ induces a complete subgraph of $G$. A \emph{clique} of a graph $G$ is a maximal complete subgraph of $G$. The \emph{clique graph} of $G$ has the set of cliques of $G$ as its vertex set, and two vertices in the clique graph are adjacent if and only if they intersect as cliques of $G$. A \emph{non}-\emph{trivial tree} is a tree of order at least~$2$. Favaron~\cite{Fa03} defined the family $\cF$ of claw-free graphs $G$ as follows. Let $T_1, \ldots, T_q$ be $q \ge 1$ non-trivial trees. Let $L_i$ be the line graph of the corona $\coro(T_i)$ of the tree $T_i$ for $i \in [q]$. If $q = 1$, let $G = L_1$. If $q \ge 2$, let $G$ be the graph constructed from the line graphs $L_1, L_2, \ldots, L_q$ by choosing $q-1$ pairs $\{x_{ij},x_{ji}\}$ such that the following holds. \\ [-24pt] \begin{enumerate} \item[$\bullet$] $x_{ij}$ and $x_{ji}$ are simplicial vertices of $L_i$ and $L_j$, respectively, where $i \ne j$. \item[$\bullet$] The $2(q-1)$ vertices from the $q-1$ pairs $\{x_{ij},x_{ji}\}$ are all distinct vertices. \item[$\bullet$] Contracting each pair of vertices $x_{ij}$ and $x_{ji}$ into one common vertex $c_{ij}$ results in a graph whose clique graph is a tree. \end{enumerate} To illustrate the above construction of a graph $G$ in the family $\cF$ consider, for example, the case when $q = 3$ and the trees $T_1, T_2, T_3$ are given in Figure~\ref{f:familyF}.
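The smallest instance of this construction takes $q = 1$ and $T_1 = K_2$: the corona $\coro(K_2)$ is the path $P_4$, whose line graph is $P_3$, and indeed $\IR(P_3) = 2 = \frac{1}{2}(n+1)$, attaining the bound of Theorem~\ref{t:bound_IR1}. This can be confirmed by computing $\IR$ by brute force straight from the definition; the sketch below is our own illustration, with helper names that are not from the paper.

```python
from itertools import combinations

def IR(adj):
    """Upper irredundance number: maximum size of a set S in which every
    vertex v has an S-private neighbor, i.e. some w with N[w] cap S = {v}."""
    V = sorted(adj)

    def has_private(v, S):
        return any((adj[w] | {w}) & S == {v} for w in V)

    best = 0
    for k in range(1, len(V) + 1):
        for C in combinations(V, k):
            S = set(C)
            if all(has_private(v, S) for v in S):
                best = max(best, k)
    return best

# L(coro(K_2)) is the path P_3; here IR = 2 = (n + 1)/2.
p3 = {0: {1}, 1: {0, 2}, 2: {1}}
assert IR(p3) == 2
```

The same routine checks Corollary~\ref{c:bound_IR1} on small claw-free graphs of minimum degree two, such as cycles.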
\begin{figure}[htb] \begin{center} \begin{tikzpicture}[scale=.7,style=thick,x=1cm,y=1cm] \def\vr{2.75pt} \def\vB{14pt} \path (1,0) coordinate (v1); \path (2,1) coordinate (v2); \path (3,0) coordinate (v3); \path (4,1) coordinate (v4); \path (5,0) coordinate (v5); \path (6,1) coordinate (v6); \path (6,-0.15) coordinate (v6p); \path (7,0) coordinate (v7); \path (8,1) coordinate (v8); \path (9,0) coordinate (v9); \path (10,1) coordinate (v10); \path (11,0) coordinate (v11); \path (12,1) coordinate (v12); \path (13,0) coordinate (v13); \draw (v1) -- (v2); \draw (v2) -- (v3); \draw (v3) -- (v4); \draw (v4) -- (v5); \draw (v5) -- (v6); \draw (v5) -- (v8); \draw (v6) -- (v7); \draw (v8) -- (v9); \draw (v9) -- (v10); \draw (v10) -- (v11); \draw (v11) -- (v12); \draw (v12) -- (v13); \draw (v4) -- (v6); \draw (v6) -- (v8); \draw (v10) -- (v12); \draw (v1) [fill=white] circle (\vr); \draw (v2) [fill=black] circle (\vr); \draw (v3) [fill=white] circle (\vr); \draw (v4) [fill=black] circle (\vr); \draw (v5) [fill=white] circle (\vr); \draw (v6) [fill=black] circle (\vr); \draw (v7) [fill=white] circle (\vr); \draw (v8) [fill=black] circle (\vr); \draw (v9) [fill=white] circle (\vr); \draw (v10) [fill=black] circle (\vr); \draw (v11) [fill=white] circle (\vr); \draw (v12) [fill=black] circle (\vr); \draw (v13) [fill=white] circle (\vr); \draw (v4) to[out=60,in=120, distance=1cm] (v8); \draw[anchor = north] (v3) node {{\small $c_{12}$}}; \draw[anchor = north] (v9) node {{\small $c_{23}$}}; \draw[anchor = north] (v6p) node {{\small $G$}}; \path (-2,2.5) coordinate (u0); \path (-1,3) coordinate (u1); \path (0,4) coordinate (u2); \path (0,2.85) coordinate (u2p); \path (1,3) coordinate (u3); \path (3,3) coordinate (u4); \path (4,4) coordinate (u5); \path (5,3) coordinate (u6); \path (6,4) coordinate (u7); \path (6,2.85) coordinate (u7p); \path (7,3) coordinate (u8); \path (8,4) coordinate (u9); \path (9,3) coordinate (u10); \path (11,3) coordinate (u11); \path (12,4) coordinate 
(u12); \path (13,3) coordinate (u13); \path (13,2.85) coordinate (u13p); \path (14,4) coordinate (u14); \path (15,3) coordinate (u15); \draw (u1) -- (u2); \draw (u2) -- (u3); \draw (u4) -- (u5); \draw (u5) -- (u6); \draw (u6) -- (u7); \draw (u6) -- (u9); \draw (u7) -- (u8); \draw (u9) -- (u10); \draw (u11) -- (u12); \draw (u12) -- (u13); \draw (u13) -- (u14); \draw (u14) -- (u15); \draw (u5) -- (u7); \draw (u7) -- (u9); \draw (u12) -- (u14); \draw (u1) [fill=white] circle (\vr); \draw (u2) [fill=black] circle (\vr); \draw (u3) [fill=white] circle (\vr); \draw (u4) [fill=white] circle (\vr); \draw (u5) [fill=black] circle (\vr); \draw (u6) [fill=white] circle (\vr); \draw (u7) [fill=black] circle (\vr); \draw (u8) [fill=white] circle (\vr); \draw (u9) [fill=black] circle (\vr); \draw (u10) [fill=white] circle (\vr); \draw (u11) [fill=white] circle (\vr); \draw (u12) [fill=black] circle (\vr); \draw (u13) [fill=white] circle (\vr); \draw (u14) [fill=black] circle (\vr); \draw (u15) [fill=white] circle (\vr); \draw (u5) to[out=60,in=120, distance=1cm] (u9); \draw[anchor = north] (u3) node {{\small $x_{12}$}}; \draw[anchor = north] (u4) node {{\small $x_{21}$}}; \draw[anchor = north] (u10) node {{\small $x_{23}$}}; \draw[anchor = north] (u11) node {{\small $x_{32}$}}; \draw[anchor = north] (u2p) node {{\small $L_1$}}; \draw[anchor = north] (u7p) node {{\small $L_2$}}; \draw[anchor = north] (u13p) node {{\small $L_3$}}; \draw[anchor = north] (u0) node {{\small $\Downarrow$}}; \path (-2,5.5) coordinate (w0); \path (-1,6) coordinate (w1); \path (-1,7) coordinate (w2); \path (0,5.85) coordinate (w2p); \path (1,7) coordinate (w3); \path (1,6) coordinate (w4); \path (3,6) coordinate (w5); \path (3,7) coordinate (w6); \path (5,6) coordinate (w7); \path (5,7) coordinate (w8); \path (6,5.85) coordinate (w8p); \path (7,6) coordinate (w9); \path (7,7) coordinate (w10); \path (9,6) coordinate (w11); \path (9,7) coordinate (w12); \path (11,6) coordinate (w13); \path (11,7) 
coordinate (w14); \path (13,6) coordinate (w15); \path (13,7) coordinate (w16); \path (13,5.75) coordinate (w15p); \path (15,6) coordinate (w17); \path (15,7) coordinate (w18); \draw (w1) -- (w2); \draw (w2) -- (w3); \draw (w3) -- (w4); \draw (w5) -- (w6); \draw (w7) -- (w8); \draw (w9) -- (w10); \draw (w11) -- (w12); \draw (w6) -- (w8); \draw (w8) -- (w10); \draw (w13) -- (w14); \draw (w15) -- (w16); \draw (w17) -- (w18); \draw (w14) -- (w16); \draw (w16) -- (w18); \draw (w1) [fill=white] circle (\vr); \draw (w2) [fill=black] circle (\vr); \draw (w3) [fill=black] circle (\vr); \draw (w4) [fill=white] circle (\vr); \draw (w5) [fill=white] circle (\vr); \draw (w6) [fill=black] circle (\vr); \draw (w7) [fill=white] circle (\vr); \draw (w8) [fill=black] circle (\vr); \draw (w9) [fill=white] circle (\vr); \draw (w10) [fill=black] circle (\vr); \draw (w11) [fill=white] circle (\vr); \draw (w12) [fill=black] circle (\vr); \draw (w13) [fill=white] circle (\vr); \draw (w14) [fill=black] circle (\vr); \draw (w15) [fill=white] circle (\vr); \draw (w16) [fill=black] circle (\vr); \draw (w17) [fill=white] circle (\vr); \draw (w18) [fill=black] circle (\vr); \draw (w8) to[out=60,in=120, distance=1cm] (w12); \draw[anchor = north] (w2p) node {{\small $\coro(T_1)$}}; \draw[anchor = north] (w8p) node {{\small $\coro(T_2)$}}; \draw[anchor = north] (w15p) node {{\small $\coro(T_3)$}}; \draw[anchor = north] (w0) node {{\small $\Downarrow$}}; \path (-2,8.5) coordinate (x0); \path (-1,9) coordinate (x1); \path (1,9) coordinate (x2); \path (0,8.85) coordinate (x2p); \path (3,9) coordinate (x3); \path (5,9) coordinate (x4); \path (6,8.85) coordinate (x4p); \path (7,9) coordinate (x5); \path (9,9) coordinate (x6); \path (11,9) coordinate (x7); \path (13,9) coordinate (x8); \path (13,8.75) coordinate (x8p); \path (15,9) coordinate (x9); \draw (x1) -- (x2); \draw (x3) -- (x4); \draw (x4) -- (x5); \draw (x7) -- (x8); \draw (x8) -- (x9); \draw (x1) [fill=black] circle (\vr); \draw (x2) 
[fill=black] circle (\vr); \draw (x3) [fill=black] circle (\vr); \draw (x4) [fill=black] circle (\vr); \draw (x5) [fill=black] circle (\vr); \draw (x6) [fill=black] circle (\vr); \draw (x7) [fill=black] circle (\vr); \draw (x8) [fill=black] circle (\vr); \draw (x9) [fill=black] circle (\vr); \draw (x4) to[out=60,in=120, distance=1cm] (x6); \draw[anchor = north] (x2p) node {{\small $T_1$}}; \draw[anchor = north] (x4p) node {{\small $T_2$}}; \draw[anchor = north] (x8p) node {{\small $T_3$}}; \draw[anchor = north] (x0) node {{\small $\Downarrow$}}; \end{tikzpicture} \end{center} \vskip -0.6cm \caption{An illustration of the construction of a graph $G$ in the family $\cF$} \label{f:familyF} \end{figure} We note that if $G$ is an arbitrary graph of order~$n$ in the family~$\cF$, then $n \ge 3$ is odd and the vertex set of $G$ can be partitioned into two sets $A$ and $B$ such that the following holds. \\ [-24pt] \begin{enumerate} \item[$\bullet$] $|A| = \frac{1}{2}(n-1)$ and $|B| = \frac{1}{2}(n+1)$. \item[$\bullet$] The set $B$ is an independent set. \item[$\bullet$] Each vertex in $A$ has exactly two neighbors in $B$. \end{enumerate} We refer to the partition $(A,B)$ as the partition associated with $G$. For the graph $G \in \cF$ illustrated in Figure~\ref{f:familyF}, the set $A$ consists of the darkened vertices and the set $B$ consists of the white vertices. We are now in a position to state the characterization due to Favaron~\cite{Fa03} of the graphs achieving equality in the bound of Theorem~\ref{t:bound_IR1}. \begin{theorem}{\rm (\cite{Fa03})} \label{t:bound_IR2} If $G$ is a connected, claw-free graph of order~$n \ge 3$, then $\IR(G) \le \frac{1}{2}(n+1)$, with equality if and only if $G \in \cF$. \end{theorem} We prove next the following property of graphs in the family~$\cF$. \begin{lemma} \label{l:lemma1} If $G \in \cF$ and $(A,B)$ is the partition associated with $G$, then the set $B$ is the unique $\IR$-set of $G$. 
\end{lemma} \proof We proceed by induction on the order $n \ge 3$ of $G \in \cF$. If $n = 3$, then $G = P_3$. In this case, the set $B$ consists of the two leaves of $G$, and the desired result is immediate. This establishes the base case. Suppose that $n \ge 5$ and that the result holds for all graphs $G' \in \cF$ of order~$n'$, where $3 \le n' < n$. Let $Q$ be an $\IR$-set of $G$. By construction of the graph $G$, the set $B$ contains at least two vertices of degree~$1$ in $G$. Let $v$ be an arbitrary vertex in $B$ of degree~$1$ in $G$, and let $u$ be its neighbor. We note that $u \in A$. Let $G' = G - \{u,v\}$ and let $G'$ have order~$n'$, and so $n' = n - 2$. Let $A' = A \setminus \{u\}$ and $B' = B \setminus \{v\}$. By construction of the graph $G$ and our choice of the vertex~$v$, we note that $G' \in \cF$ and that $(A',B')$ is the partition associated with $G'$. Applying the inductive hypothesis to $G'$, the set $B'$ is the unique $\IR$-set of $G'$. Let $w$ be the second neighbor of $u$ in $G$ that belongs to the set $B$, and so $N_G(u) \cap B = \{v,w\}$. By the structure of the graph $G \in \cF$, we note that $N_G[w] \subset N_G[u]$ and that the subgraph of $G$ induced by $N_G[w]$ is a clique. Suppose, to the contrary, that $Q \ne B$. Let $Q'$ be the restriction of $Q$ to the graph $G'$, and so $Q' = Q \cap V(G')$. Suppose that $u \in Q$. Since $Q$ is an irredundant set, this implies that $v \notin Q$. If $w \in Q$, then $\pn[w,Q] = \emptyset$, contradicting the fact that $Q$ is an irredundant set. Hence, $w \notin Q$, and so $Q' \ne B'$. By the inductive hypothesis, the set $Q'$ is therefore not an $\IR$-set of $G'$, and so $|Q'| < \IR(G')$. Thus, $\IR(G) = |Q| = |Q'| + 1 \le (\IR(G') - 1) + 1 = \frac{1}{2}(n'+1) = \frac{1}{2}(n-1) < \IR(G)$, a contradiction. Hence, $u \notin Q$. In this case, $\IR(G) = |Q| \le |Q'| + 1 \le \IR(G') + 1 = \frac{1}{2}(n'+1) + 1 = \frac{1}{2}(n+1) = \IR(G)$. Hence, we must have equality throughout this inequality chain. 
This implies that $v \in Q$ and $|Q'| = \IR(G')$. By the inductive hypothesis, we therefore have $Q' = B'$. Hence, $Q = Q' \cup \{v\} = B' \cup \{v\} = B$. Thus, the set $B$ is the unique $\IR$-set of $G$.~\QED \begin{cor} \label{c:cor1} If $G \in \cF$ and $(A,B)$ is the partition associated with $G$, then the set $B$ is the unique $\alpha$-set of $G$ and the unique $\Gamma$-set of $G$. \end{cor} \proof By Theorem~\ref{t:bound_IR1}, $\alpha(G) = \Gamma(G) = \IR(G) = \frac{1}{2}(n+1)$. By Lemma~\ref{l:lemma1}, the set $B$ is the unique $\IR$-set of $G$. Since every $\alpha$-set of $G$ is an $\IR$-set of $G$ and $\alpha(G) = \IR(G)$, this implies that $B$ is the unique $\alpha$-set of $G$. Since every $\Gamma$-set of $G$ is an $\IR$-set of $G$ and $\Gamma(G) = \IR(G)$, this implies that $B$ is the unique $\Gamma$-set of $G$.~\QED \medskip We show next that Conjecture~\ref{conj1} holds for connected claw-free graphs. \begin{theorem} \label{t:clawfree2} If $G$ is a connected, claw-free graph of order~$n \ge 2$, then the following holds. \\ [-26pt] \begin{enumerate} \item $\Psi_g^+(G) \ge \frac{1}{2}n$. \1 \item If $G \ne P_3$, then $\Psi_g^-(G) \ge \frac{1}{2}n$. \end{enumerate} \end{theorem} \proof Let $G$ be a connected, claw-free graph of order~$n \ge 2$. Suppose the Maximizer-start enclaveless game is played on $G$. Let $S$ denote the set of all vertices played when the game ends. By definition of the game, the set $S$ is a maximal enclaveless set in $G$. If $\IR(G) \le \frac{1}{2}n$, then, as in the proof of Theorem~\ref{t:bound_IR1}, we have $|S| = \Psi_g^+(G) \ge \psi(G) \ge n - \IR(G) \ge \frac{1}{2}n$. Hence, we may assume that $\IR(G) > \frac{1}{2}n$, for otherwise the desired result follows. By Theorem~\ref{t:bound_IR2}, $\IR(G) = \frac{1}{2}(n+1)$ and $G \in \cF$. Let $(A,B)$ be the partition associated with $G$. We show that in this case $\Psi_g^+(G) > \psi(G)$. By Observation~\ref{ob:relate}, $\Gamma(G) + \psi(G) = n$.
Moreover, the complement of every $\Gamma$-set of $G$ is a maximal enclaveless set, and the complement of every $\psi$-set of $G$ is a minimal dominating set. By Corollary~\ref{c:cor1}, the set $B$ is the unique $\Gamma$-set of $G$. These observations imply that the complement of the set $B$, namely the set $A$, is the unique $\psi$-set of $G$. Thus every maximal enclaveless set of $G$ of cardinality~$\psi(G)$ is precisely the set $A$. We now return to the Maximizer-start enclaveless game played on $G$. If Maximizer plays as his first move any vertex from the set $B$ and thereafter both players play optimally, then the resulting set $S^*$ of moves played during the course of the game contains a vertex of $B$ and is therefore different from the set $A$. Since the set $A$ is the unique $\psi$-set of $G$, this implies that $|S^*| > \psi(G)$. We therefore have the following inequality chain, where the first inequality, namely $\Psi_g^+(G) \ge |S^*|$, is due to the fact that the first move of Maximizer from the set $B$ may not be an optimal move. \[ \Psi_g^+(G) \ge |S^*| \ge \psi(G) + 1 = (n - \Gamma(G)) + 1 = n - \frac{1}{2}(n+1) + 1 = \frac{1}{2}(n+1). \] This shows that $\Psi_g^+(G) \ge \frac{1}{2}n$, as desired. Suppose next that $G \ne P_3$ and the Minimizer-start enclaveless game is played on $G$. Let $S$ denote the set of all vertices played when the game ends. By definition of the game, the set $S$ is a maximal enclaveless set in $G$. If $\IR(G) \le \frac{1}{2}n$, then, as before, we have $|S| = \Psi_g^-(G) \ge \psi(G) \ge n - \IR(G) \ge \frac{1}{2}n$. Hence, we may assume that $\IR(G) > \frac{1}{2}n$, for otherwise the desired result follows. By Theorem~\ref{t:bound_IR2}, $\IR(G) = \frac{1}{2}(n+1)$ and $G \in \cF$. Let $(A,B)$ be the partition associated with $G$. We show that in this case $\Psi_g^-(G) > \psi(G)$. Since $G \ne P_3$, we note that there are at least two vertices in the set $B$ at distance at least~$3$ apart in $G$.
Thus, whatever first move Minimizer plays, Maximizer can always respond by playing as his first move a vertex chosen from the set $B$. Hence, as before, the resulting set of played vertices in the game is different from the set $A$. Recall that upon completion of the game the resulting set is a maximal enclaveless set. Therefore, Maximizer has a strategy to finish the game in at least~$\psi(G) + 1$ moves, implying that $\Psi_g^-(G) \ge \frac{1}{2}(n+1)$.~\QED \medskip By Theorem~\ref{t:clawfree2}(a), we note that Conjecture~\ref{conj1} holds for connected claw-free graphs. Moreover, by Theorem~\ref{t:clawfree2}(b), we note that Conjecture~\ref{conj3} holds for connected claw-free graphs even if we relax the minimum degree two condition and replace it with the requirement that the graph is isolate-free and different from the path $P_3$. \medskip
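To see why the path $P_3$ must be excluded in part (b) of Theorem~\ref{t:clawfree2}, consider the following short calculation (included here only as an illustration). Let $G = P_3$ have center $c$ and leaves $u$ and $v$. In the Minimizer-start enclaveless game, Minimizer can play the vertex $c$ as the first move. The set $\{c\}$ is a maximal enclaveless set, since adding either leaf, say $u$, creates the enclave $u$ with $N[u] = \{u,c\} \subseteq \{c,u\}$. The game therefore ends after one move, and
\[
\Psi_g^-(P_3) = 1 < \tfrac{3}{2} = \tfrac{1}{2}n.
\]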
\section{Introduction} Continuous-time jump Markov processes are broadly used in stochastic models of operations research. In many applications continuous-time jump Markov processes are defined by transition rates, often called $Q$-functions. Each $Q$-function defines Kolmogorov's backward and forward equations, and transition probabilities of the jump Markov process defined by the $Q$-function satisfy these equations. If transition rates are unbounded, Kolmogorov's equations may have multiple solutions (see, e.g., Anderson~\cite[Chap.~4, Example~1.2]{Anderson}, Doob~\cite[Chap. 6]{Doob}, Kendall~\cite{Ken}, Reuter~\cite{Reu}), and the relation between Kolmogorov's equations and the corresponding transition probabilities is not trivial. For example, in queueing theory birth and death processes have unbounded transition rates in each of the following three situations: when arrival rates depend on the state of the system and are unbounded, in queues with an infinite number of servers, and in queues with reneging. This paper answers the questions of how a nonhomogeneous jump Markov process can be defined for a given $Q$-function and how its transition probability can be found as a solution of Kolmogorov's backward and forward equations. These questions were studied by Feller~\cite{Fel} for continuous $Q$-functions and a standard Borel state space, by Ye et al.~\cite{YGHL} for measurable $Q$-functions and a countable state space, and by Feinberg et al.~\cite{FMS} for measurable $Q$-functions and a standard Borel state space. All these papers considered $Q$-functions satisfying certain boundedness conditions.
This paper extends the results from Feinberg et al.~\cite{FMS} to more general classes of unbounded $Q$-functions, strengthens some of the results from \cite{FMS}, and provides proofs of the following two facts: (i) (Lemma~\ref{l:A-eq}(a)) Feller's assumption on the boundedness of a $Q$-function, Assumption~\ref{Feller}, is equivalent to the boundedness of a $Q$-function at each state, Assumption~\ref{LB}, and (ii) (Theorem~\ref{thm:intFKE}) Kolmogorov's forward equation is equivalent to the integral equation~\eqref{int-FKE}. The first fact is introduced and the validity of equation~\eqref{int-FKE} is stated in \cite{FMS} without detailed proofs. For a topological space $S,$ its Borel $\sigma$-field (the $\sigma$-field generated by open subsets of $S$) is always denoted by $\B(S),$ and the sets in $\B(S)$ are called {\it Borel subsets} of $S$. Let $\BB{R}$ be the real line endowed with the Euclidean metric. A topological space $(S, \B(S))$ is called a {\it standard Borel space} if there exists a bijection $f$ from $(S, \B(S))$ to a Borel subset of $\BB{R}$ such that the mappings $f$ and $f^{-1}$ are measurable. In this paper, measurability and Borel measurability are used synonymously. Let $({\bf X}, \B({\bf X}))$ be a standard Borel space, called the state space, and let $[T_0, T_1[$ be a finite or an infinite interval in $\BB{R}_+:= [0, \infty[$.
In this paper, we always assume that $T_0< T_1.$ A function $P(u, x; t, B)$, where $u \in [T_0, T_1[$, $t \in ]u, T_1[$, $x \in {\bf X},$ and $B \in \B({\bf X}),$ is called a {\it transition function} if it takes values in $[0,1]$ and satisfies the following properties: \begin{itemize} \item[(i)] for all $u,x,t$ the function $P(u,x;t,\cdot)$ is a measure on $({\bf X}, \mathfrak{B}({\bf X}))$; \item[(ii)] for all $B$ the function $P(u,x; t, B)$ is Borel measurable in $(u,x,t);$ \item[(iii)] $P(u,x;t,B)$ satisfies the Chapman-Kolmogorov equation \begin{equation} \label{CKE} P(u, x; t, B) = \int_{{\bf X}} P(s,y; t, B)P(u, x; s, dy), \qquad u < s < t. \end{equation} \end{itemize} A transition function $P$ is called {\it regular} if $P(u,x;t,{\bf X}) = 1$ for all $u,x,t$ in the domain of $P$. A stochastic process $\{\BB{X}_t: t \in [T_0, T_1[\}$ with values in ${\bf X}$, defined on the probability space $(\Omega, \C{F}, \BB{P})$ and adapted to the filtration $\{\C{F}_t\}_{t \in [T_0, T_1[}$, is called Markov if $\BB{P}(\BB{X}_t \in B \mid \C{F}_u) = \BB{P}(\BB{X}_t \in B \mid \BB{X}_u)$, $\BB{P}$-a.s. for all $u \in [T_0, T_1[,$ $t \in ]u,T_1[$, and $B \in \B({\bf X})$. Each Markov process has a transition function $P$ such that $\BB{P}(\BB{X}_t \in B \mid \BB{X}_u) = P(u,\BB{X}_u; t, B),$ $\BB{P}$-a.s.; see Kuznetsov~\cite{Kuz}, where the equivalence of two definitions of a Markov process given by Kolmogorov~\cite{Kol} is established. In addition, if a Markov process is a jump process, that is, if each sample path of the process is a right-continuous piecewise constant function in $t$ that has a finite or countable number of discontinuity points on $t \in [T_0, T_1[$, then the Markov process is called a {\it jump Markov process}.
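A classical example of a regular transition function (included here only for illustration) is provided by the homogeneous Poisson process with rate $\lambda > 0$ on ${\bf X} = \{0,1,2,\ldots\}$:
\[
P(u, x; t, B) = \sum_{k \ge 0:\ x+k \in B} e^{-\lambda (t-u)} \frac{(\lambda (t-u))^k}{k!}, \qquad T_0 \le u < t < T_1.
\]
For this function, the Chapman-Kolmogorov equation \eqref{CKE} reduces to the convolution identity for Poisson distributions, and $P(u,x;t,{\bf X}) = 1$ for all $u,x,t$ in the domain of $P$.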
A function $q(x,t,B)$, where $x \in {\bf X}$, $t \in [T_0, T_1[$, and $B \in \B({\bf X})$, is called a {\it Q-function} if it satisfies the following properties: \begin{itemize} \item[(a)] for all $x,t$ the function $q(x,t,\cdot)$ is a signed measure on $({\bf X}, \mathfrak{B}({\bf X}))$ such that $q(x,t,{\bf X}) \leq 0$ and $0 \leq q(x, t, B \setminus \{x\}) < \infty$ for all $B \in \B({\bf X})$; \item[(b)] for all $B$ the function $q(x,t,B)$ is measurable in $(x,t).$ \end{itemize} In addition to properties (a) and (b), if $q(x,t,{\bf X}) = 0$ for all $x,t$, then the $Q$-function $q$ is called {\it conservative}. Note that any $Q$-function can be transformed into a conservative $Q$-function by adding an absorbing state $\bar{x}$ to ${\bf X}$ with $q(x,t,\{\bar{x}\}):= -q(x,t,{\bf X})$, $q(\bar{x}, t, {\bf X}) := 0$, and $q(\bar{x}, t, \{\bar{x}\}) := 0$, where $x \in {\bf X}$ and $t \in [T_0, T_1[$. To simplify the presentation, in this paper we always assume that $q$ is conservative. The same arguments as in Remark~4.1 in Feinberg et al.~\cite{FMS} explain how the main formulations change when the $Q$-function $q$ is not conservative. A $Q$-function $q$ is called {\it continuous} if it is continuous in $t \in [T_0, T_1[$. Feller~\cite{Fel} studied Kolmogorov's backward and forward equations for continuous $Q$-functions and provided explicit formulae for a transition function that satisfies Kolmogorov's backward and forward equations. If the constructed transition function is regular, Feller~\cite[Theorem 3]{Fel} showed that this transition function is the unique solution of Kolmogorov's backward equation. Though Feller~\cite{Fel} focused on regular transition functions, it follows from the proof of Theorem 3 in Feller~\cite{Fel} that the transition function constructed there is the minimal solution of Kolmogorov's backward equation.
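As a concrete illustration (this example is not used in what follows), the birth and death processes mentioned in the introduction correspond to the conservative $Q$-function on ${\bf X} = \{0,1,2,\ldots\}$ defined by
\[
q(x,t,\{x+1\}) := \lambda_x(t), \qquad q(x,t,\{x-1\}) := \mu_x(t)\,{\bf I}\{x \ge 1\}, \qquad q(x,t,\{x\}) := -\lambda_x(t) - \mu_x(t)\,{\bf I}\{x \ge 1\},
\]
where the birth rates $\lambda_x(t)$ and death rates $\mu_x(t)$ are nonnegative and measurable in $t$. If, for example, $\lambda_x(t) = \lambda x$ for some $\lambda > 0$, then the transition rates are unbounded in $x$, although they are bounded at each fixed state $x$.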
Feinberg et al.~\cite{FMS} showed for a measurable $Q$-function that the transition function constructed by Feller~\cite{Fel} is the minimal solution of Kolmogorov's backward and forward equations, and it is the transition function of the jump Markov process defined by the random measure whose compensator is defined via the $Q$-function. In this paper, we show that the minimal solution of Kolmogorov's backward and forward equations is the transition function of the corresponding jump Markov process under more general boundedness assumptions on $Q$-functions than those assumed in \cite{FMS}. \section{Assumptions and description of main results} \label{S-A} In this section, we describe several assumptions on unbounded $Q$-functions and the results of this paper. Let $q(x,t) := -q(x, t, \{x\})$ for $x \in {\bf X}$ and $t \in [T_0, T_1[,$ and let ${\bar q}(x):= \sup_{t \in [T_0, T_1[} q(x,t)$ for $x\in {\bf X}.$ Feller~\cite{Fel} studied Kolmogorov's equations for continuous $Q$-functions under the following assumption. \begin{assumption}[{\it Feller's assumption}] \label{Feller} There exist Borel subsets $B_n,$ $n = 1,2,\ldots,$ of ${\bf X}$ such that $\sup_{x \in B_n} {\bar q}(x) < n$ for all $n = 1,2,\ldots$ and $B_n \uparrow {\bf X}$ as $n \to \infty$. \end{assumption} Feinberg et al.~\cite{FMS} studied Kolmogorov's equations for measurable $Q$-functions under the following assumption with $T_0 = 0$ and $T_1 = \infty$. \begin{assumption}[{\it Boundedness of $q$}] \label{LB} ${\bar q}(x) < \infty $ for each $x \in {\bf X}$. \end{assumption} As mentioned in Feinberg et al.~\cite[p.~262]{FMS}, Assumptions~\ref{Feller} and \ref{LB} are equivalent; see Lemma~\ref{l:A-eq}(a) for details. In this section, we introduce two more general assumptions.
\begin{assumption}[{\it Local boundedness of $q$}] \label{ALB} $\sup_{t\in [T_0,s[} q(x,t)<\infty$ for each $s\in ]T_0,T_1[$ and $x\in {\bf X}.$ \end{assumption} \begin{assumption}[{\it Local $\mathcal{L}^1$ boundedness of $q$}] \label{L1} $\int_{T_0}^s q(x,t) dt < \infty$ for each $s \in ]T_0, T_1[$ and $x \in {\bf X}.$ \end{assumption} The following lemma compares Assumptions~\ref{Feller}--\ref{L1}. \begin{lemma} \label{l:A-eq} The following statements hold for a measurable $Q$-function $q:$ (a) Assumptions~\ref{Feller} and \ref{LB} are equivalent; (b) Assumption~\ref{LB} implies Assumption~\ref{ALB}; (c) Assumption~\ref{ALB} implies Assumption~\ref{L1}. \end{lemma} \begin{proof} (a) Let $\{B_n, n = 1,2,\ldots\}$ be a sequence of Borel subsets of ${\bf X}$ satisfying the properties stated in Assumption~\ref{Feller}. Then for each $x \in {\bf X}$ there exists an $n \in \{1,2,\ldots\}$ such that $x \in B_n$ and therefore ${\bar q}(x) < n$. Thus, Assumption~\ref{Feller} implies Assumption~\ref{LB}. To prove that Assumption~\ref{LB} implies Assumption~\ref{Feller}, define $C_n := \{x \in {\bf X}: {\bar q}(x) \ge n\}, n = 1,2,\ldots\ .$ Since each $C_n =\cap_{k=1}^\infty {\rm proj}_{\bf X} (\{(x,t) \in ({\bf X} \times [T_0, T_1[) : q(x,t) \ge n-k^{-1}\})$ is a countable intersection of projections of Borel sets, the sets $C_n$ are analytic for all $n = 1,2,\ldots;$ see Bertsekas and Shreve~\cite[Proposition~7.39 and Corollary~7.35.2]{BS}. In addition, Assumption~\ref{LB} implies that $\bigcap_{n = 1}^\infty C_n = \emptyset.$ Thus, in view of the Novikov separation theorem, Kechris~\cite[Theorem~28.5]{Kechris}, there exist Borel subsets $Z_n,$ $n=1,2,\ldots,$ of ${\bf X}$ such that $C_n \subseteq Z_n$ and $\bigcap_{n = 1}^\infty Z_n = \emptyset.$ This fact implies that $Z_n^c \subseteq C_n^c$ and $\bigcup_{n = 1}^\infty Z_n^c = {\bf X},$ where the sets $Z_n^c$ and $C_n^c$ are complements of the sets $Z_n$ and $C_n$, respectively.
Let $B_n:= \cup_{m = 1}^n Z_m^c$ for all $n = 1,2,\ldots\,.$ The Borel sets $B_n,$ $n=1,2,\ldots,$ satisfy the properties stated in Assumption~\ref{Feller}. (b,c) Statements (b) and (c) are obvious. \qed \end{proof} \begin{remark} \label{T-eq} Under Assumption~\ref{Feller}, which, as stated in Lemma~\ref{l:A-eq}(a), is equivalent to Assumption~\ref{LB}, Feller~\cite{Fel} studied Kolmogorov's equations for the time parameter $t\in [T_0, T_1[.$ Under Assumption~\ref{LB}, Feinberg et al.~\cite{FMS} studied Kolmogorov's equations for the time parameter $t\in [T_0, T_1[ = [0,\infty[.$ It is apparent that the formulation of results for an arbitrary interval $[T_0, T_1[,$ where $0\le T_0< T_1\le\infty,$ is more general than their formulation for the interval $[0,\infty[.$ In fact, these two formulations are equivalent under Assumption~\ref{LB} holding for the corresponding time intervals. Indeed, a $Q$-function $q,$ defined for $t\in [T_0,T_1[$ and satisfying Assumption~\ref{LB} on this interval, can be extended to all $t\in [0,\infty[$ by setting $q(x,t,B):=0$ for $x\in {\bf X},$ $t\in [0, T_0[\cup [T_1,\infty[,$ and $B\in\mathfrak{B}({\bf X}).$ The extended $Q$-function satisfies Assumption~\ref{LB} for $t\in [0,\infty[.$ Since solutions of Kolmogorov's equations \eqref{eq:BKDE} and \eqref{eq:FKDE} for the extended $Q$-function are constants in $t,$ when $t\in [0,T_0[$ and $t\in [T_1,\infty[,$ and since Kolmogorov's equations for the original $Q$-function $q$ and the extended $Q$-function coincide when $t \in [T_0, T_1[,$ there is a one-to-one correspondence between solutions of Kolmogorov's equations for the $Q$-function $q$ and for the extended $Q$-function. 
Since Assumption~\ref{LB} is assumed in \cite{FMS}, the results obtained in~\cite{FMS} for the problem formulations for the interval $[0,\infty[$ hold for an arbitrary interval $[T_0,T_1[.$ \end{remark} As explained in Remark~\ref{T-eq}, if a $Q$-function $q$ satisfies Assumption~\ref{LB} on the interval $[T_0, T_1[,$ its extension to $t\in [0,\infty[$ defined in Remark~\ref{T-eq} satisfies the same assumption on $[0, \infty[.$ The following example illustrates that this need not be the case if $q$ satisfies Assumption~\ref{ALB} or Assumption~\ref{L1}. Hence, following Feller~\cite{Fel}, we formulate the results in this paper for an arbitrary interval $[T_0, T_1[$ with $0\le T_0 < T_1<\infty.$ \begin{example} \label{ex:B-eq} {\it A $Q$-function $q$ satisfies Assumption~\ref{ALB} on the interval $[T_0, T_1[,$ while its extension to $t\in [0,\infty[$ defined in Remark~\ref{T-eq} does not satisfy even the weaker Assumption~\ref{L1} when $T_0 = 0$ and $T_1 = \infty$.} Fix an arbitrary $T_1 \in ]0, \infty[.$ Let $T_0 := 0$, ${\bf X} := \{1,2,\ldots\},$ and $q(x,t):= \frac{1}{T_1-t}$ for all $x \in {\bf X}, 0 \le t < T_1$. Then $\sup_{t\in [T_0,s[} q(x,t)\le (T_1-s)^{-1}<\infty$ for each $s\in ]T_0,T_1[$ and $x\in {\bf X}.$ Thus the $Q$-function $q$ satisfies Assumption~\ref{ALB}. Consider the extension of $q$ to $t\in [0,\infty[$ defined in Remark~\ref{T-eq} and the sequence $\{t_m, m = 1,2,\ldots\} \subset [0, T_1[$ with $t_m = T_1 - \frac{1}{m}$ for all $m = 1,2,\ldots\ .$ Observe that \begin{equation} \int_{0}^{T_1} q(x,s)ds = \lim_{m \to \infty}\int_{0}^{t_m} q(x,s)ds = \lim_{m \to \infty} \int_{0}^{t_m} \frac{1}{T_1-s}ds = \lim_{m \to \infty} \log (m T_1) = \infty. \end{equation} Therefore, the described extension of $q$ from $t\in [0,T_1[$ to $t\in [0,\infty[$ does not satisfy Assumption~\ref{L1}.
\qed \end{example} In Section~\ref{S-BE} we show in Theorem~\ref{thm:JMP} that under Assumption~\ref{L1} the compensator defined by a $Q$-function and an initial probability measure define a jump Markov process, whose transition function $\bar P$ is described in \eqref{def}, and Theorem~\ref{thm:BKE} states that this function is the minimal function satisfying Kolmogorov's backward equation. The function $\bar P$ was introduced in Feller~\cite{Fel}. Section~\ref{S-FE} deals with Kolmogorov's forward equation when Assumption~\ref{ALB} holds, and Theorem~\ref{thm:FKDE} states that $\bar P$ is the minimal function satisfying the forward equation. Section~\ref{S-GB} presents results on Kolmogorov's forward equation under Assumption~\ref{LB}. \section{Jump Markov process defined by a $Q$-function and Kolmogorov's backward equation} \label{S-BE} In this section, we show that a $Q$-function satisfying Assumption~\ref{L1} defines a transition function for a jump Markov process. In addition, this transition function is the minimal function satisfying Kolmogorov's backward equation defined by this $Q$-function. Let $x_\infty\notin {\bf X}$ be an isolated point adjoined to the space ${\bf X}$. Denote ${\bar {\bf X}}={\bf X}\cup\{x_\infty\}.$ Consider the Borel $\sigma$-field $\B({\bar {\bf X}})=\sigma(\mathfrak{B}({\bf X}),\{x_\infty\})$ on $\bar {\bf X}$, which is the minimal $\sigma$-field containing $\mathfrak{B}({\bf X})$ and $\{x_\infty\}.$ Let $({\bar {\bf X}} \times ]T_0, T_1])^\infty$ be the set of all sequences $(x_0, t_1, x_1, t_2, x_2, \ldots)$ with $x_n\in \bar{{\bf X}}$ and $t_{n+1}\in ]T_0, T_1]$ for all $n =0,1,\ldots\ .$ This set is endowed with the $\sigma$-field generated by the products of the Borel $\sigma$-fields $\B(\bar{{\bf X}})$ and $\B(]T_0, T_1])$.
Denote by $\Omega$ the subset of all sequences $\omega= (x_0, t_1, x_1, t_2, x_2, \ldots)$ from $({\bar {\bf X}} \times ]T_0, T_1])^\infty$ such that: (i) $x_0 \in {\bf X}$; (ii) for all $n =1,2,\ldots\,,$ if $t_n < T_1$, then $t_n < t_{n+1}$ and $x_n \in {\bf X}$, and if $t_n = T_1$, then $t_{n+1} = t_n$ and $x_n = x_\infty$. Observe that $\Omega$ is a measurable subset of $({\bar {\bf X}} \times ]T_0, T_1])^\infty$. Consider the measurable space $(\Omega, \C{F})$, where $\C{F}$ is the $\sigma$-field of the measurable subsets of $\Omega$. For all $n = 0,1,\ldots$, let $x_n(\omega)=x_n$ and $t_{n+1}(\omega)=t_{n+1},$ where $\omega \in \Omega,$ be the random variables defined on the measurable space $(\Omega, \C{F})$. Let $t_0 := T_0$, $t_\infty (\omega) := \lim\limits_{n \to \infty} t_n (\omega)$, $\omega \in \Omega$, and for all $t \in [T_0, T_1],$ let $\C{F}_t := \sigma(\B({\bf X}), \C{G}_t)$, where $\C{G}_t := \sigma (I\{x_n \in B\}I\{t_n \le s\}: n \ge 1, T_0 \le s \le t, B \in \B({\bf X})).$ Throughout this paper, we omit $\omega$ whenever possible. Consider the multivariate point process $(t_n, x_n)_{n =1,2,\ldots}$ on $(\Omega, \C{F})$. Given a $Q$-function $q$ satisfying Assumption~\ref{L1}, define a random measure $\nu$ on $([T_0, T_1[ \times {\bf X})$ as \begin{equation} \label{compensator} \nu(\omega; [T_0,t], B): = \int_{T_0}^{t}\sum_{n \ge 0}I\{t_n < s \leq t_{n+1}\} q(x_n,s, B \setminus \{x_n\})ds, \quad t \in [T_0, T_1[,\ B \in \B({\bf X}). \end{equation} Observe that $\nu$ is a predictable random measure. Indeed, formula~(\ref{compensator}) coincides with Feinberg et al.~\cite[Eq.~(2)]{FMS} when $T_0 = 0$ and $T_1 = \infty$. Arguments similar to those following Feinberg et al.~\cite[Eq.~(2)]{FMS}, which show that the random measure $\nu$ defined in \cite[Eq.~(2)]{FMS} is a predictable random measure, imply that the measure $\nu$ defined in \eqref{compensator} is a predictable random measure. 
Furthermore, $\nu(\{t\}\times {\bf X}) \le 1$ for all $t\in ]T_0,T_1[$ and $\nu([t_\infty,\infty[\times {\bf X})=0. $ According to Jacod~\cite[Theorem~3.6]{Jac}, the predictable random measure $\nu$ defined in \eqref{compensator} and a probability measure $\gamma$ on $\bf X$ define a unique probability measure $\BB{P}$ on $(\Omega, \C{F})$ such that $\BB{P}(x_0 \in B) = \gamma(B), B \in \B({\bf X}),$ and $\nu$ is the compensator of the random measure of the multivariate point process $(t_n, x_n)_{n \ge 1}$ defined by the triplet $(\Omega, \C{F}, \BB{P})$. Consider the process $\{\BB{X}_t: t\in [T_0, T_1[\}$, \begin{equation} \label{jump} \mathbb{X}_t(\omega) := \sum_{n \geq 0}{\bf I}\{t_n \leq t < t_{n+1}\}x_n + {\bf I}\{t_\infty \leq t\}x_\infty, \end{equation} defined on $(\Omega, \C{F}, \BB{P})$ and adapted to the filtration $\{\C{F}_{t}, t \in [T_0, T_1[\}$. By definition, the process $\{\BB{X}_t: t \in [T_0, T_1[\}$ is a jump process. For $x \in {\bf X}$ and $t \in [T_0, T_1[,$ let $q^+(x,t,\cdot)$ be the measure on $({\bf X},\B({\bf X}))$ with values $q^+(x,t,B) := q(x,t,B\setminus \{x\}),$ $B \in \B({\bf X})$. In this paper, we use the notation \[q(x,t,dz\setminus\{x\}):=q^+(x,t,dz).\] Following Feller \cite[Theorem 2]{Fel}, for $x \in {\bf X}$, $u \in [T_0, T_1[$, $t \in ]u,T_1[$, and $B \in \B({\bf X})$, define \begin{equation} \label{b0} \bar{P}^{(0)} (u,x;t,B): = I\{x \in B\} e^{-\int_u^t q(x, s) ds}, \end{equation} and \begin{equation} \label{bn} \bar{P}^{(n)}(u, x; t, B): = \int_{u}^{t} \int_{{\bf X} } e^{ -\int_{u}^{w} q(x,\theta) d\theta } q(x,w,dy \setminus \{x\}) \bar{P}^{(n-1)}(w, y; t, B) dw, \ n=1,2,\ldots\ . \end{equation} Set \begin{equation} \label{def} \bar{P}(u, x; t, B) := \sum\limits_{n=0}^{\infty} \bar{P}^{(n)}(u, x; t, B). 
\end{equation} According to Feller~\cite[(27) and Theorem 4]{Fel}, equation \eqref{bn} can be rewritten as \begin{equation} \label{bn-alt} \bar{P}^{(n)}(u,x;t,B) = \int\limits_u^t \int\limits_{\bf X} \int\limits_{B } e^{-\int_w^t q(y,\theta)d\theta} q(z,w,dy\setminus \{z\}) \bar{P}^{(n-1)}(u,x;w,dz)dw,\ n = 1,2,\ldots\ . \end{equation} Though Feller~\cite{Fel} considered continuous $Q$-functions, the proof of \eqref{bn-alt} given in Feller~\cite[Theorem~4]{Fel} remains correct for measurable $Q$-functions. Observe that $\bar{P}$ is a transition function if the $Q$-function $q$ satisfies Assumption~\ref{L1}. For continuous $Q$-functions satisfying Assumption~\ref{Feller}, Feller \cite[Theorems 2, 5]{Fel} proved that: (a) for fixed $u,x,t$ the function $\bar{P}(u,x;t,\cdot)$ is a measure on $({\bf X},\B({\bf X}))$ such that $0 \le \bar{P}(u,x;t,\cdot) \le 1$, and (b) for all $u,x,t,B$ the function $\bar{P}(u,x;t,B)$ satisfies the Chapman-Kolmogorov equation \eqref{CKE}. The proofs remain correct for measurable $Q$-functions satisfying Assumption~\ref{L1}. The measurability of $\bar{P}(u,x;t,B)$ in $u,x,t$ for all $B \in \B({\bf X})$ follows directly from the definitions \eqref{b0}, \eqref{bn}, and \eqref{def}. Therefore, if $q$ satisfies Assumption~\ref{L1}, the function $\bar{P}$ takes values in $[0,1]$ and satisfies properties (i)-(iii) from the definition of a transition function. \begin{theorem}\rm{(cp. Feinberg et al.~\cite[Theorem~2.2]{FMS})} \label{thm:JMP} Given a probability measure $\gamma$ on ${\bf X}$ and a $Q$-function $q$ satisfying Assumption~\ref{L1}, the jump process $\{\BB{X}_t: t \in [T_0, T_1[\} $ defined in \eqref{jump} is a jump Markov process with the transition function $\bar{P}$. \end{theorem} \begin{proof} The statement of the theorem follows from the same arguments as in the proof of Feinberg et al.~\cite[Theorem~2.2]{FMS}, where the case $T_0=0$ and $T_1=\infty$ was considered. 
We remark that though it was assumed there that the $Q$-function $q$ satisfies Assumption~\ref{LB}, the arguments in the proof in \cite{FMS} only require that \begin{equation} \label{eq-a} \int_u^t q(x,s)ds < \infty, \qquad x \in {\bf X},\ u\in [T_0,T_1[,\ t \in ]u,T_1[, \end{equation} and this holds in view of Assumption~\ref{L1}. \qed \end{proof} Let $\cal P$ be the family of all real-valued non-negative functions $P(u,x;t,B),$ defined for all $t \in ]T_0, T_1[,$ $u \in [T_0, t[,$ $x \in {\bf X},$ and $B \in \B({\bf X}),$ which are measurable in $(u,x)\in [T_0, t[\times {\bf X}$ for all $t\in ]T_0, T_1[$ and $B\in \B({\bf X}).$ Observe that $\bar{P} \in {\cal P}.$ Consider a set $E$ and some family $\cal A$ of functions $f:E\to\bar{\BB{R}}=[-\infty,+\infty].$ A function $f$ from $\cal A$ is called minimal in the family $\cal A$ if for every function $g$ from $\cal A$ the inequality $f(x)\le g(x)$ holds for all $x\in E.$ The following theorem generalizes Theorems 3.1 and 3.2 in Feinberg et al.~\cite{FMS}, which establish the same statements under Assumption~\ref{LB}, a stronger assumption than Assumption~\ref{L1}. The measurability in $(u,x)$ of a function satisfying Kolmogorov's backward equation is implicitly assumed in Feinberg et al.~\cite{FMS}.
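The iterates \eqref{b0}--\eqref{def} lend themselves to direct numerical evaluation. The sketch below is only an illustration under simplifying assumptions that are not made in the paper: a finite state space, a time-homogeneous bounded $Q$-function with arbitrarily chosen rates, and trapezoidal quadrature. In this bounded homogeneous setting the minimal transition function is the matrix exponential $e^{(t-u)Q}$ of the generator $Q$, which the partial sums of \eqref{def} should approach from below.

```python
import numpy as np

# Illustrative time-homogeneous Q-function on a 3-point state space
# (the rates are arbitrary, not taken from the paper):
# Q_off[x, y] = q(x, y) for y != x, and q[x] = sum_y q(x, y).
Q_off = np.array([[0.0, 1.0, 0.5],
                  [0.3, 0.0, 0.7],
                  [0.2, 0.4, 0.0]])
q = Q_off.sum(axis=1)
Q = Q_off - np.diag(q)            # generator matrix

T, M = 1.0, 300                   # elapsed time t - u and quadrature grid size
dv = T / M
s = np.linspace(0.0, T, M + 1)

# terms[n][k] approximates the matrix of bar{P}^{(n)} at elapsed time s[k],
# built from (b0) and (bn) with trapezoidal quadrature.
terms = [np.array([np.diag(np.exp(-q * sk)) for sk in s])]
for n in range(1, 9):
    prev, cur = terms[-1], np.zeros_like(terms[0])
    for k in range(1, M + 1):
        # integrand of (bn) at w - u = s[j]:
        #   e^{-q(x) s_j} * sum_{y != x} q(x, y) * bar{P}^{(n-1)} at time s_k - s_j
        f = np.array([np.exp(-q * s[j])[:, None] * (Q_off @ prev[k - j])
                      for j in range(k + 1)])
        cur[k] = dv * (0.5 * f[0] + f[1:-1].sum(axis=0) + 0.5 * f[-1])
    terms.append(cur)

P_bar = sum(terms)[-1]            # partial sum of (def) at elapsed time T

# Reference value: Taylor series for exp(T * Q).
expQ, term = np.eye(3), np.eye(3)
for j in range(1, 40):
    term = term @ (T * Q) / j
    expQ = expQ + term
err = np.max(np.abs(P_bar - expQ))
print(err)
```

Since the chosen generator is bounded, the computed $\bar P$ is regular and its rows sum to approximately $1$; truncating the series earlier produces a strictly substochastic matrix, which is the behaviour the minimality statements below describe.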
\begin{theorem} \label{thm:BKE} Under Assumption~\ref{L1}, the transition function $\bar{P}$ is the minimal function in $\cal P$ satisfying the following two properties: (i) for all $t\in]T_0,T_1[,$ $x\in {\bf X},$ and $B\in\B({\bf X}),$ \begin{equation} \label{BC1} \lim\limits_{u \to t-}P(u,x;t,B) = {\bf I} \{ x \in B \}, \end{equation} and the function is absolutely continuous in $u \in [T_0, t[;$ (ii) for all $t\in]T_0,T_1[,$ $x\in {\bf X},$ and $B\in\B({\bf X}),$ Kolmogorov's backward equation \begin{equation} \label{eq:BKDE} \frac{\partial}{\partial u}{P}(u,x;t,B) = q(x,u){P}(u,x;t,B) - \int_{{\bf X} }q(x,u,dy \setminus \{x\})P(u,y;t,B) \end{equation} holds for almost every $u\in [T_0,t[.$ In addition, if the transition function $\bar{P}$ is regular (that is, $\bar{P}(u,x;t,{\bf X})=1$ for all $u,$ $x,$ $t$ in the domain of $\bar P$), then $\bar{P}$ is the unique function in $\cal P$ that satisfies properties (i) and (ii), is a measure on $({\bf X}, \B({\bf X}))$ for all $t\in]T_0,T_1[,$ $u\in [T_0,t[,$ and $x\in {\bf X},$ and takes values in $[0,1]$. \end{theorem} \begin{proof} Under Assumption~\ref{LB}, this theorem is Theorems~3.1 and 3.2 from Feinberg et al.~\cite{FMS} combined. However, the proofs there only use the property that \eqref{eq-a} holds, and this property is true under Assumption~\ref{L1}. Therefore, the statement of the theorem holds. \qed \end{proof} \section{Kolmogorov's forward equation} \label{S-FE} Under Assumption~\ref{LB}, Kolmogorov's forward equation~\eqref{eq:FKDE} was studied by Feller~\cite[Theorem~1]{Fel} for continuous $Q$-functions and by Feinberg et al.~\cite[Theorems~4.1, 4.3]{FMS} for measurable $Q$-functions. In this section, we study Kolmogorov's forward equation~\eqref{eq:FKDE} under Assumption~\ref{ALB}, which, in view of Lemma~\ref{l:A-eq}(b), is more general than Assumption~\ref{LB}.
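For a bounded time-homogeneous generator, equation \eqref{eq:FKDE} takes the matrix form $\frac{d}{dt}P(t) = P(t)Q$, where the right-hand side collects the loss term $-\int_B q(y,t)P(\cdot;t,dy)$ and the gain term $\int_{\bf X} q(y,t,B\setminus\{y\})P(\cdot;t,dy)$. The finite-difference check below is only a sketch under these simplifying assumptions (a two-point state space with arbitrary rates, far from the unbounded setting treated in this section).

```python
import numpy as np

# Illustrative bounded, time-homogeneous generator on two states
# (arbitrary rates, not from the paper).
Q_off = np.array([[0.0, 2.0],
                  [1.0, 0.0]])
q = Q_off.sum(axis=1)
Q = Q_off - np.diag(q)

def P(t, n_terms=40):
    """Taylor series for exp(t Q), the (here regular) minimal solution."""
    out, term = np.eye(2), np.eye(2)
    for j in range(1, n_terms):
        term = term @ (t * Q) / j
        out = out + term
    return out

t, h = 0.7, 1e-6
lhs = (P(t + h) - P(t - h)) / (2 * h)   # d/dt P(t) by central differences
rhs = P(t) @ Q                          # gain minus loss terms of (eq:FKDE)
gap = np.max(np.abs(lhs - rhs))
print(gap)
```

An analogous check of the backward equation \eqref{eq:BKDE} replaces the right-hand side by $-Q\,P$, with the derivative taken in the first time argument.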
Let $\hat{\cal P}$ be the family of real-valued functions $\hat{P}(u,x;t,B),$ defined for all $u \in [T_0, T_1[$, $t \in ]u, T_1[$, $x \in {\bf X},$ and $B \in \B({\bf X}),$ which are measures on $({\bf X}, \B({\bf X}))$ for fixed $u,$ $x,$ $t$ and are measurable functions in $t$ for fixed $u,$ $x,$ $B.$ In particular, $\bar P\in \hat{\cal P},$ where $\bar P$ is defined in \eqref{def}. \begin{definition} \label{defXnt} For $s \in ]T_0, T_1],$ a set $B\in \mathfrak{B}({\bf X})$ is called $(q,s)$-bounded if the function $q(x,t)$ is bounded on the set $B\times [T_0,s[.$ \end{definition} \begin{definition} \label{defbbf} A $(q,T_1)$-bounded set is called $q$-bounded. \end{definition} In Definition~\ref{defbbf} we follow the terminology from Feinberg et al.~\cite[p.~262]{FMS}. Feller~\cite{Fel} called such sets {\it bounded}. The following theorem shows that the transition function $\bar{P}$ is the minimal function satisfying Kolmogorov's forward equation. When applied to a $Q$-function $q$ satisfying the stronger Assumption~\ref{LB}, this theorem implies Corollary~\ref{CORR2}, which is a stronger result than \cite[Theorem 4.3]{FMS}; see explanations before Corollary~\ref{CORR2}.
\begin{theorem} \label{thm:FKDE} Under Assumption~\ref{ALB}, the transition function $\bar{P}$ is the minimal function in $\hat{\cal P}$ satisfying the following two properties: (i) for all $u \in [T_0, T_1[,$ $s \in ]u,T_1[,$ $x \in {\bf X},$ and $(q,s)$-bounded sets $B,$ \begin{equation} \label{BC2} \lim_{t \to u+} P(u,x;t,B) = {\bf I}\{x \in B\}, \end{equation} and the function is absolutely continuous in $t \in ]u,s[;$ (ii) for all $u \in [T_0, T_1[,$ $s \in ]u,T_1[,$ $x \in {\bf X},$ and $(q,s)$-bounded sets $B,$ Kolmogorov's forward equation \begin{equation} \label{eq:FKDE} \frac{\partial}{\partial t} P(u,x;t,B) = -\int_{B} q(y,t) P(u,x;t,dy) + \int_{\bf X} q(y,t,B\setminus \{y\})P(u,x;t,dy) \end{equation} holds for almost every $t \in ]u,s[.$ In addition, if the transition function $\bar{P}$ is regular, then $\bar{P}$ is the unique function in $\hat{\cal P}$ satisfying properties (i) and (ii) and taking values in $[0,1]$. \end{theorem} As stated in Theorem~\ref{thm:FKDE}, the function $\bar{P}$ satisfies Kolmogorov's forward equation~\eqref{eq:FKDE} for $(q,s)$-bounded sets $B \in \B({\bf X})$. In general, as the following example demonstrates, it is not possible to extend \eqref{eq:FKDE} to all sets $B \in \B({\bf X})$. \begin{example}\label{ex} \emph{There exist a $Q$-function and a set $B\in\B({\bf X})$ for which Kolmogorov's forward equation~\eqref{eq:FKDE} does not hold at any $t \in ]u,T_1[$.} {\rm Let ${\bf X} = \BB{Z},$ where $\BB{Z}$ denotes the set of integers, $q(0, t) = 1,\ q(0, t, j)= 2^{-(|j| + 1)}$ for all $j \ne 0,$ and $q(j, t, -j) = q(j, t) = 2^{|j|}$ for all $ j \ne 0.$ If ${\BB X}_u=0,$ then starting at time $u$ the process spends an exponentially distributed amount of time at state 0, then it jumps to a state $j\ne 0$ with probability $2^{-(|j|+1)},$ and then it oscillates between the states $j$ and $-j$ with equal intensities.
Thus for all $u \in [T_0, T_1[$ and $t \in ]u,T_1[$ \[\bar{P}(u,0;t,0) = e^{-(t-u)} \qquad \text{ and } \qquad \bar{P}(u,0;t,j) = \frac{1- e^{-(t-u)}}{2^{|j|+1}}, \qquad j \ne 0,\] which implies \begin{multline*} \int_{\bf X} q(y,t, {\bf X} \setminus \{y\})\bar{P}(u,0;t,dy)= \int_{\bf X} q(y,t)\bar{P}(u,0;t,dy) \\ = q(0,t)\bar{P}(u,0;t,0) + \sum_{j \ne 0} q(j,t)\bar{P}(u,0;t,j)= e^{-(t-u)} + \sum_{j > 0} (1- e^{-(t-u)}) = \infty. \end{multline*} Thus, if $B ={\bf X}$, then \eqref{eq:FKDE} does not hold with $P=\bar P$ because both integrals in \eqref{eq:FKDE} are infinite.\qed} \end{example} The following theorem describes the necessary and sufficient condition for a function $P$ from $\hat{\cal P}$ to satisfy properties (i) and (ii) stated in Theorem~\ref{thm:FKDE}. In other words, it provides a necessary and sufficient condition for a function $P$ from $\hat{\cal P}$ to satisfy Kolmogorov's forward equation. The necessity part of this theorem plays the central role in proving the minimality property of $\bar P$ stated in Theorem~\ref{thm:FKDE}. \begin{theorem} \label{thm:intFKE} Let Assumption~\ref{ALB} hold. A function $P$ from $\hat{\cal P}$ satisfies properties (i) and (ii) stated in Theorem~\ref{thm:FKDE} if and only if, for all $u \in [T_0, T_1[,$ $t \in ]u,T_1[,$ $x \in {\bf X},$ and $B \in \B({\bf X}),$ \begin{equation} \label{int-FKE} \begin{split} P(u,x;t,B) &= {\bf I}\{x \in B\}e^{-\int_u^t q(x,\theta)d\theta} \\ &\qquad \qquad + \int_u^t \int_{\bf X} \int_B e^{-\int_w^t q(y,\theta)d\theta} q(z,w, dy\setminus \{z\}) P(u,x;w,dz) dw.
\end{split} \end{equation} \end{theorem} \begin{lemma} \label{lem:FKDE-sol} Under Assumption~\ref{ALB}, the following statements hold: (a) for each $u \in [T_0, T_1[$, $s\in ]u, T_1[,$ $x \in {\bf X}$, and $B \in \B({\bf X})$, the function $\bar{P}(u,x;t,B)$ satisfies the boundary condition \eqref{BC2} and is absolutely continuous in $t \in ]u,s[.$ (b) the function $\bar{P}$ satisfies property (ii) stated in Theorem~\ref{thm:FKDE}. \end{lemma} \begin{proof} (a) Under Assumption~\ref{LB}, statement (a) of this lemma is Theorem~4.1(i) in Feinberg et al.~\cite{FMS}, and the proof there is correct if \eqref{eq-a} holds. In view of Lemma~\ref{l:A-eq}(c), formula~\eqref{eq-a} is true under Assumption~\ref{ALB}, and therefore, statement (a) of the lemma holds. (b) Fix an arbitrary $s \in ]T_0, T_1[.$ Observe that a $Q$-function satisfying Assumption~\ref{ALB} satisfies Assumption~\ref{LB} with $T_1=s$. Then it follows from Feinberg et al.~\cite[Theorem~4.1(ii)]{FMS} that, for all $u \in [T_0, s[$, $x \in {\bf X}$, and $(q,s)$-bounded sets $B \in \B({\bf X})$, the function $\bar{P}(u,x;t,B)$ satisfies Kolmogorov's forward equation~\eqref{eq:FKDE} for almost every $t \in ]u,s[.$ Since $s$ was chosen arbitrarily, this fact implies that the function $\bar{P}$ satisfies property (ii) stated in Theorem~\ref{thm:FKDE}. \qed \end{proof} To prove Theorems~\ref{thm:FKDE} and \ref{thm:intFKE}, we formulate and prove two lemmas. Lemmas~\ref{Cor} and~\ref{l:int-S} present Kolmogorov's forward equation in the integral forms \eqref{int-FKE} and \eqref{eq:FKE}, which are equivalent to its differential form~\eqref{eq:FKDE}. In particular, Theorem~\ref{thm:intFKE} follows from Lemma~\ref{l:int-S}. Let $u \in [T_0, T_1[,$ $s \in ]u,T_1[$, $ x \in {\bf X},$ and $B \in \B({\bf X})$ be a $(q,s)$-bounded set. For any function $P$ from $\hat{\cal P},$ \begin{equation} \label{almostM} \int_{B} q(y,t)P(u,x;t,dy) \le \left (\sup_{y \in B, t \in ]u,s[}q(y,t)\right) P(u,x;t,B) < \infty, \qquad t \in ]u,s[.
\end{equation} In addition, for $u,$ $s,$ $x,$ and $B$ described above, if the function $P$ satisfies the boundary condition \eqref{BC2} and is absolutely continuous in $t \in ]u,s[,$ then it is bounded in $t \in ]u,s[,$ which along with \eqref{almostM} implies that \begin{equation} \label{eq:HEF2M} \int_u^t \int_{B} q(y,w)P(u,x;w,dy)dw < \infty, \qquad \qquad \qquad t \in ]u,s[. \end{equation} \begin{lemma} \label{Cor} For arbitrary fixed $u \in [T_0, T_1[$, $s \in ]u, T_1[$, $x \in {\bf X}$, and $(q,s)$-bounded set $B \in \B({\bf X}),$ a function $P$ from $\hat{\cal P}$ satisfies the equality \begin{equation} \label{eq:FKE} \begin{split} P&(u,x;t,B)= {\bf I}\{x \in B\} \\ &- \int_{u}^{t} \int_{B} q(y,w)P(u,x;w,dy)dw + \int_u^t \int_{\bf X} q(y,w,B\setminus \{y\}) P(u,x; w, dy) dw, \quad t \in ]u,s[, \end{split} \end{equation} if and only if it satisfies the boundary condition \eqref{BC2}, is absolutely continuous in $t \in ]u,s[,$ and satisfies Kolmogorov's forward equation~\eqref{eq:FKDE} for almost every $t\in ]u,s[.$ \end{lemma} \begin{proof} Suppose that a function $P$ from $\hat{\cal P}$ satisfies the boundary condition \eqref{BC2}, is absolutely continuous in $t \in ]u,s[,$ and satisfies Kolmogorov's forward equation~\eqref{eq:FKDE} for almost every $t\in ]u,s[$ for $u,$ $s,$ $x,$ and $B$ described in the formulation of the lemma. Since every absolutely continuous function is the integral of its derivative, equality \eqref{eq:FKE} follows from integrating equation \eqref{eq:FKDE} from $u$ to $t$ and using the boundary condition \eqref{BC2}. In particular, both integrals in equality \eqref{eq:FKE} are finite because, in view of \eqref{eq:HEF2M}, the first integral is finite. Now, suppose \eqref{eq:FKE} holds for $u,$ $s,$ $x,$ and $B$ described in the formulation of the lemma. 
Observe that, for fixed $u,$ $s,$ $x,$ and $B$, the real-valued function $P(u,x;t,B)$ is a constant plus the difference of two integrals from $u$ to $t$ of nonnegative integrable functions defined for $w \in ]u,s[$. Since an integral of an integrable function is an absolutely continuous function of the upper limit of integration and its derivative is equal to the integrand almost everywhere on its domain (Royden~\cite[Thms~10 on p. 107 and 14 on p. 110]{Roy}), the function $P(u,x;t,B)$ is absolutely continuous in $t \in ]u,s[,$ and Kolmogorov's forward equation~\eqref{eq:FKDE} holds for almost every $t\in ]u,s[$ for the fixed $u,$ $s,$ $x,$ and $B$. In addition, the absolute continuity of the integrals in \eqref{eq:FKE} implies that \eqref{BC2} holds. \qed \end{proof} \begin{lemma} \label{l:int-S} Let $u \in [T_0, T_1[$, $s \in ]u, T_1[$, $x \in {\bf X}$, and $C \in \B({\bf X})$ be a $(q,s)$-bounded set. A function $P$ from $\hat{\cal P}$ satisfies for all $B \in \B(C)$ the boundary condition \eqref{BC2}, is absolutely continuous in $t \in ]u,s[,$ and satisfies Kolmogorov's forward equation~\eqref{eq:FKDE} for almost every $t\in ]u,s[$ if and only if it satisfies equality \eqref{int-FKE} for all $t \in ]u,s[$ and $B \in \B(C).$ \end{lemma} \begin{remark} \label{T-eq2} The sufficiency statements of Theorem~\ref{thm:intFKE} and Lemma~\ref{l:int-S} are not used in the proofs in this section. \end{remark} \begin{proof}\emph{of Lemma~\ref{l:int-S}} The following version of Fubini's theorem from Halmos~\cite[Section~36, Remark~(3)]{Halmos} is used in the proof. Let $(Z, {\bf S}, \mu)$ be a measure space with $\mu(Z) < \infty,$ and let $(Y, {\bf T})$ be a measurable space.
Suppose that to almost every $z \in Z$ there corresponds a finite measure $\nu_z$ on ${\bf T}$ such that the function $\phi(z):=\nu_z(B)$ is measurable in $z$ for each measurable subset $B$ of $Y.$ Then, for any non-negative measurable function $g$ on $Y$, \begin{equation} \label{eq:exchange} \int_{Z} \left(\int_Y g(y) \nu_z(dy) \right) \mu(dz) = \int_Y g(y) \nu(dy), \end{equation} where, for each measurable subset $B$ of $Y$, \[\nu(B):= \int_Z \nu_z(B) \mu(dz).\] Let us fix $P\in\hat{\cal P},$ $u \in [T_0, T_1[$, $s \in ]u, T_1[$, $x \in {\bf X}$, and a $(q,s)$-bounded set $C \in \B({\bf X})$. To simplify notation, define \begin{align} \label{G-1} G^{(1)}(t,B) &:= \int_{\bf X} q(z,t,B \setminus \{z\}) {P}(u,x;t,dz), \qquad & t \in ]u,s[,\ B \in \B(C),\\ \label{G-2} G^{(2)}(t,B) &:= \int_{\bf X} {\delta}_z(B)q(z,t) {P}(u,x;t,dz),\qquad & t \in ]u,s[,\ B \in \B(C), \end{align} where $\delta_z(\cdot)$ is the Dirac measure on $({\bf X}, \B({\bf X})),$ \begin{equation} \label{DM} \delta_z(B) := {\bf I}\{z \in B\}, \qquad B \in \B({\bf X}). \end{equation} Observe that, for $j = 1,2,$ the function $G^{(j)}(t,\cdot)$ is a measure on $(C, \B(C))$ for every $t \in ]u,s[,$ and $G^{(j)}(\cdot, B)$ is a measurable function on $]u,s[$ for every $B \in \B(C).$ Let $t \in ]u,s[,$ $v \in ]u,t[,$ and $B \in \B(C).$ Consider $(Z,{\bf S}, \mu)=({\bf X}, \B({\bf X}), {P}(u,x;v,\cdot))$ and $(Y, {\bf T}):=(C,\B(C)).$ For $\nu_z(\cdot) = q^+(z, v, \cdot),$ which is finite for all $z \in Z$ since $q$ is a $Q$-function, and for $g(y) = {\bf I}\{ y \in B\}e^{-\int_v^t q(y,\theta)d\theta},$ formula \eqref{eq:exchange} yields \begin{equation} \label{ex1} \int_{\bf X} \left(\int_B e^{-\int_v^t q(y,\theta)d\theta} q(z,v, dy\setminus \{z\})\right) {P}(u,x;v,dz) = \int_B e^{-\int_v^t q(y,\theta)d\theta} G^{(1)}(v,dy).
\end{equation} \emph{Necessity.} For all $B \in \B(C)$, let the function $P$ satisfy the boundary condition \eqref{BC2}, be absolutely continuous in $t \in ]u,s[,$ and satisfy Kolmogorov's forward equation~\eqref{eq:FKDE} for almost every $t\in ]u,s[.$ Equation~\eqref{eq:FKDE} can be rewritten as \begin{equation}\label{eq:FKEShort} \frac{\partial}{\partial t} P(u,x;t,B) = -G^{(2)}(t,B)+G^{(1)}(t,B). \end{equation} Formula \eqref{almostM} means that $G^{(2)}(t,B)<\infty$ for all $t\in ]u,s[.$ This inequality and \eqref{eq:FKEShort} imply that, for $j = 1,2,$ \begin{equation} \label{almost1} G^{(j)}(t, C)< \infty \quad \text{ for almost every } \quad t \in ]u,s[. \end{equation} For $j = 1,2,$ consider the non-negative functions $H^{(j)}:(]u,s[ \times \B(C))\to \BB{R}_+,$ \begin{equation} \label{eq:H} H^{(j)}(t,B) := \int_u^t G^{(j)}(w,B) dw, \qquad t \in ]u,s[,\ B \in \B(C). \end{equation} In view of Lemma~\ref{Cor}, \begin{equation} \label{eq:FKE-alt} {P}(u,x;t,B) = {\bf I}\{x \in B\} + H^{(1)}(t,B)- H^{(2)}(t,B), \qquad t \in ]u,s[,\ B \in \B(C). \end{equation} Inequality \eqref{eq:HEF2M}, which implies \eqref{eq:HEF} for $j=2,$ and \eqref{eq:FKE-alt} yield \begin{equation} \label{eq:HEF} H^{(j)}(t,B) <\infty, \qquad\qquad j=1,2,\ t \in ]u,s[,\ B \in \B(C). \end{equation} Observe that, for any measure $p(\cdot)$ on $(C, \B(C))$ and $w \in [u,t[$, \begin{equation} \label{0G} \begin{aligned} \int_B (1-e^{-\int_w^t q(y, \theta)d\theta})p(dy) &= \int_B \left (\int_w^t q(y,v) e^{-\int_v^t q(y,\theta)d\theta} dv\right)p(dy) \\ &= \int_w^t \int_B q(y,v) e^{-\int_v^t q(y,\theta)d\theta} p(dy) dv, \end{aligned} \end{equation} where the first equality is correct since \begin{equation} \label{exp} \int_w^t q(y,v)e^{-\int_v^t q(y,\theta)d\theta} dv = 1 - e^{-\int_w^t q(y,\theta)d\theta}, \qquad y \in {\bf X} \end{equation} and the last one is obtained by changing the order of integration in $y$ and $v$ and applying Fubini's theorem.
Let $j = 1,2,$ $t \in ]u,s[$, and $B \in \B(C).$ Then \begin{multline} \label{GH1} H^{(j)}(t, B) - \int_u^t \int_B e^{-\int_w^t q(y,\theta)d\theta} G^{(j)}(w,dy) dw = \int_u^t \int_B (1- e^{-\int_w^t q(y,\theta)d\theta}) G^{(j)}(w,dy) dw\\ \begin{aligned} &=\int_u^t \left( \int_w^t \int_B q(y,v) e^{-\int_v^t q(y,\theta)d\theta} G^{(j)}(w,dy) dv\right)dw \\ &= \int_u^t \int_u^v \left (\int_B q(y,v) e^{-\int_v^t q(y,\theta)d\theta} G^{(j)}(w,dy) \right) dw dv\\ &= \int_u^t \int_B q(y,v) e^{-\int_v^t q(y,\theta)d\theta} H^{(j)}(v,dy)dv, \end{aligned} \end{multline} where the first equality follows from \eqref{eq:H}, the second equality follows from \eqref{0G} with $p(\cdot) = G^{(j)}(w,\cdot),$ the third equality is obtained by changing the order of integration in $w$ and $v$, and the last one is obtained from formula \eqref{eq:exchange} by setting $(Z, {\bf S}, \mu):=(]u,v[,\B(]u,v[),\lambda),$ where $\lambda$ is the Lebesgue measure, $(Y, {\bf T}):=(C,\B(C)),$ $\nu_z(\cdot) = G^{(j)}(z, \cdot),$ which, in view of inequality \eqref{almost1}, is finite for almost every $z \in Z$, and $g(y) = {\bf I} \{y \in B\}q(y,v) e^{-\int_v^t q(y,\theta)d\theta}.$ For $v \in ]u,t[$, by setting $(Z,{\bf S}, \mu):=({\bf X}, \B({\bf X}), P(u,x;v,\cdot))$, $(Y, {\bf T}):=(C,\B(C))$, $\nu_z(\cdot): = q(z, v)\delta_z(\cdot)$, and $g(y): = {\bf I}\{ y \in B\}e^{-\int_v^t q(y,\theta)d\theta},$ formula \eqref{eq:exchange} yields \begin{equation} \label{ex2} \int_B e^{-\int_v^t q(y,\theta)d\theta} G^{(2)}(v,dy) = \int_{\bf X} \left(\int_B e^{-\int_v^t q(y,\theta)d\theta} q(z,v) \delta_z(dy)\right) {P}(u,x;v,dz).
\end{equation} Therefore, for all $t \in ]u,s[$ and $B \in \B(C),$ \begin{align} \label{2} &H^{(2)}(t,B) = \int_u^t \int_B e^{-\int_v^t q(y,\theta)d\theta} G^{(2)}(v,dy) dv + \int_u^t \int_B q(y,v) e^{-\int_v^t q(y,\theta)d\theta} H^{(2)}(v,dy) dv \notag\\ &= \int_u^t \int_B q(y,v)e^{-\int_v^t q(y,\theta)d\theta} {P}(u,x;v,dy) dv + \int_u^t \int_B q(y,v) e^{-\int_v^t q(y,\theta)d\theta} H^{(2)}(v,dy) dv \notag\\ &= \int_u^t \int_B q(y,v)e^{-\int_v^t q(y,\theta)d\theta} \delta_x(dy)dv + \int_u^t \int_B q(y,v)e^{-\int_v^t q(y,\theta)d\theta} H^{(1)}(v, dy)dv\\ &= \left({\bf I}\{x \in B\} - {\bf I}\{x \in B\}e^{-\int_u^t q(x, \theta)d\theta}\right) + \left (H^{(1)}(t,B) - \int_u^t \int_B e^{-\int_v^t q(y,\theta)d\theta} G^{(1)}(v,dy) dv \right)\notag\\ &= {\bf I}\{x \in B\} + H^{(1)}(t, B) \notag\\ &\qquad - \left( {\bf I}\{x \in B\} e^{-\int_u^t q(x, \theta)d\theta} + \int_u^t \int_{\bf X} \int_B e^{-\int_v^t q(y,\theta)d\theta} q(z,v, dy\setminus \{z\}) {P}(u,x;v,dz) dv \right),\notag \end{align} where the first equality follows from \eqref{GH1} with $j = 2$, the second equality follows from \eqref{DM} and \eqref{ex2}, the third equality follows from \eqref{DM}, \eqref{eq:FKE-alt}, and \eqref{eq:HEF}, the fourth equality follows from \eqref{0G} with $p(\cdot) = \delta_x(\cdot)$ and $w = u$ and \eqref{GH1} with $j = 1$, and the last one follows from \eqref{ex1}. Thus, \eqref{eq:FKE-alt} and \eqref{2} imply \eqref{int-FKE}. \emph{Sufficiency.} Assume that the function ${P}$ satisfies \eqref{int-FKE} for all $t \in ]u,s[$ and $B \in \B(C)$. As follows from Lemma~\ref{Cor}, it is sufficient to show that \eqref{eq:FKE} holds for all $B \in \B(C).$ In view of equality \eqref{ex1}, formula~\eqref{int-FKE} can be rewritten as \begin{equation} \label{int-FKEs} {P}(u,x;t,B) = {\bf I}\{x \in B\} e^{-\int_u^t q(x, \theta)d\theta} + \int_u^t \int_B e^{-\int_v^t q(y,\theta)d\theta} G^{(1)}(v,dy) dv.
\end{equation} Let $t \in ]u,s[$ and $B \in \B(C).$ Since \begin{equation} \label{0G1} \int_w^t q(y,v)e^{-\int_w^v q(y,\theta)d\theta} dv = 1- e^{-\int_w^t q(y,\theta)d\theta}, \qquad y \in {\bf X},\ w \in [u,t[, \end{equation} it follows from \eqref{0G1} and Fubini's theorem that, for any measure $p(\cdot)$ on $(C, \B(C)),$ \begin{equation} \label{exp1} \int_B (1- e^{-\int_w^t q(y,\theta)d\theta})p(dy) = \int_w^t \int_B q(y,v) e^{-\int_w^v q(y,\theta)d\theta}p(dy) dv, \quad w \in [u,t[. \end{equation} Observe that formula~\eqref{exp1} differs from \eqref{0G}. Next, \begin{multline} \label{22} \int_u^t G^{(1)}(w,B) dw - \int_u^t \int_B e^{-\int_w^t q(y,\theta)d\theta} G^{(1)}(w,dy) dw \\ \begin{aligned} &= \int_u^t \left (\int_w^t \int_B q(y,v) e^{-\int_w^v q(y,\theta) d\theta} G^{(1)}(w,dy)dv \right) dw\\ &= \int_u^t \int_u^v \left (\int_B q(y,v) e^{-\int_w^v q(y,\theta) d\theta} G^{(1)}(w,dy) \right)dw dv\\ &= \int_u^t \left(\int_B q(y,v) \int_u^v e^{-\int_w^v q(y,\theta) d\theta} G^{(1)}(w,dy) dw \right) dv, \end{aligned} \end{multline} where the first equality follows from \eqref{exp1} with $p(\cdot) = G^{(1)}(w,\cdot)$, the second equality is obtained by interchanging the order of integration in $w$ and $v$, and the last one is obtained from \eqref{eq:exchange} by setting $(Z,{\bf S}, \mu)=(]u,v[,\B(]u,v[),\lambda),$ where $\lambda$ is the Lebesgue measure, $(Y, {\bf T}):=(C,\B(C))$, $\nu_z(B) = \int_B e^{-\int_z^v q(y, \theta)d\theta}G^{(1)}(z,dy)$, which, in view of equality~\eqref{int-FKEs} and the property that the function $P$ takes values in $[0, \infty[,$ is finite for $B=C$ and for almost every $z\in Z$, and $g(y) = q(y,v){\bf I}\{y \in B\}.$ Therefore, \begin{multline*} \begin{aligned} {P}(u,x;t,B) &= {\bf I}\{x \in B\} - \int_u^t \int_B q(y,v) e^{-\int_u^v q(y,\theta)d\theta} \delta_x(dy) dv \\ &+ \int_u^t G^{(1)}(w,B)dw - \int_u^t \left(\int_B q(y,v) \int_u^v e^{-\int_w^v q(y,\theta) d\theta} G^{(1)}(w,dy) dw \right) dv \\ &= {\bf I}\{x \in B\} + 
\int_u^t G^{(1)}(w,B)dw - \int_u^t \int_B q(y,v){P}(u,x;v,dy)dv, \end{aligned} \end{multline*} where the first equality follows from \eqref{int-FKEs}, \eqref{exp1} with $p(\cdot) = \delta_x(\cdot)$, and \eqref{22}, and the last one is obtained by substituting ${P}(u, x; v,dy)$ with \eqref{int-FKEs}. Thus, it follows from \eqref{G-1} and the above equality that \eqref{eq:FKE} holds for all $B \in \B(C).$ \qed \end{proof} \begin{proof} \emph{of Theorem~\ref{thm:intFKE}} The sufficiency statement of the theorem follows immediately from Lemma~\ref{l:int-S}, and the necessity statement of the theorem follows from Lemma~\ref{l:int-S} and Lebesgue's monotone convergence theorem, as explained below. \emph{Necessity.} Assume that, for all $u \in [T_0, T_1[,$ $s \in ]u, T_1[,$ $x \in {\bf X}$, and $(q,s)$-bounded sets $C$, properties (i) and (ii) stated in Theorem~\ref{thm:FKDE} hold for the function $P$. Assumption~\ref{ALB} and Lemma~\ref{l:A-eq}(a) imply that for each $s\in ]T_0,T_1[$ there exist $(q,s)$-bounded sets $B^s_1,B^s_2,\ldots$ such that $B^s_n\uparrow {\bf X}$ as $n \to \infty$. Then, for all $u \in [T_0, T_1[,$ $s \in ]u, T_1[,$ $t \in ]u,s[,$ $x \in {\bf X}$, and $B \in \B({\bf X}),$ \begin{multline} \begin{aligned} P(&u,x;t,B)=\lim_{n\to\infty} P(u,x;t,B\cap B^s_n) = \lim_{n\to\infty} {\bf I}\{x \in B\cap B^s_n\}e^{-\int_u^t q(x,\theta)d\theta} \\ &\qquad \qquad+ \lim_{n\to\infty} \int_u^t \int_{\bf X} \int_{B\cap B^s_n} e^{-\int_w^t q(y,\theta)d\theta} q(z,w, dy\setminus \{z\}) P(u,x;w,dz) dw\\ &= {\bf I}\{x \in B\}e^{-\int_u^t q(x,\theta)d\theta} + \int_u^t \int_{\bf X} \int_{B} e^{-\int_w^t q(y,\theta)d\theta} q(z,w, dy\setminus \{z\}) P(u,x;w,dz) dw, \end{aligned} \end{multline} where the first equality is correct since the sets $B_n^s \uparrow {\bf X}$ as $n \to \infty$, the second equality follows from Lemma~\ref{l:int-S}, and the last one follows from Lebesgue's monotone convergence theorem since the sets $B_n^s \uparrow {\bf X}$ as $n \to \infty$.
Since the above equality holds for all $t \in ]u,s[$ for each $s \in ]u,T_1[$, formula~\eqref{int-FKE} holds for all $u \in [T_0, T_1[,$ $t \in ]u,T_1[,$ $x \in {\bf X}$, and $B \in \B({\bf X}).$ \qed \end{proof} \begin{proof} \emph{of Theorem~\ref{thm:FKDE}} In view of Lemma~\ref{lem:FKDE-sol}, we need to prove only the minimality and uniqueness properties of $\bar P$ among functions from $\hat{\cal P}$ satisfying properties (i) and (ii) stated in Theorem~\ref{thm:FKDE}. Let $ P$ be a function from $\hat{\cal P}$ satisfying these properties. Let $u \in [T_0, T_1[,$ $t \in ]u, T_1[,$ $x \in {\bf X}$, and $B \in \B({\bf X}).$ In view of Theorem~\ref{thm:intFKE}, formula~\eqref{int-FKE} holds. Since the last term in \eqref{int-FKE} is non-negative, \[{P}(u,x;t,B) \ge {\bf I}\{x \in B\}e^{-\int_u^t q(x, \theta)d\theta} = \bar{P}^{(0)}(u,x;t,B),\] where the last equality is \eqref{b0}. Assume that for some $n=0,1,\ldots,$ \begin{equation}\label{eq:indn} {P}(u,x;t,B) \ge \sum_{m=0}^n\bar{P}^{(m)}(u,x;t,B). \end{equation} Then, from \eqref{bn-alt}, \eqref{int-FKE}, and \eqref{eq:indn}, $ {P}(u,x;t,B) \ge \sum_{m=0}^{n+1}\bar{P}^{(m)}(u,x;t,B).$ Thus, by induction, \eqref{eq:indn} holds for all $n=0,1,\ldots\ .$ Let $n\to\infty.$ Then \eqref{eq:indn} and \eqref{def} imply that ${P}(u,x;t,B) \ge \bar{P}(u,x;t,B).$ Therefore, the function $\bar{P}$ is the minimal function from $\hat{\cal P}$ satisfying properties (i) and (ii) stated in Theorem~\ref{thm:FKDE}. In conclusion, let the transition function $\bar P$ be regular. 
If there is another function ${P},$ different from $\bar{P},$ which satisfies properties (i) and (ii) stated in Theorem~\ref{thm:FKDE} and takes values in $[0,1]$, then, since $\bar P$ is the minimal solution, ${P}(u,x;t,B)> {\bar P}(u,x;t,B)$ for some $u \in [T_0, T_1[,$ $x \in {\bf X},$ $t \in ]u, T_1[,$ and $B \in \B({\bf X}).$ In addition, ${P}(u,x;t,{\bf X}\setminus B)\ge {\bar P}(u,x;t,{\bf X}\setminus B).$ Therefore, ${P}(u,x;t,{\bf X})= {P}(u,x;t,B)+ {P}(u,x;t,{\bf X}\setminus B)> \bar P(u,x;t,B)+\bar P(u,x;t,{\bf X}\setminus B)=\bar P(u,x;t,{\bf X})=1,$ and the inequality ${P}(u,x;t,{\bf X})>1$ contradicts the property that ${P}$ takes values in $[0,1].$\qed \end{proof} Theorems~\ref{thm:FKDE} and \ref{thm:intFKE} imply the following two corollaries. \begin{corollary} \label{Cor-G} Under Assumption~\ref{ALB}, the following statements hold: (a) for all $u \in [T_0, T_1[,$ $s \in ]u,T_1[,$ $x \in {\bf X},$ and $(q,s)$-bounded sets $B,$ the function \\ $\bar{P}(u,x;t,B)$ satisfies \eqref{eq:FKE}. (b) the function $\bar{P}$ is the minimal function in $\hat{\cal P}$ for which statement~(a) holds. In addition, if the transition function $\bar{P}$ is regular, then $\bar{P}$ is the unique function in $\hat{\cal P}$ with values in $[0,1]$ for which statement~(a) holds. \end{corollary} \begin{proof} In view of Lemma~\ref{Cor}, any function $P$ from ${\hat{\cal P}}$ satisfies statement (a) of the corollary if and only if it satisfies properties (i) and (ii) stated in Theorem~\ref{thm:FKDE}. Thus, the corollary follows from Theorem~\ref{thm:FKDE}. \qed \end{proof} \begin{corollary} \label{cor:intFKE} Let Assumption~\ref{ALB} hold.
The function $\bar P$ is the minimal function $P$ in $\hat{\cal P}$ satisfying equality~\eqref{int-FKE} for all $u \in [T_0, T_1[,$ $t \in ]u,T_1[,$ $x \in {\bf X},$ and $B \in \B({\bf X}).$ In addition, if the transition function $\bar{P}$ is regular, then $\bar{P}$ is the unique function in $\hat{\cal P}$ with values in $[0,1]$ satisfying equality~\eqref{int-FKE} for all $u \in [T_0, T_1[,$ $t \in ]u,T_1[,$ $x \in {\bf X},$ and $B \in \B({\bf X}).$ \end{corollary} \begin{proof} The corollary follows from Theorems~\ref{thm:FKDE} and \ref{thm:intFKE}. \qed \end{proof} \section{Kolmogorov's forward equation for $Q$-functions bounded at each state} \label{S-GB} This section provides additional results on Kolmogorov's forward equation when Assumption~\ref{LB} holds. Under Assumption~\ref{LB}, Kolmogorov's forward equation is studied in Feinberg et al.~\cite[Theorems~4.1, 4.3]{FMS}, and Corollary~\ref{CORR2} is a more general statement than \cite[Theorem 4.3]{FMS}. In addition, Corollary~\ref{Cor-P} describes the minimality property of the function $\bar P(T_0, x;t,B)$ that is useful for applications to continuous-time Markov decision processes. The following lemma and its corollary do not require any of Assumptions~\ref{Feller}-\ref{L1}. \begin{lemma} \label{LEMMABT} Let $u\in [T_0,T_1[,$ $x\in{\bf X},$ and $B$ be a $q$-bounded set. A function $P \in \hat{\cal P}$ satisfies Kolmogorov's forward equation~\eqref{eq:FKDE} for almost every $t\in ]u,s[$ for all $s\in ]u,T_1[$ if and only if it satisfies this equation for almost every $t\in ]u,T_1[.$ \end{lemma} \begin{proof} The sufficiency statement of the lemma is straightforward since $]u,s[\subset ]u,T_1[$ when $s\in ]u,T_1[.$ Let a function $P \in \hat{\cal P}$ satisfy Kolmogorov's forward equation~\eqref{eq:FKDE} for almost every $t\in ]u,s[$ for all $s\in ]u,T_1[,$ where $u \in [T_0, T_1[,$ $x \in {\bf X},$ and $B \in \B({\bf X})$ is a $q$-bounded set.
Consider an arbitrary sequence $s_n\uparrow T_1$ as $n\to\infty$ with $s_1>u.$ Let $Y$ be the set of all $t\in ]u,T_1[$ such that \eqref{eq:FKDE} does not hold at point $t.$ For $n = 1,2,\ldots,$ the Lebesgue measure of the sets $Y\cap ]u,s_n[$ is 0 since each of these sets consists of points $t\in ]u,s_n[$ at which \eqref{eq:FKDE} does not hold. This implies that the Lebesgue measure of the set $Y$ is $0$. Therefore, the function $P$ satisfies Kolmogorov's forward equation for almost every $t \in ]u,T_1[.$ \qed \end{proof} \begin{corollary}\label{LEMMAB} A function $P \in \hat{\cal P}$ satisfies properties (i) and (ii) stated in Theorem~\ref{thm:FKDE} for $q$-bounded sets $B$ if and only if the following two properties hold: (a) for all $u \in [T_0, T_1[$, $x \in {\bf X}$, and $q$-bounded sets $B$, the function $P(u,x;t,B)$ satisfies the boundary condition \eqref{BC2} and is absolutely continuous in $t \in ]u,s[$ for each $s \in ]u,T_1[;$ (b) for all $u \in [T_0, T_1[$, $x \in {\bf X}$, and $q$-bounded sets $B$, the function $P(u,x;t,B)$ satisfies Kolmogorov's forward equation~\eqref{eq:FKDE} for almost every $t\in ]u,T_1[.$ \end{corollary} \begin{proof} For $q$-bounded sets $B,$ property (i) stated in Theorem~\ref{thm:FKDE} coincides with property (a) stated in the corollary. Lemma~\ref{LEMMABT} implies that property (ii) stated in Theorem~\ref{thm:FKDE} holds for a $q$-bounded set $B$ if and only if property (b) stated in the corollary holds. \qed \end{proof} \begin{lemma}\label{LEMMA5} Under Assumption~\ref{LB}, a function $P \in \hat{\cal P}$ satisfies properties (i) and (ii) stated in Theorem~\ref{thm:FKDE} if and only if it satisfies properties (a) and (b) stated in Corollary~\ref{LEMMAB}. \end{lemma} \begin{proof} Let the function $P$ satisfy properties (i) and (ii) stated in Theorem~\ref{thm:FKDE}. Since a $q$-bounded set is $(q,s)$-bounded, it follows from Corollary~\ref{LEMMAB} that properties (a) and (b) stated in Corollary~\ref{LEMMAB} hold.
Let properties (a) and (b) stated in Corollary~\ref{LEMMAB} hold. Fix arbitrary $u \in [T_0, T_1[$, $s \in ]u, T_1[$, and $x \in {\bf X}.$ Lemma~\ref{l:int-S} implies that for every $q$-bounded set $B$ equality~\eqref{int-FKE} holds for all $t\in ]u,s[.$ In view of Assumption~\ref{LB} and Lemma~\ref{l:A-eq}(a), there exist $q$-bounded sets $B_1,B_2,\ldots$ such that $B_n\subseteq B_{n+1},$ $n=1,2,\ldots,$ and ${\bf X}=\cup_{n=1}^\infty B_n.$ Let $B \in \B({\bf X})$. Then $B^n:=B_n\cap B,$ $n=1,2,\ldots,$ are $q$-bounded sets. Therefore, for each set $B^n,$ equality~\eqref{int-FKE} holds for all $t \in ]u,s[.$ Since $B^n \uparrow B$ as $n \to \infty,$ Lebesgue's monotone convergence theorem implies that this formula also holds for $B.$ Thus, in view of Theorem~\ref{thm:intFKE}, the function $P$ satisfies properties (i) and (ii) stated in Theorem~\ref{thm:FKDE}.\qed \end{proof} The following corollary generalizes Feinberg et al.~\cite[Theorem 4.1]{FMS} since Assumption~\ref{ALB} is weaker than Assumption~\ref{LB}. We remark that absolute continuity in $t\in ]u,\infty[$ in \cite[Theorem 4.1(i)]{FMS} is meant in the sense that for each $s\in ]u,\infty[$ the function is absolutely continuous in $t\in ]u,s[.$ For $T_1=\infty$ this is equivalent to the absolute continuity assumed in property (a) stated in Corollary~\ref{LEMMAB}. For unbounded intervals, this type of absolute continuity is sometimes called local absolute continuity. \begin{corollary} {\rm (cp. Feinberg et al.~\cite[Theorem 4.1]{FMS})} \label{CORR1} Let Assumption~\ref{ALB} hold. Then, the function $\bar P$ satisfies properties (a) and (b) stated in Corollary~\ref{LEMMAB}. In addition, property (a) stated in Corollary~\ref{LEMMAB} holds for all $B\in\B({\bf X}).$ \end{corollary} \begin{proof} In view of Lemma~\ref{lem:FKDE-sol}, the function $\bar{P}$ satisfies properties (i) and (ii) stated in Theorem~\ref{thm:FKDE}. In particular, it satisfies these properties for the smaller class of $q$-bounded sets.
Thus, it follows from Corollary~\ref{LEMMAB} that the function $\bar{P}$ satisfies properties (a) and (b) stated in Corollary~\ref{LEMMAB}. In addition, Lemma~\ref{lem:FKDE-sol}(a) implies that property (a) stated in Corollary~\ref{LEMMAB} holds for all $B\in\B({\bf X}).$ \qed \end{proof} The following corollary generalizes \cite[Theorem 4.3]{FMS}. The difference is that Corollary~\ref{CORR2} states that $\bar P$ is the minimal solution within the class of functions satisfying the weak continuity property when $B$ is a $q$-bounded set, while \cite[Theorem 4.3]{FMS} claims the minimality within the smaller class of functions satisfying the weak continuity property when $B\in\B({\bf X}).$ \begin{corollary} \label{CORR2} {\rm (cp. Feinberg et al.~\cite[Theorem 4.3]{FMS})} Let Assumption~\ref{LB} hold. Then $\bar P$ is the minimal function in $\hat{\cal P}$ satisfying properties (a) and (b) stated in Corollary~\ref{LEMMAB}. Furthermore, if the transition function $\bar{P}$ is regular, then $\bar{P}$ is the unique element of $\hat{\cal P}$ taking values in $[0,1]$ and satisfying properties (a) and (b) stated in Corollary~\ref{LEMMAB}. \end{corollary} \begin{proof} In view of Lemma~\ref{l:A-eq}(b), the corollary follows from Theorem~\ref{thm:FKDE} and Lemma~\ref{LEMMA5}. \qed \end{proof} The following two corollaries, which follow from Corollary~\ref{CORR2}, are useful for applying the results of this paper to continuous-time jump Markov decision processes; see Feinberg et al.~\cite[Theorem~3.2]{FMS1}. \begin{corollary} \label{Cor-S} Under Assumption~\ref{LB}, the following statements hold: (a) for all $u \in [T_0, T_1[,$ $x \in {\bf X},$ and $q$-bounded sets $B \in \B({\bf X}),$ the function $\bar{P}(u,x;t,B)$ satisfies the equality in formula~\eqref{eq:FKE} for all $t \in ]u,T_1[.$ (b) the function $\bar{P}$ is the minimal function in $\hat{\cal P}$ for which statement~(a) holds.
In addition, if the transition function $\bar{P}$ is regular, then $\bar{P}$ is the unique function in $\hat{\cal P}$ with values in $[0,1]$ for which statement~(a) holds. \end{corollary} \begin{proof} Lemma~\ref{Cor} and Corollary~\ref{LEMMAB} imply that statement~(a) of the corollary holds for a function $P$ from $\hat{ \cal P}$ if and only if the function $P$ satisfies properties (a) and (b) stated in Corollary~\ref{LEMMAB}. Therefore, this corollary follows from Corollary~\ref{CORR2}. \end{proof} When $x$ is fixed and $u = T_0$, formula~\eqref{eq:FKE} is an equation in two variables $t$ and $B$. Hence, for simplicity, we write $P(t,B)$ instead of $P(T_0,x;t,B)$ in \eqref{eq:FKE} for any function $P$ from $\hat{\cal P}$ when $x$ is fixed and $u = T_0,$ and \eqref{eq:FKE} becomes \begin{equation} \label{eq:FKE1} P(t,B) = I\{x \in B\} + \int_{T_0}^{t} ds \int_{\bf X} q(y,s,B \setminus \{y\}) P(s, dy) -\int_{T_0}^{t} ds \int_{B} q(y,s) P(s,dy). \end{equation} For fixed $x \in {\bf X}$ and $u = T_0$, the function $\bar{P}(t, \cdot)$ is the marginal probability distribution of the process $\{\BB{X}_t: t \in [T_0, T_1[\}$ at time $t$ given $\BB{X}_{T_0} = x$. Under Assumption~\ref{LB}, the following corollary describes the minimal solution of \eqref{eq:FKE1} and provides a sufficient condition for its uniqueness. \begin{corollary} \label{Cor-P} Fix an arbitrary $x \in {\bf X}$. Under Assumption~\ref{LB}, the following statements hold: (a) for all $t \in ]T_0, T_1[$ and $q$-bounded sets $B \in \B({\bf X}),$ the function $\bar{P}(t,B)$ satisfies \eqref{eq:FKE1}; (b) $\bar{P}(t,B),$ where $t\in ]T_0,T_1[$ and $B\in \B({\bf X}),$ is the minimal non-negative function that is a measure on $({\bf X},\B({\bf X}))$ for fixed $t$, is measurable in $t$ for fixed $B$, and for which statement~(a) holds. 
In addition, if the function $q(z,t)$ is bounded on the set ${\bf X} \times [T_0, T_1[$, then $\bar{P}(t,B)$ is the unique non-negative function with values in $[0,1]$ and satisfying the conditions stated in the first sentence of this statement. \end{corollary} \begin{proof} Statement (a) of the corollary follows immediately from Corollary~\ref{Cor-S}(a) when $u = T_0$. To prove statement (b), consider a non-negative function $P(t,B)$, where $t \in ]T_0,T_1[$ and $B \in \B({\bf X}),$ that satisfies the conditions given in the first sentence of statement (b) of this corollary. Define the function $f(u,z;t,B) \in \hat{\cal P}$, \begin{equation} \label{sol-alt} f(u,z ;t, B) = \left \{ \begin{array}{ll} P(t, B), & \quad \text{ if} \quad u = T_0 \text{ and } z = x, \\ \bar{P}(u,z;t,B), & \quad \text{ otherwise}. \end{array} \right. \end{equation} Then, it follows from Corollary~\ref{Cor-S}(a) and \eqref{sol-alt} that the function $f$ satisfies the property given in Corollary~\ref{Cor-S}(a). Thus, Corollary~\ref{Cor-S}(b) and \eqref{sol-alt} imply \begin{equation} \label{2-m} P(t, B) = f(T_0,x;t,B) \ge \bar{P}(T_0,x;t,B) = \bar{P}(t, B), \qquad t \in ]T_0,T_1[, B \in \B({\bf X}). \end{equation} To show the uniqueness property, let the function $P$ take values in $[0,1]$. This fact and the property that the function $\bar{P}(u,z;t,B)$ takes values in $[0,1]$ for all $u,$ $z,$ $t,$ $B$ in the domain of $\bar{P}$ imply that the function $f$ defined in \eqref{sol-alt} takes values in $[0,1]$. Observe that ${\bf X}$ is a $q$-bounded set if the function $q(z,t)$ is bounded on the set ${\bf X} \times [T_0, T_1[.$ Then, as follows from Corollary~\ref{Cor-S}(a), $\bar{P}(u,z;t,{\bf X}) = 1$ for all $u \in [T_0, T_1[,$ $t \in ]u,T_1[,$ and $z \in {\bf X}$. 
Therefore, it follows from Corollary~\ref{Cor-S}(b) that $f(u,z;t,B) = \bar{P}(u,z;t,B)$ for all $u,z,t,B$ in the domain of $\bar{P}$, which along with \eqref{2-m} implies the uniqueness property of $\bar{P}(t,B).$ \qed \end{proof} \begin{acknowledgement} The first two authors thank Pavlo Kasyanov for useful comments. \end{acknowledgement}
\section{Introduction and Statement of Results} An incredible number of interesting sequences appear as Fourier coefficients of modular forms. The analytic properties of these modular forms dictate the asymptotic behavior of the corresponding sequences. The most famous example of such a sequence is the partition function $p(n)$, which counts the number of ways of representing an integer $n$ as a sum of a non-increasing sequence of positive integers. Hardy and Ramanujan pioneered the use of the circle method to study the asymptotics for $p(n)$ and proved that \[ p(n) \sim \frac{1}{4n \sqrt{3}} e^{\pi \sqrt{\frac{2n}{3}}} \] by using the analytic properties of the generating function \[ f(z) = \sum_{n=0}^{\infty} p(n) q^{n} = \prod_{n=1}^{\infty} \frac{1}{1 - q^{n}}, \] where $q = e^{2 \pi i z}$. (See Chapter 5 of \cite{Andrews} for a proof as well as for an exact formula for $p(n)$). Another important example is given by the arithmetic of quadratic forms. Let $Q$ be a positive-definite, integral, quadratic form in $r$ variables, where $r$ is even, and let $r_Q(n)$ denote the number of representations of the integer $n$ by $Q$. It is well-known that the generating function \[ \theta_{Q}(z) = \sum_{n=0}^{\infty} r_{Q}(n) q^{n} \] is a holomorphic modular form of weight $\frac{r}{2}$ for some congruence subgroup of ${\rm SL}_{2}(\mathbb{Z})$ (see Chapter 10 of \cite{Iwa} for details). To determine which integers are represented by $Q$, it is necessary to study the decomposition \[ \theta_{Q}(z) = E(z) + G(z) \] where $E(z)$ is an Eisenstein series and $G(z)$ is a cusp form, and to determine explicit bounds on the coefficients of $E(z)$ and $G(z)$. 
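The generating-function point of view is easy to experiment with directly. As a hedged illustration only (this form is chosen for simplicity and is not one of the forms studied below), the following Python sketch counts representations $r_Q(n)$ by the four-squares form $Q = x_1^2 + x_2^2 + x_3^2 + x_4^2$ by brute force and compares the result with Jacobi's classical formula $r_Q(n) = 8 \sum_{d \mid n,\, 4 \nmid d} d$.

```python
import itertools

# Hedged illustration: representation numbers of the sum of four squares,
# a form chosen only for demonstration purposes.  Brute-force counting of
# integer solutions is compared with Jacobi's classical formula
#   r_Q(n) = 8 * (sum of divisors d of n with 4 not dividing d).

def r_Q(n, bound=10):
    """Count integer 4-tuples (x1,...,x4) with x1^2+...+x4^2 = n."""
    count = 0
    for x in itertools.product(range(-bound, bound + 1), repeat=4):
        if x[0] ** 2 + x[1] ** 2 + x[2] ** 2 + x[3] ** 2 == n:
            count += 1
    return count

def jacobi(n):
    """Jacobi's formula for the number of representations by four squares."""
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4 != 0)

print([jacobi(n) for n in range(1, 6)])   # → [8, 24, 32, 24, 48]
print(all(r_Q(n, bound=4) == jacobi(n) for n in range(1, 10)))   # → True
```

Here every coefficient of $\theta_Q$ is given by a divisor sum, reflecting the fact that in this small case the theta series is an Eisenstein series and the cuspidal part $G(z)$ vanishes.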
If $r \geq 6$, formulas for the coefficients of Eisenstein series show that the coefficients of $E(z)$ are of size $n^{\frac{r}{2} - 1}$, and if we write \[ G(z) = \sum_{i=1}^{\ell} c_{i} g_{i}(d_{i} z) \] where the $g_{i}(z)$ are newforms, then Deligne's proof of the Weil conjectures implies that the $n$th coefficient of $G(z)$ is bounded by \[ \left(\sum_{i=1}^{\ell} |c_{i}|\right) d(n) n^{\frac{r - 2}{4}}. \] In \cite{BH}, Bhargava and Hanke prove that a positive-definite quadratic form with integer coefficients represents every positive integer if and only if it represents the integers from 1 up to 290; in fact, it is only necessary for the form to represent 29 of these numbers. To prove this, they study about $6000$ quadratic forms in four variables, and the most time-consuming part of their calculation comes from computing the constant \[ C(G) = \sum_{i=1}^{\ell} |c_{i}|. \] In this paper, we find bounds for this constant $C(G)$ for general cusp forms $G$ of weight $k$ and full level. If \[ \ell := \dim S_{k} = \begin{cases} \lfloor \frac{k}{12} \rfloor & \text{ if } k \not\equiv 2 \pmod{12}\\ \lfloor \frac{k}{12} \rfloor - 1 & \text{ if } k \equiv 2 \pmod{12}, \end{cases} \] then any cusp form $G(z) = \sum_{n=1}^{\infty} a(n) q^{n}$ is determined uniquely by the coefficients $a(1)$, $a(2)$, $\ldots$, $a(\ell)$. In fact, in \cite[Theorem 3]{BKO}, Bruinier, Kohnen and Ono showed that the coefficients $a(n)$ of $G(z)$ may be explicitly computed recursively from the first $\ell$ coefficients of $G$. Specifically, $a(n)$ may be written as a polynomial with rational coefficients in the coefficients $a(n-i)$, the weight $k$, and the values of the $j$-function at points in the divisor of $G$. Our first result is a bound on $\sum_{i=1}^{\ell} |c_{i}|$ (giving a bound on $\abs{a(n)}$) in terms of the coefficients $a(1), a(2), \ldots, a(\ell)$. \begin{thm} \label{bound} Assume the notation above. 
Then \[ |a(n)| \leq \sqrt{\log(k)} \left(11 \cdot \sqrt{\sum_{m=1}^{\ell} \frac{|a(m)|^{2}}{m^{k-1}}} + \frac{e^{18.72} (41.41)^{k/2}}{k^{(k-1)/2}} \cdot \left|\sum_{m=1}^{\ell} a(m) e^{-7.288m} \right| \right) \cdot d(n) n^{\frac{k-1}{2}}. \] \end{thm} We apply this result to the study of extremal lattices. An even, unimodular lattice is a free $\mathbb{Z}$-module $\Lambda$ of rank $r$, together with a quadratic form $Q : \Lambda \to \mathbb{Z}$ with the property that the inner product \[ \langle \vec{x}, \vec{y} \rangle = Q(\vec{x} + \vec{y}) - Q(\vec{x}) - Q(\vec{y}) \] is positive definite on $\mathbb{R} \otimes \Lambda$ and is an integer for all pairs $\vec{x}, \vec{y} \in \Lambda$; additionally, we require that $\langle \vec{x}, \vec{x} \rangle$ is even for all $\vec{x} \in \Lambda$, and that the dual lattice \[ \Lambda^{\#} := \{ \vec{x} \in \mathbb{R} \otimes \Lambda : \langle \vec{x}, \vec{y} \rangle \in \mathbb{Z} \text{ for all } \vec{y} \in \Lambda \} \] is equal to $\Lambda$. For such a lattice, we must have $r \equiv 0 \pmod{8}$, so the theta function $\theta_{Q}$ is a modular form for ${\rm SL}_{2}(\mathbb{Z})$ of weight $k \equiv 0 \pmod{4}$. For example, if \[ Q = x_{1}^{2} + x_{2}^{2} + x_{3}^{2} + x_{4}^{2} + x_{5}^{2} + x_{6}^{2} + x_{7}^{2} + x_{8}^{2} - x_{1} x_{3} - x_{2} x_{4} - x_{3} x_{4} - x_{4} x_{5} - x_{5} x_{6} - x_{6} x_{7} - x_{7} x_{8}, \] then $\Lambda$ is the $E_{8}$ lattice and \[ \theta_{Q}(z) = E_{4}(z) = 1 + 240 \sum_{n=1}^{\infty} \sigma_{3}(n) q^{n}. \] An even, self-dual lattice $\Lambda$ is called extremal if $r_{Q}(n) = 0$ for $1 \leq n \leq \lfloor \frac{r}{24} \rfloor$. This means that if $Q$ is the quadratic form corresponding to $\Lambda$, then \[ \theta_{Q}(z) = 1 + O(q^{\ell + 1}) \in M_{\frac{r}{2}}. \] An example is given by the famous Leech lattice $\Lambda_{24}$. 
It is the unique extremal lattice of dimension 24, and ${\rm Aut}(\Lambda_{24})$ is a perfect group whose quotient by $-1$ is $Co_{1}$, the first sporadic finite simple group discovered by John H. Conway. Little is known about the set of dimensions in which extremal lattices exist, and examples are known only in dimensions $\leq 88$. Cases where the rank is a multiple of $24$ are particularly challenging, and Nebe~\cite{Nebe} recently succeeded in constructing a $72$-dimensional extremal lattice. If $\Lambda$ is an extremal lattice of dimension $r$, then the definition of $r_Q(n)$ implies that all the Fourier coefficients of the modular form \[ \theta_{Q}(z) = \sum_{n=0}^{\infty} r_{Q}(n) q^{n} = 1 + O(q^{\ell + 1}) \in M_{\frac{r}{2}} \] are non-negative. In \cite{MOS}, Mallows, Odlyzko, and Sloane use this to show that extremal lattices fail to exist in large dimensions (larger than about 164,000) by showing that the unique modular form of weight $k$ with Fourier expansion \[ F_{k,0}(z) = \sum_{n=0}^{\infty} a(n) q^{n} = 1 + O(q^{\ell + 1}), \] has $a(\ell + 2) < 0$ if $k$ is large enough. (In \cite{S}, Siegel proved that $a(\ell + 1) > 0$ for all $k \equiv 0 \pmod{4}$). As an application of Theorem~\ref{bound}, we give an explicit estimate on the largest index negative coefficient of $F_{k,0}(z)$. \begin{thm} \label{extreme} Suppose that $k \equiv 0 \pmod{4}$, and $F_{k,0}(z) \in M_{k}$ is the unique modular form of weight $k$ with \[ F_{k,0}(z) = 1 + O(q^{\ell + 1}) = \sum_{n=0}^{\infty} a(n) q^{n}. \] We have $a(n) > 0$ if \[ n \geq e^{58.366/(k-2)} (\ell^{3} \log(k))^{\frac{1}{k-2}} 1.0242382 \ell. \] \end{thm} \begin{rem} The result above is surprisingly strong. The factor preceding $1.0242382 \ell$ tends to $1$ as $k \to \infty$, and since $a(n) = 0$ for $n \leq \ell$, the only region in which negative coefficients could occur is (asymptotically) \[ \ell < n < 1.0242382 \ell. 
\] \end{rem} We now use this bound to determine the largest weights $k$ for which all the coefficients of $F_{k,0}(z)$ are non-negative. This depends on $k \pmod{12}$, and so we have three cases. \begin{cor} \label{cor0} The largest weight $k$ for which all coefficients of $F_{k,0}(z)$ are non-negative is \begin{align*} k = 81288 \, \, &\text{if} \, \, k \equiv 0 \pmod{12}, \\ k = 81460 \, \, &\text{if} \, \, k \equiv 4 \pmod{12}, \text{and} \\ k = 81632 \, \, &\text{if} \, \, k \equiv 8 \pmod{12}. \end{align*} \end{cor} \begin{rem} As a consequence, the largest possible dimension of an extremal lattice is $163264$. \end{rem} Our approach to proving our results is to study the basis of cusp forms \[ F_{k,m}(z) = q^m + \sum_{n=\ell + 1}^{\infty} A_k(m,n) q^{n} \in S_{k}. \] Theorem 2 of \cite{DJ} gives a generating function for the forms $F_{k,m}(z)$, and by integrating this generating function we are able to isolate individual coefficients of these forms. Using this method leads to a bound of the form \[ |A_k(m,n)| \leq c_{1} \cdot c_{2}^{\ell} e^{c_{3} m + c_{4} n} \] where $c_{1}, c_{2} > 0$, $c_{3} < 0$ and $0 < c_{4} < \pi \sqrt{3}$. Given that the coefficients of a cusp form of weight $k$ are bounded by $O(d(n) n^{\frac{k-1}{2}})$, this bound is not useful by itself. Next, we estimate the Petersson norm $\langle F_{k,m}, F_{k,m} \rangle$ which is (essentially) the infinite sum \[ \sum_{n=1}^{\infty} \frac{|A_k(m,n)|^{2}}{n^{k-1}} \int_{2 \pi \sqrt{3} n}^{\infty} y^{k-2} e^{-y} \, dy. \] The exponential decay in the integral now cancels the exponential growth from the bound on $|A_k(m,n)|$. Finally, we translate the bound on $\langle F_{k,m}, F_{k,m} \rangle$ to a bound on the constant $\sum_{i=1}^{\ell} |c_{i}|$ using methods similar to those in \cite{Rou}. An outline of the paper is as follows. In Section~\ref{prelim} we review necessary background material about modular forms.
In Sections~\ref{proofofbound} and \ref{proofofextreme} we prove Theorems~\ref{bound} and \ref{extreme}, respectively. In Section~\ref{computations}, we prove Corollary~\ref{cor0}. \section{Preliminaries} \label{prelim} Let $M_{k}$ denote the $\mathbb{C}$-vector space of all holomorphic modular forms of weight $k$ for ${\rm SL}_{2}(\mathbb{Z})$, and let $S_{k}$ denote the subspace of cusp forms. For even $k \geq 4$, we have the classical Eisenstein series \[ E_{k}(z) = 1 - \frac{2k}{B_{k}} \sum_{n=1}^{\infty} \sigma_{k-1}(n) q^{n} \in M_{k}, \] where $B_k$ is the $k$th Bernoulli number and $\sigma_{k-1}(n)$ is the sum of the $(k-1)$st powers of the divisors of $n$. We will also use the standard $\Delta$-function \[\Delta(z) = \frac{E_4^3-E_6^2}{1728} = q \prod_{n=1}^\infty (1-q^n)^{24} = \sum_{n=1}^\infty \tau(n) q^n \in S_{12}\] and the classical modular $j$-function \[j(z) = \frac{E_4(z)^3}{\Delta(z)} = q^{-1} + 744 + 196884 q + \ldots,\] a weakly holomorphic modular form of weight 0. (Weakly holomorphic modular forms are holomorphic on the upper half plane and satisfy the modular transformation law, but may have poles at the cusps.) For each prime $p$, there is a Hecke operator $T_{p} : M_{k} \to M_{k}$ given by \[ \sum_{n=1}^{\infty} a(n) q^{n} | T_{p} := \sum_{n=1}^{\infty} \left(a(pn) + p^{k-1} a\left(\frac{n}{p}\right)\right) q^{n}, \] where $a\left(\frac{n}{p}\right) := 0$ if $p \nmid n$. The subspace $S_{k}$ is stable under the action of the Hecke operators. If $f, g \in S_{k}$, we define the Petersson inner product of $f$ and $g$ by \[ \langle f, g \rangle = \frac{3}{\pi} \int_{-1/2}^{1/2} \int_{\sqrt{1-x^{2}}}^{\infty} f(x+iy) \overline{g(x+iy)} y^{k} \, \frac{dx \, dy}{y^{2}}.
\] It is well-known (see Theorem 6.12 of \cite{Iwa} for a proof) that the Hecke operators are self-adjoint with respect to the Petersson inner product, and this fact, together with the commutativity of $T_{p}$ and $T_{q}$, implies that there is a basis for $S_{k}$ consisting of Hecke eigenforms, each normalized so that the coefficient of $q$ is equal to $1$. If \[ g(z) = \sum_{n=1}^{\infty} a(n) q^{n} \] is such a Hecke eigenform, Deligne proves in \cite{Del} that if $p$ is prime, then \[ |a(p)| \leq 2 p^{\frac{k-1}{2}}, \] as a consequence of the Weil conjectures. It follows from this that $|a(n)| \leq d(n) n^{\frac{k-1}{2}}$ for all $n \geq 1$. The self-adjoint property of the Petersson inner product implies that if $g_{i}$ and $g_{j}$ are two distinct Hecke eigenforms, then $\langle g_{i}, g_{j} \rangle = 0$. On the other hand, the second equation on p. 251 of \cite{Iwa} gives that \[ L({\rm Sym}^{2} g_{i}, 1) = \frac{\pi^{2}}{6} \cdot \frac{(4 \pi)^{k} \langle g_{i}, g_{i} \rangle}{\Gamma(k)}. \] Here, $L({\rm Sym}^{2} g_{i}, s)$ is the symmetric square $L$-function. In the appendix to \cite{GHL}, Goldfeld, Hoffstein and Lieman proved that $L({\rm Sym}^{2} g_{i}, s)$ has no Siegel zeroes, and in \cite{Rou}, the second author used this to derive the lower bound \[ L({\rm Sym}^{2} g_{i}, 1) \geq \frac{1}{64 \log(k)}. \] \section{Proof of Theorem~\ref{bound}} \label{proofofbound} Let $\ell = \dim S_{k}$ and write $k = 12 \ell + k'$, where $k' \in \{ 0, 4, 6, 8, 10, 14 \}$. For each integer $m$ with $1 \leq m \leq \ell$, we let $F_{k,m}(z)$ denote the unique weight $k$ modular form with a Fourier expansion of the form \[ F_{k,m}(z) = q^m + \sum_{n=\ell+1}^{\infty} A_{k}(m,n) q^{n}. \] In \cite{DJ}, Duke and the first author gave a generating function for the $F_{k,m}(z)$. Note that the notation in this paper differs slightly from theirs; $F_{k, m}$ is equal to the modular form $f_{k, -m}$ in~\cite{DJ}. 
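Deligne's bound and the eigenform structure can be observed numerically for $\Delta$, the unique normalized eigenform in $S_{12}$. The following Python sketch (the truncation order $N$ is an arbitrary choice) computes $\tau(n)$ from the product expansion of $\Delta$ and checks $|\tau(p)| \leq 2p^{11/2}$ at small primes, together with the multiplicativity $\tau(6) = \tau(2)\tau(3)$.

```python
# Hedged illustration: compute tau(n) from Delta = q * prod (1 - q^n)^24
# by truncated power-series multiplication, then check Deligne's bound
# |tau(p)| <= 2 p^{11/2} at small primes and the eigenform relation
# tau(6) = tau(2) * tau(3).  The truncation order N is an arbitrary choice.

N = 60

def tau_coefficients(N):
    """Return [tau(1), ..., tau(N)] from the product expansion of Delta."""
    # series[i] is the coefficient of q^i in prod_{n <= N} (1 - q^n)^{24}
    series = [0] * (N + 1)
    series[0] = 1
    for n in range(1, N + 1):
        for _ in range(24):                      # one factor (1 - q^n) at a time
            for i in range(N, n - 1, -1):
                series[i] -= series[i - n]
    return [series[k - 1] for k in range(1, N + 1)]   # Delta = q * series

tau = dict(enumerate(tau_coefficients(N), start=1))
print([tau[n] for n in range(1, 7)])   # → [1, -24, 252, -1472, 4830, -6048]
print(all(abs(tau[p]) <= 2 * p ** 5.5 for p in (2, 3, 5, 7, 11, 13)))   # → True
```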
\begin{thmnonum}[Lemma~2 of \cite{DJ}] We have \[ F_{k,m}(z) = \frac{1}{2 \pi i} \oint_{C} \frac{\Delta^{\ell}(z) E_{k'}(z) E_{14-k'}(\tau)}{\Delta^{1+\ell}(\tau) (j(\tau) - j(z))} p^{m-1} \, dp, \] where $p = e^{2 \pi i \tau}$ and $C$ denotes a (counterclockwise) circle in the $p$-plane with sufficiently small radius. \end{thmnonum} Inspection shows that the only poles of the integrand (as $\tau$ varies) occur when $\tau$ is equivalent to $z$ under the action of ${\rm SL}_{2}(\mathbb{Z})$. We change variables by setting $\tau = u + iv$, $p = e^{2 \pi i \tau}$, $dp = 2 \pi i e^{2 \pi i \tau} \, d\tau$, and let $v$ and $y$ be fixed constants. This gives \[ F_{k,m}(z) = \int_{-.5}^{.5} \frac{\Delta^{\ell}(z) E_{k'}(z) E_{14 - k'}(\tau)} {\Delta^{1+\ell}(\tau) (j(\tau) - j(z))} e^{2 \pi i m \tau} \, du, \] which is valid provided no point with imaginary part at least $v$ is equivalent to $z$ under the action of ${\rm SL}_{2}(\mathbb{Z})$. It follows that \[ A_{k}(m,n) = \int_{-.5}^{.5} \int_{-.5}^{.5} \frac{\Delta^{\ell}(z) E_{k'}(z) E_{14 - k'}(\tau)}{\Delta^{1+\ell}(\tau) (j(\tau) - j(z))} e^{2 \pi i m \tau} e^{-2 \pi i n z} \, du \, dx, \] provided no point $\tau$ with $\IM{\tau} \geq v$ is equivalent to any point $z$ with $\IM{z} = y$. From this, it is clear that we can take absolute values to obtain the bound \[\abs{A_k(m, n)} \leq \max_{\abs{u}, \abs{x} \leq .5} \abs{\frac{\Delta(z)}{\Delta(\tau)}}^\ell \abs{\frac{E_{k'}(z)E_{14-k'}(\tau)}{\Delta(\tau)(j(\tau)-j(z))}} e^{-2\pi m v} e^{2\pi n y}.\] Since $\Delta(z) = q - 24 q^2 + O(q^3)$, we have $\abs{\Delta(z)} \leq e^{-2\pi y} + 24 e^{-4 \pi y} + B$, where $B$ is a bound on the tail $\sum_{n=3}^\infty \tau(n) q^n$ of the series. We can bound the tail by $\sum_{n=3}^\infty d(n) n^{11/2} e^{-2\pi n y}$; using the bound $d(n) \leq 2 \sqrt{n}$, we can evaluate the resulting sum exactly in terms of $y$. This gives us an explicit upper bound for $\abs{\Delta(z)}$ in terms of $y$.
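These bounds are straightforward to evaluate numerically. The following Python sketch checks the resulting constants for $|\Delta(z)|$ and for the analogous lower bound for $|\Delta(\tau)|$ at the values $y = .865$ and $v = 1.16$ used below; it truncates the tails at an arbitrary cutoff $M$ rather than evaluating them in closed form, using the estimate $|\tau(n)| \leq d(n) n^{11/2} \leq 2n^{6}$ from the text.

```python
import math

# Hedged numerical check of the Delta bounds described in the text, with
# |tau(n)| <= d(n) n^{11/2} <= 2 n^6.  The tail cutoff M is an arbitrary
# choice; the terms beyond it are negligible at these heights.

M = 50

def delta_upper(y):
    """Upper bound for |Delta(x+iy)| via the triangle inequality."""
    t = math.exp(-2 * math.pi * y)
    return t + 24 * t ** 2 + sum(2 * n ** 6 * t ** n for n in range(3, M))

def delta_lower(v):
    """Lower bound for |Delta(u+iv)|: leading term minus the rest."""
    t = math.exp(-2 * math.pi * v)
    return t - 24 * t ** 2 - sum(2 * n ** 6 * t ** n for n in range(3, M))

ratio = delta_upper(0.865) / delta_lower(1.16)
inv = 1 / delta_lower(1.16)
print(ratio, inv)   # close to the constants 7.358 and 1488.802 used below
```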
Similarly, we find a lower bound for $\abs{\Delta(\tau)}$ in terms of $v$. For each of the six choices of $k'$, we bound $\abs{E_{k'}(z) E_{14-k'}(\tau)}$ in terms of $y$ and $v$ by noting that $\sigma_{k-1}(n) \leq 2\sqrt{n} n^{k-1} \leq 2n^{k}$, so that \[\abs{E_k(z)} = \abs{1 - \frac{2k}{B_k} \sum_{n=1}^\infty \sigma_{k-1}(n) q^n} \leq 1 + \frac{2k}{\abs{B_k}} \sum_{n=1}^\infty 2n^k e^{-2\pi n y}.\] This latter sum may be exactly evaluated in terms of $y$. At this point, we set $y = .865$ and $v = 1.16$; these values satisfy the conditions above, since all points equivalent to $z = x + .865i$ under the action of ${\rm SL}_2(\mathbb{Z})$ have imaginary part less than 1.16, and give reasonable bounds for the quantities we are studying. With these choices, we find that \[\abs{\frac{\Delta(z)}{\Delta(\tau)}} \leq 7.358,\] \[\abs{\frac{1}{\Delta(\tau)}} \leq 1488.802,\] \[\abs{E_{k'}(z)E_{14-k'}(\tau)} \leq 40.368.\] It remains to bound the quantity $\abs{j(\tau)-j(z)}$ on the appropriate intervals. We bound the tails of the two series, taking all terms with exponent 10 and above for $j(z)$ and all terms with exponent $5$ and above for $j(\tau)$. Using the bounds given in~\cite{BP}, we find that the tail of $j(z)$ is bounded by \[\sum_{n=10}^\infty e^{-2\pi n (.865)} \frac{1}{\sqrt{2} n^{3/4}} e^{4\pi \sqrt{n}} \left(1 - \frac{3}{32\pi\sqrt{n}} + \frac{.055}{n}\right) \leq \frac{1.055}{\sqrt{2}} \sum_{n=10}^\infty e^{-2\pi \sqrt{n} (.865\sqrt{n}-2)} \]\[\leq \frac{1.055}{\sqrt{2}}\sum_{n=10}^\infty e^{-2\pi \sqrt{n}(.2\sqrt{n})} \leq .000003636545.\] Similarly, the tail of $j(\tau)$ is bounded by $.000003636545$. We now bound the main terms of $\abs{j(\tau)-j(z)}$. Writing $j(z) = q^{-1} + \sum c(n) q^n$, we must find a lower bound for \[G(x, u) = \left| p^{-1} + \sum_{i=1}^4 c(i)p^i - q^{-1} - \sum_{i=1}^9 c(i)q^i \right|,\] where $p = e^{2\pi i (u + 1.16 i)}$, $q = e^{2 \pi i (x + .865 i)}$, and $\abs{u}, \abs{x} \leq .5$.
To bound $G(x, u)$, we examine the function $G(x, u)^2$, which can be written as an expression in $\cos(2\pi n x), \cos(2\pi n u), \sin(2\pi n x)$, and $\sin(2\pi n u)$. After finding bounds on the partial derivatives of $G^2$ with respect to $x$ and $u$, we compute its values on a grid of points satisfying $\abs{u}, \abs{x} \leq .5$ to see that $G^2 \geq 900$, implying that $G(x, u) \geq 30$ in this range. The computations were performed using Maple, and were shortened by noting that $G(x, u) = G(-x, -u)$; the bounds on derivatives were calculated by trivially bounding the second derivatives and, again, computing values on a grid of points. Putting together these computations, we see that \[ |A_{k}(m,n)| \leq 2003.34 \cdot 7.358^\ell e^{-2 \pi m \cdot 1.16} e^{2 \pi n \cdot 0.865}.\] We now use this estimate on $|A_{k}(m,n)|$ to estimate $\langle G, G \rangle$, where $G = \sum_{m=1}^{\ell} a(m) F_{k, m}$. We have \begin{align*} \langle G, G \rangle &= \frac{3}{\pi} \int_{-1/2}^{1/2} \int_{\sqrt{1-x^{2}}}^{\infty} |G(x+iy)|^{2} y^{k-2} \, dy \, dx\\ &\leq \frac{3}{\pi} \int_{\sqrt{3}/2}^{\infty} \int_{-1/2}^{1/2} |G(x+iy)|^{2} y^{k-2} \, dx \, dy.\\ \end{align*} Plugging in the Fourier expansion $G(z) = \sum_{n=1}^{\infty} a(n) q^{n}$ and using the fact that we are integrating over a complete period gives \[ \langle G, G \rangle \leq \frac{3}{\pi} \sum_{n=1}^{\infty} |a(n)|^{2} \int_{\sqrt{3}/2}^{\infty} y^{k-2} e^{-4 \pi n y} \, dy. \] Setting $u = 4 \pi n y$, $du = 4 \pi n \, dy$ gives \begin{equation}\label{ff} \langle G, G \rangle \leq \frac{12}{(4 \pi)^{k}} \sum_{n=1}^{\infty} \frac{|a(n)|^{2}}{n^{k-1}} \int_{2 \pi \sqrt{3} n}^{\infty} u^{k-2} e^{-u} \, du. \end{equation} We have \[ a(n) = \sum_{m=1}^{\ell} a(m) A_{k}(m,n) \] and so for $n \geq \ell + 1$, we have \[ |a(n)|^{2} \leq (2003.34)^{2} (7.358)^{2 \ell} \left| \sum_{m=1}^{\ell} a(m) e^{-2 \pi m \cdot 1.16}\right|^{2} \cdot e^{4 \pi n \cdot 0.865}. 
\] For $1 \leq n \leq \ell$ we use the simple bound \[ \int_{2 \pi \sqrt{3} n}^{\infty} u^{k-2} e^{-u} \, du \leq \int_{0}^{\infty} u^{k-2} e^{-u} \, du = (k-2)!. \] Hence, the contribution to $\langle G, G \rangle$ from the terms with $1 \leq n \leq \ell$ is at most \[ \frac{12 (k-2)!}{(4 \pi)^{k}} \sum_{n=1}^{\ell} \frac{|a(n)|^{2}}{n^{k-1}}. \] For $n \geq \ell + 1$ we use that \[ \int_{2 \pi \sqrt{3} n}^{\infty} u^{k-2} e^{-u} \, du = e^{-2 \pi \sqrt{3} n} \sum_{i=0}^{k-2} \frac{(k-2)!}{i!} (2 \pi \sqrt{3} n)^{i}. \] Since the highest power of $n$ in this expression is $k-2$, the piece \[ \frac{1}{n^{k-1}} \sum_{i=0}^{k-2} \frac{(k-2)!}{i!} (2 \pi \sqrt{3} n)^{i} \] of the right side of equation~\eqref{ff} is a decreasing function of $n$ and is therefore bounded by \[ \frac{1}{(\ell+1)^{k-1}} \sum_{i=0}^{\infty} \frac{(k-2)!}{i!} (2 \pi \sqrt{3} (\ell+1))^{i} = \frac{(k-2)! e^{2 \pi \sqrt{3} (\ell + 1)}}{(\ell+1)^{k-1}}. \] Hence, the contribution to $\langle G, G \rangle$ from the terms with $n \geq \ell + 1$ is at most \[ \frac{12}{(4 \pi)^{k}} \cdot (2003.34)^{2} (7.358)^{2 \ell} \left| \sum_{m=1}^{\ell} a(m) e^{-2 \pi m \cdot 1.16} \right|^{2} \cdot \frac{(k-2)! e^{2 \pi \sqrt{3} (\ell + 1)}}{(\ell+1)^{k-1}} \cdot \sum_{n=\ell+1}^{\infty} e^{4 \pi n \cdot 0.865} e^{-2 \pi \sqrt{3} n}. \] The sum on $n$ is a geometric series, and we have $4 \pi \cdot 0.865 - 2 \pi \sqrt{3} \leq -0.01288$. This gives the bound \[ \frac{(k-2)! (12168805)^{2}}{(4 \pi)^{k}} \left| \sum_{m=1}^{\ell} a(m) e^{-2 \pi m \cdot 1.16} \right|^{2} \cdot \frac{(7.358)^{k/6} 12^{k} e^{k \pi \sqrt{3}/6} e^{-0.00107 k}}{k^{k-1}}. \] Thus, we have \[ \langle G, G \rangle \leq \frac{12 (k-2)!}{(4 \pi)^{k}} \sum_{m=1}^{\ell} \frac{|a(m)|^{2}}{m^{k-1}} + \frac{(12168805)^{2} (k-2)!}{(4 \pi)^{k}} \left| \sum_{m=1}^{\ell} a(m) e^{-2 \pi m \cdot 1.16}\right|^{2} \cdot \frac{(41.41)^{k}}{k^{k-1}}. 
\] Now, we write $G = \sum_{i=1}^{\ell} c_{i} g_{i}$, where the $g_{i}$ are the normalized Hecke eigenforms. Using the lower bound on $L({\rm Sym}^{2} g_{i}, 1)$ and the relation between $L({\rm Sym}^{2} g_{i}, 1)$ and $\langle g_{i}, g_{i} \rangle$, we get \begin{align*} \langle G, G \rangle &= \sum_{i=1}^{\ell} |c_{i}|^{2} \langle g_{i}, g_{i} \rangle\\ &\geq \sum_{i=1}^{\ell} |c_{i}|^{2} \cdot \left(\frac{3 (k-1)!}{32 \pi^{2} (4 \pi)^{k} \log(k)}\right). \end{align*} This gives an upper bound on $\sum_{i=1}^{\ell} |c_{i}|^{2}$ in terms of $\langle G, G \rangle$. The Cauchy-Schwarz inequality gives \begin{align*} \sum_{i=1}^{\ell} |c_{i}| &\leq \sqrt{\ell} \sqrt{\sum_{i=1}^{\ell} |c_{i}|^{2}}\\ &\leq \sqrt{\frac{k}{k-1} \cdot \frac{32 \pi^{2}}{3} \log(k)} \cdot \sqrt{\sum_{m=1}^{\ell} \frac{|a(m)|^{2}}{m^{k-1}}}\\ &+ \sqrt{\frac{k}{k-1} \cdot \frac{32 \pi^{2}}{3} \cdot \log(k)} \cdot 12168805 \cdot \left|\sum_{m=1}^{\ell} a(m) e^{-7.288m}\right| \cdot \frac{(41.41)^{k/2}}{k^{(k-1)/2}}\\ &\leq \sqrt{\log(k)} \left(11 \cdot \sqrt{\sum_{m=1}^{\ell} \frac{|a(m)|^{2}}{m^{k-1}}} + \frac{e^{18.72} (41.41)^{k/2}}{k^{(k-1)/2}} \cdot \left|\sum_{m=1}^{\ell} a(m) e^{-7.288m} \right|\right). \end{align*} This concludes the proof of Theorem~\ref{bound}. \section{Proof of Theorem~\ref{extreme}} \label{proofofextreme} Write $F_{k,0}(z) = E_{k}(z) + h(z)$, where \[ h(z) = \sum_{n=1}^{\infty} b(n) q^{n}. \] Since $F_{k,0}(z) = 1 + O(q^{\ell + 1})$, we have \[ b(m) = \frac{2k}{B_{k}} \sigma_{k-1}(m) \] for $1 \leq m \leq \ell$. We now apply Theorem~\ref{bound}, which gives that $|b(n)|$ is bounded by \[ \sqrt{\log(k)} \left(11 \sqrt{ \sum_{m=1}^{\ell} \frac{|b(m)|^{2}}{m^{k-1}} } + \frac{e^{18.72} (41.41)^{k/2}}{k^{(k-1)/2}} \cdot \left| \sum_{m=1}^{\ell} b(m) e^{-7.288 m} \right|\right) d(n) n^{\frac{k-1}{2}}. \] We have that \[ \zeta(k) = \frac{(-1)^{\frac{k}{2} - 1} (2 \pi)^{k} B_{k}}{(k-1)! \cdot 2k}. \] If $k \geq 12$, then $1 \leq \zeta(k) \leq \zeta(12) \leq 1.00025$.
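Both the zeta-Bernoulli identity and the numerical bound $\zeta(12) \leq 1.00025$ are easy to verify. A minimal Python sketch at $k = 12$, where $B_{12} = -691/2730$, which also checks the consequence for $-2k/B_k$ used in the next step:

```python
import math
from fractions import Fraction

# Hedged numerical check, at k = 12 (B_12 = -691/2730), of the identity
#   zeta(k) = (-1)^{k/2 - 1} (2 pi)^k B_k / ((k-1)! * 2k)
# and of the bound 0.9997 (2 pi)^k / (k-1)! <= -2k/B_k <= (2 pi)^k / (k-1)!.

k, B = 12, Fraction(-691, 2730)

zeta_formula = (-1) ** (k // 2 - 1) * (2 * math.pi) ** k * float(B) \
    / (math.factorial(k - 1) * 2 * k)
zeta_series = sum(n ** -k for n in range(1, 20000))   # direct partial sum

neg2k_over_Bk = -2 * k / float(B)
upper = (2 * math.pi) ** k / math.factorial(k - 1)

print(zeta_formula, zeta_series)   # both approximately 1.000246
```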
Thus, for $k \geq 12$ we have \[ 0.9997 \frac{(2 \pi)^{k}}{(k-1)!} \leq -\frac{2k}{B_{k}} \leq \frac{(2 \pi)^{k}}{(k-1)!}. \] Now, we have \[ \sigma_{k-1}(m) = \sum_{d | m} d^{k-1} = \sum_{d | m} (m/d)^{k-1} = m^{k-1} \sum_{d | m} \frac{1}{d^{k-1}} \leq m^{k-1} \zeta(k-1). \] We have \[ \sqrt{\sum_{m=1}^{\ell} \frac{|b(m)|^{2}}{m^{k-1}}} \leq -\frac{2k \zeta(k-1)}{B_{k}} \sqrt{\sum_{m=1}^{\ell} m^{k-1}}. \] Also, \begin{align*} \sum_{m=1}^{\ell} m^{k-1} &= \int_{1}^{\ell + 1} \lfloor x \rfloor^{k-1} \, dx \leq \int_{1}^{\ell + 1} x^{k-1} \, dx \leq \frac{(\ell+1)^{k}}{k}\\ &\leq \frac{\ell^{k} \left(1 + \frac{1}{\ell}\right)^{12 \ell + 12}}{k} \leq \frac{e^{12} \ell^{k}}{k} \cdot \left(1 + \frac{1}{\ell}\right)^{12}. \end{align*} Thus, the contribution from the first term in Theorem~\ref{bound} is at most \[ \frac{(2 \pi)^{k}}{(k-1)!} \frac{11 \cdot 1.0005 \cdot e^{6} \ell^{k/2}}{\sqrt{k}} \left(1 + \frac{1}{\ell}\right)^{6} \sqrt{\log(k)}. \] Since $\ell \leq k/12 < (k-1)/7.288$, the function $m^{k-1} e^{-7.288 m}$ is increasing for $1 \leq m \leq \ell$, so its maximum on this range occurs at $m = \ell$. Thus, the second term of the bound from Theorem~\ref{bound} is at most \begin{align*} & \frac{\zeta(11) e^{18.72} (41.41)^{k/2} \ell^{k} e^{-7.288 \ell} (2 \pi)^{k} \sqrt{\log(k)}}{(k-1)! k^{(k-1)/2}}\\ &\leq \frac{(2 \pi)^{k}}{(k-1)!} e^{28.4657} \ell^{(k+1)/2} (1.0242382)^{k/2} \sqrt{\log(k)}. \end{align*} Adding the two contributions above, we have that \[ C(h) \leq \frac{(2 \pi)^{k}}{(k-1)!} e^{28.466} \sqrt{\ell \log(k)} (1.0242382 \ell)^{k/2}, \] and so $|b(n)| \leq C(h) d(n) n^{\frac{k-1}{2}} \leq 2C(h) n^{k/2}$. Now, we have \[ a(n) = -\frac{2k}{B_{k}} \sigma_{k-1}(n) + b(n) \geq 0.9997 \frac{(2 \pi)^{k}}{(k-1)!} n^{k-1} - 2 C(h) n^{k/2}. \] The right-hand side is positive if \begin{align*} n^{\frac{k}{2} - 1} &\geq \frac{2 e^{28.466} \sqrt{\ell \log(k)} (1.0242382 \ell)^{k/2}}{0.9997}\\ n &\geq e^{58.366/(k-2)} \left(\ell^{3} \log(k)\right)^{\frac{1}{k-2}} \cdot 1.0242382 \ell. \end{align*} This concludes the proof of Theorem~\ref{extreme}.
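As a sanity check on how Theorem~\ref{extreme} is applied in the next section, the threshold can be evaluated numerically at the three weights of Corollary~\ref{cor0}; a minimal Python sketch (the dimension formula for $\ell = \dim S_k$ is the one from the introduction; here $k \bmod 12 \in \{0,4,8\}$, so $\ell = \lfloor k/12 \rfloor$):

```python
import math

# Hedged evaluation of the threshold from Theorem 2 (label "extreme"):
#   n >= e^{58.366/(k-2)} (l^3 log k)^{1/(k-2)} * 1.0242382 * l
# at the three weights treated in Corollary 3 (label "cor0").

def dim_Sk(k):
    """dim S_k for SL_2(Z), using the formula from the introduction."""
    return k // 12 - 1 if k % 12 == 2 else k // 12

def threshold(k):
    l = dim_Sk(k)
    return (math.exp(58.366 / (k - 2))
            * (l ** 3 * math.log(k)) ** (1 / (k - 2))
            * 1.0242382 * l)

for k in (81288, 81460, 81632):
    print(k, dim_Sk(k), threshold(k))
```

In each case the threshold is roughly $1.025\,\ell$, below $7000$, so any negative coefficient must occur among the first $10000$ Fourier coefficients.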
\section{Proof of Corollary~\ref{cor0}} \label{computations} To verify that all Fourier coefficients of $F_{k,0}(z)$ are non-negative for $k \in \{81288, 81460, 81632\}$, we use the bound from Theorem~\ref{extreme}. This shows that any negative Fourier coefficient occurs within the first 10000. We find the unique linear combination \[ \sum_{i=0}^{\ell} c_{i} E_{4}^{k/4 - 3i} \Delta^{i} = 1 + O(q^{\ell + 1}) \] and this form will equal $F_{k,0}(z)$. It then suffices to check that the first 10000 Fourier coefficients are non-negative. These computations are performed in Magma \cite{Magma}, and take approximately 3 days for each weight. Recall that \[ F_{k,0}(z) = \sum_{n=0}^{\infty} a(n) q^{n}. \] We will show that $a(\ell + 2) < 0$ for $k$ sufficiently large (depending on $k \bmod 12$), making effective the work of Mallows, Odlyzko, and Sloane. Write \[ E_{4}^{-k/4} = \sum_{n=0}^{\infty} A(n) j^{-n} \] where $j$ is the usual $j$-function. B\"urmann's theorem gives that \begin{equation} \label{readoffcoeff} A(n) = \left(-\frac{k}{4n}\right) \cdot \text{ the coefficient of } q^{n-1} \text{ in } \left(\frac{dE_{4}}{dq} \frac{E_{4}^{3n - k/4 - 1} q^{n}}{\Delta^{n}}\right). \end{equation} Here and below, we write $\nu := k/4 - 3\ell$; since $k \equiv 0 \pmod{4}$, we have $\nu \in \{0, 1, 2\}$. Mallows, Odlyzko, and Sloane show (see \cite{MOS}, pg. 73) that \begin{align*} a(\ell + 1) &= -A(\ell + 1) > 0\\ a(\ell + 2) &= -A(\ell + 2) + A(\ell + 1) \left(24 \ell - 240 \nu + 744\right). \end{align*} We write \begin{align*} A(\ell + 1) &= -\frac{k}{4(\ell + 1)} \int_{-1/2}^{1/2} \theta(E_{4}) E_{4}^{2 - \nu} \frac{1}{\Delta^{\ell + 1}} \, dx\\ A(\ell + 2) &= -\frac{k}{4(\ell + 2)} \int_{-1/2}^{1/2} \theta(E_{4}) E_{4}^{5 - \nu} \frac{1}{\Delta^{\ell + 2}} \, dx \end{align*} where $\theta\left(\sum a_{n} q^{n}\right) = \sum n a_{n} q^{n}$, and the integrals are over the line segment $x + iy$, $-1/2 \leq x \leq 1/2$ where $y$ is fixed. We wish to find an upper bound on $|A(\ell + 2)|$ and a lower bound on $|A(\ell + 1)|$.
We choose $y$ so that $\frac{\Delta'(iy)}{\Delta(iy)} = 0$ (so $y \approx 0.52352$). We write the integrals above in the form \[ \int_{-1/2}^{1/2} H_{j}(x+iy) e^{-(\ell + j) \ln(\Delta(x+iy))} \, dx \] where $H_{1}(x+iy) = \theta(E_{4})(x + iy) E_{4}(x+iy)^{2 - \nu}$ and $H_{2}(x+iy) = \theta(E_{4})(x+iy) E_{4}(x+iy)^{5-\nu}$. If $B(x) = -\ln(\Delta(x+iy))$, then $|B(x)| \leq B(0) \approx 4.23579$. Moreover, the choice of $y$ gives that $B'(0) = 0$. We use Taylor's theorem with the Lagrange form of the remainder to write \[ B(x) = B(0) + \frac{1}{2} x^{2} {\rm Re}(B)''(z_{1}) + \frac{i}{2} x^{2} {\rm Im}(B)''(z_{2}) := B(0) + x^2 C_{1}(x) + i x^{2} C_{2}(x). \] for some $z_{1}$ and $z_{2}$ between $0$ and $x$. We bound from above and below the second derivatives of the real and imaginary parts of $B$. We derive similar bounds on $H_{1}(x+iy)$ and $H_{2}(x+iy)$. We then have \[ e^{-(\ell + j) B(x)} = e^{-(\ell + j) B(0)} \cdot e^{C_{1}(x) x^{2}} \left(\cos\left((\ell + j) C_{2}(x) x^{2}\right) + i \sin\left((\ell + j) C_{2}(x) x^{2}\right)\right). \] Since the integrals we are studying are both real, we wish to approximate the real part of the integrand. The main contribution comes in an interval of length about $\frac{1}{\sqrt{\ell}}$ in a neighborhood of $x = 0$, chosen so that $\cos((\ell + j) C_{2}(x) x^{2})$ is positive. We bound the contribution of the remaining part of $-1/2 \leq x \leq 1/2$ trivially. The bounds we obtain from this method show that $a(\ell + 2) < 0$ if $k \geq 84636$, $k \geq 83332$, and $k \geq 82532$ if $\nu = 0$, $\nu = 1$, or $\nu = 2$, respectively. We use \eqref{readoffcoeff} to compute the coefficient $a(\ell + 2)$ for all $k$ between the bounds given in Corollary~\ref{cor0} and the bounds above. This concludes the proof. 
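The choice of $y$ can be reproduced numerically: on the imaginary axis $\Delta'(iy)/\Delta(iy)$ is proportional to $E_{2}(iy)$, so a bisection on $\frac{d}{dy}\log\Delta(iy)$ recovers both $y \approx 0.52352$ and $B(0) = -\log\Delta(iy) \approx 4.23579$. A sketch (ours) in Python:

```python
from math import exp, log, pi

def dlog_delta(y, terms=80):
    """d/dy of log Delta(iy), with Delta = q * prod(1 - q^n)^24, q = e^{-2 pi y}."""
    q = exp(-2*pi*y)
    return -2*pi + 48*pi*sum(n*q**n/(1 - q**n) for n in range(1, terms))

def log_delta(y, terms=80):
    q = exp(-2*pi*y)
    return -2*pi*y + 24*sum(log(1 - q**n) for n in range(1, terms))

# Bisection: the derivative is positive for small y and negative for large y.
lo, hi = 0.3, 0.9
for _ in range(60):
    mid = 0.5*(lo + hi)
    if dlog_delta(mid) > 0:
        lo = mid
    else:
        hi = mid
y0 = 0.5*(lo + hi)        # the stationary point of Delta on the imaginary axis
B0 = -log_delta(y0)       # B(0) = -log Delta(i y0)
```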
\section{Overview} The successful start of the LHC and its first data taking mark the dawn of a new era in particle physics. Very soon we will be able to explore the energy scale of electro-weak symmetry breaking, which will enable us either to confirm the Standard Model as a low energy theory of particle physics or to discover new particles guiding us to extensions of the Standard Model. No matter what the outcome of the experiment will be, a successful interpretation of the data will require a large number of predictions calculated at least to Next-To-Leading Order~(NLO) in QCD. Some of these reactions involve up to four particles in the final state~\cite{:2008uu,Bern:2008ef,Buttar:2006zd,Campbell:2006wx}. The computation of NLO corrections to processes involving many final state particles is cumbersome and time consuming. The time required for a single calculation --- without the use of automated programs --- often coincides with the duration of a Ph.D. Considering the fast experimental progress it is very likely that already within the next two or three years the physics results of the LHC will raise the demand for precision calculations within and beyond the Standard Model for many multileg processes. Only the automatisation of NLO calculations will allow us to keep pace with the requirements set by the experiments. We argue that the automatisation of the computation of NLO cross-sections will also improve the possibility of comparing results from different implementations, especially if the tools are made public and common conventions are used for input/output and for interfacing to external programs. A first step in this direction has been made through the Binoth Les Houches Accord~\cite{Binoth:2010xt}. This accord exploits the modular structure of NLO calculations and proposes that this structure be reflected in the implementation of such calculations as computer programs. 
Any QCD cross-section at NLO can be written in the form \begin{equation} \sigma_{2\rightarrow N}^{\mathrm{NLO}}= \sigma_{2\rightarrow N}^{\mathcal B} + \sigma_{2\rightarrow N}^{\mathcal V} + \sigma_{2\rightarrow N+1}^{\mathcal R}. \end{equation} The first term on the right-hand side describes the Born-level cross-section calculated from the squared tree-level amplitude of the $2\rightarrow N$ process. The virtual corrections $\sigma_{2\rightarrow N}^{\mathcal V}$ stem from the interference term between tree-level and one-loop diagrams. The third term describes the real radiation of an extra, unobserved parton at tree-level. The last two terms lead to singularities which can be regularised by introducing a non-integer dimension $n=4-2\varepsilon$, yielding poles in $1/\varepsilon$ which cancel only after both terms have been added up. For practical applications it is therefore convenient to introduce subtraction terms, such that both $\sigma_{2\rightarrow N}^{\mathcal V}$ and $\sigma_{2\rightarrow N+1}^{\mathcal R}$ are finite and can be integrated over phase space~independently. A full implementation of an NLO calculation can therefore be modularised into a Monte-Carlo integrator for phase space integration, a tree-level matrix element generator for $\sigma_{2\rightarrow N}^{\mathcal B}$ and $\sigma_{2\rightarrow N+1}^{\mathcal R}$, a one-loop matrix element generator for $\sigma_{2\rightarrow N}^{\mathcal V}$ and infrared subtraction terms. Typically the first two of these components are implemented in the same program. An overview of existing techniques and recent contributions to all of the modules can be found, for example, in~\cite{Bern:2008ef} and~\cite{Binoth:2010xt}. Methods for computing~$\sigma_{2\rightarrow N}^{\mathcal V}$ can be classified in two groups. 
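As a toy illustration of this cancellation (not a real NLO computation; all numerical values below are invented for the example), one can model each contribution as a truncated Laurent series in $\varepsilon$ and check that the $1/\varepsilon^{2}$ and $1/\varepsilon$ poles of the virtual part are cancelled by the integrated real radiation:

```python
# Toy model (ours) of the pole structure: dict mapping power of eps -> coefficient.
sigma_V = {-2: -1.0, -1: -3.5, 0: 2.1}   # virtual: double and single poles
sigma_R = {-2: +1.0, -1: +3.5, 0: 4.4}   # integrated real radiation

def add(a, b):
    return {p: a.get(p, 0.0) + b.get(p, 0.0) for p in set(a) | set(b)}

total = add(sigma_V, sigma_R)
poles = {p: c for p, c in total.items() if p < 0 and abs(c) > 1e-12}
assert not poles          # all 1/eps and 1/eps^2 poles cancel in the sum
finite = total[0]         # the remaining finite contribution
```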
On the one hand there are unitarity based methods determining the coefficients of scalar one-loop integrals by exploiting analyticity of the amplitude; on the other hand there are Feynman diagrammatic techniques starting from tensor integrals, which are then reduced to simpler integrals which can be evaluated in a numerically stable way. Both techniques have led to automated implementations in recent years (\cite{Ellis:2007br,Ossola:2007ax,Berger:2008sj,Giele:2008bc,Lazopoulos:2008ex,Winter:2009kd} and \cite{Hahn:1998yk,Kurihara:2002ne,Belanger:2003sd,Bredenstein:2010rs,Binoth:2008uq} respectively). In Sections~\ref{golem_sec:golem95} and~\ref{golem_sec:golem20} we present the automated implementation of a method using Feynman diagrams for the calculation of~$\sigma_{2\rightarrow N}^{\mathcal V}$, the GOLEM method, based on the reduction scheme proposed in \cite{Binoth:2005ff}. This implementation has been used to compute the NLO virtual corrections to $q\bar{q}\rightarrow b\bar{b}b\bar{b}$, which is one of the two partonic initial states contributing to the $pp\to b\bar{b}b\bar{b}$ process. This process constitutes an important background for Higgs searches in models beyond the Standard Model where Higgs bosons decay predominantly into $b$-quarks, as discussed in~\cite{Lafaye:2000ec,Krolikowski:2008qa}. \section{The Generic Integral Form Factor Library \texttt{golem95}} \label{golem_sec:golem95} The Feynman diagrammatic approach to the computation of one-loop corrections to processes with $\mathsf{N}$ external particles requires the evaluation of tensor integrals of rank $r$ which have the general form \begin{equation} I_{\mathsf{N}}^{n;\mu_1\ldots\mu_r}(a_1,\ldots,a_r;S)= \int\!\frac{\mathrm{d}^nk}{i\pi^{n/2}}\frac{q_{a_1}^{\mu_1}\cdots q_{a_r}^{\mu_r}}{(q_1^2-m_1^2+i\delta)\cdots (q_{\mathsf{N}}^2-m_{\mathsf{N}}^2+i\delta)}, \end{equation} where $q_i=k+r_i$ and $S$ denotes the matrix $S_{ij}=(r_i-r_j)^2-m_i^2-m_j^2$. 
It is well-known~\cite{Melrose:1965kb} that such a tensor integral can be expressed in terms of basis scalar integrals, but the reduction procedure introduces inverse Gram determinants ($\det G$) in the coefficients of the expansions which can lead to numerical instabilities in certain regions of the phase space. Therefore, in~\cite{Binoth:1999sp,Binoth:2005ff} we have proposed a reduction scheme which allows one to write any N-point amplitude as a linear combination of basis integrals ($I_2$, $I_3^n$, $I_3^{n+2}$, $I_4^{n+2}$, $I_4^{n+4}$) with and without Feynman parameters in the numerator, avoiding the introduction of inverse Gram determinants. The evaluation of the basis functions can be performed by reducing them further to scalar integrals using recursion formulae. This further reduction, however, introduces Gram determinants in the coefficients which could lead to numerical instabilities in certain regions of the phase space. Potentially dangerous regions are identified by the criterion $\vert\det G\vert < \Lambda\vert\det S\vert$ for a fixed cut-off~$\Lambda \sim 10^{-5}$. In these regions the basis integrals are evaluated numerically without applying any further reduction. For all integrals without internal masses we have worked out one-dimensional integral representations which can be evaluated by numerical integration. This algorithm has been implemented in the form of a Fortran\,90 library, \texttt{golem95}, for massless internal propagators~($m_i=0$)~\cite{Binoth:2008uq}. This version of the code has been made available for download\footnote{\href{http://lappweb.in2p3.fr/lapth/GOLEM/golem95.html}{\tt http://lappweb.in2p3.fr/lapth/GOLEM/golem95.html}}. We have recently extended the library \texttt{golem95} to the case where internal masses are present. All infrared divergent integrals have been implemented explicitly. 
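The instability criterion can be made concrete with a short sketch (ours; the metric signature, the choice of $r_{\mathsf{N}}$ as the reference momentum, and the helper names are our conventions, not \texttt{golem95}'s):

```python
# Sketch (ours) of the region test |det G| < Lambda * |det S| described above.
def mdot(p, q):
    """Minkowski product, signature (+,-,-,-); p = (E, px, py, pz)."""
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def vsub(p, q):
    return tuple(a - b for a, b in zip(p, q))

def det(M):
    """Determinant by Laplace expansion (fine for the small matrices here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def det_S(r, m):
    """Kinematic matrix S_ij = (r_i - r_j)^2 - m_i^2 - m_j^2."""
    return det([[mdot(vsub(r[i], r[j]), vsub(r[i], r[j])) - m[i]**2 - m[j]**2
                 for j in range(len(r))] for i in range(len(r))])

def det_G(r):
    """Gram determinant of the momentum differences relative to r_N."""
    d = [vsub(ri, r[-1]) for ri in r[:-1]]
    return det([[2.0*mdot(di, dj) for dj in d] for di in d])

def is_dangerous(r, m, cut=1e-5):
    """Phase-space point where further reduction would be numerically unstable."""
    return abs(det_G(r)) < cut*abs(det_S(r, m))
```

For example, collinear momentum differences make $\det G$ vanish while $\det S$ stays finite, so such points are flagged.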
For evaluating the finite boxes and triangles the user needs to link the LoopTools library~\cite{Hahn:1998yk,Hahn:2000kx,vanOldenborgh:1989wn}. This ``massive'' version of the \texttt{golem95} library is currently in the testing phase and will be available shortly. For integrals with internal masses, the option to evaluate the tensor integrals numerically prior to reduction in regions where the Gram determinant tends to zero is not yet supported. However, one-dimensional integral representations valid for all possible kinematic configurations are under~construction. \section{Implementation of a One-Loop Matrix Element Generator} \label{golem_sec:golem20} Building upon \texttt{golem95} as a loop-integral library, our next step was the construction of a matrix-element generator at the one-loop level. The computation is carried out by projecting the amplitude onto helicity and colour structures. The virtual corrections can therefore be expressed as \begin{equation} \mathrm{d}\sigma_{2\rightarrow N}^{\mathcal V}= \frac{1}{n_an_b}\sum_{\{\lambda_i\},j,k}\!\!\! {\mathcal{A}_j^{\mathcal B}(p_a^{\lambda_a},p_b^{\lambda_b}; p_1^{\lambda_1},\ldots,p_N^{\lambda_N})}^\dagger \left\langle c_j\vert c_k\right\rangle {\mathcal{A}_k^{\mathcal V}}(p_a^{\lambda_a},p_b^{\lambda_b}; p_1^{\lambda_1},\ldots,p_N^{\lambda_N}) + h.c. \end{equation} where $p_i^{\lambda_i}$ denotes the pair of momentum $p_i$ and helicity label $\lambda_i$ of the $i$-th particle. The matrix $\left\langle c_j\vert c_k\right\rangle$ consists of the contractions of all colour basis tensors for a given process evaluating to rational functions in the number of colours $N_C$ and the normalisation constant $T_R$ of the generators. The constants $n_a$ and $n_b$ represent the averaging over spin and~colour. The one-loop amplitude $\sum_k \mathcal{A}_k\vert c_k\rangle$ consists of a sum of Feynman diagrams, which we generate using QGraf~\cite{Nogueira:1991ex}. 
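As a hedged illustration of what $\left\langle c_j\vert c_k\right\rangle$ looks like, consider a toy colour-flow basis for a process with two external quark lines (this basis is ours for illustration; the basis actually generated for a given process may differ). Contracting all fundamental indices yields polynomials in $N_C$, here evaluated at $N_C=3$:

```python
import itertools

NC = 3  # number of colours

# Toy colour-flow basis (ours) for two external quark lines:
# c1 = delta_{i1 j1} delta_{i2 j2},  c2 = delta_{i1 j2} delta_{i2 j1}
def c1(i1, j1, i2, j2): return int(i1 == j1 and i2 == j2)
def c2(i1, j1, i2, j2): return int(i1 == j2 and i2 == j1)

basis = (c1, c2)
M = [[sum(ca(*idx) * cb(*idx)
          for idx in itertools.product(range(NC), repeat=4))
      for cb in basis] for ca in basis]
# Expected: <c1|c1> = <c2|c2> = NC^2 and <c1|c2> = <c2|c1> = NC
```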
We also use QGraf, together with \LaTeX{} and Axodraw~\cite{Vermaseren:1994je}, for drawing the diagrams; the layout of the diagrams is determined using the algorithm of~\cite{Ohl:1995kr}. The expressions of the diagrams are then processed using Form~\cite{Vermaseren:2000nd} and the Form library \texttt{spinney} which we have developed for dealing with helicity spinors and $n$-dimensional Dirac and Lorentz algebra efficiently. Majorana spinors can also be dealt with thanks to the implementation of the flipping rules for spin lines as described in~\cite{Denner:1992vza}. At the moment the GOLEM program can import CompHep~\cite{Boos:2004kh} model files to perform Beyond the Standard Model computations. An interface to FeynRules~\cite{Christensen:2008py} is under construction. After the Form program has decomposed the diagram expression into colour structures and the tensor integrals are represented in terms of integral form factors as defined in \texttt{golem95}~\cite{Binoth:2008uq}, the resulting expressions are optimised and translated into Fortran\,90 functions using the code generator \texttt{haggies}~\cite{Reiter:2009ts}. At this step the number of multiplications is minimised by applying a Horner scheme and common subexpression elimination. The generated Fortran\,90 program is linked with \texttt{golem95} for the numerical evaluation of the tensor integral form factors. A future version of the program will support the Binoth Les Houches Accord~\cite{Binoth:2010xt} to facilitate the interfacing to Monte-Carlo~generators. \section{NLO Results for $q\bar{q}\rightarrow b\bar{b}b\bar{b}$ \cite{Binoth:2009rv}} \label{golem_sec:results} The setup described in the previous section has been used to compute the virtual part of the QCD corrections to $q\bar{q}\rightarrow b\bar{b}b\bar{b}$ in the limit $m_b=0$ and $m_t\rightarrow\infty$. 
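The effect of the Horner rearrangement can be illustrated with a minimal sketch (ours, in Python rather than generated Fortran\,90): for a degree-$n$ polynomial the Horner form needs only $n$ multiplications, while the naive monomial sum recomputes powers of $x$:

```python
def eval_naive(coeffs, x):
    """c0 + c1*x + c2*x^2 + ..., computing each power separately."""
    return sum(c * x**i for i, c in enumerate(coeffs))

def eval_horner(coeffs, x):
    """Horner form: (...(cn*x + c_{n-1})*x + ...)*x + c0."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc
```

On top of this, a code generator pulls out common subexpressions so that repeated structures across many diagram expressions are computed only once.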
We have compared the results with an independent implementation using FeynArts and FormCalc~\cite{Hahn:1998yk} to generate and simplify the diagrams, where the tensor integrals are algebraically reduced to scalar integrals using the procedure described in~\cite{Binoth:2005ff}. In order to compute $q\bar{q}\rightarrow b\bar{b}b\bar{b}$ at NLO accuracy the Born level cross-section, the real emission contribution and the infrared subtraction terms also need to be evaluated. Since we are only interested in a process with 4 tagged $b$-jets the relevant process for the real emission contribution is $q\bar{q}\rightarrow b\bar{b}b\bar{b}g$. We have used \texttt{MadGraph/MadEvent}~\cite{Maltoni:2002qb,Alwall:2007st} and \texttt{MadDipole}~\cite{Frederix:2008hu} to evaluate the tree-like contributions and to perform the phase space integration. As an alternative setup, an extended version of the \texttt{Whizard}~\cite{Kilian:2007gr,Moretti:2001zz} Monte Carlo generator with an implementation of infrared subtraction terms has been used to obtain an independent cross-check. For the infrared subtraction we have used Catani-Seymour dipoles~\cite{Catani:1996vz} in both implementations including a phase space slicing parameter following~\cite{Nagy:2005gn}. To define a $b\bar{b}b\bar{b}$ event, we first apply a $k_T$ jet algorithm~\cite{Blazey:2000qt} to decide if the gluon should be merged with a $b$-quark. If the gluon is merged we use the effective momentum $\tilde{p}_b=p_b+p_g$ as the momentum of the $b$-quark in the cuts and in the observables. Then we apply a $p_T$ cut of $p_T(b_j)>30\,GeV$ and a rapidity cut of $\vert\eta\vert<2{.}5$ to all $b$-quarks and a separation cut of $\Delta R>0{.}8$ to all pairs of~$b$-quarks. We sum over $q\in\{u,d,c,s\}$ and use the CTEQ6M parton distribution functions~\cite{Pumplin:2002vw} with two-loop running of $\alpha_s$ for both Leading and Next-to-Leading Order computations. 
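The event definition above can be sketched as follows (a simplified illustration of ours: the merging test just compares $\Delta R$ to a resolution parameter, whereas the analysis uses the full $k_T$ algorithm of \cite{Blazey:2000qt}; all helper names are ours):

```python
from math import atan2, hypot, log, pi, sqrt

def pT(p):                     # p = (E, px, py, pz)
    return hypot(p[1], p[2])

def rap(p):                    # rapidity, used for the |eta| < 2.5 cut
    return 0.5*log((p[0] + p[3])/(p[0] - p[3]))

def delta_R(p, q):
    dphi = abs(atan2(p[2], p[1]) - atan2(q[2], q[1]))
    if dphi > pi:
        dphi = 2*pi - dphi
    return sqrt((rap(p) - rap(q))**2 + dphi**2)

def passes_cuts(bs, gluon=None, D=0.8):
    """Merge the gluon with the nearest b if closer than D, then apply cuts."""
    bs = [list(b) for b in bs]
    if gluon is not None:
        i, r = min(((i, delta_R(b, gluon)) for i, b in enumerate(bs)),
                   key=lambda t: t[1])
        if r < D:
            bs[i] = [a + g for a, g in zip(bs[i], gluon)]   # p_b -> p_b + p_g
    if any(pT(b) <= 30.0 or abs(rap(b)) >= 2.5 for b in bs):
        return False
    return all(delta_R(bs[i], bs[j]) > 0.8
               for i in range(len(bs)) for j in range(i + 1, len(bs)))
```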
The centre of mass energy is set to $\sqrt{s}=14\,TeV$. In our results we use a fixed factorisation scale of $\mu_F=100\,GeV$, while the renormalisation scale is set to~$\mu_0=\sqrt{\sum_j p_T^2(b_j)}$. In Figure~\ref{fig:golem_mbb} we show the invariant mass distribution of the system of the two $b$-quarks with highest $p_T$. The error bands have been obtained by varying the renormalisation scale $\mu_R=x\mu_0$ between $1/4<x<2$. The dashed line marks the leading order distribution for $x=1/2$, which turns out to be very similar to the NLO prediction for this value. The reduction of the uncertainty band due to scale variations clearly shows the importance of the NLO corrections for the precision of this calculation. \begin{figure}[hbtp] \begin{center} \includegraphics[width=0.8\textwidth]{mb1b2_plb} \end{center} \caption{Invariant mass ($m_{bb}$) distribution of the two leading $b$-quarks (see text). The error bands are obtained by varying the renormalisation scale $\mu_R=x\mu_0$ between $1/4<x<2$, where $\mu_0=\sqrt{\sum_j p_T^2(b_j)}$. The dashed line shows the value of the leading order prediction for $x=1/2$.}\label{fig:golem_mbb} \end{figure} \section{Conclusion} We have presented results obtained using the GOLEM method and recent progress in the implementation of an automated one-loop matrix element generator. A first important step towards this goal was the development of a one-loop integral library, \texttt{golem95}, which is currently being extended to the case of massive propagators. As a second step we have added a completely automated framework which generates efficient Fortran\,90 code for the numerically stable evaluation of one-loop matrix elements from a set of Feynman rules. This framework includes the development of new tools such as an optimising code generator (\texttt{haggies}), and a Form library (\texttt{spinney}) for the treatment of helicity spinors and $n$-dimensional Dirac and Lorentz algebra. 
Finally, we have presented the complete NLO result for the process $q\bar{q}\rightarrow b\bar{b}b\bar{b}$, which is a subprocess of $pp\rightarrow b\bar{b}b\bar{b}$, an important background to Higgs searches beyond the Standard Model. In the near future we plan to implement the interface to Monte-Carlo tools described in~\cite{Binoth:2010xt} and to make all parts of the program publicly available.
\section{Introduction} In the present paper we work in a certain geometrical model of dynamics which can be constructed due to some variational principles of mechanics that allow one to construct an isomorphism between the dynamics and its model. Recently a geometric description of chaos in Hamiltonian systems of cosmological origin has been formulated using the tools of Riemannian (or pseudo-Riemannian) geometry \cite{Pettini:1993,Casetti:2000}. We concentrate on the approach to dynamics of the systems with a natural Lagrangian function $\mathcal{L} = \frac{1}{2} g_{\alpha \beta} \dot{q}^{\alpha} \dot{q}^{\beta} - V(q)$. On the level of constant energy $E$ this system can be reduced to the geodesic flow on a pseudo-Riemannian manifold with the Jacobi metric $\hat{g}_{\alpha \beta}=2(E-V)g_{\alpha \beta}$ \cite{Szydlowski:1996a,Szydlowski:1993a,Szydlowski:1993b,Szydlowski:1990}. The conformal metric becomes degenerate for certain values of energy $E$, at some points of configuration space $\{E=V\}$ for classical systems as well as for general relativistic ones. As a consequence one has the complication of matching together geodesics well defined on open sets across a singular surface and the divergence of curvature invariants characterizing the property of sensitive dependence on initial conditions. One can avoid this crucial problem addressed in the context of Mixmaster models \cite{Burd:1993,Motter:2001} by formulating the corresponding dynamical system as a system in the Finsler \cite{Cipriani:1998} or Eisenhart metric \cite{Rama:2001}. The fact that the Jacobi geometry is singular suggests that this model of dynamics\footnote{For interesting applications of the Jacobi metric in the context of the Schr{\"o}dinger equation see \cite{Faraoni:2002}} is one of the worst possible choices when it comes to achieving a characterization of the property of sensitive dependence on initial conditions in terms of curvature invariants or when it comes to achieving global results. 
It will be demonstrated that this is not true for the example of the behavior of geodesics in the Jacobi metric for FRW cosmological models with conformally coupled scalar field. Of course, because of the singular nature of the Jacobi geometry, the geodesics change from time-like to space-like (sometimes several or an infinite number of times) during their evolution, but in order to ``piece together the results'', one is forced to try to match geodesics (which represent periodic orbits of the original dynamics) each time the solution passes a singular point. This leads to the full classification of periodic orbits in terms of ``just piecing together'' segments of geodesics. It is an example of the fact that some global theorems about geodesics can be obtained. Using this classification numerous methods based on symbolic dynamics and detection of the existence of unstable periodic orbits are proposed toward an invariant characterization of dynamical complexity in cosmology \cite{Motter:2003}. The organization of the paper is the following. In section~\ref{sec:2} the cosmological FRW model with conformally coupled scalar field is presented as a Hamiltonian dynamical system. This system belongs to a larger class of simple indefinite dynamical systems for which the kinetic energy form is quadratic in the momenta and has Lorentzian signature (is indefinite). In section~\ref{sec:3} we consider Poincar{\'e} sections of the trajectories, with the surface formed by the boundary set of the domain classically admissible for motion, as an indicator of complex behavior. Note that the trajectories have the property of recurrence due to multiple intersections of the singular set. In this section the full classification of simple periodic orbits is performed and the largest Lyapunov exponent is numerically calculated. Section~\ref{sec:3} also contains the analysis of distributions of intersection points on the singular set. 
The existence of a weak noise observed in the Fourier analysis of intersections seems to be a complementary indicator of chaotic behavior. \section{FRW models with scalar fields as simple indefinite dynamical systems} \label{sec:2} The dynamical systems of cosmological origin have many special features which distinguish them from those met in classical mechanics. It is in fact the origin of some problems and controversy in the understanding of chaos in cosmology \cite{Motter:2002,Castagnino:2001}. In this section we shall study the dynamical complexity of simple inflationary models of the Universe, regarded as Hamiltonian dynamical systems. They appeared, for example, in Linde's chaotic inflation scenario, where inflation is driven by the vacuum energy of a single slowly rolling inflaton field \cite{Linde:1983}. The dynamics of cosmological models, allowing for an inflaton field, has been studied by many authors \cite{Belinsky:1955,Belinsky:1985}. The idea of inflation, which was introduced to solve some of the underlying problems of standard big-bang cosmology, is strictly connected with the existence of a scalar field which generates the period of an accelerated expansion of the Universe. In this case its energy density becomes dominated by the potential energy $V(\phi)$ of the scalar field $\phi$ (the inflaton). Although the dynamics of inflation depends on the specifics of the models, the basic mechanism lies in the equation of motion which for a homogeneous scalar field ($\phi=\phi(t)$) takes the form \begin{equation} \ddot{\phi} + 3 H \dot{\phi} + V'(\phi) + \frac{R}{6} \phi = 0 , \label{eq:1} \end{equation} where an overdot represents a derivative with respect to the cosmological time $t$, $V'=\mathrm{d} V/\mathrm{d}\phi$, and $R$ is the Ricci scalar; the last term vanishes for minimally coupled scalar fields (in general the last term is $\xi R \phi$, where $\xi=1/6$ for the case of conformally coupled scalar fields). 
Here we deal with a single homogeneous and conformally coupled scalar field $\phi$ with potential $V(\phi)$ on the FRW background. The same system was previously considered in terms of the original dynamical systems without using the conformal Jacobi metric by many authors in the context of inflation and chaos \cite{Blanco:1995,Calzetta:1993,Cornish:1996}. Our goal in this paper is to point out different manifestations of the complex dynamics in terms of geodesics in the Jacobi metric. The motivation for such a study is to obtain an additional physical insight using the conformal metric and the behavior of periodic orbits. Our cosmological model assumes the spatially homogeneous FRW geometry, that is, the line element is of the form \begin{equation} \mathrm{d} s^{2} = a^{2}(\eta) \{ - \mathrm{d}\eta^{2} + \mathrm{d}\chi^{2} +f^{2}(\chi)(\mathrm{d}\theta^{2} + \sin^{2}{\theta} \mathrm{d}\varphi^2)\}, \label{eq:2} \end{equation} where $0 \le \varphi \le 2\pi$, $0 \le \theta \le \pi$, $0 \le \chi \le \pi$ and $\eta$ is the conformal time $\mathrm{d} t/a=\mathrm{d}\eta$; $a$ -- the scale factor; \begin{equation} f(\chi) = \left \{ \begin{array}{lll} \sin{\chi}, & 0 \le \chi \le \pi & k=+1 \\ \chi, & 0 \le \chi \le \infty & k=0 \\ \sinh{\chi}, & 0 \le \chi \le \infty & k=-1 \end{array} \right. \nonumber \end{equation} $k$ is the curvature index. The gravitational dynamics is derived from the Einstein-Hilbert action \begin{equation} S_{g} = \frac{1}{2} \int \mathrm{d}^{4} x \sqrt{-g}(R - 2 \Lambda), \label{eq:3} \end{equation} where $g$ is the determinant of the metric; $\sqrt{-g} = a^{4} f^{2}(\chi) \sin{\theta}$ and $R$ is the curvature scalar for the metric (\ref{eq:2}) given by \begin{equation} R = 6 \bigg\{ \frac{\ddot{a}}{a^{3}} + \frac{k}{a^2} \bigg\}, \label{eq:4} \end{equation} where a dot denotes differentiation with respect to $\eta$. 
We use the Misner-Thorne-Wheeler (MTW) convention \cite{Misner:1972} as well as a natural system of units, such that $\hbar = c = 8 \pi G = 1$ and $(2 \pi)^{2} = 1$. In so far as Robertson-Walker symmetry holds, the scalar field should be homogeneous $\phi=\phi(t)$. The action for a conformally coupled (real) scalar field is given by \begin{equation} S_{\phi} = -\frac{1}{2} \int \mathrm{d}^{4}x \sqrt{-g} \big\{ \partial_{\mu}\phi \partial^{\mu} \phi + \frac{1}{6} R \phi^{2} - 2 V(\phi) \big\} \label{eq:5} \end{equation} where $V(\phi) = \frac{1}{2} m^2 \phi^{2} + \frac{\lambda}{4} \phi^{4}$ is the assumed form of the potential for this scalar field. After integration over the spatial variables ($\int \mathrm{d}^{3} x = 2\pi^{2}$ is the conformal volume of a spatial hypersurface of constant curvature) and discarding total derivatives in the full action we obtain a dynamical system with two degrees of freedom: $a$ and a rescaled scalar field $\psi : \phi \to \psi = \sqrt{1/6} a \phi$ with Hamiltonian \begin{equation} \mathcal{H} = \frac{1}{2} \{ -(p_{a}^{2} +k a^{2}) + (p_{\psi}^{2} + k \psi^{2}) + m^{2} a^{2} \psi^{2} + \lambda \psi^{4} + \Lambda a^{4} \}, \label{eq:6} \end{equation} where $m$ is the mass of the scalar field and $\Lambda$ is a constant proportional, with the factor $1/3$, to the cosmological constant. The evolution of the system should be considered on the $\mathcal{H}=0$ energy level for vacuum cosmology or on the $\mathcal{H}= -\rho_{r,0}$ energy surface if we add a radiation component to the energy-momentum tensor whose energy density scales like $\rho_{r}=\rho_{r,0} a^{-4}$, where $\rho_{r,0}=$ const \cite{Joras:2003}. Let us note that system (\ref{eq:6}) belongs to a larger class of dynamical systems which we call simple indefinite mechanical systems. 
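For concreteness, the flow generated by the Hamiltonian (\ref{eq:6}) can be integrated numerically. The sketch below (ours; $k=+1$, illustrative parameter values, and initial data with $p_a$ fixed by the constraint $\mathcal{H}=0$) checks that the constraint is preserved along the trajectory:

```python
from math import sqrt

# Sketch (ours): Hamilton's equations for (6) with k = +1 and illustrative
# parameters m^2 = 1, lambda = Lambda = 0; p_a is fixed by H = 0.
m2, lam, Lam = 1.0, 0.0, 0.0

def H(s):
    a, psi, pa, ppsi = s
    return 0.5*(-(pa**2 + a**2) + ppsi**2 + psi**2
                + m2*a**2*psi**2 + lam*psi**4 + Lam*a**4)

def rhs(s):
    a, psi, pa, ppsi = s
    return (-pa,                                    # da/d(eta)     =  dH/dp_a
            ppsi,                                   # dpsi/d(eta)   =  dH/dp_psi
            a - m2*a*psi**2 - 2.0*Lam*a**3,         # dp_a/d(eta)   = -dH/da
            -(psi + m2*a**2*psi + 2.0*lam*psi**3))  # dp_psi/d(eta) = -dH/dpsi

def rk4_step(s, h):
    k1 = rhs(s)
    k2 = rhs(tuple(x + 0.5*h*k for x, k in zip(s, k1)))
    k3 = rhs(tuple(x + 0.5*h*k for x, k in zip(s, k2)))
    k4 = rhs(tuple(x + h*k for x, k in zip(s, k3)))
    return tuple(x + h*(a1 + 2*a2 + 2*a3 + a4)/6.0
                 for x, a1, a2, a3, a4 in zip(s, k1, k2, k3, k4))

a0, psi0, ppsi0 = 0.0, 0.5, 0.5
pa0 = -sqrt(ppsi0**2 + psi0**2 + m2*a0**2*psi0**2
            + lam*psi0**4 + Lam*a0**4 - a0**2)      # from H = 0
state = (a0, psi0, pa0, ppsi0)
for _ in range(1000):
    state = rk4_step(state, 0.01)
assert abs(H(state)) < 1e-6   # the constraint H = 0 is preserved
```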
By simple indefinite mechanical system we understand the triple $(\mathcal{M}, g, V)$, where $\mathcal{M}$ is the configuration space carrying a metric $g$ which defines the indefinite kinetic energy form $\mathcal{K}=\frac{1}{2}g(\mathbf{u},\mathbf{u})$, $\mathbf{u} \in T_{x}\mathcal{M}$, $x \in \mathcal{M}$. $V$ is the potential function $V : \mathcal{M} \to \mathcal{R}$ which is $C^{\infty}$, and $g$ has the Lorentz signature $(-,+,+,+)$. The ``simple'' in the above context means that dynamical system has the natural form of Lagrange function $\mathcal{L} = \frac{1}{2} g_{\alpha \beta} \dot{q}^{\alpha} \dot{q}^{\beta} - V(q)$, where $\alpha , \beta = 1,\ldots,N$ and $q$ and $\dot{q}$ are generalized coordinates and velocities respectively. The Hamilton function for such a system is of the form \begin{equation} \mathcal{H}(p,q) = \frac{1}{2} g^{\alpha \beta} p_{\alpha} p_{\beta} + V(q), \qquad p_{\alpha} = g_{\alpha \beta} \dot{q}^{\beta}. \label{eq:7} \end{equation} For the system under consideration $(q^{1},q^{2})=(a,\psi)$. Because of our general relativity and cosmology application $\mathcal{H}=E=$ const $\iff g_{\alpha \beta} \dot{q}^{\alpha} \dot{q}^{\beta} = 2(E-V(q))$. Therefore trajectories of the system in the tangent space of $\mathcal{R}^{2N}$ with coordinates $(q^{\alpha},\dot{q}^{\alpha})$ are situated in the domain described by $\Omega = \{(q^{\alpha},\dot{q}^{\alpha}) \in \mathcal{R}^{2N} : g_{\alpha \beta} \dot{q}^{\alpha} \dot{q}^{\beta} = \| \mathbf{u} \|^{2} = 2(E-V(q)) \}$. In the tangent space $T_{q}(\mathcal{R}^{N})$ it is natural to distinguish three classes of vectors, namely, a vector $\mathbf{u}$ is time-like if $\|\mathbf{u}\|^{2} < 0$, space-like if $\|\mathbf{u}\|^{2} > 0$ and null if $\|\mathbf{u}\|^{2}=0$. 
In the configuration space we distinguish three subsets \begin{equation} \begin{array}{l} \mathcal{D}_{S} = \{q\in\mathcal{R}^{N} : E-V(q) < 0 \}, \\ \mathcal{D}_{T} = \{q\in\mathcal{R}^{N} : E-V(q) > 0 \}, \\ \partial\mathcal{D} = \{q\in\mathcal{R}^{N} : E-V(q) = 0 \}. \end{array} \label{eq:8} \end{equation} Note that the set $\partial\mathcal{D}$ is a boundary set because in its neighborhood we can always find points of $\mathcal{D}_{S}$ and $\mathcal{D}_{T}$ \begin{equation} \partial\mathcal{D}_{S} = \partial\mathcal{D}_{T} = \partial\mathcal{D}. \label{eq:9} \end{equation} In the three distinguished domains the character of the vector tangent to a trajectory is determined by the Hamiltonian constraint $\mathcal{H}=E$. Therefore if a trajectory changes the domain, say $\mathcal{D}_{S}$ to $\mathcal{D}_{T}$, it crosses $\partial\mathcal{D}$ and the tangent vector to that trajectory at the point $q \in \partial\mathcal{D}$ is situated on the cone determined by the kinetic form: $g_{\alpha \beta} \dot{q}^{\alpha} \dot{q}^{\beta} = 0$. In Ref.~\cite{Burd:1993} one can find the proof that in the case of the B(IX) model the trajectory crosses the boundary set $V=0$. During each of the oscillations that occur close to the boundary set the solution is instantaneously Kasner, and therefore $V=0$ is crossed an infinite number of times. The physical trajectories of the simple indefinite systems are geodesics on a pseudo-Riemannian manifold without the boundary (on which the metric is degenerate) if we define the metric \begin{equation} \hat{g}_{\alpha \beta} = 2 | E-V | g_{\alpha \beta} \nonumber \end{equation} and reparameterize the time variable $\eta \to s$ \cite{Szydlowski:1996b}: \begin{equation} \frac{\mathrm{d} s}{\mathrm{d} \eta} = 2 | E-V |. \nonumber \end{equation} In our further considerations, we will demonstrate the advantages of analyzing the behavior of trajectories in a neighborhood of the boundary set $\partial\mathcal{D}$ of the region classically admissible for motion. 
It is also a surface of degeneration of the Jacobi metric. This surface and points of its intersections with the trajectories of the system contain interesting information about complex behavior. For simplicity of presentation of this idea we assume $k=+1$, i.e. a vacuum and closed FRW model is considered. The domain (classically) admissible for motion, as well as the boundary set, are shown in Fig.~\ref{fig:1} ($q^{1}=a$, $q^{2}=\psi$). \begin{figure}[t] \begin{center} \includegraphics[scale=0.65]{fig1.eps} \end{center} \caption{The domain admissible for motion of the FRW closed system with a conformally coupled scalar field. In the shaded area trajectories are locally unstable, i.e. $K g(\mathbf{u},\mathbf{u}) < 0$, where $K$ is the Gauss curvature for the Jacobi metric.} \label{fig:1} \end{figure} \section{Different evidence of chaotic behavior} \label{sec:3} \subsection{Poincar{\'e} sections} It is clear that if a trajectory passes through the boundary set $\partial\mathcal{D}$, then the tangent vector to the trajectory is null, i.e. it lies on the cone $g_{\alpha \beta} \dot{q}^{\alpha} \dot{q}^{\beta}=0$. The physical trajectories of the indefinite mechanical systems for a given total energy $E$ (zero for vacuum cosmology) are geodesics if we choose the metric in the form $\hat{g}_{\alpha \beta} = 2(E - V) g_{\alpha \beta}$ in both domains $\mathcal{D}_{S}$ and $\mathcal{D}_{T}$. On the boundary $E=V$ the metric is degenerate, which is a source of obstacles if we define the property of sensitive dependence on initial conditions in terms of curvature invariants (the Gauss curvature in our case). 
In both open regions $\mathcal{D}_{S}$ and $\mathcal{D}_{T}$ the Euler-Lagrange equations for the Lagrangian $\mathcal{L}= \frac{1}{2} g_{\alpha \beta} \dot{q}^{\alpha} \dot{q}^{\beta} - V(q)$, ( $\dot{ }=\frac{\mathrm{d}}{\mathrm{d} t}$) assume the form of the geodesic equation after reparameterization of the time variable \cite{Szydlowski:1996b} \begin{equation} t \to s : \frac{\mathrm{d} s}{\mathrm{d} t} = 2 | E-V | \label{eq:17} \end{equation} where $s$ is the natural parameter defined along the geodesics. The criterion of local instability of the geodesic flow can be formulated as $K g(\mathbf{u},\mathbf{u}) < 0$, where $\mathbf{u}$ is the vector tangent to the trajectory and $K$ is the Gauss curvature for the Jacobi metric. This domain is represented by the shaded regions in Fig. \ref{fig:1}. Owing to the representation of the dynamics as a geodesic flow, one can imagine a fictitious freely falling particle in both domains $\mathcal{D}_{S}$ and $\mathcal{D}_{T}$ meeting the singularity (infinite curvature $K$) at the boundary set $\partial\mathcal{D}$, which plays the role of a scattering surface. Figs.~\ref{fig:2}--\ref{fig:4} show three trajectories in the configuration space for three different initial conditions, together with the Lyapunov exponents in time, calculated in the standard way. Note that in all cases the principal Lyapunov exponent goes to zero as $\eta \to \infty$, which corresponds to the singularity in conformal time. This suggests that the system under consideration does not have the property of sensitive dependence on initial conditions, which is characteristic rather of integrable systems. Another property which distinguishes the system from other systems of classical mechanics with a conception of absolute time is that it does not possess the property of topological transitivity, which is a crucial point for the understanding of the conception of classical chaos \cite{Wiggins:1990}. 
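The ``standard way'' of computing the principal Lyapunov exponent can be sketched as a two-trajectory (Benettin-type) estimate. The equations of motion below are our reconstruction of the simplest case ($k=+1$, $\Lambda=\lambda=0$, $m=1$), assuming the potential $V(a,\phi)=\frac{1}{2}\left(\phi^{2}(1+a^{2})-a^{2}\right)$ inferred from the boundary curve $\phi=\pm a/\sqrt{1+a^{2}}$ quoted below; they are illustrative, not taken verbatim from the text:

```python
import math

# Conformal-time equations of motion for the state s = (a, phi, a', phi'),
# assuming V(a, phi) = 0.5*(phi^2*(1 + a^2) - a^2)  (our reconstruction).
def rhs(s):
    a, phi, da, dphi = s
    return (da, dphi, -a * (1.0 - phi * phi), -phi * (1.0 + a * a))

def rk4(s, h):
    # One classical fourth-order Runge-Kutta step of size h.
    k1 = rhs(s)
    k2 = rhs(tuple(x + 0.5 * h * k for x, k in zip(s, k1)))
    k3 = rhs(tuple(x + 0.5 * h * k for x, k in zip(s, k2)))
    k4 = rhs(tuple(x + h * k for x, k in zip(s, k3)))
    return tuple(x + h / 6.0 * (p + 2.0 * q + 2.0 * r + w)
                 for x, p, q, r, w in zip(s, k1, k2, k3, k4))

def principal_lyapunov(s0, d0=1e-8, h=1e-2, n_renorm=200, steps=25):
    """Benettin-type estimate: follow a fiducial and a perturbed trajectory,
    renormalizing their separation back to d0 and averaging the log-growth."""
    s = s0
    t = (s0[0] + d0,) + s0[1:]          # perturb the scale factor a
    total = 0.0
    for _ in range(n_renorm):
        for _ in range(steps):
            s, t = rk4(s, h), rk4(t, h)
        d = math.sqrt(sum((x - y) ** 2 for x, y in zip(s, t)))
        total += math.log(d / d0)
        # rescale the separation vector back to length d0
        t = tuple(x + d0 * (y - x) / d for x, y in zip(s, t))
    return total / (n_renorm * steps * h)

# Initial conditions of Fig. 2 (a'(0) fixed by the Hamiltonian constraint).
s0 = (0.0, 0.5, math.sqrt(0.5), 0.5)
print(principal_lyapunov(s0))
```

A near-zero result for long integration times would be consistent with the behavior of the principal exponent shown in Figs.~2--4.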
\begin{figure} \begin{center} \includegraphics[scale = 0.45]{fig2a.eps} \includegraphics[scale = 0.45]{fig2b.eps} \caption{The trajectory in the configuration space for initial conditions $a_{0}=0$, $\phi_{0}=\dot{\phi}_{0}=0.5$ ($\dot{a}_{0}$ calculated from the Hamiltonian constraint) and the principal Lyapunov exponent, which tends to zero, which may suggest that the system is regular.} \label{fig:2} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale = 0.45]{fig3a.eps} \includegraphics[scale = 0.45]{fig3b.eps} \caption{The trajectory in the configuration space for initial conditions $a_{0}=0$, $\phi_{0}=\dot{\phi}_{0}=0.55$ ($\dot{a}_{0}$ calculated from the Hamiltonian constraint) and the principal Lyapunov exponent.} \label{fig:3} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale = 0.45]{fig4a.eps} \includegraphics[scale = 0.45]{fig4b.eps} \caption{The trajectory in the configuration space for initial conditions $a_{0}=0$, $\phi_{0}=\dot{\phi}_{0}=0.65$ ($\dot{a}_{0}$ calculated from the Hamiltonian constraint) and the principal Lyapunov exponent.} \label{fig:4} \end{center} \end{figure} It is useful to choose the degeneration line $V(a,\phi)=0$ as a Poincar{\'e} surface. Let us concentrate on the simplest case of $k=+1$ and $\Lambda=\lambda=0$. Then $\phi=\pm a/\sqrt{1+a^{2}}$ is the algebraic equation of the boundary set. In Fig. \ref{fig:5} one can see the Poincar{\'e} section on the surface $V(a,\phi)=0$, in the plane $(a,\dot{\phi})$, for different initial conditions. \begin{figure} \begin{center} \includegraphics[scale = 1]{fig5.eps} \caption{Poincar{\'e} section on the surface $V(a,\phi)=0$ with $\dot{a} < 0$, $\dot{\phi} > 0$.} \label{fig:5} \end{center} \end{figure} While in Fig. \ref{fig:5} most trajectories appear regular, there are, of course, chaotic trajectories. Details of their concentration in the neighborhood of saddle points, where they form stochastic layers, are illustrated in Fig. \ref{fig:7}. 
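The construction of such a section can be sketched numerically. The explicit potential below, $V(a,\phi)=\frac{1}{2}\left(\phi^{2}(1+a^{2})-a^{2}\right)$, is our reconstruction from the boundary curve $\phi=\pm a/\sqrt{1+a^{2}}$ (an assumption, with $m=1$ and $\Lambda=\lambda=0$); the code integrates the conformal-time equations of motion and records crossings of $V=0$ with $\dot{a}<0$, $\dot{\phi}>0$:

```python
import math

# Assumed explicit potential (reconstructed from the boundary curve
# phi = ±a/sqrt(1+a^2) for k = +1, Lambda = lambda = 0, m = 1):
def V(a, phi):
    return 0.5 * (phi * phi * (1.0 + a * a) - a * a)

def rhs(s):
    # s = (a, phi, a', phi') in conformal time
    a, phi, da, dphi = s
    return (da, dphi, -a * (1.0 - phi * phi), -phi * (1.0 + a * a))

def rk4(s, h):
    k1 = rhs(s)
    k2 = rhs(tuple(x + 0.5 * h * k for x, k in zip(s, k1)))
    k3 = rhs(tuple(x + 0.5 * h * k for x, k in zip(s, k2)))
    k4 = rhs(tuple(x + h * k for x, k in zip(s, k3)))
    return tuple(x + h / 6.0 * (p + 2.0 * q + 2.0 * r + w)
                 for x, p, q, r, w in zip(s, k1, k2, k3, k4))

def section_points(s, h=1e-3, eta_max=50.0):
    """Collect (a, phi') each time the trajectory crosses V = 0
    with a' < 0 and phi' > 0, i.e. the section of Fig. 5."""
    pts, v_old = [], V(s[0], s[1])
    for _ in range(int(eta_max / h)):
        s_new = rk4(s, h)
        v_new = V(s_new[0], s_new[1])
        if (v_old < 0.0 <= v_new) or (v_new < 0.0 <= v_old):
            if s_new[2] < 0.0 and s_new[3] > 0.0:
                pts.append((s_new[0], s_new[3]))
        s, v_old = s_new, v_new
    return pts

# a'(0) fixed by the Hamiltonian constraint  -a'^2 + phi'^2 + 2V = 0.
a0, phi0, dphi0 = 0.0, 0.5, 0.5
da0 = math.sqrt(dphi0 * dphi0 + 2.0 * V(a0, phi0))
print(len(section_points((a0, phi0, da0, dphi0))))
```

The crossing test here uses only a sign change of $V$ between steps; for publication-quality sections each crossing would be refined, e.g. by bisection.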
\begin{figure} \begin{center} \includegraphics[scale = 1]{fig7a.eps} \includegraphics[scale = 1]{fig7b.eps} \caption{Details of the Poincar{\'e} section from Fig. \ref{fig:5}.} \label{fig:7} \end{center} \end{figure} \subsection{Symbolic dynamics in detection of dynamical complexity} Hadamard \cite{Hadamard:1898} was the first to use methods of trajectory coding in investigations of geodesics on compact spaces of negative curvature, methods which now belong to the field of symbolic dynamics. The significance of this method in the context of closed cosmology with a scalar field was pointed out by Kamenshchik \textit{et~al.} \cite{Kamenshchik:1999}. The crucial feature of this model is the existence of points of maximal expansion ($\dot{a}=0$, $\ddot{a}<0$) and sometimes points of minimal contraction ($\dot{a}=0$, $\ddot{a}>0$), or ``bounces''. It is then possible to classify all trajectories using the localization of their points of maximal expansion and to calculate the topological entropy, which measures the growth of the number of periodic orbits as their period increases. Hence one can quantify the length of an orbit by the number of symbols: $A$ -- a bounce of the trajectory ($\dot{a}=0$, $\ddot{a}>0$), $B$ -- a crossing of the line $\phi=0$. Note that $a \ge 0$ for physical reasons; the extension of trajectories to the $a<0$ domain means extending the solutions beyond the big crunch, which is only mathematically admissible \cite{Motter:2002}. For the model under consideration, two different coding procedures were used. In the first method we count all the intersections of trajectories with the axis $\{a=0\}$. For $\phi>0$ we put the symbol $(1)$, while in the opposite case, $\phi<0$, we put the symbol $(0)$. Another method of coding trajectories rests on the analysis of the intersection points with the boundary set $\partial\mathcal{D}$, defined as the surface $V(a,\phi)=0$ in the configuration space $(\phi,a)$. 
One can quantify the length of the orbit by two symbols: $(1)$ -- if $\phi>0$ at the intersection point, and $(0)$ in the opposite case, $\phi<0$. In both approaches mentioned above the trajectories are represented by a sequence of zeros and ones. The next step in our analysis is the division of all coded trajectories into blocks in the simplest way. The blocks consist of two letters, $(0 0)$, $(0 1)$, $(1 0)$ and $(1 1)$, and the blocks $(0 1)$ and $(1 0)$ are treated as identical. Then, after counting the different blocks, one can calculate the Shannon information entropy according to the rule \begin{equation} H_{S} = \sum_{i=1}^{r} p_{i} \ln{\frac{1}{p_{i}}}, \label{eq:20} \end{equation} where $p_{i}$ $(i=1,\dots,r)$ is the probability that a given block will appear in the trajectory coding. The Shannon information entropy characterizes the degree of uncertainty of the appearance of a given result. Following definition (\ref{eq:20}), $H_{S}$ is a number which belongs to the interval $[0, \ln{r}]$. If all the $p_{i}$ assume the same value, equal to $1/r$, then from definition (\ref{eq:20}) we obtain $H_{S}=\ln{r}$, which is the maximal value of $H_{S}$. From Fig. \ref{fig:8} one can observe that $H_{S}$ is a growing function of the initial condition $\phi_{0}$, approaching the limit which corresponds to purely chaotic behavior. \begin{figure} \begin{center} \includegraphics[scale=0.75]{fig8.eps} \caption{Shannon information entropy as a function of $\phi_{0}$ for two different coding methods. The horizontal line $H_{S}=\ln{3}$ denotes the limit for a purely random process.} \label{fig:8} \end{center} \end{figure} \subsection{Distribution of intersection points on the boundary set $\partial\mathcal{D}$ as a measure of complexity of dynamics} It is interesting to check how the intersection points are distributed on the boundary line described by the function $\phi=a/\sqrt{1+a^{2}}$. 
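As a concrete illustration of Eq. (\ref{eq:20}), the sketch below computes $H_S$ from a coded trajectory given as a 0/1 sequence; the partition into non-overlapping two-letter blocks is our reading of the procedure, with the mixed blocks $(0 1)$ and $(1 0)$ identified:

```python
import math
from collections import Counter

def shannon_block_entropy(symbols):
    """H_S = sum_i p_i ln(1/p_i) over two-letter blocks, Eq. (20);
    the mixed blocks (0 1) and (1 0) are counted as identical.
    Non-overlapping blocks are assumed here."""
    blocks = [tuple(sorted(symbols[i:i + 2]))
              for i in range(0, len(symbols) - 1, 2)]
    counts = Counter(blocks)
    n = sum(counts.values())
    return sum((c / n) * math.log(n / c) for c in counts.values())

# A strictly periodic coding has zero entropy,
print(shannon_block_entropy([0, 1] * 50))  # -> 0.0
# while equal frequencies of the r = 3 distinct block types give the
# "purely random" limit H_S = ln 3 marked in Fig. 8.
print(shannon_block_entropy([0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1]))  # -> 1.0986... (ln 3)
```

A strictly periodic coding thus yields $H_S=0$, while equal frequencies of the $r=3$ distinct block types give the purely-random limit $H_S=\ln 3$.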
Let $L$ be the length along that line calculated from the origin to the intersection point, and let $N$ denote the number of such intersections in the interval $L \pm \Delta L$. Of course, $N$ can be normalized to $P$ and treated as the probability of finding a fictitious particle moving along a geodesic at a given point of $\partial\mathcal{D}$. The probability $P$ as a function of $L$ (normalized to unity) is shown in Fig. \ref{fig:9}. For a deeper analysis of the distribution of the points a Fourier analysis was performed, and the results of this analysis are shown in Fig. \ref{fig:10}. The existence of weak noise in the power spectrum can indicate a chaotic distribution of the intersection points. \begin{figure} \begin{center} \includegraphics[scale=0.45]{fig9a.eps} \includegraphics[scale=0.45]{fig9b.eps} \includegraphics[scale=0.45]{fig9c.eps} \includegraphics[scale=0.45]{fig9d.eps} \caption{Intersection points on the boundary set $\partial\mathcal{D}$. Points of intersection of the trajectory with the set $\partial\mathcal{D}$ as a function of time (left) and the probability of finding a particle-universe on a given subset of the set $\partial\mathcal{D}$ during the evolution (right).} \label{fig:9} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.45]{fig10a.eps} \includegraphics[scale=0.45]{fig10b.eps} \caption{Fourier analysis of the distributions from Fig. \ref{fig:9}. The existence of weak noise in the power spectrum can indicate a chaotic distribution of the intersection points.} \label{fig:10} \end{center} \end{figure} \subsection{The existence of unstable periodic orbits as an indicator of complexity of dynamics} For the classification of the periodic orbits it is useful to consider some reflection symmetries of the system. Of course, the system possesses the reflection symmetries $t \to -t$ and $q^{i} \to -q^{i}$ $(q^{i}=a,\phi)$. Therefore it is sufficient to investigate the motion in one quarter, say $a>0$ and $\phi>0$, to reconstruct the motion in the remaining quarters. 
Due to the symmetries mentioned above, the simplest periodic orbits in the configuration space can be grouped into the following five classes \begin{itemize} \item{trajectories of type `I a' starting from the boundary line $V(a_{0},\phi_{0})=0$ with initial conditions $(\dot{a}_{0},\dot{\phi}_{0}) = (0,0)$, for which after $1/4$ of the period the inflection point of $\phi$ at $a=0$ is reached with $(\dot{a},\dot{\phi})=(-\phi,0)$ or $(+\phi,0)$; see Fig. \ref{fig:11}} \item{trajectories of type `I b' also starting from the boundary set and reaching the singular point $(a,\phi)=(0,0)$, for which $\dot{a}=\dot{\phi}$ or $\dot{a}=-\dot{\phi}$ after $1/4$ of the full period; see Fig. \ref{fig:12}} \item{trajectories of type `II a' starting from a point of the configuration space $(a_{0},0)$ with $(\dot{a}_{0},\dot{\phi}_{0})=(0,a_{0})$ and arriving at the inflection point of the $\phi(a)$ diagram at $a=0$ with $(\dot{a},\dot{\phi})=(-\phi,0)$ or $(+\phi,0)$ after $1/4$ of the full period; see Fig. \ref{fig:13}} \item{trajectories of type `II b' starting from the point $(a_{0},0)$, $(\dot{a}_{0},\dot{\phi}_{0})=(0,a_{0})$, and reaching the singular point $(a,\phi)=(0,0)$ with $\dot{a}=\dot{\phi}$ or $\dot{a}=-\dot{\phi}$ after $1/4$ of the full period; see Fig. \ref{fig:14}} \item{trajectories of type `III' (which are the union of all previously mentioned cases) starting from the boundary set $V(a_{0},\phi_{0})=0$ with $(\dot{a}_{0},\dot{\phi}_{0})=(0,0)$; after $1/4$ of the full period they reach an inflection point of the scale factor $a$ at $\phi=0$, with $(\dot{a},\dot{\phi})=(0,\pm a)$, and after $1/2$ of the period they reach the point symmetrical to the initial condition with respect to the $\phi$--axis; see Fig. \ref{fig:15}} \end{itemize} Of course, non-symmetric periodic orbits are also present (see Fig. \ref{fig:16}), but they are not the subject of our considerations. 
In Table \ref{tab:1} we list the periods of typical periodic orbits, which we obtained using a modified multiple shooting method \cite{Reithmeier:1991}. To illustrate the property of sensitive dependence on initial conditions we investigate the evolution of the separation vector in the phase space in terms of a ``Lyapunov-like'' principal exponent. The principal Lyapunov exponent, as well as the distance of the separation vector for an initial separation $\Delta a = 10^{-12}$, are illustrated in Fig. \ref{fig:17}. One can observe the existence of unstable periodic orbits in the model, which should be treated as strong evidence of chaos \cite{Cvitanovic:2002}. \begin{table}[t] \begin{center} \begin{tabular}{|c|c|c|} \hline Type & $q^{1}_{0}=a_{0}$ & Period \\ \hline I a & 1.90228463575087 & 8.21179541505428 \\ & 4.07891339562106 & 9.68359976287486 \\ & 6.14180160301955 & 10.3389393707077 \\ \hline I b & 2.63614545223571 & 9.49181146895361 \\ & 4.67384662462167 & 10.4432886891063 \\ & 6.68822113801299 & 10.8883259280963 \\ \hline II a & 3.08416398565973 & 9.12958397138681 \\ & 5.16022385714180 & 10.06129848810491 \\ & 7.19459890603674 & 10.55370414489915 \\ \hline II b & 1.64890985955010 & 8.45628399932237 \\ & 3.71764148794895 & 10.07328719135456 \\ & 5.72295574169836 & 10.69967308122208 \\ \hline III & 2.47587020635594 & 17.99123630712047 \\ & 2.75437747583008 & 19.56534435565128 \\ & 2.94236929747301 & 20.15957644135879 \\ \hline \end{tabular} \end{center} \caption{Unstable periodic trajectories.} \label{tab:1} \end{table} \begin{figure}[t] \begin{center} \includegraphics[scale=0.45]{fig11a.eps} \includegraphics[scale=0.45]{fig11b.eps} \caption{Periodic trajectories of type `I a' in the configuration space.} \label{fig:11} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=0.45]{fig12a.eps} \includegraphics[scale=0.45]{fig12b.eps} \caption{Periodic trajectories of type `I b' in the configuration space.} \label{fig:12} 
\end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=0.45]{fig13a.eps} \includegraphics[scale=0.45]{fig13b.eps} \caption{Periodic trajectories of type `II a' in the configuration space.} \label{fig:13} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=0.45]{fig14a.eps} \includegraphics[scale=0.45]{fig14b.eps} \caption{Periodic trajectories of type `II b' in the configuration space.} \label{fig:14} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=0.45]{fig15a.eps} \includegraphics[scale=0.45]{fig15b.eps} \caption{Periodic trajectories of type `III' in the configuration space.} \label{fig:15} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=0.45]{fig16a.eps} \includegraphics[scale=0.45]{fig16b.eps} \caption{Non-symmetrical periodic trajectories in the configuration space.} \label{fig:16} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=0.45]{fig17b.eps} \includegraphics[scale=0.45]{fig17bd.eps} \caption{Stability analysis for the first trajectory from Fig. \ref{fig:11}. The principal Lyapunov exponent (left) and the distance of the separation vector (right) for an initial separation $\Delta a = 10^{-12}$.} \label{fig:17} \end{center} \end{figure} \section{Conclusions} In the present paper the complex behavior of trajectories of FRW cosmological models with a scalar field was investigated by means of geodesics of the Jacobi metric. We pointed out the role of the singular set $\partial\mathcal{D}$, on which the Jacobi metric degenerates, in the detection of complex dynamical behavior. We found that this set can serve as a useful Poincar{\'e} surface. Moreover, the distribution of the intersection points, as well as the existence of periodic orbits, contains interesting information about the degree of complexity of the dynamics. 
Therefore all these investigations should be treated as a complementary, geometrical description of chaotic behavior. We have demonstrated the complexity of the dynamics in the sense of $(1)$ Poincar{\'e} sections, $(2)$ the random distribution of intersection points, $(3)$ the existence of unstable periodic orbits and $(4)$ chaos in the trajectory coding. All this evidence has a rather mathematical sense, because we prolong trajectories into the nonphysical domain $a<0$. Following Motter and Letelier \cite{Motter:2002}, the true sense of complexity lies in the nonintegrability of the dynamics. In Refs. \cite{Maciejewski:2002,Maciejewski:2000,Maciejewski:2001} we have investigated the model under consideration in the framework of nonintegrability, and we showed that it is non-integrable in the sense of the nonexistence of meromorphic first integrals for the generic case of the model's parameters. To that end, the Yoshida and Ziglin methods, as well as the stronger methods of the differential Galois group, were used. The presented approach offers some new possibilities of coding trajectories, to which the well-defined conception of Kolmogorov complexity can also be applied \cite{Kolmogorov:1965}. In 1963-1965 A.~N. Kolmogorov proposed to consider a measure of complexity in the framework of the general theory of algorithms. Let us consider a cosmological model as a mathematical object whose coded dynamics, in the form of a binary string $\alpha$, is its complete description. Then the length, in bits, of the shortest program generating $\alpha$ can be used as a measure of the complexity of the object \cite{Li:1997}, and for the purpose of quantifying complexity Kolmogorov's idea can be useful. The question whether the FRW cosmological model is complex in the Kolmogorov sense remains open. 
In our opinion, the geometrical language introduced here offers a new and interesting possibility of coding trajectories in the form of binary strings, which makes answering this question easier.
\section{Introduction}\label{intro} Our current paradigm for the origin of galaxy morphologies rests upon hierarchical mass assembly \citep[e.g.,][]{steinmetz02}, and many transformational processes are at work throughout the evolutionary histories of galaxies. Some determine the main structural traits (e.g., disk versus spheroid) while others only influence properties such as color and star-formation rates. Disk galaxy collisions lead to the formation of elliptical galaxies \citep{spitzer51,toomre72,farouki82,negroponte83,barnes92,barnes96,mihos96}, and the extreme example of this process is the build-up of the most massive galaxies in the Universe at the cores of galaxy clusters through the accretion of cluster members. Disks can also be transformed into spheroidals by tidal shocks as they are harassed by the cluster gravitational potential \citep{farouki81,moore96,moore98}. Harassment inflicts more damage to low luminosity galaxies because of their slowly rising rotation curves and their low density cores. Galaxies can be stripped of their internal gas and external supply through ram pressure exerted by the intracluster medium \citep{gunn72,larson80,quilis00}, and the result is a ``quenching'' (or ``strangulation'') of their star formation that leads to a rapid reddening of their colours \citep[also see][]{martig09}. The task of isolating observationally the effects of a given process has remained a major challenge to this day. Many processes affecting galaxy morphologies are clearly environmentally-driven, and galaxy clusters are therefore ideal laboratories in which to study all of them. The dynamical state of a cluster, which can be observationally characterized by measuring mass and substructures, should be related to its morphological content. For example, the number of interactions/collisions suffered by a given galaxy should depend on local number density and the time it has spent within the cluster. 
Dynamically young clusters with a high degree of subclustering should contain large numbers of galaxies that are infalling for the first time. More massive clusters will contain more galaxies, but they will also have higher galaxy-galaxy relative velocities that may impede merging \citep{lubin02}. Spheroidal/elliptical galaxies will preferentially be formed in environments where the balance between number density and velocity dispersions is optimal, but it is still not clear where this optimal balance lies. Cluster masses can be estimated from their galaxy internal velocity dispersion \citep{rood72,dressler84,carlberg97,tran99,borgani99,lubin02}, through weak-lensing shear \citep{kaiser93,schneider95,hoekstra00,clowe06} or through analysis of their hot X-ray emitting atmospheres \citep[e.g., ][]{allen98}, and it will be used here as the main independent variable against which morphological content will be studied. The morphological content of high-redshift clusters is most often characterized by the fraction $f_{E+S0}$ of early-type galaxies they contain \citep{dressler97,dokkum00, fasano00, dokkum01, lubin02, holden04, smith05, postman05, desai07,poggianti09b}. The bulk of the data available so far is based on visual classification. ``Early-type'' galaxies are defined in terms of visual classifications as galaxies with E or S0 Hubble types. A compilation of early-type fractions taken from the literature \citep{dokkum00} shows a dramatic increase of the early-type fractions as a function of decreasing redshift from values around 0.4$-$0.5 at $z \sim 1$ to values around 0.8 in the local Universe. However, the interpretation of this trend is not entirely clear as others \citep[e.g.,][]{dressler97,fasano00,desai07,poggianti09b} have reported that the fraction of E's remains unchanged as a function of redshift and that the observed changes in early-type fractions are entirely due to the S0 cluster populations. 
S0 populations were observed to grow at the expense of the spiral population \citep{smith05,postman05,moran07,poggianti09b} although others \citep[e.g.,][]{holden09} have argued for no evolution in the relative fraction of ellipticals and S0s with redshift. \citet{smith05} and \citet{postman05} show that the evolution of $f_{E+S0}$ is in fact a function of both lookback time (redshift) and projected galaxy density. They find $f_{E+S0}$ stays constant at 0.4 over the range 1 $< t_{lookback} <$ 8 Gyr for projected galaxy densities $\Sigma <$ 10 Mpc$^{-2}$. For high density environments ($\Sigma$ = 1000 Mpc$^{-2}$), $f_{E+S0}$ decreases from 0.9 to 0.7. At fixed lookback time, $f_{E+S0}$ varies by a factor of 1.8 from low to high densities at $t_{lookback}$ = 8 Gyr and by a factor of 2.3 at $t_{lookback}$ = 1 Gyr. The difference between low and high density environments thus increases with decreasing lookback time. Both studies indicate that the transition between low and high densities occurs at $0.6R_{200}$ ($R_{200}$ is the projected radius delimiting a sphere with interior mean density 200 times the critical density at the cluster redshift, see Equation~\ref{radius200}). \citet{postman05} also find that $f_{E+S0}$ does not change with cluster velocity dispersion for massive clusters ($\sigma$ $>$ 800 km/s). The data for one of their clusters also suggest that $f_{E+S0}$ decreases for lower mass systems. This trend would be consistent with observations of $f_{E+S0}$ in groups that show a strong trend of decreasing $f_{E+S0}$ versus decreasing $\sigma$ \citep{zabludoff98}. Finally, $f_{E+S0}$ seems to correlate with cluster X-ray luminosity at the 2-3$\sigma$ level \citep{postman05}. Recent works on stellar mass-selected cluster galaxy samples \citep{holden07,vanderwel07} paint a different picture. 
The fractions of E+S0 galaxies in clusters, groups and the field do not appear to have changed significantly from $z \sim 0.8$ to $z \sim 0.03$ for galaxies with masses greater than 4$\times 10^{10} M_{\odot}$. The mass-selected early-type fraction remains around 90\% in dense environments ($\Sigma >$ 500 gal Mpc$^{-2}$) and 45\% in groups and the field. These results show that the morphology-density relation of galaxies more massive than 0.5M$_{*}$ has changed little since $z \sim 0.8$ and that the trend in morphological evolution seen in luminosity-selected samples must be due to lower mass galaxies. This is in agreement with \citet{delucia04,delucia07} and \citet{rudnick09} who have shown the importance of lower mass (i.e., fainter) galaxies to the evolution of the color-magnitude relation and of the luminosity function versus redshift. Another interesting result has come from attempts to disentangle age, morphology and environment in the Abell 901/902 supercluster \citep{wolf07,lane07}. Local environment appears to be more important to galaxy morphology than global cluster properties, and while the expected morphology-density and age-morphology relations have been observed, there is no evidence for a morphology-density relation at a fixed age. The time since infall within the cluster environment and not density might thus be the more fundamental parameter dictating the morphology of cluster galaxies. A number of efforts have been made on the theoretical side to model the morphological content of clusters. \citet{diaferio01} used a model in which the morphologies of cluster galaxies are solely determined by their merger histories. A merger between two similar mass galaxies produces a bulge, and a new disk may form through the subsequent cooling of gas. Bulge-dominated galaxies are in fact formed by mergers in smaller groups that are later accreted by clusters. 
Based on their model, they reach the following conclusions: (1) the fraction of bulge-dominated galaxies inside the virial radius should depend on the mass of the cluster, and it should show a pronounced peak for clusters with mass of 3 $\times$ 10$^{14}$ M$_\odot$ followed by a decline for larger cluster masses. (2) The fraction of bulge-dominated galaxies should be independent of redshift for clusters of fixed mass, and (3) the dependence of morphology on cluster mass should be stronger at high redshift than at low redshift. \citet{lanzoni05} use the GALICS semi-analytical models and find that early-type fractions strongly depend on galaxy luminosity rather than cluster mass. By selecting a brighter subsample of galaxies from their simulations, they find a higher fraction of ellipticals irrespective of the cluster mass in which these galaxies reside. This trend is particularly noticeable in their high-density environments. Observations and these earlier models clearly do not agree in important areas, and a comparison between them would clearly benefit from a larger cluster sample size. More recently, the Millennium Simulation \citep[MS;][]{springel05} has provided the highest resolution model thus far of a large (0.125 Gpc$^3$), representative volume of the Universe. Improved tracking of dark matter structure and new semi-analytical prescriptions \citep{delucia07a} allow the evolution of the galaxy population to be followed with higher fidelity and better statistics than in the otherwise similar work of \citet{diaferio01}. We will use cluster catalogues from the MS later in this paper for comparison with our observational data. 
Our understanding of high-redshift cluster galaxy populations in terms of their evolution as a function of redshift and their cluster-to-cluster variations has been hampered by the lack of comprehensive multi-wavelength (optical, near-infrared and X-ray) imaging and spectroscopic studies of large, homogeneously-selected samples of clusters. Many efforts are underway to improve sample sizes \citep{gonzalez01,gladders05,willis05,postman05}. One of these efforts is the European Southern Observatory Distant Cluster Survey \citep[``EDisCS'';][]{white05}. The EDisCS survey is an ESO large programme aimed at the study of a sample of eighteen optically-selected clusters over the redshift range 0.5-0.8. It makes use of the FORS2 spectrograph on the Very Large Telescope for optical imaging and spectroscopy and of the SOFI imaging spectrograph on the New Technology Telescope (NTT) for near-infrared imaging. A number of papers on star formation in clusters \citep{poggianti06,poggianti09a} and the assembly of the cluster red sequence \citep{delucia04,delucia07,sanchez09,rudnick09} have so far been published from these data. In addition to the core VLT/NTT observations, a wealth of ancillary data are also being collected. An 80-orbit program for the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope was devoted to the $i$-band imaging of our ten highest-redshift clusters. Details of the HST/ACS observations and visual galaxy classifications are given in \citet{desai07}, and the frequency and properties of galaxy bars are studied in \citet{barazza09}. X-ray observations with the XMM-Newton satellite of three EDisCS clusters have been published in \citet{johnson06}, with more clusters being observed. H-alpha observations of three clusters have been published in \citet{finn05}, with more clusters also being observed. Finally, the analysis of Spitzer/IRAC observations of all EDisCS clusters is in progress (Finn et al., in preparation). 
This paper presents the early-type galaxy fractions of EDisCS clusters as a function of cluster velocity dispersion, redshift and star-formation activity. A set of local clusters extracted from the Sloan Digital Sky Survey (SDSS) is used as a comparison sample. Early-type fractions were measured from two-dimensional bulge+disk decompositions on deep, optical VLT/FORS2 and HST/ACS images of spectroscopically-confirmed cluster member galaxies. Section~\ref{data} describes the EDisCS cluster sample selection and the imaging data. Section~\ref{analysis} describes the procedure used to perform bulge+disk decompositions on SDSS, VLT/FORS2 and HST/ACS images. Section~\ref{efractions} presents early-type fractions for the EDisCS clusters with a detailed comparison between visual and quantitative morphologies and between HST- and VLT-derived early-type fractions. It also includes early-type fractions for the SDSS clusters. Changes in EDisCS early-type fractions as a function of cluster velocity dispersion, redshift and star-formation activity are studied in Section~\ref{results}. Finally, Sections~\ref{discussion} and~\ref{conclusions} discuss our results and their implications for the morphological content of clusters. The set of cosmological parameters used throughout this paper is ($H_0, \Omega_{m}, \Omega_{\Lambda}$) = (70, 0.3, 0.7). \section{Data}\label{data} \subsection{Sample Selection and VLT/FORS2 Optical Imaging}\label{sampsel} The sample selection and optical/near-infrared imaging data for the EDisCS survey are described in detail in \citet{gonzalez02}, \citet{white05} (optical photometry) and Arag\'on-Salamanca et al. (near-IR photometry; in preparation). Photometric redshifts for the EDisCS clusters are presented in \citet{pello09}, and cluster velocity dispersions measured from weak-lensing mass reconstructions are given in \cite{clowe06}. Spectroscopy for the EDisCS clusters is detailed in \citet{halliday04} and \citet{milvang08}. 
Clusters in the EDisCS sample were drawn from the Las Campanas Distant Cluster Survey (LCDCS) candidate catalog \citep{gonzalez01}. Candidate selection was constrained by published LCDCS redshift and surface brightness estimates. Candidates were selected to be among the highest surface brightness detections at each redshift in an attempt to recover some of the most massive clusters at each epoch. Using the estimated contamination rate for the LCDCS of $\sim 30 \%$, we targeted thirty candidates in the redshift range 0.5$-$0.8 for snapshot VLT/FORS2 imaging in an effort to obtain twenty (10 at $z \sim 0.5$ and 10 at $z \sim 0.8$) confirmed clusters. The $z \sim 0.5$ candidates were observed for 20 minutes in each of $I_\mathrm{B}$ and $V_\mathrm{B}$, and the $z\sim0.8$ candidates were observed for 20 minutes in each of $I_\mathrm{B}$ and $R_\mathrm{sp}$. These filters are the standard FORS2 ones. $V_\mathrm{B}$ and $I_\mathrm{B}$ are close approximations to the \citet{bessell90} photometric system while the $R_\mathrm{sp}$ is a special filter for FORS2. Final cluster candidates for deeper VLT imaging were selected on the basis of color and surface density of galaxies on the sky \citep{white05}. The image quality on the final stacked images ranged from 0\farcs 4 to 0\farcs 8. As described in \citet{white05}, deep spectroscopy was not obtained for two cluster candidates (1122.9-1136 and 1238.5-1144), and we therefore did not include them here. The main characteristics (positions, redshifts, velocity dispersions and radii) of the EDisCS cluster sample used in this paper are given in Table~\ref{clsample}. $R_{200}$ is the projected radius delimiting a sphere with interior mean density 200 times the critical density at the cluster redshift, and it is used throughout this paper as an important fiducial radius. 
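The formulae for $R_{200}$ and the cluster mass given below (Eqs.~\ref{radius200} and \ref{clmass}) translate directly into code. A minimal sketch (function names are ours; cosmology as adopted in this paper) reproduces the tabulated values, e.g. for clusters 1232.5-1250 and 1018.8-1211:

```python
import math

def r200_mpc(sigma_kms, z, h100=0.7, omega_m=0.3, omega_l=0.7):
    """Projected radius R_200 in Mpc, following Eq. (1) of the text."""
    ez = math.sqrt(omega_l + omega_m * (1.0 + z) ** 3)
    return 1.73 * (sigma_kms / 1000.0) / ez / h100

def cluster_mass_msun(sigma_kms, z, h100=0.7, omega_m=0.3, omega_l=0.7):
    """Cluster mass in solar masses, following Eq. (2) of the text."""
    ez = math.sqrt(omega_l + omega_m * (1.0 + z) ** 3)
    return 1.2e15 * (sigma_kms / 1000.0) ** 3 / ez / h100

# Cluster 1232.5-1250: sigma = 1080 km/s, z = 0.5419
print(round(r200_mpc(1080, 0.5419), 2))                  # -> 1.99 (Mpc)
print(round(cluster_mass_msun(1080, 0.5419) / 1e15, 2))  # -> 1.61 (10^15 M_sun)
# Cluster 1018.8-1211: sigma = 474 km/s, z = 0.4716
print(round(r200_mpc(474, 0.4716), 2))                   # -> 0.91 (Mpc)
```

Note the $h_{100}^{-1}$ scaling: the tabulated values assume $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, i.e. $h_{100}=0.7$.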
$R_{200}$ values in Table~\ref{clsample} were calculated using the equation: \begin{eqnarray} R_{200} = 1.73 \frac{\sigma}{1000 {\rm km/s}} \frac{1}{\sqrt{\Omega_\Lambda + \Omega_m(1+z)^3}} h_{100}^{-1} {\rm Mpc} \label{radius200} \end{eqnarray} \noindent where $h_{100}$ = $H_0$ / 100 and $\sigma$ is the cluster velocity dispersion measured using spectroscopically-confirmed cluster members \citep{carlberg97,finn05}. Cluster masses were calculated using the equation: \begin{eqnarray} M_{cl} = 1.2\times10^{15} \biggl (\frac{\sigma}{1000 {\rm km/s}}\biggr )^3\frac{1}{\sqrt{\Omega_\Lambda + \Omega_m(1+z)^3}} h_{100}^{-1} M_\odot \label{clmass} \end{eqnarray} \noindent as in \citet{finn05}. In practice, the redshift distributions of the high-$z$ and mid-$z$ samples partly overlap as can be seen from Table~\ref{clsample}. \begin{table*} \caption[]{Main characteristics of the EDisCS cluster sample: IDs, positions, redshifts, number of spectroscopically-confirmed members, velocity dispersions and radii. 
Clusters with HST imaging are identified by the superscript ``h'' in their ID.} \begin{center} \begin{tabular*}{15.5cm}{ccccccrcc} Mid-$z$ clusters & & & & & & & &\\ \hline ID & RA$^{\rm a}$ & DEC$^{\rm a}$ & z$^{\rm b}$ & Age of Universe & $N_{mem}$$^{\rm c}$ & $\sigma$$^{\rm d}$ & $R_{200}$$^{\rm e}$ & $M_{cl}$$^{\rm f}$ \\ & (2000.0) & (2000.0) & & ($\times$ $t_0$) & & (km/s) & (Mpc) & (10$^{15}$M$_{\odot}$)\\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9)\\ \hline 1018.8-1211 & 10:18:46.8 & $-$12:11:53 & 0.4716 & 0.654 & 33 & 474 $^{+~75}_{-~57}$ & 0.91 & 0.142\\ 1059.1-1253 & 10:59:07.1 & $-$12:53:15 & 0.4550 & 0.663 & 41 & 517 $^{+~71}_{-~40}$ & 1.00 & 0.186\\ 1119.3-1130 & 11:19:16.7 & $-$11:30:29 & 0.5491 & 0.615 & 21 & 165 $^{+~34}_{-~19}$ & 0.30 & 0.006\\ 1202.7-1224 & 12:02:43.4 & $-$12:24:30 & 0.4246 & 0.680 & 21 & 540 $^{+139}_{-~83}$ & 1.07 & 0.216\\ 1232.5-1250$^h$ & 12:32:30.5 & $-$12:50:36 & 0.5419 & 0.618 & 54 & 1080 $^{+119}_{-~89}$ & 1.99 & 1.610\\ 1301.7-1139 & 13:01:40.1 & $-$11:39:23 & 0.4828 & 0.648 & 37 & 681 $^{+~86}_{-~86}$ & 1.30 & 0.418\\ 1353.0-1137 & 13:53:01.7 & $-$11:37:28 & 0.5889 & 0.596 & 22 & 663 $^{+179}_{-~91}$ & 1.19 & 0.362\\ 1411.1-1148 & 14:11:04.6 & $-$11:48:29 & 0.5200 & 0.629 & 26 & 709 $^{+180}_{-105}$ & 1.32 & 0.461\\ 1420.3-1236 & 14:20:20.0 & $-$12:36:30 & 0.4969 & 0.641 & 27 & 225 $^{+~77}_{-~62}$ & 0.43 & 0.015\\ \hline & & & & & & & &\\ High-$z$ clusters & & & & & & & &\\ \hline ID & RA & DEC & z & Age of Universe & $N_{mem}$ & $\sigma$ & $R_{200}$ & $M_{cl}$\\ & (2000.0) & (2000.0) & & ($\times$ $t_0$) & & (km/s) & (Mpc) & (10$^{15}$M$_{\odot}$)\\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9)\\ \hline 1037.9-1243$^h$ & 10:37:51.2 & $-$12:43:27 & 0.5800 & 0.600 & 19 & 315 $^{+~76}_{-~37}$ & 0.57 & 0.039\\ 1040.7-1156$^h$ & 10:40:40.4 & $-$11:56:04 & 0.7020 & 0.548 & 30 & 418 $^{+~55}_{-~46}$ & 0.70 & 0.085\\ 1054.4-1146$^h$ & 10:54:24.5 & $-$11:46:20 & 0.6965 & 0.550 & 49 & 589 $^{+~78}_{-~70}$ & 0.99 
& 0.238\\ 1054.7-1245$^h$ & 10:54:43.6 & $-$12:45:52 & 0.7503 & 0.529 & 36 & 504 $^{+113}_{-~65}$ & 0.82 & 0.144\\ 1103.7-1245b$^h$ & 11:03:36.5 & $-$12:44:22 & 0.7029 & 0.548 & 11 & 242 $^{+126}_{-104}$ & 0.40 & 0.016\\ 1138.2-1133$^h$ & 11:38:10.3 & $-$11:33:38 & 0.4801 & 0.649 & 48 & 737 $^{+~77}_{-~56}$ & 1.41 & 0.531\\ 1216.8-1201$^h$ & 12:16:45.1 & $-$12:01:18 & 0.7955 & 0.513 & 67 & 1018 $^{+~73}_{-~77}$ & 1.61 & 1.159\\ 1227.9-1138$^h$ & 12:27:58.9 & $-$11:35:13 & 0.6375 & 0.575 & 22 & 572 $^{+~96}_{-~54}$ & 0.99 & 0.226\\ 1354.2-1231$^h$ & 13:54:09.7 & $-$12:31:01 & 0.7562 & 0.527 & 21 & 668 $^{+161}_{-~80}$ & 1.08 & 0.335\\ \hline \label{clsample} \end{tabular*} $^{\rm a}$ Cluster BCG Coordinates (J2000)\\ $^{\rm b}$ Cluster redshift measured from EDisCS spectroscopy\\ $^{\rm c}$ Number of cluster members confirmed by EDisCS spectroscopy\\ $^{\rm d}$ Cluster velocity dispersion measured from EDisCS spectroscopy\\ $^{\rm e}$ From equation~\ref{radius200}\\ $^{\rm f}$ From equation~\ref{clmass}\\ \end{center} \end{table*} \subsection{VLT Spectroscopy and Cluster Membership}\label{vltspec-members} We use only spectroscopically-confirmed cluster members to calculate our cluster early-type fractions. Deep multislit spectroscopy of the EDisCS clusters was obtained with the FORS2 spectrograph on the VLT. Spectra of $>$ 100 galaxies per cluster field were obtained with typical exposure times of two and four hours for the mid-$z$ and high-$z$ samples, respectively. Spectroscopic targets were selected from $I$-band catalogues. The $I$ band corresponds to rest-frame $\sim$ 5000 $\pm$ 400 $\AA$ at the redshifts of the EDisCS clusters. Conservative rejection criteria based on photometric redshifts were used in the selection of spectroscopic targets to reject a significant fraction of non-members while retaining a spectroscopic sample of cluster galaxies equivalent to a purely $I$-band selected one. 
We verified {\it a posteriori} that these criteria excluded at most 1$\%$ of the cluster galaxies \citep{halliday04,milvang08}. The spectroscopic selection, observations and spectroscopic catalogs are presented in detail in \citet{halliday04} and \citet{milvang08}. As described in \citet{halliday04}, cluster redshifts and velocity dispersions were iteratively calculated using a biweight scale estimator for robustness. Cluster members were defined as galaxies with redshifts within the range $z_{cluster} \pm 3\sigma_{cluster}$ where $z_{cluster}$ is the median redshift of all cluster members. \subsection{HST/ACS Imaging}\label{hstimaging} In addition to our ground-based imaging, an 80-orbit program (GO 9476, PI: Dalcanton) for the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope (HST) was devoted to the $i$-band imaging of our ten highest-redshift cluster fields. Details of these observations are given in \citet{desai07}. Briefly, the HST observations were designed to coincide as closely as possible with the coverage of the ground-based optical imaging and spectroscopy, within guide star constraints. The VLT/FORS2 images cover a 6\farcm5 $\times$ 6\farcm 5 region around each cluster, with the cluster center displaced by 1\arcmin from the center of the region. For reference, the ACS WFC has a field of view of roughly 3\farcm 5 $\times$ 3\farcm 5. Balancing scientific motives for going deep over the entire spectroscopic field against a limited number of available orbits, we tiled each 6\farcm 5 $\times$ 6\farcm 5 field in four 1-orbit pointings overlapping one additional deep 4-orbit pointing on the cluster center. The resulting exposure time per pixel was 2040 seconds except for the central 3\farcm 5 $\times$ 3\farcm 5, which had an exposure time per pixel of 10200 seconds. The deep central pointing probes to lower surface brightness, fainter magnitudes, and larger galactocentric radii in the region of the cluster containing the most galaxies. 
All exposures were taken under LOW SKY conditions to maximize our surface brightness sensitivity. An image mosaic was created for each cluster using the CALACS/Multidrizzle pipeline, and the final sampling of the multidrizzled image mosaics was 0\farcs045. This is the ``native'' ACS image sampling, and it was chosen to avoid potential aliasing problems that might have been introduced by a finer multidrizzle sampling given our limited dither pattern in the cluster outskirts. Clusters with HST imaging are identified by an ``h'' in Table~\ref{clsample}. \section{Quantitative Galaxy Morphology}\label{analysis} \subsection{Source Detection and Extraction}\label{sources} The source catalogs and segmentation images for the EDisCS clusters were created using the SExtractor (``Source Extractor'') galaxy photometry package version 2.2.2 \citep{bertin96}. The SExtractor source detection was run on the combined deep FORS2 images in ``two-image'' mode using the $I$-band image as the reference detection image for all the other passbands. The detection threshold was 1.5$\sigma_{bkg}$, and the required minimum object area above that threshold was 4 pixels. The convolution kernel was a 7$\times$7 Gaussian kernel with a FWHM of 3.0 pixels. No star/galaxy separation based on the SExtractor ``stellarity'' index was attempted. Every source was fit with a bulge+disk model, and unresolved sources such as stars could easily be identified as output models with zero half-light radius. As SExtractor performs source detection and photometry, it is able to deblend sources using flux multi-thresholding. This deblending technique works well in the presence of saddle points in the light profiles between objects. Each SExtractor pre-deblending ``object'' consists of all the pixels above the detection threshold that are spatially connected to one another. This group of pixels may or may not include several real objects. 
The multi-thresholding algorithm assigns the pixels between two adjacent objects and below the separation threshold based on a probability calculated from bivariate Gaussian fits to the two objects. No assumption is made regarding the shape of the objects in this statistical deblending technique. We used a value for the SExtractor deblending parameter DEBLEND\_MINCONT of 0.0005. This value is {\it subjective}, and it was found through visual inspection of several EDisCS cluster images to provide good object separation. Even though the value of DEBLEND\_MINCONT was determined subjectively, it provides an unequivocal definition of an object in the EDisCS catalogs. It was only determined once, and the same value of DEBLEND\_MINCONT was consistently used for all EDisCS cluster images as well as for all the reliability tests of Section~\ref{reliability}. \subsection{Two-Dimensional Bulge+Disk Decompositions} \label{bdcomps} This work uses GIM2D (Galaxy IMage 2D) version 3.2, a 2D decomposition fitting program \citep{simard02}, to measure the structural parameters of galaxies on the EDisCS VLT/FORS2 and HST/ACS images. GIM2D is an IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.}/SPP package written to perform detailed bulge+disk surface brightness profile decompositions of low signal-to-noise (S/N) images of distant galaxies in a fully automated way. GIM2D is publicly available, and it has been used extensively in a wide range of different projects so far. \subsubsection{Fitting Model} \label{fitmodel} The fitting model used for the two-dimensional bulge+disk decompositions of EDisCS galaxies is the same as the one used by \citet{simard02}. It consists of a ``bulge'' component with a de Vaucouleurs profile and of an exponential ``disk'' component. 
We put ``bulge'' and ``disk'' between quotes to emphasize that this conventional nomenclature does not say anything about the internal kinematics of the components. The presence of a ``disk'' component does not necessarily imply the presence of an actual disk because many dynamically hot systems also have simple exponential profiles. The fitting model had ten free parameters: the total galaxy flux $F$, the bulge fraction $B/T$ ($\equiv$ 0 for pure disk systems), the bulge semi-major axis effective radius $r_e$, the bulge ellipticity $e$ ($e \equiv 1-b/a$, $b \equiv$ semi-minor axis, $a \equiv$ semi-major axis), the bulge position angle of the major axis $\phi_{b}$ on the image (clockwise, y-axis $\equiv$ 0), the disk semi-major axis exponential scale length $r_d$ (also denoted $h$ in the literature), the disk inclination $i$ (face-on $\equiv$ 0), the disk position angle $\phi_d$ on the image, and the subpixel $dx$ and $dy$ offsets of the model center with respect to the input science image center. The sky background is not a free parameter of the fits (see Section~\ref{skyestm}). The S\'ersic index for the bulge profile is fixed at a value of $n = 4$ (i.e., the de Vaucouleurs profile value). The position angles $\phi_b$ and $\phi_d$ were not forced to be equal for two reasons: (1) a large difference between these position angles is a signature of strongly barred galaxies, and (2) some observed galaxies do have {\it bona fide} bulges that are not quite aligned with the disk position angle. The smooth bulge+disk model used here is obviously a simple approximation. After all, many real galaxies will exhibit more than two structural components such as nuclear sources, bars, spiral arms and HII regions. Even in the presence of only a bulge and a disk, the ellipticity and/or the position angles of these components might be functions of galactocentric distance. 
The bulge+disk model is a trade-off between a reasonable number of fitting parameters and a meaningful decomposition of distant galaxy images. {\it No} non-parametric or parametric quantitative classification system is perfect. Any classification system will suffer from biases inherent to its basic definition. However, provided a given quantitative system is clearly defined before its use, its results will be readily reproducible in their successes {\it and} failures by other investigators. The exact shape of bulge profiles remains under debate \citep[e.g.,][and references therein]{balcells03}. Locally, there is evidence that the bulges of late-type spiral galaxies may be better fit by an $n$ = 1 profile, whereas bright ellipticals and the bulges of early-type spiral galaxies follow an $n$ = 4 profile \citep{dejong96,courteau96,andredakis98}. Local late-type galaxies with $n$ = 1 bulges have $B/T \leq 0.1$ \citep{dejong96}. Since such bulges contain only 10\% of the total galaxy light, low signal-to-noise measurements of late-type high-redshift galaxies make it very difficult, if not impossible, to determine the S\'ersic index of distant bulges even with the spatial resolution of the Hubble Space Telescope, as demonstrated by an extensive set of tests on HST images of the high-redshift cluster CL1358+62 \citep{tran03}. On the other hand, $n$ is more important for bulge-dominated galaxies, and $n$ = 4 is the expected value based on local early-type galaxies. Knowing that bright ellipticals and the bulges of early-type spirals are well-fit by a de Vaucouleurs profile, an $n$ = 4 bulge profile was therefore adopted as the canonical bulge fitting model here for the sake of continuity across the full range of morphological types. \subsubsection{Fitting Regions}\label{fitregions} GIM2D bulge+disk decompositions are performed on thumbnail (or ``postage stamp'') images extracted around the objects detected by SExtractor rather than on the entire science image itself. 
The area of each thumbnail is set by the isophotal area of the object: all thumbnails were chosen to have an area 5 times larger than the 1.5$\sigma_{bkg}$ isophotal area. Each thumbnail is a square image with sides of length $\sqrt{5 \times isophotal\_area}$. The first thumbnail is extracted from the science image itself, and the local background calculated by SExtractor is subtracted from it so that it should have a background mean level close to zero. The second thumbnail is extracted from the SExtractor segmentation image. The GIM2D decompositions were performed on all pixels flagged as object {\it or} background in the SExtractor segmentation image. Object areas in the segmentation image are sharply delineated by the location of the isophote corresponding to the detection threshold because SExtractor considers all pixels below this threshold to be background pixels. However, precious information on the outer parts of the galaxy profile may be contained in the pixels below that threshold, and fits should therefore not be restricted to object pixels alone, to avoid discarding that information. Pixels belonging to objects in the neighborhood of the primary object being fit are masked out of the fitting area using the SExtractor segmentation image. The flux from the primary object that would have been in those masked areas in the absence of neighbors is nonetheless properly included in the magnitude measurements given in this paper because magnitudes were obtained by integrating the best-fit models over {\it all} pixels. \subsubsection{Sky Background Level Measurements}\label{skyestm} Special care must be taken in the determination of the local sky background level $b$ and dispersion $\sigma_{bkg}$ as sky errors are the dominant source of systematic errors in bulge+disk decompositions of distant galaxies. 
As an example, overestimating the background sky level will lead to underestimates of the galaxy total flux, half-light radius and bulge fraction as a result of strong parameter covariances. Even though the SExtractor local background was subtracted from each galaxy thumbnail image, an additional (residual) background estimate $db$ was computed and used by GIM2D to correct for any systematic error in the initial SExtractor sky level estimate. In order to compute $db$, GIM2D used all the pixels in the science thumbnail image flagged as background pixels (flag value of zero) in the SExtractor segmentation image. GIM2D further pruned this sample of background pixels by excluding any background pixel that is closer than five pixels ($1\farcs0$ for the pixel sampling of the FORS2 detectors) from any (primary or neighboring) object pixels. This buffer zone ensures that the flux from all SExtracted objects in the image below all the 1.5$\sigma_{bkg}$ isophotes does not significantly bias the mean background level upwards and artificially inflate $\sigma_{bkg}$. A minimum of 7500 sky pixels was imposed on the area of the sky region. In cases where the number of sky pixels in the input science thumbnail image was insufficient, the original science image was searched for the 7500 sky pixels nearest to the object. For the EDisCS fits, background parameters were re-calculated with GIM2D before fitting, and the residual background levels $db$ were then frozen to their recalculated values for the bulge+disk fits. \subsubsection{Point-Spread-Functions} The shape of the Point-Spread-Function (PSF) on the VLT/FORS2 and HST/ACS images varies significantly as a function of position, and these variations must be taken into account when point-spread-functions for the bulge+disk decompositions are generated. 
For both sets of images, we used the stand-alone version of the stellar photometry program DAOPHOT II \citep{stetson87} to construct spatially-varying PSF models for the EDisCS cluster images. For each cluster and for each passband, we selected ``clean'' point sources (detection flag of zero and stellarity index of 0.8 or greater) from the SExtractor source catalog. The positions of these point sources were fed to the DAOPHOT routine PSF to be modelled as the sum of a Gaussian core and of an empirical look-up table representing corrections from the best-fitting Gaussian to the actual observed values. Both the Gaussian core parameters and the look-up table were allowed to vary linearly as a function of $x$ and $y$ positions on the image. Finally, the PSF model was used to create a PSF at the position of each galaxy to be fit. The PSF images were $2\farcs5$ on a side to provide good dynamic range for the fits. \subsubsection{Reliability Tests}\label{reliability} Following the same procedure as in \citet{simard02}, we performed an extensive set of simulations to test the reliability of our sky background estimates and of the best-fit parameter values recovered through bulge+disk fits on both sets of images. 2000 smooth galaxy image models were created with structural parameters uniformly generated at random in the following ranges: $20.0 \leq I \leq 25.0$, $0.0 \leq B/T \leq 1.0$, $0 \leq r_{e} \leq 10\farcs 0$, $0.0 \leq e \leq 0.7$, $0 \leq r_{d} \leq 10\farcs0$, and $0 \degr \leq i \leq 85 \degr$. The bulge S\'ersic index was held fixed at $n = 4$ for all models. Both bulge and disk position angles were fixed to 90$\degr$\thinspace\thinspace for all simulations, and the bulge and disk sizes were uniformly generated in the log of the size ranges above. Each simulation was convolved with a PSF computed from one of the images with a FWHM typical of the VLT/FORS2 ($\sim 0\farcs 8$) and HST/ACS ($\sim 0\farcs 05$) observations. 
The same PSF was used in both creating and analyzing the simulations, so the results will not include any error in the structural parameters due to PSF mismatch. Poisson deviates were used to add photon noise due to galaxy flux into the simulations. The noisy images were then embedded in a 20$\arcsec$ $\times$ 20\arcsec section of one of the real $I-$band images to provide a real background for the simulations. In addition to sky photon noise and detector read-out noise, the real background noise includes brightness fluctuations of very faint galaxies below the detection threshold. This procedure thus yields realistic errors that include the effect of sky errors. The simulations were SExtracted exactly in the same way as real EDisCS sources (see Section~\ref{sources}). Science and segmentation thumbnails extracted from the simulations were analyzed with GIM2D following exactly the same steps as for the real galaxies (see Section~\ref{bdcomps}). Figures~\ref{emap_m-rhl} and~\ref{emap-bt} show maps of errors on the galaxy total magnitude $I$, galaxy intrinsic half-light radius $r_{hl}$ and galaxy bulge fraction $B/T$ for the VLT/FORS2 images. The left-hand panels show the mean parameter errors as a function of input galaxy magnitude and size, and the right-hand panels show the 1$\sigma$ parameter random error as a function of input galaxy magnitude and size. The lower number in each cell is the number of simulated galaxies created for that cell. Most systematic errors are directly related to surface brightness as magnitudes and sizes of low surface brightness sources are inherently harder to measure. This fact is borne out by the trends in the errors shown in Figure~\ref{emap_m-rhl}. Decreasing surface brightness follows a line going from the lower left-hand corners to the upper right-hand ones. The top panels of Figures~\ref{emap_m-rhl} show that systematic errors on $I$ start to become significant ($\Delta I \simeq 0.2$) fainter than $I$ = 22.5. 
Systematic errors on log $r_{hl}$ also increase significantly beyond this magnitude. It is important to note that $I$ = 22.5 is about 2 mag fainter than the galaxies that will be used to compute cluster early-type galaxy fractions in Section~\ref{efrac-VLT}, so these galaxy fractions should be unaffected. Figure~\ref{emap-bt} shows that systematic errors on $B/T$ are smallest over the region $I \leq 22.5, -0.5 \leq $ log $r_{hl} \leq 0.3$ where most of the real EDisCS galaxies actually lie. As mentioned above, our reliability tests do not include the effects of PSF mismatch errors because we used the same PSF for creating simulated images and for their analysis. However, we were able to check that these errors were not significant because we fitted both galaxies {\it and} stars on our real VLT/FORS2 images. The measured intrinsic radii of the stars clustered at zero, and this would not have been the case had PSF mismatch errors been important. \begin{figure*} \resizebox{\hsize}{!}{\includegraphics[angle=270]{fig1.ps}} \caption{Two-dimensional maps of GIM2D systematic and random galaxy magnitude and half-light radius errors from 2000 VLT/FORS2 image simulations. {\it Top left-hand panel}: Systematic error on recovered galaxy total magnitude $I_{rec}$ as a function of {\it input} galaxy log half-light radius $r_{hl,input}$ in arcseconds and {\it input} galaxy total magnitude $I_{input}$. The top number in each cell is the mean magnitude error ($I_{rec} - I_{input}$), and the bottom number is the number of simulations created in that cell. {\it Top right-hand panel}: 1$\sigma$ random error on $I_{rec}$ ($\sigma$($I_{rec}-I_{input}$)) as a function of log $r_{hl,input}$ and $I_{input}$. {\it Bottom left-hand panel}: Systematic error on recovered galaxy intrinsic log half-light radius $r_{hl,rec}$ as a function of {\it input} galaxy log half-light radius $r_{hl,input}$ in arcseconds and {\it input} galaxy total magnitude $I_{input}$. 
The top number in each cell is the mean log radius error (log $r_{hl,rec} -$ log $r_{hl,input}$), and the bottom number is the number of simulations created in that cell. {\it Bottom right-hand panel}: 1$\sigma$ random error on log $r_{hl,rec}$ ($\sigma$(log $r_{hl,rec}-$ log $r_{hl,input}$)) as a function of log $r_{hl,input}$ and $I_{input}$.} \label{emap_m-rhl} \end{figure*} \begin{figure*} \begin{center} \resizebox{\hsize}{!}{\includegraphics[angle=270]{fig2.ps}} \caption{Two-dimensional maps of GIM2D systematic and random galaxy bulge fraction errors from 2000 VLT/FORS2 image simulations. {\it Top left-hand panel}: Systematic error on recovered galaxy bulge fraction $(B/T)_{rec}$ as a function of {\it input} galaxy log half-light radius $r_{hl,input}$ in arcseconds and {\it input} galaxy total magnitude $I_{input}$. The top number in each cell is the mean bulge fraction error ($(B/T)_{rec} - (B/T)_{input}$), and the bottom number is the number of simulations created in that cell. {\it Top right-hand panel}: 1$\sigma$ random error on $(B/T)_{rec}$ ($\sigma$($(B/T)_{rec}-(B/T)_{input}$)) as a function of log $r_{hl,input}$ and $I_{input}$.} \label{emap-bt} \end{center} \end{figure*} \section{Early-Type Galaxy Fractions}\label{efractions} \subsection{Definition and Comparison with Galaxy Visual Classifications}\label{efrac-defn} The bulk of the previous work on the morphological content of high-redshift clusters is based on the visual classification of galaxies, and this section compares visual and quantitative morphological classification. Visual classifications for 9200 galaxies in EDisCS clusters with HST images are presented in \citet{desai07}. As shown by previous works \citep{im02,mcintosh02,tran03,blakeslee06}, quantitative and visual morphologies can be best linked together by focussing on three structural parameters: bulge fraction $B/T$, image smoothness $S$ and bulge ellipticity $e$. 
The image smoothness, $S$, is defined as: \begin{equation} S = R_T + R_A \label{image-smoothness} \end{equation} \noindent where $R_T$ and $R_A$ are defined in Equation 11 of \citet{simard02}. These two indices quantify the amount of light in symmetric and asymmetric residuals from the fitting model respectively, and they are expressed as a fraction of the total galaxy model flux. $S$ is typically measured inside a radius that is a multiple of the galaxy half-light radius. Using our HST/ACS measurements, we found no differences between image smoothness within one and two galaxy half-light radii. We therefore use image smoothness inside two half-light radii (and denote it $S2$ hereafter) because it is more reliably measured on the VLT/FORS2 images with their lower spatial resolution. We can choose selection criteria on $B/T$, $S$ and $e$ that yield the best match to the visual classifications, and the particular choices are not important as long as the same selection criteria are applied to both local and high-redshift clusters. We divide the visually-classified EDisCS galaxies into $T$ = $-$5 (E), $-$2 (S0), 1 (S0/a) and ``others'' ($T > 1$). Using our HST/ACS structural parameter measurements, we find that E and S0 galaxies have similar $B/T$ distributions, with the S0 distribution skewed towards slightly lower $B/T$, but their $e$ distributions are different. It is therefore possible to differentiate between E and S0 galaxies on the basis of these two parameters. S0 and S0/a galaxies have similar $e$ distributions but different $B/T$ and $S$ distributions. Given that the bulge ellipticity $e$ cannot be reliably measured on the VLT/FORS2 images, we restrict our selection criteria to $B/T$ and $S2$. Figure~\ref{hst_visual_S2-vs-bt} shows $S2$ versus $B/T$ for the four visual types of galaxies. $S2$ can take on small negative values due to statistical background subtraction terms \citep{simard02}. 
The optimal choice of limits on $B/T$ and $S2$ for our definition of early-type fraction is driven by the need to maximize the number of E/S0 galaxies selected while minimizing the contamination from Sa-Irr galaxies. After several iterations, we settled on $B/T \geq 0.35$ and $S2 \leq 0.075$ as our definition of an early-type galaxy. These limits are very similar to those used in previous studies \citep{im02,mcintosh02,tran03}. With these criteria, our quantitative selection can be translated into visual classification terms as \begin{equation} f_{et} = \bigl(0.69 N_E + 0.71 N_{S0} + 0.35 N_{S0/a} + 0.04 N_{Sa-Irr}\bigr) / N_{total} \label{fet-gim2d} \end{equation} \noindent The coefficients in Equation~\ref{fet-gim2d} give the completeness of the quantitative classification in terms of the \citet{desai07} visual classes. For example, the adopted $B/T$ and $S2$ cuts would select 69$\%$ of the galaxies visually classified by \citet{desai07} as E's, 71$\%$ of their S0's and so on. As mentioned earlier, E's and S0's cannot be distinguished using only $B/T$ and $S2$. Equation~\ref{fet-gim2d} is to be compared to the prescription of \citet{dokkum00}: \begin{equation} f_{et} = \bigl(N_E + N_{E/S0} + N_{S0} + \frac{1}{2} N_{S0/a}\bigl) / N_{total} \label{fet-vdk} \end{equation} \noindent where $N_{total}$ is the number of galaxies with $M_{V} \leq -20$. It is impossible to recover all the galaxies visually classified as early-types because a visual early-type does not necessarily imply an $r^{1/4}$ profile. Indeed, many early-type galaxies such as dwarf ellipticals have simple exponential profiles \citep{lin83,kormendy85}, and we have verified through isophote tracing that many galaxies visually classified as early-types and missed by our selection criteria do have radial surface brightness profiles that are exponential and thus consistent with their measured low $B/T$ values. 
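As an illustration of Equation~\ref{fet-gim2d}, the fraction can be evaluated directly from visual-class counts. This short Python sketch is purely illustrative; the class counts in the example are hypothetical, while the coefficients are those of the equation:

```python
# Sketch of Equation (fet-gim2d): the early-type fraction that the
# B/T >= 0.35 and S2 <= 0.075 cuts would yield, expressed through the
# completeness coefficients for the visual classes of Desai et al.
# The example counts below are hypothetical.

def f_et_quantitative(n_e, n_s0, n_s0a, n_sa_irr):
    """Early-type fraction implied by the quantitative cuts, given
    counts of visually classified E, S0, S0/a and Sa-Irr galaxies."""
    n_total = n_e + n_s0 + n_s0a + n_sa_irr
    return (0.69 * n_e + 0.71 * n_s0 + 0.35 * n_s0a + 0.04 * n_sa_irr) / n_total

# Hypothetical cluster: 20 E, 15 S0, 10 S0/a, 55 Sa-Irr members
print(f_et_quantitative(20, 15, 10, 55))
```

Note how the small Sa-Irr coefficient (0.04) keeps late-type contamination of the early-type fraction low even when late types dominate the counts.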
\begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=270]{fig3.ps}} \caption{Image smoothness parameter $S2$ versus bulge fraction $B/T$ for different visual types. The galaxies selected by our quantitative early-type galaxy criteria ($B/T \geq 0.35$ and $S2 \leq 0.075$) are enclosed in the area delimited by dashed lines.} \label{hst_visual_S2-vs-bt} \end{figure} Given $N_{total}$ galaxies brighter than an absolute magnitude limit $M_{V,lim}$ inside a clustercentric radius $R_{max}$ of which $N_{et}$ are early-type galaxies, we actually calculate the early-type galaxy fraction by finding the median of the binomial probability distribution \begin{equation} p(x) dx = {{N_{total}!}\over{N_{et}!(N_{total}-N_{et})!}} x^{N_{et}}(1-x)^{N_{total}-N_{et}} \label{binom_dist} \end{equation} \noindent and we integrate Equation~\ref{binom_dist} to calculate the lower and upper bounds of the corresponding 68$\%$ confidence interval. In the limit of large $N_{total}$ and $N_{et}$ (not always true for the current cluster sample), this converges to the same symmetric error bars as would be obtained from the propagation of Gaussian errors. \subsection{HST-Based Fractions}\label{efrac-HST} For each EDisCS cluster with HST/ACS imaging, we have computed the fraction of early-type galaxies using our quantitative HST/ACS morphologies ($B/T \geq 0.35$ and $S2 \leq 0.075$). We used only spectroscopically-confirmed members brighter than an absolute $V$-band magnitude $M_{V,lim}$. We varied $M_{V,lim}$ as a function of redshift from $-$20.5 at $z$ = 0.8 to $-$20.1 at $z$ = 0.4 to account for passive evolution. This choice of $M_{V,lim}$ was made to be fully consistent with previous work \citep{poggianti06} although it may not be strictly the best choice for late-type galaxy populations. Our results did not appear to be sensitive to variations in $M_{V,lim}$ at the level of a few tenths of a magnitude. 
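The estimator in Equation~\ref{binom_dist} can be sketched numerically. This is a minimal pure-Python version using grid integration of the posterior in $x$ (rather than, say, an analytic Beta-distribution quantile); the example counts (23 early types out of 45) reproduce the $0.51\pm0.07$ entry for 1216.8-1201 in Table~\ref{HST-efracs-table}:

```python
# Numerical sketch of the early-type fraction estimate from Equation
# (binom_dist): the median of the binomial likelihood in x, with the
# 16th and 84th percentiles as the 68% confidence bounds. Pure-Python
# trapezoidal integration on a fine grid; no external dependencies.

def fraction_and_bounds(n_et, n_total, n_grid=20001):
    # Unnormalized posterior p(x) proportional to x^n_et (1-x)^(n_total - n_et)
    xs = [i / (n_grid - 1) for i in range(n_grid)]
    p = [x**n_et * (1.0 - x)**(n_total - n_et) for x in xs]
    # Cumulative distribution via the trapezoid rule, then normalize
    cdf = [0.0]
    for i in range(1, n_grid):
        cdf.append(cdf[-1] + 0.5 * (p[i] + p[i - 1]) * (xs[i] - xs[i - 1]))
    total = cdf[-1]
    cdf = [c / total for c in cdf]

    def quantile(q):
        # Smallest grid point where the CDF reaches q
        return next(x for x, c in zip(xs, cdf) if c >= q)

    return quantile(0.5), quantile(0.16), quantile(0.84)

median, lower, upper = fraction_and_bounds(23, 45)  # e.g. 23 early types of 45
print(f"{median:.2f} (+{upper - median:.2f} / -{median - lower:.2f})")
```

For counts this large the interval is nearly symmetric, as the text notes; for the small-$N$ clusters in Table~\ref{HST-efracs-table} the asymmetry of the binomial interval matters.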
Following \citet{poggianti06}, our early-type galaxy fractions were also computed by weighting each galaxy according to the incompleteness of the spectroscopic catalog. This incompleteness depends on both galaxy magnitude and clustercentric position. Incompleteness as a function of magnitude was computed by dividing the number of galaxies in the spectroscopic catalog in a given magnitude bin by the number of galaxies in the parent photometric catalog in the same bin. We used 0.5 mag bins here. Incompleteness due to geometrical effects comes from the finite number of slitlets per sky area and the increasing surface density of galaxies on the sky closer to the cluster centers. Geometric incompleteness is field dependent as it depends on cluster richness, and we thus computed this incompleteness on a field-by-field basis. We also used four radial bins out to $R_{200}$ with a bin width of 0.25$R_{200}$. The raw and incompleteness-corrected HST-based early-type galaxy fractions are given in Table~\ref{HST-efracs-table} for a maximum clustercentric radius $R_{et}$ of 0.6$R_{200}$ (columns 4 and 5) and $R_{200}$ (columns 10 and 11). Most of the corrected fractions do not significantly differ from the raw ones because our spectroscopic sample is essentially complete down to $I \leq 23$ ($M_{V} \sim $ $-$20 at $z = 0.8$), and we used multiple masks on dense clusters to improve the spatial sampling of our spectroscopic sample. For comparison, Table~\ref{HST-efracs-table} also gives early-type galaxy fractions measured from visual classifications by \citet{desai07} (Columns 6 and 7). They should be compared with values in Column 5 because cluster galaxy samples selected using photometric redshifts are {\it de facto} free from the magnitude and geometric incompleteness of our spectroscopic sample. 
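The magnitude part of the completeness correction described above amounts to weighting each spectroscopic galaxy by the inverse of the completeness in its 0.5~mag bin, i.e., by the ratio of photometric-catalog to spectroscopic-catalog counts. A minimal Python sketch, with entirely hypothetical catalogs standing in for the EDisCS ones (the geometric, radially binned weights would be built the same way):

```python
# Illustrative sketch of the magnitude-incompleteness weights: in each
# 0.5 mag bin, the weight is N_phot / N_spec, the inverse of the
# spectroscopic completeness. The catalogs below are hypothetical
# lists of I-band magnitudes, not EDisCS data.

def magnitude_weights(spec_mags, phot_mags, bin_width=0.5):
    """Return one weight per spectroscopic galaxy: N_phot / N_spec in its bin."""
    def bin_index(m):
        return int(m // bin_width)
    spec_counts, phot_counts = {}, {}
    for m in spec_mags:
        spec_counts[bin_index(m)] = spec_counts.get(bin_index(m), 0) + 1
    for m in phot_mags:
        phot_counts[bin_index(m)] = phot_counts.get(bin_index(m), 0) + 1
    return [phot_counts.get(bin_index(m), 0) / spec_counts[bin_index(m)]
            for m in spec_mags]

spec = [20.1, 20.3, 21.6]                     # galaxies with spectra
phot = [20.1, 20.2, 20.3, 20.4, 21.6, 21.7]   # parent photometric catalog
print(magnitude_weights(spec, phot))          # each galaxy gets weight 2.0
```

Weighted early-type fractions then follow by summing these weights over early types and over all members instead of raw counts.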
Another important caveat is that they were computed using two different ways to isolate cluster members (photometric redshift and statistical background subtraction), and they are thus not restricted to spectroscopically-confirmed members. Nonetheless, the agreement between fractions measured from visual and quantitative classifications is remarkably good. The largest disagreement is for 1138.2-1133, but even this case can be considered marginal as it is not quite 2$\sigma$. \begin{table*} \caption[]{Early-Type Galaxy Fractions Based on HST/ACS Imaging} \begin{center} \begin{tabular*}{16cm}{ccccccccccc} \hline ID & \multicolumn{6}{c}{$R_{et} \leq 0.6 R_{200}$} & \multicolumn{4}{c}{$R_{et} \leq R_{200}$}\\ & $N_{clus}$ & $N_{et}$ & $f_{et,raw}$ & $f_{et,corr}$ & $f_{E/S0,phz}^{\rm a}$ & $f_{E/S0,bkg}^{\rm b}$ & $N_{clus}$ & $N_{et}$ & $f_{et,raw}$ & $f_{et,corr}$\\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11)\\ \hline 1216.8-1201 & 45 & 23 & 0.51$\pm$0.07 & 0.55$\pm$0.07 & 0.47$\pm$0.06 & 0.54$\pm$0.06 & 57 & 25 & 0.44$\pm$0.06 & 0.42$\pm$0.06 \\ 1040.7-1156 & 9 & 4 & 0.45$\pm$0.15 & 0.45$\pm$0.15 & 0.53$\pm$0.17 & 0.37$\pm$0.17 & 13 & 5 & 0.40$\pm$0.12 & 0.33$\pm$0.12\\ 1054.4-1146 & 18 & 9 & 0.50$\pm$0.11 & 0.29$\pm$0.10 & 0.28$\pm$0.09 & 0.24$\pm$0.09 & 26 & 11 & 0.43$\pm$0.10 & 0.28$\pm$0.09\\ 1054.7-1245 & 11 & 8 & 0.70$\pm$0.13 & 0.46$\pm$0.14 & 0.44$\pm$0.16 & 0.57$\pm$0.13 & 19 & 10 & 0.52$\pm$0.11 & 0.38$\pm$0.11\\ 1232.5-1250 & 48 & 28 & 0.58$\pm$0.07 & 0.58$\pm$0.07 & 0.60$\pm$0.06 & 0.45$\pm$0.06 & 51 & 28 & 0.55$\pm$0.07 & 0.47$\pm$0.07\\ 1037.9-1243 & 8 & 1 & 0.18$\pm$0.12 & 0.18$\pm$0.12 & 0.13$\pm$0.09 & 0.08$\pm$0.07 & 8 & 1 & 0.18$\pm$0.12 & 0.18$\pm$0.12\\ 1103.7-1245b & 2 & 0 & 0.21$\pm$0.20 & 0.21$\pm$0.20 & 0.47$\pm$0.20 & 0.21$\pm$0.20 & 4 & 0 & 0.13$\pm$0.13 & 0.13$\pm$0.13 \\ 1354.2-1231 & 8 & 5 & 0.61$\pm$0.15 & 0.39$\pm$0.15 & 0.35$\pm$0.14 & 0.44$\pm$0.19 & 12 & 5 & 0.42$\pm$0.13 & 0.28$\pm$0.11\\ 1138.2-1133 & 22 & 4 
& 0.20$\pm$0.08 & 0.20$\pm$0.08 & 0.37$\pm$0.09 & 0.50$\pm$0.14 & 24 & 6 & 0.26$\pm$0.08 & 0.34$\pm$0.09\\ \hline \end{tabular*} \label{HST-efracs-table} $^{\rm a}$ From Table 14 of \citet{desai07}\\ $^{\rm b}$ From Table 16 of \citet{desai07} \end{center} \end{table*} \subsection{VLT- versus HST-based Fractions}\label{efrac-VLT} Quantitative morphologies measured from HST images are more robust than those measured from ground-based images (Section~\ref{reliability} and \citet{simard02}). Figure~\ref{hst-vlt_morph} shows a direct galaxy-by-galaxy comparison between bulge fraction and image smoothness measurements from HST/ACS and VLT/FORS2 images. This comparison includes spectroscopically-confirmed member galaxies from all clusters with HST imaging that are brighter than $M_{V,lim}$ and within a clustercentric radius of 0.6$R_{200}$ to take into account the effect of crowding. For a given galaxy, the agreement between the two sets of measurements will obviously depend on its apparent luminosity and size. The overall agreement is reasonably good. The scatter in the bulge fraction plot is consistent with $\sigma_{B/T, ACS}$ $\sim$ 0.1 \citep{simard02} and $\sigma_{B/T, VLT} \sim$ 0.25 (Figure~\ref{emap-bt}) added in quadrature, but the fact that completely independent segmentation images were used for the HST and VLT morphological measurements also contributes significantly to this scatter. Indeed, this scatter would be smaller if only uncrowded galaxies (as indicated by the SExtractor photometry flag) on the VLT images had been plotted here. For the image smoothness plot, there is a correlation between $S2_{FORS2}$ and $S2_{ACS}$, but it is not one-to-one. $S2_{ACS}$ values increase faster than $S2_{FORS2}$. This is expected as PSF blurring will be more significant on the ground-based images, and $S2$ measurements are not corrected for PSF effects. Part of the scatter is again due to the use of independent segmentation images. 
\begin{figure*} \resizebox{\hsize}{!}{\includegraphics[angle=270]{fig4a.ps},\includegraphics[angle=270]{fig4b.ps}} \caption{Direct galaxy-by-galaxy comparison between bulge fraction (left-hand panel) and image smoothness (right-hand panel) measurements from HST/ACS and VLT/FORS2 images. Filled circles are galaxies classified as early-type on both ACS and VLT images, asterisks are galaxies classified as early-type only on the VLT images, pluses are galaxies classified as early-type only on the ACS images, and open circles are galaxies not classified as early-type on either ACS or VLT images. The dashed lines show the cuts used for the definition of an early-type galaxy as discussed in Sections~\ref{efrac-defn} and~\ref{efrac-VLT}.} \label{hst-vlt_morph} \end{figure*} The inclusion of clusters with only VLT/FORS2 imaging allows us to extend our analysis to nine additional clusters - an important consideration given that we seek to probe cluster-to-cluster variations in morphological content. We therefore need to show that we measure consistent early-type fractions for clusters with overlapping ACS and FORS2 images. The problem boils down to finding the set of limits on $B/T_{FORS2}$ and $S2_{FORS2}$ that yield FORS2 early-type fractions in agreement with the ACS fractions obtained with $B/T_{ACS} \geq 0.35$ and $S2_{ACS} \leq 0.075$ when the same galaxies are used for both FORS2 and ACS. For each cluster, we used all spectroscopically-confirmed cluster members brighter than $M_{V,lim}$ and within a clustercentric radius of $R_{200}$. No corrections for incompleteness were applied here as these corrections would be identical for both cases. We went through many manual iterations until we found satisfactory limits on $B/T_{FORS2}$ and $S2_{FORS2}$. We found FORS2 fractions to be in very good agreement with the ACS ones for $B/T_{FORS2} \geq 0.40$ and $S2_{FORS2} \leq 0.05$ (Figure~\ref{hst-vlt_efrac}).
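These instrument-dependent cuts are compact enough to state as code (a trivial sketch; the function name and dictionary layout are illustrative only):

```python
# Early-type cuts from the text; the FORS2 thresholds are more stringent
# than the ACS ones because galaxies appear smoother and slightly more
# bulge-dominated at the lower ground-based resolution.
EARLY_TYPE_CUTS = {"ACS": (0.35, 0.075), "FORS2": (0.40, 0.05)}

def is_early_type(bt, s2, camera):
    """True if a galaxy passes the bulge-fraction and image-smoothness cuts."""
    bt_min, s2_max = EARLY_TYPE_CUTS[camera]
    return bt >= bt_min and s2 <= s2_max
```

Note that a galaxy with, say, $S2 = 0.06$ counts as early-type on ACS but not on FORS2, which is precisely the resolution-dependent behaviour the calibration above compensates for.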
This agreement is especially good if one considers the fact that we performed our FORS2 and ACS bulge+disk decompositions completely independently from one another, i.e., we did not attempt to use the same SExtractor segmentation map for both FORS2 and HST images. The limit on $B/T_{FORS2}$ is slightly higher than the one on $B/T_{ACS}$ because lower spatial resolution typically leads to a small overestimate of the bulge fraction. Similarly, the limit on $S2_{FORS2}$ needs to be more stringent than on $S2_{ACS}$ to select the same galaxies as they will look smoother on the FORS2 images due to lower resolution. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=270]{fig5.ps}} \caption{Comparison between early-type galaxy fractions for clusters with overlapping VLT and HST imaging. VLT/FORS2 and HST/ACS early-type galaxy fractions were computed using galaxies with $B/T_{FORS2} \geq 0.40$ and $S2_{FORS2} \leq 0.05$ and $B/T_{ACS} \geq 0.35$ and $S2_{ACS} \leq 0.075$ respectively. The ACS and FORS2 $f_{et}$ values plotted here are listed in column 4 of Table~\ref{HST-efracs-table} and column 4 of Table~\ref{VLT-efracs-table}. The dashed line is the one-to-one line.} \label{hst-vlt_efrac} \end{figure} Following the procedure described in Section~\ref{efrac-HST}, we computed early-type galaxy fractions for all eighteen clusters using galaxies on our FORS2 images with $B/T_{FORS2} \geq 0.40$ and $S2_{FORS2} \leq 0.05$. The results are shown in Table~\ref{VLT-efracs-table}. The same incompleteness corrections as in Section~\ref{efrac-HST} were applied here as well. The errors on the early-type galaxy fractions in the table do not include errors on $R_{200}$ due to correlated errors on cluster $\sigma$. We hereafter use our VLT/FORS2 early-type fractions for all EDisCS clusters for the sake of uniformity.
\begin{table*} \caption[]{Early-Type Galaxy Fractions Based on VLT/FORS2 Imaging} \begin{center} \begin{tabular*}{12cm}{cccccccc} \hline ID & $M_{V,lim}$ & \multicolumn{3}{c}{$R_{et} \leq 0.6 R_{200}$} & \multicolumn{3}{c}{$R_{et} \leq R_{200}$}\\ & & $N$$^{\rm a}$ & $f_{et,raw}$ & $f_{et,corr}$ & $N$$^{\rm a}$ & $f_{et,raw}$ & $f_{et,corr}$ \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline 1018.8-1211 & $-$20.2 & 18 & 0.60$\pm$0.11 & 0.55$\pm$0.11 & 20 & 0.59$\pm$0.11 & 0.55$\pm$0.11 \\ 1059.1-1253 & $-$20.2 & 20 & 0.64$\pm$0.10 & 0.59$\pm$0.10 & 28 & 0.64$\pm$0.09 & 0.60$\pm$0.09 \\ 1119.3-1130 & $-$20.3 & 6 & 0.64$\pm$0.17 & 0.77$\pm$0.15 & 9 & 0.64$\pm$0.14 & 0.74$\pm$0.15 \\ 1202.7-1224 & $-$20.1 & 11 & 0.54$\pm$0.14 & 0.54$\pm$0.14 & 13 & 0.60$\pm$0.13 & 0.67$\pm$0.13 \\ 1232.5-1250 & $-$20.2 & 48 & 0.48$\pm$0.07 & 0.60$\pm$0.07 & 51 & 0.49$\pm$0.07 & 0.62$\pm$0.07 \\ 1301.7-1139 & $-$20.2 & 17 & 0.53$\pm$0.11 & 0.53$\pm$0.11 & 28 & 0.43$\pm$0.09 & 0.40$\pm$0.09 \\ 1353.0-1137 & $-$20.3 & 9 & 0.55$\pm$0.15 & 0.55$\pm$0.15 & 17 & 0.42$\pm$0.11 & 0.36$\pm$0.11 \\ 1411.1-1148 & $-$20.2 & 15 & 0.47$\pm$0.12 & 0.53$\pm$0.12 & 16 & 0.44$\pm$0.12 & 0.44$\pm$0.11 \\ 1420.3-1236 & $-$20.2 & 4 & 0.50$\pm$0.20 & 0.50$\pm$0.20 & 7 & 0.56$\pm$0.17 & 0.44$\pm$0.17\\ 1037.9-1243 & $-$20.3 & 8 & 0.18$\pm$0.12 & 0.18$\pm$0.12 & 8 & 0.18$\pm$0.12 & 0.18$\pm$0.12 \\ 1040.7-1156 & $-$20.4 & 9 & 0.36$\pm$0.14 & 0.36$\pm$0.14 & 13 & 0.46$\pm$0.13 & 0.46$\pm$0.13 \\ 1054.4-1146 & $-$20.4 & 18 & 0.40$\pm$0.11 & 0.34$\pm$0.11 & 26 & 0.39$\pm$0.09 & 0.35$\pm$0.09 \\ 1054.7-1245 & $-$20.5 & 11 & 0.62$\pm$0.13 & 0.38$\pm$0.13 & 19 & 0.52$\pm$0.11 & 0.43$\pm$0.11 \\ 1103.7-1245b & $-$20.4 & 2 & 0.21$\pm$0.20 & 0.21$\pm$0.20 & 3 & 0.39$\pm$0.22 & 0.39$\pm$0.22 \\ 1138.2-1133 & $-$20.2 & 22 & 0.50$\pm$0.10 & 0.46$\pm$0.10 & 24 & 0.54$\pm$0.10 & 0.54$\pm$0.10 \\ 1216.8-1201 & $-$20.5 & 45 & 0.47$\pm$0.07 & 0.53$\pm$0.07 & 57 & 0.46$\pm$0.06 & 0.46$\pm$0.06 \\ 1227.9-1138 & $-$20.3 
& 9 & 0.26$\pm$0.13 & 0.16$\pm$0.11 & 11 & 0.30$\pm$0.12 & 0.22$\pm$0.11 \\ 1354.2-1231 & $-$20.5 & 8 & 0.50$\pm$0.16 & 0.39$\pm$0.15 & 12 & 0.35$\pm$0.12 & 0.28$\pm$0.11 \\ \hline \end{tabular*} \label{VLT-efracs-table} $^{\rm a}$ Number of cluster members brighter than $M_{V,lim}$ inside $R_{et}$ \end{center} \end{table*} \subsection{Local Clusters}\label{efrac_sdss} The Sloan Digital Sky Survey \citep[SDSS;][]{abazajian09} offers by far the best, ``local'' ($z < 0.1$) baseline for a comparison of early-type galaxy fractions between local and high-redshift clusters. Clusters similar in mass to EDisCS clusters can be selected from spectroscopic SDSS data, {\it and} galaxy morphologies can be measured using GIM2D from SDSS images. We therefore used SDSS-selected clusters here to construct a local baseline as nearly free of systematics as currently possible given the available data. We use the sample of SDSS clusters defined in \citet{linden07}. The basis of this cluster sample is the C4 cluster catalogue \citep{miller05}, and we briefly recapitulate here how the von der Linden et al. sample was selected. Their primary aim was to find the galaxy closest to the deepest point of the potential well of a cluster. In order to ensure that the clusters would span a large angular extent compared to the minimum distance of 55 arcsec between fibers, the sample was restricted to redshifts $z \leq 0.1$. This first cut resulted in an initial sample of 833 clusters. A combination of clustercentric distance, galaxy concentration and colour cuts was used to identify brightest cluster galaxies (BCGs) for these clusters. For cases where the same BCG was identified for more than one cluster, only the cluster with the density peak was retained, and the others were deemed to be substructures. This cut rejected 101 clusters. Refined velocity dispersions and virial radii were then computed through an iterative process of velocity cuts.
This process failed for 55 clusters, and these were also rejected. All remaining clusters were then visually inspected. An additional 35 clusters were rejected at this point as being in the infall regions of other clusters, and another 17 clusters were discarded because they had fewer than three galaxies within 3$\sigma$ of the cluster redshift and 1$R_{200}$ of its center. This brought the total of SDSS clusters down to 625. Following \citet{poggianti06}, we applied a final redshift cut to keep clusters in the range 0.04 $< z < $ 0.085. The lower limit reduces fiber aperture effects, and the upper limit minimizes incompleteness in galaxy absolute magnitude. Our final SDSS comparison sample thus has 439 clusters. Given that we are interested in probing galaxy properties as a function of environment, it is important to ensure that the SDSS and EDisCS samples both cover the same range of environments. We therefore selected a subsample of SDSS clusters with a velocity dispersion distribution matching the EDisCS distribution. This match was done by adding SDSS clusters to the subsample one at a time and keeping only those that maintained the EDisCS-SDSS two-sample Kolmogorov-Smirnov probability above 50$\%$. This is the probability associated with the maximum difference between the normalized cumulative distributions of the EDisCS and SDSS samples: it means that even if the two samples were drawn at random from the same underlying distribution, they would differ by more than the two observed samples do more than half the time. This probability threshold thus yields an SDSS subsample that is very well-matched to the EDisCS clusters. The resulting subsample (referred to as ``SDSS-C4'' hereafter) includes 158 clusters, and these clusters are listed in Table~\ref{sdss-cls-list}.
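A self-contained sketch of this matching step is given below, using the standard asymptotic two-sample KS probability (the Numerical Recipes series form with small-sample correction); sample values, function names and the unconditional seeding of the first few clusters are illustrative assumptions, not the exact procedure used:

```python
import math

def ks_prob(sample1, sample2):
    """Two-sample Kolmogorov-Smirnov probability from the asymptotic series
    Q(lam) = 2 * sum_k (-1)**(k-1) * exp(-2 k^2 lam^2), with the
    small-sample correction lam = (sqrt(ne) + 0.12 + 0.11/sqrt(ne)) * D."""
    a, b = sorted(sample1), sorted(sample2)
    n1, n2 = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < n1 and j < n2:
        if a[i] < b[j]:
            i += 1
        elif b[j] < a[i]:
            j += 1
        else:            # tie: advance both samples together
            i += 1
            j += 1
        d = max(d, abs(i / n1 - j / n2))
    ne = n1 * n2 / (n1 + n2)
    lam = (math.sqrt(ne) + 0.12 + 0.11 / math.sqrt(ne)) * d
    if lam < 0.05:       # series converges too slowly here; Q ~ 1
        return 1.0
    q = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * lam * lam)
                  for k in range(1, 101))
    return max(0.0, min(1.0, q))

def match_by_dispersion(ediscs_sigmas, sdss_sigmas, threshold=0.5, seed=5):
    """Greedy matching: add SDSS clusters one at a time and keep only those
    that keep the KS probability above the threshold (the first few are
    accepted unconditionally, before the asymptotic test is meaningful)."""
    kept = []
    for sigma in sdss_sigmas:
        trial = kept + [sigma]
        if len(trial) <= seed or ks_prob(ediscs_sigmas, trial) > threshold:
            kept.append(sigma)
    return kept
```

Because the test is greedy, the accepted subsample depends on the order in which candidates are offered; a probability threshold of 0.5 is deliberately conservative, rejecting any addition that makes the two cumulative distributions more discrepant than chance alone would produce half the time.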
We ran GIM2D on SDSS Data Release Seven \citep[DR7;][]{abazajian09} $u$-, $g$-, $r$- and $i$-band images of objects in the magnitude range $14 \leq r_{petrosian,corr} \leq 17.77$ with a galaxy spectrum (i.e., with field SpecClass = 2 in database table SpecPhoto). Bulge+disk decompositions were successfully obtained for 674,693 galaxies (Simard, in preparation). GIM2D morphologies for galaxies in our matched SDSS-C4 clusters were extracted from this large morphological database to compute early-type fractions. There are two sources of incompleteness that must be taken into account here. The first one is incompleteness versus magnitude. We denote this spectroscopic completeness function as $C_{mag}(m)$ here, and we compute it around each cluster position by taking the ratio of the number of galaxies in the spectroscopic SDSS catalog (database table SpecPhoto) to the number of galaxies in the photometric SDSS catalog (database table PhotoPrimary) as a function of Petrosian $r$ magnitude. Galaxies around a given position on the sky were extracted from the database using the SDSS ``fGetNearbyObjEq'' function. The second source of incompleteness comes from the spatial sampling of the SDSS fibers on the sky. Fibers cannot be placed closer than 55\arcsec~from one another. This means that regions with a higher surface density of targets could not be sampled as completely as regions in the global field. The net result for SDSS clusters is a decrease in spectroscopic sampling as a function of decreasing clustercentric distance $R$. We can map the spectroscopic completeness versus $R$ by computing the ratio of galaxies in the spectroscopic and photometric SDSS catalogs as a function of $R$. We denote this geometrical completeness function as $C_{geom}(R)$ here. Ideally, $C_{geom}(R)$ should be computed for each cluster because it will depend on cluster richness and apparent size (and thus indirectly on redshift). 
However, in practice, there are not enough galaxies in a single cluster to yield $C_{geom}(R)$ with acceptable error bars. So, we opted for averaging clusters with similar redshifts and velocity dispersions to compute $C_{geom}(R)$. We divided the cluster list of Table~\ref{sdss-cls-list} into three cluster groups: (1) $z < 0.06$, (2) $z > 0.06, \sigma < 800$ km/s, and (3) $z > 0.06, \sigma > 800$ km/s. The weight $W_{spec}(m,R)$ in the spectroscopic catalog of a galaxy with an $r'$-band magnitude $m$ at a clustercentric radius $R$ is thus given by the product ${1\over{C_{mag}(m)}} {1\over{C_{geom}(R)}}$, and the completeness-weighted early-type fraction of an SDSS cluster is then simply: \begin{eqnarray} f_{et}\biggl(\substack{M_{V}\leq-19.8, \\B/T\geq0.35, \\S2\leq0.075}\biggr) = {\displaystyle \sum_{\substack{i \in [M_{V}\leq-19.8,\\R\leq R_{et},\\B/T\geq0.35,\\S2\leq0.075]}} W_{spec}(m_i,R_i) \over{\displaystyle\sum_{\substack{i \in [M_{V}\leq-19.8, \\ R\leq R_{et}]}} W_{spec}(m_i,R_i)}} \label{sdss-et} \end{eqnarray} In terms of spatial resolution, the ACS, SDSS and FORS2 images have sampling of 0.68 kpc/FWHM at $z$ = 0.8 (0\farcs09 FWHM in $i$), 1.87 kpc/FWHM at $z$ = 0.07 (1\farcs4 FWHM in $g$) and 4.5 kpc/FWHM at $z$ = 0.8 (0\farcs6 FWHM in $I$) respectively. Even though the sampling of the ACS and FORS2 images differs by a factor of seven, their limits on $B/T$ and $S2$ for the computation of consistent early-type galaxy fractions were quite similar. This is an indication of the robustness of our measured structural parameters over this range of spatial resolutions. For the sake of simplicity, we therefore adopt the ACS limits ($(B/T)_{SDSS,g} \geq 0.35$ and $S2_{SDSS,g} \leq 0.075$) for our SDSS early-type galaxy fractions rather than use yet another set of limits. We can further test these limits on the catalogue of visually classified galaxies from the SDSS North Equatorial Region of \citet{fukugita07}.
This catalogue contains Hubble T-type visual classifications for 2253 galaxies down to a magnitude limit of $r$ = 16. If we apply our limits on $(B/T)_{SDSS,g}$ and $S2_{SDSS,g}$ to galaxies in this catalogue, then we find that the coefficients of the SDSS-to-visual equivalent of Equation~\ref{fet-gim2d} would be 0.88, 0.68, 0.14, and 0.014 respectively. Early-type SDSS galaxies are therefore quantitatively selected with an ``efficiency'' comparable to our selection from the ACS images. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=270]{fig6.ps}} \caption{Comparison between fractions of [OII] emitters computed using emission-line measurements from \citet{brinchmann04} and the DR7 release. Filled and open circles are clusters with $\sigma \geq$ 600 km/s and $\sigma <$ 600 km/s respectively.} \label{fOII-brinch_vs_sdss} \end{figure} The raw fractions of [OII] emitters for the 158 SDSS-C4 clusters were calculated by directly querying the SDSS database table SpecLine for the [OII]3727 and [OII]3730 equivalent widths for each confirmed cluster member, adding them together and correcting them to rest-frame by dividing by (1+$z$). The corrected [OII] fractions were then computed following exactly the same calculations (and using the same weights, the same luminosity and clustercentric radius cuts of $M_V \leq - 19.8$ and $R \leq 0.6 R_{200}$) as for the early-type fractions except that the early-type selection criteria on bulge fraction and image smoothness were simply replaced by the \citet{poggianti06} cut of EW([OII]) $\leq -3$~\AA. In order to evaluate the impact of equivalent-width errors on our determination of the fractions of [OII] emitters, we also computed [OII] fractions using equivalent widths from \citet{brinchmann04}. The two resulting sets of [OII] fractions are plotted against one another in Figure~\ref{fOII-brinch_vs_sdss}. The agreement between the two sets is excellent, and we conclude that our [OII] fractions are robust.
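The [OII] selection just described can be sketched in a few lines (a simplified illustration with a hypothetical input layout; emission corresponds to negative equivalent widths in this convention):

```python
def oii_fraction(members):
    """Completeness-weighted fraction of [OII] emitters, mirroring the
    weighted early-type fraction with the morphology cuts replaced by
    EW([OII]) <= -3 Angstrom.
    members: (ew_3727, ew_3730, z, weight) tuples -- a hypothetical layout."""
    num = den = 0.0
    for ew1, ew2, z, w in members:
        # Sum the doublet and correct the observed EW to the rest frame.
        ew_rest = (ew1 + ew2) / (1.0 + z)
        den += w
        if ew_rest <= -3.0:  # emission strong enough to count as an emitter
            num += w
    return num / den
```

The numerator and denominator use the same weights as Equation~\ref{sdss-et}, so the [OII] and early-type fractions are directly comparable cluster by cluster.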
Table~\ref{sdss-cls-list} gives corrected early-type galaxy fractions and fractions of [OII] emitters computed for $R \leq 0.6R_{200}$ for the 158 SDSS clusters in our local comparison sample. We included only galaxies brighter than $M_{V,lim} = -19.8$ to avoid incompleteness in the SDSS spectroscopic sample. This cutoff magnitude corresponds to the absolute magnitude limits we used for our distant EDisCS clusters once passive evolution is taken into account (see Section~\ref{efrac-HST}). \addtocounter{table}{1} \subsection{Theoretical Models}\label{theo-models} Numerical simulations of dark matter haloes populated with galaxies using semi-analytical models greatly help in the interpretation of observational results. We use here the Millennium Simulation \citep[MS;][]{springel05}, and the semi-analytical code described in \citet{delucia07a}\footnote{Simulated galaxy catalogs used here are publicly available at http://www.mpa-garching.mpg.de/millennium/}. The MS followed 2160$^3$ particles of mass 8.6$\times$10$^8 h^{-1}$M$_\odot$ within a comoving box 500 $h^{-1}$ Mpc on a side with a spatial resolution of 5$h^{-1}$ kpc. Early-type galaxy fractions were computed from these simulated galaxy catalogs using the following procedure. Haloes were randomly selected at three different redshifts ($z$ = 0, 0.41, 0.62) so that they were uniformly distributed in log($M_{200}$). The final halo sample comprised 100 haloes at $z$ = 0, 94 haloes at $z$ = 0.41 and 92 haloes at $z$ = 0.62. For each of these haloes, all galaxies in a cubic box 6 Mpc on a side around the central galaxy were selected, and a morphological type was assigned to each model galaxy by computing the quantity $\Delta M$ = $M_{bulge} - M_{total}$ (in the rest-frame $B$-band). Galaxies with $\Delta M <$ 1.0 were considered to be ``early-type''. This is the same criterion as selecting real galaxies with $B/T_{FORS2} \geq 0.40$.
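The correspondence between the model cut and the observational one follows from $B/T = 10^{-0.4\,\Delta M}$, so $\Delta M < 1.0$ is equivalent to $B/T > 10^{-0.4} \simeq 0.40$; a short check (function names are illustrative):

```python
def bulge_fraction(delta_m):
    """B/T implied by the bulge-to-total magnitude difference
    delta_m = M_bulge - M_total: B/T = 10**(-0.4 * delta_m)."""
    return 10.0 ** (-0.4 * delta_m)

def model_is_early(m_bulge, m_total, dm_cut=1.0):
    """Early-type cut applied to simulated galaxies: delta_m < 1.0,
    i.e. B/T > 10**(-0.4) ~ 0.40."""
    return (m_bulge - m_total) < dm_cut
```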
It is important here to note that an early-type galaxy in the simulations was defined solely based on this cut in bulge fraction because the simulations do not have the resolution required to model internal fine structures such as asymmetries. Given that real, early-type galaxies were also selected according to image smoothness, one might find the early-type fractions of real clusters to be systematically lower. For each halo, the fraction of early-type galaxies within 0.6$R_{200}$ from the BCG was computed using three different projections. Furthermore, only galaxies that were within 2 Mpc from the BCG along the line of sight were included. The fractions were computed using only galaxies brighter than $-$20.5, $-$20.1, and $-$19.8 in the rest-frame $V$-band at redshifts 0.6, 0.4, and 0.0 respectively to match the limits used for the SDSS and EDisCS early-type galaxy fractions. A galaxy in the simulation was deemed to be star-forming if its star-formation rate in the last timestep of its evolution was nonzero. Figure~\ref{theo-efraction-plot} shows the resulting model early-type fractions as a function of cluster velocity dispersion, redshift and fraction of star-forming galaxies for the MS haloes. At a given redshift, there is no dependence of the early-type fraction on cluster velocity dispersion, but the scatter increases symmetrically towards both lower and higher fractions, leading to a ``wedge-like'' distribution at lower cluster $\sigma$'s. The early-type fractions of both low and high-mass clusters increase with decreasing redshift from $\sim$0.70 at $z = 0.65$ to $\sim$0.85 at $z$ = 0. The early-type fractions of massive clusters are anticorrelated with the fractions of star-forming galaxies: clusters at $z = 0$ have higher early-type fractions but lower fractions of star-forming galaxies.
Note that the trends in Figure~\ref{theo-efraction-plot} do not agree with those shown in \citet{diaferio01} although the assumptions made about morphological transformations are very similar in the two models. In particular, the MS shows little trend of early-type fraction with cluster velocity dispersion but a substantial trend with redshift, while Diaferio et al. found the opposite. This is likely a result of the poorer mass resolution, poorer statistics and cruder dynamical modelling of the earlier paper. \begin{figure*} \resizebox{\hsize}{!}{\includegraphics[angle=90]{fig7a.ps},\includegraphics[angle=90]{fig7b.ps}} \resizebox{\hsize}{!}{\includegraphics[angle=90]{fig7c.ps},\includegraphics[angle=90]{fig7d.ps}} \caption{Early-type galaxies in Millennium Simulation dark matter haloes. {\it Top, left-hand panel:}~Early-type galaxy fraction within $0.6 R_{200}$ versus cluster velocity dispersion at three different redshifts. {\it Top, right-hand panel:} Early-type galaxy fraction within $0.6 R_{200}$ versus age of the universe. Blue and red points are clusters with velocity dispersions below and above 600 km/s respectively. {\it Lower, left-hand panel:}~ Early-type galaxy fraction within $0.6 R_{200}$ versus fraction of star-forming galaxies in clusters with $\sigma <$ 600 km/s. Blue points show haloes selected at redshift zero, and all the other haloes are in red. {\it Lower, right-hand panel:}~ Early-type galaxy fraction within $0.6 R_{200}$ versus fraction of star-forming galaxies in clusters with $\sigma \geq$ 600 km/s. } \label{theo-efraction-plot} \end{figure*} \section{Results}\label{results} We use here our VLT/FORS2 early-type fractions for all EDisCS clusters for the sake of uniformity. \subsection{Early-Type Galaxy Fractions versus Cluster Velocity Dispersion and Redshift}\label{efrac-sigma} Figure~\ref{vlt+sdss_efrac_sigma} shows early-type galaxy fractions versus velocity dispersion for the SDSS and EDisCS clusters.
The early-type galaxy fractions of both cluster samples exhibit no clear trend as a function of $\sigma$. Table~\ref{spearman-tests} gives Spearman rank test results for the SDSS sample and different EDisCS subsamples. The only significant correlation between early-type fraction and velocity dispersion is found in the high-$z$ EDisCS clusters. It has only a 2.5$\%$ chance of being due to random sampling. Such a positive correlation was also reported in \citet{desai07} for the same cluster subsample, but it disappears when the full EDisCS sample is considered. The lack of a significant correlation agrees well with the results for the Millennium Simulation in the top left-hand panel of Figure~\ref{theo-efraction-plot} but disagrees with the earlier theoretical results of \citet{diaferio01} which showed a trend between $f_{et}$ and $\sigma$. A visual inspection of Figure~\ref{vlt+sdss_efrac_sigma} confirms the statistical test results. The mid-$z$ EDisCS clusters do not show any correlation with $\sigma$ in contrast to the high-$z$ clusters. In particular, two mid-$z$ EDisCS clusters (CL1119.3-1130 and CL1420.3-1236) with $\sigma \sim$ 200 km/s have early-type fractions ($f_{et} \sim$ 0.5-0.8) similar to or higher than those of the most massive clusters in our sample. Interestingly, the same two clusters were found by \citet{poggianti06} to be the most outstanding outliers in the [OII] fraction - $\sigma$ relation in the sense that they have a low fraction of [OII] emitters for their mass. This is consistent with what we observe here given that early-type galaxies typically have lower [OII] emission fluxes. Figure~\ref{vlt+sdss_efrac_sigma} does show that there is a marked difference in the morphological content of the EDisCS and SDSS clusters. All EDisCS $f_{et}$ values (with the exception of one cluster) are below 0.6, but half of the SDSS clusters are above this value.
The population of early-type galaxies has thus increased significantly in half of the clusters {\it of all velocity dispersions}. An increase in early-type fraction with decreasing redshift may already be visible when one compares mid-$z$ and high-$z$ EDisCS clusters. Mid-$z$ clusters around $\sigma$ = 500 km/s have $f_{et} \simeq$ 0.5 whereas the high-$z$ clusters have $f_{et} \simeq$ 0.4. This would represent a $\sim$25$\%$ increase over a time interval of 2 Gyr. As shown in Figure~\ref{theo-efraction-plot}, the early-type fractions of clusters in the Millennium Simulation also increase with decreasing redshift in clusters of all velocity dispersions, but there is a lack of simulated clusters with $f_{et} < 0.5$ compared with the SDSS-C4 clusters. The scatter in the $f_{et}$ values of simulated clusters is also smaller than in those of real clusters. For simulated clusters at $z = 0$ with $\sigma \geq $ 600 km/s, $\sigma(f_{et})$ = 0.06 compared to $\sigma(f_{et})$ = 0.21 for SDSS clusters over the same range of velocity dispersions. Given that the mean error on the SDSS $f_{et}$ values is 0.12, the intrinsic scatter would be 0.17. This intrinsic scatter is still almost three times the scatter in the simulated clusters. \begin{figure*} \resizebox{\hsize}{!}{\includegraphics[angle=270]{fig8a.ps},\includegraphics[angle=270]{fig8b.ps}} \caption{Early-type galaxy fraction within $0.6 R_{200}$ versus velocity dispersion for SDSS and EDisCS clusters. Both samples have been matched in velocity dispersion. {\it Left panel:} SDSS clusters. Only typical error bars are shown in the lower right-hand corner for clarity. {\it Right panel:} Filled and open circles are mid-$z$ and high-$z$ EDisCS clusters respectively. Error bars shown in both panels are 1$\sigma$ errors.
Our VLT/FORS2 early-type fractions are used here for all EDisCS clusters for the sake of uniformity.} \label{vlt+sdss_efrac_sigma} \end{figure*} \begin{table} \caption[]{Spearman Rank Tests Results for Early-Type Fraction versus Cluster Velocity Dispersion} \begin{center} \begin{tabular}{lrrr} \hline Cluster Sample & $N_{cl}$ & $R_s$ & $p$-value\\ (1) & (2) & (3) & (4)\\ \hline SDSS &158 & $-$0.05 & 0.51\\ EDisCS all & 18 & 0.18 & 0.47\\ EDisCS mid-z & 9 & $-$0.11 & 0.78\\ EDisCS high-z & 9 & 0.73 & 0.025\\ \hline \end{tabular} \label{spearman-tests} \end{center} \end{table} \begin{table} \caption[]{Two-sample Kolmogorov-Smirnov Test Probabilities for Early-Type Fraction versus Cluster Velocity Dispersion} \begin{tabular}{lrrrr} \hline & \multicolumn{4}{c}{Cluster Sample 2}\\ & SDSS & SDSS & EDisCS &EDisCS \\ Cluster Sample 1 & (All) & ($\sigma \geq$ 600) & ($\sigma <$ 600) & ($\sigma \geq$ 600) \\ \hline SDSS ($\sigma <$ 600) & \ldots & 0.628 & 0.190 & \ldots\\ SDSS ($\sigma \geq$ 600) & \ldots & \ldots & \ldots & 0.472\\ EDisCS ($\sigma <$ 600) & \ldots & \ldots & \ldots & 0.250\\ EDisCS (All) & 0.506 & \ldots & \ldots & \ldots \\ \hline \end{tabular} \label{ks-tests-fet} \end{table} Figure~\ref{vlt+sdss_efrac_age} shows SDSS and EDisCS early-type fractions as a function of the age of the universe (i.e., redshift). The clusters have been divided into two subgroups based on their velocity dispersions. The early-type fractions of massive ($\sigma >$ 600 km/s) EDisCS clusters (right panel) are in very good agreement with the ones in the compilation of \citet{dokkum01} which also have velocity dispersions greater than 600 km/s. The clusters at low redshift in the \citet{dokkum01} compilation suggest that there are no local clusters with low early-type fractions and hence that all clusters have uniformly increased their early-type fraction from $z \sim 0.$ to the present day. However, our SDSS cluster sample shows that this simple picture is not entirely true. 
While half of the SDSS clusters have higher early-type fractions than clusters at high redshift, the other half have early-type fractions equal to or even lower than those of the EDisCS clusters. The same holds true for the low-mass clusters (left-hand panel). The scatter in $f_{et}$ ($<$ 0.1) in high-mass EDisCS clusters does appear to be considerably less than the scatter seen in low-mass clusters. \begin{figure*} \resizebox{\hsize}{!}{\includegraphics[angle=270]{fig9a.ps},\includegraphics[angle=270]{fig9b.ps}} \caption{Early-type galaxy fraction versus age of the universe (i.e., redshift) for clusters with $\sigma < 600$ km/s (left panel) and clusters with $\sigma \geq 600$ km/s (right panel). SDSS and EDisCS clusters are blue and red respectively, and both samples have been matched in velocity dispersion. Clusters shown in black are from the compilation of \citet{dokkum01} in which open and solid points have X-ray luminosities below and above 10$^{44.5}$ ergs s$^{-1}$ respectively. Our VLT/FORS2 early-type fractions are used here for all EDisCS clusters for the sake of uniformity.} \label{vlt+sdss_efrac_age} \end{figure*} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics[angle=270]{fig10a.ps},\includegraphics[angle=270]{fig10b.ps}} \caption{Early-type galaxy fraction versus [OII] emitter fraction for clusters with $\sigma < 600$ km/s (left panel) and clusters with $\sigma \geq 600$ km/s (right panel). SDSS and EDisCS clusters are shown in blue and red respectively, and both samples have been matched in velocity dispersion. Only typical error bars are shown for the SDSS clusters in the lower right-hand corner for clarity.
Our VLT/FORS2 early-type fractions are used here for all EDisCS clusters for the sake of uniformity.} \label{vlt+sdss_fet_fOII} \end{figure*} The lack of a clear trend in early-type fraction with redshift in the right-hand panel of Figure~\ref{vlt+sdss_efrac_age} is in disagreement with the Millennium Simulation prediction in the top right-hand panel of Figure~\ref{theo-efraction-plot}. There is a clear deficit of clusters with low early-type fraction at low redshift in the Millennium Simulation compared with our SDSS sample. \subsection{Early-Type Galaxy Fractions versus Fractions of [OII] Emitters}\label{efrac-OII} The link between star formation and morphological transformation and its evolution as a function of redshift provides more clues on the processes driving galaxy morphology in local and distant clusters. The fractions of galaxies with [OII] emission in the EDisCS clusters were computed as in \citet{poggianti06}~using the same absolute magnitude limits and the same prescriptions for correcting magnitude and geometric incompleteness, but the clustercentric radius cut was changed to match the one used for the early-type fractions in this paper ($R_{et} \leq 0.6R_{200}$). The two datasets are therefore directly comparable. Figure~\ref{vlt+sdss_fet_fOII} shows $f_{et}$ versus $f_{[OII]}$ with our local and distant samples again divided according to velocity dispersion. Table~\ref{spearman-tests-fo2} gives Spearman test results between $f_{et}$ and $f_{[OII]}$. There is a strong correlation between $f_{et}$ and $f_{[OII]}$ in both SDSS and EDisCS cluster samples irrespective of cluster velocity dispersion. The EDisCS clusters lie within the envelopes defined by the SDSS clusters. There is no offset between the zeropoints of the correlations at low and high redshift. However, as demonstrated by \citet{poggianti06}, the star formation activity (parametrized by $f_{[OII]}$) has decreased in all environments from $z \sim 0.75$ to $z \sim 0.08$. 
This is confirmed by the K-S test results in Table~\ref{ks-tests-fo2}. The probabilities that the EDisCS and SDSS clusters are drawn from the same parent $f_{[OII]}$ distribution are only 0.026, 0.005 and 0.046 for the whole samples, low $\sigma$ and high $\sigma$ subsamples respectively. The $f_{et}$ versus $f_{[OII]}$ values for clusters from the Millennium Simulation (Figure~\ref{theo-efraction-plot}) are quite different from the observations. Low $\sigma$ MS clusters at low and high redshifts are confined to high $f_{et}$ and $f_{[OII]}$ values with no apparent correlation. There is only a handful of clusters with low values for both $f_{et}$ and $f_{[OII]}$. The high $\sigma$ MS clusters are found in a very limited range of $f_{et}$ and $f_{[OII]}$ values ($0.35 < f_{[OII]} < 0.75$, $0.6 < f_{et} < 0.85$). \begin{table} \caption[]{Spearman Rank Test Results for Early-Type Fraction versus Fraction of [OII] Emitters} \begin{center} \begin{tabular}{lccc} \hline Cluster Sample & $N_{cl}$ & $R_s$ & $p$-value\\ (1) & (2) & (3) & (4)\\ \hline SDSS (All) & 158 & $-$0.63 & 4.33$\times 10^{-19}$\\ SDSS ($\sigma < 600$) & 108 & $-$0.57 & 1.50$\times 10^{-10}$\\ SDSS ($\sigma \geq 600$) & 50 & $-$0.77 & 6.04$\times 10^{-11}$\\ EDisCS (All) & 18 & $-$0.74 & 0.00043\\ EDisCS ($\sigma < 600$) & 11 & $-$0.78 & 0.0043\\ EDisCS ($\sigma \geq 600$) & 7 & $-$0.77 & 0.0438\\ \hline \end{tabular} \label{spearman-tests-fo2} \end{center} \end{table} \begin{table} \caption[]{Two-sample Kolmogorov-Smirnov Test Probabilities for [OII] Emitter Fraction versus Cluster Velocity Dispersion} \begin{tabular}{lrrrr} \hline & \multicolumn{4}{c}{Cluster Sample 2}\\ & SDSS & SDSS & EDisCS & EDisCS \\ Cluster Sample 1 & (All) & ($\sigma \geq$ 600) & ($\sigma <$ 600) & ($\sigma \geq$ 600) \\ \hline SDSS ($\sigma <$ 600) & \ldots & 0.090 & 0.005 & \ldots\\ SDSS ($\sigma \geq$ 600) & \ldots & \ldots & \ldots & 0.046\\ EDisCS ($\sigma <$ 600) & \ldots & \ldots & \ldots & 0.761\\ EDisCS (All) & 0.026 & \ldots & 
\ldots & \ldots \\ \hline \end{tabular} \label{ks-tests-fo2} \end{table} \section{Discussion}\label{discussion} In order to fully understand the possible evolutionary trends observed here, it is important to determine how cluster velocity dispersion changes with redshift as a result of the hierarchical growth of structures. Are we looking at similar clusters when we focus on the same range of velocity dispersions in the SDSS and EDisCS clusters? \citet{poggianti06} looked at the mean change in $\sigma$ between $z$ = 0 and $z$ = 0.76 using a sample of 90 haloes from the Millennium Simulation uniformly distributed in log(mass) between 5 $\times$ 10$^{12}$ and 5 $\times$ 10$^{15}$ M$_{\odot}$. Their Figure 8 shows how $\sigma$ evolves over that redshift interval. For example, a $z$ = 0 cluster with $\sigma$ = 900 km/s would typically have $\sigma \sim$ 750 km/s at $z$ = 0.76. This evolution is not sufficient to introduce biases in our analysis here. Indeed, selecting clusters with $\sigma \geq$ 600 km/s, say, at either $z$ = 0 or $z$ = 0.76 would keep nearly all the same clusters. Measured velocity dispersions may exhibit a large scatter with respect to the true halo mass, particularly for low-mass clusters. The velocity dispersions for the SDSS and EDisCS clusters were calculated in a very similar way in order to minimize any biases. Velocity dispersions calculated from a small number of cluster members may lead to overestimates of the true cluster mass. Table~\ref{clsample} lists 1103.7-1245b as the cluster with the lowest number of members ($N$ = 11). In order to check the robustness of our results, we re-ran our analyses by excluding SDSS clusters in Table~\ref{sdss-cls-list} with $N < 10$, for which velocity dispersions may be less reliable, and found that our results remained unchanged. 
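The Spearman rank correlations and two-sample K-S probabilities quoted throughout this section are standard statistics. As an illustration only (the arrays below are synthetic placeholders, not the SDSS/EDisCS measurements analyzed here), they can be computed with `scipy`:

```python
# Illustration only: synthetic cluster samples, not the measurements in the text.
import numpy as np
from scipy.stats import spearmanr, ks_2samp

rng = np.random.default_rng(42)
sigma = rng.uniform(400.0, 1200.0, size=18)        # velocity dispersions, km/s
f_et = np.clip(0.45 + 0.10 * rng.standard_normal(18), 0.0, 1.0)

# Spearman rank test: R_s and p-value, as tabulated for f_et versus sigma.
r_s, p_spear = spearmanr(sigma, f_et)

# Two-sample K-S test: probability that two samples of early-type fractions
# are drawn from the same parent distribution.
f_et_other = np.clip(0.55 + 0.10 * rng.standard_normal(50), 0.0, 1.0)
d_stat, p_ks = ks_2samp(f_et, f_et_other)

print(f"R_s = {r_s:+.2f} (p = {p_spear:.3f}); K-S D = {d_stat:.3f} (p = {p_ks:.3f})")
```

A small K-S probability, like the 0.026 quoted above for the full $f_{[OII]}$ samples, argues against a common parent distribution.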
\citet{poggianti06} proposed a scenario in which two channels are responsible for the production of passive galaxies in clusters, and others \citep{faber07,brown07} have proposed a similar scenario for the migration of galaxies from the ``blue cloud'' to the red sequence. ``Primordial passive galaxies'' are composed of galaxies whose stars all formed at very high redshift ($z >$ 2) over a short timescale. These galaxies have been observed in clusters up to and beyond $z = 1$, and they largely comprise luminous ellipticals. ``Quenched passive galaxies'' have had a more extended period of star formation activity, and their star formation has been quenched after their infall into dense cluster environments. These quenched passive galaxies would then suffer the effects of cluster processes such as ram pressure stripping, harassment, strangulation and mergers to become S0 and earlier type galaxies. A key point of this scenario is that processes affecting morphology and star formation activity operate on different timescales, as shown recently for the EDisCS sample by \citet{sanchez09}. There is good evidence that star formation is quenched in galaxies over timescales of 1-3 Gyr after they have entered the cluster environment \citep{poggianti99,poggianti06} whereas morphological transformation through mergers and harassment can take longer \citep[$\sim$ 5 Gyr,][]{moore98}. The best example of this is the fact that the vast majority of post-starburst galaxies in distant clusters, those that have had their star formation activity terminated during the last Gyr, still retain a spiral morphology \citep{poggianti99}. Such a two-channel scenario would naturally explain observations indicating that the elliptical galaxy fraction actually remains constant with redshift while the S0 fraction rises with decreasing redshift \citep{dressler97,fasano00,desai07}. 
Unfortunately, the VLT/FORS2 images do not have sufficient spatial resolution to disentangle E and S0 galaxies, as mentioned in Section~\ref{efrac-defn}, to determine the exact contribution from each channel. We can therefore only study the overall production of early-type galaxies, but it should exhibit different behaviors with cluster global properties depending on the process(es) dominating it. Given our quantitative definition of an early-type galaxy based on bulge fraction and image smoothness, there are essentially two ways to transform late-type galaxies into early-type ones: 1) processes such as collisions and harassment that can fundamentally alter the structure of a galaxy by forming bulges and/or destroying disks and 2) quenching processes that can extinguish star forming regions responsible for some of the galaxy image asymmetries and also cause a fading of the disks. Applying the \citet{poggianti06} scenario to our results, the ``threshold'' in $f_{et}$ values in our high redshift clusters (Figures~\ref{vlt+sdss_efrac_sigma} and~\ref{vlt+sdss_efrac_age}) could be explained by a population of primordial passive galaxies that formed at even higher redshifts. Most of our high redshift clusters have early-type fractions in the range 0.3-0.6 with no correlation with cluster velocity dispersion. Are these early-type fractions indeed consistent with a population of primordial passive galaxies? Calculations done in \citet{poggianti06} show that the fraction of galaxies at $z = 0.6$ that were present in haloes with masses greater than 3$\times$10$^{12}$ M$_\odot$ at $z$ = 2.5 is 0.4$\pm$0.2. These primordial passive galaxies can therefore account for at least 2/3 (if not all) of the early-type populations in high redshift clusters, and their high formation redshift would explain the lack of dependence of $f_{et}$ on cluster velocity dispersion. 
One of our main results is that the early-type fractions of galaxy clusters increase from $z = 0.6 - 0.8$ to $z\sim 0.08$ in clusters of all velocity dispersions. What kind of morphological transformation process(es) can lead to such an evolution? Collisions and harassment both depend on galaxy-galaxy interactions and the time a galaxy has spent within the cluster environment. Cluster velocity dispersion influences the number of interactions and their duration. Higher velocity dispersions in more massive clusters yield more interactions $N$ per unit time but with shorter durations $\Delta t$. One might therefore expect to see a peak in early-type fraction at the cluster velocity dispersion where the product $N \Delta t$ is maximized. No such peak is seen in our clusters. Ram-pressure stripping is expected to go as $n_{ICM} v_{gal}^{2.4}/\dot{M}_{rep}$ \citep{gaetz87} with $n_{ICM}$, $v_{gal}$ and $\dot{M}_{rep}$ being the density of the ICM, the velocity of the galaxies within the ICM and the rate at which galaxies can replenish their gas respectively. The fraction of passive galaxies should therefore be a relatively strong function of cluster velocity dispersion if quenching by ram pressure stripping is the dominant process. The number of post-starburst galaxies in EDisCS clusters does correlate with cluster velocity dispersion \citep{poggianti09a}, but the uniform increase in early-type fractions at all cluster velocity dispersions observed going from EDisCS to SDSS clusters is not consistent with the intracluster medium being the main cause of the changes in cluster morphological content. Even though the early-type and [OII] emitter fractions in EDisCS and SDSS clusters show no correlation with cluster velocity dispersion \citep[][and this work]{poggianti06}, there is a very strong correlation between $f_{et}$ and $f_{[OII]}$. This correlation is seen at both low and high cluster masses as well as at both low and high redshifts. 
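To make the velocity dependence of the ram-pressure stripping scaling concrete, the short sketch below (our own toy evaluation, with arbitrary normalizations; nothing here is fit to data) simply evaluates the quoted $n_{ICM} v_{gal}^{2.4}/\dot{M}_{rep}$ form:

```python
# Toy evaluation of the ram-pressure stripping scaling n_ICM * v^2.4 / Mdot_rep
# quoted in the text. Units are arbitrary; only the relative trend with
# galaxy velocity is meaningful here.
def stripping_efficiency(n_icm: float, v_gal: float, mdot_rep: float) -> float:
    """Relative stripping efficiency in arbitrary units."""
    return n_icm * v_gal**2.4 / mdot_rep

# Doubling the typical galaxy velocity (e.g., a sigma = 450 km/s cluster
# versus a sigma = 900 km/s cluster) at fixed ICM density and gas
# replenishment rate boosts the efficiency by 2**2.4, about a factor of 5.
boost = stripping_efficiency(1.0, 900.0, 1.0) / stripping_efficiency(1.0, 450.0, 1.0)
print(f"factor-2 velocity increase -> efficiency ratio {boost:.2f}")
```

This steep velocity dependence is why a dominant ram-pressure channel would be expected to imprint a strong trend of passive fraction with cluster velocity dispersion.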
Morphology and star formation therefore appear to be closely linked with one another over a wide range of environments and times. However, different structural transformation and quenching processes are thought to operate over different timescales \citep[e.g.,][]{sanchez09}. Timescales range from 1-2 Gyr (based on typical cluster crossing times) for truncating star formation to 3-5 Gyr for totally extinguishing star formation in newly accreted galaxies \citep{poggianti06,tonnesen09}. Looking at the evolution of EDisCS cluster red-sequence galaxies over 2 Gyr (from $z = 0.75$ to $z$ = 0.45), \citet{sanchez09} found that morphological transformation and quenching of star formation indeed appeared not to be simultaneous. As noted in Section~\ref{efrac-sigma}, the early-type fractions of mid-$z$ EDisCS clusters may be $\sim$25$\%$ higher than the ones of high-$z$ clusters. This change would therefore have taken place over a 2 Gyr interval in our adopted cosmology. However, the time baseline here between SDSS and EDisCS clusters is almost 6 Gyr, and, unfortunately, this is ample time to erase any difference arising from different timescales in the link between morphology and star formation. The lack of dependence of morphology and star formation on global cluster properties such as velocity dispersion raises the question of whether changes in galaxy properties are driven by more local effects or whether they occur outside of the cluster environment. Recent work \citep{poggianti08,park09,bamford09,ellison09} has re-emphasized the strong link between galaxy properties and local galaxy density rather than cluster membership. Galaxy properties are seen to change at densities around 15-40 galaxies Mpc$^{-2}$ or projected separations of 20-30$h^{-1}$ kpc. Others \citep[e.g.,][]{kautsch08,wilman09} have suggested that the galaxy group environment might be more conducive to galaxy transformation. 
Our observed evolution in early-type fraction as a function of redshift and the strong correlation between morphology and star formation at all cluster masses would support the idea that cluster membership is of lesser importance than other variables such as local density in determining galaxy properties. The properties of simulated clusters from the Millennium Simulation compare well with those of EDisCS and SDSS clusters. Their early-type fractions also show no dependence on cluster velocity dispersion, in contrast to previous theoretical work \citep[e.g.][]{diaferio01} but in agreement with observations. However, there is a definite lack of MS clusters with low early-type fractions at $z$ = 0 compared to the SDSS sample. It is important here to note that an early-type galaxy in the simulations was defined solely based on its bulge fraction because the simulations do not have the resolution required to model internal fine structures such as asymmetries. Given that real, early-type galaxies were also selected according to image smoothness, one would expect the early-type fractions of real clusters to be systematically lower. However, half of the SDSS clusters have low early-type fractions not seen in the simulations at $z$ = 0, and such a large discrepancy could only be explained by a significant population of real bulge-dominated galaxies with relatively large asymmetries. It is more likely that bulge formation in the simulations is too efficient. The scatter in $f_{et}$ values for the simulated clusters with $\sigma \geq $ 600 km/s is also nearly three times smaller than observed in the real clusters (Section~\ref{efrac-sigma}), which may indicate that the models do not include the right mixture of evolutionary processes at work on real galaxies. High-mass simulated clusters show a correlation between early-type fraction and star-forming fraction (albeit over narrower ranges than observed), but the correlation is not seen in the low-mass simulated clusters. 
This may be understood by high-mass clusters having formed long enough ago for evolutionary processes to have had enough time to act on galaxies and modify their properties, whereas this is not necessarily the case for low-mass clusters. The fact that the correlation is observed in both low- and high-mass real clusters may be an indication that the processes giving rise to the correlation are more efficient (or altogether different) than modelled. It is also important to keep in mind here that the properties of a galaxy in these models are essentially driven by the mass of its parent halo. \section{Summary}\label{conclusions} We have presented quantitative morphologies measured from PSF-convolved, 2D bulge+disk decompositions of cluster and field galaxies on deep VLT/FORS2 images of eighteen optically-selected galaxy clusters at $0.45 < z < 0.80$ observed as part of the ESO Distant Cluster Survey. The morphological content of these clusters was characterized by the early-type fraction within a clustercentric radius of 0.6$R_{200}$, and early-type galaxies were selected based on bulge fraction and image smoothness. We showed a very good agreement between quantitative and visual galaxy classifications. We used a set of 158 clusters extracted from the Sloan Digital Sky Survey matched in velocity dispersion to our EDisCS sample and analyzed in exactly the same way to provide a robust comparison baseline and to control systematics. We studied trends in early-type fraction as a function of cluster mass and redshift. We also explored the link between morphology and star formation by comparing early-type fractions to the fractions of [OII] emitters in our clusters. Our main results are: 1. The early-type fractions of the SDSS and EDisCS clusters exhibit no clear trend as a function of cluster velocity dispersion. 2. Mid-$z$ EDisCS clusters around $\sigma$ = 500 km/s have $f_{et} \simeq$ 0.5 whereas high-$z$ EDisCS clusters have $f_{et} \simeq$ 0.4. 
This represents a $\sim$25$\%$ increase over a time interval of 2 Gyr. 3. There is a marked difference in the morphological content of the EDisCS and SDSS samples. None of the EDisCS clusters have an early-type fraction greater than 0.6 whereas half of the SDSS clusters lie above this value. {\it This difference is seen in clusters of all velocity dispersions (i.e., masses)}. 4. There is a strong and clear correlation between morphology and star formation activity in the sense that decreasing fractions of [OII] emitters are tracked by increasing early-type fractions. This correlation holds at both low and high cluster masses as well as at both low and high redshifts. 5. The early-type fractions of clusters drawn from the Millennium Simulation \citep{springel05} using the galaxy formation model of \citet{delucia07a} also show no clear dependence on cluster velocity dispersion. However, at $z$ = 0, there are not enough simulated clusters with low early-type fractions compared to the SDSS cluster sample. While high-mass simulated clusters show a correlation between early-type fraction and star-forming fraction (albeit over narrower ranges than observed), this correlation is not seen in the low-mass simulated clusters, in contrast to the real ones. Our results pose an interesting challenge to structural transformation and star formation quenching processes that strongly depend on the global cluster environment (e.g., a dense ICM) and suggest that cluster membership may be of lesser importance than other variables in determining galaxy properties. \begin{acknowledgements} We are thankful to the anonymous referee for suggestions that greatly contributed to this paper. We have benefitted from the generosity of the ESO/OPC. G. R. thanks Special Research Area No 375 of the German Research Foundation for financial support. 
The Millennium Simulation databases used in this paper and the web applications providing access to them were constructed as part of the activities of the German Astrophysical Virtual Observatory. Funding for the creation and distribution of the SDSS Archive has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Aeronautics and Space Administration, the National Science Foundation, the U.S. Department of Energy, the Japanese Monbukagakusho, and the Max Planck Society. The SDSS Web site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are The University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Korean Scientist Group, Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. The Dark Cosmology Centre is funded by the Danish National Research Foundation. \end{acknowledgements}
\section{Introduction} \label{sec:introduction} Determination of the leptonic mixing parameters, the three mixing angles and the two mass-squared differences of neutrinos, marks a new epoch of physics beyond the standard model (SM). Although we still do not know the value of the leptonic Kobayashi-Maskawa phase and the neutrino mass pattern, the success of the {\em hypothesis} of standard three-flavor mixing in describing a wealth of experimental data prompts us to think about one further step, namely, testing the paradigm itself. We feel that we have reached the point at which raising the question of how to test the three-flavor mixing framework itself is timely. The most common way of testing the framework is to verify unitarity of the mixing matrix. It appears to us that two different strategies for testing leptonic unitarity are conceivable: \begin{itemize} \item One is to show closing of the lepton unitarity triangle, in a manner analogous to the unitarity test for the quark CKM matrix \cite{Agashe:2014kda}. \item The other is to prepare a model of unitarity violation, confront it with the available experimental data, and derive constraints on the unitarity violation parameters. \end{itemize} \noindent The first method serves as a unitarity test purely within the framework of the standard three-flavor mixing scheme of neutrinos, without recourse to any particular model of unitarity violation. This is the advantage of this method. See e.g., ref.~\cite{Farzan:2002ct} for this approach. In the second way, one introduces a general framework for leptonic unitarity violation, or constructs a class of models that embody this property. In our opinion, there are pros and cons in the above two strategies. In the first method, despite its charming model-independent nature, it is quite challenging to determine the size of each side of the unitarity triangles experimentally \cite{Farzan:2002ct}. 
The second method introduces model-dependent features into the unitarity test, which can be considered a drawback of this approach. On the other hand, there is a definite underlying scenario behind the non-unitarity in the latter case, or at least general guidance toward it. Therefore, once hinted at, it may allow us to identify the cause of unitarity violation. We feel, therefore, that both methods of leptonic unitarity test must be pursued. In studies of leptonic unitarity testing carried out so far along the second strategy above, two different attitudes have been taken. In one, new physics effects at high energy scales are integrated out to obtain effective theories of the three generations of leptons at low energies, which represent non-unitarity in this limited subspace \cite{Antusch:2006vwa}. The other takes a more relaxed attitude, explicitly introducing SM singlet leptons and examining the models in a relatively model-independent fashion in the SM subspace with non-unitarity \cite{Escrihuela:2015wra,Parke:2015goa}. In the latter approach, the masses of the SM gauge-group singlet leptons can be large or small, reflecting the varying underlying scenarios of new physics. If we take the former way, $SU(2) \times U(1)$ gauge invariance dictates that the same unitarity violation must also be manifest in the charged lepton sector. In the framework of ref.~\cite{Antusch:2006vwa}, generally speaking, the constraints on unitarity violation are dominated by the ones coming from the charged lepton sector. If we follow the latter way, the situation is more case-sensitive and neutrino experiments can play greater roles. One of the most interesting questions in the whole area of study of unitarity violation is to reveal the qualitative differences between unitarity violation at high- and low-energy scales. It is the purpose of this paper to discuss low energy-scale unitarity violation, hereafter {\em ``low-scale unitarity violation''} for short, in detail. 
By low-scale unitarity violation we mean that a ``hidden'' sector in state space, to which probability flow occurs, is located at low energy scales, like eV or MeV. This allows the hidden-sector particles to be produced along with neutrinos and, assuming they mix with neutrinos, to participate in neutrino oscillations. We first recapitulate the interesting characteristic features of low-scale unitarity violation that distinguish it from high-scale unitarity violation. They include (1) retention of flavor universality, and (2) lack of zero-distance flavor transitions. See section~\ref{sec:unitarity-high-low} for more about these points. Some specific scenarios of high- and low-scale unitarity violation were explored in refs.~\cite{Xing:2012kh,Luo:2014fia,Luo:2014vha,Li:2015oal}. Then, we go on to construct a framework for experimental testing of low-scale unitarity violation. Since there are such interesting qualitative differences between high- and low-scale unitarity violation, they must be tested and distinguished from each other. We argue, in agreement with the preceding works \cite{Parke:2015goa,Escrihuela:2015wra}, that the constraint from the $Z$ width measurement at LEP \cite{LEP} makes the extension of the low-mass lepton sector essentially unique, allowing only the inclusion of SM singlet fermions. Thus, our model of non-unitarity at low energies utilizes three active neutrinos and an arbitrary number of sterile neutrino states. We discuss in detail how the model prediction can be made insensitive to the details of the sterile sector, e.g., the mass spectrum of sterile neutrinos and the mixing between active and sterile neutrinos. We find that the resultant expressions for the oscillation probabilities in vacuum contain a new term, an explicit probability leakage term, which distinguishes between low- and high-scale unitarity violation. To our knowledge, this term has not been incorporated in previous analyses of unitarity violation at low energies. 
We examine how this framework works by analyzing future medium-baseline reactor neutrino experiments. In the final two sections we discuss how CP-violating terms in accelerator appearance measurements can be used to signal non-unitarity, and how the matter effect affects the foregoing discussions. There is an obvious relation between the model we discuss in this paper and the various versions of active plus sterile neutrino models proposed to describe the LSND-MiniBooNE anomaly (see a review \cite{Conrad:2013mka} and references therein). We will make remarks on the relationship between them below wherever appropriate. In particular, we should note that in the frameworks of 3 active plus a few sterile neutrinos, various bounds on the mixing parameters have been derived by using the existing data. For the most recent comprehensive analysis, see ref.~\cite{Kopp:2013vaa}.\footnote{ Though they are very relevant for this paper, we do not implement these bounds in the discussions of this paper. This is because they are not derived by using the generic $(3+N)$ model, and the translation of their bounds to our setting requires great care. Furthermore, the principal purpose of this paper is to provide a suitable framework for leptonic unitarity tests in high-precision experiments in the future.} \section{Unitarity violation at high- and low-energy scales} \label{sec:unitarity-high-low} The cause of unitarity violation in the lepton sector can be new physics beyond the SM at high-energy scales, or at low energies. In the best-studied high-scale seesaw scenario of neutrino mass \cite{Minkowski:1977sc,Yanagida:1979as,GellMann:1980vs,Mohapatra:1980yp}, the three-flavor mixing of neutrinos has a tiny violation of unitarity due to the mixing with heavy right-handed neutrinos. A more generic formulation of high-scale unitarity violation was given by Antusch et al.~\cite{Antusch:2006vwa} in the minimal unitarity violation scheme. 
One of the salient features of high-scale unitarity violation is that, even though SM singlet leptons exist which mix with neutrinos, such neutral leptons are likely to be much heavier than neutrinos. They are not produced copiously in the same processes in which neutrinos are produced, and a physical transition from neutrinos to them is kinematically forbidden. On the other hand, if we assume that unitarity violation occurs due to physics at an energy scale much lower than the electroweak scale, the light SM gauge-group singlet leptons not only mix with neutrinos, but are also so light that they participate in the process of neutrino oscillations. In this paper, we try to develop a framework for experimental tests of unitarity violation by assuming such a situation, to which we simply refer as ``low-scale unitarity violation''. Hereafter, we generically call the SM gauge-group singlet fermions ``sterile neutrinos'' for simplicity. We notice that there are some characteristic features of high- and low-scale unitarity violation that one can recognize even without going into any details. They are: \begin{itemize} \item Presence or absence of lepton universality violation: It is expected on general grounds that, due to non-unitarity of the lepton mixing matrix, lepton universality is violated. See refs.~\cite{Antusch:2006vwa,Escrihuela:2015wra}, and the references cited therein.\footnote{ General bounds on non-unitarity are discussed in the context of high-scale unitarity violation, e.g., in refs.~\cite{Antusch:2006vwa,Escrihuela:2015wra,Antusch:2014woa,Fernandez-Martinez:2016lgt}. } While this is a generic feature of high-scale unitarity violation, lepton universality can be maintained in low-scale unitarity violation. This is because sterile neutrinos can be produced as well, for example in the $\mu \rightarrow e + \text{steriles}$ process. 
Assuming no detection sensitivity to sterile neutrinos, this masks the effect of a non-unitary mixing matrix in the active neutrino sector. \item Presence or absence of zero-distance neutrino flavor transitions: Similarly to the above point, in high-scale unitarity violation the kinematically forbidden active-to-sterile transition entails zero-distance attenuation of the probability of a given flavor neutrino \cite{Langacker:1988ur}. This {\em does not} occur if sterile neutrinos can take part in the flavor oscillation processes, as we will show in section~\ref{sec:zero-distance}. \item Of course, there are common features of high- and low-scale unitarity violation: Emission of sterile neutrinos, if kinematically allowed over a range of low to high masses, affects the observable spectrum of charged leptons. This can be utilized to place constraints on non-unitarity by using, e.g., the electron spectra in beta and muon decays, the muon spectrum in pion decay, cosmological observations, etc. See \cite{deGouvea:2015euy} for a comprehensive summary of the current status of the bounds for the 3+1 scenario.\footnote{ For some of the early analyses of extra neutral heavy leptons and the bounds on them, see e.g., \cite{Langacker:1988ur,Nardi:1991rg,Nardi:1994iv}. } \end{itemize} \noindent In the rest of this paper, we construct a model of low-scale unitarity violation which can be used to test leptonic unitarity in neutrino experiments. Although the constraints from beta and muon decays etc. just mentioned are relevant, we do not try to elaborate on the discussions already given in \cite{deGouvea:2015euy} and the references cited therein. \section{A model of unitarity violation at low energies} \label{sec:low-E-unitarity-violation} Now, we introduce our model of unitarity violation at low energies. One immediately recognizes, however, that there is not much room for model building here. The precision measurement of the $Z$ decay width at LEP \cite{LEP} dictates that there are only three active neutrinos. 
Therefore, the extra fermions we introduce, which mix with neutrinos, must be SM singlets; these are what we call ``sterile neutrinos'' in this paper. We are then left with a unique possibility: a system of three active neutrinos plus an arbitrary number of sterile neutrinos which mix with each other. We denote the number of sterile neutrino states as $N$. We assume that this system of three active neutrinos and $N$ sterile neutrinos is complete, and we call it the {\bf $(3+N)$ space unitary model} hereafter. Though we deal with this particular model, we want to keep it as {\em model-independent} as possible within the framework of the $(3+N)$ space unitary model. Therefore, we shall always keep the number of sterile neutrinos $N$ arbitrary in this paper. Toward constructing a framework for a leptonic unitarity test, however, we must impose an additional requirement on our $(3+N)$ space unitary model. We want to avoid the situation in which the experimental predictions of the model depend very sensitively on details of the $N$ sterile neutrino sector, for example, on the mass spectrum of the sterile states. In the rest of this section, we discuss how this can be achieved and what the conditions for it are. \subsection{3 active $+ N$ sterile unitary system} \label{sec:non-unitarity-vacuum} To define the notation and for definiteness, we introduce the $(3+N)$ space unitary system in vacuum. The Hamiltonian which governs the evolution of the 3 active and $N$ sterile state vector in the flavor basis, $\nu = \left[ \nu_{e}, \nu_{\mu}, \nu_{\tau}, \nu_{s_{1}}, \nu_{s_{2}}, \cdot \cdot \cdot, \nu_{s_{N}} \right]^{T}$, as $ i \frac{d}{dx} \nu = H \nu$, is given by\footnote{ For simplicity, we assume that sterile states do not decay over their flight path in neutrino oscillation experiments.
If sterile states have decay length much shorter than the baseline, and if the decay products do not include the three active neutrinos, the oscillation probabilities converge to those of ``high-scale unitarity violation'' discussed in the previous section. } \begin{eqnarray} H = {\bf U} \left[ \begin{array}{cccccc} \Delta_{1} & 0 & 0 & 0 & 0 & 0 \\ 0 & \Delta_{2} & 0 & 0 & 0 & 0 \\ 0 & 0 & \Delta_{3} & 0 & 0 & 0 \\ 0 & 0 & 0 & \Delta_{4} & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdot \cdot \cdot & 0 \\ 0 & 0 & 0 & 0 & 0 & \Delta_{3+N} \\ \end{array} \right] {\bf U}^{\dagger} \label{H-3+N-vac} \end{eqnarray} where \begin{eqnarray} \Delta_{i} \equiv \frac{ m^2_{i} }{2E} \hspace{4mm} (i = 1,2,3), \hspace{8mm} \Delta_{J} \equiv \frac{ m^2_{J} }{2E} \hspace{4mm} (J = 4, \cdot \cdot \cdot, 3+N). \label{Delta-def} \end{eqnarray} Here, $m_{i}$ ($m_{J}$) denote the mass of mostly active (sterile) neutrinos and $E$ is the neutrino energy. For notations of the mass squared differences we use, in generic case including both active and sterile neutrinos, \begin{eqnarray} \Delta m_{ab}^2 \equiv m_a^2 - m_b^2, \label{eq:mass_diff-generic} \end{eqnarray} where $a,b=1,...,3+N$. When we want to distinguish between active-active, active-sterile, and sterile-sterile neutrino mass differences, we use \begin{eqnarray} \Delta m_{ji}^2 &\equiv& m_j^2 - m_i^2, \nonumber \\ \Delta m_{J i}^2 &\equiv& m^2_{J} - m^2_{i}, \nonumber \\ \Delta m_{J I}^2 & \equiv & m^2_{J} - m^2_{I}, \label{eq:mass_diff-specific} \end{eqnarray} where $i,j,k, ...$ (small letters) = 1,2,3 are for active neutrinos, and $I,J,K, ... $ (capital letters) = 4,.., 3+$N$ for sterile neutrinos. The mixing matrix ${\bf U}$ relates the flavor eigenstate $\nu$ to the vacuum mass eigenstate $\tilde{\nu}$ as $\nu_{\zeta} = {\bf U}_{\zeta a} \tilde{\nu}_{a}$ (here, $\zeta=e,\mu,\tau,s_1,...,s_N$), and hence it is a $(3+N) \times (3+N)$ matrix. By construction of the model, the matrix {\bf U} is unitary. 
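As a concrete numerical illustration of the setup above (a minimal sketch of our own, not part of the analysis; the value of $N$, the neutrino energy, and the mass spectrum are arbitrary assumptions chosen only to exercise the formulas), the Hamiltonian of eq.~(\ref{H-3+N-vac}) can be assembled from a random $(3+N)\times(3+N)$ unitary matrix and checked for its defining properties:

```python
import numpy as np

def random_unitary(n, rng):
    """Random n x n unitary matrix via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases so the result is uniformly distributed

N = 2                                             # number of sterile states (assumed)
rng = np.random.default_rng(0)
U = random_unitary(3 + N, rng)                    # the (3+N) x (3+N) mixing matrix U

E = 1.0e9                                         # neutrino energy in eV (1 GeV, assumed)
m2 = np.array([0.0, 7.5e-5, 2.4e-3, 1.0, 10.0])   # m_a^2 in eV^2 (illustrative spectrum)
Delta = m2 / (2.0 * E)                            # eq. (Delta-def)

H = U @ np.diag(Delta) @ U.conj().T               # eq. (H-3+N-vac)

assert np.allclose(U @ U.conj().T, np.eye(3 + N))  # U is unitary by construction
assert np.allclose(H, H.conj().T)                  # H is Hermitian
```

The eigenvalues of $H$ are, by construction, the $\Delta_{a}$ of eq.~(\ref{Delta-def}).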
While the flavor index $\zeta$ above also includes sterile states, from now on we will single out only the active flavor indices $\alpha, \beta = e, \mu, \tau$ whenever they are explicitly specified in the formulas for the $S$ matrix as well as for the probabilities. \subsection{Averaging out the sterile oscillations due to decoherence} \label{sec:decoherence} How can we make our model insensitive to the mass spectrum of the sterile sector? The masses of the sterile neutrinos enter through the factors $e^{-i E_{J} x} = e^{-i \sqrt{p_{J}^2 + m_{J}^2} x}$ in the propagation of the mass eigenstates. This results in phase differences $e^{-i (E_{J} - E_{I}) x} \approx e^{-i \Delta m_{JI}^2 x/(2E)}$ which can be observed through neutrino oscillation phenomena. Assuming no accidental mass degeneracy among the sterile states, i.e. $|\Delta m_{JI}^2| \gg |\Delta m_{31}^2|$, the oscillation terms involving sterile masses can be averaged out due to (partial) decoherence if certain conditions are fulfilled.\footnote{ We are interested in partial decoherence, in which the oscillations involving sterile states are averaged out whereas the active ones do oscillate. } Intuitively, decoherence occurs when the variation in the phase due to the spatial and/or energy resolution is greater than $2\pi$, \begin{eqnarray} \left| \delta \left(\frac{\Delta m_{ab}^2 x}{2E}\right) \right| = \left| \frac{\Delta m_{ab}^2}{2E} \delta x - \frac{ \Delta m_{ab}^2 x}{2E^2} \delta E \right| \gtrsim 2\pi. \label{eq:decoherence} \end{eqnarray} From the terms in eq.~\eqref{eq:decoherence} which depend, respectively, on the variation of the baseline distance ($\delta x$) and that of the energy ($\delta E$), we can classify the following two types of decoherence: \begin{enumerate} \item[i.] \emph{Spatial resolution.} In this case, decoherence happens if \begin{eqnarray} \delta x \gtrsim \frac{4 \pi E}{|\Delta m_{ab}^2|}. \label{eq:decoherence_x} \end{eqnarray} \item[ii.]
\emph{Energy resolution.} In this case, decoherence happens if \begin{eqnarray} \delta E \gtrsim \frac{4 \pi E^2}{|\Delta m_{ab}^2| x}. \label{eq:decoherence_E} \end{eqnarray} \end{enumerate} Notice that the conditions derived heuristically above agree with those obtained from formal approaches (i.e., the wave-packet description), as e.g., in refs.~\cite{Giunti:1997wq,Hernandez:2011rs,Akhmedov:2012uu}. Since we are interested in decoherence involving the sterile sector, the conditions \eqref{eq:decoherence_x} and \eqref{eq:decoherence_E} have to be fulfilled for the $\Delta m_{Ja}^2$ which involve at least one sterile mass. This allows us to obtain a \emph{lower} bound on the scale of the sterile sector, $|\Delta m_{Ja}^2|$, above which our model becomes insensitive to the sterile mass spectrum. Notice that $\delta x$ and $\delta E$ in eqs.~\eqref{eq:decoherence_x} and \eqref{eq:decoherence_E} are associated with the experimental setup (concerning \emph{both} production and detection), e.g., with the production region of neutrinos and with the energy resolution of a detector. In the following, we discuss the conditions that must be satisfied for the sterile oscillations to be averaged out. Most neutrino oscillation experiments work with the kinematical setting, either \begin{eqnarray} \frac{\Delta m^2_{21} x}{4E} \sim 1 \hspace{10mm} \text{or} \hspace{10mm} \frac{|\Delta m^2_{31}| x}{4E} \sim 1, \hspace{10mm} \label{VOM} \end{eqnarray} where the former applies to long-baseline (LBL) reactor neutrino experiments, KamLAND, JUNO, and RENO-50, and the latter to the accelerator LBL and the reactor $\theta_{13}$ experiments. From eq.~\eqref{eq:decoherence_x}, the condition for averaging out due to the size of the production region, i.e.
$\delta x = x_{\rm prod}$, reads \begin{eqnarray} x_{\rm prod} \gtrsim \frac{4 \pi E}{ |\Delta m_{Ja}^2| }, \label{prod-region-average} \end{eqnarray} where $x_{\rm prod}$ denotes the size of the production region of neutrinos, e.g., the core diameter for nuclear reactor neutrinos and the length of the decay pipe for accelerator neutrino beams. Assuming the setting as in (\ref{VOM}), the condition (\ref{prod-region-average}) leads to \begin{eqnarray} |\Delta m^2_{Ja}| &\gsim& \pi \Delta m^2_{21} \left( \frac{ x }{ x_{\rm prod} } \right) \approx 1.2~\text{eV}^2 \left( \frac{ x / x_{\rm prod} }{ 5 \times 10^{3} } \right) \hspace{6mm} \text{(for reactor)}, \nonumber \\ |\Delta m^2_{Ja}| &\gsim& \pi |\Delta m^2_{31}| \left( \frac{ x }{ x_{\rm prod} } \right) \approx 7.5~\text{eV}^2 \left( \frac{ x / x_{\rm prod} }{ 10^{3} } \right) \hspace{6mm} \text{(for accelerator)}, \label{prod-size} \end{eqnarray} where we have taken, for the typical source sizes and baseline distances, $x_{\rm prod} = 10$ m and $x=50$ km for reactor neutrinos, and $x_{\rm prod} = 1$ km and $x=1000$ km for accelerator neutrinos. Turning to the averaging due to the energy resolution of a detector, using $\Delta m^2_{21} = 7.5 \times 10^{-5}$ eV$^2$ and $|\Delta m^2_{31}| = 2.4 \times 10^{-3}$ eV$^2$, eqs.~\eqref{eq:decoherence_E} and (\ref{VOM}) lead to the condition on $\Delta m^2_{J a}$ for the sterile oscillation to be averaged out: \begin{eqnarray} |\Delta m^2_{J a}| &\gsim& \pi \left( \frac{ \Delta m^2_{21} }{ \delta E / E } \right) \approx 7.9 \times 10^{-3} \text{eV}^2 \left( \frac{ \delta E / E }{ 0.03 } \right)^{-1} \hspace{6mm} \text{(for LBL reactor)}, \nonumber \\ |\Delta m^2_{J a}| &\gsim& \pi \left( \frac{|\Delta m^2_{31}|}{ \delta E / E } \right) \approx 7.5 \times 10^{-2} \text{eV}^2 \left( \frac{ \delta E / E }{ 0.1 } \right)^{-1} \hspace{6mm} \text{(for accelerator)}. 
\label{E-resolution2} \end{eqnarray} The aggressive choice of a typical 3\% error in energy measurement in the former case is based on the JUNO proposal in \cite{JUNO}, whereas a conservative choice of $\delta E / E = 10\%$ is made for accelerator neutrino experiments. Therefore, if we restrict ourselves to $|\Delta m^2_{J i}| \sim |\Delta m^2_{JK}| \gsim 0.1~\text{eV}^2$, the fast oscillations due to the active-sterile and sterile-sterile mass squared differences can be averaged out by the effect of the energy resolution. For the JUNO-like setting the requirement on $|\Delta m_{Ja}^2|$ can be relaxed by an order of magnitude. We note that the effect of averaging over the production points of neutrinos is less sizable than that of the energy resolution in averaging out the fast sterile oscillations, and it therefore leads to a more restrictive condition on the sterile state masses. \subsection{Requirement on the sterile mass spectrum} \label{sec:requirement} In addition to the condition of averaging out the fast sterile oscillations, we require that the masses of the sterile neutrinos be light enough that they can be produced in the same environment in which neutrinos are produced. We do this because such production is the most significant characteristic feature of unitarity violation at low energies. This gives rise to the condition $m_{J} \lsim 1$ MeV for reactor neutrinos, and $m_{J} \lsim 100$ MeV for accelerator neutrinos. To summarize: in seeking the case in which neutrino oscillation in our 3 active $+ N$ sterile neutrino system is insensitive to the detailed properties of the sterile sector, such as the mass spectrum of the sterile states and the fine structure of the active-sterile mixing, we require for the sterile neutrino masses in our $(3+N)$ space unitary model that \begin{eqnarray} 0.1~\text{eV}^2 \lsim m_{J}^2 \lsim 1~\text{MeV}^2. 
\label{sterile-mass} \end{eqnarray} The lower limit comes from the condition of averaging out the fast oscillations for accelerator neutrinos, and the upper one from the producibility of sterile neutrinos in reactors. With the conditions \eqref{eq:decoherence_x} and/or \eqref{eq:decoherence_E} for averaging out the fast oscillations satisfied, we can make the approximations\footnote{ To describe the borderline regime, one has to resort to a formal description, e.g., that of refs.~\cite{Giunti:1997wq,Hernandez:2011rs,Akhmedov:2012uu}. } \begin{eqnarray} \left\langle \sin \left(\frac{ \Delta m^2_{J i} x}{ 2E }\right) \right\rangle &\approx& \left\langle \sin \left(\frac{ \Delta m^2_{J K} x}{ 2E }\right) \right\rangle \approx 0, \label{average-out1} \\ \left\langle \cos \left(\frac{ \Delta m^2_{J i} x}{ 2E }\right) \right\rangle &\approx& \left\langle \cos \left(\frac{ \Delta m^2_{J K} x}{ 2E }\right) \right\rangle \approx 0, \label{average-out2} \end{eqnarray} where $\langle ... \rangle$ stands for averaging over the neutrino energy within the uncertainty of the energy resolution, as well as over the uncertainty of the distance between the production and detection points of neutrinos. The latter approximate equalities in \eqref{average-out1} and \eqref{average-out2} assume that there is no accidental degeneracy among the sterile state masses, i.e. $|\Delta m^2_{JK}| \gg |\Delta m^2_{31}|$. \subsection{Cases in which sterile oscillations are not averaged out} While allowing a wide range of sterile lepton masses, the condition (\ref{sterile-mass}) excludes certain characteristic regions of the sterile neutrino mass spectrum. The exclusion of higher masses is done in the spirit of low-scale unitarity violation, and we therefore take it for granted. But there is no a priori reason for excluding sterile neutrino masses in the regions $\Delta m^2_{J i} \sim |\Delta m^2_{J K}| \sim $ the atmospheric $\Delta m^2$, or the solar $\Delta m^2$. 
In this case, however, one must expect severe model dependence in the experimental predictions of the $(3+N)$ unitary model. Clearly, the number of CP violating phases depends on $N$, and the additional phases will play important roles in fitting data. Therefore, an extensive separate treatment is necessary to include this case. How about sterile neutrino masses which are much lighter than the atmospheric, or the solar, $\Delta m^2$? Again, there is no a priori model-independent reason for excluding this case. If the active neutrino masses are such that KATRIN can detect the signal, $m_{i} \gsim 0.2$ eV, then the averaging-out condition may barely be maintained for the fast active-sterile oscillation even for very small sterile masses, $\Delta m^2 \simeq 0.04$ eV$^2$. However, it would be accompanied by extremely slowly developing (as a function of the propagation distance $x$) sterile-sterile oscillations. Clearly, a separate analysis is needed to know to what extent this case survives a test with the currently available experimental data. We summarize the discussion in this section as follows: if we construct the $(3+N)$ space unitary system as a model of low-scale unitarity violation, we can make the model predictions insensitive to details of the sterile neutrino sector, such as the mass spectrum. It requires us to restrict ourselves to the region of sterile neutrino masses $0.1~\text{eV}^2 \lsim m_{J}^2 \lsim 1~\text{MeV}^2$. We assume this in all our subsequent discussions in this paper. \section{The oscillation probabilities in the 3 active and $N$ sterile model in vacuum} \label{sec:oscillation-P-vac} Given the Hamiltonian in (\ref{H-3+N-vac}), it is straightforward to compute the neutrino oscillation probabilities $P(\nu_\beta \rightarrow \nu_\alpha)$ in vacuum, where the Greek indices $\alpha, \beta, ... = e, \mu, \tau$. Let us start by showing that there is no zero-distance transition in our $(3+N)$ space unitary model. 
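The computation just mentioned is also easy to carry out numerically. The sketch below (our own illustration; the random mixing matrix and the mass spectrum are assumptions) evaluates $P(\nu_\beta \rightarrow \nu_\alpha) = |S_{\alpha\beta}|^2$ by direct evolution of the mass eigenstates, and checks that the probability matrix reduces to the identity at $x=0$:

```python
import numpy as np

def osc_prob(U, m2, E, x):
    """P(nu_beta -> nu_alpha) in the (3+N) space unitary model (vacuum, natural units).
    Returns the 3 x 3 active-flavor block P[alpha, beta]."""
    phases = np.exp(-1j * m2 * x / (2.0 * E))    # e^{-i m_a^2 x / 2E} for each eigenstate
    S = U @ np.diag(phases) @ U.conj().T         # full (3+N) x (3+N) amplitude matrix
    return np.abs(S[:3, :3]) ** 2

# illustrative inputs: random unitary mixing for 3 active + 2 sterile states
rng = np.random.default_rng(1)
z = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
U, _ = np.linalg.qr(z)                           # a random 5 x 5 unitary matrix
m2 = np.array([0.0, 7.5e-5, 2.4e-3, 1.0, 10.0])  # eV^2 (assumed spectrum)

P0 = osc_prob(U, m2, E=1.0e9, x=0.0)
assert np.allclose(P0, np.eye(3))                # no zero-distance transition
```

At $x=0$ all phase factors equal unity, so $S = {\bf U}{\bf U}^{\dagger} = {\bf 1}$ regardless of the masses, which is the numerical counterpart of the statement proven analytically below.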
\subsection{No zero distance transition in $(3+N) \times (3+N)$ unitary system} \label{sec:zero-distance} The oscillation probabilities take the form \begin{eqnarray} P(\nu_\beta \rightarrow \nu_\alpha) &= & \left| ~\sum_a {\bf U}_{\alpha a} {\bf U}^*_{\beta a} ~e^{-i\frac{m_a^2 x}{2E}} ~\right|^2 \nonumber\\ &= & \left| ~\sum_{a = 1}^{3+N} {\bf U}_{\alpha a} {\bf U}^*_{\beta a} ~\right|^2 - 2 \sum_{b \neq a} \mbox{Re}[{\bf U}_{\alpha a} {\bf U}_{\beta a}^* {\bf U}_{\alpha b}^* {\bf U}_{\beta b}] \sin^2 \left(\frac{ \Delta m^2_{ba} x}{ 4E }\right)\nonumber\\ &- &\sum_{b \neq a} \mbox{Im}[ {\bf U}_{\alpha a} {\bf U}_{\beta a}^* {\bf U}_{\alpha b}^* {\bf U}_{\beta b} ] \sin \left(\frac{ \Delta m^2_{ba} x}{ 2E } \right). \label{eq:P-ba-general} \end{eqnarray} At $x=0$, $P(\nu_\beta \rightarrow \nu_\alpha) = \delta_{\alpha \beta}$ thanks to unitarity of the ${\bf U}$ matrix. It means, of course, no zero distance transition in the $(3+N)$ space unitary model. This is in sharp contrast to the feature possessed by the high-scale unitarity violation \cite{Antusch:2006vwa,Escrihuela:2015wra}. \subsection{The oscillation probabilities in the ($3 + N$) model} \label{sec:oscillation-P} Here, we derive the expressions of the oscillation probabilities in our ($3 + N$) model when the active-sterile and sterile-sterile oscillations are averaged out. For this purpose we define a new notation of the $(3+N) \times (3+N)$ unitary matrix {\bf U}. It can be parameterized as~\cite{Escrihuela:2015wra} \begin{eqnarray} {\bf U} = \left[ \begin{array}{cc} U & W \\ Z & V \\ \end{array} \right], \label{U-parametrize} \end{eqnarray} satisfying ${\bf U} {\bf U}^\dagger = {\bf U}^\dagger{\bf U} = 1_{(3+N)\times (3+N)}$. 
The active space mixing matrix $U$ is a $3 \times 3$ matrix with elements $U_{\alpha i}$, the rectangular matrices $W$ and $Z$ are respectively $3 \times N$ and $N \times 3$ matrices with elements $W_{\alpha I}$ and $Z_{I \alpha}$, and the square matrix $V$ is an $N \times N$ matrix with elements $V_{I J}$. To develop a general framework, we do not make any assumptions on the size of the $W$ and $Z$ matrix elements (besides $|W|, |Z| < 1$) in this paper. The oscillation probability is written in terms of the $S$ matrix as $P(\nu_\beta \rightarrow \nu_\alpha) = \vert S_{\alpha \beta} \vert^2$, with \begin{eqnarray} S_{\alpha \beta} &=& \sum_{k=1}^{3} U_{\alpha k} U^{*}_{\beta k} e^{ - i \Delta_{k} x} + \sum_{K=4}^{3+N} W_{\alpha K} W^{*}_{\beta K} e^{ - i \Delta_{K} x}, \label{S-elements-3+N-vac} \end{eqnarray} where $\Delta_{k(K)} \equiv { m^2_{k(K)} }/{(2E)}$ as defined in (\ref{Delta-def}), and the elements of the $3\times N$ matrix $W$ are defined as \begin{eqnarray} W \equiv \left[ \begin{array}{ccc} W_{e\, 4} &\ W_{e\,5} &...\ \ W_{e\,3+N}\\ W_{\mu\, 4} &\ W_{\mu\, 5} &...\ \ W_{\mu\, 3+N}\\ W_{\tau\, 4} &\ W_{\tau\, 5} &...\ \ W_{\tau\, 3+N} \end{array} \right], \label{W-parametrize} \end{eqnarray} such that its integer index, indicated by capital letters like $I,J$ and $K$ (which run from 4 to $3+N$), always refers to a sterile neutrino mass eigenstate. Upon squaring the $S$ matrix, $P(\nu_\beta \rightarrow \nu_\alpha)$ has three terms: the first and second terms squared and the interference term, each of which can be easily computed. 
They are given, in order, as \begin{eqnarray} && P(\nu_\beta \rightarrow \nu_\alpha) = \left| \sum_{k=1}^{3} U_{\alpha k} U^{*}_{\beta k} \right|^2 \nonumber\\ &-& 2 \sum_{j \neq k} \mbox{Re} \left( U_{\alpha j}^* U_{\beta j} U_{\alpha k} U^{*}_{\beta k} \right) \sin^2 \frac{ ( \Delta_{k} - \Delta_{j} ) x }{ 2 } + \sum_{j \neq k} \mbox{Im} \left( U_{\alpha j}^* U_{\beta j} U_{\alpha k} U^{*}_{\beta k} \right) \sin ( \Delta_{k} - \Delta_{j} ) x \nonumber\\ &+& \sum_{J} \vert W_{\alpha J} \vert^2 \vert W_{\beta J} \vert^2 \nonumber\\ &+& \sum_{J \neq K} \left[ \mbox{Re} \left( W_{\alpha J}^* W_{\beta J} W_{\alpha K} W^{*}_{\beta K} \right) \cos ( \Delta_{K} - \Delta_{J} ) x + \mbox{Im} \left( W_{\alpha J}^* W_{\beta J} W_{\alpha K} W^{*}_{\beta K} \right) \sin ( \Delta_{K} - \Delta_{J} ) x \right] \nonumber\\ &+& 2 \sum_{j=1}^{3} \sum_{K=4}^{3+N} \left[ \mbox{Re} \left( U_{\alpha j}^{*} U_{\beta j} W_{\alpha K} W^{*}_{\beta K} \right) \cos ( \Delta_{K} - \Delta_{j} ) x + \mbox{Im} \left( U_{\alpha j}^{*} U_{\beta j} W_{\alpha K} W^{*}_{\beta K} \right) \sin ( \Delta_{K} - \Delta_{j} ) x \right]. \nonumber \\ \label{P-beta-alpha-vac} \end{eqnarray} We notice that the last two lines vanish after averaging over energy resolution, as discussed in section~\ref{sec:low-E-unitarity-violation}. Then, we obtain the expressions of oscillation probabilities in our $(3 + N)$ model in vacuum. 
In the appearance channel, $\alpha \neq \beta$, it reads \begin{eqnarray} P(\nu_\beta \rightarrow \nu_\alpha) &=& \mathcal{C}_{\alpha \beta} + \left| \sum_{j=1}^{3} U_{\alpha j} U^{*}_{\beta j} \right|^2 - 2 \sum_{j \neq k} \mbox{Re} \left( U_{\alpha j} U_{\beta j}^* U_{\alpha k}^* U_{\beta k} \right) \sin^2 \frac{ ( \Delta_{k} - \Delta_{j} ) x }{ 2 } \nonumber\\ &-& \sum_{j \neq k} \mbox{Im} \left( U_{\alpha j} U_{\beta j}^* U_{\alpha k}^* U_{\beta k} \right) \sin ( \Delta_{k} - \Delta_{j} ) x, \label{P-beta-alpha-ave-vac} \end{eqnarray} and in the disappearance channel \begin{eqnarray} P(\nu_\alpha \rightarrow \nu_\alpha) = \mathcal{C}_{\alpha \alpha } + \left( \sum_{j=1}^{3} \vert U_{\alpha j} \vert^2 \right)^2 - 4 \sum_{ k > j }^{3} \vert U_{\alpha j} \vert^2 \vert U_{\alpha k} \vert^2 \sin^2 \frac{ ( \Delta_{k} - \Delta_{j} ) x }{ 2 }, \label{P-alpha-alpha-ave-vac} \end{eqnarray} where \begin{eqnarray} \mathcal{C}_{\alpha \beta} \equiv \sum_{J=4}^{3+N} \vert W_{\alpha J} \vert^2 \vert W_{\beta J} \vert^2, \hspace{10mm} \mathcal{C}_{\alpha \alpha } \equiv \sum_{J=4}^{3+N} \vert W_{\alpha J} \vert^4. \label{Cab-Caa} \end{eqnarray} One should notice that, after averaging over the high-frequency sterile oscillations, the expressions in (\ref{P-beta-alpha-ave-vac}) and (\ref{P-alpha-alpha-ave-vac}) contain terms which look like a ``zero-distance flavor transition''. But this cannot be the correct interpretation, because the averaging procedure (even though it acts on the energy spectrum) inherently involves a certain distance scale over which destructive interference cancels the oscillatory behavior. The expressions of the oscillation probabilities in (\ref{P-beta-alpha-ave-vac}) and (\ref{P-alpha-alpha-ave-vac}) look similar to those of the standard three-flavor mixing. 
But, there are two important differences: \begin{itemize} \item The active space mixing matrix $U$ is not unitary. \item There is a probability leaking term to the sterile neutrino sector, $\mathcal{C}_{\alpha \beta}$ in (\ref{P-beta-alpha-ave-vac}) and $\mathcal{C}_{\alpha \alpha }$ in (\ref{P-alpha-alpha-ave-vac}). \end{itemize} \noindent The former is a common feature of theories in which unitarity is violated in the active neutrino subspace. In the unitary case the second term in (\ref{P-beta-alpha-ave-vac}) reduces to $\delta_{\alpha \beta}$. The second point above, the existence of the probability leaking term, is, on the other hand, the characteristic feature of low-scale unitarity violation. However, this term is omitted in the expressions of the oscillation probability in the literature, e.g., in refs.~\cite{Parke:2015goa,Qian:2013ora}, and was considered only for some specific models of sterile neutrinos, e.g., in \cite{Maltoni:2007zf,Li:2015oal}. Does the leaking term introduce a heavy model dependence into the predictions of our $(3+N)$ model? The answer is no: though it indeed displays some sterile-sector model dependence, it is only a mild one. That is, the term can be treated as a channel-dependent constant $\mathcal{C}_{\alpha \beta}$ when this formula is used to analyze leptonic unitarity violation in vacuum. We emphasize that the clearest evidence for low-scale unitarity violation would be a demonstration of the existence of the probability leaking constant $\mathcal{C}_{\alpha \beta} \equiv \sum_{J=4}^{3+N} \vert W_{\alpha J}\vert^2 \vert W_{\beta J} \vert^2$. Unfortunately, this would not be easy to carry out, for two reasons: (1) the term is small in size because it is fourth order in the unitarity-violating elements $W_{\alpha J}$, and (2) it is just a constant term and hence could be confused with the uncertainty in the flux normalization of neutrino beams. 
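The hierarchy between the leaking term ($\sim W^4$) and the unitarity deficit of $U$ ($\sim W^2$) can be made explicit with a small numerical sketch (our own illustration; the mixing scale $\epsilon$ and the matrix dimensions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
eps = 0.05                                      # assumed small active-sterile mixing scale

# Build a (3+N) x (3+N) unitary matrix close to the identity, so that the
# off-diagonal active-sterile block W is of order eps:
M = np.eye(5) + eps * (rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5)))
Q, R = np.linalg.qr(M)
U_full = Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))   # unitary, near the identity

U, W = U_full[:3, :3], U_full[:3, 3:]           # blocks of eq. (U-parametrize)

# C[alpha, beta] = sum_J |W_{alpha J}|^2 |W_{beta J}|^2, eq. (Cab-Caa):
C = (np.abs(W) ** 2) @ (np.abs(W) ** 2).T
deficit = 1.0 - np.sum(np.abs(U) ** 2, axis=1)  # 1 - sum_i |U_{alpha i}|^2, order eps^2

# the leaking constant is fourth order in W, while the deficit is second order:
assert np.all(C.diagonal() <= deficit ** 2 + 1e-12)
print(C[0, 0], deficit[0])                      # C_ee is far smaller than the deficit
```

Here the deficit $1 - \sum_i |U_{\alpha i}|^2$ equals $\sum_J |W_{\alpha J}|^2$ exactly, by row unitarity of the full matrix, so $\mathcal{C}_{\alpha\alpha}$ is suppressed by one more power of $\epsilon^2$.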
Apart from the probability leaking term $\mathcal{C}_{\alpha \beta}$ ($\alpha = \beta$, $\alpha \neq \beta$), our formulas agree with those of ref.~\cite{Escrihuela:2015wra}. On the other hand, the oscillation probability formulas in ref.~\cite{Antusch:2006vwa} have an extra normalization factor, so at first sight the two do not seem to agree with each other although they both deal with high-scale unitarity violation. But since the normalization factor cancels against those included in the neutrino cross sections, they are consistent, provided the probability formulas in \cite{Escrihuela:2015wra} are understood as the ones after the cancellation. \subsection{$(3+N)$ state space unitarity and constraint on the probability leaking term} \label{sec:constraint} In our three active plus $N$ sterile neutrino model, unitarity holds in the whole $(3+N)$ state space, ${\bf U} {\bf U}^{\dagger} = {\bf U}^{\dagger} {\bf U} = {\bf 1}$. In the active $3 \times 3$ subspace it takes the form \begin{eqnarray} U U^{\dagger} + W W^{\dagger} = 1_{3 \times 3}, \hspace{10mm} U^{\dagger} U + Z^{\dagger}Z = 1_{3 \times 3}. \label{eqn:unitarity} \end{eqnarray} The first relation in (\ref{eqn:unitarity}) implies that the size of the probability leaking terms, $\mathcal{C}_{\alpha \beta}$ or $\mathcal{C}_{\alpha \alpha}$, and the size of unitarity violation in the active space $U$ matrix are related to each other. In fact, it is easy to derive upper and lower bounds on $\mathcal{C}_{\alpha \beta} = \sum_{J=4}^{3+N} \vert W_{\alpha J} \vert^2\vert W_{\beta J} \vert^2$ and $\mathcal{C}_{\alpha \alpha} = \sum_{J=4}^{3+N} \vert W_{\alpha J} \vert^4$. 
One can start from \begin{eqnarray} \left( \sum_{I} \vert W_{\alpha I} \vert^2 \right) \left( \sum_{J} \vert W_{\beta J} \vert^2 \right) &=& \sum_{J} \vert W_{\alpha J} \vert^2 \vert W_{\beta J} \vert^2 + \sum_{I \neq J} \vert W_{\alpha I} \vert^2 \vert W_{\beta J} \vert^2 \hspace{6mm} (\alpha \neq \beta), \nonumber \\ \left( \sum_{J} \vert W_{\alpha J} \vert^2 \right)^2 &=& \sum_{J} \vert W_{\alpha J} \vert^4 + \sum_{I \neq J} \vert W_{\alpha I} \vert^2 \vert W_{\alpha J} \vert^2. \end{eqnarray} Since the last terms are non-negative, we obtain the upper bounds\footnote{ As pointed out in ref.~\cite{Parke:2015goa}, for the $\alpha \neq \beta$ and $i \neq j$ cases respectively, there are two relevant bounds that can be obtained by applying the Cauchy-Schwarz inequality to the unitarity constraints \eqref{eqn:unitarity}: $|\sum_{i=1}^3 U_{\alpha i} U_{\beta i}^*|^2 \leq \left( 1- \sum_{j=1}^{3} \vert U_{\alpha j} \vert^2 \right) \left( 1- \sum_{j=1}^{3} \vert U_{\beta j} \vert^2 \right)$ and $|\sum_{\alpha=e}^\tau U_{\alpha i} U_{\alpha j}^*|^2 \leq \left( 1- \sum_{\alpha=e}^{\tau} \vert U_{\alpha i} \vert^2 \right) \left( 1- \sum_{\alpha=e}^{\tau} \vert U_{\alpha j} \vert^2 \right)$. These bounds are relevant when studying neutrino appearance $\nu_\alpha \to \nu_\beta$. } \begin{eqnarray} \mathcal{C}_{\alpha \beta} &\leq& \left( 1- \sum_{j=1}^{3} \vert U_{\alpha j} \vert^2 \right) \left( 1- \sum_{j=1}^{3} \vert U_{\beta j} \vert^2 \right) \hspace{6mm} (\alpha \neq \beta), \nonumber \\ \mathcal{C}_{\alpha \alpha} &\leq& \left( 1- \sum_{j=1}^{3} \vert U_{\alpha j} \vert^2 \right)^2. 
\label{eqn:H-max-ab} \end{eqnarray} The lower bounds are slightly nontrivial; they are derived in appendix~\ref{sec:bound-C}: \begin{eqnarray} \mathcal{C}_{\alpha \beta} &\geq& \frac{ 1 }{ N } \left( 1- \sum_{j=1}^{3} \vert U_{\alpha j} \vert^2 \right) \left( 1- \sum_{j=1}^{3} \vert U_{\beta j} \vert^2 \right) \hspace{6mm} (\alpha \neq \beta), \nonumber \\ \mathcal{C}_{\alpha \alpha} &\geq& \frac{ 1 }{ N } \left( 1- \sum_{j=1}^{3} \vert U_{\alpha j} \vert^2 \right)^2. \label{eqn:H-min-ab} \end{eqnarray} The lower bounds depend on $N$, and therefore they are sterile-sector model dependent. But, since the upper bounds are more restrictive, as we will see in the analysis in section~\ref{sec:JUNO}, we assume the least restrictive case, $N=\infty$, there. Using (\ref{eqn:H-max-ab}) and (\ref{eqn:H-min-ab}), and the fact that $(1 - \sum_{i=1}^{3} |U_{\alpha i}|^2)$ and $\mathcal{C}_{\alpha \alpha}$ are both positive, one can derive the bound $\sqrt{ \mathcal{C}_{\alpha \alpha} } \leq (1 - \sum_{i=1}^{3} |U_{\alpha i}|^2) \leq \sqrt{ N \mathcal{C}_{\alpha \alpha} }$. Suppose that the analysis of future experimental data indicates unitarity violation with nonzero values of $\mathcal{C}_{\alpha \alpha}$ and $(1 - \sum_{i=1}^{3} |U_{\alpha i}|^2)$, and that the data show $(1 - \sum_{i=1}^{3} |U_{\alpha i}|^2) = \sqrt{M \mathcal{C}_{\alpha \alpha} }$. Then, the $(3+N)$ space unitary model with $N < M$ is excluded. 
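The bounds above for the diagonal case $\alpha = \beta$ are easy to verify numerically. A sketch (our own illustration; the value of $N$ and the random mixing matrix are assumptions) using a generic random unitary matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4                                            # number of sterile states (assumed)
z = rng.normal(size=(3 + N, 3 + N)) + 1j * rng.normal(size=(3 + N, 3 + N))
U_full, _ = np.linalg.qr(z)                      # random (3+N) x (3+N) unitary matrix

U, W = U_full[:3, :3], U_full[:3, 3:]            # blocks of eq. (U-parametrize)
deficit = 1.0 - np.sum(np.abs(U) ** 2, axis=1)   # equals sum_J |W_{alpha J}|^2 by unitarity
C_aa = np.sum(np.abs(W) ** 4, axis=1)            # leaking constants C_{alpha alpha}

# diagonal case of eqs. (eqn:H-max-ab) and (eqn:H-min-ab):
assert np.all(C_aa <= deficit ** 2 + 1e-12)      # sum of squares <= square of sum
assert np.all(C_aa >= deficit ** 2 / N - 1e-12)  # Cauchy-Schwarz lower bound
# equivalently: sqrt(C_aa) <= deficit <= sqrt(N * C_aa)
```

The upper bound follows because a sum of squares never exceeds the square of the sum of non-negative terms, and the lower bound is the Cauchy-Schwarz inequality applied to the $N$ terms $|W_{\alpha J}|^2$.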
\subsection{Summarizing our method of testing leptonic unitarity} \label{sec:summary} We can now summarize our method of testing the $(3+N)$ model of low-scale unitarity violation in vacuum: \vspace{3mm} We fit the data using two ans\"atze: (1) the standard three-flavor mixing with the unitary mixing matrix $U_\text{PDG}$ \cite{Agashe:2014kda}, and (2) the expressions of the oscillation probabilities in (\ref{P-beta-alpha-ave-vac}) and (\ref{P-alpha-alpha-ave-vac}), with the non-unitary $U$ matrix and the probability leaking terms $\mathcal{C}_{\alpha \beta}$ and/or $\mathcal{C}_{\alpha \alpha }$. In the latter fit, it is important to impose the constraints (\ref{eqn:H-max-ab}) on $\mathcal{C}_{\alpha \beta}$ and $\mathcal{C}_{\alpha \alpha }$. In section~\ref{sec:JUNO} we present an analysis of simulated JUNO data within our formalism. \vspace{3mm} One can think of various features of the fit results that can be obtained in this way. To discuss possible implications, let us assume for conceptual clarity that a set of super-high-precision measurements were done by experiments with a perfectly controlled neutrino beam. \begin{itemize} \item If the fit results using (1) the standard three-flavor mixing and (2) the $(3+N)$ model reveal only a small difference between them, it is an indication of the absence of unitarity violation. One can then obtain quantitative bounds on how severely unitarity violation is constrained. \item If the fit reveals a discrepancy between (1) and (2), it is an indication of unitarity violation. It is likely that the first indication of unitarity violation comes from nonzero values of $1 - \sum_{i=1}^{3} \vert U_{\alpha i} \vert^2$ ($\alpha = e, \mu, \tau$) in the disappearance channels, and/or $\left| \sum_{j=1}^{3} U_{\alpha j} U^{*}_{\beta j} \right|$ in the appearance channels. They are both of the order of $W^2$. 
\item If the measurement is sufficiently accurate to detect nonzero values of $\mathcal{C}_{\alpha \beta}$ ($\alpha \neq \beta$ and/or $\alpha = \beta$), which are of the order of $W^4$, in addition to nonzero $1 - \sum_{i=1}^{3} \vert U_{\alpha i} \vert^2$ and/or $\left| \sum_{j=1}^{3} U_{\alpha j} U^{*}_{\beta j} \right|$, it is a hint of low-scale unitarity violation. \item Suppose that the fit reveals a discrepancy between (1) and (2), indicating unitarity violation, but the fitted values of $\mathcal{C}_{\alpha \beta}$ ($\alpha \neq \beta$ and/or $\alpha = \beta$) lie outside the region allowed by the constraints (\ref{eqn:H-max-ab}). Nonvanishing $\mathcal{C}_{\alpha \beta}$ suggests unitarity violation at low energies, but such a result implies that either the conditions \eqref{eq:decoherence_x} and \eqref{eq:decoherence_E} are not satisfied, or the scenario cannot be described by our $(3+N)$ space unitary model. \end{itemize} \noindent The final consistency check for proving low-scale unitarity violation in the third case above is to verify (i) the consistency between the magnitudes of $\mathcal{C}_{\alpha \beta}$ ($\sim W^4$) and $1 - \sum_{i=1}^{3} \vert U_{\alpha i} \vert^2$ and/or $\left| \sum_{j=1}^{3} U_{\alpha j} U^{*}_{\beta j} \right|$ ($\sim W^2$), and (ii) the overall consistency between the deviation from unitarity of the $U$ matrix and the size of the $W$ matrix expected from the $(3+N)$ space unitarity (\ref{eqn:unitarity}). We note that the relative magnitudes of $\mathcal{C}_{\alpha \beta}$ and $1 - \sum_{i=1}^{3} \vert U_{\alpha i} \vert^2$ (or $\left| \sum_{j=1}^{3} U_{\alpha j} U^{*}_{\beta j} \right|$) are also enforced by the upper and lower bounds (\ref{eqn:H-max-ab}) and (\ref{eqn:H-min-ab}), and therefore this property is at the heart of the $(3+N)$ space unitary model. 
A clarifying remark is in order: in the appearance oscillation probability (\ref{P-beta-alpha-ave-vac}), $\left| \sum_{j=1}^{3} U_{\alpha j} U^{*}_{\beta j} \right|$ enters squared, so this term is of the same order, $\sim W^4$, as the leaking term $\mathcal{C}_{\alpha \beta}$. Therefore, one might think that better accuracy should not be expected for $\left| \sum_{j=1}^{3} U_{\alpha j} U^{*}_{\beta j} \right|$. The statement above that ``$\left| \sum_{j=1}^{3} U_{\alpha j} U^{*}_{\beta j} \right|$ is the first indicator of unitarity violation'' really means that the non-unitary $U$ matrix elements are determined mostly by the $x/E$ dependent oscillation terms, which in turn determine (or strongly constrain) $\left| \sum_{j=1}^{3} U_{\alpha j} U^{*}_{\beta j} \right|$; in this way, better accuracy is expected for $\left| \sum_{j=1}^{3} U_{\alpha j} U^{*}_{\beta j} \right|$. A similar statement holds for the disappearance channel. \section{Unitarity violation: Case study using a JUNO-like setting and the current constraints } \label{sec:JUNO} In this section we carry out the first test of our framework describing low-scale unitarity violation by applying it to data to be obtained by medium-baseline reactor neutrino experiments. For definiteness we assume the JUNO-like setting defined below.\footnote{ A similar analysis of simulated JUNO data in the context of a leptonic unitarity test was carried out in ref.~\cite{Qian:2013ora}. See also section 3.3 of \cite{JUNO}. } We define our analysis method in section \ref{sec:method} and present the results in section~\ref{sec:result}. In the course of describing the results of our analysis, a comparison with the constraints currently available for the $\nu_{e}$ channel will be made. For the $\nu_{\mu}$- and $\nu_{\tau}$-related channels, we will give a brief overview of the current constraints in section~\ref{sec:constraints}, together with miscellaneous remarks on the $\nu_{e}$ channel. 
In our analysis using the JUNO-like setting, we give special attention to the probability leaking term $\mathcal{C}_{\alpha \alpha}$ ($\alpha=e$) in eq.~(\ref{P-alpha-alpha-ave-vac}), as discussed in section~\ref{sec:summary}. Of course, estimating JUNO's capability of constraining (or probing) the non-unitary nature of the active-space $U$ matrix in the $\nu_{e}$ sector is a very interesting point by itself. Yet, we must admit that our analysis, based on a simple-minded $\chi^2$, cannot be considered a fully quantitative one. We use the expression of the disappearance probability $P(\nu_\alpha \rightarrow \nu_\alpha)$ ($\alpha=e$) in eq.~(\ref{P-alpha-alpha-ave-vac}) for the reactor neutrino analysis because it is identical to $P(\bar{\nu}_\alpha \rightarrow \bar{\nu}_\alpha)$ assuming CPT invariance. \subsection{Analysis method} \label{sec:method} We basically follow the analysis done in \cite{Abrahao:2015rba} with some modifications and simplifications. In our statistical analysis, we define the $\chi^2$ function which consists of two terms, \begin{eqnarray} \chi^2 \equiv \chi^2_\text{stat} +\chi^2_\text{sys}. \label{eq:chi2} \end{eqnarray} In the present analysis we do not take into account any data except for JUNO, not even the precision measurement of $\sin^2 \theta_{13}$ by Daya Bay and RENO \cite{An:2015rpe,RENO:2015ksa}, which is expected to be improved to the $\sim 3$\% level. On this point, we will make a comment in section~\ref{sec:result}. 
Following~\cite{Ge:2012wj,Capozzi:2013psa}, the $\chi^2_\text{stat}$ is defined as \begin{eqnarray} \chi^2_\text{stat} \equiv \int_0^{{E^\text{max}_\text{vis}}} dE_\text{vis} \left( \frac{ \displaystyle \frac{dN^\text{obs}}{dE_\text{vis}} -f_\text{norm}\sum_{i=\text{reac}} \frac{dN^\text{fit}_i}{dE_\text{vis}} } {\displaystyle \sqrt{ \frac{dN^\text{obs}}{ dE_\text{vis}} } } \right)^2, \label{eq:chi2_stat} \end{eqnarray} where $dN^\text{obs} / dE_\text{vis}$ denotes the energy distribution of the observed (simulated) signal, and $f_\text{norm}$ is the flux normalization parameter for reactor neutrinos, which is varied freely subject to the pull term in $\chi^2_\text{sys}$ (see below); the integration extends up to $E^\text{max}_\text{vis} = 8$ MeV. For lack of space, we do not describe here how to compute the event number distribution $dN^\text{obs} / dE_\text{vis}$, leaving it to appendix \ref{number-of-events}. We consider only one kind of systematic error, to take into account the reactor neutrino flux uncertainty, \begin{eqnarray} \chi^2_\text{sys} \equiv \left(\frac{1-f_\text{norm} } { \sigma_{{f}_\text{norm}} } \right)^2 \label{eq:chi2_sys}, \end{eqnarray} and use $\sigma_{f_\text{norm}}$ = 3\% as the reference value, assuming progress in the understanding of the reactor neutrino flux by the JUNO measurement era. Yet, given the current status of simulating the reactor neutrino flux, we also examine the case of $\sigma_{f_\text{norm}}$ = 6\% for comparison. There are five relevant free parameters to be fitted in our analysis, namely, $|U_{e1}|^2$, $|U_{e2}|^2$, $\sum_{i=1}^3 |U_{ei}|^2$, $\mathcal{C}_{ee}$, as well as the flux normalization parameter, $f_\text{norm}$. These five parameters are varied freely under the conditions \begin{eqnarray} \sum_{i=1}^3 |U_{ei}|^2 \le 1, \hspace{10mm} 0 \le \mathcal{C}_{ee} \le (1-\sum_{i=1}^3 |U_{ei}|^2)^2, \label{eq:restrictions} \end{eqnarray} as well as with the $\chi^2_\text{sys}$ defined in (\ref{eq:chi2_sys}). 
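For concreteness, the $\chi^2$ of eqs.~(\ref{eq:chi2})--(\ref{eq:chi2_sys}) can be sketched in a binned form. The following minimal sketch uses toy spectra of our own choosing (they are not the simulated JUNO spectra), with a simple scan over $f_\text{norm}$ standing in for the marginalization:

```python
def chi2(obs, fit, f_norm, sigma_f=0.03):
    """Binned version of chi^2 = chi^2_stat + chi^2_sys:
    chi^2_stat sums (obs - f_norm * fit)^2 / obs over visible-energy bins,
    chi^2_sys is the single pull term for the flux normalization f_norm."""
    chi2_stat = sum((o - f_norm * m) ** 2 / o for o, m in zip(obs, fit))
    chi2_sys = ((1.0 - f_norm) / sigma_f) ** 2
    return chi2_stat + chi2_sys

# toy event-rate spectra per visible-energy bin (illustrative numbers only)
obs = [1200.0, 2500.0, 1800.0, 900.0]
fit = [1180.0, 2530.0, 1790.0, 905.0]

# marginalize over f_norm by a simple scan around unity
chi2_min = min(chi2(obs, fit, 1.0 + 0.001 * k) for k in range(-100, 101))
```

The scan illustrates the pull-term treatment: $f_\text{norm}$ is free, but any departure from unity pays the price $((1-f_\text{norm})/\sigma_{f_\text{norm}})^2$.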
For simplicity, we fix the two mass squared differences as $\Delta m^2_{21} = 7.5 \times 10^{-5}$ eV$^2$, $\Delta m^2_{31} = 2.46 \times 10^{-3}$ eV$^2$ and consider only the case of the normal mass hierarchy. We believe that even if we vary them, our results would not change significantly. Using the $\chi^2$ function, we will determine the allowed ranges of the five parameters mentioned above, which will be projected onto two- or one-dimensional subspaces by using the conditions, \begin{equation} \Delta \chi^2 \equiv \chi^2 - \chi^2_{{\text{min}}} = 2.30,\ 6.18\ \text{and}\ 11.83 \ (1,\ 4\ \text{and} \ 9), \end{equation} at 1, 2 and 3 $\sigma$ CL, respectively, for two (one) degrees of freedom. The allowed contours obtained by following the above procedure for the cases of flux normalization uncertainties of 3\% and 6\% are presented in figures~\ref{fig:U-UV-comparison3} and \ref{fig:U-UV-comparison6}, respectively. Since we consider an input which corresponds to the case without unitarity violation, $\chi^2_\text{min}=0$ by construction, as we do not take into account statistical fluctuations in simulating the artificial data. To understand better the features of the allowed contours in figures~\ref{fig:U-UV-comparison3} and \ref{fig:U-UV-comparison6}, we have also performed the analysis using the same procedure as above but without the constraints (\ref{eq:restrictions}). The results of such analysis with $\sigma_{f_\text{norm}}$ = 3\% are given in figure~\ref{fig:UV-C-NC} in appendix~\ref{sec:C-NC}. \begin{figure}[h!] \begin{center} \hspace{-18mm} \includegraphics[bb=0 0 792 600,width=1.1\textwidth]{allowed-regions-uncert-3perc.pdf} \end{center} \vspace{-5mm} \caption{ Regions allowed for the five parameters $|U_{e1}|^2$, $|U_{e2}|^2$, $\sum_{i=1}^3 |U_{ei}|^2$, $\mathcal{C}_{ee}$ and $f_\text{norm}$, plotted by projecting onto each two-dimensional subspace at 1$\sigma$, 2$\sigma$ and 3$\sigma$ CL. The case of reactor neutrino flux uncertainty of 3\%. 
The colored solid contours are for the cases with unitarity violation under the conditions $\sum_{i=1}^3 |U_{ei}|^2 \le 1$ and $0 \le \mathcal{C}_{ee} \le (1-\sum_{i=1}^3 |U_{ei}|^2)^2$. The black dashed contours are for the standard unitary case. } \label{fig:U-UV-comparison3} \end{figure} \subsection{Analysis result} \label{sec:result} In this section we present the results of our analysis of the simulated JUNO data, with particular emphasis on the bounds on the parameters $\mathcal{C}_{ee}$ and $1 - \sum_{i=1}^{3} \vert U_{e i} \vert^2$. A nonzero value of $\mathcal{C}_{ee}$ would imply the existence of low-scale unitarity violation, distinguishing it from high-scale unitarity violation. Unfortunately, the size of $\mathcal{C}_{ee}$ is quite small because it is of the order of $W^4$, while the latter parameter, $1 - \sum_{i=1}^{3} \vert U_{e i} \vert^2$, being of the order of $W^2$, should be the first indicator of unitarity violation. We generate the input data without unitarity violation (corresponding to the standard three flavor scheme) but allow non-unitarity in the fit, in order to determine to what extent a JUNO-like experiment can constrain non-unitarity when the data are consistent with the standard three flavor scenario. \subsubsection{Comparison between the unitary and the non-unitary cases} \label{sec:U-UV} In figures~\ref{fig:U-UV-comparison3} and \ref{fig:U-UV-comparison6}, presented are the allowed regions of $\mathcal{C}_{ee}$, $\sum_{i=1}^{3} \vert U_{ei} \vert^2$, $\vert U_{ei} \vert^2$ ($i=1,2$), and the flux normalization $f_{ \text{norm} }$ projected onto the various two-dimensional spaces at 1, 2, and 3 $\sigma$ CL (differentiated by colors), obtained with 5 years of measurement by JUNO.\footnote{ To be more precise, we consider the total exposure corresponding to $5\times 35.8 \times 20 = 3.58\times 10^3$ kt$\cdot$GW$\cdot$yr. 
} The reactor neutrino flux uncertainty is taken as 3\% and 6\% in figures~\ref{fig:U-UV-comparison3} and \ref{fig:U-UV-comparison6}, respectively. Correspondingly, the allowed ranges of the unitarity violation parameters $\mathcal{C}_{ee}$ and $1 - (\vert U_{e 1} \vert^2 + \vert U_{e 2} \vert^2 +\vert U_{e 3} \vert^2)$, as well as of $f_{ \text{norm} }$, $\vert U_{e1} \vert^2$, and $\vert U_{e2} \vert^2$, at 1 and 3 $\sigma$ CL for 1 degree of freedom, are summarized in table~\ref{tab:allowed-range} for both cases of the reactor flux normalization uncertainty, 3\% and 6\%. We first concentrate on the former case ($\sigma_{f_\text{norm}}$=3\%), whose results are given in figure \ref{fig:U-UV-comparison3}. The colored solid contours are for the cases with unitarity violation, while the black dashed contours are for the standard unitary case. Since unitarity is preserved in the true (input) simulated data of JUNO, the contours obtained with the ansatz allowing unitarity violation always contain the ones obtained with the standard unitary ansatz. \begin{table}[h!] \begin{center} \caption{ Ranges allowed at 1$\sigma$ and 3$\sigma$ CL of the five parameters $|U_{e1}|^2$, $|U_{e2}|^2$, $\sum_{i=1}^3 |U_{ei}|^2$, $\mathcal{C}_{ee}$ and $f_\text{norm}$, for one degree of freedom, for 3\% (second and third columns) and 6\% (fourth and fifth columns) uncertainties of the reactor flux normalization. 
} \label{tab:allowed-range} \begin{tabular}{c|cc|cc} \hline parameter & 1$\sigma$ range (3\%) & 3$\sigma$ range (3\%) & 1$\sigma$ range (6\%) & 3$\sigma$ range (6\%) \\ \hline $|U_{e1}|^2$ & [0.668, 0.676] &[0.654, 0.680] & [0.661, 0.676] & [0.632, 0.680] \\ $|U_{e2}|^2$ & [0.299, 0.304] &[0.293, 0.307] & [0.297, 0.304] & [0.285, 0.307] \\ $\sum_{i=1}^3 |U_{ei}|^2$ & [0.989, 1] & [0.968, 1] & [0.979, 1] & [0.941, 1] \\ $\mathcal{C}_{ee}$ & [0, $10^{-4}$] &[0,$10^{-3}$] & [0, $4\times 10^{-4}$] & [0, $4\times 10^{-3}$] \\ \hline $f_{\rm norm}$ & [0.994, 1.02] & [0.983, 1.063] & [0.994, 1.04] & [0.983, 1.13] \\ \hline \end{tabular} \end{center} \end{table} Let us understand some key features of figure~\ref{fig:U-UV-comparison3}. The unitarity violation parameter $1 - \sum_{i=1}^{3} \vert U_{e i} \vert^2$ is determined in strong correlation with the flux normalization $f_\text{norm}$. It enters into the constant term in the probability in eq.~\eqref{P-alpha-alpha-ave-vac} with $\alpha=\beta=e$ as \begin{eqnarray} f_\text{norm} (\mathcal{C}_{ee} + \left\{ \vert U_{e 1} \vert^2 + \vert U_{e 2} \vert^2 + \vert U_{e 3} \vert^2 \right\}^2 ) \simeq f_\text{norm} \left\{ \vert U_{e 1} \vert^2 + \vert U_{e 2} \vert^2 +\vert U_{e 3} \vert^2 \right\}^2, \end{eqnarray} where the approximate equality above is justified because of the smallness of $\mathcal{C}_{ee}$ as seen in figure~\ref{fig:U-UV-comparison3}. Then, it is natural to expect that $1 - \sum_{i=1}^{3} \vert U_{e i} \vert^2$ would be constrained to the accuracy of $\sim 1-\sqrt{1-\sigma_{f_\text{norm}}}\sim 0.015$. It seems to be consistent with figure~\ref{fig:U-UV-comparison3}, and the results given in table~\ref{tab:allowed-range}, $1 - \sum_{i=1}^{3} \vert U_{e i} \vert^2 \leq $ 0.01 (0.03) at 1$\sigma$ (3$\sigma$) CL for one degree of freedom. 
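The accuracy estimate $\sim 1-\sqrt{1-\sigma_{f_\text{norm}}}$ quoted above is simple arithmetic; a quick numerical check for both flux uncertainties (the function name is ours):

```python
import math

def sum_uei2_accuracy(sigma_f):
    """If the data fix f_norm * (sum_i |U_ei|^2)^2, trading a flux shift
    sigma_f against the mixing sum gives sum_i |U_ei|^2 = sqrt(1 - sigma_f),
    i.e. an uncertainty of 1 - sqrt(1 - sigma_f) on sum_i |U_ei|^2."""
    return 1.0 - math.sqrt(1.0 - sigma_f)

print(round(sum_uei2_accuracy(0.03), 4))  # -> 0.0151 for the 3% case
print(round(sum_uei2_accuracy(0.06), 4))  # -> 0.0305 for the 6% case
```

The 3\% case reproduces the $\sim 0.015$ quoted here, and the 6\% case the $\sim 0.03$ used later in the text.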
The probability leaking parameter $\mathcal{C}_{ee}$ is constrained to be small, $\mathcal{C}_{ee} \lsim 2 \times 10^{-4}$ ($10^{-3}$) at 1$\sigma$ (3$\sigma$) CL in figure~\ref{fig:U-UV-comparison3} with two degrees of freedom, and $\mathcal{C}_{ee} < 10^{-4}$ ($10^{-3}$) at 1$\sigma$ (3$\sigma$) CL in table~\ref{tab:allowed-range} with one degree of freedom. The stringent constraints obtained for $\mathcal{C}_{ee}$ can be understood as coming from the upper bound on $\mathcal{C}_{ee}$ in eq.~(\ref{eq:restrictions}), which is imposed in the analysis. Using the above bound on the unitarity violation parameter with one degree of freedom, we expect $\mathcal{C}_{ee} \le (1-\sum_{i=1}^3 |U_{ei}|^2)^2 = 10^{-4}$ ($9 \times 10^{-4}$) at 1$\sigma$ (3$\sigma$) CL, which is quite consistent with the upper bound on $\mathcal{C}_{ee}$ obtained in table~\ref{tab:allowed-range}. Noticing that $1 - \sum_{i=1}^{3} \vert U_{e i} \vert^2$ and $\mathcal{C}_{ee}$ are of the order of $W^2$ and $W^4$, respectively, this means that the $W$ matrix elements are constrained to be of order $\sim 10\%$ by the JUNO measurement.\footnote{ If all the $W$ matrix elements are equal, it means that $\vert W \vert \leq 0.1 / \sqrt{N}$. } Would the inclusion of the precision $\sin^2 \theta_{13}$ data at the $\sim 3\%$ level, to be obtained by future Daya Bay and RENO measurements, significantly improve the sensitivity to unitarity violation? We believe that the answer is no, for the following reason. The accuracy of the measurement of $\sin^2 \theta_{13}$ in JUNO estimated in \cite{Capozzi:2013psa} is at the $\simeq 7\%$ level, which implies the accuracy $\delta( \sin^2 \theta_{13} ) = 1.5 \times 10^{-3}$. It probably means that in our framework the accuracy of the measurement of $\vert U_{e 3} \vert^2$ is $\sim 10^{-3}$, which is an order of magnitude smaller than the 1\% level uncertainty of $1 - \sum_{i=1}^{3} \vert U_{e i} \vert^2$. 
Furthermore, the determination of $1 - \sum_{i=1}^{3} \vert U_{e i} \vert^2$ is very weakly correlated with $\vert U_{e 3} \vert^2$. While $\vert U_{e 3} \vert^2$ is measured by detecting small atmospheric ripples on the long-wavelength solar oscillations, $1 - \sum_{i=1}^{3} \vert U_{e i} \vert^2$ is determined in strong correlation with the flux normalization. Therefore, it is important to reduce the flux uncertainty in order to increase the sensitivity to unitarity violation, and improvement of the $\vert U_{e 3} \vert^2$ measurement would have much less impact on it. To examine the effect of a worsened reactor flux normalization uncertainty, we have repeated the same calculation with a 6\% error, as given in figure~\ref{fig:U-UV-comparison6}. As one can see from the figure, the overall features of the correlations between the quantities of interest are unchanged. The extent of the elongation of the contours due to the worsened flux uncertainty may be estimated once we understand that of the unitarity violation parameter $1 - (\vert U_{e 1} \vert^2 + \vert U_{e 2} \vert^2 +\vert U_{e 3} \vert^2)$. Following the same logic as above, the accuracy of constraining this parameter is expected to be $\sim 1-\sqrt{1-\sigma_{f_\text{norm}}}\sim 0.03$, which is again consistent with figure~\ref{fig:U-UV-comparison6}. \begin{figure}[h!] \begin{center} \hspace{-18mm} \includegraphics[bb=0 0 792 600,width=1.1\textwidth]{allowed-regions-uncert-6perc.pdf} \end{center} \vspace{-5mm} \caption{ The same as in figure~\ref{fig:U-UV-comparison3}, for the case of reactor neutrino flux uncertainty of 6\%. \label{fig:U-UV-comparison6} } \end{figure} To know to what extent JUNO can tighten the current constraints on the $\nu_{e}$ row elements, let us compare our results to the ones obtained in ref.~\cite{Parke:2015goa}. We must remark that the authors of ref.~\cite{Parke:2015goa} assumed a 5\% uncertainty of the reactor neutrino flux. 
We, on the other hand, use our results obtained by assuming a 3\% uncertainty for the comparison. According to the estimate done in this reference (the fourth equation), the current uncertainties of $\vert U_{e 1} \vert^2$ and $\vert U_{e 2} \vert^2$ are 11\% and 18\% at 3$\sigma$ CL, respectively. On the other hand, the results of our analysis with the JUNO-like setting show (see table~\ref{tab:allowed-range}) that at 3$\sigma$ CL the uncertainties of $\vert U_{e 1} \vert^2$ and $\vert U_{e 2} \vert^2$ are, respectively, 1.9\% and 2.3\%. This implies a significant improvement over the current constraints, by a factor of $\simeq 6$ (8) for $\vert U_{e 1} \vert^2$ ($\vert U_{e 2} \vert^2$). For the 6\% reactor flux normalization uncertainty, the uncertainties of both $\vert U_{e 1} \vert^2$ and $\vert U_{e 2} \vert^2$ are 3.7\%, implying a factor of $\simeq$ 3 (5) improvement for $\vert U_{e 1}\vert^2$ ($\vert U_{e 2}\vert^2$). The current constraint on the unitarity violating parameter is $1 - (\vert U_{e 1} \vert^2 + \vert U_{e 2} \vert^2 +\vert U_{e 3} \vert^2) \leq$ 0.074, as one can read off from Fig.~3 of ref.~\cite{Parke:2015goa}. In our JUNO analysis, the unitarity violating parameter for 3\% (6\%) flux normalization uncertainty is constrained to be $1 - (\vert U_{e 1} \vert^2 + \vert U_{e 2} \vert^2 +\vert U_{e 3} \vert^2) \leq 0.032$ (0.059), indicating a modest improvement by a factor of $\simeq 2$ (1.2). The current constraint on $1 - (\vert U_{e 1} \vert^2 + \vert U_{e 2} \vert^2 +\vert U_{e 3} \vert^2)$ suggests that one could obtain the bound $\mathcal{C}_{ee} \le (0.074)^2 \sim 5.5 \times 10^{-3}$, if the analysis were done in a similar way to ours. We stress that the bound on $\mathcal{C}_{ee}$ obtained by our JUNO analysis is stronger by a factor of 5.5. 
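The improvement factors quoted in this comparison follow from simple ratios of the numbers above; for transparency:

```python
# ratios behind the improvement factors quoted in the text
print(round(11 / 1.9, 1))        # |U_e1|^2 at 3 sigma: 11% -> 1.9%, ~5.8
print(round(18 / 2.3, 1))        # |U_e2|^2 at 3 sigma: 18% -> 2.3%, ~7.8

# C_ee bound implied by the current constraint 1 - sum|U_ei|^2 <= 0.074
c_ee_current = 0.074 ** 2
print(round(c_ee_current, 4))             # -> 0.0055
print(round(c_ee_current / 1e-3, 1))      # -> 5.5, the quoted factor
```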
Notice, however, that under the assumption that the bound on $\vert U_{e 4} \vert^2$ obtained in the framework of the $(3+1)$ model translates into the one on $1 - (\vert U_{e 1} \vert^2 + \vert U_{e 2} \vert^2 +\vert U_{e 3} \vert^2)$ in the $(3+N)$ model, the kinematical constraint from beta decay (neutrinoless double beta decay) is more severe than the JUNO bound for massive sterile neutrinos with masses $m^2_{4} \gsim 10^5~{\rm eV}^2$ ($m^2_{4} \gsim 100~{\rm eV}^2$).\footnote{ The bound from neutrinoless double beta decay is valid only if the neutrinos are Majorana particles. } See Fig.~4 in ref.~\cite{deGouvea:2015euy}. \subsubsection{Understanding correlations between the parameters } \label{sec:correlation-Ue12} One observes that, except for the ones which involve $\mathcal{C}_{ee}$, the allowed contours in the non-unitary case are much wider and elongated along particular directions, indicating correlations between the parameters plotted in figure~\ref{fig:U-UV-comparison3}. Let us understand this feature. For this purpose we call the readers' attention to the four bottom panels (g), (h), (i), and (j) in figure~\ref{fig:U-UV-comparison3}. In the left-bottom panel (g), we see that $\vert U_{e1} \vert^2 + \vert U_{e2} \vert^2 + \vert U_{e3} \vert^2$ is restricted to be unity in the unitary case, as it should be. When unitarity violation is allowed, however, the contours expand toward the upper left. The contours cannot expand to the right because $\vert U_{e1} \vert^2 + \vert U_{e2} \vert^2 + \vert U_{e3} \vert^2$ must be equal to or less than unity by $(3+N)$ space unitarity, eq.~(\ref{eqn:unitarity}). They can extend only toward the upper left because a decrease of $\vert U_{e1} \vert^2 + \vert U_{e2} \vert^2 + \vert U_{e3} \vert^2$ has to be compensated by an increase of the flux normalization $f_{ \text{norm} }$. 
This also explains the similar behavior of the contours in panels (i) and (j).\footnote{ Later in this section, we offer an alternative but consistent explanation for these features by using a new representation of the $\bar{\nu}_{e}$ survival probability, eq.~(\ref{eqn:Paa-MNPZ}). } In panel (f), when unitarity violation is introduced, the allowed contours extend toward the lower left, indicating a positive correlation between $\vert U_{e1} \vert^2$ and $\vert U_{e2} \vert^2$. Assuming this positive correlation between $\vert U_{e1} \vert^2$ and $\vert U_{e2} \vert^2$, and taking into account that $|U_{e3}|^2 \ll |U_{e1}|^2, |U_{e2}|^2$, we obtain a positive correlation between $\vert U_{e1} \vert^2$ and $\vert U_{e1} \vert^2 + \vert U_{e2} \vert^2 + \vert U_{e3} \vert^2$ (between $\vert U_{e2} \vert^2$ and $\vert U_{e1} \vert^2 + \vert U_{e2} \vert^2 + \vert U_{e3} \vert^2$), as indicated in panel (b) ((d)). This almost completes the discussion needed to understand the features of the correlations between the quantities plotted in figure~\ref{fig:U-UV-comparison3}. What is left is to understand the reason for the positive correlation between $\vert U_{e1} \vert^2$ and $\vert U_{e2} \vert^2$, to which we now turn. In fact, it is quite a nontrivial feature to understand: if we run the same simulation without the constraint (\ref{eqn:H-max-ab}), we obtain a negative correlation between $\vert U_{e1} \vert^2$ and $\vert U_{e2} \vert^2$. See figure~\ref{fig:UV-C-NC} in appendix~\ref{sec:C-NC}. Here, we focus on the positive correlation between $\vert U_{e1} \vert^2$ and $\vert U_{e2} \vert^2$ seen in figure~\ref{fig:U-UV-comparison3}, and present a model to understand this feature. In appendix~\ref{sec:C-NC}, we will offer a possible explanation of the negative correlation between $\vert U_{e1} \vert^2$ and $\vert U_{e2} \vert^2$ in the case without the constraint. 
We have learned from the results of the analysis that $1 - \sum_{i=1}^{3} \vert U_{e i} \vert^2$ and $\mathcal{C}_{ee}$ are consistently constrained to be small so that $W^2 \lsim 10^{-2}$. It means that the system is nearly unitary. In the unitary case, it is expected that the JUNO setting has sensitivity to the individual $\Delta m^2_{31}$ and $\Delta m^2_{32}$ waves. Let us suppose that this is the case also in the extended parameter space in our $(3+N)$ model. Then, the suitable representation of $P(\bar{\nu}_e \rightarrow \bar{\nu}_e)$ is given by the non-unitary version of the one derived in ref.~\cite{Minakata:2007tn} ($\alpha=e$ below): \begin{eqnarray} && P(\bar{\nu}_\alpha \rightarrow \bar{\nu}_\alpha) = \mathcal{C}_{\alpha \alpha} + \left\{ \vert U_{\alpha 1} \vert^2 + \vert U_{\alpha 2} \vert^2 +\vert U_{\alpha 3} \vert^2 \right\}^2 - 4 \vert U_{\alpha 1} \vert^2 \vert U_{\alpha 2} \vert^2 \sin^2 \frac{ \Delta m^2_{21} x }{ 4E } \nonumber \\ &-& 2\vert U_{\alpha 3}\vert^2 \left( \vert U_{\alpha 1} \vert^2 +\vert U_{\alpha 2}\vert^2 \right) \left[ 1- \sqrt{ 1- 4 XY \sin^2 \frac{ \Delta m^2_{21} x }{ 4E } } ~\cos \left( \frac{\Delta m^2_{\alpha \alpha} x}{2E} \pm \phi^\alpha_\odot \right) \right], \label{eqn:Paa-MNPZ} \end{eqnarray} where \begin{eqnarray} X \equiv \frac{\vert U_{\alpha 1} \vert^2}{ \vert U_{\alpha 1} \vert^2 +\vert U_{\alpha 2}\vert^2 }, \hspace{10mm} Y \equiv \frac{\vert U_{\alpha 2} \vert^2}{ \vert U_{\alpha 1} \vert^2 +\vert U_{\alpha 2}\vert^2 }, \end{eqnarray} and \begin{eqnarray} \Delta m^2_{\alpha \alpha} &\equiv& X \vert \Delta m^2_{31} \vert + Y \vert \Delta m^2_{32} \vert, \nonumber \\ \phi^{\alpha}_\odot &= & \arctan \left[ \left( X - Y \right) \tan \left( \frac{ \Delta m^2_{21} x }{ 4E } \right) \right] - \left( X - Y \right) \left( \frac{ \Delta m^2_{21} x }{ 4E } \right). \end{eqnarray} $\phi^\alpha_\odot$ is a slowly varying function of $x/E$ which depends only on the solar parameters, see \cite{Minakata:2007tn}. 
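In the unitary limit ($\mathcal{C}_{ee}=0$, $\sum_i \vert U_{ei}\vert^2 = 1$), eq.~(\ref{eqn:Paa-MNPZ}) must coincide with the standard three-flavor survival probability. A minimal numerical cross-check, with illustrative mixing values, a JUNO-like baseline, and the normal ordering (the $+$ sign); the $\arctan$ form of $\phi^\alpha_\odot$ is used, which is valid while $\Delta m^2_{21} x/4E < \pi/2$:

```python
import math

# illustrative inputs: mass splittings in eV^2 and |U_ei|^2 (unitary limit)
DM21, DM31 = 7.5e-5, 2.46e-3
DM32 = DM31 - DM21
S1, S2 = 0.676, 0.301
S3 = 1.0 - S1 - S2

def phases(L_km, E_MeV):
    """Delta m^2_{ji} x/(4E) = 1.267 * dm2[eV^2] * L[km] / E[GeV]."""
    return [1.267 * d * L_km / (E_MeV * 1e-3) for d in (DM21, DM31, DM32)]

def p_standard(L_km, E_MeV):
    """Plain three-flavor nu_e survival probability."""
    d21, d31, d32 = phases(L_km, E_MeV)
    return (1.0 - 4 * S1 * S2 * math.sin(d21) ** 2
                - 4 * S3 * S1 * math.sin(d31) ** 2
                - 4 * S3 * S2 * math.sin(d32) ** 2)

def p_mnpz(L_km, E_MeV):
    """Eq. (Paa-MNPZ) with C_ee = 0 and sum |U_ei|^2 = 1, '+' sign."""
    d21, d31, d32 = phases(L_km, E_MeV)
    X, Y = S1 / (S1 + S2), S2 / (S1 + S2)
    dee = X * d31 + Y * d32                  # Delta m^2_ee x/(4E)
    phi = math.atan((X - Y) * math.tan(d21)) - (X - Y) * d21
    depth = math.sqrt(1.0 - 4.0 * X * Y * math.sin(d21) ** 2)
    return (1.0 - 4 * S1 * S2 * math.sin(d21) ** 2
                - 2 * S3 * (S1 + S2) * (1.0 - depth * math.cos(2.0 * dee + phi)))

for E in (4.0, 5.0, 6.0, 8.0):   # MeV; d21 < pi/2 at L = 53 km for these E
    assert abs(p_standard(53.0, E) - p_mnpz(53.0, E)) < 1e-9
```

The two expressions agree to machine precision, confirming that the representation is an exact rewriting rather than an approximation in the unitary limit.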
The $\pm$ sign in front of $\phi^\alpha_\odot$ determines the mass ordering. Notice that the function inside the square bracket in (\ref{eqn:Paa-MNPZ}) determines the way in which the $\Delta m^2_{31}$ and $\Delta m^2_{32}$ waves are superposed, and we assume that the JUNO setting has the sensitivity to it, as was the case in our simple-minded analysis described in section~\ref{sec:method} used for the unitary case \cite{Abrahao:2015rba}. Then, variations of the parameters must leave the fast varying function of $x/E$ inside the square bracket invariant, at least approximately. To compute the number of events, the probability in eq.~(\ref{eqn:Paa-MNPZ}) should be multiplied by the flux normalization factor $f_\text{norm}$, as mentioned in the previous section. Then, we must analyze the effective probability defined as $P(\bar{\nu}_e \rightarrow \bar{\nu}_e)_\text{eff} \equiv f_\text{norm}\times P(\bar{\nu}_e \rightarrow \bar{\nu}_e)$. We now look for the transformations which render $P(\bar{\nu}_e \rightarrow \bar{\nu}_e)_\text{eff}$ invariant. They are \begin{eqnarray} \vert U_{e i} \vert^2 &\rightarrow& \xi \vert U_{e i} \vert^2 \ \ (i=1,2,3), \nonumber \\ f_\text{norm} &\rightarrow& \xi^{-2} f_\text{norm}, \nonumber \\ \mathcal{C}_{ee} &\rightarrow& \xi^{2} \mathcal{C}_{ee}, \label{eqn:invariant-transf} \end{eqnarray} where $\xi$ is an arbitrary parameter. Notice that $X$, $Y$, $\Delta m^2_{ee}$, and $\phi^{\alpha}_\odot$ are manifestly invariant under (\ref{eqn:invariant-transf}). The invariance of $P(\bar{\nu}_e \rightarrow \bar{\nu}_e)_\text{eff}$ under (\ref{eqn:invariant-transf}) implies that the allowed contours can extend along this ``invariance direction''. Therefore, $\vert U_{e 1} \vert^2$ and $\vert U_{e 2} \vert^2$ must be positively correlated with each other, whereas $\vert U_{e i} \vert^2$ ($i=1,2$) and $f_\text{norm}$ are negatively correlated. 
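The invariance under (\ref{eqn:invariant-transf}) holds exactly at the level of the probability and can be verified numerically. A minimal sketch using the representation (\ref{eqn:Paa-MNPZ}), with illustrative non-unitary parameter values of our own choosing:

```python
import math

DM21, DM31 = 7.5e-5, 2.46e-3   # eV^2, illustrative
DM32 = DM31 - DM21

def p_eff(s1, s2, s3, c_ee, f_norm, L_km, E_MeV):
    """f_norm * P(nu_e -> nu_e), with P from eq. (Paa-MNPZ);
    s_i = |U_ei|^2, normal ordering ('+' sign)."""
    d21, d31, d32 = (1.267 * d * L_km / (E_MeV * 1e-3)
                     for d in (DM21, DM31, DM32))
    X, Y = s1 / (s1 + s2), s2 / (s1 + s2)
    dee = X * d31 + Y * d32
    phi = math.atan((X - Y) * math.tan(d21)) - (X - Y) * d21
    depth = math.sqrt(1.0 - 4.0 * X * Y * math.sin(d21) ** 2)
    p = (c_ee + (s1 + s2 + s3) ** 2
         - 4 * s1 * s2 * math.sin(d21) ** 2
         - 2 * s3 * (s1 + s2) * (1.0 - depth * math.cos(2.0 * dee + phi)))
    return f_norm * p

# illustrative non-unitary point and the xi-scaled point
s1, s2, s3, c_ee, f = 0.66, 0.29, 0.021, 5e-4, 1.0
for xi in (0.98, 1.01):
    for E in (4.0, 5.0, 6.0):   # MeV, JUNO-like baseline of 53 km
        a = p_eff(s1, s2, s3, c_ee, f, 53.0, E)
        b = p_eff(xi * s1, xi * s2, xi * s3,
                  xi ** 2 * c_ee, f / xi ** 2, 53.0, E)
        assert abs(a - b) < 1e-12   # invariance under (invariant-transf)
```

At the level of $\chi^2$ the degeneracy is of course lifted by the pull term (\ref{eq:chi2_sys}), as noted in the text.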
The former is consistent with the feature shown in panel (f), and the latter agrees with the ones in panels (g), (i), and (j) in figure~\ref{fig:U-UV-comparison3}. Similarly, $\mathcal{C}_{ee}$ must have a positive correlation with $\vert U_{e i} \vert^2$ and a negative correlation with $f_\text{norm}$, a feature which, however, does not appear to be seen in figure~\ref{fig:U-UV-comparison3}. The most important reason for this is that $\mathcal{C}_{ee}$ is essentially determined by the conditions given in eq.~(\ref{eq:restrictions}), as mentioned earlier.\footnote{ But, we must note that the validity of the invariance argument is limited. It breaks down at some point because (i) first of all, the invariance under the transformations (\ref{eqn:invariant-transf}) is broken for $\chi^2$ by the pull term (\ref{eq:chi2_sys}), and (ii) the variables we are dealing with live only in restricted ranges, either by $(3+N)$ space unitarity, or as a result of fitting the data. Therefore, the scaling argument has its own inherent limitation. For the above-mentioned features of the correlations which involve $\mathcal{C}_{ee}$, these limitations of our invariance argument may not play a key role, because $\mathcal{C}_{ee}$ is restricted to be very small, $\lsim 10^{-3}$ ($3 \sigma$ CL). } \subsection{The current constraints on unitarity violation} \label{sec:constraints} We start by discussing the constraints obtained on unitarity violation in the $\nu_{\mu}$ and $\nu_{\tau}$ channels. We first focus on the relatively low mass sterile states, $m^2_{J} \lsim 10~{\rm eV}^2$, and rely on the results obtained by the authors of ref.~\cite{Parke:2015goa}, because their analysis is based on the $(3+N)$ model. We also check the consistency of the results with those in ref.~\cite{Kopp:2013vaa}, keeping in mind that most of the analyses in this reference are done using the $(3+1)$ model. 
According to ref.~\cite{Parke:2015goa} (see Fig.~3), the unitarity violating parameter $1 - (\vert U_{\alpha 1} \vert^2 + \vert U_{\alpha 2} \vert^2 +\vert U_{\alpha 3} \vert^2)$ is constrained to be $\leq$ 0.064 and $\leq$ 0.44 at 3$\sigma$ CL for $\alpha = \mu$ and $\tau$, respectively. The constraints are obtained by marginalizing over the sterile neutrino masses $\Delta m^2_{J i} \geq 0.01$ eV$^2$. The constraints on $\vert U_{\mu 4} \vert^2$ and $\vert U_{\tau 4} \vert^2$ are obtained in ref.~\cite{Kopp:2013vaa} (see Fig.~4 of this reference). The results can roughly be summarized as $\vert U_{\mu 4} \vert^2 \lsim$ $(1-3) \times 10^{-2}$ for $1~{\rm eV}^2 \lsim \Delta m^2_{41} \lsim 10~{\rm eV}^2$, and $\vert U_{\mu 4} \vert^2 \lsim (3-6) \times 10^{-2}$ for $0.1~{\rm eV}^2 \lsim \Delta m^2_{41} \lsim 1~{\rm eV}^2$. The constraint on $\vert U_{\tau 4} \vert^2$ is much milder: when the case of the worst phases is taken, $\vert U_{\tau 4} \vert^2 \lsim 0.42$ for the entire region of $\Delta m^2_{41}$ quoted above. The bound on $\vert U_{\alpha 4} \vert^2$ derived in the framework of the $(3+1)$ model may be interpreted as the one for $1 - (\vert U_{\alpha 1} \vert^2 + \vert U_{\alpha 2} \vert^2 +\vert U_{\alpha 3} \vert^2)$ in the context of the $(3+N)$ model. If we take this interpretation, both results are consistent with each other. We notice that there is ample room for improvement of the bound on unitarity violation in the $\nu_{\tau}$ channel. With regard to the constraints on each individual $\vert U_{\mu i} \vert^2$, the fourth equation of ref.~\cite{Parke:2015goa} tells us that $0.044 \leq \vert U_{\mu 1} \vert^2 \leq 0.29$ (74\%), $0.18 \leq \vert U_{\mu 2} \vert^2 \leq 0.49$ (46\%), and $0.37 \leq \vert U_{\mu 3} \vert^2 \leq 0.62$ (25\%) at 3$\sigma$ CL, where the numbers inside the parentheses are the percent errors, assuming symmetric errors. 
The corresponding constraints on $\vert U_{\tau i} \vert^2$ ($i=1,2,3$) are: $0.032 \leq \vert U_{\tau 1} \vert^2 \leq 0.34$ (82\%), $0.14 \leq \vert U_{\tau 2} \vert^2 \leq 0.52$ (56\%), and $0.16 \leq \vert U_{\tau 3} \vert^2 \leq 0.61$ (58\%) at 3$\sigma$ CL. If the sterile states are more massive, $m^2_{J} \gsim 10~{\rm eV}^2$, the kinematical constraints from beta and meson decays play a more important role. As we mentioned in section~\ref{sec:U-UV}, the kinematical constraint from neutrinoless double beta decay plays an important role for massive sterile neutrinos, ranging from $\vert U_{e 4} \vert^2 \lsim 10^{-3}$ for $m^2_{4} \sim 1~\mbox{keV}^2$ to $\vert U_{e 4} \vert^2 \lsim 10^{-6}$ for $m^2_{4} \sim 1~\mbox{MeV}^2$~\cite{deGouvea:2015euy}. However, no constraint on $\vert U_{\mu 4} \vert^2$ and $\vert U_{\tau 4} \vert^2$ arises for the mass range $m_{4} \leq 1~{\rm MeV}$, in which we are interested in the context of low-scale unitarity violation, according to the $(3+1)$ model analysis in \cite{deGouvea:2015euy}. Neutrino oscillation experiments can constrain the sterile mixing parameters for relatively high mass sterile states, $m^2_{J} \gsim 10~{\rm eV}^2$. Assuming an additional sterile state with $10\,{\rm eV} \lesssim m_4 \lesssim 1\,{\rm MeV}$, the KARMEN experiment constrains $4 |U_{e 4}|^2|U_{\mu 4}|^2 < 1.3 \times 10^{-3}$ at 90 \% CL~\cite{Armbruster:2002mp}, while the FNAL-E531 experiment constrains $4 |U_{\mu 4}|^2|U_{\tau 4}|^2 \lsim 4 \times 10^{-3}$ and $4 |U_{e 4}|^2|U_{\tau 4}|^2 \lsim 0.2$ at 90 \% CL~\cite{Ushida:1986zn}. We must note, however, that the precise translation of the constraints obtained by using the $(3+1)$ model into the ones obtainable in our generic $(3+N)$ model requires great care. In particular, it is a mandatory but highly nontrivial task for the constraints from the accelerator appearance experiments mentioned above. 
For unitarity violation at high scales, due to the SM $SU(2)$ gauge invariance, the constraints coming from the charged lepton sector must also be considered \cite{Antusch:2006vwa}. While we do not describe them here, interested readers are advised to refer to, for example, refs.~\cite{Antusch:2006vwa, Escrihuela:2015wra,Antusch:2014woa,Fernandez-Martinez:2016lgt} and the ones quoted therein. \section{Structure of CP violation in the $(3 + N)$ space unitary model} \label{sec:CP-violation} As in the preceding section, we can use the formulas for $P(\nu_\mu \rightarrow \nu_e)$ and $P(\nu_\mu \rightarrow \nu_\mu)$ given in (\ref{P-beta-alpha-ave-vac}) and (\ref{P-alpha-alpha-ave-vac}) ($\beta=\mu, \alpha=e$ etc.) to perform the unitarity test in accelerator neutrino experiments with a muon neutrino beam in a near-vacuum environment. While we postpone this task to future communications, we want to make some remarks on the structure of CP violation in the active neutrino sector of our $(3+N)$ unitary model. We note that some authors have addressed the issue of the CP phase in theories with non-unitarity; see e.g., \cite{FernandezMartinez:2007ms,Miranda:2016wdr,Ge:2016xya}. Yet, we believe that our discussion below nicely complements those given before. The number of CP violating phases in a non-unitary $n \times n$ $U$ matrix can be counted in a similar way as for the CKM matrix: it is $2n^2 - n^2 - (2n-1) = (n-1)^2$, where we have subtracted the number of moduli $\vert U_{\alpha i} \vert$ and the number of phases that can be absorbed into the neutrino wave functions. 
Hence, four phases exist in the $U$ matrix in our $(3+N)$ model ($n=3$), and it can be parameterized, for example, as \begin{eqnarray} U &=& \left[ \begin{array}{ccc} \vert U_{e 1} \vert & \vert U_{e 2} \vert & \vert U_{e 3} \vert e^{ i \phi_{1} } \\ \vert U_{\mu 1} \vert e^{ i \phi_{2} } & \vert U_{\mu 2} \vert & \vert U_{\mu 3} \vert \\ \vert U_{\tau 1} \vert e^{ i \phi_{3} } & \vert U_{\tau 2} \vert e^{ i \phi_{4} } & \vert U_{\tau 3} \vert \\ \end{array} \right]. \label{U-parametrize33} \end{eqnarray} Using (\ref{P-beta-alpha-ave-vac}), the CP-odd combination of the appearance oscillation probabilities is given by \begin{eqnarray} \Delta P_{\beta \alpha} \equiv P(\nu_\beta \rightarrow \nu_\alpha) - P(\bar{\nu}_\beta \rightarrow \bar{\nu}_\alpha) &=& - 4 \sum_{ j > i } J_{\alpha \beta i j} \sin \left(\frac{ \Delta m^2_{ji} x }{2E}\right) \label{Delta-P} \end{eqnarray} where we have defined the generalized Jarlskog invariants \cite{Jarlskog:1985ht} \begin{eqnarray} J_{\alpha \beta i j} \equiv \mbox{Im} \left( U_{\alpha i} U_{\beta i}^* U_{\alpha j}^* U_{\beta j} \right). \label{jarlskog} \end{eqnarray} They are called ``invariants'' because they are invariant under phase redefinitions of the neutrino fields. Though $J_{\alpha \beta i j}$ is unique, up to sign, in the unitary case, this property no longer holds in our $(3 + N)$ space unitary model. But some properties remain, e.g., the antisymmetry: $J_{\alpha \beta i j} = - J_{\beta \alpha i j}$, $J_{\alpha \beta i j} = - J_{\alpha \beta j i}$. This allows us to derive some interesting properties of the CP-odd combination $\Delta P_{\beta \alpha}$. 
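The antisymmetry properties and eq.~(\ref{Delta-P}) can be cross-checked numerically with a toy non-unitary $U$, taken here as the $3\times 3$ block of a random $4\times 4$ unitary matrix (a $(3+1)$ example). The plane-wave amplitude $\sum_i U_{\alpha i} U^*_{\beta i}\, e^{-i \Delta m^2_{i1} x/2E}$ and all numerical values below are illustrative assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, _ = np.linalg.qr(A)          # a random 4x4 unitary matrix
U = Q[:3, :3]                   # its non-unitary 3x3 active-space block

def J(a, b, i, j):
    """Generalized Jarlskog invariant Im(U_ai U*_bi U*_aj U_bj)."""
    return (U[a, i] * np.conj(U[b, i]) * np.conj(U[a, j]) * U[b, j]).imag

a, b = 0, 1                     # alpha = e, beta = mu row indices
# antisymmetry in both the flavor and the mass index pairs
assert abs(J(a, b, 0, 1) + J(b, a, 0, 1)) < 1e-12
assert abs(J(a, b, 0, 1) + J(a, b, 1, 0)) < 1e-12

dm2 = [0.0, 7.5e-5, 2.46e-3]    # Delta m^2_{i1} in eV^2, illustrative
for L_over_E in (100.0, 300.0, 1000.0):       # km/GeV, illustrative
    ph = [2.534 * d * L_over_E for d in dm2]  # Delta m^2_{i1} x/(2E)
    nu = sum(U[a, i] * np.conj(U[b, i]) * np.exp(-1j * ph[i])
             for i in range(3))
    nubar = sum(np.conj(U[a, i]) * U[b, i] * np.exp(-1j * ph[i])
                for i in range(3))
    dP = abs(nu) ** 2 - abs(nubar) ** 2       # the leaking term cancels
    dP_J = -4.0 * sum(J(a, b, i, j) * np.sin(ph[j] - ph[i])
                      for j in range(3) for i in range(j))
    assert abs(dP - dP_J) < 1e-12             # eq. (Delta-P)
```

The check confirms that the CP-odd combination depends only on the interference terms, so the constant (leaking) term drops out of $\Delta P_{\beta \alpha}$.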
Multiplying the unitarity relation, the first equation in (\ref{eqn:unitarity}), \begin{eqnarray} \sum_{i} U_{\alpha i} U_{\beta i}^* = \delta_{ \alpha \beta } - \sum_{I} W_{\alpha I} W_{\beta I}^* \label{unitarity1} \end{eqnarray} by $U_{\alpha j}^* U_{\beta j}$ and taking the imaginary part, we obtain the relation \begin{eqnarray} \sum_{i} J_{\alpha \beta i j} = - \mbox{Im} \left( U_{\alpha j}^* U_{\beta j} \sum_{I} W_{\alpha I} W_{\beta I}^* \right) \equiv S_{\alpha \beta j}. \label{unitarity2} \end{eqnarray} Because of the antisymmetry of $J_{\alpha \beta i j}$ mentioned above, we can write $S_{\alpha \beta j}$ as \begin{eqnarray} S_{\alpha \beta 1} &=& J_{\alpha \beta 21} + J_{\alpha \beta 31}, \nonumber \\ S_{\alpha \beta 2} &=& J_{\alpha \beta 12} + J_{\alpha \beta 32}, \nonumber \\ S_{\alpha \beta 3} &=& J_{\alpha \beta 13} + J_{\alpha \beta 23}, \label{S-J-relation} \end{eqnarray} from which the relation $S_{\alpha \beta 1} + S_{\alpha \beta 2} + S_{\alpha \beta 3} =0$ follows. Then, one can easily show that the CP-odd combination $\Delta P_{\beta \alpha}$ can be written as\footnote{ We have used the identity $ \sin \left(\frac{ \Delta m^2_{32} x }{2E}\right) - \sin \left(\frac{ \Delta m^2_{31} x }{2E}\right) + \sin \left(\frac{ \Delta m^2_{21} x }{2E}\right) = 4 \sin \left(\frac{ \Delta m^2_{32} x }{4E} \right) \sin \left(\frac{ \Delta m^2_{31} x }{4E} \right) \sin \left(\frac{ \Delta m^2_{21} x }{4E}\right)$. By cyclic permutation one can obtain the other forms with the first coefficient $J_{\alpha \beta 23}$ or $J_{\alpha \beta 31}$. } \begin{eqnarray} \Delta P_{\beta \alpha} &=& - 16 J_{\alpha \beta 12} \sin \left(\frac{ \Delta m^2_{32} x }{4E} \right) \sin \left(\frac{ \Delta m^2_{31} x }{4E} \right) \sin \left(\frac{ \Delta m^2_{21} x }{4E}\right) \nonumber \\ &+& 4 S_{\alpha \beta 1} \sin \left(\frac{ \Delta m^2_{31} x }{2E} \right)+ 4 S_{\alpha \beta 2} \sin \left(\frac{ \Delta m^2_{32} x }{2E}\right). 
\label{Delta-P2} \end{eqnarray} The form of the CP-odd combination $\Delta P_{\beta \alpha}$ in (\ref{Delta-P2}) is interesting because the CP violation effect is decomposed into two pieces, one with a unitary-like $x/E$ dependence (first line), and the other with a ``unitarity-violating'' $x/E$ dependence (second line). Of course, the coefficient of the first term receives a unitarity violating effect through the non-unitary $U$ matrix elements in $J_{\alpha \beta 12}$. But it should be possible to disentangle these two different $x/E$ dependences by precision measurement of the neutrino spectrum in the next generation experiments \cite{Abe:2015zbg,Acciarri:2015uup}, provided that the non-unitarity effect is sufficiently large. Presence of the second term would provide us with clear evidence for unitarity violation, because $S_{\alpha \beta i}$ explicitly involves the $W$ matrix elements which connect the active to the sterile sector.\footnote{ One must be careful so as not to misinterpret our statement. Through unitarity of the ${\bf U}$ matrix (\ref{eqn:unitarity}), the $U$ matrix always carries information on the $W$ matrix. Therefore, the CP-odd term is not the only place where we see the effect of non-unitarity. But $\Delta P_{\beta \alpha}$ is special because an explicitly $W$ dependent piece may be singled out, as we emphasized above. } To summarize: we have shown, in near vacuum environments, that the structure of the CP-odd combination of the appearance oscillation probabilities is illuminating enough to allow us to disentangle the unitarity violating piece by studying the $x/E$ dependence of the signal. \section{Unitarity violation in matter: Matter perturbation theory} \label{sec:UV-matter-perturbation} In this paper we have developed a framework describing unitarity violation at low energies. It utilizes the three active plus $N$ sterile neutrino state space, which is assumed to be complete, i.e., $(3 + N)$ space unitarity. 
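The decomposition (\ref{Delta-P2}) can be verified numerically. The following sketch (our own illustration, using a random $(3+1)$-space unitary matrix and arbitrary kinematic phases) checks eqs. (\ref{unitarity2}), (\ref{S-J-relation}) and the equality of the two forms of $\Delta P_{\beta \alpha}$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Random (3+1)-space unitary matrix; U = active 3x3 block, W = active-sterile column
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Ufull, _ = np.linalg.qr(M)
U, W = Ufull[:3, :3], Ufull[:3, 3]

a, b = 0, 1                            # alpha = e, beta = mu

def J(i, j):
    """Generalized Jarlskog invariant, eq. (jarlskog)."""
    return float(np.imag(U[a, i] * np.conj(U[b, i]) * np.conj(U[a, j]) * U[b, j]))

def S(j):
    """S_{alpha beta j} from the unitarity relation, eq. (unitarity2)."""
    return float(-np.imag(np.conj(U[a, j]) * U[b, j] * W[a] * np.conj(W[b])))

# eq. (S-J-relation), e.g. S_{alpha beta 1} = J_{alpha beta 21} + J_{alpha beta 31}
assert np.isclose(S(0), J(1, 0) + J(2, 0))

# kinematic phases Delta m^2_{ji} x / 2E (arbitrary illustrative values)
d21, d31 = 0.3, 1.1
d32 = d31 - d21
s = lambda d: np.sin(d)        # sin(Delta m^2 x / 2E), phase given directly
q = lambda d: np.sin(d / 2)    # sin(Delta m^2 x / 4E)

# eq. (Delta-P): direct sum over j > i
dP = -4 * (J(0, 1) * s(d21) + J(0, 2) * s(d31) + J(1, 2) * s(d32))

# eq. (Delta-P2): decomposed form
dP2 = -16 * J(0, 1) * q(d32) * q(d31) * q(d21) + 4 * S(0) * s(d31) + 4 * S(1) * s(d32)

assert np.isclose(dP, dP2)
print("Delta-P decomposition verified")
```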
The key issue is whether the model can be formulated in such a way that its prediction is insensitive to the details of the sterile sector, for example, the sterile neutrino mass spectrum. In vacuum we have shown that our $(3 + N)$ model satisfies the requirement if $m_{J}^2 \gtrsim 0.1$ eV$^2$ for $J \ge 4$. An immediate question is whether this feature survives in matter. In this section, we investigate this problem in a restricted framework of leading-order matter effect perturbation theory. We will answer the question in the affirmative, but under the additional requirement, eq.~(\ref{2nd-condition}). We note that our approach, which relies on matter perturbation theory, is not purely academic. The resultant formulas for the disappearance and appearance probabilities, $P(\nu_\mu \rightarrow \nu_\mu)$ and $P(\nu_\mu \rightarrow \nu_e)$, to first order in matter perturbation theory can be utilized in leptonic unitarity tests in the T2K and T2HK experiments \cite{Abe:2011ks,Abe:2015zbg}. Notice that keeping higher order terms in $W$ is important because the bound obtainable by the ongoing and the next generation experiments may not be so stringent. Therefore, we do not make any assumptions on the size of the $W$ matrix elements in this paper (besides $|W| < 1$). \subsection{Matter perturbation theory of three active plus $N$ sterile unitary system} \label{sec:P-theory} We formulate the matter perturbation theory of the $(3+N)$ space unitary model by assuming that $\vert A \vert \ll |\Delta m^2_{31}|$, where $A \equiv 2\sqrt{2} G_F N_e(x) E$, with $G_F$ being the Fermi constant and $N_e(x)$ the electron number density in matter, is the Wolfenstein matter potential \cite{Wolfenstein:1977ue}. In deriving the formulas for the oscillation probabilities, for simplicity, we assume charge-neutrality in matter, and take the constant number density approximation for electrons, protons and neutrons. 
Inclusion of the spatial dependence can be done assuming adiabaticity, but it will not alter the results in a qualitative way. To discuss neutrino oscillation in matter in the three active plus $N$ sterile neutrino system the matter potential due to neutral current (NC) as well as charged current (CC) interactions must be taken into account. We therefore take the Hamiltonian in the flavor basis as \begin{eqnarray} H = {\bf U} \left[ \begin{array}{cccccc} \Delta_{1} & 0 & 0 & 0 & 0 & 0 \\ 0 & \Delta_{2} & 0 & 0 & 0 & 0 \\ 0 & 0 & \Delta_{3} & 0 & 0 & 0 \\ 0 & 0 & 0 & \Delta_{4} & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdot \cdot \cdot & 0 \\ 0 & 0 & 0 & 0 & 0 & \Delta_{3+N} \\ \end{array} \right] {\bf U}^{\dagger} + \left[ \begin{array}{cccccc} \Delta_{A} - \Delta_{B} & 0 & 0 & 0 & 0 & 0 \\ 0 & - \Delta_{B} & 0 & 0 & 0 & 0 \\ 0 & 0 & - \Delta_{B} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdot \cdot \cdot & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right] \label{hamiltonian} \end{eqnarray} where $\Delta_{i(J)} \equiv \frac{ m^2_{i(J)} }{2E} $ as before, as defined in eq.~(\ref{Delta-def}) and, \begin{eqnarray} \Delta_{A} \equiv \frac{ A }{2E}, \hspace{10mm} \Delta_{B} \equiv \frac{ B }{2E}. \label{Delta-ab-def} \end{eqnarray} The matter potentials $A$ and $B$, which are respectively due to CC and NC interactions, take the forms and the values as \begin{eqnarray} A &=& 2 \sqrt{2} G_F N_e E \approx 1.52 \times 10^{-4} \left( \frac{Y_e \rho}{\rm g.cm^{-3}} \right) \left( \frac{E}{\rm GeV} \right) {\rm eV}^2, \nonumber \\ B &=& \sqrt{2} G_F N_n E = \frac{1}{2} \left( \frac{N_n}{N_e} \right) A, \label{matt-potential} \end{eqnarray} where $N_n$ is the neutron number density in matter. 
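As a sanity check (ours, not part of the paper), the numerical coefficient $1.52 \times 10^{-4}$ in eq. (\ref{matt-potential}) can be reproduced from the fundamental constants; the following minimal Python sketch assumes standard PDG values for $G_F$, $\hbar c$ and $N_A$:

```python
import numpy as np

# PDG values (assumed here, not quoted from the paper)
GF = 1.1663787e-5        # Fermi constant, GeV^-2
hbarc = 1.97326980e-14   # hbar*c in GeV*cm
NA = 6.02214076e23       # Avogadro's number, per gram

def A_CC(Ye_rho_gcc, E_GeV):
    """CC matter potential A = 2*sqrt(2) G_F N_e E, returned in eV^2."""
    Ne = Ye_rho_gcc * NA                              # electrons per cm^3
    A_GeV2 = 2 * np.sqrt(2) * GF * hbarc**3 * Ne * E_GeV
    return A_GeV2 * 1e18                              # GeV^2 -> eV^2

A = A_CC(1.0, 1.0)       # Ye*rho = 1 g/cm^3, E = 1 GeV
print(A)                 # ~1.52e-4 eV^2, the coefficient in eq. (matt-potential)
assert abs(A - 1.52e-4) / 1.52e-4 < 0.01
```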
\subsection{Perturbation theory in vacuum mass eigenstate basis} To formulate perturbative treatment it is convenient to work with the vacuum mass eigenstate basis defined as $\tilde{\nu} = ({\bf U}^{\dagger}) \nu$, in which the Hamiltonian is related to the flavor basis one as $\tilde{H} \equiv {\bf U}^{\dagger} H {\bf U} = \tilde{H}_{0} + \tilde{H}_{1}$, where\footnote{ If we choose a different phase convention e.g. $\tilde H_1 = {\bf U}^\dagger \diag(\Delta_A, 0, 0, \Delta_B, ...,\Delta_B){\bf U}$, the $S$ matrix discussed in the following will change but the physical observable (oscillation probability) remains the same, as it must. This is confirmed by an explicit calculation. } \begin{eqnarray} \hspace{-0.4cm} \tilde{H}_{0} = \left[ \begin{array}{cccccc} \Delta_{1} & 0 & 0 & 0 & 0 & 0 \\ 0 & \Delta_{2} & 0 & 0 & 0 & 0 \\ 0 & 0 & \Delta_{3} & 0 & 0 & 0 \\ 0 & 0 & 0 & \Delta_{4} & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdot \cdot \cdot & 0 \\ 0 & 0 & 0 & 0 & 0 & \Delta_{3+N} \\ \end{array} \right], \hspace{3mm} \tilde{H}_{1} = {\bf U}^{\dagger} \left[ \begin{array}{cccccc} \Delta_{A} - \Delta_{B} & 0 & 0 & 0 & 0 & 0 \\ 0 & - \Delta_{B} & 0 & 0 & 0 & 0 \\ 0 & 0 & - \Delta_{B} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdot \cdot \cdot & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right] {\bf U}. \label{H-tilde-3+N} \end{eqnarray} The $S$ matrix in the flavor basis $S(x)$ is related to the one in the vacuum mass eigenstate basis $\tilde{S}(x)$ as \begin{eqnarray} S(x) = {\bf U} \tilde{S} (x) {\bf U}^{\dagger} \label{Smatrix} \end{eqnarray} where \begin{eqnarray} \tilde{S} (x) = T \text{exp} \left[ -i \int^{x}_{0} dx' \tilde{H} (x') \right]. \label{tilde-Smatrix} \end{eqnarray} We calculate perturbatively the elements of $\tilde{S}$ matrix. 
Toward the goal, we define $\Omega(x)$ as $\Omega(x) = e^{i \tilde{H}_{0} x} \tilde{S} (x)$, which obeys the evolution equation \begin{eqnarray} i \frac{d}{dx} \Omega(x) = H_{1}(x) \Omega(x) \label{omega-evolution} \end{eqnarray} where \begin{eqnarray} H_{1}(x) \equiv e^{i \tilde{H}_{0} x} \tilde{H}_{1} e^{-i \tilde{H}_{0} x}. \label{def-H1} \end{eqnarray} Then, $\Omega(x)$ can be computed perturbatively as \begin{eqnarray} \Omega(x) &=& 1 + (-i) \int^{x}_{0} dx' H_{1} (x') + (-i)^2 \int^{x}_{0} dx' H_{1} (x') \int^{x'}_{0} dx'' H_{1} (x'') + \cdot \cdot \cdot, \label{Omega-exp} \end{eqnarray} where the ``space-ordered'' form in (\ref{Omega-exp}) is essential because of the non-commutativity between $H_{1}(x)$ of different locations. Having obtained $\Omega(x)$, $\tilde{ S }$ matrix can be written as \begin{eqnarray} \tilde{ S }(x) = e^{- i \tilde{H}_{0} x} \Omega(x). \label{Smatrix-tilde2} \end{eqnarray} We calculate $\tilde{ S }$ matrix to first order in matter perturbation theory. Since $\tilde{H}_{0}$ is diagonal, $e^{ \pm i \tilde{H}_{0} x}$ takes the simple form $\diag \left( e^{ \pm i \Delta_{1} x }, e^{ \pm i \Delta_{2} x }, e^{ \pm i \Delta_{3} x }, e^{ \pm i \Delta_{4} x }, \cdot \cdot \cdot, e^{ \pm i \Delta_{3+N} x } \right)$. Using eqs.~(\ref{def-H1}) and (\ref{Omega-exp}) respectively, we first determine $H_1$ and then $\Omega$. 
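As an illustrative numerical check of this first-order expansion (a sketch with an arbitrary $4\times4$ example of ours, not the physical Hamiltonian): comparing the exact evolution $e^{-i(\tilde{H}_{0} + \epsilon H_{1})x}$ with $\tilde{S}^{(0)} + \epsilon \tilde{S}^{(1)}$, where the first-order term carries the energy denominators $(e^{-i\Delta_{l}x} - e^{-i\Delta_{k}x})/(\Delta_{l} - \Delta_{k})$ that appear below, leaves a residual of order $\epsilon^{2}$:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
x = 3.0
eps = 1e-3

# Diagonal unperturbed Hamiltonian (tilde-H0) and a small hermitian perturbation
delta = np.array([0.2, 0.7, 1.3, 1.9])    # Delta_1 ... Delta_4, non-degenerate
H0 = np.diag(delta)
V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H1 = (V + V.conj().T) / 2
H1 /= np.linalg.norm(H1, 2)               # unit spectral norm, so eps sets the size

def evolve(H, x):
    """exp(-i H x) for hermitian H via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return (v * np.exp(-1j * w * x)) @ v.conj().T

S_exact = evolve(H0 + eps * H1, x)

# Zeroth order plus the first-order term from eqs. (Omega-exp), (Smatrix-tilde2)
S0 = np.diag(np.exp(-1j * delta * x))
S1 = np.zeros((n, n), dtype=complex)
for k in range(n):
    for l in range(n):
        if k == l:
            S1[k, l] = -1j * x * H1[k, k] * np.exp(-1j * delta[k] * x)
        else:
            S1[k, l] = H1[k, l] * (np.exp(-1j * delta[l] * x)
                                   - np.exp(-1j * delta[k] * x)) / (delta[l] - delta[k])

err = np.abs(S_exact - (S0 + eps * S1)).max()
print(err < 1e-5)          # True: the residual is O(eps^2)
```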
Using (\ref{Smatrix}), the $S$ matrix elements are given by the $\tilde{ S }$ matrix elements as \begin{eqnarray} S_{\alpha \beta} &=& \sum_{i} U_{\alpha i} U^{*}_{\beta i} \tilde{S}_{ii} + \sum_{i \neq j} U_{\alpha i} U^{*}_{\beta j} \tilde{S}_{ij} + \sum_{I, j} W_{\alpha I} U^{*}_{\beta j} \tilde{S}_{I j} + \sum_{i, J} U_{\alpha i} W^{*}_{\beta J} \tilde{S}_{i J} \nonumber \\ &+& \sum_{I} W_{\alpha I} W^{*}_{\beta I} \tilde{S}_{II} + \sum_{I \neq J} W_{\alpha I} W^{*}_{\beta J} \tilde{S}_{IJ}, \label{Smatrix-by-Stilde} \end{eqnarray} where the expressions of $\tilde{S}$ matrix elements are given in appendix~\ref{sec:tilde-S}. If we decompose $S_{\alpha \beta}$ to zeroth and the first order terms, $S_{\alpha \beta} = S_{\alpha \beta}^{(0)} + S_{\alpha \beta}^{(1)}$, we obtain \begin{eqnarray} S_{\alpha \beta}^{(0)} &=& \sum_{k} U_{\alpha k} U^{*}_{\beta k} e^{ - i \Delta_{k} x} + \sum_{K} W_{\alpha K} W^{*}_{\beta K} e^{ - i \Delta_{K} x}, \label{S-elements-3+N-0th} \end{eqnarray} which is, of course, identical with (\ref{S-elements-3+N-vac}), and \begin{eqnarray} S_{\alpha \beta}^{(1)} &=& \sum_{k} U_{\alpha k} U^{*}_{\beta k} e^{ - i \Delta_{k} x} \left[ - i (\Delta_{A} x) \vert U_{e k} \vert^2 + i (\Delta_{B} x) \sum_{\gamma} \vert U_{\gamma k} \vert^2 \right] \nonumber \\ &+& \sum_{K} W_{\alpha K} W^{*}_{\beta K} e^{ - i \Delta_{K} x} \left[ -i ( \Delta_{A} x) \vert U_{e K} \vert^2 +i ( \Delta_{B} x) \sum_{\gamma} \vert W_{\gamma K} \vert^2 \right] \nonumber \\ &+& \sum_{k \neq l} U_{\alpha k} U^{*}_{\beta l} \left[ \Delta_{A} U^*_{e k } U_{e l } - \Delta_{B} \sum_{\gamma} U^*_{\gamma k} U_{\gamma l} \right] \frac{ e^{ - i \Delta_{l} x} - e^{ - i \Delta_{k} x} }{ ( \Delta_{l} - \Delta_{k} )} \nonumber \\ &+& \sum_{K, l} W_{\alpha K} U^{*}_{\beta l} \left[ \Delta_{A} W^*_{e K } U_{e l} - \Delta_{B} \sum_{\gamma} W^*_{\gamma K} U_{\gamma l} \right] \frac{ e^{ - i \Delta_{l} x} - e^{ - i \Delta_{K} x} } { ( \Delta_{l} - \Delta_{K} )} \nonumber \\ &+& \sum_{k, L} 
U_{\alpha k} W^{*}_{\beta L} \left[ \Delta_{A} U^*_{e k} W_{e L} - \Delta_{B} \sum_{\gamma} U^*_{\gamma k} W_{\gamma L} \right] \frac{ e^{ - i \Delta_{L} x} - e^{ - i \Delta_{k} x} }{ ( \Delta_{L} - \Delta_{k} )} \nonumber \\ &+& \sum_{K \neq L} W_{\alpha K} W^{*}_{\beta L} \left[ \Delta_{A} W^*_{e K} W_{e L} - \Delta_{B} \sum_{\gamma} W^*_{\gamma K} W_{\gamma L} \right] \frac{ e^{ - i \Delta_{L} x} - e^{ - i \Delta_{K} x} }{ ( \Delta_{L} - \Delta_{K} ) }. \label{S-elements-3+N-1st} \end{eqnarray} The oscillation probabilities $P(\nu_\beta \rightarrow \nu_\alpha)$ in the appearance ($\beta \neq \alpha$) and disappearance ($\beta = \alpha$) channels can be computed to first order in matter perturbation theory as \begin{eqnarray} P(\nu_\beta \rightarrow \nu_\alpha) &=& \left| S^{(0)}_{\alpha \beta} + S^{(1)}_{\alpha \beta} \right|^2 = \left| S^{(0)}_{\alpha \beta} \right|^2 + 2 \mbox{Re} \left[ \left( S^{(0)}_{\alpha \beta} \right)^{*} S^{(1)}_{\alpha \beta} \right]. \label{P-beta-alpha} \end{eqnarray} Since the zeroth order term in $P(\nu_\beta \rightarrow \nu_\alpha)$ above is already given as the vacuum term, eq.~(\ref{P-beta-alpha-vac}), we only compute the first order matter correction terms. The results of $P(\nu_\alpha \rightarrow \nu_\alpha)^{(1)}$ and $P(\nu_\beta \rightarrow \nu_\alpha)^{(1)}$ are given in appendices~\ref{sec:disapp-P} and \ref{sec:app-P}, respectively. \subsection{Disappearance channels} \label{sec:disappearance} For simplicity, we first discuss the oscillation probability in the disappearance channel. Given the zeroth-order term in eq.~(\ref{P-beta-alpha-vac}), we focus on the first-order term here. 
We present here $P(\nu_\alpha \rightarrow \nu_\alpha)^{(1)}$ after averaging over energy resolution and dropping the rapidly oscillating terms due to the large mass squared differences which involve sterile neutrinos\footnote{ The averaging out procedure involves not only (\ref{average-out2}) but also \begin{eqnarray} \left\langle (\Delta_{A} x) \sin ( \Delta_{k} - \Delta_{J} ) x \right\rangle &\approx& \left\langle (\Delta_{A} x) \sin ( \Delta_{K} - \Delta_{J} ) x \right\rangle \approx 0, \label{average-out3} \end{eqnarray} and cosine as well. It is justified because the rapidly oscillating sine functions are imposed onto monotonic slowly increasing function of $x$. This feature arises due to $\frac{\vert \Delta_{A} \vert}{ \Delta_{J} } \approx \frac{\vert A \vert}{ \Delta m^2_{J k} } \approx \frac{\vert A \vert}{\vert \Delta m^2_{J K} \vert} \ll 1$, see eq.~(\ref{rA-def-value}). }, \begin{eqnarray} && P(\nu_\alpha \rightarrow \nu_\alpha)^{(1)} = 2 \mbox{Re} \left[ \left( S^{(0)}_{\alpha \alpha} \right)^{*} S^{(1)}_{\alpha \alpha} \right] \nonumber\\ &=& - 2 \sum_{j \neq k} \vert U_{\alpha j} \vert^2 \vert U_{\alpha k} \vert^2 \sin ( \Delta_{k} - \Delta_{j} ) x \left[ (\Delta_{A} x) \vert U_{e k} \vert^2 - (\Delta_{B} x) \sum_{\gamma} \vert U_{\gamma k} \vert^2 \right] \nonumber\\ &+& 2 \sum_{j} \sum_{k \neq l} \vert U_{\alpha j} \vert^2 \mbox{Re} \left[ \Delta_{A} U_{\alpha k} U^{*}_{\alpha l} U^*_{e k } U_{e l } - \Delta_{B} U_{\alpha k} U^{*}_{\alpha l} \sum_{\gamma} U^*_{\gamma k} U_{\gamma l} \right] \frac{ \cos ( \Delta_{l} - \Delta_{j} ) x - \cos ( \Delta_{k} - \Delta_{j} ) x }{ ( \Delta_{l} - \Delta_{k} )} \nonumber\\ &+& 2 \sum_{j} \sum_{l} \sum_{K} \vert U_{\alpha j} \vert^2 \mbox{Re} \left[ \Delta_{A} W_{\alpha K} U^{*}_{\alpha l} W^*_{e K } U_{e l} - \Delta_{B} W_{\alpha K} U^{*}_{\alpha l} \sum_{\gamma} W^*_{\gamma K} U_{\gamma l} \right] \frac{ \cos ( \Delta_{l} - \Delta_{j} ) x }{ ( \Delta_{l} - \Delta_{K} )} \nonumber\\ &-& 2 \sum_{j} \sum_{k} 
\sum_{L} \vert U_{\alpha j} \vert^2 \mbox{Re} \left[ \Delta_{A} U_{\alpha k} W^{*}_{\alpha L} U^*_{e k} W_{e L} - \Delta_{B} U_{\alpha k} W^{*}_{\alpha L} \sum_{\gamma} U^*_{\gamma k} W_{\gamma L} \right] \frac{ \cos ( \Delta_{k} - \Delta_{j} ) x }{ ( \Delta_{L} - \Delta_{k} )}, \label{disapp-P-1st-av} \end{eqnarray} leaving the full expression before averaging to appendix~\ref{sec:disapp-P}. We find that the last two terms in (\ref{disapp-P-1st-av}) violate our requirement that the oscillation probability in our $(3+N)$ model be insensitive to the spectrum of sterile states, unless they are smaller than $\mathcal{C}_{ab} \sim {\cal O}(W^4)$, which implies \begin{eqnarray} \frac{\vert \Delta_{A} \vert}{ ( \Delta_{J}- \Delta_{k} )} = \frac{\vert A \vert}{ \Delta m^2_{J k} } \ll \vert W \vert^2. \label{2nd-condition} \end{eqnarray} A more severe restriction is not required because these terms are already suppressed by $W^2$ apart from the energy denominator. From \begin{eqnarray} \frac{\vert A \vert}{ \Delta m^2_{J k} } = 2.13 \times 10^{-3} \left(\frac{ \Delta m^2_{J k} }{ 0.1 \mbox{eV}^2}\right)^{-1} \left(\frac{\rho}{2.8 \,\text{g/cm}^3}\right) \left(\frac{E}{1~\mbox{GeV}}\right), \label{rA-def-value} \end{eqnarray} we notice that, unless $W^2$ is extremely small, $W^2 \lsim 10^{-2}$, the last two terms in (\ref{disapp-P-1st-av}) can be ignored under the same condition as in vacuum, $\Delta m^2_{J k} \gsim 0.1$ eV$^2$. If we discuss the region of $W^2$ which is much smaller, we need to restrict ourselves to the case of higher-mass sterile neutrinos. If we treat the regime $W^2 \sim 10^{-3}$ ($W^2 \lsim 10^{-n}$), we need to restrict to $\Delta m^2_{J k} \simeq m^2_{J} \gsim 1$ eV$^2$ ($10^{(n-3)}$ eV$^2$) to keep our $(3+N)$ space unitary model insensitive to details of the sterile sector. 
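For the record, the coefficient $2.13 \times 10^{-3}$ quoted in eq. (\ref{rA-def-value}) follows from the $1.52 \times 10^{-4}$ coefficient of eq. (\ref{matt-potential}) if an electron fraction $Y_e \approx 0.5$ is assumed for matter of density $2.8\ \mathrm{g/cm^3}$ (our reading of the implicit assumption); a one-line check:

```python
# Y_e = 0.5 is assumed here for crust-like matter; eq. (rA-def-value) coefficient
A = 1.52e-4 * (0.5 * 2.8) * 1.0      # eV^2: 1.52e-4 * (Ye*rho) * (E/GeV)
ratio = A / 0.1                      # divide by dm2_Jk = 0.1 eV^2
print(round(ratio, 5))               # prints 0.00213
assert abs(ratio - 2.13e-3) / 2.13e-3 < 0.01
```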
Assuming the further restriction on the sterile mass spectrum such that condition \eqref{2nd-condition} is fulfilled, we obtain the final form of the first-order matter correction to $P(\nu_\alpha \rightarrow \nu_\alpha)$ as \begin{eqnarray} && P(\nu_\alpha \rightarrow \nu_\alpha)^{(1)} = - 2 \sum_{j \neq k} \vert U_{\alpha j} \vert^2 \vert U_{\alpha k} \vert^2 \sin ( \Delta_{k} - \Delta_{j} ) x \left[ (\Delta_{A} x) \vert U_{e k} \vert^2 - (\Delta_{B} x) \sum_{\gamma} \vert U_{\gamma k} \vert^2 \right] \nonumber\\ &+& 4 \sum_{j} \sum_{k \neq l} \vert U_{\alpha j} \vert^2 \mbox{Re} \left[ \Delta_{A} U_{\alpha k} U^{*}_{\alpha l} U^*_{e k } U_{e l } - \Delta_{B} U_{\alpha k} U^{*}_{\alpha l} \sum_{\gamma} U^*_{\gamma k} U_{\gamma l} \right] \nonumber \\ & \times & \frac{ \sin^2 \frac{ ( \Delta_{k} - \Delta_{j} ) x }{2} - \sin^2 \frac{ ( \Delta_{l} - \Delta_{j} ) x }{2} }{ ( \Delta_{l} - \Delta_{k} )}. \label{disapp-P-1st-1-6-strong-av} \end{eqnarray} This expression is written in terms of only active space $U$ matrix elements. Therefore, with the additional condition on the sterile neutrino mass spectrum given in (\ref{2nd-condition}), the effect of unitarity violation enters only through the non-unitary $U$ matrix to first order in matter perturbation theory. Thus, we find that the most important modification of the oscillation probability due to non-unitarity is in the vacuum expression in the disappearance channel. 
\subsection{Appearance channels} \label{sec:appearance} Although the expression of $P(\nu_\beta \rightarrow \nu_\alpha)^{(1)}$ given in appendix~\ref{sec:app-P} is a little cumbersome, it takes a simple form after averaging over neutrino energy within the energy resolution and using the condition (\ref{2nd-condition}): \begin{eqnarray} && P(\nu_\beta \rightarrow \nu_\alpha)^{(1)} \nonumber\\ &=& 2 \sum_{j \neq k} \left[ - \mbox{Re} \left( U^*_{\alpha j} U_{\beta j} U_{\alpha k} U^*_{\beta k} \right) \sin ( \Delta_{k} - \Delta_{j} ) x + \mbox{Im} \left( U^*_{\alpha j} U_{\beta j} U_{\alpha k} U^*_{\beta k} \right) \cos ( \Delta_{k} - \Delta_{j} ) x \right] \nonumber\\ &\times& \left[ (\Delta_{A} x) \vert U_{e k} \vert^2 - (\Delta_{B} x) \sum_{\gamma} \vert U_{\gamma k} \vert^2 \right] \nonumber\\ &+& 2 \sum_{j} \sum_{k \neq l} \mbox{Re} \left[ \Delta_{A} U^{*}_{\alpha j} U_{\beta j} U_{\alpha k} U^{*}_{\beta l} U^*_{e k } U_{e l } - \Delta_{B} U^{*}_{\alpha j} U_{\beta j} U_{\alpha k} U^{*}_{\beta l} \sum_{\gamma} U^*_{\gamma k} U_{\gamma l} \right] \nonumber\\ &\times& \frac{ \cos ( \Delta_{l} - \Delta_{j} ) x - \cos ( \Delta_{k} - \Delta_{j}) x }{ ( \Delta_{l} - \Delta_{k} )} \nonumber\\ &+& 2 \sum_{j} \sum_{k \neq l} \mbox{Im} \left[ \Delta_{A} U^{*}_{\alpha j} U_{\beta j} U_{\alpha k} U^{*}_{\beta l} U^*_{e k } U_{e l } - \Delta_{B} U^{*}_{\alpha j} U_{\beta j} U_{\alpha k} U^{*}_{\beta l} \sum_{\gamma} U^*_{\gamma k} U_{\gamma l} \right] \nonumber\\ &\times& \frac{ \sin ( \Delta_{l} - \Delta_{j} ) x - \sin ( \Delta_{k} - \Delta_{j}) x }{ ( \Delta_{l} - \Delta_{k} )}. \label{app-Pba-1st} \end{eqnarray} Again, the surviving matter correction terms are written in terms of only active space $U$ matrix elements, leaving the important effect of unitarity violation only in the vacuum term. 
The obvious question would be: Do the features obtained at leading order in matter perturbation theory, in particular the restriction on the sterile masses (\ref{2nd-condition}), prevail at higher orders? A tantalizing feature of the sterile mass condition (\ref{2nd-condition}) is that its fulfillment relies on the smallness of $A/\Delta m^2_{J k}$ in our present discussion. Therefore, a better treatment of the matter effect is necessary to know whether our $(3+N)$ model can remain insensitive to details of the sterile sector under a reasonably strong matter effect. We hope to return to these questions in the near future. \section{Conclusions} In this paper, we have discussed the relationship between low-scale unitarity violation, the one due to new physics at much lower energies than the electroweak scale, and the conventional high-scale unitarity violation. The differences include (1) presence (absence) of lepton flavor universality in low-scale (high-scale) unitarity violation, and (2) absence (presence) of zero-distance flavor transition in low-scale (high-scale) unitarity violation. In the case of low-scale unitarity violation, it is likely that an extension of the low-energy lepton sector may enrich the features of neutrino mixing, and the effects could be detectable by precision neutrino oscillation experiments. To provide a framework for leptonic unitarity tests, embodying such features of low-scale unitarity violation, we have constructed a three-active plus $N$-sterile neutrino model which is assumed to be unitary in the whole ($3+N$)-dimensional state space. Presence of the sterile sector results in non-unitarity in the active three-neutrino subspace. Though working within this specific model, we sought the possibility that the framework is nearly model-independent, to better serve unitarity tests. Namely, we require that the prediction of the $(3+N)$ model be insensitive to the properties of the sterile sector, such as the number of states $N$ and detailed features of the mass spectrum. 
We have shown that restricting the sterile neutrino masses to $m^2_{J} \geq 0.1$ eV$^2$ ($J\ge 4$), due to decoherence, is sufficient to achieve the desired properties, under a mild assumption of no accidental degeneracy in the mass spectrum, i.e., $|\Delta m^2_{Ja}| \gg |\Delta m^2_{31}|$, or $\gg \Delta m^2_{21}$, where $J=4,\ldots, 3+N$ and $a = 1,\ldots,3+N$. The characteristic features of unitarity violation, as modeled by our $(3+N)$ space unitary model, are as follows: \begin{itemize} \item the neutrino oscillation probability contains the constant term $\mathcal{C}_{\alpha \beta}$ in (\ref{Cab-Caa}) ($\alpha \neq \beta$ for appearance channels, and $\alpha = \beta$ for disappearance channels), describing the probability leaking into the sterile subspace. \item the mixing matrix in the $3 \times 3$ active neutrino subspace is non-unitary. \end{itemize} \noindent While the second feature is common to high- and low-scale unitarity violation, the first feature is unique to low-scale unitarity violation. Since probability leaking occurs due to the presence of a sterile sector with energies comparable to those of the active neutrinos, we suspect that the first feature above is generic in low-scale unitarity violation even outside of our $(3+N)$ model. In our $(3+N)$ space unitary model, the first observable which signals non-unitarity would be nonzero values of $1 - \sum_{i=1}^{3} \vert U_{\alpha i} \vert^2$ ($\alpha = e, \mu, \tau$) in the disappearance channels, and/or $\left| \sum_{j=1}^{3} U_{\alpha j} U^{*}_{\beta j} \right|$ in the appearance channels. They are both of the order of $W^2$, where $W$ is the mixing matrix which connects the active and sterile neutrino subspaces. On the other hand, the probability leaking term $\mathcal{C}_{\alpha \beta}$ (see (\ref{Cab-Caa})) is of the order of $W^4$. To verify low-scale unitarity violation, finding a nonzero value of $\mathcal{C}_{\alpha \beta}$ would be enough. 
But, to prove that unitarity violation occurs in the manner predicted by the $(3+N)$ space unitary model, the consistency between the orders of magnitude of $\left| \sum_{j=1}^{3} U_{\alpha j} U^{*}_{\beta j} \right|^2$ and $\mathcal{C}_{\alpha \beta}$ ($\alpha \neq \beta$) (and the corresponding quantities in the disappearance channels) must be checked. Thus, we have presented a framework for the analysis of unitarity violation in the lepton sector which is suitable for low-scale unitarity violation. To examine how it works we have analyzed simulated data of a medium-baseline reactor neutrino experiment prepared by assuming a JUNO-like setting. By analyzing the data with our simple-minded statistical procedure, we have shown that the expected superb performance of JUNO would allow us to constrain the unitarity violation and probability leaking parameters as $1 - \sum_{i=1}^{3} \vert U_{e i} \vert^2 \leq 0.01 (0.03)$ and $\mathcal{C}_{ee} \lsim 10^{-4}$ ($10^{-3}$) at 1$\sigma$ (3$\sigma$) CL (one degree of freedom), respectively, with its 5-year measurement. We have also discussed in a qualitative way how to detect unitarity violation in accelerator appearance measurements. Using the antisymmetry property of the generalized Jarlskog invariants, we have shown in section~\ref{sec:CP-violation} that the CP-odd combination $P(\nu_\beta \rightarrow \nu_\alpha) - P(\bar{\nu}_\beta \rightarrow \bar{\nu}_\alpha)$ can be decomposed into two terms with different $x/E$ dependences; see eq.~(\ref{Delta-P2}). If the measurement of neutrino energy spectra is sufficiently accurate, it would be possible to single out the explicitly $W$ matrix dependent piece, providing us with clear evidence of unitarity violation. Finally, we have addressed the question of how inclusion of the matter effect alters the nearly model-independent feature of our $(3+N)$ space unitary model. 
We have learned that if we discuss the region $W^2 \gsim 10^{-2}$, the condition on the sterile neutrino masses $m^2_{J} \gsim 0.1$ eV$^2$ needed in vacuum is sufficient, but if we want to treat the case of even smaller $W^2$, $W^2 \lsim 10^{-n}$, restricting the sterile masses to $m^2_{J} \gsim 10^{(n-3)}$ eV$^2$ is necessary for our $(3+N)$ space unitary model to be insensitive to details of the sterile sector. Though our treatment in section~\ref{sec:UV-matter-perturbation} is restricted to first order in matter perturbation theory, it is perfectly applicable to the analysis of a class of LBL experiments, for example T2HK. Clearly, a similar discussion must be attempted in the environment of larger matter effects expected in some of the next-generation LBL experiments such as DUNE. \acknowledgments One of the authors (H.M.) thanks Renata Zukanovich Funchal for intriguing conversations which for him prepared the cradle of this work. He thanks Instituto de F\'{\i}sica, Universidade de S\~ao Paulo for the great opportunity of stay under support by Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP) with grant number 2015/05208-4. C.S.F. is supported by FAPESP under grants 2013/01792-8 and 2012/10995-7. H.N. was supported by Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado do Rio de Janeiro (FAPERJ) and Conselho Nacional de Ci\^encia e Tecnologia (CNPq). H.M. and H.N. thank Masashi Yokoyama and the members of his group at the University of Tokyo for their warm hospitality, where part of this work was carried out. \newpage
\section{Introduction} Quantum correlations in finite-dimensional composite systems have been studied intensively in recent decades, originally with a focus on the simplest systems, i.e.\ qubit systems. One can also consider in this respect a generic $d$-level system, a so-called qudit, but progress in studying quantum correlations for qudits is limited, due to the level of complication. Therefore, three-level systems (qutrits) are presently being studied intensively. They are interesting for many reasons. First of all, such systems model realistic three-level atoms in which interference between different radiative transitions is possible, resulting in new kinds of coherence \cite{FS}. The quantum dynamics of collective systems of such atoms differs significantly from the dynamics of two-level atoms (see e.g. \cite{AP, EKM, DJ}). On the other hand, the theory of quantum correlations between pairs of such atoms is much more complex than in the case of qubits. Even the description of the set of states of a single qutrit is much more complicated than that of qubit states \cite{GSS}. Moreover, there is no simple necessary and sufficient condition probing entanglement of qutrits. The Peres-Horodecki separability criterion \cite{Pe, HHH1} is not sufficient for a two-qutrit system; it only shows that states that are not positive after partial transposition (NPPT states) are entangled. It turns out that all entangled states can be divided into two classes: free entangled states, which can be distilled using local operations and classical communication (LOCC), and bound entangled states, for which no LOCC strategy can be used to extract pure-state entanglement \cite{HHH2}. Since many effects in quantum information depend on maximally entangled pure states, only distillable states can directly be used for quantum communication. \par Recently, more general properties of quantum correlations, which go beyond quantum entanglement, have attracted a lot of interest. 
They arise from the observation that for pure separable states, there exist von Neumann measurements on a part of the composite system that do not disturb the state, whereas nonseparable states are always disturbed by such local measurements. Extension of this feature to mixed states gives rise to the notion of quantum discord \cite{OZ, HV, MBC}. For pure states the notion of quantum discord coincides with entanglement, but in the case of mixed states discord and entanglement differ significantly. For example, almost all quantum states have non-vanishing discord, and there exist discordant separable mixed states \cite{FA}. \par To evaluate quantum discord for a given state, one can use its geometric measure instead of the original measure proposed in \cite{OZ}. Such a geometric measure of quantum discord is given by the minimal distance of a given state $\ro$ from the set of states $\PP_{\cA}(\ro)$, obtained after any local measurement $\PP_{\cA}$ on a part $\cA$. The proper choice of a distance measure is crucial. Presently there are three of them in use. The measure proposed in \cite{DVB} uses the Hilbert-Schmidt norm to define a distance in the set of states. This choice has a technical advantage: the minimization process can be realized analytically for arbitrary two-qubit states. However, this measure has some unwanted properties. The most important problem is that it may increase under local operations performed on the unmeasured system \cite{P,TGV, J}. This can be cured by using the Schatten 1-norm (trace norm) instead; however, the measure of quantum discord so defined is more difficult to compute \cite{Paula}. By now, the explicit formula for it is known only in the case of Bell-diagonal states or two-qubit X-states \cite{Paula, Cic}. The third measure used for studying the geometric quantum discord is based on the Bures distance \cite{SpOr}. It has the nice property that for pure states it is strictly equal to the geometric measure of entanglement. 
\par In this paper, we extend the analysis of the trace-norm-based geometric discord $D_{1}$ to the system of two qutrits. The results known for the two-qutrit system are related to the geometric discord based on the Hilbert-Schmidt norm and give information only about two-qutrit Werner states \cite{Y} and about upper and lower bounds of such discord in the case of bound entangled states \cite{RP}. First we compute the form of $D_{1}$ for a special class of states with maximally mixed marginals and diagonal correlation matrix. We find that for pure Bell states and qutrit Werner states, the distance of a state $\ro$ to the states $\PP_{\cA}(\ro)$ is constant, and one does not need to minimize over all local measurements to compute the quantum discord. We use the value of this distance for the Bell state to normalize $D_{1}$ in such a way that for any state $\ro$ \begin{equation*} 0\leq D_{1}(\ro)\leq 1 \end{equation*} and $D_{1}(\ro)=1$ for a maximally entangled state $\ro$. Then the normalized quantum discord is computed for the class of qutrit Werner states $\ro_{W}$, and we obtain the result that $D_{1}(\ro_{W})$ is equal to the mixing parameter $p$ (Sect. IV.B). For other families of states with maximally mixed marginals, a minimization over all measurements is necessary. This makes the problem of analytic evaluation of qutrit discord extremely difficult. Fortunately, for the examples considered in this work, numerical analysis shows that the minimal distance between $\ro$ and $\PP_{\cA}(\ro)$ is achieved for the projective measurement given by the standard orthonormal basis in $\C^{3}$. In this way we can compute the trace-norm geometric discord for a two-parameter family of mixed entangled states (Sect. V.A) and a one-parameter family containing bound entangled states (Sect. V.B). In particular, we obtain the first known analytic formula giving the trace-norm quantum discord of bound entangled states. 
\section{Qudit state parametrization} \subsection{One-qudit parametrization} Let us start our analysis with a general $d$-level quantum system (qudit). To describe the states of a qudit, it is convenient to use as a basis in the set of $d\times d$ matrices the hermitian generators of the $\mr{su(d)}$ algebra and the identity matrix $\I_{d}$. Let $\las{1},\ldots,\las{d^{2}-1}$ be the generators of the $\mr{su(d)}$ algebra. The matrices $\las{j}$ satisfy \begin{equation*} \tr\, \las{j}=0,\quad \tr\, (\las{j}\las{k})=2\,\delta_{jk},\; j,k=1,\ldots,d^{2}-1 \end{equation*} and \begin{equation} \las{j}\las{k}=\frac{2}{d}\,\delta_{jk}\,\I_{d} +\sum\limits_{l}\,(\hat{d}_{jkl}+i\,\hat{f}_{jkl})\,\las{l}\label{lajlak} \end{equation} where the structure constants $\hat{d}_{jkl}$ and $\hat{f}_{jkl}$ are given by \begin{equation} \hat{d}_{jkl}=\frac{1}{4}\,\tr\,([\las{j},\las{k}]_{+}\,\las{l})\label{d} \end{equation} and \begin{equation} \hat{f}_{jkl}=\frac{1}{4i}\,\tr\,([\las{j},\las{k}]\,\las{l})\label{f} \end{equation} Using the structure constants (\ref{d}) and (\ref{f}), one can introduce the following ``star'' and ``wedge'' products in the real linear space $\R^{d^{2}-1}$. For $n,\, m\in \R^{d^{2}-1}$ we define \begin{equation} (n\star m)_{j}=\sqrt{\frac{d(d-1)}{2}}\,\frac{1}{d-2}\,\sum\limits_{k,l}\,\hat{d}_{jkl}n_{k}m_{l} \end{equation} and \begin{equation} (n\wedge m)_{j}=\sqrt{\frac{d(d-1)}{2}}\,\frac{1}{d-2}\,\sum\limits_{k,l}\,\hat{f}_{jkl}n_{k}m_{l} \end{equation} Let $\la=(\las{1},\ldots,\las{d^{2}-1})$ and \begin{equation} \ip{n}{\la}=\sum\limits_{j}n_{j}\las{j} \end{equation} then, taking into account (\ref{lajlak}), we obtain \begin{equation} \ip{n}{\la}\ip{m}{\la}=\frac{2}{d}\,\ip{n}{m}\I_{d}+\frac{1}{d^{\prime}}\,\ip{n\star m}{\la}+\frac{i}{d^{\prime}}\,\ip{n\wedge m}{\la}, \label{product} \end{equation} where \begin{equation*} d^{\prime}=\sqrt{\frac{d(d-1)}{2}}\,\frac{1}{d-2}. \end{equation*} The set of states of a $d$-level system can be parametrized as follows (see e.g. 
\cite{BK}) \begin{equation} \ro=\frac{1}{d}\,\left(\I_{d}+d^{\prime\prime}\,\ip{n}{\la}\right),\quad n\in \R^{d^{2}-1},\label{state} \end{equation} where \begin{equation*} d^{\prime\prime}=\sqrt{\frac{d(d-1)}{2}} \end{equation*} and the components of the vector $n$ are \begin{equation*} n_{j}=\frac{d}{\sqrt{2d(d-1)}}\,\tr\,(\ro\,\la_{j}),\quad j=1,\ldots,d^{2}-1. \end{equation*} The matrix (\ref{state}) is hermitian and has unit trace. To describe a quantum state, the matrix $\ro$ has to be positive semi-definite, and this condition is not easy to characterize in terms of the vector $n$. However, the pure states, given by one-dimensional projectors, can be fully described. Using (\ref{product}), one can check that $\ro$ given by (\ref{state}) satisfies $\ro^{2}=\ro$ if and only if \begin{equation*} \ip{n}{n}=1\quad\text{and}\quad n\star n=n. \end{equation*} \subsection{Two-qudit parametrization} Consider now two qudits $\cA$ and $\cB$. The state of the compound system can be parametrized as follows \begin{equation} \begin{split} \ro=\frac{1}{d^{2}}\bigg(&\I_{d}\otimes \I_{d}+d^{\prime\prime}\,\ip{x}{\la}\otimes \I_{d}+\I_{d}\otimes d^{\prime\prime}\,\ip{y}{\la}\\ &+\sum\limits_{j,k=1}^{d^{2}-1}T_{jk}\las{j}\otimes\las{k}\bigg) \end{split} \label{9state} \end{equation} with $x,\, y\in \R^{d^{2}-1}$. Notice that \begin{equation*} x_{j}=\frac{d}{\sqrt{2d(d-1)}}\,\tr\,(\ro\,\las{j}\otimes\I_{d}),\quad y_{j}=\frac{d}{\sqrt{2d(d-1)}}\,\tr\,(\ro\,\I_{d}\otimes \las{j}) \end{equation*} and \begin{equation*} T_{jk}=\frac{d^{2}}{4}\,\tr\,(\ro\las{j}\otimes\las{k}). \end{equation*} The parametrization (\ref{9state}) is chosen in such a way that the marginals $\ptr{\cA}\ro$ and $\ptr{\cB}\ro$ are given by the vectors $x$ and $y$ as in (\ref{state}). \section{Trace-norm geometric qudit discord} When a bipartite system $\cA\cB$ is prepared in a state $\ro$ and we perform a local measurement on the subsystem $\cA$, almost all states $\ro$ will be disturbed by such a measurement. 
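For concreteness, the parametrization (\ref{state}) can be verified numerically. The following Python sketch (our own illustration, not part of the derivation; all names are ours) builds the eight Gell-Mann matrices for $d=3$, checks the normalization $\tr(\las{j}\las{k})=2\delta_{jk}$, and reconstructs a random qutrit density matrix from its vector $n$:

```python
import numpy as np

# the eight Gell-Mann matrices: hermitian generators of su(3)
GM = [np.array(m, dtype=complex) for m in [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
]]
GM.append(np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3))

d = 3
# tracelessness and the normalization tr(lambda_j lambda_k) = 2 delta_jk
for j, lj in enumerate(GM):
    assert abs(np.trace(lj)) < 1e-12
    for k, lk in enumerate(GM):
        assert abs(np.trace(lj @ lk) - 2 * (j == k)) < 1e-12

# reconstruct a random qutrit state from its vector n, as in (state)
rng = np.random.default_rng(0)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)

n = np.array([d / np.sqrt(2 * d * (d - 1)) * np.trace(rho @ l) for l in GM])
dpp = np.sqrt(d * (d - 1) / 2)                  # the constant d'' of the text
rho_rec = (np.eye(d) + dpp * sum(nj * l for nj, l in zip(n, GM))) / d
assert np.allclose(rho, rho_rec)
```

The same reconstruction works verbatim for any $d$ once the $d^{2}-1$ generators are supplied.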
The one-sided geometric discord is defined as the minimal disturbance induced by any projective measurement $\PP_{\cA}$ on subsystem $\cA$ \cite{DVB}. Here we choose the distance in the set of states given by the trace norm, instead of the Hilbert-Schmidt norm used in the standard approach, and define the (non-normalized) measure of quantum discord as \cite{Paula} \begin{equation} \fala{D}_{1}(\ro)=\min\limits_{\PP_{\cA}}\,||\ro-\PP_{\cA}(\ro)||_{1},\label{trdisc} \end{equation} where \begin{equation*} ||\sigma||_{1}=\tr\,|\sigma|. \end{equation*} In the case of qudits, a local projective measurement $\PP_{\cA}$ is given by one-dimensional projectors $P_{1},\, P_{2},\ldots,\, P_{d}$ on $\C^{d}$, such that \begin{equation*} P_{1}+P_{2}+\cdots + P_{d}=\I_{d},\quad P_{j}P_{k}=\delta_{jk}\,P_{k} \end{equation*} and $\PP_{\cA}=\PP\otimes \mr{id}$, where \begin{equation*} \PP(\sigma)=P_{1}\,\sigma\, P_{1}+P_{2}\,\sigma\, P_{2}+\cdots +P_{d}\,\sigma\,P_{d}. \end{equation*} It is worth stressing that definition (\ref{trdisc}) is equivalent to the more common one, which is given by the minimal distance of a given state to the set $\Omega_{0}$ of all states with zero discord. In the case of the one-sided quantum discord studied in this paper, the set $\Omega_{0}$ contains all ``classical-quantum'' states \begin{equation*} \ro_{\mr{cq}}=\sum\limits_{k=1}^{3}p_{k}\ket{\psi_{k}}\bra{\psi_{k}}\otimes \ro_{k}^{\cB}, \end{equation*} where $\{\psi_{k}\}$ is any single-qutrit orthonormal basis, $\{\ro_{k}^{\cB}\}$ are any states of the subsystem $\cB$ and $p_{k}\geq 0,\; \sum\limits_{k=1}^{3}p_{k}=1$. \par For the state (\ref{9state}) we have \begin{equation*} \begin{split} \ro-\PP_{\cA}(\ro)=\frac{1}{d^{2}}\,&\big[\,(d^{\prime\prime}\ip{x}{\la}- \PP(d^{\prime\prime}\ip{x}{\la}))\otimes \I_{d}\\ &+\sum\limits_{j,k=1}^{d^{2}-1}T_{jk}\,(\las{j}- \PP(\las{j}))\otimes \las{k}\,\big]. 
\end{split} \end{equation*} Since \begin{equation*} \PP(\las{j})=\sum\limits_{k=1}^{d^{2}-1}a_{jk}\,\las{k},\quad a_{jk}=\frac{1}{2}\,\tr\,(\PP(\las{j})\las{k}) \end{equation*} and the matrix $A=(a_{jk})$ is real and symmetric (in fact it is a projection operator \cite{ZCL}), \begin{equation*} \PP(\ip{m}{\la})=\ip{m}{A\la}=\ip{Am}{\la},\quad m\in \R^{d^{2}-1}. \end{equation*} So \begin{equation} \ro-\PP_{\cA}(\ro)=\frac{1}{d^{2}}\,\big[ d^{\prime\prime}\ip{Mx}{\la}\otimes \I_{d}+\sum\limits_{j,k}T_{jk}\ip{Me_{j}}{\la}\otimes \ip{e_{k}}{\la}\big] \label{dif} \end{equation} where $M=\I_{d^{2}-1}-A$ and $\{e_{j}\}_{j=1}^{d^{2}-1}$ is the canonical basis in $\R^{d^{2}-1}$. Let $R(M)$ denote the right-hand side of equation (\ref{dif}). Then the non-normalized geometric quantum discord of the state (\ref{9state}) equals \begin{equation} \fala{D}_{1}(\ro)=\min\limits_{M}||R(M)||_{1}=\min\limits_{M}\,\tr\,\sqrt{Q(M)} \end{equation} where $Q(M)=R(M)\,R(M)^{\ast}$ and the minimum is taken over all matrices $M$ corresponding to measurements on subsystem $\cA$. \par To simplify further computations, we first consider the states with maximally mixed marginals, i.e. states $\ro$ such that \begin{equation} \ptr{\cA}\ro=\frac{\I_{d}}{d},\quad \ptr{\cB}\ro=\frac{\I_{d}}{d}.\label{MMM} \end{equation} In the parametrization (\ref{9state}) this property corresponds to $x=y=0$. We also choose states for which the correlation matrix $T=(T_{jk})$ is diagonal. (Notice that, contrary to the case of qubits ($d=2$), a general state of two qudits ($d>2$) satisfying (\ref{MMM}) is not locally equivalent to a state with diagonal $T$, and the class so defined is only a subclass of all states with maximally mixed marginals.) 
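Definition (\ref{trdisc}) can also be probed by brute force. The sketch below (an illustration of ours with hypothetical helper names, not the analytic method of this paper) estimates $\fala{D}_{1}$ for two qutrits by sampling random measurement bases on $\cA$:

```python
import numpy as np

d = 3

def measure(rho, U):
    """P_A(rho) for the projective measurement whose rank-1 projectors
    are built from the columns of the unitary U (a basis of C^3)."""
    out = np.zeros_like(rho, dtype=complex)
    for k in range(d):
        Pk = np.outer(U[:, k], U[:, k].conj())
        PA = np.kron(Pk, np.eye(d))
        out += PA @ rho @ PA
    return out

def trace_norm(X):
    # trace norm = sum of singular values
    return np.sum(np.linalg.svd(X, compute_uv=False))

def D1_tilde(rho, trials=500, seed=0):
    """Brute-force estimate of min_P ||rho - P_A(rho)||_1 over random bases."""
    rng = np.random.default_rng(seed)
    best = trace_norm(rho - measure(rho, np.eye(d)))
    for _ in range(trials):
        # Haar-ish random unitary from the QR decomposition of a Ginibre matrix
        Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
        best = min(best, trace_norm(rho - measure(rho, Q)))
    return best
```

For a zero-discord state such as $\I_{9}/9$ the estimate is $0$; for a general state the sampled minimum is only an upper bound on the true one.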
Let \begin{equation*} T=\mr{diag}\,(t_{1},\ldots,t_{d^{2}-1}), \end{equation*} then \begin{equation} R(M)=\frac{1}{d^{2}}\,\sum\limits_{j=1}^{d^{2}-1}t_{j}\,\ip{M\,e_{j}}{\la}\otimes \ip{e_{j}}{\la}, \end{equation} and using (\ref{product}), we obtain \begin{equation*} \begin{split} Q(M)=\frac{1}{d^{4}}\,\bigg[&\,\frac{4}{d^{2}}\,\sum\limits_{j}\,t_{j}^{2}\,\ip{Me_{j}}{Me_{j}}\, \I_{d}\otimes \I_{d}\\ &+\frac{2}{d\,d^{\prime}}\,\sum\limits_{j}\,t_{j}^{2}\,\ip{Me_{j}\ast Me_{j}}{\la}\otimes\I_{d}\\ &+\frac{2}{d\,d^{\prime}}\,\sum\limits_{j,k}t_{j}t_{k}\,\ip{Me_{j}}{Me_{k}}\I_{d}\otimes \ip{e_{j}\ast e_{k}}{\la}\\ &+\frac{1}{d^{\prime 2}}\,\sum\limits_{j,k}t_{j}t_{k}\,\ip{Me_{j}\ast Me_{k}}{\la}\otimes \ip{e_{j}\ast e_{k}}{\la}\\ &-\frac{1}{d^{\prime 2}}\sum\limits_{j,k}t_{j}t_{k}\,\ip{Me_{j}\wedge Me_{k}}{\la}\otimes \ip{e_{j}\wedge e_{k}}{\la}\,\bigg] \end{split} \end{equation*} \section{The case of qutrits} Now we consider in more detail the case of two qutrits, i.e. when $d=3$. In this case $d^{\prime}=d^{\prime\prime}=\sqrt{3}$ and the set of projectors corresponding to a local projective measurement forms a four-parameter family. In the explicit parametrization we have (see e.g. 
\cite{M}) \begin{equation*} \begin{split} &P_{1}=\begin{pmatrix}\cos^{2}\te \sin^{2}\vf&e^{-i(\psi-\chi)}a(\te,\vf)&e^{i\chi}b(\te,\vf)\\[1mm] e^{i(\psi-\chi)}a(\te,\vf)&\sin^{2}\te \sin^{2}\vf&e^{i\psi}c(\te,\vf)\\[1mm] e^{-i\chi}b(\te,\vf)&e^{-i\psi}c(\te,\vf)&\cos^{2}\vf \end{pmatrix},\\[2mm] &P_{2}=\begin{pmatrix}\cos^{2}\te \cos^{2}\vf&e^{-i(\psi-\chi)}d(\te,\vf)&-e^{i\chi}b(\te,\vf)\\[1mm] e^{i(\psi-\chi)}d(\te,\vf)&\sin^{2}\te \cos^{2}\vf&-e^{i\psi}c(\te,\vf)\\[1mm] -e^{-i\chi}b(\te,\vf)&-e^{-i\psi}c(\te,\vf)&\sin^{2}\vf \end{pmatrix},\\[2mm] &P_{3}=\begin{pmatrix}\sin^{2}\te&-\frac{1}{2}e^{-i(\psi-\chi)}\sin 2\te&0\\[1mm] -\frac{1}{2}e^{i(\psi-\chi)}\sin 2\te&\cos^{2}\te&0\\[1mm] 0&0&0 \end{pmatrix}, \end{split} \end{equation*} where \begin{equation*} \begin{split} &a(\te,\vf)=\frac{1}{2}\sin 2\te \sin^{2}\vf,\\ &b(\te,\vf)=\frac{1}{2}\cos\te \sin 2\vf,\\ &c(\te,\vf)=\frac{1}{2}\sin\te \sin 2\vf,\\ &d(\te,\vf)=\frac{1}{2}\sin 2\te \cos^{2}\vf \end{split} \end{equation*} and $\te,\, \vf,\, \chi\in [-\pi,\pi],\, \psi\in [-\pi/2, \pi/2]$. \par Our first attempt is to compute quantum discord for states with diagonal correlation matrix $T$ and matrix elements \begin{equation} t_{j}=t\,\ee_{j},\quad j=1,\ldots , 8, \label{t} \end{equation} where \begin{equation*} \ee_{j}=\W{+1}{j=1,3,4,6,8}{-1}{j=2,5,7}. \end{equation*} This kind of correlation matrix corresponds for example to qutrit Bell states and Werner states. Notice that in this case \begin{equation} \begin{split} &\sum\limits_{j,k}t_{j}t_{k}\ip{Me_{j}}{Me_{k}}\,\I_{3}\otimes\ip{e_{j}\ast e_{k}}{\la}\\ &=t^{2}\,\sum\limits_{j,k}\ee_{j}\ee_{k}\ip{e_{j}}{Me_{k}}\,\I_{3}\otimes \ip{e_{j}\ast e_{k}}{\la}\\ &=t^{2}\,\sum\limits_{j,k}\ip{e_{j}}{IMI\,e_{k}}\,\I_{3}\otimes\ip{e_{j}\ast e_{k}}{\la}, \end{split}\label{suma} \end{equation} where \begin{equation*} I=\mr{diag}\,(\ee_{1},\ldots, \ee_{8}). 
\end{equation*} Since \begin{equation*} \begin{split} &\sum\limits_{j}\ip{e_{j}}{IMI\,e_{k}}\,\I_{3}\otimes \ip{e_{j}\ast e_{k}}{\la}\\ &=\I_{3}\otimes \ip{\sum\limits_{j}\ip{e_{j}}{IMI\,e_{k}}e_{j}\ast e_{k}}{\la}\\ &=\I_{3}\otimes \ip{IMI\,e_{k}\ast e_{k}}{\la}, \end{split} \end{equation*} the sum (\ref{suma}) is equal to \begin{equation*} t^{2}\, \I_{3}\otimes \sum\limits_{k}\ip{IMI\,e_{k}\ast e_{k}}{\la}. \end{equation*} By a direct computation one can check that \begin{equation*} \sum\limits_{k}IMI\,e_{k}\ast e_{k}=0, \end{equation*} so the sum (\ref{suma}) is also equal to zero. Moreover, since \begin{equation*} \sum\limits_{j}\, M\,e_{j}\ast M\,e_{j}=0, \end{equation*} we have \begin{equation*} \sum\limits_{j}t_{j}^{2}\,\ip{M\,e_{j}\ast M\,e_{j}}{\la}\otimes \I_{3}=0. \end{equation*} Notice that \begin{equation*} \sum\limits_{j=1}^{8}\ip{Me_{j}}{Me_{j}}=\sum\limits_{j=1}^{8}\ip{Me_{j}}{e_{j}}=\tr\, M, \end{equation*} so \begin{equation*} \begin{split} Q(M)=\frac{1}{81}\,\bigg[&\frac{4}{9}t^{2}\,\tr\, M\,\I_{3}\otimes \I_{3}\\ &+\frac{1}{3}\,\sum\limits_{j,k=1}^{8}t_{j}t_{k}\,\ip{Me_{j}\ast Me_{k}}{\la}\otimes \ip{e_{j}\ast e_{k}}{\la}\\ &-\frac{1}{3}\,\sum\limits_{j,k=1}^{8}t_{j}t_{k}\,\ip{Me_{j}\wedge Me_{k}}{\la}\otimes \ip{e_{j}\wedge e_{k}}{\la}\,\bigg] \end{split} \end{equation*} and \begin{equation*} \tr\, Q(M)=\frac{4t^{2}}{9\cdot 81}\,\tr\, M\cdot\tr \I_{3}\cdot \tr \I_{3} =\frac{4t^{2}}{81}\,\tr M. \end{equation*} Since $M=\I_{8}-A$ projects onto a six-dimensional subspace of $\R^{8}$ \cite{ZCL}, $\tr M =6$ and \begin{equation*} \tr Q(M)=\left(\frac{2}{3}\right)^{3}\, t^{2}. \end{equation*} By similar, but more involved, computations one can check that \begin{equation*} \tr\, Q(M)^{k}=q_{k}\,t^{2k},\quad k=2,\ldots, 9, \end{equation*} where $q_{k}$ are constants. In particular \begin{equation*} \tr Q(M)^{k}=\tr Q(M_{0})^{k},\quad k=1,\ldots ,9, \end{equation*} where $M_{0}$ denotes the matrix $M$ with all parameters equal to zero. 
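This measurement independence can be confirmed numerically. The Python sketch below (our own check, with names of our choosing) builds a state of the form (\ref{9state}) with the correlation matrix (\ref{t}) and verifies that $||\ro-\PP_{\cA}(\ro)||_{1}$ takes the same value for random measurement bases as for the standard one:

```python
import numpy as np

# Gell-Mann matrices lambda_1 .. lambda_8
GM = [np.array(m, dtype=complex) for m in [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
]]
GM.append(np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3))

eps = np.array([1, -1, 1, 1, -1, 1, -1, 1])     # the signs epsilon_j
t = 0.9          # any 0 <= t <= 3/2 gives a valid (Werner-type) state
rho = np.eye(9, dtype=complex) / 9
for s, l in zip(t * eps, GM):
    rho += s * np.kron(l, l) / 9                # eq. (9state) with x = y = 0

def measure(rho, U):
    out = np.zeros_like(rho)
    for k in range(3):
        PA = np.kron(np.outer(U[:, k], U[:, k].conj()), np.eye(3))
        out += PA @ rho @ PA
    return out

def tn(X):
    return np.sum(np.linalg.svd(X, compute_uv=False))

rng = np.random.default_rng(3)
ref = tn(rho - measure(rho, np.eye(3)))         # standard-basis value
for _ in range(200):
    U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
    assert abs(tn(rho - measure(rho, U)) - ref) < 1e-10
assert abs(ref - 8 * t / 9) < 1e-12             # tr sqrt(Q(M_0)) = (4/3)(2t/3)
```

The constant value $8t/9$ matches $\tr\sqrt{Q(M_{0})}$ computed analytically below for the Bell and Werner cases.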
From that it follows that the eigenvalues of $Q(M)$ and $Q(M_{0})$ are the same \cite{L}, and the distance between $\ro$ and $\PP_{\cA}(\ro)$ is constant. Thus for such a class of states we need not minimize over all matrices $M$ to compute the quantum discord; it is enough to find the trace norm of $\sqrt{Q(M_{0})}$. Next we consider two examples of states with the correlation matrix satisfying (\ref{t}). \subsection{Qutrit Bell state} We start with the \textit{Bell state} of two qutrits, i.e. the maximally entangled pure state given by the vector \begin{equation*} \Psi_{0}=\frac{1}{\sqrt{3}}\,\sum\limits_{k=1}^{3}\vf_{k}\otimes \vf_{k}, \end{equation*} where $\{\vf_{k}\}$ is the standard orthonormal basis in $\C^{3}$. The correlation matrix corresponding to this state is given by \begin{equation*} T=\mr{diag}\,\left(\frac{3}{2},\,-\frac{3}{2},\,\frac{3}{2},\,\frac{3}{2},\,-\frac{3}{2},\,\frac{3}{2},\,-\frac{3}{2},\,\frac{3}{2}\right). \end{equation*} One can check that in this case \begin{equation} Q(M_{0})=\frac{1}{9}\,\begin{pmatrix}2&0&0&0&1&0&0&0&1\\0&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0\\ 1&0&0&0&2&0&0&0&1\\0&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0\\1&0&0&0&1&0&0&0&2 \end{pmatrix}\label{QBell} \end{equation} and \begin{equation*} \mr{sp}\, Q(M_{0})=\big\{\frac{4}{9},\, \frac{1}{9},\, \frac{1}{9},0,0,0,0,0,0\big\}, \end{equation*} so \begin{equation*} \fala{D}_{1}({\Psi_0})=\tr\,\sqrt{Q(M_{0})}=\frac{4}{3}. \end{equation*} It is natural to demand that the quantum discord of any maximally entangled state should be equal to $1$, so we introduce the \textit{normalized geometric measure of qutrit discord} $D_{1}(\ro)$, defined as \begin{equation*} D_{1}(\ro)=\frac{3}{4}\,\fala{D}_{1}(\ro). \end{equation*} Obviously $D_{1}(\Psi_{0})=1$. 
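These spectral claims are easy to confirm numerically. The sketch below (our own check in Python) builds $\Psi_{0}$, applies the standard-basis measurement on $\cA$ (the choice $M=M_{0}$), and recovers both $\mr{sp}\,Q(M_{0})$ and $\fala{D}_{1}(\Psi_{0})=4/3$:

```python
import numpy as np

d = 3
psi0 = np.eye(d).reshape(-1) / np.sqrt(d)   # (1/sqrt(3)) sum_k phi_k (x) phi_k
rho = np.outer(psi0, psi0.conj())

# measurement of subsystem A in the standard basis of C^3 (M = M_0)
PA_rho = np.zeros_like(rho)
for e in np.eye(d):
    PA = np.kron(np.outer(e, e), np.eye(d))
    PA_rho += PA @ rho @ PA

delta = rho - PA_rho                        # this is R(M_0)
Q0 = delta @ delta.conj().T
eigs = np.sort(np.linalg.eigvalsh(Q0))[::-1]
assert np.allclose(eigs[:3], [4 / 9, 1 / 9, 1 / 9])   # sp Q(M_0) from the text
dist = np.sum(np.sqrt(np.abs(eigs)))        # tr sqrt(Q(M_0))
assert abs(dist - 4 / 3) < 1e-6
```

Multiplying `dist` by the normalization $3/4$ gives the expected $D_{1}(\Psi_{0})=1$.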
\subsection{Qutrit Werner states} As a second example we shall consider the family of qutrit \textit{Werner states} \begin{equation} \ro_{W}=(1-p)\,\frac{\I_{9}}{9}+ p\,\ket{\Psi_{0}}\bra{\Psi_{0}},\quad p\in [0,1].\label{W} \end{equation} The states (\ref{W}) have interesting properties. For $p\leq 1/4$, $\ro_{W}$ are PPT states, whereas for $p>1/4$ they are NPPT. In fact such Werner states are distillable, since they violate the reduction criterion of separability \cite{HH}. \par One can check that in this case \begin{equation*} T=\mr{diag}\,\left(\frac{3}{2}p,\,-\frac{3}{2}p,\,\frac{3}{2}p,\,\frac{3}{2}p,\,-\frac{3}{2}p,\,\frac{3}{2}p,\,-\frac{3}{2}p,\,\frac{3}{2}p\right), \end{equation*} so the corresponding matrix $Q(M_{0})$ is just the matrix (\ref{QBell}) multiplied by the factor $p^{2}$ and \begin{equation*} D_{1}(\ro_{W})=p. \end{equation*} \par It is instructive to compare the measure of quantum discord just obtained with other measures of quantum correlations. First consider the Hilbert-Schmidt norm geometric discord $D_{2}(\ro)$, which in the case of two qutrits is defined as (see e.g. \cite{GA}) \begin{equation*} D_{2}(\ro)=\min\limits_{\PP_{\cA}}\, \frac{3}{2}\,||\ro-\PP_{\cA}(\ro)||_{2}^{2}, \end{equation*} where \begin{equation*} ||\sigma||_{2}^{2}=\tr\,(\sigma \sigma^{\ast}). \end{equation*} For the states considered in this subsection \begin{equation*} D_{2}(\ro)=\frac{3}{2}\,\tr\, Q(M)=\frac{4}{9}\,t^{2}. \end{equation*} In particular, for the Werner state \begin{equation} D_{2}(\ro_{W})=p^{2},\label{D2W} \end{equation} and \begin{equation*} D_{1}(\ro_{W})=\sqrt{D_{2}(\ro_{W})}. \end{equation*} The result (\ref{D2W}) was previously obtained in \cite{Y}, where the authors used minimization over all local measurements, which, as we have shown, is not needed. 
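The linear behaviour $D_{1}(\ro_{W})=p$ is easy to confirm numerically; a sketch of our own, evaluating the distance at the standard-basis measurement $M_{0}$:

```python
import numpy as np

d = 3
psi0 = np.eye(d).reshape(-1) / np.sqrt(d)
bell = np.outer(psi0, psi0.conj())

def measure_std(rho):
    """Local measurement on A in the standard basis (the minimizer M_0)."""
    out = np.zeros_like(rho)
    for e in np.eye(d):
        PA = np.kron(np.outer(e, e), np.eye(d))
        out += PA @ rho @ PA
    return out

def D1(rho):
    """Normalized trace-norm discord, evaluated at M_0."""
    delta = rho - measure_std(rho)
    return 0.75 * np.sum(np.linalg.svd(delta, compute_uv=False))

# D_1(rho_W) = p across the whole Werner family
for p in np.linspace(0, 1, 11):
    rho_W = (1 - p) * np.eye(d * d) / d**2 + p * bell
    assert abs(D1(rho_W) - p) < 1e-12
```

The identity component of $\ro_{W}$ is invariant under $\PP_{\cA}$, so the disturbance is just $p$ times that of the Bell state, which is the content of the assertion.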
\par Now we discuss the relation between $D_{1}$ and the measure of entanglement given by the negativity, which in the case of two qutrits is defined as \cite{VW} \begin{equation*} N(\ro)=\frac{1}{2}\,(||\ro^{PT}||_{1}-1), \end{equation*} where $\ro^{PT}$ denotes the partial transposition of the state $\ro$. If $N(\ro)>0$ then the state $\ro$ is not separable, but negativity cannot detect bound entangled states. For the Werner state we have \begin{equation*} N(\ro_{W})=\W{0}{p\leq\frac{1}{4}}{\frac{1}{3}(4p-1)}{p>\frac{1}{4}}. \end{equation*} Obviously \begin{equation*} D_{1}(\ro_{W})\geq N(\ro_{W}), \end{equation*} which is in accordance with the general result proved in \cite{DMV}. \section{Other examples} \subsection{Some states with diagonal correlation matrix} Now we consider a family of states with a more general diagonal matrix $T$, not satisfying condition (\ref{t}). Let \begin{equation} \ro=\begin{pmatrix}\frac{1}{3}&0&0&0&a&0&0&0&0\\[1mm] 0&0&0&0&0&0&0&0&0\\[1mm] 0&0&0&0&0&0&0&0&0\\[1mm] 0&0&0&0&0&0&0&0&0\\[1mm] a&0&0&0&\frac{1}{3}&0&0&0&c\\[1mm] 0&0&0&0&0&0&0&0&0\\[1mm] 0&0&0&0&0&0&0&0&0\\[1mm] 0&0&0&0&0&0&0&0&0\\[1mm] 0&0&0&0&c&0&0&0&\frac{1}{3} \end{pmatrix},\label{roac} \end{equation} where $a\geq 0,\, c\geq 0$. The matrix (\ref{roac}) is positive semi-definite if and only if $a^{2}+c^{2}\leq 1/9$, so in polar coordinates we have \begin{equation*} a=r\,\cos \vartheta,\; c=r\,\sin\vartheta,\quad r\in [0,1/3],\, \vartheta\in [0,\pi/2]. \end{equation*} The corresponding correlation matrix is given by \begin{equation*} T=\mr{diag}\, \left(\frac{9}{2}a,-\frac{9}{2}a,\frac{3}{2},0,0,\frac{9}{2}c,-\frac{9}{2}c,\frac{3}{2}\right). \end{equation*} In this case the distance between $\ro$ and $\PP_{\cA}(\ro)$ is not constant, and to compute $D_{1}(\ro)$ we must minimize $\tr\,\sqrt{Q(M)}$ over all projectors $M$. However, numerical computations show that the minimum is achieved for $M=M_{0}$. 
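This numerical observation can be reproduced with a few lines of code. The sketch below (ours; names are hypothetical) samples random measurement bases for the state (\ref{roac}) and checks that none beats the standard basis, whose normalized value equals $\frac{3}{2}\sqrt{a^{2}+c^{2}}$:

```python
import numpy as np

def rho_ac(a, c):
    """The state (roac); positive semi-definite when a^2 + c^2 <= 1/9."""
    rho = np.zeros((9, 9))
    rho[0, 0] = rho[4, 4] = rho[8, 8] = 1 / 3
    rho[0, 4] = rho[4, 0] = a
    rho[4, 8] = rho[8, 4] = c
    return rho

def measure(rho, U):
    out = np.zeros_like(rho, dtype=complex)
    for k in range(3):
        PA = np.kron(np.outer(U[:, k], U[:, k].conj()), np.eye(3))
        out += PA @ rho @ PA
    return out

def tn(X):
    return np.sum(np.linalg.svd(X, compute_uv=False))

a, c = 0.2, 0.1                   # a^2 + c^2 = 0.05 <= 1/9
rho = rho_ac(a, c)
best = tn(rho - measure(rho, np.eye(3)))     # standard-basis value

# no sampled basis should do better than the standard one
rng = np.random.default_rng(2)
for _ in range(500):
    U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
    assert tn(rho - measure(rho, U)) >= best - 1e-9

# the normalized standard-basis value is (3/2) sqrt(a^2 + c^2)
assert abs(0.75 * best - 1.5 * np.hypot(a, c)) < 1e-9
```

Random sampling of course only supports, and cannot prove, that $M_{0}$ is the global minimizer.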
Since \begin{equation*} Q(M_{0})=\begin{pmatrix}a^{2}&0&0&0&0&0&0&0&ac\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&a^{2}+c^{2}&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ ac&0&0&0&0&0&0&0&c^{2} \end{pmatrix} \end{equation*} and \begin{equation*} \mr{sp}\, Q(M_{0})=\{ a^{2}+c^{2},\, a^{2}+c^{2},\, 0,0,0,0,0,0,0\}, \end{equation*} we have \begin{equation*} D_{1}(\ro)=\frac{3}{2}\,\sqrt{a^{2}+c^{2}}=\frac{3}{2}\, r. \end{equation*} On the other hand \begin{equation*} N(\ro)=a+c=r\,(\cos \vartheta+\sin\vartheta) \end{equation*} and obviously \begin{equation*} D_{1}(\ro)>N(\ro). \end{equation*} \subsection{States with non-diagonal correlation matrix: bound entangled states} Let us finally consider the following family of states \cite{HHH} \begin{equation} \ro_{\al}=\frac{2}{7}\,\ket{\Psi_{0}}\bra{\Psi_{0}}+\frac{\al}{7}\,\ro_{+}+\frac{5-\al}{7}\,\ro_{-}, \label{bent} \end{equation} where \begin{equation*} \begin{split} &\ro_{+}=\frac{1}{3}\,\left[P_{\vf_{1}\otimes \,\vf_{2}}+P_{\vf_{2}\otimes \,\vf_{3}}+P_{\vf_{3}\otimes \,\vf_{1}}\right]\\ &\ro_{-}=\frac{1}{3}\,\left[P_{\vf_{2}\otimes \,\vf_{1}}+P_{\vf_{3}\otimes \,\vf_{2}}+P_{\vf_{1}\otimes \,\vf_{3}}\right] \end{split} \end{equation*} and $0\leq \al\leq 5$. It is known that the states (\ref{bent}) are separable for $2\leq \al\leq 3$, bound entangled for $3<\al \leq 4$ and free entangled for $4<\al\leq 5$. One can check that the marginals of $\ro_{\al}$ are maximally mixed, but the correlation matrix $T$ is not diagonal. In fact $T$ equals \begin{equation*} \frac{1}{7}\, \begin{pmatrix}3&0&0&0&0&0&0&0\\[1mm] 0&-3&0&0&0&0&0&0\\[1mm] 0&0&-\frac{3}{4}&0&0&0&0&\frac{3\sqrt{3}}{4}(2\al-5)\\[1mm] 0&0&0&3&0&0&0&0\\[1mm] 0&0&0&0&-3&0&0&0\\[1mm] 0&0&0&0&0&3&0&0\\[1mm] 0&0&0&0&0&0&-3&0\\[1mm] 0&0&-\frac{3\sqrt{3}}{4}(2\al-5)&0&0&0&0&-\frac{3}{4} \end{pmatrix}. \end{equation*} In this case we have to use directly the formula (\ref{dif}). 
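The family (\ref{bent}) is straightforward to construct numerically. The following sketch (our own illustration) checks that the marginals are maximally mixed and evaluates the normalized trace-norm disturbance at the standard-basis measurement across the whole range of $\al$:

```python
import numpy as np

d = 3
E = np.eye(d)
psi0 = E.reshape(-1) / np.sqrt(d)
bell = np.outer(psi0, psi0.conj())

def P(i, j):
    """Projector onto phi_i (x) phi_j (zero-based indices)."""
    v = np.kron(E[i], E[j])
    return np.outer(v, v)

rho_plus = (P(0, 1) + P(1, 2) + P(2, 0)) / 3
rho_minus = (P(1, 0) + P(2, 1) + P(0, 2)) / 3

def rho_alpha(al):
    return 2 / 7 * bell + al / 7 * rho_plus + (5 - al) / 7 * rho_minus

def measure_std(rho):
    out = np.zeros_like(rho)
    for e in E:
        PA = np.kron(np.outer(e, e), np.eye(d))
        out += PA @ rho @ PA
    return out

for al in np.linspace(0, 5, 11):
    rho = rho_alpha(al)
    # marginal of B is maximally mixed (the A marginal works the same way)
    assert np.allclose(rho.reshape(d, d, d, d).trace(axis1=1, axis2=3), E / d)
    # normalized distance to P_A(rho) at the standard basis
    D1 = 0.75 * np.sum(np.linalg.svd(rho - measure_std(rho), compute_uv=False))
    assert abs(D1 - 2 / 7) < 1e-12
```

Since $\ro_{\pm}$ are diagonal in the product basis, they are left invariant by the standard-basis measurement, which is why the disturbance is independent of $\al$.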
As in the previous example, numerical computations show that it is enough to consider $Q(M_{0})$, which is equal to \begin{equation} Q(M_{0})=\frac{4}{441}\,\begin{pmatrix}2&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 1&0&0&0&2&0&0&0&1\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 1&0&0&0&1&0&0&0&2\end{pmatrix}. \end{equation} So we have \begin{equation} D_{1}(\ro_{\al})=\frac{3}{4}\,\tr\sqrt{Q(M_{0})}=\frac{2}{7} \end{equation} and we see that the quantum discord does not discriminate between separable, bound entangled and free entangled states. On the other hand $D_{1}(\ro_{\al})>N(\ro_{\al})$, where \begin{equation*} N(\ro_{\al})=\W{\frac{\DS 1}{\DS 14}\, (G(\al)-5)}{\al\in [0,1]\cup [4,5]}{\:0}{\al\in (1,4)}, \end{equation*} with \begin{equation*} G(\al)=\sqrt{41-20\al+4\al^{2}}. \end{equation*} We can also simply compute the Hilbert-Schmidt quantum discord. It is equal to \begin{equation*} D_{2}(\ro_{\al})=\frac{3}{2}\,\tr\, Q(M_{0})=\frac{4}{49}, \end{equation*} so \begin{equation*} D_{1}(\ro_{\al})=\sqrt{D_{2}(\ro_{\al})}. \end{equation*} To the authors' best knowledge, the above results are the first exact results giving the quantum discord of bound entangled states. The earlier known result concerns the Hilbert-Schmidt distance quantum discord and provides only lower and upper bounds for $D_{2}(\ro_{\al})$ \cite{RP}. In particular it was shown that \begin{equation} D_{2}(\ro_{\al})\geq \W{\:\frac{\DS 4}{\DS 49}}{\al\in [0,\al_{-}]\cup [\al_{+},5]}{\frac{\DS 1}{\DS 49}\,(9-5\al+\al^{2})}{\al\in (\al_{-},\al_{+})}\label{lb} \end{equation} and the bound (\ref{lb}) is consistent with the obtained value of $D_{1}(\ro_{\al})$. \par The family of states (\ref{bent}) is interesting also for another reason. 
When we have a non-diagonal correlation matrix $T$, we can always apply to it the singular value decomposition \begin{equation*} T=V\,T_{0}\,W, \end{equation*} where $V,\, W$ are orthogonal matrices and \begin{equation*} T_{0}=\mr{diag}\,(s_{1},s_{2},\ldots,s_{d^{2}-1}), \end{equation*} with matrix elements $s_{k}$ given by the singular values of $T$. In the case of qubits ($d=2$), this procedure always leads to locally equivalent states, so we can restrict the analysis to the case of a diagonal correlation matrix. For qudits this is generally not true, and the states (\ref{bent}) are explicit counterexamples. To show this, we notice that the singular values of the correlation matrix of the states (\ref{bent}) are given by \begin{equation} s_{1}=\cdots = s_{6}=\frac{3}{7},\, s_{7}=s_{8}=\frac{3}{28}\sqrt{1+3(2\al-5)^{2}}.\label{sv} \end{equation} Then we take $T_{0}$ defined by the sequence (\ref{sv}) and try to construct a state using the formula (\ref{9state}), but we end up with a matrix which is not positive semi-definite. Thus there is no equivalent description of the family (\ref{bent}) by states with diagonal correlation matrices. \section{Conclusions} We have studied the behaviour of the geometric discord based on the trace norm in the system of two qutrits. Analysis of such a system is the first non-trivial step in extending the two-qubit theory of quantum correlations to the general case of $d$-level systems. We have computed the geometric discord for some interesting families of two-qutrit states, such as maximally entangled Bell states, Werner states and bound entangled states. Our analysis of qutrit systems, in which entanglement can be bound or free, shows, even more clearly than in the qubit case, that discord and entanglement describe different aspects of quantum correlations in composite systems.
\section*{Appendix \thesection\protect\indent \parbox[t]{11.715cm} {#1}} \addcontentsline{toc}{section}{Appendix \thesection\ \ \ #1} } \newcommand{\complex}{{\mathbb C}} \newcommand{\zed}{{\mathbb Z}} \newcommand{\nat}{{\mathbb N}} \newcommand{\real}{{\mathbb R}} \newcommand{{\mathbb E}}{{\mathbb E}} \newcommand{\rat}{{\mathbb Q}} \newcommand{\mat}{{\mathbb M}} \newcommand{\id}{{1\!\!1}} \def\Dirac{{D\!\!\!\!/\,}} \def\semiprod{{\,\rhd\!\!\!<\,}} \def{\mathcal A}{{\mathcal A}} \def{\mathcal H}{{\mathcal H}} \def\otimes_{ A}{\otimes_{ A}} \def\otimes_{\complexs}{\otimes_{\complexs}} \def\otimes_{reals}{\otimes_{reals}} \newif\ifold \oldtrue \def\oldfalse{\oldfalse} \font\mathsm=cmmi9 \def\nonumber{\nonumber} \newcommand{\tr}[1]{\:{\rm tr}\,#1} \newcommand{\Tr}[1]{\:{\rm Tr}\,#1} \newcommand{\sdet}[1]{\:{\rm SDet}\,#1} \def{\,\rm e}\,{{\,\rm e}\,} \newcommand{\rf}[1]{(\ref{#1})} \newcommand{\nonumber \\*}{\nonumber \\*} \newcommand{{\vec \nabla}}{{\vec \nabla}} \newcommand{{\vec x}}{{\vec x}} \hyphenation{pre-print} \hyphenation{pre-prints} \hyphenation{di-men-sion-al} \hyphenation{di-men-sion-al-ly} \def\begin{equation}{\begin{equation}} \def\end{equation}{\end{equation}} \def\begin{eqnarray}{\begin{eqnarray}} \def\end{eqnarray}{\end{eqnarray}} \def\begin{displaymath}{\begin{displaymath}} \def\end{displaymath}{\end{displaymath}} \def{\rm const}{{\rm const}} \def\sigma{\sigma} \def\varphi_{\rm cl}{\varphi_{\rm cl}} \def\left\langle{\left\langle} \def\right\rancle{\right\rancle} \def\partial{\partial} \defS_{\rm eff}{S_{\rm eff}} \defF{F} \newcommand{\fr}[2]{{\textstyle {#1 \over #2}}} \newcommand{\fintf}[1]{\int \frac{d^{2d} #1}{(2\pi)^{2d}}} \newcommand{\fintt}[1]{\int \frac{d^2 #1}{(2\pi)^2}} \def\rule{14cm}{0pt}\and{\rule{14cm}{0pt}\and} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\bf \mathcal M}{\bf \mathcal M} \newcommand{\gamma}{\gamma} \newcommand{\Gamma}{\Gamma} \newcommand{\rho}{\rho} 
\newcommand{\sigma}{\sigma} \newcommand{\Sigma}{\Sigma} \newcommand{\partial}{\partial} \newcommand{\tilde{\Sigma}}{\tilde{\Sigma}} \newcommand{\hat{r}}{\hat{r}} \newcommand{\hat{q}}{\hat{q}} \newcommand{\hat{K}}{\hat{K}} \newcommand{\hat{\omega}}{\hat{\omega}} \newcommand{\partial}{\partial} \newcommand{ e}{ e} \newcommand{{\mathcal O}}{{\mathcal O}} \newcommand{\zeta}{\zeta} \newcommand{\uparrow}{\uparrow} \newcommand{\downarrow}{\downarrow} \newcommand{{\mathcal Z}}{{\mathcal Z}} \makeatletter \newdimen\normalarrayskip \newdimen\minarrayskip \normalarrayskip\baselineskip \minarrayskip\jot \newif\ifold \oldtrue \def\oldfalse{\oldfalse} \def\arraymode{\ifold\relax\else\displaystyle\fi} \def\eqnumphantom{\phantom{(\theequation)}} \def\@arrayskip{\ifold\baselineskip\zeta@\lineskip\zeta@ \else \baselineskip\minarrayskip\lineskip2\minarrayskip\fi} \def\@arrayclassz{\ifcase \@lastchclass \@acolampacol \or \@ampacol \or \or \or \@addamp \or \@acolampacol \or \@firstampfalse \@acol \fi \edef\@preamble{\@preamble \ifcase \@chnum \hfil$\relax\arraymode\@sharp$\hfil \or $\relax\arraymode\@sharp$\hfil \or \hfil$\relax\arraymode\@sharp$\fi}} \def\@array[#1]#2{\setbox\@arstrutbox=\hbox{\vrule height\arraystretch \ht\strutbox depth\arraystretch \dp\strutbox width\zeta@}\@mkpream{#2}\edef\@preamble{\halign \noexpand\@halignto \bgroup \tabskip\zeta@ \@arstrut \@preamble \tabskip\zeta@ \cr}% \let\@startpbox\@@startpbox \let\@endpbox\@@endpbox \if #1t\vtop \else \if#1b\vbox \else \vcenter \fi\fi \bgroup \let\par\relax \let\@sharp##\let\protect\relax \@arrayskip\@preamble} \makeatother \allowdisplaybreaks \begin{document} \begin{titlepage} \begin{flushright} \baselineskip=12pt HWM--06--5\\ EMPG--06--02\\ hep--th/0602036\\ \hfill{ }\\ February 2006 \end{flushright} \begin{center} \vspace{2cm} \baselineskip=24pt {\Large\bf Noncommutative Field Theory \\ on Homogeneous Gravitational Waves} \baselineskip=14pt \vspace{1cm} {\bf Sam Halliday} and {\bf Richard J. 
Szabo} \\[4mm] {\it Department of Mathematics}\\ and\\ {\it Maxwell Institute for Mathematical Sciences\\ Heriot-Watt University\\ Colin Maclaurin Building, Riccarton, Edinburgh EH14 4AS, U.K.} \\{\tt [email protected]} , {\tt [email protected]} \\[40mm] \end{center} \begin{abstract} \baselineskip=12pt We describe an algebraic approach to the time-dependent noncommutative geometry of a six-dimensional Cahen-Wallach pp-wave string background supported by a constant Neveu-Schwarz flux, and develop a general formalism to construct and analyse quantum field theories defined thereon. Various star-products are derived in closed explicit form and the Hopf algebra of twisted isometries of the plane wave is constructed. Scalar field theories are defined using explicit forms of derivative operators, traces and noncommutative frame fields for the geometry, and various physical features are described. Noncommutative worldvolume field theories of D-branes in the pp-wave background are also constructed. \end{abstract} \end{titlepage} \setcounter{page}{2} \newpage \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \setcounter{equation}{0}\section{Introduction and Summary\label{Intro}} The general construction and analysis of noncommutative gauge theories on curved spacetimes is one of the most important outstanding problems in the applications of noncommutative geometry to string theory. These non-local field theories arise naturally as certain decoupling limits of open string dynamics on D-branes in curved superstring backgrounds in the presence of a non-constant background Neveu-Schwarz $B$-field. On a generic Poisson manifold $M$, they are formulated using the Kontsevich star-product~\cite{Kont1} which is linked to a topological string theory known as the Poisson sigma-model~\cite{CattFel1}. Under suitable conditions, the quantization of D-branes in the Poisson sigma-model which wrap coisotropic submanifolds of $M$, i.e. 
worldvolumes defined by first-class constraints, may be consistently carried out and related to the deformation quantization in the induced Poisson bracket~\cite{CattFel2}. Branes defined by second-class constraints may also be treated by quantizing Dirac brackets on the worldvolumes~\cite{CFal1}. However, in other concrete string theory settings, most studies of noncommutative gauge theories on curved D-branes have been carried out only within the context of the AdS/CFT correspondence by constructing the branes as solutions in the dual supergravity description of the gauge theory (see for example~\cite{Cai1,Cai2,HashSethi1,HashTh1,ASY1}). It is important to understand how to describe the classical solutions and quantization of these models directly at the field theoretic level in order to better understand to what extent the noncommutative field theories capture the non-local aspects of string theory and quantum gravity, and also to be able to extend the descriptions to more general situations which are not covered by the AdS/CFT correspondence. In this paper we will investigate worldvolume deformations in the simple example of the Hpp-wave background $\NW_6$~\cite{Meessen1}, the six-dimensional Cahen-Wallach lorentzian symmetric space $\CW_6$~\cite{CW1} supported by a constant null NS--NS background three-form flux. The spacetime $\NW_6$ lifts to an exact background of ten-dimensional superstring theory by taking the product with an exact four-dimensional background, but we will not write this explicitly. By projecting the transverse space of $\NW_6$ onto a plane one obtains the four-dimensional Nappi-Witten spacetime $\NW_4$~\cite{NW1}, and occasionally our discussion will pertain to this latter exact string background. Our techniques are presented in a manner which is applicable to a wider class of homogeneous pp-waves supported by a constant Neveu-Schwarz flux. 
Open string dynamics on this background is particularly interesting because it has the potential to display a time-dependent noncommutative geometry~\cite{DN1,HashSethi1}, and hence the noncommutative field theories built on $\NW_6$ can serve as interesting toy models for string cosmology which can be treated for the most part as ordinary field theories. However, this point is rather subtle for the present geometry~\cite{DN1,HashTh2}. A particular gauge choice which leads to a time-dependent noncommutativity parameter breaks conformal invariance of the worldsheet sigma-model, i.e. it does not satisfy the Born-Infeld field equations, while a conformally invariant background yields a non-constant but time-independent noncommutativity. In this paper we will partially clarify this issue. The more complicated noncommutative geometry that we find contains both the transverse space dependent noncommutativity between transverse and light-cone position coordinates of the Hashimoto-Thomas model~\cite{HashTh2} and the asymptotic time-dependent noncommutativity between transverse space coordinates of the Dolan-Nappi model~\cite{DN1}. The background $\NW_6$ arises as the Penrose-G\"uven limit~\cite{Penrose1,Guven1} of an $\AdS_3\times\S^3$ background~\cite{BF-OFP1}. While this limit is a useful tool for understanding various aspects of string dynamics, it is not in general suitable for describing the quantum geometry of embedded D-submanifolds~\cite{HSz1}. In the following we will resort to a more direct quantization of the spacetime $\NW_6$ and its D-submanifolds. We tackle the problem in a purely algebraic way by developing the noncommutative geometry of the universal enveloping algebra of the twisted Heisenberg algebra, whose Lie group $\mathcal{N}$ coincides with the homogeneous spacetime $\CW_6$ in question. 
While our algebraic approach has the advantage of yielding very explicit constructions of noncommutative field theories in these settings, it also has several limitations. It does not describe the full quantization of the curved spacetime $\NW_6$, but rather only the semi-classical limit of small NS--NS flux $\theta$ in which $\CW_6$ approaches flat six-dimensional Minkowski space. This is equivalent to the limit of small light-cone time $x^+$ for the open string dynamics. In this limit we can apply the Kontsevich formula to quantize the pertinent Poisson geometry, and hence define noncommutative worldvolume field theories of D-branes. Attempting to quantize the full curved geometry (having $\theta\gg0$) would bring us deep into the stringy regime~\cite{HoYeh1} wherein a field theoretic analysis would not be possible. The worldvolume deformations in this case are described by nonassociative algebras and variants of quantum group algebras~\cite{LorCorn1,ARS1}, and there is no natural notion of quantization for such geometries. We will nonetheless emphasize how the effects of curvature manifest themselves in this semi-classical limit. The spacetime $\NW_6$ is wrapped by non-symmetric D5-branes which can be obtained, as solutions of Type~II supergravity, from the Penrose-G\"uven limit of spacetime-filling D5-branes in $\AdS_3\times\S^3$~\cite{KNSanjay1}. This paper takes a very detailed look at the first steps towards the construction and analysis of noncommutative worldvolume field theories on these branes. While we deal explicitly only with the case of scalar field theory in detail, leaving the more subtle construction of noncommutative gauge theory for future work, our results provide all the necessary ingredients for analysing generic field theories in these settings. We will also examine the problem of quantizing regularly embedded D-submanifolds in $\NW_6$. 
The symmetric D-branes wrapping twisted conjugacy classes of the Lie group $\mathcal{N}$ were classified in~\cite{SF-OF1}. Their quantization was analysed in~\cite{HSz1} and it was found that, in the semi-classical regime, only the untwisted euclidean D3-branes support a noncommutative worldvolume geometry. We study these D3-branes as a special case of our more general constructions and find exact agreement with the predictions of the boundary conformal field theory analysis~\cite{DAK1}. We also find that the present technique captures the noncommutative worldvolume geometry in a much more natural and tractable way than the foliation of the group $\mathcal{N}$ by quantized coadjoint orbits does~\cite{HSz1}. Our analysis is not restricted to symmetric D-branes and can be applied to other D-submanifolds of the spacetime $\NW_6$ as well. The organisation of the remainder of this paper is as follows. In Section~\ref{TTHA} we describe the twisted Heisenberg algebra, its geometry, and the manner in which it may be quantized in the semi-classical limit. In Section~\ref{StarProds} we construct star-products which are equivalent to the Kontsevich product for the pertinent Poisson geometry. These products are much simpler and more tractable than the star-product on $\NW_6$ which was constructed in~\cite{HSz1} through the noncommutative foliation of $\NW_6$ by D3-branes corresponding to quantized coadjoint orbits. Throughout this paper we will work with three natural star-products which we construct explicitly in closed form. Two of them are canonically related to coordinatizations of the classical pp-wave geometry, while the third one is more natural from the algebraic point of view. We will derive and compare our later results in all three of these star-product deformations. 
In Section~\ref{WeylSystems} we work out the corresponding generalized Weyl systems~\cite{ALZ1} for these star-products, and use them in Section~\ref{Coprod} to construct the Hopf algebras of twisted isometries~\cite{CPT1,CKNT1,Wess1} of the noncommutative plane wave geometry. In Section~\ref{Derivatives} we use the structure of this Hopf algebra to build derivative operators. In contrast to more conventional approaches~\cite{ConnesBook}, these operators are not derivations of the star-products but are defined so that they are consistent with the underlying noncommutative algebra of functions. This ensures that the quantum group isometries, which carry the nontrivial curvature content of the spacetime, act consistently on the noncommutative geometry. In Section~\ref{Integrals} we define integration of fields through a relatively broad class of consistent traces on the noncommutative algebra of functions. With these general constructions at hand, we proceed in Section~\ref{FieldTheory} to analyse as a simple starting example the case of free scalar field theory on the noncommutative spacetime $\NW_6$. The analysis reveals the flat space limiting procedure in a fairly drastic way. To get around this, we introduce noncommutative frame fields which define derivations of the star-products~\cite{BehrSyk1,HoMiao1}. Some potential physical applications in the context of string dynamics in $\NW_6$~\cite{DAK1,DAK2,BDAKZ1,CFS1,PK1} are also briefly addressed. Finally, as another application we consider in Section~\ref{D3Branes} the construction of noncommutative worldvolume field theories of D-branes in $\NW_6$ using our general formalism and compare with the quantization of symmetric D-branes which was carried out in~\cite{HSz1}. 
\setcounter{equation}{0}\section{Geometry of the Twisted Heisenberg Algebra\label{TTHA}} In this section we will recall the algebraic definition~\cite{SF-OF1} of the six-dimensional gravitational wave $\NW_6$ of Cahen-Wallach type and describe the manner in which its geometry will be quantized in the subsequent sections. \subsection{Definitions \label{Defs}} The spacetime NW$_6$ is defined as the group manifold of the universal central extension of the subgroup $\mathcal{S}:={\rm SO}(2)\ltimes\real^4$ of the four-dimensional euclidean group ${\rm ISO}(4)={\rm SO}(4)\ltimes\real^4$. The corresponding simply connected group $\mathcal N$ is homeomorphic to six-dimensional Minkowski space ${\mathbb E}^{1,5}$. Its non-semisimple Lie algebra $\mathfrak n$ is generated by elements ${\sf J}$, ${\sf T}$ and ${\sf P}^i_\pm$, $i=1,2$ obeying the non-vanishing commutation relations \begin{eqnarray} \left[{\sf P}^i_+\,,\,{\sf P}^j_-\right]&=&2{\,{\rm i}\,}\delta^{ij}~{\sf T} \ , \nonumber\\ \left[{\sf J}\,,\,{\sf P}^i_\pm\right]&=&\pm{\,{\rm i}\,}{\sf P}^i_\pm \ . \label{NW4algdef}\end{eqnarray} This is just the five-dimensional Heisenberg algebra extended by an outer automorphism which rotates the noncommuting coordinates. The twisted Heisenberg algebra may be regarded as defining the harmonic oscillator algebra of a particle moving in two dimensions, with the additional generator ${\sf J}$ playing the role of the number operator (or equivalently the oscillator hamiltonian). It is this twisting that will lead to a noncommutative geometry that deviates from the usual Moyal noncommutativity generated by the Heisenberg algebra (See~\cite{KS1,DougNek1,Sz1} for reviews in the present context). On the other hand, $\mathfrak{n}$ is a solvable algebra whose properties are very tractable. 
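The relations (\ref{NW4algdef}) admit a faithful non-unitary matrix realization, which makes them easy to check mechanically. The following sketch is an illustration added here; the particular $4\times4$ realization is our own choice and is not taken from the text.

```python
import numpy as np

def E(i, j, n=4):
    """Matrix unit E_ij (1-indexed)."""
    m = np.zeros((n, n), dtype=complex)
    m[i - 1, j - 1] = 1.0
    return m

# An illustrative faithful (non-unitary) 4x4 realization of the
# twisted Heisenberg algebra n -- an assumption made for this check.
J = 1j*(E(2, 2) + E(3, 3))
T = 1j*E(1, 4)
Pp = [E(2, 4), E(3, 4)]      # P^i_+
Pm = [2*E(1, 2), 2*E(1, 3)]  # P^i_-

def comm(A, B):
    return A @ B - B @ A

# [P^i_+, P^j_-] = 2i delta^{ij} T ; [J, P^i_+-] = +-i P^i_+- ; T central
for i in range(2):
    for j in range(2):
        assert np.allclose(comm(Pp[i], Pm[j]), 2j*(i == j)*T)
    assert np.allclose(comm(J, Pp[i]), 1j*Pp[i])
    assert np.allclose(comm(J, Pm[i]), -1j*Pm[i])
assert all(np.allclose(comm(T, X), 0) for X in [J] + Pp + Pm)
print("twisted Heisenberg relations (NW4algdef) verified")
```

The upper-triangular structure makes the translation generators nilpotent while ${\sf J}$ acts semisimply, mirroring the solvable but non-nilpotent character of $\mathfrak n$ noted above.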
The subgroup $\mathcal{N}_0$ generated by ${\sf P}^1_\pm$, ${\sf J}$, ${\sf T}$ is called the Nappi-Witten group and its four-dimensional group manifold is the Nappi-Witten spacetime $\NW_4$~\cite{NW1}. The most general invariant, non-degenerate symmetric bilinear form $\langle-,-\rangle:\mathfrak{n}\times\mathfrak{n}\to\real$ is defined by the non-vanishing values~\cite{NW1} \begin{eqnarray} \left\langle{\sf P}^i_+\,,\,{\sf P}^j_-\right\rangle&=&2\,\delta^{ij} \ , \nonumber\\ \left\langle{\sf J}\,,\,{\sf T}\right\rangle&=&1 \ , \nonumber\\ \left\langle{\sf J}\,,\,{\sf J}\right\rangle&=&b \ . \label{NW4innerprod}\end{eqnarray} The arbitrary parameter $b\in\real$ can be set to zero by a Lie algebra automorphism of $\mathfrak{n}$. This inner product has Minkowski signature, so that the group manifold of $\mathcal N$ possesses a homogeneous, bi-invariant lorentzian metric defined by the pairing of the Cartan-Maurer left-invariant $\mathfrak n$-valued one-forms $g^{-1}~{\rm d} g$ for $g\in\mathcal N$ as \begin{eqnarray} {\rm d} s^2=\left\langle g^{-1}~{\rm d} g\,,\,g^{-1}~{\rm d} g\right\rangle \ . \label{NW4CM}\end{eqnarray} A generic group element $g\in\mathcal N$ may be parametrized as \begin{eqnarray} g(u,v,{\mbf a},\overline{{\mbf a}}\,)={\,\rm e}\,^{a_i\,{\sf P}^i_++\overline{a}_i \,{\sf P}^i_-}~{\,\rm e}\,^{\theta\,u\,{\sf J}}~{\,\rm e}\,^{\theta^{-1}\,v\,{\sf T}} \label{NW4coords}\end{eqnarray} with $u,v,\theta\in\real$ and ${\mbf a}=(a_1,a_2)\in\complex^2$. In these global coordinates, the metric (\ref{NW4CM}) reads \begin{eqnarray} {\rm d} s^2=2~{\rm d} u~{\rm d} v+{\rm d}{\mbf a}\cdot{\rm d}\overline{{\mbf a}}+2{\,{\rm i}\,}\theta\,\left({\mbf a}\cdot {\rm d}\overline{{\mbf a}}-\overline{{\mbf a}}\cdot{\rm d}{\mbf a}\right)~{\rm d} u \ . 
\label{NW4metricNW}\end{eqnarray} The metric (\ref{NW4metricNW}) assumes the standard form of the plane wave metric of a conformally flat, indecomposable Cahen-Wallach lorentzian symmetric spacetime CW$_6$ in six dimensions~\cite{CW1} upon introduction of Brinkman coordinates~\cite{Brinkman1} $(x^+,x^-,{\mbf z})$ defined by rotating the transverse space at a Larmor frequency as $u=x^+$, $v=x^-$ and ${\mbf a}={\,\rm e}\,^{{\,{\rm i}\,}\theta\,x^+/2}\,{\mbf z}$. In these coordinates the metric assumes the stationary form \begin{eqnarray} {\rm d} s^2=2~{\rm d} x^+~{\rm d} x^-+{\rm d}{\mbf z}\cdot{\rm d}\overline{{\mbf z}} -\mbox{$\frac14$}\,\theta^2\,|{\mbf z}|^2~ \left({\rm d} x^+\right)^2 \ , \label{NW4metricBrink}\end{eqnarray} revealing the pp-wave nature of the geometry. Note that on the null hyperplanes of constant $u=x^+$, the geometry becomes that of flat four-dimensional euclidean space ${\mathbb E}^4$. This is the geometry appropriate to the Heisenberg subgroup of $\mathcal{N}$, and is what is expected in the Moyal limit when the effects of the extra generator ${\sf J}$ are turned off. The spacetime NW$_6$ is further supported by a Neveu-Schwarz two-form field $B$ of constant field strength \begin{eqnarray} H&=&-\mbox{$\frac13$}\,\bigl\langle g^{-1}~{\rm d} g\,,\,{\rm d} \left(g^{-1}~{\rm d} g\right)\bigl\rangle {}~=~2{\,{\rm i}\,}\theta~{\rm d} x^+\wedge{\rm d}{\mbf z}^\top\wedge{\rm d}\overline{{\mbf z}} {}~=~{\rm d} B \ , \nonumber\\ B&=&-\mbox{$\frac12$}\,\bigl\langle g^{-1}~{\rm d} g\,,\, \frac{\id+{\rm Ad}_g}{\id-{\rm Ad}_g}\,g^{-1}~{\rm d} g\bigl\rangle~=~ 2{\,{\rm i}\,}\theta\,x^+~{\rm d}{\mbf z}^\top\wedge{\rm d}\overline{{\mbf z}} \ , \label{NS2formBrink}\end{eqnarray} defined to be non-vanishing only on vectors tangent to the conjugacy class containing $g\in\mathcal{N}$~\cite{AlekSch1}. It is the presence of this $B$-field that induces time-dependent noncommutativity of the string background in the presence of D-branes. 
Because its flux is constant, the noncommutative dynamics in certain kinematical regimes on this space can still be formulated exactly, just like on other symmetric curved noncommutative spaces (See~\cite{Schomrev} for a review of these constructions in the case of compact group manifolds). \subsection{Quantization\label{NWQuant}} We will now begin working our way towards describing how the worldvolumes of D-branes in the spacetime $\NW_6$ are deformed by the non-trivial $B$-field background. The Seiberg-Witten bi-vector~\cite{SW1} induced by the Neveu-Schwarz background (\ref{NS2formBrink}) and the pp-wave metric $G$ given by (\ref{NW4metricBrink}) is \begin{eqnarray} \Theta=-(G+B)^{-1}\,B\,(G-B) \ . \label{SWTheta}\end{eqnarray} Let us introduce the one-form \begin{eqnarray} \Lambda:=-{\,{\rm i}\,}\left(\theta^{-1}\,x_0^-+\theta\,x^+\right)\,\left( {\mbf z}\cdot{\rm d}\overline{{\mbf z}}-\overline{{\mbf z}}\cdot{\rm d}{\mbf z} \right) \label{Lambdadef}\end{eqnarray} on the null hypersurfaces of constant $x^-=x_0^-$, and compute the corresponding two-form gauge transformation of the $B$-field in (\ref{NS2formBrink}) to get \begin{eqnarray} B~\longmapsto~B+{\rm d}\Lambda=-{\,{\rm i}\,}\theta~{\rm d} x^+\wedge\left( {\mbf z}\cdot{\rm d}\overline{{\mbf z}}-\overline{{\mbf z}}\cdot{\rm d}{\mbf z} \right)+2{\,{\rm i}\,}\theta\,x_0^-~{\rm d}\overline{{\mbf z}}{}^{\,\top} \wedge{\rm d}{\mbf z} \ . \label{B6gaugeequiv}\end{eqnarray} The Seiberg-Witten bi-vector in this gauge is given by~\cite{HSz1} \begin{eqnarray} \Theta=-\mbox{$\frac{2{\,{\rm i}\,}\theta}{\theta^2+\left(x_0^-\right)^2}$}\, \left[\theta^2~\partial_-\wedge\left({\mbf z}\cdot{\mbf\partial}- \overline{{\mbf z}}\cdot\overline{{\mbf\partial}}\,\right)+4x_0^-~ {\mbf\partial}^\top\wedge\overline{{\mbf\partial}}\,\right] \ , \label{ThetaLambda}\end{eqnarray} where $\partial_\pm:=\frac\partial{\partial x^\pm}$ and ${\mbf\partial}=(\partial^1,\partial^2):=(\frac\partial{\partial z_1},\frac\partial{\partial z_2})$. 
Since (\ref{ThetaLambda}) is degenerate on the whole $\NW_6$ spacetime, it does not define a symplectic structure. However, one easily checks that it does define a Poisson structure, i.e. $\Theta$ is a Poisson bi-vector~\cite{HSz1}. In this gauge one can show that a consistent solution to the Born-Infeld equations of motion on a non-symmetric spacetime-filling D5-brane wrapping $\NW_6$ has vanishing ${\rm U}(1)$ gauge field flux $F=0$~\cite{HashTh2}. In particular, at the special value $x_0^-=\theta$ and with the rescaling ${\mbf z}\to\sqrt{2/\theta\,\tau}~{\mbf z}$, the corresponding open string metric~\cite{SW1} $G_{\rm open}=G-B\,G^{-1}\,B$ becomes that of $\CW_6$ in global coordinates (\ref{NW4metricNW})~\cite{HSz1}, while the non-vanishing Poisson brackets corresponding to (\ref{ThetaLambda}) read \begin{eqnarray} \left\{z_i\,,\,\overline{z}_j\right\}&=&2{\,{\rm i}\,}\theta\,\tau~\delta_{ij} \ , \nonumber\\ \left\{x^-\,,\,z_i\right\}&=&-{\,{\rm i}\,}\theta\,z_i \ , \nonumber\\ \left\{x^-\,,\,\overline{z}_i\right\}&=&{\,{\rm i}\,}\theta\, \overline{z}_i \label{Poissonspecial}\end{eqnarray} for $i,j=1,2$. The Poisson algebra thereby coincides with the Lie algebra $\mathfrak{n}$ in this case and the metric on the branes with the standard curved geometry of the pp-wave. In the semi-classical flat space limit $\theta\to0$, the quantization of the brackets (\ref{Poissonspecial}) thereby yields a noncommutative worldvolume geometry on D5-branes wrapping $\NW_6$ which can be associated with a quantization of $\mathfrak{n}$ (or more precisely of its dual $\mathfrak{n}^{\vee\,}$). In this limit, the corresponding quantization of $\NW_6$ is thus given by the associative Kontsevich star-product~\cite{Kont1}. Henceforth, with a slight abuse of notation, we will denote the central coordinate $\tau$ as the plane wave time coordinate $x^+$. Our semi-classical quantization will then be valid in the small time limit $x^+\to0$. 
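As a consistency check, added here for illustration only, the brackets (\ref{Poissonspecial}) define a linear Poisson structure whose Jacobi identity can be verified symbolically; the variable names below are our own.

```python
import sympy as sp

theta, tau = sp.symbols('theta tau')      # tau plays the role of x^+
xm, z1, z2, zb1, zb2 = sp.symbols('xm z1 z2 zb1 zb2')
coords = [xm, tau, z1, z2, zb1, zb2]

# Antisymmetric Poisson tensor read off from (Poissonspecial);
# tau is central, so its row and column vanish.
Th = sp.zeros(6, 6)
def setbr(a, b, val):
    i, j = coords.index(a), coords.index(b)
    Th[i, j], Th[j, i] = val, -val

setbr(z1, zb1, 2*sp.I*theta*tau)
setbr(z2, zb2, 2*sp.I*theta*tau)
for zi, zbi in [(z1, zb1), (z2, zb2)]:
    setbr(xm, zi, -sp.I*theta*zi)
    setbr(xm, zbi, sp.I*theta*zbi)

def pb(f, g):
    """Poisson bracket {f, g} induced by the tensor Th."""
    return sp.expand(sum(Th[i, j]*sp.diff(f, coords[i])*sp.diff(g, coords[j])
                         for i in range(6) for j in range(6)))

# Jacobi identity on all coordinate triples: since the tensor is linear in
# the coordinates, this is equivalent to the Jacobi identity of n itself.
for x in coords:
    for y in coords:
        for w in coords:
            jac = pb(x, pb(y, w)) + pb(y, pb(w, x)) + pb(w, pb(x, y))
            assert sp.expand(jac) == 0
print("brackets (Poissonspecial) define a Poisson structure")
```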
Our starting point in describing the noncommutative geometry of $\NW_6$ will therefore be at the algebraic level. We will consider the deformation quantization of the dual $\mathfrak{n}^{\vee\,}$ to the Lie algebra $\mathfrak{n}$. Naively, one may think that the easiest way to carry this out is to compute star-products on the pp-wave by taking the Penrose limits of the standard ones on $\S^3$ and $\AdS_3$ (or equivalently by contracting the standard quantizations of the Lie algebras ${\rm su}(2)$ and ${\rm sl}(2,\real)$). However, some quick calculations show that the induced star-products obtained in this way are divergent in the infinite volume limit, and the reason why is simple. While the standard In\"on\"u-Wigner contractions hold at the level of the Lie algebras~\cite{SF-OF1}, they need not necessarily map the corresponding universal enveloping algebras, on which the quantizations are performed. This is connected to the phenomenon that twisted conjugacy classes of branes are not necessarily related by the Penrose-G\"uven limit~\cite{HSz1}. We must therefore resort to a more direct approach to quantizing the spacetime $\NW_6$. For notational ease, we will write the algebra $\mathfrak{n}$ in the generic form \begin{eqnarray} [{\sf X}_a,{\sf X}_b]={\,{\rm i}\,}\theta\,C_{ab}^{~~c}\,{\sf X}_c \ , \label{n6genform}\end{eqnarray} where $({\sf X}_a):=\theta\,({\sf J},{\sf T},{\sf P}^i_\pm)$ are the generators of $\mathfrak{n}$ and the structure constants $C_{ab}^{~~c}$ can be read off from (\ref{NW4algdef}). The algebra (\ref{n6genform}) can be regarded as a formal deformation quantization of the Kirillov-Kostant Poisson bracket on $\mathfrak{n}^{\vee\,}$ in the standard coadjoint orbit method. Let us identify $\mathfrak{n}^{\vee\,}$ as the vector space $\real^6$ with basis ${\sf X}_a^{\vee\,}:=\langle{\sf X}^{~}_a,-\rangle:{\mathfrak n}\to\real$ dual to the ${\sf X}_a^{~}$.
In the algebra of polynomial functions $\complex(\mathfrak{n}^{\vee\,})=\complex(\real^6)$, we may then identify the generators ${\sf X}_a$ themselves with the coordinate functions \begin{eqnarray} {\sf X}^{~}_{\sf J}({\mbf x})&=&x^{~}_{\sf T}~=~x^- \ , \nonumber\\ {\sf X}^{~}_{\sf T}({\mbf x}) &=&x^{~}_{\sf J}~=~x^+ \ , \nonumber\\ {\sf X}^{~}_{{\sf P}_+^i}({\mbf x})&=&2x^{~}_{{\sf P}_-^i}~=~2\overline{z}_i \ , \nonumber\\ {\sf X}^{~}_{{\sf P}_-^i}({\mbf x})&=&2x^{~}_{{\sf P}_+^i}~=~2z_i \label{Xacoordfns}\end{eqnarray} for any ${\mbf x}\in\mathfrak{n}^{\vee\,}$ with component $x_a$ in the ${\sf X}_a^{\vee\,}$ direction. These functions generate the whole coordinate algebra and their Poisson bracket $\Theta$ is defined by \begin{eqnarray} \Theta({\sf X}_a,{\sf X}_b)({\mbf x})={\mbf x}\bigl([{\sf X}_a,{\sf X}_b]\bigr) ~~~~ \forall{\mbf x}\in \mathfrak{n}^{\vee\,} \ . \label{KKXadef}\end{eqnarray} Therefore, when viewed as functions on $\real^6$ the Lie algebra generators have a Poisson bracket given by the Lie bracket, and their quantization is provided by (\ref{n6genform}) with deformation parameter~$\theta$. In the next section we will explore various aspects of this quantization and derive several (equivalent) star products on ${\mathfrak n}^{\vee\,}$. \setcounter{equation}{0}\section{Gutt Products\label{StarProds}} The formal completion of the space of polynomials $\complex({\mathfrak n}^{\vee\,})$ is the algebra ${\rm C}^\infty(\mathfrak{n}^{\vee\,})$ of smooth functions on $\mathfrak{n}^{\vee\,}$. There is a natural way to construct a star-product on the cotangent bundle $T^*\mathcal{N}\cong\mathcal{N}\times\mathfrak{n}^{\vee\,}$, which naturally induces an associative product on ${\rm C}^\infty({\mathfrak n}^{\vee\,})$. This induced product is called the Gutt product~\cite{Gutt1}. 
The Poisson bracket defined by (\ref{KKXadef}) naturally extends to a Poisson structure $\Theta:{\rm C}^\infty({\mathfrak n}^{\vee\,})\times{\rm C}^\infty({\mathfrak n}^{\vee\,}) \to{\rm C}^\infty({\mathfrak n}^{\vee\,})$ defined by the Kirillov-Kostant bi-vector \begin{eqnarray} \Theta=\mbox{$\frac12$}\,C_{ab}^{~~c}\,x_c\,\partial^a\wedge\partial^b \label{KKbivector}\end{eqnarray} where $\partial^a:=\frac{\partial}{\partial x_a}$. This coincides with the Seiberg-Witten bi-vector in the limits described in Section~\ref{NWQuant}. The Gutt product constructs a quantization of this Poisson structure. It is equivalent to the Kontsevich star-product in this case~\cite{Dito1}, and by construction it keeps that part of the Kontsevich formula which is associative~\cite{Shoikhet1}. In general, within the present context, the Gutt and Kontsevich deformation quantizations are only identical for nilpotent Lie algebras~\cite{Kathotia1}. The algebra $\complex({\mathfrak n}^{\vee\,})$ of polynomial functions on the dual to the Lie algebra is naturally isomorphic to the symmetric tensor algebra $S({\mathfrak n})$ of ${\mathfrak n}$. By the Poincar\'e-Birkhoff-Witt theorem, there is a natural isomorphism $\Omega:S({\mathfrak n})\to U({\mathfrak n})$ with the universal enveloping algebra $U({\mathfrak n})$ of ${\mathfrak n}$. Using the above identifications, this extends to a canonical isomorphism \begin{eqnarray} \Omega\,:\,{\rm C}^\infty\left(\real^6\right)~\longrightarrow~\overline{ U({\mathfrak n})^\complex} \label{Sigmaiso}\end{eqnarray} defined by specifying an ordering for the elements of the basis of monomials for $S({\mathfrak n})$, where $\overline{U({\mathfrak n})^\complex}$ denotes a formal completion of the complexified universal enveloping algebra $U(\mathfrak{n})^\complex:=U(\mathfrak{n})\otimes\complex$. 
Denoting this ordering by $\NO-\NO$, we may write this isomorphism symbolically as \begin{eqnarray} \Omega(x_{a_1}\cdots x_{a_n})=\NO\,{\sf X}_{a_1}\cdots{\sf X}_{a_n}\,\NO \ . \label{Sigmasymbol}\end{eqnarray} The original Gutt construction~\cite{Gutt1} takes the isomorphism $\Omega$ on $S({\mathfrak n})$ to be symmetrization of monomials. In this case $\Omega(f)$ is usually called the Weyl symbol of $f\in{\rm C}^\infty(\real^6)$ and the symmetric ordering $\NO-\NO$ of symbols $\Omega(f)$ is called Weyl ordering. In the following we shall work with three natural orderings appropriate to the algebra ${\mathfrak n}$. The isomorphism (\ref{Sigmaiso}) can be used to transport the algebraic structure on the universal enveloping algebra $U({\mathfrak n})$ of ${\mathfrak n}$ to the algebra of smooth functions on ${\mathfrak n}^{\vee\,}\cong\real^6$ and give the star-product defined by \begin{eqnarray} f\star g:=\Omega^{-1}\bigl(\,\NO\,\Omega(f)\cdot\Omega(g)\,\NO \,\bigr) \ , ~~ f,g\in{\rm C}^\infty\left(\real^6\right) \ . \label{fstargSigma}\end{eqnarray} The product on the right-hand side of the formula (\ref{fstargSigma}) is taken in $U({\mathfrak n})$, and it follows that $\star$ defines an associative, noncommutative product. Moreover, it represents a deformation quantization of the Kirillov-Kostant Poisson structure on ${\mathfrak n}^{\vee\,}$, in the sense that \begin{eqnarray} [x,y]_\star:=x\star y-y\star x={\,{\rm i}\,} \theta\,\Theta(x,y) \ , ~~ x,y\in\complex_{(1)}\left({\mathfrak n}^{\vee\,}\right) \ , \label{xyPoisson}\end{eqnarray} where $\complex_{(1)}({\mathfrak n}^{\vee\,})$ is the subspace of homogeneous polynomials of degree~$1$ on ${\mathfrak n}^{\vee\,}$. 
In particular, the Lie algebra relations (\ref{n6genform}) are reproduced by star-commutators of the coordinate functions as \begin{eqnarray} [x_a,x_b]_\star={\,{\rm i}\,}\theta\,C_{ab}^{~~c}\,x_c \ , \label{xaxbstarcomm}\end{eqnarray} in accordance with the Poisson brackets (\ref{Poissonspecial}) and the definition (\ref{KKXadef}). Let us now describe how to write the star-product (\ref{fstargSigma}) explicitly in terms of a bi-differential operator $\hat{\mathcal{D}}:{\rm C}^\infty({\mathfrak n}^{\vee\,})\times{\rm C}^\infty({\mathfrak n}^{\vee\,})\to {\rm C}^\infty({\mathfrak n}^{\vee\,})$~\cite{Kathotia1}. Using the Kirillov-Kostant Poisson structure as before, we identify the generators of ${\mathfrak n}$ as coordinates on ${\mathfrak n}^{\vee\,}$. This establishes, for small $s\in\real$, a one-to-one correspondence between group elements ${\,\rm e}\,^{s\,{\sf X}}$, ${\sf X}\in{\mathfrak n}$ and functions ${\,\rm e}\,^{s\,x}$ on ${\mathfrak n}^{\vee\,}$. Pulling back the group multiplication of elements ${\,\rm e}\,^{s\,{\sf X}}\in\mathcal{N}$ via this correspondence induces a bi-differential operator $\hat{\mathcal{D}}$ acting on the functions ${\,\rm e}\,^{s\,x}$. Since these functions separate the points on ${\mathfrak n}^{\vee\,}$, this extends to an operator on the whole of ${\rm C}^\infty({\mathfrak n}^{\vee\,})$. To apply this construction explicitly, we use the following trick~\cite{MSSW1,BehrSyk1} which will also prove useful for later considerations. By restricting to an appropriate Schwartz subspace of functions $f\in{\rm C}^\infty(\real^6)$, we may use a Fourier representation \begin{eqnarray} f({\mbf x})=\int\limits_{\real^6}\frac{{\rm d}{\mbf k}}{(2\pi)^6}~\tilde f({\mbf k})~ {\,\rm e}\,^{{\,{\rm i}\,}{\mbf k}\cdot{\mbf x}} \ . \label{Fouriertransfdef}\end{eqnarray} This establishes a correspondence between (Schwartz) functions on ${\mathfrak n}^{\vee\,}$ and elements of the complexified group $\mathcal{N}^\complex:=\mathcal{N}\otimes\complex$. 
The products of symbols $\Omega(f)$ may be computed using (\ref{Sigmasymbol}), and the star-product (\ref{fstargSigma}) can be represented in terms of a product of group elements in $\mathcal{N}^\complex$ as \begin{eqnarray} f\star g=\int\limits_{\real^6}\frac{{\rm d}{\mbf k}}{(2\pi)^6}~ \int\limits_{\real^6}\frac{{\rm d}{\mbf q}}{(2\pi)^6}~\tilde f({\mbf k})\, \tilde g({\mbf q})~\Omega^{-1}\left(\,\NO~~\NO\,{\,\rm e}\,^{{\,{\rm i}\,} k^a\,{\sf X}_a}\,\NO\cdot \NO\,{\,\rm e}\,^{{\,{\rm i}\,} q^a\,{\sf X}_a}\,\NO~~\NO\,\right) \ . \label{fstargFourier}\end{eqnarray} Using the Baker-Campbell-Hausdorff formula, to be discussed below, we may write \begin{eqnarray} \NO~~\NO\,{\,\rm e}\,^{{\,{\rm i}\,} k^a\,{\sf X}_a}\,\NO\cdot\NO\,{\,\rm e}\,^{{\,{\rm i}\,} q^a\,{\sf X}_a}\,\NO~~\NO= \NO\,{\,\rm e}\,^{{\,{\rm i}\,} D^a({\mbf k},{\mbf q})\,{\sf X}_a}\,\NO \label{NOproductsBCH}\end{eqnarray} for some function ${\mbf D}=(D^a):\real^6\times\real^6\to\real^6$. This enables us to rewrite the star-product (\ref{fstargFourier}) in terms of a bi-differential operator $f\star g:=\hat{\mathcal{D}}(f,g)$ given explicitly by \begin{eqnarray} f\star g=f~{\,\rm e}\,^{{\,{\rm i}\,}{\mbf x}\cdot[{\mbf D}(\,-{\,{\rm i}\,}\overleftarrow{{\mbf\partial}} \,,\,-{\,{\rm i}\,}\overrightarrow{{\mbf\partial}}\,)+{\,{\rm i}\,}\overleftarrow{{\mbf\partial}}+{\,{\rm i}\,} \overrightarrow{{\mbf\partial}}\,]}~g \label{fstargbidiff}\end{eqnarray} with ${\mbf\partial}:=(\partial^a)$. In particular, the star-products of the coordinate functions themselves may be computed from the formula \begin{eqnarray} x_a\star x_b=\left.-\frac{\partial}{\partial k^a}\frac\partial {\partial q^b}{\,\rm e}\,^{{\,{\rm i}\,}{\mbf D}({\mbf k},{\mbf q})\cdot{\mbf x}}\right|_{{\mbf k}={\mbf q}=\mbf0} \ . \label{xastarxb}\end{eqnarray} Finally, let us describe how to explicitly compute the functions $D^a({\mbf k},{\mbf q})$ in (\ref{NOproductsBCH}). 
For this, we consider the Dynkin form of the Baker-Campbell-Hausdorff formula which is given for ${\sf X},{\sf Y}\in{\mathfrak n}$ by \begin{equation} \label{eq:BCH:define} {\,\rm e}\,^{\sf X}~{\,\rm e}\,^{\sf Y}={\,\rm e}\,^{\mathrm{H}({\sf X}:{\sf Y})} \ , \end{equation} where $\mathrm{H}({\sf X}:{\sf Y})=\sum_{n\geq1}\mathrm{H}_n({\sf X}:{\sf Y})\in{\mathfrak n}$ is generically an infinite series whose terms may be calculated through the recurrence relation \begin{eqnarray} \label{eq:BCH} &&(n+1)~\mathrm{H}_{n+1}({\sf X}:{\sf Y})~=~\mbox{$\frac 12$}\,\bigl[{\sf X}-{\sf Y}\,,\, \mathrm{H}_n({\sf X}:{\sf Y})\bigr] \nonumber \\ &&~~~~~~~~~~~~~~~~~~~~ +\,\sum_{p=1}^{\lfloor n/2\rfloor}\frac{B_{2p}}{(2p)!}~ \sum_{\substack{k_1,\ldots,k_{2p}> 0 \\ k_1+\ldots+k_{2p}=n }} \bigl[\mathrm{H}_{k_1}({\sf X}:{\sf Y})\,,\,\bigl[\,\ldots\,,\,\bigl[ \mathrm{H}_{k_{2p}}({\sf X}:{\sf Y})\,,\,{\sf X}+{\sf Y}\bigr]\ldots\bigr]\,\bigr]\nonumber\\ \end{eqnarray} with $\mathrm{H}_1({\sf X}:{\sf Y}):={\sf X}+{\sf Y}$. The coefficients $B_{2p}$ are the Bernoulli numbers which are defined by the generating function \begin{equation} \label{eq:BCH:K} \frac{s}{1-{\,\rm e}\,^{-s}}-\frac s2-1=\sum_{p=1}^\infty\frac{B_{2p}}{(2p)!} ~s^{2p} \ . \end{equation} The first few terms of the formula (\ref{eq:BCH:define}) may be written explicitly as \begin{eqnarray} \label{eq:BCH:1} \mathrm{H}_1({\sf X}:{\sf Y})&=& {\sf X}+{\sf Y} \ , \nonumber\\ \mathrm{H}_2({\sf X}:{\sf Y})&=&\mbox{$\frac 12$}\,\cb {\sf X}{\sf Y} \ , \nonumber \\ \mathrm{H}_3({\sf X}:{\sf Y})&=&\mbox{$\frac 1{12}$}\,\bigl[{\sf X}\,,\,\cb {\sf X}{\sf Y}\,\bigr] -\mbox{$\frac 1{12}$}\,\bigl[{\sf Y}\,,\,\cb {\sf X}{\sf Y}\,\bigr] \ , \nonumber\\ \mathrm{H}_4({\sf X}:{\sf Y})&=& -\mbox{$\frac 1{24}$}\,\bigl[{\sf Y}\,,\,\bigl[{\sf X} \,,\,\cb {\sf X}{\sf Y}\,\bigr]\,\bigr] \ . 
\end{eqnarray} Terms in the series grow increasingly complicated due to the sum over partitions in \eqref{eq:BCH}, and in general there is no closed symbolic form for the functions $D^a({\mbf k},{\mbf q})$ appearing in (\ref{NOproductsBCH}), in contrast to the case of the Moyal product based on the ordinary Heisenberg algebra. However, at least for certain ordering prescriptions, the solvability of the Lie algebra ${\mathfrak n}$ enables one to find explicit expressions for the star-product (\ref{fstargbidiff}) in this fashion. We will now proceed to construct three such products. \subsection{Time Ordering\label{TOP}} The simplest Gutt product is obtained by choosing a ``time ordering'' prescription in (\ref{Sigmasymbol}) whereby all factors of the time translation generator ${\sf J}$ occur to the far right in any monomial in $U({\mathfrak n})$. It coincides precisely with the global coordinatization (\ref{NW4coords}) of the Cahen-Wallach spacetime, and written on elements of the complexified group $\mathcal{N}^\complex$ it is defined by \begin{equation} \label{eq:time:defn} \Omega_*\left({\,\rm e}\,^{{\,{\rm i}\,}{\mbf k}\cdot{\mbf x}}\right)= \NOa\,{\,\rm e}\,^{{\,{\rm i}\,} k^a\,{\sf X}_a}\,\NOa:={\,\rm e}\,^{{\,{\rm i}\,}(p_i^+\,{\sf P}^i_+ +p_i^-\,{\sf P}^i_-)}~{\,\rm e}\,^{{\,{\rm i}\,} j\,{\sf J}}~{\,\rm e}\,^{{\,{\rm i}\,} t\,{\sf T}} \ , \end{equation} where we have denoted ${\mbf k}:=(j,t,{\mbf p}^\pm)$ with $j,t\in\real$ and ${\mbf p}^\pm=\overline{{\mbf p}^\mp}=(p_1^\pm,p_2^\pm)\in\complex^2$.
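The recursion \eqref{eq:BCH} itself can be exercised mechanically: truncating the series and comparing $\exp$ of the partial sum with a direct product of exponentials. The sketch below is an added illustration using an assumed matrix realization of one transverse slice of $\mathfrak n$ (not a construction from the text); for this solvable algebra the truncation error decays rapidly.

```python
import itertools
import math
import numpy as np

def E(i, j, n=4):
    m = np.zeros((n, n), dtype=complex)
    m[i - 1, j - 1] = 1.0
    return m

# An assumed 4x4 realization of n, restricted to one transverse direction.
J, T = 1j*(E(2, 2) + E(3, 3)), 1j*E(1, 4)
P1p, P1m = E(2, 4), 2*E(1, 2)

def bracket(A, B):
    return A @ B - B @ A

def expm(A, terms=40):
    """Power-series matrix exponential; adequate for these small matrices."""
    out = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

BERN = {2: 1/6, 4: -1/30, 6: 1/42, 8: -1/30}  # Bernoulli numbers B_{2p}

def bch(X, Y, order):
    """Sum of H_n(X:Y), n = 1..order, from the recursion (eq:BCH)."""
    H = {1: X + Y}
    for n in range(1, order):
        term = 0.5*bracket(X - Y, H[n])
        for p in range(1, n//2 + 1):
            coeff = BERN[2*p]/math.factorial(2*p)
            for ks in itertools.product(range(1, n), repeat=2*p):
                if sum(ks) == n:
                    # nested bracket [H_{k_1},[...,[H_{k_{2p}}, X+Y]...]]
                    nested = X + Y
                    for k in reversed(ks):
                        nested = bracket(H[k], nested)
                    term = term + coeff*nested
        H[n + 1] = term/(n + 1)
    return sum(H.values())

X, Y = 0.1*(J + P1p), 0.2*(P1m + T)
assert np.allclose(expm(X) @ expm(Y), expm(bch(X, Y, 9)), atol=1e-8)
print("truncated BCH series reproduces exp(X) exp(Y)")
```

For $n=1$ the recursion returns ${\sf X}+{\sf Y}$ and for $n=2$ it returns $\frac12[{\sf X},{\sf Y}]$, matching the explicit low-order terms \eqref{eq:BCH:1}.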
To calculate the corresponding star-product $*$, we have to compute the group products \begin{eqnarray} \NOa~~\NOa\,{\,\rm e}\,^{{\,{\rm i}\,} k^a\,{\sf X}_a}\,\NOa\cdot\NOa\,{\,\rm e}\,^{{\,{\rm i}\,} k^{\prime\,a}\,{\sf X}_a} \,\NOa~~\NOa&=&\NOa\,{\,\rm e}\,^{{\,{\rm i}\,}(p_i^+\,{\sf P}^i_++p_i^-\,{\sf P}^i_-)}~{\,\rm e}\,^{{\,{\rm i}\,} j\,{\sf J}}~ {\,\rm e}\,^{{\,{\rm i}\,} t\,{\sf T}}\nonumber\\ && \quad\times~ {\,\rm e}\,^{{\,{\rm i}\,}(p_i^{\prime\,+}\,{\sf P}^i_++p_i^{\prime\,-}\,{\sf P}^i_-)}~{\,\rm e}\,^{{\,{\rm i}\,} j'\,{\sf J}}~ {\,\rm e}\,^{{\,{\rm i}\,} t'\,{\sf T}}\,\NOa \ . \label{TOgpprods}\end{eqnarray} The simplest way to compute these products is to realize the six-dimensional Lie algebra ${\mathfrak n}$ as a central extension of the subalgebra ${\mathfrak s}={\rm so}(2)\ltimes\real^4$ of the four-dimensional euclidean algebra ${\rm iso}(4)={\rm so}(4)\ltimes\real^4$~\cite{SF-OF1,F-OFS1}. Regarding $\real^4$ as $\complex^2$ (with respect to a chosen complex structure), for generic $\theta\neq0$ the generators of ${\mathfrak n}$ act on ${\mbf w}\in\complex^2$ according to the affine transformations ${\,\rm e}\,^{{\,{\rm i}\,} j\,{\sf J}}\cdot{\mbf w}={\,\rm e}\,^{-\theta\,j}\,{\mbf w}$ and ${\,\rm e}\,^{{\,{\rm i}\,}(p_i^+\,{\sf P}^i_++p_i^-\,{\sf P}^i_-)}\cdot{\mbf w}={\mbf w}+{\,{\rm i}\,}\theta\,{\mbf p}^-$, corresponding to a combined rotation in the $(12)$, $(34)$ planes and translations in $\real^4\cong\complex^2$. The central element generates an abstract one-parameter subgroup acting as ${\,\rm e}\,^{{\,{\rm i}\,} t\,{\sf T}}\cdot{\mbf w}={\,\rm e}\,^{-\theta\,t}\,{\mbf w}$ in this representation. 
From this action we can read off the group multiplication laws \begin{eqnarray} {\,\rm e}\,^{{\,{\rm i}\,} j\,{\sf J}}~{\,\rm e}\,^{{\,{\rm i}\,} j'\,{\sf J}}&=&{\,\rm e}\,^{{\,{\rm i}\,}(j+j'\,)\,{\sf J}} \ , \label{JJgpmultlaw}\\ {~~~~}^{~~}_{~~}\nonumber\\{\,\rm e}\,^{{\,{\rm i}\,} j\,{\sf J}}~{\,\rm e}\,^{{\,{\rm i}\,}(p_i^+\,{\sf P}^i_++p_i^-\,{\sf P}^i_-)} &=&{\,\rm e}\,^{{\,{\rm i}\,}(\,{\,\rm e}\,^{-\theta\,j}\,p_i^+\,{\sf P}^i_+ +{\,\rm e}\,^{\theta\,j}\,p_i^-\,{\sf P}^i_-)}~{\,\rm e}\,^{{\,{\rm i}\,} j\,{\sf J}} \ , \label{JQgpmultlaw}\\ {~~~~}^{~~}_{~~}\nonumber\\{\,\rm e}\,^{{\,{\rm i}\,}(p_i^+\,{\sf P}^i_++p_i^-\,{\sf P}^i_-)}~ {\,\rm e}\,^{{\,{\rm i}\,}(p_i^{\prime\,+}\,{\sf P}^i_++p_i^{\prime\,-}\,{\sf P}^i_-)}~ &=&{\,\rm e}\,^{{\,{\rm i}\,}[(p_i^++p_i^{\prime\,+})\,{\sf P}^i_+ +(p_i^-+p_i^{\prime\,-})\,{\sf P}^i_-]}~{\,\rm e}\,^{2\theta~{\rm Im}( {\mbf p}^+\cdot{\mbf p}^{\prime\,-})\,{\sf T}} \label{QQgpmultlaw}\end{eqnarray} where the formula (\ref{JQgpmultlaw}) displays the semi-direct product nature of the euclidean group, while (\ref{QQgpmultlaw}) displays the group cocycle of the projective representation of the subgroup $\mathcal S$ of ${\rm ISO}(4)$, arising from the central extension, which makes the translation algebra noncommutative and is computed from the Baker-Campbell-Hausdorff formula. 
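The multiplication laws (\ref{JJgpmultlaw})--(\ref{QQgpmultlaw}) can be checked exactly in a small matrix realization, since the Baker-Campbell-Hausdorff series terminates when the commutator of translations is central. The following numerical sketch is our own toy construction (all names hypothetical), restricted to a single transverse index $i=1$ with $p_2=0$: the generators are realized by $3\times3$ upper-triangular matrices with $[{\rm i}\,{\sf P}^1_+,{\rm i}\,{\sf P}^1_-]$ proportional to the central matrix unit.

```python
# Toy 3x3 realization (our own, not from the text) of the laws (JJ)-(QQ):
#   rho(iJ) = theta*E_22 ,  rho(iP+) = E_12 ,  rho(iP-) = 2i*theta*E_23 ,
#   rho(T)  = -E_13 ,
# for a single transverse index i = 1 and p_2 = 0.
import numpy as np
from scipy.linalg import expm

def E(a, b):
    """Matrix unit E_{a+1,b+1} in a 3x3 complex matrix."""
    M = np.zeros((3, 3), dtype=complex)
    M[a, b] = 1.0
    return M

theta = 0.3
iJ = theta * E(1, 1)          # rho(i J)
iPp = E(0, 1)                 # rho(i P^1_+)
iPm = 2j * theta * E(1, 2)    # rho(i P^1_-)
rhoT = -E(0, 2)               # rho(T): e^{s T} -> expm(s * rhoT)

def rot(j):                   # group element e^{i j J}
    return expm(j * iJ)

def transl(pp):               # e^{i(p^+ P_+ + p^- P_-)} with p^- = conj(p^+)
    return expm(pp * iPp + np.conj(pp) * iPm)

j, jp = 0.7, -0.4
pp, qp = 0.5 + 0.2j, -0.3 + 0.8j

# (JJgpmultlaw): one-parameter subgroup generated by J
assert np.allclose(rot(j) @ rot(jp), rot(j + jp))

# (JQgpmultlaw): conjugation rescales P_+ by e^{-theta j}, P_- by e^{theta j}
lhs = rot(j) @ transl(pp)
rhs = expm(np.exp(-theta*j)*pp*iPp + np.exp(theta*j)*np.conj(pp)*iPm) @ rot(j)
assert np.allclose(lhs, rhs)

# (QQgpmultlaw): the central group cocycle e^{2 theta Im(p^+ p'^-) T}
coc = expm(2*theta*np.imag(pp*np.conj(qp)) * rhoT)
assert np.allclose(transl(pp) @ transl(qp), transl(pp + qp) @ coc)
```

Both the conjugation law and the translation cocycle hold here as exact matrix identities, not merely to low order in $\theta$.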
Using (\ref{JJgpmultlaw})--(\ref{QQgpmultlaw}) we may now compute the products (\ref{TOgpprods}) and one finds \begin{eqnarray} \NOa~~\NOa\,{\,\rm e}\,^{{\,{\rm i}\,} k^a\,{\sf X}_a}\,\NOa\cdot\NOa\,{\,\rm e}\,^{{\,{\rm i}\,} k^{\prime\,a}\,{\sf X}_a} \,\NOa~~\NOa&=&{\,\rm e}\,^{{\,{\rm i}\,}[(p_i^++{\,\rm e}\,^{-\theta\,j}\,p_i^{\prime\,+})\, {\sf P}^i_++(p_i^-+{\,\rm e}\,^{\theta\,j}\,p_i^{\prime\,-})\,{\sf P}^i_-]}~ {\,\rm e}\,^{{\,{\rm i}\,}(j+j'\,)\,{\sf J}}\nonumber\\ &&\times~ {\,\rm e}\,^{{\,{\rm i}\,}[t+t'-\theta\,({\,\rm e}\,^{\theta\,j}\, {\mbf p}^+\cdot{\mbf p}^{\prime\,-}-{\,\rm e}\,^{-\theta\,j}\, {\mbf p}^-\cdot{\mbf p}^{\prime\,+})]\,{\sf T}} \ . \label{TOgpprodexpl}\end{eqnarray} {}From (\ref{xastarxb}) we may compute the star-products between the coordinate functions on ${\mathfrak n}^{\vee\,}$ and easily verify the commutation relations of the algebra ${\mathfrak n}$, \begin{eqnarray} x_a*x_a&=&(x_a)^2 \ , \nonumber\\x_a*x^+&=&x^+*x_a~=~x_a\,x^+ \ , \nonumber\\ z_1*z_2&=&z_2*z_1~=~z_1\,z_2 \ , \nonumber\\ \overline{z}_1*\overline{z}_2&=&\overline{z}_2*\overline{z}_1 {}~=~\overline{z}_1\,\overline{z}_2 \ , \nonumber\\ x^-*z_i&=&x^-\,z_i-{\,{\rm i}\,}\theta\,z_i \ , \nonumber\\ z_i*x^-&=&x^-\,z_i \ , \nonumber\\ x^-*\overline{z}_i&=&x^-\,\overline{z}_i+{\,{\rm i}\,}\theta\,\overline{z}_i \ , \nonumber\\ \overline{z}_i*x^-&=&x^-\,\overline{z}_i \ , \nonumber\\ z_i*\overline{z}_i&=&z_i\,\overline{z}_i-{\,{\rm i}\,}\theta\,x^+ \ , \nonumber\\ \overline{z}_i*z_i&=&z_i\,\overline{z}_i+{\,{\rm i}\,}\theta\,x^+ \ , \label{TOcoordstarprods}\end{eqnarray} with $a=1,\dots,6$ and $i=1,2$.
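The table (\ref{TOcoordstarprods}) can be cross-checked mechanically from (\ref{TOgpprodexpl}): writing ${\,\rm e}\,^{{\,{\rm i}\,}{\mbf k}\cdot{\mbf x}}*{\,\rm e}\,^{{\,{\rm i}\,}{\mbf k}'\cdot{\mbf x}}={\,\rm e}\,^{{\,{\rm i}\,}\Lambda({\mbf k},{\mbf k}'\,)\cdot{\mbf x}}$ and differentiating twice at ${\mbf k}={\mbf k}'=0$. The SymPy sketch below is our own (hypothetical symbol names), restricted to a single transverse index and using the pairing convention ${\mbf k}\cdot{\mbf x}=j\,x^-+t\,x^++p^+\,\overline{z}_1+p^-\,z_1$, which is an assumption chosen to match the table.

```python
# SymPy cross-check of (TOcoordstarprods) from the exponent of (TOgpprodexpl);
# single transverse index, pairing k.x = j x^- + t x^+ + p^+ zbar + p^- z
# (our own convention, not spelled out in the text).
import sympy as sp

theta = sp.symbols('theta', positive=True)
xm, xp, z, zb = sp.symbols('xm xp z zb')        # x^-, x^+, z_1, zbar_1
j, t, pp, pm = sp.symbols('j t pp pm')          # k  = (j, t, p^+, p^-)
jq, tq, qp, qm = sp.symbols('jq tq qp qm')      # k' = (j', t', p'^+, p'^-)
I = sp.I

Lam = ((j + jq)*xm
       + (t + tq - theta*(sp.exp(theta*j)*pp*qm
                          - sp.exp(-theta*j)*pm*qp))*xp
       + (pp + sp.exp(-theta*j)*qp)*zb
       + (pm + sp.exp(theta*j)*qm)*z)

zero = {j: 0, t: 0, pp: 0, pm: 0, jq: 0, tq: 0, qp: 0, qm: 0}

def star(ka, kb):
    """x_a * x_b = - d^2/dk_a dk'_b  exp(i Lambda . x) at k = k' = 0."""
    return sp.expand(-sp.diff(sp.exp(I*Lam), ka, kb).subs(zero))

assert sp.simplify(star(pm, qp) - (z*zb - I*theta*xp)) == 0   # z * zbar
assert sp.simplify(star(pp, qm) - (z*zb + I*theta*xp)) == 0   # zbar * z
assert sp.simplify(star(j, qm) - (xm*z - I*theta*z)) == 0     # x^- * z
assert sp.simplify(star(pm, jq) - xm*z) == 0                  # z * x^-
```

The asymmetry of the time ordering is visible here: the shift $-{\,{\rm i}\,}\theta\,z_i$ appears only when $x^-$ stands to the left.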
From (\ref{NOproductsBCH},\ref{fstargbidiff}) we find the star-product $*$ of generic functions $f,g\in{\rm C}^{\infty}({\mathfrak n}^{\vee\,})$ given by \begin{eqnarray} f*g&=&\mu\circ\exp\left[{\,{\rm i}\,}\theta\,x^+\,\left({\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-}\, {\mbf\partial}^\top\otimes\overline{{\mbf\partial}}-{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}\, {\overline{{\mbf\partial}}}{}^{\,\top}\otimes{\mbf\partial}\right)\right.\nonumber \\ &&\qquad\qquad+\left.\overline{z}_i\, \left({\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}-1\right)\otimes \partial^i+z_i\,\left({\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-}-1\right) \otimes\overline{\partial}{}^{\,i}\right]f\otimes g \ , \label{TOstargen}\end{eqnarray} where $\mu(f\otimes g)=f\,g$ is the pointwise product. To second order in the deformation parameter $\theta$ we obtain \begin{eqnarray} \label{eq:time:positionspace} \nonumber f\ast g&=&f\,g -{\,{\rm i}\,}\theta\,\left[ x^+\,\left(\,\overline{{\mbf\partial}}f\cdot{\mbf\partial} g -{\mbf\partial} f\cdot\overline{{\mbf\partial}}g\right) -\overline{{\mbf z}}\cdot\partial_-f\,{\mbf\partial} g +{\mbf z}\cdot\partial_-f\,\overline{{\mbf\partial}}g \right]\\\nonumber &&-\,\theta^2\,\mbox{$\sum\limits_{i=1,2}$}\,\left[\, \mbox{$\frac12$}\,\left(x^+\right)^2\,\left((\partial^i)^2f\, (\,{\overline{\partial}}{}^{\,i})^2g -2{\overline{\partial}}{}^{\,i}\partial^if\,{\overline{\partial}}{}^{\,i}\partial^ig +({\overline{\partial}}{}^{\,i})^2f\,(\partial^i)^2g\right)\right. 
\\\nonumber&&\qquad\qquad\quad-\,x^+\,\left(\partial^i\partial_- f\, {\overline{\partial}}{}^{\,i}g -{\overline{\partial}}{}^{\,i}\partial_- f\,\partial^ig\right) -x^+\,\overline{z}_i\,\left(\,{\overline{\partial}}{}^{\,i}\partial_- f\,(\partial^i)^2g -\partial^i\partial_- f\,{\overline{\partial}}{}^{\,i}\partial^ig\right)\\ \nonumber &&\qquad\qquad\quad +\,x^+\,z_i\,\left(\,{\overline{\partial}}{}^{\,i}\partial_- f\,{\overline{\partial}}{}^{\,i}\partial^ig -\partial^i\partial_- f\,(\,{\overline{\partial}}{}^{\,i})^2g\right) -\overline{z}_i\,z_i\,\partial_-^2f\,{\overline{\partial}}{}^{\,i}\partial^ig \\ \nonumber &&\qquad\qquad\quad +\Bigl.\mbox{$\frac12$}\,\left(\overline{z}_i^{\,2}\,\partial_-^2f\,(\partial^i)^2g +\overline{z}_i\,\partial_-^2f\,\partial^ig +z_i\,\partial_-^2f\,{\overline{\partial}}{}^{\,i}g +z_i^2\,\partial_-^2f\,(\,{\overline{\partial}}{}^{\,i})^2g\right) \Bigr]\\ && +\,O\left(\theta^3\right) \ . \end{eqnarray} \subsection{Symmetric Time Ordering\label{TSOP}} Our next Gutt product is obtained by taking a ``symmetric time ordering'' whereby any monomial in $U({\mathfrak n})$ is the symmetric sum over the two time orderings obtained by placing ${\sf J}$ to the far right and to the far left. This ordering is induced by the group contraction of ${\rm U}(1)\times{\rm SU}(2)$ onto the Nappi-Witten group $\mathcal{N}_0$~\cite{DAK2}, and it thereby induces the coordinatization of $\NW_4$ that is obtained from the Penrose-G\"uven limit of the spacetime $\S^{1,0}\times\S^3$, i.e. it coincides with the Brinkman coordinatization of the Cahen-Wallach spacetime. On elements of $\mathcal{N}^\complex$ it is defined by \begin{eqnarray} \Omega_\bullet\left({\,\rm e}\,^{{\,{\rm i}\,}{\mbf k}\cdot{\mbf x}}\right)= \NOb\,{\,\rm e}\,^{{\,{\rm i}\,} k^a\,{\sf X}_a}\,\NOb:={\,\rm e}\,^{\frac\ii2\,j\,{\sf J}}~ {\,\rm e}\,^{{\,{\rm i}\,}(p_i^+\,{\sf P}^i_+ +p_i^-\,{\sf P}^i_-)}~{\,\rm e}\,^{\frac\ii2\,j\,{\sf J}}~{\,\rm e}\,^{{\,{\rm i}\,} t\,{\sf T}} \ . 
\label{TOsymgpprods}\end{eqnarray} {}From (\ref{JJgpmultlaw})--(\ref{QQgpmultlaw}) we can again easily compute the required group products to get \begin{eqnarray} \NOb~~\NOb\,{\,\rm e}\,^{{\,{\rm i}\,} k^a\,{\sf X}_a}\,\NOb\cdot\NOb\,{\,\rm e}\,^{{\,{\rm i}\,} k^{\prime\,a}\,{\sf X}_a} \,\NOb~~\NOb&=&{\,\rm e}\,^{\frac\ii2\,(j+j'\,)\,{\sf J}}\nonumber\\ &&\times~ {\,\rm e}\,^{{\,{\rm i}\,}[({\,\rm e}\,^{\frac{\theta}2\,j'}\,p_i^++ {\,\rm e}\,^{-\frac{\theta}2\,j}\,p_i^{\prime\,+})\, {\sf P}^i_++({\,\rm e}\,^{-\frac{\theta}2\,j'}\,p_i^-+ {\,\rm e}\,^{\frac{\theta}2\,j}\,p_i^{\prime\,-})\,{\sf P}^i_-]} \nonumber\\ &&\times~{\,\rm e}\,^{\frac\ii2\,(j+j'\,)\,{\sf J}}~ {\,\rm e}\,^{{\,{\rm i}\,}[t+t'-\theta\,({\,\rm e}\,^{\frac{\theta}2\,(j+j'\,)}\, {\mbf p}^+\cdot{\mbf p}^{\prime\,-}-{\,\rm e}\,^{-\frac{\theta}2\,(j+j'\,)}\, {\mbf p}^-\cdot{\mbf p}^{\prime\,+})]\,{\sf T}} \ . \nonumber\\ && \label{TOsymgpprodexpl}\end{eqnarray} With the same conventions as above, from (\ref{xastarxb}) we may now compute the star-products $\bullet$ between the coordinate functions on ${\mathfrak n}^{\vee\,}$ and again verify the commutation relations of the algebra ${\mathfrak n}$, \begin{eqnarray} x_a\bullet x_a&=&(x_a)^2 \ , \nonumber\\ x_a\bullet x^+&=&x^+\bullet x_a~=~x_a\,x^+ \ , \nonumber\\ z_1\bullet z_2&=&z_2\bullet z_1 {}~=~z_1\,z_2 \ ,\nonumber\\\overline{z}_1\bullet \overline{z}_2 &=&\overline{z}_2\bullet \overline{z}_1 {}~=~\overline{z}_1\,\overline{z}_2 \ , \nonumber\\ x^-\bullet z_i&=&x^-\,z_i-\mbox{$\frac\ii2$}\, \theta\,z_i \ , \nonumber\\ z_i\bullet x^-&=&x^-\,z_i+\mbox{$\frac\ii2$}\, \theta\,z_i \ , \nonumber\\x^-\bullet \overline{z}_i &=&x^-\,\overline{z}_i+\mbox{$\frac\ii2$}\, \theta\,\overline{z}_i \ , \nonumber\\\overline{z}_i\bullet x^- &=&x^-\,\overline{z}_i-\mbox{$\frac\ii2$}\, \theta\,\overline{z}_i \ , \nonumber\\ z_i\bullet \overline{z}_i&=&z_i\,\overline{z}_i-{\,{\rm i}\,}\theta\,x^+ \ , \nonumber\\ \overline{z}_i\bullet z_i &=&z_i\,\overline{z}_i+{\,{\rm i}\,}\theta\,x^+ \ . \label{TOsymcoordstarprods}\end{eqnarray} {}From (\ref{NOproductsBCH},\ref{fstargbidiff}) we find for generic functions the formula \begin{eqnarray} f\bullet g&=&\mu\circ\exp\left\{{\,{\rm i}\,}\theta\,x^+\,\left({\,\rm e}\,^{-\frac{{\,{\rm i}\,}\theta}2 \,\partial_-}\, {\mbf\partial}^\top\otimes{\,\rm e}\,^{-\frac{{\,{\rm i}\,}\theta}2\,\partial_-}\, \overline{{\mbf\partial}}-{\,\rm e}\,^{\frac{{\,{\rm i}\,}\theta}2\,\partial_-}\, \overline{{\mbf\partial}}{}^{\,\top}\otimes{\,\rm e}\,^{\frac{{\,{\rm i}\,}\theta}2\,\partial_-}\, {\mbf\partial}\right)\right.\nonumber \\ &&\qquad\qquad+\,\overline{z}_i\,\left[\partial^i\otimes \left({\,\rm e}\,^{-\frac{{\,{\rm i}\,}\theta}2\,\partial_-}-1\right) +\left({\,\rm e}\,^{\frac{{\,{\rm i}\,}\theta}2\,\partial_-}-1\right)\otimes \partial^i\right]\nonumber\\ &&\qquad\qquad+\left. z_i\,\left[\,\overline{\partial}{}^{\,i}\otimes \left({\,\rm e}\,^{\frac{{\,{\rm i}\,}\theta}2\,\partial_-}-1\right) +\left({\,\rm e}\,^{-\frac{{\,{\rm i}\,}\theta}2\,\partial_-}-1\right) \otimes\overline{\partial}{}^{\,i}\right]\right\}f\otimes g \ .
\label{TOsymstargen}\end{eqnarray} To second order in $\theta$ we obtain \begin{eqnarray} \label{eq:symtime:positionspace}\nonumber f\bullet g&=&f\,g-\mbox{$\frac{{\,{\rm i}\,}}2$}\,\theta\,\left[ 2x^+\,\left(\,\overline{{\mbf\partial}}f\cdot{\mbf\partial} g - {\mbf\partial} f\cdot\overline{{\mbf\partial}}g\right)\right.\\\nonumber &&\qquad\qquad\quad\quad+\left.\overline{{\mbf z}}\cdot\left({\mbf\partial} f\,\partial_- g - \partial_- f\,{\mbf\partial} g\right)+ {\mbf z}\cdot\left(\partial_- f\,\overline{{\mbf\partial}}g - \overline{{\mbf\partial}}f\,\partial_- g\right)\right]\\ \nonumber &&-\,\mbox{$\frac1{2}$}\,\theta^2\,\mbox{$\sum\limits_{i=1,2}$}\, \left[\left(x^+\right)^2\,\left((\,\overline{\partial}{}^{\,i})^2f\,(\partial^i)^2g +(\partial^i)^2f\,(\,\overline{\partial}{}^{\,i})^2g -2\overline{\partial}{}^{\,i}\partial^if\,\overline{\partial}{}^{\,i}\partial^ig \right)\right.\\ \nonumber &&\qquad\qquad\quad\quad -\,x^+\,\left(\partial^if\,\overline{\partial}{}^{\,i}\partial_- g +\overline{\partial}{}^{\,i}f\,\partial^i\partial_- g +\overline{\partial}{}^{\,i}\partial_- f\,\partial^ig +\partial^i\partial_- f\,\overline{\partial}{}^{\,i}g\right)\\ \nonumber &&\qquad\qquad\quad\quad +\,x^+\,\overline{z}_i\,\left(\,\overline{\partial}{}^{\,i}\partial^if\,\partial^i\partial_- g -\overline{\partial}{}^{\,i}\partial_- f\,(\partial^i)^2g +\partial^i\partial_- f\,\overline{\partial}{}^{\,i}\partial^ig -(\partial^i)^2f\,\overline{\partial}{}^{\,i}\partial_- g\right) \\ \nonumber &&\qquad\qquad\quad\quad +\,x^+\,z_i\,\left(\,\overline{\partial}{}^{\,i}\partial^if\,\overline{\partial}{}^{\,i}\partial_- g -\partial^i\partial_- f\,(\,\overline{\partial}{}^{\,i})^2g +\overline{\partial}{}^{\,i}\partial_- f\,\overline{\partial}{}^i\partial^ig -(\,\overline{\partial}{}^{\,i})^2f\,\partial^i\partial_- g\right)\\ \nonumber &&\qquad\qquad\quad\quad +\,\mbox{$\frac12$}\,\overline{z}_i\,z_i\,\left(\, \overline{\partial}{}^{\,i}\partial_- f\,\partial^i\partial_- g 
+\partial^i\partial_- f\,\overline{\partial}{}^{\,i}\partial_- g -\partial_- ^2f\,\overline{\partial}{}^{\,i}\partial^ig -\overline{\partial}{}^{\,i}\partial^if\,\partial_-^2g\right) \\ \nonumber &&\qquad\qquad\quad\quad +\,\mbox{$\frac14$}\,\overline{z}_i^{\,2}\, \left((\partial^i)^2f\,\partial_-^2g -2\partial^i\partial_- f\,\partial^i\partial_- g +\partial_-^2f\,(\partial^i)^2g\right)\\ \nonumber &&\qquad\qquad\quad\quad +\,\mbox{$\frac14$}\,z_i^2\, \left((\,\overline{\partial}{}^{\,i})^2f\,\partial_-^2g -2\overline{\partial}{}^{\,i}\partial_- f\,\overline{\partial}{}^{\,i}\partial_- g +\partial_-^2f\,(\,\overline{\partial}{}^{\,i})^2g\right) \\ \nonumber &&\qquad\qquad\quad\quad +\Bigl.\mbox{$\frac14$}\,\overline{z}_i\,\left(\partial^if\,\partial_-^2g +\partial_-^2f\,\partial^ig\right) +\mbox{$\frac14$}\,z_i\,\left(\partial_-^2f\,\overline{\partial}{}^{\,i}g +\overline{\partial}{}^{\,i}f\,\partial_-^2g\right) \Bigr]+O\left(\theta^3\right) \ . \\ && \end{eqnarray} \subsection{Weyl Ordering\label{WOP}} The original Gutt product~\cite{Gutt1} is based on the ``Weyl ordering'' prescription whereby all monomials in $U({\mathfrak n})$ are completely symmetrized over all elements of ${\mathfrak n}$. On $\mathcal{N}^\complex$ it is defined by \begin{eqnarray} \Omega_\star\left({\,\rm e}\,^{{\,{\rm i}\,}{\mbf k}\cdot{\mbf x}}\right)= \NO\,{\,\rm e}\,^{{\,{\rm i}\,} k^a\,{\sf X}_a}\,\NO:={\,\rm e}\,^{{\,{\rm i}\,} k^a\,{\sf X}_a} \ . \label{Weylgpprods}\end{eqnarray} While this ordering is usually thought of as the ``canonical'' ordering for the construction of star-products, in our case it turns out to be drastically more complicated than the other orderings. Nevertheless, we shall present here its explicit construction for the sake of completeness and for later comparisons. It is an extremely arduous task to compute products of the group elements (\ref{Weylgpprods}) directly from the Baker-Campbell-Hausdorff formula (\ref{eq:BCH}). 
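Before carrying out the Weyl-ordered computation, we note that the symmetric-ordering relations can be checked by the same plane-wave differentiation used for the time-ordered case. The sketch below is our own (hypothetical symbol names), with a single transverse index and the assumed pairing ${\mbf k}\cdot{\mbf x}=j\,x^-+t\,x^++p^+\,\overline{z}_1+p^-\,z_1$; it confirms the symmetric half-shifts $\mp\frac\ii2\,\theta$ of (\ref{TOsymcoordstarprods}) from the exponent of (\ref{TOsymgpprodexpl}).

```python
# SymPy check of (TOsymcoordstarprods) from the exponent of (TOsymgpprodexpl);
# single transverse index, pairing k.x = j x^- + t x^+ + p^+ zbar + p^- z
# (our own convention).
import sympy as sp

theta = sp.symbols('theta', positive=True)
xm, xp, z, zb = sp.symbols('xm xp z zb')
j, t, pp, pm = sp.symbols('j t pp pm')
jq, tq, qp, qm = sp.symbols('jq tq qp qm')
I = sp.I

Lam = ((j + jq)*xm
       + (t + tq - theta*(sp.exp(theta*(j + jq)/2)*pp*qm
                          - sp.exp(-theta*(j + jq)/2)*pm*qp))*xp
       + (sp.exp(theta*jq/2)*pp + sp.exp(-theta*j/2)*qp)*zb
       + (sp.exp(-theta*jq/2)*pm + sp.exp(theta*j/2)*qm)*z)

zero = {j: 0, t: 0, pp: 0, pm: 0, jq: 0, tq: 0, qp: 0, qm: 0}

def bullet(ka, kb):
    """x_a . x_b = - d^2/dk_a dk'_b  exp(i Lambda . x) at k = k' = 0."""
    return sp.expand(-sp.diff(sp.exp(I*Lam), ka, kb).subs(zero))

assert sp.simplify(bullet(j, qm) - (xm*z - I*theta*z/2)) == 0    # x^- . z
assert sp.simplify(bullet(pm, jq) - (xm*z + I*theta*z/2)) == 0   # z . x^-
assert sp.simplify(bullet(pm, qp) - (z*zb - I*theta*xp)) == 0    # z . zbar
```

In contrast to the time-ordered table, the $\theta$-shift is now distributed symmetrically between the two orderings of $x^-$ and $z_i$, while the central commutators $[z_i,\overline{z}_i]$ are unchanged.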
Instead, we shall construct an isomorphism $\mathcal{G}:\overline{U({\mathfrak n})^\complex}\to \overline{U({\mathfrak n})^\complex}$ which sends the time-ordered product defined by (\ref{TOgpprods}) into the Weyl-ordered product defined by (\ref{Weylgpprods}), i.e. \begin{eqnarray} \mathcal{G}\circ\Omega_*=\Omega_\star \ . \label{1ststudy}\end{eqnarray} Then by defining $\mathcal{G}_\Omega:=\Omega_*^{-1}\circ\mathcal{G}\circ\Omega^{~}_\star$, the star-product $\star$ associated with the Weyl ordering prescription (\ref{Weylgpprods}) may be computed as \begin{eqnarray} f\star g=\mathcal{G}^{~}_\Omega\bigl(\mathcal{G}_\Omega^{-1}(f)* \mathcal{G}_\Omega^{-1}(g)\bigr) \ , ~~ f,g\in{\rm C}^\infty({\mathfrak n}^{\vee\,}) \ . \label{WeylTOrel}\end{eqnarray} Explicitly, if \begin{eqnarray} \NOa\,{\,\rm e}\,^{{\,{\rm i}\,} k^a\,{\sf X}_a}\,\NOa={\,\rm e}\,^{{\,{\rm i}\,} G^a({\mbf k})\,{\sf X}_a} \label{1ststudyexpl}\end{eqnarray} for some function ${\mbf G}=(G^a):\real^6\to\real^6$, then the isomorphism $\mathcal{G}_\Omega:{\rm C}^\infty({\mathfrak n}^{\vee\,})\to{\rm C}^\infty({\mathfrak n}^{\vee\,})$ may be represented as the invertible differential operator \begin{eqnarray} \mathcal{G}_\Omega={\,\rm e}\,^{{\,{\rm i}\,}{\mbf x}\cdot[{\mbf G}(-{\,{\rm i}\,}\mbf\partial)+{\,{\rm i}\,}\mbf \partial]} \ . \label{Gdiffop}\end{eqnarray} This relation just reflects the fact that the time-ordered and Weyl-ordered star-products, although not identical, simply represent different ordering prescriptions for the same algebra and are therefore (cohomologically) {\it equivalent}. We will elucidate this property more thoroughly in Section~\ref{WeylSystems}. Thus once the map (\ref{1ststudyexpl}) is known, the Weyl ordered star-product $\star$ can be computed in terms of the time-ordered star-product $*$ of Section~\ref{TOP}. The functions $G^a({\mbf k})$ appearing in (\ref{1ststudyexpl}) are readily calculable through the Baker-Campbell-Hausdorff formula. 
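Although the closed forms below are obtained analytically from the BCH series, the map ${\mbf G}({\mbf k})$ in (\ref{1ststudyexpl}) can also be extracted numerically: in a faithful matrix realization one simply takes a matrix logarithm of the time-ordered symbol. The sketch below uses our own toy $3\times3$ realization (single transverse index, hypothetical names) and checks the results (\ref{Gj}) and (\ref{Gomz}) derived next, with $\phi_\theta(j)=(1-{\,\rm e}\,^{-\theta j})/\theta j$.

```python
# Extract G^a(k) of (1ststudyexpl) numerically: matrix log of the
# time-ordered symbol in a toy 3x3 realization (our own construction).
import numpy as np
from scipy.linalg import expm, logm

def E(a, b):
    M = np.zeros((3, 3), dtype=complex)
    M[a, b] = 1.0
    return M

theta = 0.3
iJ, iPp, iPm, iT = theta*E(1, 1), E(0, 1), 2j*theta*E(1, 2), -1j*E(0, 2)

j, t, pp = 0.7, 0.2, 0.5 + 0.2j
pm = np.conj(pp)

# time-ordered group element (eq:time:defn), then its matrix logarithm;
# the diagonal is positive so the principal branch is the Lie algebra element
M = expm(pp*iPp + pm*iPm) @ expm(j*iJ) @ expm(t*iT)
L = logm(M)

G_j  = (L[1, 1] / theta).real   # coefficient of rho(iJ)
G_pp = L[0, 1]                  # coefficient of rho(iP+)

assert np.isclose(G_j, j)                                        # (Gj)
assert np.isclose(G_pp, pp * theta*j / (1 - np.exp(-theta*j)))   # p+/phi(j)
```

The same decomposition yields $G^{{\mbf p}^-}$ and $G^t$ from the remaining matrix entries, so the numerics provide an independent check on each Bernoulli-number resummation performed below.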
It is clear from (\ref{TOgpprods}) that the coefficient of the time translation generator ${\sf J}\in{\mathfrak n}$ is simply \begin{eqnarray} G^j(j,t,{\mbf p}^\pm)=j \ . \label{Gj}\end{eqnarray} {}From (\ref{eq:BCH}) it is also clear that the only terms proportional to ${\sf P}^i_+$ come from commutators of the form $[{\sf J},[\dots,[{\sf J},{\sf P}^i_+]\,]\dots]$, and gathering all terms we find \begin{eqnarray} \mbox{$\sum\limits_{i=1,2}$}\, G^{p_i^+}(j,t,{\mbf p}^\pm)~{\sf P}^i_+&=&-{\,{\rm i}\,}\sum_{n=0}^\infty \frac{B_n}{n!}~\bigl[~\underbrace{{\,{\rm i}\,} j\,{\sf J}\,,\,\bigl[\dots\,,\, \bigl[{\,{\rm i}\,} j{\sf J}}_n\,,\,{\,{\rm i}\,} p_i^+\,{\sf P}^i_+\,\bigr]\,\bigr] \dots\bigr]\nonumber\\ &=& p_i^+\,\sum_{n=0}^\infty\frac{B_n}{n!}\, (-\theta\,j)^n~{\sf P}^i_+ \ . \label{GomzBn}\end{eqnarray} Since $B_0=1$, $B_1=-\frac12$ and $B_{2k+1}=0~~\forall k\geq1$, from (\ref{eq:BCH:K}) we thereby find \begin{eqnarray} G^{{\mbf p}^+}(j,t,{\mbf p}^\pm)=\frac{{\mbf p}^+} {\phi_\theta(j)} \label{Gomz}\end{eqnarray} where we have introduced the function \begin{eqnarray} \phi_\theta(j)=\frac{1-{\,\rm e}\,^{-\theta\,j}}{\theta\,j} \label{phithetadef}\end{eqnarray} obeying the identities \begin{eqnarray} \phi_\theta(j)~{\,\rm e}\,^{\theta\,j}=\phi_{-\theta}(j) \ , \quad \phi_\theta(j)\,\phi_{-\theta}(j)=-\frac2{(\theta\,j)^2}\,\bigl( 1-\cosh(\theta\,j)\bigr) \ . \label{phithetaids}\end{eqnarray} In a completely analogous way one finds the coefficient of the ${\sf P}^i_-$ term to be given by \begin{eqnarray} G^{{\mbf p}^-}(j,t,{\mbf p}^\pm)=\frac{{\mbf p}^-} {{\phi_{-\theta}(j)}} \ .
\label{Gmz}\end{eqnarray} Finally, the non-vanishing contributions to the central element ${\sf T}\in{\mathfrak n}$ are given by \begin{eqnarray} G^t(j,t,{\mbf p}^\pm)~{\sf T}&=& t~{\sf T}-{\,{\rm i}\,}\sum_{n=1}^\infty\frac{B_{n+1}}{n!}\, \left(\bigl[{\,{\rm i}\,} p_i^+\,{\sf P}^i_+\,,\,\bigl[~ \underbrace{{\,{\rm i}\,} j\,{\sf J}\,,\,\dots\bigl[{\,{\rm i}\,} j\,{\sf J}}_n\,,\,{\,{\rm i}\,} p_i^-\, {\sf P}^i_-\,\bigr]\dots\bigr]\,\bigr]\right.\nonumber\\ &&\qquad\qquad\qquad +\left.\bigl[{\,{\rm i}\,} p_i^-\,{\sf P}^i_-\,,\,\bigl[~ \underbrace{{\,{\rm i}\,} j\,{\sf J}\,,\,\dots\bigl[{\,{\rm i}\,} j\,{\sf J}}_n\,,\, {\,{\rm i}\,} p_i^+\,{\sf P}^i_+\,\bigr]\dots\bigr]\,\bigr]\right) \nonumber\\ &=&t~{\sf T}+4\theta\,{\mbf p}^+\cdot{\mbf p}^-\,\sum_{n=1}^\infty\frac{B_{n+1}} {n!}\,(-\theta\,j)^n~{\sf T} \ . \label{GtBn}\end{eqnarray} By differentiating (\ref{GomzBn}) and (\ref{phithetadef}) with respect to $s=-\theta\,j$ we arrive finally at \begin{eqnarray} G^t(j,t,{\mbf p}^\pm)=t+4\theta\,{\mbf p}^+\cdot{\mbf p}^-\, \gamma_\theta(j) \label{Gt}\end{eqnarray} where we have introduced the function \begin{eqnarray} \gamma_\theta(j)=\frac12+\frac{(1+\theta\,j)~{\,\rm e}\,^{-\theta\,j}-1} {\left({\,\rm e}\,^{-\theta\,j}-1\right)^2} \ . \label{gammathetadef}\end{eqnarray} {}From (\ref{Gdiffop}) we may now write down the explicit form of the differential operator implementing the equivalence between the star-products $*$ and $\star$ as \begin{eqnarray} \mathcal{G}_\Omega&=&\exp\left[-2{\,{\rm i}\,}\theta\,x^+\,\overline{{\mbf\partial}} \cdot{\mbf\partial}\left(1+\frac{2(1-{\,{\rm i}\,}\theta\,\partial_-)~ {\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}-1}{\left({\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}-1 \right)^2}\right)\right.\nonumber\\ &&\qquad\quad+\left. 
\overline{{\mbf z}}\cdot{\mbf\partial}\left(\frac{{\,{\rm i}\,}\theta\,\partial_-} {{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}-1}-1\right)-{\mbf z}\cdot\overline{{\mbf\partial}} \left(\frac{{\,{\rm i}\,}\theta\,\partial_-} {{\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-}-1}+1\right)\right] \ . \label{Gdiffopexpl}\end{eqnarray} {}From (\ref{TOgpprodexpl}) and (\ref{1ststudyexpl}) we may readily compute the products of Weyl symbols with the result \begin{eqnarray} &&\NO~~\NO\,{\,\rm e}\,^{{\,{\rm i}\,} k^a\,{\sf X}_a}\,\NO\cdot\NO\,{\,\rm e}\,^{{\,{\rm i}\,} k^{\prime\,a}\,{\sf X}_a} \,\NO~~\NO \nonumber\\ && {~~~~}_{~~}^{~~} \nonumber\\ &&~=~\exp{\,{\rm i}\,}\left\{\frac{\phi_\theta(j)\,p_i^++ {\,\rm e}\,^{-\theta\,j}\,\phi_\theta(j'\,)\,p_i^{\,\prime\,+}} {\phi_\theta(j+j'\,)}~{\sf P}^i_+ +\frac{{\phi_{-\theta}(j)}\,p_i^-+ {\,\rm e}\,^{\theta\,j}\,{\phi_{-\theta}(j'\,)}\,p_i^{\prime\,-}} {{\phi_{-\theta}(j+j'\,)}}~{\sf P}^i_-\right.\nonumber\\ &&\quad\qquad\qquad+\,(j+j'\,)~{\sf J}+\left[t+t'+\theta\, \bigl(\,{\phi_{-\theta}(j)\, \phi_{-\theta}(j'\,)}\,{\mbf p}^+\cdot{\mbf p}^{\prime\,-}-{\phi_{\theta}(j)\, \phi_{\theta}(j'\,)}\,{\mbf p}^-\cdot{\mbf p}^{\prime\,+}\bigr)\right. \nonumber\\ &&\quad\qquad\qquad-\,4\theta\,\left(\gamma_\theta(j+j'\,) \,\bigl({\phi_{-\theta}(j)}\, {\mbf p}^++{\,\rm e}\,^{\theta\,j}\,{\phi_{-\theta}(j'\,)}\,{\mbf p}^{\prime\,+}\,\bigr) \cdot\bigl(\,\phi_\theta(j)\, {\mbf p}^-+{\,\rm e}\,^{-\theta\,j}\,\phi_\theta(j'\,)\,{\mbf p}^{\prime\,-}\,\bigr) \right.\nonumber\\ &&\quad\qquad\qquad\quad\qquad-\biggl.\left.\left. \gamma_\theta(j)\,\phi_\theta(j)\,\phi_{-\theta}(j)\,{\mbf p}^+\cdot{\mbf p}^-- \gamma_\theta(j'\,)\,\phi_\theta(j'\,)\,\phi_{-\theta}(j'\,) \,{\mbf p}^{\prime\,+} \cdot{\mbf p}^{\prime\,-}\right)\right]~{\sf T}\biggr\} \ . 
\nonumber\\ && \label{Weylgpprodexpl}\end{eqnarray} {}From (\ref{xastarxb}) we may now compute the star-products $\star$ between the coordinate functions on ${\mathfrak n}^{\vee\,}$ to be \begin{eqnarray} x_a\star x_a&=&(x_a)^2 \ , \nonumber\\ x_a\star x^+&=&x^+\star x_a~=~x_a\,x^+ \ , \nonumber\\ z_1\star z_2&=&z_2\star z_1 {}~=~z_1\,z_2 \ ,\nonumber\\\overline{z}_1\star \overline{z}_2 &=&\overline{z}_2\star \overline{z}_1 {}~=~\overline{z}_1\,\overline{z}_2 \ , \nonumber\\ x^-\star z_i&=&x^-\,z_i-\mbox{$\frac\ii2$}\, \theta\,z_i \ , \nonumber\\ z_i\star x^-&=&x^-\,z_i+\mbox{$\frac\ii2$}\, \theta\,z_i \ , \nonumber\\x^-\star \overline{z}_i &=&x^-\,\overline{z}_i+\mbox{$\frac\ii2$}\, \theta\,\overline{z}_i \ , \nonumber\\\overline{z}_i\star x^- &=&x^-\,\overline{z}_i-\mbox{$\frac\ii2$}\, \theta\,\overline{z}_i \ , \nonumber\\ z_i\star \overline{z}_i&=&z_i\,\overline{z}_i-{\,{\rm i}\,}\theta\,x^+ \ , \nonumber\\ \overline{z}_i\star z_i &=&z_i\,\overline{z}_i+ {\,{\rm i}\,}\theta\,x^+ \ . \label{Weylsymcoordstarprods}\end{eqnarray} These products are identical to those of the symmetric time ordering prescription (\ref{TOsymcoordstarprods}).
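The special functions entering the Weyl product, and one entry of (\ref{Weylsymcoordstarprods}), can be verified symbolically. The SymPy sketch below is our own (hypothetical names): it checks the reflection identity for $\phi_\theta$, its product identity computed directly from the definition, the Bernoulli-series origin of $\gamma_\theta$ in (\ref{GtBn}) using $B_2=\frac16$, $B_4=-\frac1{30}$, $B_6=\frac1{42}$, and the mixed $j,p'^-$ derivative of the ${\sf P}^i_-$ exponent in (\ref{Weylgpprodexpl}) that produces $x^-\star z_i=x^-z_i-\frac\ii2\,\theta\,z_i$.

```python
# SymPy checks (our own) of phi_theta, gamma_theta, and one Weyl
# coordinate star-product recovered from (Weylgpprodexpl).
import sympy as sp

theta = sp.symbols('theta', positive=True)
j, jp = sp.symbols('j jprime')

def phi(t, s):
    return (1 - sp.exp(-t*s))/(t*s)

# (i) reflection identity  phi_theta(j) e^{theta j} = phi_{-theta}(j)
assert sp.simplify(phi(theta, j)*sp.exp(theta*j) - phi(-theta, j)) == 0

# (i') product identity, computed from the definition (hyperbolic cosine)
prod = phi(theta, j)*phi(-theta, j) + 2*(1 - sp.cosh(theta*j))/(theta*j)**2
assert sp.simplify(prod.rewrite(sp.exp)) == 0

# (ii) gamma_theta(j) resums sum_{n>=1} B_{n+1}/n! (-theta j)^n
gamma = sp.Rational(1, 2) + ((1 + theta*j)*sp.exp(-theta*j) - 1) \
        / (sp.exp(-theta*j) - 1)**2
s = -theta*j
pred = s/6 - s**3/180 + s**5/5040          # B2 s + (B4/3!) s^3 + (B6/5!) s^5
assert sp.simplify(sp.series(gamma, j, 0, 6).removeO() - pred) == 0

# (iii) x^- star z_i: mixed derivative of the P^i_- exponent of
#       (Weylgpprodexpl) at zero momenta gives the half-shift theta/2
F = sp.exp(theta*j)*phi(-theta, jp)/phi(-theta, j + jp)
cross = sp.limit(sp.limit(sp.diff(F, j), j, 0), jp, 0)
assert sp.simplify(cross - theta/2) == 0
```

The half-shift in (iii) agrees with the symmetric time ordering, as noted above, even though the full bidifferential operators of the two products differ at second order.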
After some computation, from (\ref{NOproductsBCH},\ref{fstargbidiff}) we find for generic functions $f,g\in{\rm C}^\infty({\mathfrak n}^{\vee\,})$ the formula \begin{eqnarray} f\star g&=&\mu\circ\exp\left\{\theta\,x^+\,\left[~~ \frac{1\otimes1+\bigl({\,{\rm i}\,} \theta\,(\partial_-\otimes1+1\otimes\partial_-)-1\otimes1 \bigr)~{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}\otimes{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}} {\left({\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}\otimes{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}-1\otimes1 \right)^2}\right.\right.\nonumber\\ &&\times\, \left(\frac{4{\mbf\partial}^\top\left({\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-}-1 \right)}{\theta\,\partial_-}\otimes \frac{\overline{{\mbf\partial}}\left({\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-}-1 \right)}{\theta\,\partial_-}-\frac{3\overline{{\mbf\partial}}{}^{\,\top} \left({\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}-1 \right)}{\theta\,\partial_-}\otimes \frac{{\mbf\partial}\left({\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}-1 \right)}{\theta\,\partial_-}\right.\nonumber\\ &&+\left. 
\frac{4\overline{{\mbf\partial}}\cdot{\mbf\partial}\sin^2\left(\frac\theta2 \,\partial_-\right)}{\theta^2\,\partial_-^2}\otimes1-1\otimes \frac{4\overline{{\mbf\partial}}\cdot{\mbf\partial}\sin^2\left(\frac\theta2 \,\partial_-\right)}{\theta^2\,\partial_-^2}\right) \nonumber\\ &&+\, \frac{4{\,{\rm i}\,}\overline{{\mbf\partial}}\cdot{\mbf\partial}}{\theta\,\partial_-} \left(\frac{{\,{\rm i}\,}\sin^2\left(\frac\theta2 \,\partial_-\right)}{\theta\,\partial_-\left( {\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}-1\right)}-1\right)\otimes1+1\otimes \frac{4{\,{\rm i}\,}\overline{{\mbf\partial}}\cdot{\mbf\partial}}{\theta\,\partial_-} \left(\frac{{\,{\rm i}\,}\sin^2\left(\frac\theta2 \,\partial_-\right)}{\theta\,\partial_-\left( {\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}-1\right)}-1\right)\nonumber\\ && +\left.\frac{3\overline{{\mbf\partial}}{}^{\,\top}\left({\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}-1 \right)}{\theta\,\partial_-}\otimes\frac{ {\mbf\partial}\left({\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}-1 \right)}{\theta\,\partial_-}+\frac{{\mbf\partial}^\top \left({\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-}-1 \right)}{\theta\,\partial_-}\otimes\frac{\overline{{\mbf\partial}} \left({\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-}-1\right)}{\theta\,\partial_-}~~\right] \nonumber\\&& +\,\frac{\overline{z}_i}{1\otimes{\,\rm e}\,^{-{\,{\rm i}\,}\theta\, \partial_-}-{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}\otimes1}\,\left[~~ \frac{\partial^i}{\partial_-}\,\left(1-{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-} \right)\otimes\partial_--\partial_-\otimes\frac{\partial^i} {\partial_-}\,\left(1-{\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-}\right)\right.\nonumber\\ && +\Biggl.\,1\otimes\partial^i~ {\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-}-\partial^i~{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-} \otimes1-1\otimes2\partial^i~~\Biggr]\nonumber\\ && +\,\frac{z_i}{1\otimes{\,\rm e}\,^{{\,{\rm i}\,}\theta\, \partial_-}-{\,\rm e}\,^{-{\,{\rm 
i}\,}\theta\,\partial_-}\otimes1}\,\left[~~ \frac{\overline{\partial}{}^{\,i}}{\partial_-}\,\left(1-{\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-} \right)\otimes\partial_--\partial_-\otimes\frac{\overline{\partial}{}^{\,i}} {\partial_-}\,\left(1-{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}\right)\right.\nonumber\\ && +\left.\Biggl.1\otimes\overline{\partial}{}^{\,i}~ {\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}-\overline{\partial}{}^{\,i}~{\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-} \otimes1-1\otimes2\overline{\partial}{}^{\,i}~~\Biggr]\right\}f\otimes g \ . \label{Weylstargen}\end{eqnarray} To second order in the deformation parameter $\theta$ we obtain \begin{eqnarray} \label{eq:weyl:positionspace} f\star g&=&f\,g-\mbox{$\frac{{\,{\rm i}\,}}2$}\,\theta\,\left[ 2x^+\,\left(\,\overline{{\mbf\partial}}f\cdot{\mbf\partial} g - {\mbf\partial} f\cdot\overline{{\mbf\partial}}g\right)\right.\\\nonumber &&\qquad\qquad\qquad+\left.\overline{{\mbf z}}\cdot\left({\mbf\partial} f\,\partial_- g - \partial_- f\,{\mbf\partial} g\right)+ {\mbf z}\cdot\left(\partial_- f\,\overline{{\mbf\partial}}g - \overline{{\mbf\partial}}f\,\partial_- g\right)\right]\\ \nonumber &&-\,\mbox{$\frac1{2}$}\,\theta^2\,\mbox{$\sum\limits_{i=1,2}$}\,\left[ \left(x^+\right)^2\, \left((\,\overline{\partial}{}^{\,i})^2f\,(\partial^i)^2g +(\partial^i)^2f\,(\,\overline{\partial}{}^{\,i})^2g -2\overline{\partial}{}^{\,i}\partial^if\,\overline{\partial}{}^{\,i}\partial^ig \right)\right.\\ \nonumber &&\qquad\qquad\quad\quad -\,\mbox{$\frac13$}\,x^+\,\left(\partial^if\,\overline{\partial}{}^{\,i}\partial_- g +\overline{\partial}{}^{\,i}f\,\partial^i\partial_- g +\overline{\partial}{}^{\,i}\partial_- f\,\partial^ig +\partial^i\partial_- f\,\overline{\partial}{}^{\,i}g\right.\\ \nonumber &&\qquad\qquad\qquad\qquad\qquad\quad\quad -\left.2\partial_-f\,\overline{\partial}{}^{\,i} \partial^ig-2\overline{\partial}{}^{\,i}\partial^if\, \partial_-g\right)\\\nonumber&&\qquad\qquad\quad\quad 
+\,x^+\,\overline{z}_i\,\left(\,\overline{\partial}{}^{\,i} \partial^if\,\partial^i\partial_- g -\overline{\partial}{}^{\,i}\partial_- f\,(\partial^i)^2g +\partial^i\partial_- f\,\overline{\partial}{}^{\,i}\partial^ig -(\partial^i)^2f\,\overline{\partial}{}^{\,i}\partial_- g\right) \\ \nonumber &&\qquad\qquad\quad\quad +\,x^+\,z_i\,\left(\,\overline{\partial}{}^{\,i}\partial^if\,\overline{\partial}{}^{\,i}\partial_- g -\partial^i\partial_- f\,(\,\overline{\partial}{}^{\,i})^2g +\overline{\partial}{}^{\,i}\partial_- f\,\overline{\partial}{}^{\,i}\partial^ig -(\,\overline{\partial}{}^{\,i})^2f\,\partial^i\partial_- g\right)\\ \nonumber &&\qquad\qquad\quad\quad +\,\mbox{$\frac12$}\,\overline{z}_i\,z_i\,\left(\, \overline{\partial}{}^{\,i}\partial_- f\,\partial^i\partial_- g +\partial^i\partial_- f\,\overline{\partial}{}^{\,i}\partial_- g -\partial_- ^2f\,\overline{\partial}{}^{\,i}\partial^ig -\overline{\partial}{}^{\,i}\partial^if\,\partial_-^2g\right) \\ \nonumber &&\qquad\qquad\quad\quad +\,\mbox{$\frac14$}\,\overline{z}_i^{\,2}\, \left((\partial^i)^2f\,\partial_-^2g -2\partial^i\partial_- f\,\partial^i\partial_- g +\partial_-^2f\,(\partial^i)^2g\right)\\ \nonumber &&\qquad\qquad\quad\quad +\,\mbox{$\frac14$}\,z_i^2\, \left((\,\overline{\partial}{}^{\,i})^2f\,\partial_-^2g -2\overline{\partial}{}^{\,i}\partial_- f\,\overline{\partial}{}^{\,i}\partial_- g +\partial_-^2f\,(\,\overline{\partial}{}^{\,i})^2g\right) \\ \nonumber &&\qquad\qquad\quad\quad +\,\mbox{$\frac16$}\,\overline{z}_i\,\left(\partial^if\,\partial_-^2g +\partial_-^2f\,\partial^ig-\partial_-f\,\partial^i \partial_-g-\partial^i\partial_-f\,\partial_-g\right) \\&&\qquad\qquad\quad\quad +\Bigl.\mbox{$\frac16$}\,z_i\,\left(\partial_-^2f\,\overline{\partial}{}^{\,i}g +\overline{\partial}{}^{\,i}f\,\partial_-^2g-\partial_-f\, \overline{\partial}{}^{\,i}\partial_-g- \overline{\partial}{}^{\,i}\partial_-f\,\partial_-g\right) \Bigr]+O\left(\theta^3\right) \ .
\nonumber\\ && \end{eqnarray} Although extremely cumbersome in form, the Weyl-ordered product has several desirable features over the simpler time-ordered products. For instance, the Schwartz subspace of ${\rm C}^\infty({\mathfrak n}^{\vee\,})$ is closed under the Weyl ordered product, whereas the other products are only formal in this regard and do not define strict deformation quantizations. It is also hermitean owing to the property \begin{eqnarray} \overline{f\star g}=\overline{g}\star\overline{f} \ . \label{Weylstarherm}\end{eqnarray} Moreover, while the ${\mathfrak n}$-covariance condition (\ref{xyPoisson}) holds for all of our star-products, the Weyl product is in fact ${\mathfrak n}$-invariant, because for any $x\in\complex_{(1)}({\mathfrak n}^{\vee\,})$ one has the stronger compatibility condition \begin{eqnarray} [x,f]_\star={\,{\rm i}\,}\theta\,\Theta(x,f) ~~~~ \forall f\in{\rm C}^\infty({\mathfrak n}^{\vee\,}) \label{strongcompcond}\end{eqnarray} with the action of the Lie algebra ${\mathfrak n}$. In the next section we shall see that the Weyl-ordered star-product is, in a certain sense, the generator of all other star-products making it the ``universal'' product for the quantization of the spacetime $\NW_6$. \setcounter{equation}{0}\section{Weyl Systems\label{WeylSystems}} In this section we will use the notion of a generalized Weyl system introduced in~\cite{ALZ1} to describe some more formal aspects of the star-products that we have constructed and to analyse the interplay between them. This generalizes the standard Weyl systems~\cite{Sz1} which may be used to provide a purely operator theoretic characterization of the Moyal product, associated to the (untwisted) Heisenberg algebra. In that case, it can be regarded as a projective representation of the translation group in an even-dimensional real vector space. 
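For orientation, the standard Weyl system underlying the Moyal product can be exhibited concretely. The sketch below is our own toy $3\times3$ realization (hypothetical names): with $[{\sf X}_1,{\sf X}_2]$ proportional to ${\,{\rm i}\,}\theta$ times a central element, the composition of the unitaries $\weyl({\mbf k})={\,\rm e}\,^{{\,{\rm i}\,}(k_1{\sf X}_1+k_2{\sf X}_2)}$ produces exactly the symplectic-area phase, as an exact matrix identity.

```python
# Toy realization (our own) of a standard Weyl system for the 2d Moyal plane:
# W(k) W(q) = e^{(i/2) omega(k,q) T} W(k+q),  omega(k,q) = theta (k1 q2 - k2 q1),
# with the central element T realized by the matrix unit E_13.
import numpy as np
from scipy.linalg import expm

def E(a, b):
    M = np.zeros((3, 3), dtype=complex)
    M[a, b] = 1.0
    return M

theta = 0.3
iX1, iX2, T_c = E(0, 1), 1j*theta*E(1, 2), E(0, 2)   # [iX1, iX2] = i*theta*T_c

def W(k):
    return expm(k[0]*iX1 + k[1]*iX2)

def omega(k, q):
    return theta*(k[0]*q[1] - k[1]*q[0])

k, q = np.array([0.7, -0.2]), np.array([0.4, 1.1])
lhs = W(k) @ W(q)
rhs = expm(0.5j*omega(k, q)*T_c) @ W(k + q)
assert np.allclose(lhs, rhs)
```

Here the composition law on momenta is ordinary addition and $\omega$ is the Darboux symplectic form; the generalization below deforms precisely these two ingredients.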
However, for the twisted Heisenberg algebra such a representation is not possible, since by definition the appropriate arena should be a central extension of the non-abelian subgroup $\mathcal S$ of the full euclidean group ${\rm ISO}(4)$. This requires a generalization of the standard notion which we will now describe and use to obtain a very useful characterization of the noncommutative geometry induced by the algebra ${\mathfrak n}$. Let $\mathbb{V}$ be a five-dimensional real vector space. In a suitable (canonical) basis, vectors ${\mbf k}\in\mathbb{V}\cong\real\times\complex^2$ will be denoted (with respect to a chosen complex structure) as \begin{eqnarray} {\mbf k}=\begin{pmatrix}j\\{\mbf p}^+\,\\{\mbf p}^-\end{pmatrix} \label{Svectors}\end{eqnarray} with $j\in\real$ and ${\mbf p}^\pm=\overline{{\mbf p}^\mp}\in\complex^2$. As the notation suggests, we regard $\mathbb{V}$ as the ``momentum space'' of the dual ${\mathfrak n}^{\vee\,}$. Note that we do not explicitly incorporate the component corresponding to the central element ${\sf T}$, as it will instead appear through the appropriate projective representation that we will construct, similarly to the Moyal case. As an abelian group, $\mathbb{V}\cong\real^5$ with the usual addition $+$ and identity $\mbf0$. Corresponding to a deformation parameter $\theta\in\real$, we deform this abelian Lie group structure to a generically non-abelian one. The deformed composition law is denoted $\comp$. It is associative and in general will depend on $\theta$. The identity element with respect to $\comp$ is still defined to be $\mbf0$, and the inverse of any element ${\mbf k}\in\mathbb{V}$ is denoted $\underline{{\mbf k}}$, so that \begin{eqnarray} {\mbf k}\comp\underline{{\mbf k}}=\underline{{\mbf k}}\comp{\mbf k}=\mbf0 \ .
\label{compinverse}\end{eqnarray} Being a deformation of the underlying abelian group structure on $\mathbb{V}$ means that the composition of any two vectors ${\mbf k},{\mbf q}\in\mathbb{V}$ has a formal small $\theta$ expansion of the form \begin{eqnarray} {\mbf k}\comp{\mbf q}={\mbf k}+{\mbf q}+O(\theta) \ , \label{compsmalltheta}\end{eqnarray} from which it follows that \begin{eqnarray} \underline{{\mbf k}}=-{\mbf k}+O(\theta) \ . \label{compinvsmalltheta}\end{eqnarray} In other words, rather than introducing star-products that deform the pointwise multiplication of functions on ${\mathfrak n}^{\vee\,}$, we now deform the ``momentum space'' of ${\mathfrak n}^{\vee\,}$ to a non-abelian Lie group. We will see below that the five-dimensional group $(\mathbb{V},\comp)$ is isomorphic to the original subgroup $\mathcal{S}\subset{\rm ISO}(4)$, and that the two notions of quantization are in fact the same. Given such a group, we now define a (generalized) Weyl system for the algebra ${\mathfrak n}$ as a quadruple $(\mathbb{V},\comp,\weyl,\omega)$, where the map \begin{eqnarray} \weyl\,:\,\mathbb{V}~\longrightarrow~\overline{U({\mathfrak n})^\complex} \label{genweylmapdef}\end{eqnarray} is a projective representation of the group $(\mathbb{V},\comp)$ with projective phase $\omega:\mathbb{V}\times\mathbb{V}\to\complex$. This means that for every pair of elements ${\mbf k},{\mbf q}\in\mathbb{V}$ one has the composition rule \begin{eqnarray} \weyl({\mbf k})\cdot\weyl({\mbf q})={\,\rm e}\,^{\frac\ii2\,\omega({\mbf k},{\mbf q})\,{\sf T}}~\cdot~ \weyl({\mbf k}\comp{\mbf q}) \label{Weylcomprule}\end{eqnarray} in the completed, complexified universal enveloping algebra of ${\mathfrak n}$. 
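In the untwisted Heisenberg case the composition rule (\ref{Weylcomprule}) can be made completely concrete in a finite-dimensional (non-unitary) matrix realization. The following sketch is our own illustration, with the factors of ${\,{\rm i}\,}$ absorbed into the generators: it verifies the projective phase against the Darboux symplectic form, using strictly upper-triangular $3\times3$ matrices so that the Baker-Campbell-Hausdorff series truncates exactly.

```python
import numpy as np

# Heisenberg algebra in its nilpotent 3x3 representation: [X, Y] = T,
# with T central and all higher brackets vanishing.
X = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
Y = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
T = X @ Y - Y @ X                      # = E_13, central

def expm3(A):
    """Exact exponential for strictly upper-triangular 3x3 A (A^3 = 0)."""
    return np.eye(3) + A + A @ A / 2.0

def W(k):
    """Analogue of the Weyl map: the group element exp(k_1 X + k_2 Y)."""
    return expm3(k[0] * X + k[1] * Y)

def omega(k, q):
    """Darboux symplectic form on R^2."""
    return k[0] * q[1] - k[1] * q[0]

rng = np.random.default_rng(0)
k, q = rng.normal(size=2), rng.normal(size=2)

# Projective composition rule W(k) W(q) = e^{omega(k,q) T / 2} W(k + q);
# exact here because the algebra is two-step nilpotent.
lhs = W(k) @ W(q)
rhs = expm3(0.5 * omega(k, q) * T) @ W(k + q)
assert np.allclose(lhs, rhs)
```

For the twisted algebra ${\mathfrak n}$ the addition $+$ is replaced by the deformed law $\comp$ and the Darboux form by the corresponding cocycle, as described next.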
The associativity of $\comp$ and the relation (\ref{Weylcomprule}) imply that the subalgebra $\weyl(\mathbb{V})\subset\overline{U({\mathfrak n})^\complex}$ is associative if and only if \begin{eqnarray} \omega({\mbf k}\comp\mbf p,{\mbf q})=\omega({\mbf k},\mbf p\comp{\mbf q})+\omega(\mbf p,{\mbf q})- \omega({\mbf k},\mbf p) \label{cocyclecond}\end{eqnarray} for all vectors ${\mbf k},{\mbf q},\mbf p\in{\mathbb V}$. This condition means that $\omega$ defines a one-cocycle in the group cohomology of $(\mathbb{V},\comp)$. It is automatically satisfied if $\omega$ is a bilinear form with respect to $\comp$. We will in addition require that $\omega({\mbf k},{\mbf q})=O(\theta)~~\forall {\mbf k},{\mbf q}\in\mathbb{V}$ for consistency with (\ref{compsmalltheta}). The identity element of $\weyl(\mathbb{V})$ is $\weyl(\mbf0)$ while the inverse of $\weyl({\mbf k})$ is given by \begin{eqnarray} \weyl({\mbf k})^{-1}=\weyl(\,\underline{{\mbf k}}\,) \ . \label{weylinverse}\end{eqnarray} The standard Weyl system on $\real^{2n}$ takes $\comp$ to be ordinary addition and $\omega$ to be the Darboux symplectic two-form, so that $\weyl(\real^{2n})$ is a projective representation of the translation group, as is appropriate to the Moyal product. Given a Weyl system defined as above, we can now introduce another isomorphism \begin{eqnarray} \Pi\,:\,{\rm C}^\infty\left(\real^5\right)~\longrightarrow~ \weyl\bigl(\mathbb{V}\bigr) \label{Omegaquantmap}\end{eqnarray} defined by the symbol \begin{eqnarray} \Pi(f):=\int\limits_{\real^5}\frac{{\rm d}{\mbf k}}{(2\pi)^5}~ \tilde{f}({\mbf k})~\weyl({\mbf k}) \label{Omegafdef}\end{eqnarray} where as before $\tilde f$ denotes the Fourier transform of $f\in{\rm C}^\infty(\real^5)$. 
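Since the cocycles of the twisted Weyl systems constructed below are not bilinear in the ordinary sense, (\ref{cocyclecond}) is a non-trivial constraint on the pair $(\comp,\omega)$. As a sketch of ours, reduced to one transverse dimension to keep the symbols light, the condition can be verified symbolically for the time-ordered data (\ref{TOcomplaw}) and (\ref{TOcocycle}) of Section~\ref{TOPGWS}:

```python
import sympy as sp

theta = sp.symbols('theta', real=True)

def vec(label):
    """One transverse dimension: k = (j, p_plus, p_minus)."""
    return sp.symbols(f'j_{label} pp_{label} pm_{label}')

def comp(k, q):
    """Time-ordered composition law, eq. (TOcomplaw)."""
    return (k[0] + q[0],
            k[1] + sp.exp(-theta * k[0]) * q[1],
            k[2] + sp.exp(theta * k[0]) * q[2])

def omega(k, q):
    """Time-ordered group cocycle, eq. (TOcocycle)."""
    return 2 * sp.I * theta * (sp.exp(theta * k[0]) * k[1] * q[2]
                               - sp.exp(-theta * k[0]) * k[2] * q[1])

k, p, q = vec('1'), vec('2'), vec('3')

# Cocycle condition guaranteeing associativity of the Weyl system:
lhs = omega(comp(k, p), q)
rhs = omega(k, comp(p, q)) + omega(p, q) - omega(k, p)
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```

The same check applies verbatim to the symmetric time-ordered and Weyl-ordered data constructed below.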
This definition implies that \begin{eqnarray} \Pi\bigl({\,\rm e}\,^{{\,{\rm i}\,}{\mbf k}\cdot{\mbf x}}\bigr)=\weyl({\mbf k}) \ , \label{weylmkDelta}\end{eqnarray} and that we may introduce a $*$-involution $\dag$ on both algebras ${\rm C}^\infty(\real^5)$ and $\weyl({\mathbb V})$ by the formula \begin{eqnarray} \Pi\bigl(f^\dag\bigr)=\Pi\bigl(f\bigr)^\dag:= \int\limits_{\real^5}\frac{{\rm d}{\mbf k}}{(2\pi)^5}~\overline{ \tilde f(\,\underline{{\mbf k}}\,)}~\weyl({\mbf k}) \ . \label{involdef}\end{eqnarray} The compatibility condition \begin{eqnarray} \bigl(\Pi(f)\cdot\Pi(g)\bigr)^\dag=\Pi(g)^\dag\cdot \Pi(f)^\dag \label{compconddag}\end{eqnarray} with the product in $\overline{U({\mathfrak n})^\complex}$ imposes further constraints on the group composition law $\comp$ and cocycle $\omega$~\cite{ALZ1}. From (\ref{Weylcomprule}) we may thereby define a $\dag$-hermitean star-product of $f,g\in{\rm C}^\infty(\real^5)$ by the formula \begin{eqnarray} f\star g:=\Pi^{-1}\bigl(\Pi(f)\cdot\Pi(g)\bigr) =\int\limits_{\real^5}\frac{{\rm d}{\mbf k}}{(2\pi)^5}~ \int\limits_{\real^5}\frac{{\rm d}{\mbf q}}{(2\pi)^5}~\tilde f({\mbf k})\, \tilde g({\mbf q})~{\,\rm e}\,^{\frac\ii2\,\omega({\mbf k},{\mbf q})}~ \Pi^{-1}\circ\weyl({\mbf k}\comp{\mbf q}) \ , \label{fstargweyl}\end{eqnarray} and in this way we have constructed a quantization of the algebra ${\mathfrak n}$ solely from the formal notion of a Weyl system. The associativity of $\star$ follows from associativity of $\comp$. We may also rewrite the star-product (\ref{fstargweyl}) in terms of a bi-differential operator as \begin{eqnarray} f\star g=f~{\,\rm e}\,^{\frac\ii2\,\omega(\,-{\,{\rm i}\,}\overleftarrow{\mbf\partial}\,,\, -{\,{\rm i}\,}\overrightarrow{\mbf\partial}\,)+{\,{\rm i}\,}{\mbf x}\cdot(-{\,{\rm i}\,}\overleftarrow{\mbf\partial}\,\comp -{\,{\rm i}\,}\overrightarrow{\mbf\partial}+{\,{\rm i}\,}\overleftarrow{\mbf\partial}+{\,{\rm i}\,} \overrightarrow{\mbf\partial}\,)}~g \ . 
\label{bidiffweyl}\end{eqnarray} This deformation is completely characterized in terms of the new algebraic structure and its projective representation provided by the Weyl system. It is straightforward to show that the Lie algebra of $(\mathbb{V},\comp)$ coincides precisely with the original subalgebra ${\mathfrak s}\subset{\rm iso}(4)$, while the cocycle $\omega$ generates the central extension of ${\mathfrak s}$ to ${\mathfrak n}$ in the usual way. From (\ref{fstargweyl}) one may compute the star-products of coordinate functions on $\real^5$ as \begin{eqnarray} x_a\star x_b=x_a\,x_b-{\,{\rm i}\,}{\mbf x}\cdot\left.\frac\partial{\partial k^a}\frac\partial {\partial q^b}({\mbf k}\comp{\mbf q})\right|_{{\mbf k}={\mbf q}=\mbf0}-\left.\frac\ii2\, \frac\partial{\partial k^a}\frac\partial {\partial q^b}\omega({\mbf k},{\mbf q})\right|_{{\mbf k}={\mbf q}=\mbf0} \ . \label{xastarxbweyl}\end{eqnarray} The corresponding star-commutator may thereby be written as \begin{eqnarray} [x_a,x_b]_\star={\,{\rm i}\,}\theta\,C_{ab}^{~~c}\,x_c+{\,{\rm i}\,}\theta\,\xi_{ab} \ , \label{xaxbstarcommxi}\end{eqnarray} where the relation \begin{eqnarray} \theta\,C_{ab}^{~~c}=-\left.\left(\frac\partial{\partial k^a}\frac\partial {\partial q^b}-\frac\partial{\partial k^b}\frac\partial {\partial q^a}\right)({\mbf k}\comp{\mbf q})^c\right|_{{\mbf k}={\mbf q}=\mbf0} \label{Cabccomp}\end{eqnarray} gives the structure constants of the Lie algebra defined by the Lie group $(\mathbb{V},\comp)$, while the cocycle term \begin{eqnarray} \theta\,\xi_{ab}=-\frac12\,\left.\left(\frac\partial{\partial k^a}\frac\partial {\partial q^b}-\frac\partial{\partial k^b}\frac\partial {\partial q^a}\right)\omega({\mbf k},{\mbf q})\right|_{{\mbf k}={\mbf q}=\mbf0} \label{cocycleterm}\end{eqnarray} gives the usual form of a central extension of this Lie algebra. 
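As an illustration of ours, the extraction formulas (\ref{Cabccomp}) and (\ref{cocycleterm}) can be run symbolically on the time-ordered composition law (\ref{TOcomplaw}) and cocycle (\ref{TOcocycle}) of the next subsection, reduced to one transverse dimension; the factor of $t$ multiplying the central extension is carried by the projective phase and is not tracked here.

```python
import sympy as sp

theta = sp.symbols('theta', real=True, positive=True)
k = sp.symbols('j1 pp1 pm1')      # k = (j, p+, p-), one transverse dimension
q = sp.symbols('j2 pp2 pm2')

# Time-ordered composition law, eq. (TOcomplaw):
comp = (k[0] + q[0],
        k[1] + sp.exp(-theta * k[0]) * q[1],
        k[2] + sp.exp(theta * k[0]) * q[2])

# Time-ordered cocycle, eq. (TOcocycle):
omega = 2 * sp.I * theta * (sp.exp(theta * k[0]) * k[1] * q[2]
                            - sp.exp(-theta * k[0]) * k[2] * q[1])

zero = {s: 0 for s in k + q}

def mixed(expr, a, b):
    """Mixed derivative d/dk^a d/dq^b evaluated at k = q = 0."""
    return sp.diff(expr, k[a], q[b]).subs(zero)

# Structure constants from eq. (Cabccomp):
C = lambda a, b, c: -(mixed(comp[c], a, b) - mixed(comp[c], b, a)) / theta
assert C(0, 1, 1) == 1     # [x^-, p+] sector
assert C(0, 2, 2) == -1    # [x^-, p-] sector
assert C(1, 2, 0) == 0     # no central term in the bracket itself

# Central extension from eq. (cocycleterm): purely transverse and antisymmetric.
xi = lambda a, b: -sp.Rational(1, 2) * (mixed(omega, a, b)
                                        - mixed(omega, b, a)) / theta
assert xi(2, 1) == -xi(1, 2) == 2 * sp.I
assert xi(0, 1) == 0 and xi(0, 2) == 0
```

The only non-vanishing components of $\xi_{ab}$ indeed lie in the ${\mbf p}^\pm$ sector, as required below.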
Demanding that this yield a deformation quantization of the Kirillov-Kostant Poisson structure on ${\mathfrak n}^{\vee\,}$ requires that $C_{ab}^{~~c}$ coincide with the structure constants of the subalgebra ${\mathfrak s}\subset{\rm iso}(4)$ of ${\mathfrak n}$, and also that $\xi_{{\mbf p}^-,{\mbf p}^+}=-\xi_{{\mbf p}^+,{\mbf p}^-}=2t$ be the only non-vanishing components of the central extension. It is thus possible to define a broad class of deformation quantizations of ${\mathfrak n}^{\vee\,}$ solely in terms of an abstract Weyl system $({\mathbb V},\comp,\weyl,\omega)$, without explicit realization of the operators $\weyl({\mbf k})$. In the remainder of this section we will set $\Pi=\Omega$ above and describe the Weyl systems underpinning the various products that we constructed previously. This entails identifying the appropriate maps (\ref{genweylmapdef}), which enables the calculation of the projective representations (\ref{Weylcomprule}) and hence explicit realizations of the group composition laws $\comp$ in the various instances. This unveils a purely algebraic description of the star-products which will be particularly useful for our later constructions, and enables one to make the equivalences between these products explicit. \subsection{Time Ordering\label{TOPGWS}} Setting $t=t'=0$ in (\ref{TOgpprodexpl}), we find the ``time-ordered'' non-abelian group composition law $\compa$ for any two elements of the form (\ref{Svectors}) to be given by \begin{eqnarray} {\mbf k}\compa{\mbf k}'=\begin{pmatrix}j+j'\,\\{\mbf p}^++{\,\rm e}\,^{-\theta\,j}\, {\mbf p}^{\prime\,+}\,\\{\mbf p}^-+{\,\rm e}\,^{\theta\,j}\,{\mbf p}^{\prime\,-} \,\end{pmatrix} \ . 
\label{TOcomplaw}\end{eqnarray} {}From (\ref{TOcomplaw}) it is straightforward to compute the inverse $\underline{{\mbf k}}$ of a group element (\ref{Svectors}), satisfying (\ref{compinverse}), to be \begin{eqnarray} \underline{{\mbf k}}=-\begin{pmatrix}j\\{\,\rm e}\,^{\theta\,j}\,{\mbf p}^+\\ {\,\rm e}\,^{-\theta\,j}\,{\mbf p}^-\,\end{pmatrix} \ . \label{TOinverse}\end{eqnarray} The group cocycle is given by \begin{eqnarray} \omega_*({\mbf k},{\mbf k}'\,)=2{\,{\rm i}\,}\theta\,\left({\,\rm e}\,^{\theta\,j}\, {\mbf p}^+\cdot{\mbf p}^{\prime\,-}-{\,\rm e}\,^{-\theta\,j}\, {\mbf p}^-\cdot{\mbf p}^{\prime\,+}\right) \label{TOcocycle}\end{eqnarray} and it defines the canonical symplectic structure on the $j={\rm constant}$ subspaces $\complex^2\subset\mathbb{V}$. Note that in this representation, the central coordinate function $x^+$ is not written explicitly and is simply understood as the unit element of $\complex(\real^5)$, as is conventional in the case of the Moyal product. For ${\mbf k}\in\mathbb{V}$ and ${\sf X}_a\in{\mathfrak s}$ the projective representation (\ref{Weylcomprule}) is generated by the time-ordered group elements \begin{eqnarray} \weyl_*({\mbf k})=\NOa\,{\,\rm e}\,^{{\,{\rm i}\,} k^a\,{\sf X}_a}\,\NOa \label{TOweylop}\end{eqnarray} defined in (\ref{eq:time:defn}). \subsection{Symmetric Time Ordering\label{TSOPGWS}} In a completely analogous manner, inspection of (\ref{TOsymgpprodexpl}) reveals the ``symmetric time-ordered'' non-abelian group composition law $\compb$ defined by \begin{eqnarray} {\mbf k}\compb{\mbf k}'=\begin{pmatrix}j+j'\,\\{\,\rm e}\,^{\frac{\theta}2\,j'}\,{\mbf p}^++ {\,\rm e}\,^{-\frac{\theta}2\,j}\,{\mbf p}^{\prime\,+}\,\\{\,\rm e}\,^{-\frac{\theta}2\,j'}\, {\mbf p}^-+{\,\rm e}\,^{\frac{\theta}2\,j}\,{\mbf p}^{\prime\,-}\,\end{pmatrix} \ , \label{TOsymcomplaw}\end{eqnarray} for which the inverse $\underline{{\mbf k}}$ of a group element (\ref{Svectors}) is simply given by \begin{eqnarray} \underline{{\mbf k}}=-{\mbf k} \ . 
\label{TOsyminverse}\end{eqnarray} The group cocycle is \begin{eqnarray} \omega_\bullet({\mbf k},{\mbf k}'\,)=2{\,{\rm i}\,}\theta\,\left({\,\rm e}\,^{\frac{\theta}2\, (j+j'\,)}\,{\mbf p}^+\cdot{\mbf p}^{\prime\,-}-{\,\rm e}\,^{-\frac{\theta}2\, (j+j'\,)}\,{\mbf p}^-\cdot{\mbf p}^{\prime\,+}\right) \label{TOsymcocycle}\end{eqnarray} and it again induces the canonical symplectic structure on $\complex^2\subset{\mathbb V}$. The corresponding projective representation of $({\mathbb V},\compb)$ is generated by the symmetric time-ordered group elements \begin{eqnarray} \weyl_\bullet({\mbf k})=\NOb\,{\,\rm e}\,^{{\,{\rm i}\,} k^a\,{\sf X}_a}\,\NOb \label{TOsymweylop}\end{eqnarray} defined in (\ref{TOsymgpprods}). \subsection{Weyl Ordering\label{WOPGWS}} Finally, we construct the Weyl system $({\mathbb V},\compc,\weyl_\star,\omega_\star)$ associated with the Weyl-ordered star-product of Section~\ref{WOP}. Starting from (\ref{Weylgpprodexpl}) we introduce the non-abelian group composition law $\compc$ by \begin{eqnarray} {\mbf k}\compc{\mbf k}'=\begin{pmatrix}j+j'\,\\[2mm] \frac{\phi_\theta(j)\,{\mbf p}^++ {\,\rm e}\,^{-\theta\,j}\,\phi_\theta(j'\,)\,{\mbf p}^{\prime\,+}} {\phi_\theta(j+j'\,)}\\[3mm] \frac{\phi_{-\theta}(j)\,{\mbf p}^-+ {\,\rm e}\,^{\theta\,j}\,\phi_{-\theta}(j'\,)\,{\mbf p}^{\prime\,-}} {\phi_{-\theta}(j+j'\,)}\end{pmatrix} \ , \label{Weylcomplaw}\end{eqnarray} from which we may again straightforwardly compute the inverse $\underline{{\mbf k}}$ of a group element (\ref{Svectors}) simply as \begin{eqnarray} \underline{{\mbf k}}=-{\mbf k} \ . \label{Weylinverse}\end{eqnarray} When combined with the definition (\ref{involdef}), one has $f^\dag=\overline{f}~~\forall f\in{\rm C}^\infty(\real^5)$ and this explains the hermitean property (\ref{Weylstarherm}) of the Weyl-ordered star-product $\star$. 
This is also true of the product $\bullet$, whereas $*$ is only hermitean with respect to the modified involution $\dag$ defined by (\ref{involdef}) and~(\ref{TOinverse}). The group cocycle is given by \begin{eqnarray} \omega_\star({\mbf k},{\mbf k}'\,)&=&-2{\,{\rm i}\,}\theta\,\Bigl( \phi_{-\theta}(j)\,\phi_{-\theta}(j'\,)\, {\mbf p}^+\cdot{\mbf p}^{\prime\,-}-\phi_{\theta}(j)\,\phi_{\theta}(j'\,)\, {\mbf p}^-\cdot{\mbf p}^{\prime\,+}\Bigr.\nonumber\\ &&\qquad -\,\gamma_\theta(j+j'\,)\,\bigl(\phi_\theta(j)\, {\mbf p}^++{\,\rm e}\,^{-\theta\,j}\,\phi_\theta(j'\,)\,{\mbf p}^{\prime\,+}\bigr) \cdot\bigl(\phi_{-\theta}(j)\, {\mbf p}^-+{\,\rm e}\,^{\theta\,j}\,\phi_{-\theta}(j'\,)\, {\mbf p}^{\prime\,-}\bigr)\nonumber\\ && \qquad+\Bigl. \gamma_\theta(j)\,\phi_\theta(j)\,\phi_{-\theta}(j)\,{\mbf p}^+\cdot{\mbf p}^-+ \gamma_\theta(j'\,)\,\phi_\theta(j'\,)\,\phi_{-\theta}(j'\,)\, {\mbf p}^{\prime\,+}\cdot{\mbf p}^{\prime\,-} \Bigr) \ . \label{Weylcocycle}\end{eqnarray} In contrast to the other cocycles, this does {\it not} induce any symplectic structure, at least not in the manner described earlier. The corresponding projective representation (\ref{Weylcomprule}) is generated by the completely symmetrized group elements \begin{eqnarray} \weyl_\star({\mbf k})={\,\rm e}\,^{{\,{\rm i}\,} k^a\,{\sf X}_a} \label{Weylweylop}\end{eqnarray} with ${\mbf k}\in{\mathbb V}$ and ${\sf X}_a\in{\mathfrak s}$. The Weyl system $({\mathbb V},\compc,\weyl_\star,\omega_\star)$ can be used to generate the other Weyl systems that we have found~\cite{ALZ1}. 
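The functions $\phi_{\pm\,\theta}$ are those appearing in the Weyl-ordered product earlier in the text. For an independent numerical sanity check (ours), we adopt the candidate form $\phi_\theta(j)=\bigl(1-{\,\rm e}\,^{-\theta\,j}\bigr)/\theta\,j$; this explicit form is an assumption of the sketch, and all that is actually used is $\phi_\theta(0)=1$ together with the identity $\phi_\theta(-j)={\,\rm e}\,^{\theta\,j}\,\phi_\theta(j)$, which is precisely what (\ref{Weylinverse}) requires. The group axioms of $\compc$ can then be checked numerically:

```python
import numpy as np

theta = 0.3

def phi(t, j):
    """Candidate phi_t(j) = (1 - exp(-t j))/(t j), with phi_t(0) = 1."""
    return 1.0 if abs(t * j) < 1e-14 else (1.0 - np.exp(-t * j)) / (t * j)

def comp(k, q):
    """Weyl-ordered composition law, eq. (Weylcomplaw), one transverse dim."""
    j, pp, pm = k
    jq, ppq, pmq = q
    J = j + jq
    PP = (phi(theta, j) * pp
          + np.exp(-theta * j) * phi(theta, jq) * ppq) / phi(theta, J)
    PM = (phi(-theta, j) * pm
          + np.exp(theta * j) * phi(-theta, jq) * pmq) / phi(-theta, J)
    return (J, PP, PM)

rng = np.random.default_rng(1)

def rand_vec():
    j, p = rng.normal(), rng.normal() + 1j * rng.normal()
    return (j, p, np.conj(p))   # p- is the conjugate of p+

def close(a, b):
    return all(np.allclose(x, y) for x, y in zip(a, b))

k, p, q = rand_vec(), rand_vec(), rand_vec()

# Associativity of the deformed composition law:
assert close(comp(comp(k, p), q), comp(k, comp(p, q)))

# The inverse is plain negation, eq. (Weylinverse):
minus_k = (-k[0], -k[1], -k[2])
assert close(comp(k, minus_k), (0.0, 0.0, 0.0))
assert close(comp(minus_k, k), (0.0, 0.0, 0.0))
```

Associativity in fact holds for any nowhere-vanishing $\phi_{\pm\,\theta}$, since the substitution ${\mbf p}^\pm\mapsto\phi_{\pm\,\theta}(j)\,{\mbf p}^\pm$ conjugates $\compc$ to the time-ordered law $\compa$; the symmetric inverse is what singles out the identity $\phi_\theta(-j)={\,\rm e}\,^{\theta\,j}\,\phi_\theta(j)$.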
From (\ref{1ststudyexpl}) and (\ref{Weylgpprodexpl}) one has the identity \begin{eqnarray} \weyl_*(j,{\mbf p}^\pm)=\Omega_\star\left( {\,\rm e}\,^{{\,{\rm i}\,}(\mbf p^+\cdot\overline{{\mbf z}} +{\mbf p}^-\cdot{\mbf z})}\star{\,\rm e}\,^{{\,{\rm i}\,} j\, x^-}\right) \label{weylTOWeylDelta}\end{eqnarray} which implies that the time-ordered star-product $*$ can be expressed by means of a choice of different Weyl system generating the product $\star$. Since $\Omega_\star$ is an algebra isomorphism, one has \begin{eqnarray} \weyl_*(j,{\mbf p}^\pm)=\weyl_\star(0,{\mbf p}^\pm) \cdot\weyl_\star(j,\mbf0) \ . \label{weylTOWeylprods}\end{eqnarray} This explicit relationship between the Weyl systems for the star-products $*$ and $\star$ is another formulation of the statement of their cohomological equivalence, as established by other means in Section~\ref{WOP}. Similarly, the symmetric time-ordered star-product $\bullet$ can be expressed in terms of $\star$ through the identity \begin{eqnarray} \weyl_\bullet(j,{\mbf p}^\pm)=\Omega_\star\left( {\,\rm e}\,^{\frac\ii2\,j\,x^-}\star{\,\rm e}\,^{{\,{\rm i}\,}({\mbf p}^+\cdot\overline{{\mbf z}} +{\mbf p}^-\cdot{\mbf z})}\star{\,\rm e}\,^{\frac\ii2\,j\,x^-}\right) \ , \label{WeylTOsymWeylDelta}\end{eqnarray} which implies the relationship \begin{eqnarray} \weyl_\bullet\bigl(j,{\mbf p}^\pm\bigr)=\weyl_\star \bigl(\mbox{$\frac j2$} ,\mbf0\bigr)\cdot\weyl_\star\bigl(0,{\mbf p}^\pm\bigr) \cdot\weyl_\star\bigl(\mbox{$\frac j2$},\mbf0\bigr) \label{WeylTOsymWeylprods}\end{eqnarray} between the corresponding Weyl systems. This shows explicitly that the star-products $\bullet$ and $\star$ are also equivalent. \setcounter{equation}{0}\section{Twisted Isometries\label{Coprod}} We will now start working our way towards the explicit construction of the geometric quantities required to define field theories on the noncommutative plane wave $\NW_6$. 
We will begin with a systematic construction of derivative operators on the present noncommutative geometry, which will be used later on to write down kinetic terms for scalar field actions. In this section we will study some of the basic spacetime symmetries of the star-products that we constructed in Section~\ref{StarProds}, as they are directly related to the actions of derivations on the noncommutative algebras of functions. Classically, the isometry group of the gravitational wave $\NW_6$ is the group ${\mathcal N}_{\rm L}\times{\mathcal N}_{\rm R}$ induced by the left and right regular actions of the Lie group ${\mathcal N}$ on itself. The corresponding Killing vectors live in the 11-dimensional Lie algebra ${\mathfrak g}:={\mathfrak n}_{\rm L}\oplus{\mathfrak n}_{\rm R}$ (the left and right actions generated by the central element ${\sf T}$ coincide). This isometry group contains an ${\rm SO}(4)$ subgroup acting by rotations in the transverse space ${\mbf z}\in\complex^2\cong\real^4$, which is broken to ${\rm U}(2)$ by the Neveu-Schwarz background (\ref{NS2formBrink}). This symmetry can be restored upon quantization by instead letting the generators of ${\mathfrak g}$ act in a twisted fashion~\cite{CPT1,CKNT1,Wess1}, as we now proceed to describe. The action of an element $\nabla\in U({\mathfrak g})$ as an algebra automorphism ${\rm C}^\infty({\mathfrak n}^{\vee\,})\to{\rm C}^\infty({\mathfrak n}^{\vee\,})$ will be denoted $f\mapsto\nabla\triangleright f$. The universal enveloping algebra $U({\mathfrak g})$ is given the structure of a cocommutative bialgebra by introducing the ``trivial'' coproduct $\Delta:U({\mathfrak g})\to U({\mathfrak g})\otimes U({\mathfrak g})$ defined by the homomorphism \begin{eqnarray} \Delta(\nabla)=\nabla\otimes1+1\otimes\nabla \ , \label{trivialcoprod}\end{eqnarray} which generates the action of $U({\mathfrak g})$ on the tensor product ${\rm C}^\infty({\mathfrak n}^{\vee\,})\otimes{\rm C}^\infty({\mathfrak n}^{\vee\,})$. 
Since $\nabla$ is an automorphism of ${\rm C}^\infty({\mathfrak n}^{\vee\,})$, the action of the coproduct is compatible with the pointwise (commutative) product of functions $\mu:{\rm C}^\infty({\mathfrak n}^{\vee\,})\otimes{\rm C}^\infty({\mathfrak n}^{\vee\,})\to {\rm C}^\infty({\mathfrak n}^{\vee\,})$ in the sense that \begin{eqnarray} \nabla\triangleright\mu(f\otimes g)=\mu\circ\Delta(\nabla) \triangleright(f\otimes g) \ . \label{commcoprodcomp}\end{eqnarray} For example, the standard action of spacetime translations is given by \begin{eqnarray} \partial^a\triangleright f=\partial^af \label{commtranslaction}\end{eqnarray} for which (\ref{commcoprodcomp}) becomes the classical symmetric Leibniz rule. Let us now pass to a noncommutative deformation of the algebra of functions on $\NW_6$ via a quantization map $\Omega:{\rm C}^\infty({\mathfrak n}^{\vee\,})\to\overline{U({\mathfrak n})^\complex}$ corresponding to a specific star-product $\star$ on ${\rm C}^\infty({\mathfrak n}^{\vee\,})$ (or equivalently a specific operator ordering in $U({\mathfrak n})$). This isomorphism can be used to induce an action of $U({\mathfrak g})$ on the algebra $\overline{U({\mathfrak n})^\complex}$ through \begin{eqnarray} \Omega(\nabla_\star)\triangleright\Omega(f):= \Omega(\nabla\triangleright f) \ , \label{nablastar}\end{eqnarray} which defines a set of quantized operators $\nabla_\star=\nabla+O(\theta):{\rm C}^\infty({\mathfrak n}^{\vee\,})\to{\rm C}^\infty({\mathfrak n}^{\vee\,})$. However, the bialgebra $U({\mathfrak g})$ will no longer generate automorphisms with respect to the noncommutative star-product on ${\rm C}^\infty({\mathfrak n}^{\vee\,})$. 
It will only do so if its coproduct can be deformed to a non-cocommutative one $\Delta_\star=\Delta+O(\theta)$ such that the covariance condition \begin{eqnarray} \nabla_\star\triangleright\mu_\star(f\otimes g)=\mu_\star\circ \Delta_\star(\nabla_\star)\triangleright(f\otimes g) \label{NCcoprodcomp}\end{eqnarray} is satisfied, where $\mu_\star(f\otimes g):=f\star g$. This deformation is constructed by writing the star-product $f\star g=\hat{\mathcal D}(f,g)$ in terms of a bi-differential operator as in (\ref{fstargbidiff}) or (\ref{bidiffweyl}) to define an invertible abelian Drinfeld twist element~\cite{Resh1} $\hat{\mathcal F}_\star\in \overline{U({\mathfrak g})^\complex}\otimes\overline{U({\mathfrak g})^\complex}$ through \begin{eqnarray} f\star g=\mu\circ\hat{\mathcal F}{}_\star^{-1}\triangleright(f\otimes g) \ . \label{Dtwistdef}\end{eqnarray} It obeys the cocycle condition \begin{eqnarray} (\hat{\mathcal F}_\star\otimes1)\,(\Delta\otimes1)\,\hat {\mathcal F}_\star=(1\otimes\hat{\mathcal F}_\star)\, (\Delta\otimes1)\,\hat{\mathcal F}_\star \label{twistcocycle}\end{eqnarray} and defines the twisted coproduct through \begin{eqnarray} \Delta_\star:=\hat{\mathcal F}_\star^{~}\circ\Delta\circ \hat{\mathcal F}{}_\star^{-1} \ , \label{Deltastardef}\end{eqnarray} where $(f\otimes g)\circ(f'\otimes g'\,):=f\,f'\otimes g\,g'$. This new coproduct obeys the requisite coassociativity condition $(\Delta_\star\otimes\mathbb{1})\circ\Delta_\star=(\mathbb{1}\otimes\Delta_\star) \circ\Delta_\star$. The important property of the twist element $\hat{\mathcal F}_\star$ is that it modifies only the coproduct on the bialgebra $U({\mathfrak g})$, while leaving the original product structure (inherited from the Lie algebra ${\mathfrak g}={\mathfrak n}_{\rm L}\oplus{\mathfrak n}_{\rm R}$) unchanged. As an example, let us illustrate how to compute the twisting of the quantized translation generators by the noncommutative geometry of $\NW_6$. 
For this, we introduce a Weyl system $({\mathbb V},\comp,\weyl,\omega)$ corresponding to the chosen star-product $\star$. With the same notations as in the previous section, for $a=1,\dots,5$ we may use (\ref{Weylcomprule}), (\ref{involdef}) with $\Pi=\Omega$, and (\ref{nablastar}) with $\nabla=\partial^a$ to compute \begin{eqnarray} \Omega\bigl(\partial_\star^a\bigr)\triangleright\Omega\bigl( {\,\rm e}\,^{{\,{\rm i}\,}{\mbf k}\cdot{\mbf x}}\bigr)\cdot\Omega\bigl({\,\rm e}\,^{{\,{\rm i}\,}{\mbf k}'\cdot{\mbf x}} \bigr)&=&\Omega\bigl(\partial_\star^a\bigr)\triangleright {\,\rm e}\,^{\frac\ii2\,\omega({\mbf k},{\mbf k}'\,)\,{\sf T}}~\cdot~\Omega\bigl( {\,\rm e}\,^{{\,{\rm i}\,}({\mbf k}\comp{\mbf k}'\,)\cdot{\mbf x}}\bigr) \nonumber\\[4pt] &=& {\,{\rm i}\,}~{\,\rm e}\,^{\frac\ii2\,\omega({\mbf k},{\mbf k}'\,)\,{\sf T}}~\cdot~\Omega \bigl(({\mbf k}\comp{\mbf k}'\,)^a~{\,\rm e}\,^{{\,{\rm i}\,}({\mbf k}\comp{\mbf k}'\,)\cdot{\mbf x}}\bigr) \nonumber\\[4pt] &=& {\,{\rm i}\,}~\mbox{$\sum\limits_i$}~\Omega\bigl(d^a_{(1)\,i}( -{\,{\rm i}\,}{\mbf\partial}_\star)\bigr)\triangleright\Omega\bigl({\,\rm e}\,^{{\,{\rm i}\,}{\mbf k}\cdot{\mbf x}} \bigr)\nonumber\\ && \qquad\qquad \cdot~\Omega\bigl(d^a_{(2)\,i}(-{\,{\rm i}\,}{\mbf\partial}_\star)\bigr) \triangleright\Omega\bigl({\,\rm e}\,^{{\,{\rm i}\,}{\mbf k}'\cdot{\mbf x}}\bigr) \ , \label{partialstarderiv}\end{eqnarray} where we have assumed that the group composition law of the Weyl system has an expansion of the form $({\mbf k}\comp{\mbf k}'\,)^a:=\sum_i\,d^a_{(1)\,i}({\mbf k})\,d^a_{(2)\,i}({\mbf k}'\,)$. From the covariance condition (\ref{NCcoprodcomp}) it then follows that the twisted coproduct assumes a Sweedler form \begin{eqnarray} \Delta_\star\left(\partial_\star^a\right)={\,{\rm i}\,}~ \mbox{$\sum\limits_i$}~d^a_{(1)\,i}(-{\,{\rm i}\,}{\mbf\partial}_\star)\otimes d^a_{(2)\,i}(-{\,{\rm i}\,}{\mbf\partial}_\star) \ . 
\label{Deltastarexpl}\end{eqnarray} Analogously, if we assume that the group cocycle of the Weyl system admits an expansion of the form $\omega({\mbf k},{\mbf k}'\,):=\sum_i\,w^i_{(1)}({\mbf k})\,w^i_{(2)}({\mbf k}'\,)$, then a similar calculation gives the twisted coproduct of the quantized plane wave time derivative as \begin{eqnarray} \Delta_\star\left(\partial_+^\star\right)=\partial_+^\star \otimes1+1\otimes\partial_+^\star-\mbox{$\frac12\,\sum\limits_i$}~ w^i_{(1)}(-{\,{\rm i}\,}{\mbf\partial}_\star)\otimes w^i_{(2)}(-{\,{\rm i}\,}{\mbf\partial}_\star) \ . \label{Deltastartime}\end{eqnarray} Note that now the corresponding Leibniz rules (\ref{NCcoprodcomp}) are no longer the usual ones associated with the product $\star$ but are the deformed, generically non-symmetric ones given by \begin{eqnarray} \partial^a_\star\triangleright(f\star g)&=&{\,{\rm i}\,}~\mbox{$\sum\limits_i$}\, \bigl(d^a_{(1)\,i}(-{\,{\rm i}\,}{\mbf\partial}_\star)\triangleright f\bigr)~\star~\bigl( d^a_{(2)\,i}(-{\,{\rm i}\,}{\mbf\partial}_\star)\triangleright g\bigr) \ , \nonumber\\[4pt] \partial_+^\star\triangleright(f\star g)&=&\left(\partial_+^\star \triangleright f\right)\star g+f\star\left(\partial_+^\star \triangleright g\right)-\mbox{$\frac12\,\sum\limits_i$}\, \bigl(w^i_{(1)}(-{\,{\rm i}\,}{\mbf\partial}_\star)\triangleright f\bigr)~\star~ \bigl(w^i_{(2)}(-{\,{\rm i}\,}{\mbf\partial}_\star)\triangleright g\bigr) \nonumber\\ && \label{defLeibniz}\end{eqnarray} arising from the twisting of the coproduct. Thus these derivatives do {\it not} define derivations of the noncommutative algebra of functions, but rather implement the twisting of isometries of flat space appropriate to the plane wave geometry~\cite{PK1,CFS1,BlauOL1,HSz1}. In the language of quantum groups~\cite{QG1}, the twisted isometry group of the spacetime $\NW_6$ coincides with the quantum double of the cocommutative Hopf algebra $U({\mathfrak n})$. 
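As a consistency check (our own sketch, not taken from the text), the deformed Leibniz rules can be tested symbolically on plane waves for the time-ordered product, using its explicit rules (\ref{TOLeibniz}) below. The identification $\partial^i=\partial/\partial\overline{z}{}^{\,i}$ (conjugate to ${\mbf p}^+$) and the realization of the central phase as a scalar are assumptions of the sketch, which works in one transverse dimension:

```python
import sympy as sp

theta, xm, z, zb = sp.symbols('theta x_minus z zbar', real=True)

def wave(k):
    """Plane wave e^{i k.x} with k = (j, p+, p-), one transverse dimension."""
    j, pp, pm = k
    return sp.exp(sp.I * (j * xm + pp * zb + pm * z))

def comp(k, q):
    """Time-ordered composition law, eq. (TOcomplaw)."""
    return (k[0] + q[0],
            k[1] + sp.exp(-theta * k[0]) * q[1],
            k[2] + sp.exp(theta * k[0]) * q[2])

def omega(k, q):
    """Time-ordered cocycle, eq. (TOcocycle)."""
    return 2 * sp.I * theta * (sp.exp(theta * k[0]) * k[1] * q[2]
                               - sp.exp(-theta * k[0]) * k[2] * q[1])

def star(k, q):
    """Star product of two plane waves via the Weyl system."""
    return sp.exp(sp.I * omega(k, q) / 2) * wave(comp(k, q))

k = sp.symbols('j1 pp1 pm1')
q = sp.symbols('j2 pp2 pm2')

# d^i acts as d/dzbar; the twist e^{i theta d_-} rescales a plane wave
# of "momentum" j by exp(-theta*j).
lhs = sp.diff(star(k, q), zb)
twisted = (sp.diff(wave(k), zb) / wave(k)
           + sp.exp(-theta * k[0]) * sp.diff(wave(q), zb) / wave(q)) * star(k, q)
naive = (sp.diff(wave(k), zb) / wave(k)
         + sp.diff(wave(q), zb) / wave(q)) * star(k, q)

assert sp.simplify(lhs - twisted) == 0   # deformed Leibniz rule holds
assert sp.simplify(lhs - naive) != 0     # undeformed Leibniz rule fails
```

The failure of the undeformed rule makes explicit that $\partial^i$ is not a derivation of the star-product algebra.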
The antipode ${S}_\star:U({\mathfrak g})\to U({\mathfrak g})$ of the given non-cocommutative Hopf algebra structure on the bialgebra $U({\mathfrak g})$ gives the dual action of the isometries of the noncommutative plane wave and provides the analog of inversion of isometry group elements. This analogy is made precise by computing ${S}_\star$ from the group inverses $\underline{{\mbf k}}$ of elements ${\mbf k}\in{\mathbb V}$ of the corresponding Weyl system. Symbolically, one has ${S}_\star({\mbf\partial}_\star)=\underline{{\mbf\partial}_\star}$. In particular, if $\underline{{\mbf k}}=-{\mbf k}$ (as in the case of our symmetric star-products) then ${S}_\star(\partial_\star^a)=-\partial_\star^a$ and the action of the antipode is trivial. In all three instances the counit $\varepsilon_\star:U({\mathfrak g})\to\complex$ describes the action on the trivial representation as $\varepsilon_\star(\partial_\star^a)=0$, and it obeys the compatibility condition \begin{eqnarray} (\varepsilon_\star\otimes1)\,\hat{\cal F}_\star~=~1~=~ (1\otimes\varepsilon_\star)\,\hat{\cal F}_\star \label{counitcond}\end{eqnarray} with the Drinfeld twist. In what follows we will only require the underlying bialgebra structure of $U({\mathfrak g})$. The compatibility condition (\ref{NCcoprodcomp}) means that the action of $U({\mathfrak g})$ on ${\rm C}^\infty({\mathfrak n}^{\vee\,})$ defines quantum isometries of the noncommutative pp-wave, in that the star-product is an intertwiner and the noncommutative algebra of functions is covariant with respect to the action of the quantum group. The generic non-triviality of the twisted coproducts (\ref{Deltastarexpl}) and (\ref{Deltastartime}) is consistent with and extends the fact that generic translations are not classically isometries of the plane wave geometry, but rather only appropriate twisted versions are~\cite{PK1,CFS1,BlauOL1,HSz1}. 
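In particular, for the time-ordered system the antipode acts non-trivially. A short symbolic confirmation (ours, in one transverse dimension) that the dressed inverse (\ref{TOinverse}) indeed inverts the composition law (\ref{TOcomplaw}):

```python
import sympy as sp

theta = sp.symbols('theta', real=True)
j, pp, pm = sp.symbols('j pp pm')

def comp(k, q):
    """Time-ordered composition law, eq. (TOcomplaw)."""
    return (k[0] + q[0],
            k[1] + sp.exp(-theta * k[0]) * q[1],
            k[2] + sp.exp(theta * k[0]) * q[2])

k = (j, pp, pm)
# Group inverse of eq. (TOinverse): negation dressed by exp(+-theta j),
# mirroring the non-trivial antipode S_* of the time-ordered system.
k_inv = (-j, -sp.exp(theta * j) * pp, -sp.exp(-theta * j) * pm)

assert all(sp.simplify(c) == 0 for c in comp(k, k_inv))
assert all(sp.simplify(c) == 0 for c in comp(k_inv, k))
```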
Similar computations can also be carried through for the remaining five isometry generators of ${\mathfrak g}$; they correspond to the right-acting counterparts of the derivatives above and give the full action of the noncommutative isometry group on $\NW_6$. We shall not display these formulas here. In the next section we will explicitly construct the quantized derivative operators $\partial_\star^a$ and $\partial_+^\star$ above. We now proceed to list the coproducts corresponding to our three star-products. \subsection{Time Ordering \label{TOcoprod}} The Drinfeld twist $\hat{\mathcal F}_*$ for the time-ordered star-product is the inverse of the exponential operator appearing in (\ref{TOstargen}). Following the general prescription given above, from the group composition law (\ref{TOcomplaw}) of the corresponding Weyl system we deduce the time-ordered coproducts \begin{eqnarray} \Delta_*\left(\partial_-^*\right)&=&\partial_-^*\otimes1+ 1\otimes\partial_-^* \ , \nonumber\\ \Delta_*\left(\partial^i_*\right)&=&\partial^i_*\otimes1+ {\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-^*}\otimes\partial^i_* \ , \nonumber\\ \Delta_*\left(\,\overline{\partial}{}^{\,i}_*\right)&=& \overline{\partial}{}^{\,i}_*\otimes1+{\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-^*} \otimes\overline{\partial}{}^{\,i}_* \ , \label{TOcoprods}\end{eqnarray} while from the group cocycle (\ref{TOcocycle}) we obtain \begin{eqnarray} \Delta_*\left(\partial_+^*\right)=\partial_+^*\otimes1+ 1\otimes\partial_+^*+\theta~{\,\rm e}\,^{-{\,{\rm i}\,}\theta\, \partial_-^*}\,{\mbf\partial}_*{}^\top\otimes\overline{{\mbf\partial}}_*-\theta~ {\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-^*}\, \overline{{\mbf\partial}}_*{}^\top\otimes{\mbf\partial}_* \ . 
\label{TOcoprodtime}\end{eqnarray} The corresponding Leibniz rules read \begin{eqnarray} \partial_-^*\triangleright(f*g)&=&\left(\partial_-^*\triangleright f\right)*g+f*\left(\partial_-^*\triangleright g\right) \ , \nonumber\\ \partial_+^*\triangleright(f*g)&=&\left(\partial_+^*\triangleright f\right)*g+f*\left(\partial_+^*\triangleright g\right)\nonumber\\ && +\,\theta\,\bigl({\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-^*}\,{\mbf\partial}_*{}^\top \triangleright f\bigr)~*~\left(\,\overline{{\mbf\partial}}_* \triangleright g\right)-\theta\,\bigl({\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-^*}\, \overline{{\mbf\partial}}_*{}^\top \triangleright f\bigr)~*~\left({\mbf\partial}_*\triangleright g\right) \ , \nonumber\\ \partial^i_*\triangleright(f*g)&=&\left(\partial^i_* \triangleright f\right)*g+\bigl({\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-^*} \triangleright f\bigr)*\left(\partial^i_*\triangleright g\right) \ , \nonumber\\ \overline{\partial}{}^{\,i}_* \triangleright(f*g)&=&\left(\,\overline{\partial}{}^{\,i}_* \triangleright f\right)*g+\bigl({\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-^*} \triangleright f\bigr)*\left(\,\overline{\partial}{}^{\,i}_* \triangleright g\right) \ . \label{TOLeibniz}\end{eqnarray} \subsection{Symmetric Time Ordering \label{STOcoprod}} The Drinfeld twist $\hat{\mathcal F}_\bullet$ associated to the symmetric time-ordered star-product is given by the inverse of the exponential operator in (\ref{TOsymstargen}). 
From the group composition law (\ref{TOsymcomplaw}) of the corresponding Weyl system we deduce the symmetric time-ordered coproducts \begin{eqnarray} \Delta_\bullet\left(\partial_-^\bullet\right)&=&\partial_-^\bullet \otimes1+1\otimes\partial_-^\bullet \ , \nonumber\\ \Delta_\bullet\left(\partial^i_\bullet\right)&=&\partial^i_\bullet\otimes {\,\rm e}\,^{-\frac{{\,{\rm i}\,}\theta}2\,\partial_-^\bullet}+{\,\rm e}\,^{\frac{{\,{\rm i}\,}\theta}2\, \partial_-^\bullet}\otimes\partial^i_\bullet \ , \nonumber\\ \Delta_\bullet\left(\,\overline{\partial}{}^{\,i}_\bullet\right)&=& \overline{\partial}{}^{\,i}_\bullet\otimes {\,\rm e}\,^{\frac{{\,{\rm i}\,}\theta}2\,\partial_-^\bullet}+{\,\rm e}\,^{-\frac{{\,{\rm i}\,}\theta}2\, \partial_-^\bullet}\otimes\overline{\partial}{}^{\,i}_\bullet \ , \label{STOcoprods}\end{eqnarray} while from the group cocycle (\ref{TOsymcocycle}) we find \begin{eqnarray} \Delta_\bullet\left(\partial_+^\bullet\right)&=&\partial_+^\bullet\otimes1+ 1\otimes\partial_+^\bullet\nonumber\\ && +\,\theta~{\,\rm e}\,^{-\frac{{\,{\rm i}\,}\theta}2\, \partial_-^\bullet}\,{\mbf\partial}_\bullet{}^\top\otimes {\,\rm e}\,^{-\frac{{\,{\rm i}\,}\theta}2\,\partial_-^\bullet}\,\overline{{\mbf\partial}}_\bullet- \theta~{\,\rm e}\,^{\frac{{\,{\rm i}\,}\theta}2\,\partial_-^\bullet}\, \overline{{\mbf\partial}}_\bullet{}^\top\otimes{\,\rm e}\,^{\frac{{\,{\rm i}\,}\theta}2\, \partial_-^\bullet}\,{\mbf\partial}_\bullet \ . 
\label{STOcoprodtime}\end{eqnarray} The corresponding Leibniz rules are given by \begin{eqnarray} \partial_-^\bullet\triangleright(f\bullet g)&=& \left(\partial_-^\bullet\triangleright f\right)\bullet g+ f\bullet\left(\partial_-^\bullet\triangleright g\right) \ , \nonumber\\ \partial_+^\bullet\triangleright(f\bullet g)&=& \left(\partial_+^\bullet\triangleright f\right)\bullet g+ f\bullet\left(\partial_+^\bullet\triangleright g\right) +\theta\,\bigl({\,\rm e}\,^{-\frac{{\,{\rm i}\,}\theta}2\,\partial_-^\bullet}\, {\mbf\partial}_\bullet{}^\top\triangleright f\bigr)~\bullet~\bigl( {\,\rm e}\,^{-\frac{{\,{\rm i}\,}\theta}2\,\partial_-^\bullet}\, \overline{{\mbf\partial}}_\bullet\triangleright g\bigr) \nonumber\\ && \qquad\qquad\qquad\qquad\qquad\qquad -\, \theta\,\bigl({\,\rm e}\,^{\frac{{\,{\rm i}\,}\theta}2\,\partial_-^\bullet}\, \overline{{\mbf\partial}}_\bullet{}^\top\triangleright f\bigr)~\bullet~ \bigl({\,\rm e}\,^{\frac{{\,{\rm i}\,}\theta}2\,\partial_-^\bullet}\,{\mbf\partial}_\bullet \triangleright g\bigr) \ , \nonumber\\ \partial^i_\bullet\triangleright(f\bullet g)&=&\left(\partial^i_\bullet \triangleright f\right)\bullet\bigl({\,\rm e}\,^{-\frac{{\,{\rm i}\,}\theta}2\, \partial_-^\bullet}\triangleright g\bigr)+\bigl( {\,\rm e}\,^{\frac{{\,{\rm i}\,}\theta}2\,\partial_-^\bullet}\triangleright f \bigr)\bullet\left(\partial^i_\bullet\triangleright g\right) \ , \nonumber\\ \overline{\partial}{}^{\,i}_\bullet\triangleright(f\bullet g) &=&\left(\,\overline{\partial}{}^{\,i}_\bullet \triangleright f\right)\bullet\bigl({\,\rm e}\,^{\frac{{\,{\rm i}\,}\theta}2\, \partial_-^\bullet}\triangleright g\bigr)+\bigl( {\,\rm e}\,^{-\frac{{\,{\rm i}\,}\theta}2\,\partial_-^\bullet}\triangleright f \bigr)\bullet\left(\,\overline{\partial}{}^{\,i}_\bullet\triangleright g\right) \ . 
\label{STOLeibniz}\end{eqnarray} \subsection{Weyl Ordering \label{WOcoprod}} Finally, for the Weyl-ordered star-product (\ref{Weylstargen}) we read off the twist element $\hat{\mathcal F}_\star$ in the standard way, and use the associated group composition law (\ref{Weylcomplaw}) to write down the coproducts \begin{eqnarray} \Delta_\star\left(\partial_-^\star\right)&=&\partial_-^\star\otimes1+ 1\otimes\partial_-^\star \ , \nonumber\\ \Delta_\star\left(\partial^i_\star\right)&=&\mbox{$\frac{\phi_{-\theta}\left({\,{\rm i}\,} \partial_-^\star\right)\,\partial^i_\star\otimes1+{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-^\star} \otimes\phi_{-\theta}\left({\,{\rm i}\,}\partial_-^\star\right)\,\partial^i_\star} {\phi_{-\theta}\left({\,{\rm i}\,}\partial_-^\star\otimes1+1\otimes{\,{\rm i}\,} \partial_-^\star\right)}$} \ , \nonumber\\ \Delta_\star\left(\,\overline{\partial}{}^{\,i}_\star\right)&=& \mbox{$\frac{\phi_{\theta}\left({\,{\rm i}\,} \partial_-^\star\right)\,\overline{\partial}{}^{\,i}_\star\otimes1+ {\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-^\star} \otimes\phi_{\theta}\left({\,{\rm i}\,}\partial_-^\star\right)\, \overline{\partial}{}^{\,i}_\star} {\phi_{\theta}\left({\,{\rm i}\,}\partial_-^\star\otimes1+1\otimes{\,{\rm i}\,} \partial_-^\star\right)}$} \ . \label{Weylcoprods}\end{eqnarray} The remaining coproduct may be determined from the cocycle (\ref{Weylcocycle}) as \begin{eqnarray} \Delta_\star\left(\partial_+^\star\right)&=&\partial_+^\star\otimes1+ 1\otimes\partial_+^\star \nonumber\\ &&+\,2{\,{\rm i}\,}\theta\,\Bigl[ \phi_\theta\left({\,{\rm i}\,}\partial_-^\star\right)\,{\mbf\partial}_\star {}^\top\otimes\phi_\theta\left({\,{\rm i}\,}\partial_-^\star\right)\, \overline{{\mbf\partial}}_\star-\phi_{-\theta}\left({\,{\rm i}\,}\partial_-^\star \right)\,\overline{{\mbf\partial}}_\star {}^\top\otimes\phi_{-\theta}\left({\,{\rm i}\,}\partial_-^\star\right)\, {\mbf\partial}_\star\Bigr. 
\nonumber\\ && \qquad\quad +\,\bigl(\gamma_\theta({\,{\rm i}\,}\partial_-^\star)\otimes1- \gamma_\theta({\,{\rm i}\,}\partial_-^\star\otimes1+ 1\otimes{\,{\rm i}\,}\partial_-^\star)\bigr)\bigl(\phi_\theta({\,{\rm i}\,}\partial_-^\star) \,\phi_{-\theta}({\,{\rm i}\,}\partial_-^\star)\,\overline{{\mbf\partial}}_\star \cdot{\mbf\partial}_\star\otimes1\bigr) \nonumber\\ &&\qquad\quad +\,\bigl(1\otimes\gamma_\theta({\,{\rm i}\,}\partial_-^\star)- \gamma_\theta({\,{\rm i}\,}\partial_-^\star\otimes1+ 1\otimes{\,{\rm i}\,}\partial_-^\star)\bigr)\bigl(1\otimes \phi_\theta({\,{\rm i}\,}\partial_-^\star) \,\phi_{-\theta}({\,{\rm i}\,}\partial_-^\star)\,\overline{{\mbf\partial}}_\star \cdot{\mbf\partial}_\star\bigr) \nonumber\\ &&\qquad\quad -\Bigl.\gamma_\theta\left( {\,{\rm i}\,}\partial_-^\star\otimes1+1\otimes{\,{\rm i}\,}\partial_-^\star\right) \bigl({\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-^\star}\,\phi_{-\theta}({\,{\rm i}\,} \partial_-^\star)\,{\mbf\partial}_\star{}^\top\otimes\phi_\theta({\,{\rm i}\,} \partial_-^\star)\,\overline{{\mbf\partial}}_\star\bigr.\nonumber\\ && \qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad+\bigl.{\,\rm e}\,^{{\,{\rm i}\,}\theta\, \partial_-^\star}\,\phi_{\theta}({\,{\rm i}\,} \partial_-^\star)\,\overline{{\mbf\partial}}_\star{}^\top\otimes \phi_{-\theta}({\,{\rm i}\,}\partial_-^\star)\,{\mbf\partial}_\star\bigr)\Bigr] \ . \nonumber\\ && \label{Weylcoprodtime}\end{eqnarray} In (\ref{Weylcoprods}) and (\ref{Weylcoprodtime}) the functionals of the derivative operator ${\,{\rm i}\,}\partial_-^\star\otimes1+1\otimes{\,{\rm i}\,}\partial_-^\star$ are understood as usual in terms of the power series expansions given in Section~\ref{WOP}. 
This leads to the corresponding Leibniz rules \begin{eqnarray} \partial_-^\star\triangleright(f\star g)&=&\left(\partial_-^\star \triangleright f\right)\star g+f\star\left(\partial_-^\star \triangleright g\right) \ , \nonumber\\ \partial_+^\star\triangleright (f\star g)&=&\left(\partial_+^\star \triangleright f\right)\star g+f\star\left(\partial_+^\star \triangleright g\right) \nonumber\\ &&+\,2{\,{\rm i}\,}\theta\left\{ \Bigl(\mbox{$\frac{(1-{\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-^\star})\, {\mbf\partial}_\star{}^\top}{{\,{\rm i}\,}\theta\,\partial_-^\star}$}\triangleright f \Bigr)~\star~\Bigl(\mbox{$\frac{(1-{\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-^\star})\, \overline{{\mbf\partial}}_\star}{{\,{\rm i}\,}\theta\,\partial_-^\star}$}\triangleright g\Bigr)\right. \nonumber\\ && \qquad\qquad -\,\Bigl(\mbox{$\frac{(1-{\,\rm e}\,^{{\,{\rm i}\,}\theta\, \partial_-^\star})\,\overline{{\mbf\partial}}_\star{}^\top} {{\,{\rm i}\,}\theta\,\partial_-^\star}$}\triangleright f \Bigr)~\star~\Bigl(\mbox{$\frac{(1-{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-^\star})\, {\mbf\partial}_\star}{{\,{\rm i}\,}\theta\,\partial_-^\star}$}\triangleright g\Bigr)\nonumber\\ &&\qquad\qquad +\,\Bigl(\mbox{$\Bigl[\frac12+\frac{(1+{\,{\rm i}\,}\theta\, \partial_-^\star)~{\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-^\star}-1}{({\,\rm e}\,^{-{\,{\rm i}\,}\theta\, \partial_-^\star}-1)^2}\Bigr]\,\frac{\sin^2(\frac\theta2\,\partial_-^\star)\, \overline{{\mbf\partial}}_\star\cdot{\mbf\partial}_\star}{(\theta\,\partial_-^\star)^2}$} \triangleright f\Bigr)~\star~g\nonumber\\ &&\qquad\qquad+\,f~\star~ \Bigl(\mbox{$\Bigl[\frac12+\frac{(1+{\,{\rm i}\,}\theta\, \partial_-^\star)~{\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-^\star}-1}{({\,\rm e}\,^{-{\,{\rm i}\,}\theta\, \partial_-^\star}-1)^2}\Bigr]\,\frac{\sin^2(\frac\theta2\,\partial_-^\star)\, \overline{{\mbf\partial}}_\star\cdot{\mbf\partial}_\star}{(\theta\,\partial_-^\star)^2}$} \triangleright g\Bigr) \nonumber\\ && +\,\sum_{n=1}^\infty~ 
\sum_{k=0}^n\,\frac{B_{n+1}\,(-{\,{\rm i}\,}\theta)^{n-2}}{k!\,(n-k)!}\, \left[\bigl((\partial_-^\star)^{n-k-2}\,\sin^2(\mbox{$\frac\theta2$}\, \partial_-^\star)\,\overline{{\mbf\partial}}_\star\cdot{\mbf\partial}_\star \triangleright f\bigr)~\star~\bigl((\partial_-^\star)^k \triangleright g\bigr)\right.\nonumber\\ && \qquad +\, \bigl((\partial_-^\star)^{n-k}\triangleright f\bigr)~\star~\bigl((\partial_-^\star)^{k-2}\,\sin^2(\mbox{$\frac\theta2$}\, \partial_-^\star)\,\overline{{\mbf\partial}}_\star\cdot{\mbf\partial}_\star \triangleright g\bigr) \nonumber\\ && \qquad -\, \bigl(({\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-^\star}-1)\,(\partial_-^\star)^{n-k-1} \,{\mbf\partial}_\star{}^\top\triangleright f\bigr)~\star~\bigl( ({\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-^\star}-1)\,(\partial_-^\star)^{k-1} \,\overline{{\mbf\partial}}_\star\triangleright g\bigr) \nonumber\\ && \qquad -\left. \bigl(({\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-^\star}-1)\,(\partial_-^\star)^{n-k-1} \,\overline{{\mbf\partial}}_\star{}^\top\triangleright f\bigr)~\star~\bigl( ({\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-^\star}-1)\,(\partial_-^\star)^{k-1} \,{\mbf\partial}_\star\triangleright g\bigr)\right] \ , \nonumber\\ \partial^i_\star\triangleright(f\star g)&=&\sum_{n=0}^\infty~ \sum_{k=0}^n\,\frac{B_n\,({\,{\rm i}\,}\theta)^{n-1}}{k!\,(n-k)!}\,\left[ \bigl(({\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-^\star}-1)\, (\partial_-^\star)^{n-k-1}\,\partial^i_\star\triangleright f\bigr)~\star~ \bigl((\partial_-^\star)^k\triangleright g\bigr)\right.\nonumber\\ && \qquad\qquad\qquad\quad +\left.\bigl({\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-^\star}\,(\partial_-^\star)^{n-k} \triangleright f\bigr)~\star~\bigl(({\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-^\star}-1) \,(\partial_-^\star)^{k-1}\,\partial^i_\star\triangleright g\bigr)\right] \ , \nonumber\\\overline{\partial}{}^{\,i}_\star\triangleright(f\star g)&=& \sum_{n=0}^\infty~\sum_{k=0}^n\,\frac{B_n\,(-{\,{\rm 
i}\,}\theta)^{n-1}}{k!\,(n-k)!} \,\left[\bigl(({\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-^\star}-1)\, (\partial_-^\star)^{n-k-1}\,\overline{\partial}{}^{\,i}_\star \triangleright f\bigr)~\star~ \bigl((\partial_-^\star)^k\triangleright g\bigr)\right.\nonumber\\ && \qquad\qquad\qquad\quad +\left.\bigl({\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-^\star}\,(\partial_-^\star)^{n-k} \triangleright f\bigr)~\star~\bigl(({\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-^\star}-1) \,(\partial_-^\star)^{k-1}\,\overline{\partial}{}^{\,i}_\star \triangleright g\bigr)\right] \ . \nonumber\\ && \label{WeylLeibniz}\end{eqnarray} Note that a feature common to all three deformations is that the coproduct of the quantization of the light-cone position translation generator $\partial_-$ coincides with the trivial one (\ref{trivialcoprod}), and thereby yields the standard symmetric Leibniz rule with respect to the pertinent star-product. This is due to the fact that the action of $\partial_-$ on the spacetime $\NW_6$ corresponds to the commutative action of the central Lie algebra generator ${\sf T}$, whose left and right actions coincide. In the next section we shall see that the action of the quantized translations in $x^-$ on ${\rm C}^\infty({\mathfrak n}^{\vee\,})$ coincides with the standard commutative action (\ref{commtranslaction}). This is consistent with the fact that all frames of reference for the spacetime $\NW_6$ possess an $x^-$-translational symmetry, while translational symmetries in the other coordinates depend crucially on the frame and generally need to be twisted in order to generate an isometry of $\NW_6$. Notice also that ordinary time translation invariance is always broken by the time-dependent Neveu-Schwarz background (\ref{NS2formBrink}).
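As a basic sanity check on coproducts of the type derived above, one can verify coassociativity, $(\Delta_\bullet\otimes{\rm id})\circ\Delta_\bullet=({\rm id}\otimes\Delta_\bullet)\circ\Delta_\bullet$, on plane-wave eigenvalues, where all tensor factors reduce to commuting scalars. The following sympy sketch does this for $\Delta_\bullet(\partial^i_\bullet)$ from (\ref{STOcoprods}), with a single transverse direction; the momentum symbols are purely illustrative, and the primitivity of $\Delta_\bullet(\partial_-^\bullet)$ is used to set $\Delta_\bullet(E)=E\otimes E$ for $E={\,\rm e}\,^{\frac{{\,{\rm i}\,}\theta}2\,\partial_-^\bullet}$.

```python
# Coassociativity of Delta(d_i) = d_i (x) E^{-1} + E (x) d_i, checked on
# plane-wave eigenvalues: the tensor factor labelled a carries light-cone
# momentum k_a and transverse momentum p_a, so d_- -> i*k_a, d_i -> i*p_a
# and E = exp(i*theta*d_-/2) -> exp(-theta*k_a/2).
import sympy as sp

theta = sp.symbols('theta', real=True)
k1, k2, k3, p1, p2, p3 = sp.symbols('k1 k2 k3 p1 p2 p3')

E = lambda k: sp.exp(-theta*k/2)            # eigenvalue of exp(i*theta*d_-/2)

def Delta(ka, pa, kb, pb):
    """Eigenvalue of Delta(d_i) on a tensor product of two plane waves."""
    return sp.I*pa/E(kb) + E(ka)*sp.I*pb

# (Delta (x) id) Delta(d_i): apply Delta to the first leg, with Delta(E) = E (x) E
lhs = Delta(k1, p1, k2, p2)/E(k3) + E(k1)*E(k2)*sp.I*p3
# (id (x) Delta) Delta(d_i): apply Delta to the second leg
rhs = sp.I*p1/(E(k2)*E(k3)) + E(k1)*Delta(k2, p2, k3, p3)

residual = sp.simplify(lhs - rhs)           # vanishes iff coassociative
```

The same calculation with $E$ replaced by ${\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-^*}$ in only one leg reproduces the check for the time-ordered coproduct.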
\setcounter{equation}{0}\section{Derivative Operators \label{Derivatives}} In this section we will systematically construct a set of quantized derivative operators $\partial^a_\star$, $a=1,\dots,6$ satisfying the conditions of the previous section. In general, there is no unique way to build up such derivatives. To narrow the possibilities, we will impose two weak conditions, namely that the quantized derivatives be deformations of ordinary derivatives, $\partial_\star^a=\partial^a+O(\theta)$, and that they commute among themselves, $[\partial_\star^a,\partial_\star^b]_\star=0$. The latter condition is understood as a requirement for the iterated action of the derivatives on functions $f\in{\rm C}^\infty({\mathfrak n}^{\vee\,})$, $[\partial_\star^a,\partial_\star^b]_\star\triangleright f=0$ or equivalently \begin{eqnarray} \partial_\star^a\triangleright\bigl(\partial_\star^b\triangleright f \bigr)=\partial_\star^b\triangleright\bigl(\partial_\star^a\triangleright f\bigr) \ . \label{derivcommute}\end{eqnarray} For the former condition, the simplest consistent choice is to assume a linear derivative deformation on the coordinate functions, $[\partial_\star^a,x_b]_\star= \delta^a_{~b}+{\,{\rm i}\,}\theta\,\rho^a_{~bc}\,\partial^c_\star$, which is understood as the requirement \begin{eqnarray} \left[\partial_\star^a\,,\,x_b\right]_\star\triangleright f:= \partial^a_\star\triangleright\left(x_b\star f\right)- x_b\star\left(\partial_\star^a\triangleright f\right)= \delta^a_{~b}\,f+{\,{\rm i}\,}\theta\,\rho^a_{~bc}\,\partial^c_\star \triangleright f \ . \label{dxreq}\end{eqnarray} A set of necessary conditions on the constant tensors $\rho^a_{~bc}\in\real$ may be derived by demanding consistency of the derivatives with the original star-commutators of coordinates (\ref{xaxbstarcomm}).
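The requirement (\ref{dxreq}) can be made concrete with a small symbolic computation. The sympy sketch below (one transverse pair $(z,\bar z)$, symbol names illustrative) uses the left star-multiplications (\ref{TOxfprods}) of the time-ordered calculus and the fact (\ref{TOderivs}) that its derivatives act as ordinary derivatives, to verify the entry $[\partial^i_*,x^-]_*=-{\,{\rm i}\,}\theta\,\partial^i_*$ of (\ref{eq:rho:nw4}), together with the twisted Leibniz rule for $\partial_*^i$ from (\ref{TOLeibniz}) with $f=z$; it assumes, as usual, that constants star-multiply trivially.

```python
# Check [d_i, x^-]_* = -i*theta*d_i and the twisted Leibniz rule for d_i
# in the time-ordered calculus, for a single transverse pair (z, zb).
import sympy as sp

theta = sp.symbols('theta', real=True)
xm, xp, z, zb = sp.symbols('x_m x_p z zb')
g = sp.Function('g')(xm, xp, z, zb)

# left star-multiplication by x^- and by z, read off from (TOxfprods)
star_xm = lambda h: xm*h - sp.I*theta*z*sp.diff(h, z) + sp.I*theta*zb*sp.diff(h, zb)
star_z = lambda h: z*h - sp.I*theta*xp*sp.diff(h, zb)

# [d_i, x^-]_* g = d_i(x^- * g) - x^- * (d_i g), expected to equal -i*theta*d_i g
comm = sp.diff(star_xm(g), z) - star_xm(sp.diff(g, z))
residual_comm = sp.simplify(comm + sp.I*theta*sp.diff(g, z))

# Twisted Leibniz rule with f = z: since z is independent of x^-,
# e^{i theta d_-} z = z, so d_i(z * g) = g + z * (d_i g)
residual_leibniz = sp.simplify(sp.diff(star_z(g), z) - g - star_z(sp.diff(g, z)))
```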
Applying the Jacobi identity for the star-commutators between $\partial^a_\star$, $x_b$ and $x_c$ leads to the relations \begin{eqnarray} \rho^a_{~bc}-\rho^a_{~cb}&=&C_{bc}^{~~a} \ , \nonumber\\ \rho^a_{~bc}\,\rho^c_{~de}-\rho^a_{~dc}\,\rho^c_{~be}&=& C_{bd}^{~~c}\,\rho^a_{~ce} \ . \label{rhoCrels}\end{eqnarray} With these requirements we now seek to find quantized derivative operators $\partial_\star^a$ as functionals of ordinary derivatives $\partial^a$ acting on ${\rm C}^\infty({\mathfrak n}^{\vee\,})$ as in (\ref{commtranslaction}). However, there are (uncountably) infinitely many solutions $\rho^a_{~bc}$ obeying (\ref{rhoCrels})~\cite{DMT1} with $C_{ab}^{~~c}$ the structure constants of the Lie algebra ${\mathfrak n}$ given by (\ref{NW4algdef}). We will choose the simplest consistent one defined by the star-commutators \begin{align} \nonumber \left[\partial_-^\star\,,\,x^-\right]_\star&=1 \ , &\left[\partial_+^\star\,,\,x^-\right]_\star&=0 \ , &\left[\partial_\star^i\,,\,x^-\right]_\star&=-{\,{\rm i}\,}\theta\,\partial_\star^i \ , &\left[\,\overline{\partial}{}_\star^{\,i}\,,\,x^-\right]_\star&= {\,{\rm i}\,}\theta\,\overline{\partial}{}_\star^{\,i} \ , \\ \nonumber \left[\partial_-^\star\,,\,x^+\right]_\star&=0 \ , &\left[\partial_+^\star\,,\,x^+\right]_\star &=1 \ , &\left[\partial_\star^i\,,\,x^+\right]_\star&=0 \ , &\left[\,\overline{\partial}{}_\star^{\,i}\,,\,x^+\right]_\star&=0 \ , \\ \nonumber \left[\partial_-^\star\,,\,z_i\right]_\star&=0 \ , &\left[\partial_+^\star\,,\,z_i\right]_\star&=-{\,{\rm i}\,}\theta\, \overline{\partial}{}^{\,i}_\star \ , &\left[\partial_\star^i\,,\,z_j\right]_\star&=\delta^i_{~j} \ , &\left[\,\overline{\partial}{}_\star^{\,i}\,,\,z_j\right]_\star&=0 \ , \\ \left[\partial_-^\star\,,\,\overline{z}_i\right]_\star&=0 \ , &\left[\partial_+^\star\,,\,\overline{z}_i\right]_\star&= {\,{\rm i}\,}\theta\,\partial_\star^i \ , &\left[\partial_\star^i\,,\,\overline{z}_j\right]_\star&=0 \ , 
&\left[\,\overline{\partial}{}_\star^{\,i}\,,\,\overline{z}_j\right]_\star& =\delta^i_{~j} \ , \label{eq:rho:nw4}\end{align} whose $O(\theta)$ parts mimic the structure of the Lie brackets (\ref{NW4algdef}). This choice ensures that the derivatives $\partial_\star^a$ will generate the isometries appropriate to the quantization of the curved spacetime $\NW_6$. All other admissible choices for $\rho^a_{~bc}$ can be mapped into those given by (\ref{eq:rho:nw4}) via non-linear redefinitions of the derivative operators $\partial^a_\star$~\cite{DMT1}. It is important to realize that the quantized derivatives do not generally obey the classical Leibniz rule, i.e. $\partial_\star^a\triangleright(f\,g)\neq f\,(\partial_\star^a\triangleright g)+(\partial_\star^a\triangleright f)\,g$ in general, but rather obey the generalized Leibniz rules spelled out in the previous section, as required for consistency at $\theta\neq0$. Let us now construct the three sets of derivatives of interest to us here. \subsection{Time Ordering \label{TOderiv}} For the time ordered case, we use (\ref{TOstargen}) to compute the star-products \begin{eqnarray} x^-*f&=&\left(x^--{\,{\rm i}\,}\theta\,{\mbf z}\cdot{\mbf\partial}+{\,{\rm i}\,}\theta\, \overline{{\mbf z}}\cdot\overline{{\mbf\partial}}\,\right)\,f \ , \nonumber\\ x^+*f&=&x^+\,f \ , \nonumber\\ z_i*f&=&\left(z_i-{\,{\rm i}\,}\theta\,x^+\,\overline{\partial}{}^{\,i} \right)\, f \ , \nonumber\\ \overline{z}_i*f&=&\left(\,\overline{z}_i+{\,{\rm i}\,}\theta\,x^+\,\partial^i \right)\,f \ . \label{TOxfprods}\end{eqnarray} Substituting these into (\ref{dxreq}) using (\ref{eq:rho:nw4}) then shows that the actions of the $*$-derivatives simply coincide with the canonical actions of the translation generators on ${\rm C}^\infty({\mathfrak n}^{\vee\,})$, so that \begin{eqnarray} \partial_*^a\triangleright f=\partial^af \ .
\label{TOderivs}\end{eqnarray} Thus the time-ordered noncommutative geometry of $\NW_6$ is invariant under {\it ordinary} translations of the spacetime in all coordinate directions, with the generators obeying the twisted Leibniz rules~(\ref{TOLeibniz}). \subsection{Symmetric Time Ordering \label{STOderiv}} Next, consider the case of symmetric time ordering. From (\ref{TOsymstargen}) we compute the star-products \begin{eqnarray} x^-\bullet f&=&\left(x^--\mbox{$\frac{{\,{\rm i}\,}\theta}2$}\, {\mbf z}\cdot{\mbf\partial}+\mbox{$\frac{{\,{\rm i}\,}\theta}2$}\,\overline{{\mbf z}} \cdot\overline{{\mbf\partial}}\,\right)\,f \ , \nonumber\\ x^+\bullet f&=&x^+\,f \ , \nonumber\\ z_i\bullet f&=&{\,\rm e}\,^{\frac{{\,{\rm i}\,}\theta}2\,\partial_-}\, \left(z_i-{\,{\rm i}\,}\theta\,x^+\,\overline{\partial}{}^{\,i} \right)\,f \ , \nonumber\\ \overline{z}_i\bullet f&=&{\,\rm e}\,^{-\frac{{\,{\rm i}\,}\theta}2\,\partial_-}\, \left(\,\overline{z}_i+{\,{\rm i}\,}\theta\, x^+\,\partial^i\right)\,f \ . \label{STOxfprods}\end{eqnarray} Substituting (\ref{STOxfprods}) into (\ref{dxreq}) using (\ref{eq:rho:nw4}) along with the derivative rule \begin{eqnarray} {\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}x^-=(x^-+{\,{\rm i}\,}\theta)~{\,\rm e}\,^{{\,{\rm i}\,}\theta\, \partial_-} \ , \label{derivrule1}\end{eqnarray} we find that the actions of the $\bullet$-derivatives on ${\rm C}^\infty({\mathfrak n}^{\vee\,})$ are generically non-trivial and given by \begin{eqnarray} \partial_-^\bullet\triangleright f&=&\partial_-f \ , \nonumber\\ \partial_+^\bullet\triangleright f&=&\partial_+f \ , \nonumber\\ \partial^i_\bullet\triangleright f&=&{\,\rm e}\,^{-\frac{{\,{\rm i}\,}\theta}2\, \partial_-}\,\partial^if \ , \nonumber\\ \overline{\partial}{}^{\,i}_\bullet\triangleright f&=& {\,\rm e}\,^{\frac{{\,{\rm i}\,}\theta}2\,\partial_-}\, \overline{\partial}{}^{\,i}f \ . 
\label{STOderivs}\end{eqnarray} Only the transverse space derivatives are modified owing to the fact that the Brinkman coordinate system is invariant under translations of the light-cone coordinates $x^\pm$. Again the twisted Leibniz rules (\ref{STOLeibniz}) are straightforward to verify in this instance. \subsection{Weyl Ordering \label{Weylderiv}} Finally, from the Weyl-ordered star-product (\ref{Weylstargen}) we compute \begin{eqnarray} x^-\star f&=&\left[x^-+\left(1-\frac1{\phi_{-\theta}({\,{\rm i}\,}\partial_-)} \right)\,\frac{{\mbf z}\cdot{\mbf\partial}}{\partial_-}+ \left(1-\frac1{\phi_{\theta}({\,{\rm i}\,}\partial_-)} \right)\,\frac{\overline{{\mbf z}}\cdot\overline{{\mbf\partial}}}{\partial_-} \right.\nonumber\\ &&\qquad-\left.2\theta\,x^+\,\left(\frac{2} {\theta\,\partial_-}-\cot\left(\mbox{$\frac\theta2\,\partial_-$} \right)\right)\,\frac{\overline{{\mbf\partial}} \cdot{\mbf\partial}}{\partial_-}\right]\,f \ , \nonumber\\ x^+\star f&=&x^+\,f \ , \nonumber\\ z_i\star f&=&\left[\frac{z_i}{\phi_{-\theta}({\,{\rm i}\,}\partial_-)}+ 2x^+\,\left(1-\frac1{\phi_{-\theta}({\,{\rm i}\,}\partial_-)}\right)\, \frac{\overline{\partial}{}^{\,i}}{\partial_-}\right]\,f \ , \nonumber\\ \overline{z}_i\star f&=& \left[\frac{\overline{z}_i}{\phi_{\theta}({\,{\rm i}\,}\partial_-)}+ 2x^+\,\left(1-\frac1{\phi_{\theta}({\,{\rm i}\,}\partial_-)}\right)\, \frac{\partial^i}{\partial_-}\right]\,f \ . 
\label{Weylxfprods}\end{eqnarray} {}From (\ref{dxreq}), (\ref{eq:rho:nw4}) and the derivative rule \begin{eqnarray} \phi_\theta({\,{\rm i}\,}\partial_-)x^-= \frac{{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}-\phi_\theta({\,{\rm i}\,}\partial_-)} {{\,{\rm i}\,}\partial_-}+x^-\,\phi_\theta({\,{\rm i}\,}\partial_-) \ , \label{derivrule2}\end{eqnarray} it then follows that the actions of the $\star$-derivatives on ${\rm C}^\infty({\mathfrak n}^{\vee\,})$ are given by \begin{eqnarray} \partial_-^\star\triangleright f&=&\partial_-f \ , \nonumber\\ \partial_+^\star\triangleright f&=&\left[\partial_++2\, \left(1-\frac{\sin(\theta\,\partial_-)}{\theta\,\partial_-} \right)\,\frac{\overline{{\mbf\partial}}\cdot{\mbf\partial}}{\partial_-} \right]f \ , \nonumber\\ \partial_\star^i\triangleright f&=& -\frac{1-{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}}{{\,{\rm i}\,}\theta\,\partial_-}\, \partial^if \ , \nonumber\\ \overline{\partial}{}_\star^{\,i} \triangleright f&=&\frac{1-{\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-}} {{\,{\rm i}\,}\theta\,\partial_-}\,\overline{\partial}{}^{\,i}f \ . \label{Weylderivs}\end{eqnarray} Thus in the completely symmetric noncommutative geometry of $\NW_6$ both the light-cone and the transverse space of the plane wave are generically only invariant under rather complicated twisted translations, obeying the involved Leibniz rules (\ref{WeylLeibniz}). \setcounter{equation}{0}\section{Traces\label{Integrals}} The final ingredient required to construct noncommutative field theory action functionals is a definition of integration. At the algebraic level, we define an integral to be a trace on the algebra $\overline{U({\mathfrak n})^\complex}$, i.e. a map $\int\!\!\!\!\!\! - ~:\overline{U({\mathfrak n})^\complex}\to\complex$ which is linear, \begin{eqnarray} \int\!\!\!\!\!\! - ~\bigl(c_1\,\Omega(f)+c_2\,\Omega(g)\bigr)= c_1\,\int\!\!\!\!\!\! - ~\Omega(f)+c_2\,\int\!\!\!\!\!\! 
- ~\Omega(g) \label{ncintlin}\end{eqnarray} for all $f,g\in{\rm C}^\infty({\mathfrak n}^{\vee\,})$ and $c_1,c_2\in\complex$, and which is cyclic, \begin{eqnarray} \int\!\!\!\!\!\! - ~\Omega(f)\cdot\Omega(g)=\int\!\!\!\!\!\! - ~\Omega(g)\cdot\Omega(f) \ . \label{ncintcyclic}\end{eqnarray} We define the integral in the star-product formalism using the usual definitions for the integration of commuting Schwartz functions in ${\rm C}^\infty(\real^6)$. Then the linearity property (\ref{ncintlin}) is automatically satisfied. To satisfy the cyclicity requirement (\ref{ncintcyclic}), we introduce~\cite{CalWohl1,BehrSyk1,AA-CAA1,DJMTWW1,FelShoi1} a measure $\kappa$ on $\real^6$ which deforms the flat space volume element ${\rm d}\mbf x$ and define \begin{eqnarray} \int\!\!\!\!\!\! - ~\Omega(f):=\int\limits_{\real^6}\,{\rm d}\mbf x~\kappa(\mbf x)~f(\mbf x) \ . \label{ncintdef}\end{eqnarray} The measure $\kappa$ is chosen in order to achieve the property (\ref{ncintcyclic}), so that \begin{eqnarray} \int\limits_{\real^6}\,{\rm d}\mbf x~\kappa(\mbf x)~(f\star g)(\mbf x)= \int\limits_{\real^6}\,{\rm d}\mbf x~\kappa(\mbf x)~(g\star f)(\mbf x) \ . \label{mucyclic}\end{eqnarray} Such a measure always exists~\cite{CalWohl1,DJMTWW1,FelShoi1} and its inclusion in the present context is natural for the curved spacetime $\NW_6$ which we are considering here. It is important to note that, for the star-products that we use, a measure which satisfies (\ref{mucyclic}) gives the integral the additional property \begin{eqnarray} \int\limits_{\real^6}\,{\rm d}\mbf x~\kappa(\mbf x)~(f\star g)(\mbf x)= \int\limits_{\real^6}\,{\rm d}\mbf x~\kappa(\mbf x)~f(\mbf x)\,g(\mbf x) \ , \label{ncintaddprop}\end{eqnarray} providing an explicit realization of the Connes-Flato-Sternheimer conjecture~\cite{FelShoi1}.
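The measure constraints derived below rest on ordinary integration by parts for Schwartz functions. As a small sanity check, the $n=1$ case of the triple-product rule (\ref{intpartsfgh}), $\int f\,g\,\partial h=-\int\bigl(f\,(\partial g)\,h+(\partial f)\,g\,h\bigr)$, can be confirmed in sympy for concrete Gaussian test functions, chosen here purely for illustration:

```python
# n = 1 integration-by-parts identity for a triple product of Schwartz
# functions on the real line: int f g h' dx = -int (f g' h + f' g h) dx.
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(-x**2)
g = x*sp.exp(-x**2)
h = sp.exp(-x**2)

lhs = sp.integrate(f*g*sp.diff(h, x), (x, -sp.oo, sp.oo))
rhs = -sp.integrate(f*sp.diff(g, x)*h + sp.diff(f, x)*g*h, (x, -sp.oo, sp.oo))
residual = sp.simplify(lhs - rhs)
```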
Since the coordinate functions $x_a$ generate the noncommutative algebra, the cyclicity constraint (\ref{mucyclic}) is equivalent to the star-commutator condition \begin{eqnarray} \int\limits_{\real^6}\,{\rm d}\mbf x~\kappa(\mbf x)~\bigl[(x_a)^n\,,\,f(\mbf x) \bigr]_\star=0 \label{starcommcond}\end{eqnarray} which must hold for arbitrary functions $f\in{\rm C}^\infty(\real^6)$ (for which the integral makes sense) and for all $n\in\nat$, $a=1,\dots,6$. Expanding the star-commutator bracket using its derivation property brings (\ref{starcommcond}) to the form \begin{eqnarray} \int\limits_{\real^6}\,{\rm d}\mbf x~\kappa(\mbf x)~\sum_{m=0}^n\,{n\choose m}\,(x_a)^{n-m}\star\bigl[x_a\,,\,f(\mbf x)\bigr]_\star\star (x_a)^m=0 \ . \label{commcondexp}\end{eqnarray} We may thus insert the explicit form of $[x_a,f]_\star$ for generic $f$ and use the ordinary integration by parts property \begin{eqnarray} \int\limits_{\real^6}\,{\rm d}\mbf x~f(\mbf x)\,g(\mbf x)\,(\partial^a)^nh(\mbf x)= (-1)^n\,\int\limits_{\real^6}\,{\rm d}\mbf x~\bigl(f(\mbf x)\,(\partial^a)^n g(\mbf x)\, h(\mbf x)+(\partial^a)^nf(\mbf x)\,g(\mbf x)\,h(\mbf x)\bigr) \label{intpartsfgh}\end{eqnarray} for Schwartz functions $f,g,h\in{\rm C}^\infty(\real^6)$. This will lead to a number of constraints on the measure~$\kappa$. The trace (\ref{ncintdef}) can also be used to define an inner product $(-,-):{\rm C}^\infty({\mathfrak n}^{\vee\,})\times{\rm C}^\infty({\mathfrak n}^{\vee\,})\to\complex$ through \begin{eqnarray} (f,g):=\int\limits_{\real^6}\,{\rm d}\mbf x~\kappa(\mbf x)~\bigl(\,\overline{f}\star g\bigr)(\mbf x) \ . \label{ncintinnprod}\end{eqnarray} Note that this is different from the inner product introduced in Section~\ref{Defs}. When we come to deal with the variational principle in the next section, we shall require that our star-derivative operators $\partial^a_\star$ be anti-hermitean with respect to the inner product (\ref{ncintinnprod}), i.e. 
$(f,\partial^a_\star\triangleright g)=-(\partial^a_\star\triangleright f,g)$, or equivalently \begin{eqnarray} \int\limits_{\real^6}\,{\rm d}\mbf x~\kappa(\mbf x)~\bigl(\,\overline{f}\star \partial_\star^a \triangleright g\bigr)(\mbf x)=-\int\limits_{\real^6}\,{\rm d}\mbf x~\kappa(\mbf x)~ \bigl(\,\overline{\partial_\star^a\triangleright f}\star g\bigr)(\mbf x) \ . \label{ncintparts}\end{eqnarray} This allows for a generalized integration by parts property~\cite{DJMTWW1} for our noncommutative integral. As always, we will now go through our list of star-products to explore the properties of the integral in each case. We will find that the measure $\kappa$ is not uniquely determined by the above criteria and there is a large flexibility in the choices that can be made. We will also find that the derivatives of the previous section must be generically modified by a $\kappa$-dependent shift in order to satisfy~(\ref{ncintparts}). \subsection{Time Ordering\label{TOint}} Using (\ref{TOxfprods}) along with the analogous $*$-products $f*x_a$ we arrive at the $*$-commutators \begin{eqnarray} \cb{x^-}{f}_\ast&=&{\,{\rm i}\,}\theta\,\left(\,\overline{\mbf z}\cdot\overline{\mbf\partial} -{\mbf z}\cdot{\mbf\partial}\right)f \ , \nonumber \\ \nonumber \cb{x^+}{f}_\ast&=&0 \ , \\ \nonumber \cb{z_i}{f}_\ast&=&z_i\,\left(1 -{\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-}\right)f-{\,{\rm i}\,}\theta\,x^+\,\left(1 +{\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-}\right)\overline{\partial}{}^{\,i}f \ , \\ \cb{\,\overline{z}_i}{f}_\ast&=&\overline{z}_i\,\left(1 -{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}\right)f+{\,{\rm i}\,}\theta\,x^+\,\left(1 +{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}\right)\partial^if \ . 
\label{eq:time:comm}\end{eqnarray} When inserted into \eqref{commcondexp}, after integration by parts and application of the derivative rule (\ref{derivrule1}) these expressions imply constraints on the corresponding measure $\kappa_\ast$ given by \begin{eqnarray} \left(1-{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}\right)\kappa_\ast&=&0 \ , \nonumber \\ \left(1+{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}\right)\overline{\partial}{}^{\,i}\kappa_\ast&=&0 \ , \nonumber \\ \nonumber \left(1-{\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-}\right)\partial^i\kappa_\ast&=&0 \ , \\ {\mbf z}\cdot{\mbf\partial}\kappa_\ast&=&\overline{\mbf z}\cdot\overline{\mbf\partial}\kappa_\ast \ . \label{eq:time:mu:all}\end{eqnarray} It is straightforward to see that the equations (\ref{eq:time:mu:all}) imply that the measure must be independent of both the light-cone position and transverse coordinates, so that \begin{equation} \label{eq:time:mu} \partial_-\kappa_\ast=\partial^i\kappa_\ast=\overline{\partial}{}^{\,i}\kappa_\ast=0 \ . \end{equation} However, the derivative $\partial_+^\ast$ in \eqref{TOderivs} does not satisfy the anti-hermiticity requirement \eqref{ncintparts}. This can be remedied by translating it by a logarithmic derivative of the measure $\kappa_*$ and defining the modified $*$-derivative \begin{eqnarray} \label{eq:time:d} \widetilde\partial{}^{\,\ast}_+=\partial_+ + \mbox{$\frac12$}\,\partial_+\ln\kappa_\ast \ . \end{eqnarray} The remaining $*$-derivatives in (\ref{TOderivs}) are unaltered. While this redefinition has no adverse effects on the commutation relations \eqref{eq:rho:nw4}, the action $\widetilde\partial{}^{\,\ast}_+\triangleright f$ contains an additional linear term in $f$ even if the function $f$ is independent of the time coordinate $x^+$. 
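The effect of the logarithmic shift in (\ref{eq:time:d}) can be seen in a one-dimensional model: with $D=\partial+\frac12\,\partial\ln\kappa$, the combination $\kappa\,(f\,Dg+(Df)\,g)$ is a total derivative, so $D$ is anti-hermitean with respect to the measure $\kappa$ once boundary terms are dropped. A sympy verification of this identity, with $x$ standing in for the time coordinate $x^+$:

```python
# D = d/dx + (1/2) d(log kappa)/dx is anti-hermitean with respect to the
# measure kappa(x) dx, because kappa*(f*Dg + (Df)*g) is a total derivative
# and so integrates to zero for rapidly decaying f, g.
import sympy as sp

x = sp.symbols('x')
f, g, kappa = (sp.Function(name)(x) for name in ('f', 'g', 'kappa'))

D = lambda h: sp.diff(h, x) + sp.Rational(1, 2)*sp.diff(sp.log(kappa), x)*h

residual = sp.simplify(kappa*(f*D(g) + D(f)*g) - sp.diff(kappa*f*g, x))
```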
\subsection{Symmetric Time Ordering\label{STOint}} Using \eqref{STOxfprods} along with the corresponding $\bullet$-products $f\bullet x_a$ we arrive at the $\bullet$-commutators \begin{eqnarray} \nonumber \cb{x^-}{f}_\bullet&=&{\,{\rm i}\,}\theta\,\left(\, \overline{\mbf z}\cdot\overline{\mbf\partial} -{\mbf z}\cdot{\mbf\partial}\right)f \ , \\ \nonumber \cb{x^+}{f}_\bullet&=&0 \ , \\ \nonumber \cb{z_i}{f}_\bullet&=&2{\,{\rm i}\,} z_i\,\sin\left(\mbox{$\frac\theta2$}\,\partial_- \right)f-2{\,{\rm i}\,}\theta\,x^+\,\overline{\partial}{}^{\,i} \cos\left(\mbox{$\frac\theta2$}\,\partial_-\right)f \ , \\ \cb{\,\overline{z}_i}{f}_\bullet&=&-2{\,{\rm i}\,}\overline{z}_i\,\sin\left(\mbox{$\frac\theta2$} \,\partial_-\right)f+2{\,{\rm i}\,}\theta\,x^+\partial^i\cos \left(\mbox{$\frac\theta2$}\,\partial_-\right)f \ . \label{eq:symtime:comm}\end{eqnarray} Substituting these into \eqref{commcondexp} and integrating by parts, we arrive at constraints on the measure $\kappa_\bullet$ given by \begin{eqnarray} \nonumber \bigl(1-\overline{\partial}{}^{\,i}\bigr)\sin\left(\mbox{$\frac\theta2$}\, \partial_-\right)\kappa_\bullet&=&0 \ , \\ \nonumber \bigl(1+\partial^i\bigr)\sin\left(\mbox{$\frac\theta2$}\, \partial_-\right)\kappa_\bullet&=&0 \ , \\ {\mbf z}\cdot{\mbf\partial}\kappa_\bullet&=&\overline{\mbf z}\cdot\overline{\mbf\partial}\kappa_\bullet \label{eq:symtime:mu:all}\end{eqnarray} which can be reduced to the conditions \begin{equation} \label{eq:symtime:mu:rest} {\mbf z}\cdot{\mbf\partial}\kappa_\bullet=\overline{\mbf z}\cdot\overline{\mbf\partial}\kappa_\bullet \ , \quad \partial_-\kappa_\bullet=0 \ . \end{equation} Now the derivative operators $\partial_+^\bullet$, $\partial^i_\bullet$ and $\overline{\partial}{}^{\,i}_\bullet$ all violate the requirement \eqref{ncintparts}. Introducing translates of $\partial^i_\bullet$ and $\overline{\partial}{}^{\,i}_\bullet$ analogously to what we did in (\ref{eq:time:d}) is problematic. 
While such a shift does not alter the canonical commutation relations between the coordinates and derivatives, i.e. the algebraic properties of the differential operators, it does violate the $\bullet$-commutator relationships \eqref{dxreq} and \eqref{eq:rho:nw4} for generic functions $f$. Consistency between differential operator and function commutators would only be possible in this case by demanding that multiplication from the left follow a Leibniz-like rule for the translated part. Thus in order to satisfy both sets of constraints, we are forced to further require that the measure $\kappa_\bullet$ depend only on the plane wave time coordinate $x^+$ so that (\ref{eq:symtime:mu:rest}) truncates to \begin{equation} \label{eq:symtime:mu} \partial^i\kappa_\bullet=\overline{\partial}{}^{\,i}\kappa_\bullet=\partial_-\kappa_\bullet=0 \ . \end{equation} The logarithmic translation of $\partial_+^\bullet$ must still be applied in order to ensure that the time derivative is anti-hermitean with respect to the noncommutative inner product. This modifies its action to \begin{eqnarray} \label{eq:symtime:d} \widetilde\partial{}^{\,\bullet}_+=\partial_++\mbox{$\frac12$}\,\partial_+\ln\kappa_\bullet \ . \end{eqnarray} The actions of all other $\bullet$-derivatives are as in \eqref{STOderivs}. Again this shifting has no adverse effects on \eqref{eq:rho:nw4}, but it carries the same warning as in the time ordered case regarding extra linear terms from the action $\widetilde\partial{}^{\,\bullet}_+\triangleright f$. 
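The same kind of symbolic check as in the time-ordered case goes through here, now involving the shift operators ${\,\rm e}\,^{\pm\frac{{\,{\rm i}\,}\theta}2\,\partial_-}$ of (\ref{STOxfprods}) and (\ref{STOderivs}), which act on functions by a formal complex translation of $x^-$ (this is just (\ref{derivrule1}) in action). A sympy sketch verifying $[\partial^i_\bullet,x^-]_\bullet=-{\,{\rm i}\,}\theta\,\partial^i_\bullet$ for one transverse pair, with illustrative symbol names:

```python
# Check [d_i, x^-]_bullet = -i*theta*d_i for the symmetric time ordering,
# where d_i acts as exp(-i*theta*d_-/2) d/dz: a z-derivative followed by
# a formal complex shift x^- -> x^- - i*theta/2.
import sympy as sp

theta = sp.symbols('theta', real=True)
xm, xp, z, zb = sp.symbols('x_m x_p z zb')
g = sp.Function('g')(xm, xp, z, zb)

shift = lambda h: h.subs(xm, xm - sp.I*theta/2)   # exp(-i*theta*d_-/2)

# left bullet-multiplication by x^- from (STOxfprods)
bullet_xm = lambda h: (xm*h - sp.I*theta/2*z*sp.diff(h, z)
                       + sp.I*theta/2*zb*sp.diff(h, zb))
# action of d_i from (STOderivs)
d_bullet = lambda h: shift(sp.diff(h, z))

comm = d_bullet(bullet_xm(g)) - bullet_xm(d_bullet(g))
residual = sp.simplify(comm + sp.I*theta*d_bullet(g))
```

Note that the shift applied to the explicit factor of $x^-$ in `bullet_xm` is exactly the operator relation (\ref{derivrule1}).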
\subsection{Weyl Ordering\label{Weylint}} Finally, the Weyl ordered star-products \eqref{Weylxfprods} along with the corresponding $f\star x_a$ products lead to the $\star$-commutators \begin{eqnarray} \nonumber \cb{x^-}{f}_\star&=&{\,{\rm i}\,}\theta\,\left(\,\overline{\mbf z}\cdot\overline{\mbf\partial} -{\mbf z}\cdot{\mbf\partial}\right)f \ , \\ \nonumber \cb{x^+}{f}_\star&=&0 \ , \\ \nonumber \cb{z_i}{f}_\star&=&{\,{\rm i}\,}\theta\,\left(z_i\,\partial_- -2x^+\,\overline{\partial}{}^{\,i}\right)f \ , \\ \cb{\,\overline{z}_i}{f}_\star&=&{\,{\rm i}\,}\theta\,\left(-\overline{z}_i\,\partial_- +2x^+\,\partial^i\right)f \ . \label{eq:weyl:comm}\end{eqnarray} Substituting these commutation relations into \eqref{commcondexp}, integrating by parts, and using the derivative rules (\ref{derivrule1}) and (\ref{derivrule2}) leads to the corresponding measure constraints \begin{eqnarray} \nonumber z_i\,\partial_-\kappa_\star&=&2x^+\,\overline{\partial}{}^{\,i}\kappa_\star \ , \\ \nonumber \overline{z}_i\,\partial_-\kappa_\star&=&2x^+\,\partial^i\kappa_\star \ , \\ {\mbf z}\cdot{\mbf\partial}\kappa_\star&=&\overline{\mbf z}\cdot\overline{\mbf\partial}\kappa_\star \ . \label{eq:weyl:mu:all}\end{eqnarray} Again these differential equations imply that the measure $\kappa_\star$ depends only on the plane wave time coordinate $x^+$ so that \begin{equation} \label{eq:weyl:mu} \partial_-\kappa_\star=\partial^i\kappa_\star=\overline{\partial}{}^{\,i}\kappa_\star=0 \ . \end{equation} Translating the derivative operator $\partial_+^\star$ as before in order to satisfy \eqref{ncintparts} yields the modified derivative \begin{eqnarray} \label{eq:weyl:d} \widetilde\partial{}^{\,\star}_+=\partial_++2\, \left(1-\frac{\sin(\theta\,\partial_-)}{\theta\,\partial_-} \right)\,\frac{\overline{{\mbf\partial}}\cdot{\mbf\partial}}{\partial_-}+ \mbox{$\frac12$}\,\partial_+\ln\kappa_\star \ , \end{eqnarray} with the remaining $\star$-derivatives in \eqref{Weylderivs} unchanged. 
Once again this produces no major alteration to \eqref{eq:rho:nw4} but does yield extra linear terms in the actions $\widetilde\partial{}^{\,\star}_+\triangleright f$. \setcounter{equation}{0}\section{Field Theory on $\mbf{\NW_6}$\label{FieldTheory}} We are now ready to apply the detailed constructions of the preceding sections to the analysis of noncommutative field theories on the plane wave $\NW_6$, regarded as the worldvolume of a non-symmetric D5-brane~\cite{KNSanjay1}. In this paper we will only study the simplest example of free scalar fields, leaving the detailed analysis of interacting field theories and higher spin (fermionic and gauge) fields for future work. The analysis of this section will set the stage for more detailed studies of noncommutative field theories in these settings, and will illustrate some of the generic features that one can expect. Given a real scalar field $\varphi\in{\rm C}^\infty({\mathfrak n}^{\vee\,})$ of mass $m$, we define an action functional using the integral (\ref{ncintdef}) by \begin{eqnarray} S[\varphi]=\int\limits_{\real^6}\,{\rm d}\mbf x~\kappa(\mbf x)~\left[ \mbox{$\frac12$}\,\eta_{ab}\,\bigl(\,\widetilde {\partial}{}^{\,a}_{\star}\triangleright\varphi\bigr)\star \bigl(\,\widetilde{\partial}{}^{\,b}_{\star}\triangleright\varphi\bigr)+ \mbox{$\frac12$}\,m^2\,\varphi\star\varphi\right] \ , \label{Svarphidef}\end{eqnarray} where $\eta_{ab}$ is the invariant Minkowski metric tensor induced by the inner product (\ref{NW4innerprod}) with the non-vanishing components $\eta_{\pm\,\mp}=1$ and $\eta_{z_i\,\overline{z}_j}=\frac12\,\delta_{ij}$. The tildes on the derivatives in (\ref{Svarphidef}) indicate that the time component must be appropriately shifted as described in the previous section. 
Using the property (\ref{ncintaddprop}) we may simplify the action to the form \begin{eqnarray} S[\varphi]=\int\limits_{\real^6}\,{\rm d}\mbf x~\kappa(\mbf x)~\left[ \mbox{$\frac12$}\,\eta_{ab}\,\bigl(\,\widetilde {\partial}{}^{\,a}_{\star}\triangleright\varphi\bigr) \bigl(\,\widetilde{\partial}{}^{\,b}_{\star}\triangleright\varphi\bigr)+ \mbox{$\frac12$}\,m^2\,\varphi^2\right] \ . \label{Svarphisimpl}\end{eqnarray} By using the integration by parts property (\ref{ncintparts}) on Schwartz fields $\varphi$, we may easily compute the first order variation of the action (\ref{Svarphisimpl}) to be \begin{eqnarray} \frac{\delta S[\varphi]}{\delta\varphi}~\delta\varphi:= S[\varphi+\delta\varphi]-S[\varphi]=-\int\limits_{\real^6}\,{\rm d}\mbf x~ \kappa(\mbf x)~\left[\eta_{ab}\,\overline{\widetilde {\partial}{}_\star^{\,a}}\triangleright\bigl(\,\widetilde {\partial}{}_\star^{\,b}\triangleright\varphi\bigr)-m^2\,\varphi \right]~\delta\varphi \ . \label{actionvary1}\end{eqnarray} Applying the variational principle $\frac{\delta S[\varphi]}{\delta\varphi}=0$ to (\ref{actionvary1}) thereby leads to the noncommutative Klein-Gordon field equation \begin{eqnarray} \Box^\star\triangleright\varphi-m^2\,\varphi=0 \label{NCeom}\end{eqnarray} where \begin{eqnarray} \Box^\star\triangleright\varphi:=2\,\partial_+\triangleright \partial_-\varphi+{\mbf\partial}^\top\triangleright \overline{{\mbf\partial}}\triangleright\varphi+\mbox{$\frac12$}\, \partial_+\ln\kappa~\partial_-\varphi \label{Boxstardef}\end{eqnarray} and we have used $\partial_-\kappa=0$. The second order $\star$-differential operator $\Box^\star$ should be regarded as a deformation of the covariant Laplace operator $\Box_0^\star$ corresponding to the commutative plane wave geometry of $\NW_6$.
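The variational step can be checked mechanically in the commutative $\theta\to0$ limit. The following SymPy sketch (ours, with the transverse directions suppressed; the symbols `xp`, `xm` denote $x^\pm$) applies the Euler--Lagrange equations to the density in (\ref{Svarphisimpl}) and recovers the flat light-cone wave operator of (\ref{NCeom}):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

xp, xm, m = sp.symbols('xp xm m')
phi = sp.Function('phi')

# theta -> 0 limit of the density in (Svarphisimpl) with the transverse
# directions suppressed: (1/2) eta_{ab} produces the single cross term
# d_+ phi d_- phi, plus the mass term
L = sp.Derivative(phi(xp, xm), xp)*sp.Derivative(phi(xp, xm), xm) \
    + sp.Rational(1, 2)*m**2*phi(xp, xm)**2

# Euler-Lagrange equation: m^2 phi - 2 d_+ d_- phi = 0,
# i.e. the flat limit of (NCeom)
eom = euler_equations(L, phi(xp, xm), [xp, xm])[0]
```

The harmonic and curvature corrections of $\Box_0^\star$ are of course invisible in this limit; they reappear through the measure and through the curved space derivations discussed next.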
This Laplacian coincides with the quadratic Casimir element \begin{eqnarray} {\sf C}:=\theta^{-2}\,\eta^{ab}\,{\sf X}_a\,{\sf X}_b =2\,{\sf J}\,{\sf T}+\mbox{$\frac12\,\sum\limits_{i=1,2}$}\,\bigl( {\sf P}_+^i\,{\sf P}_-^i+{\sf P}_-^i\,{\sf P}_+^i\bigr) \label{quadCasimir}\end{eqnarray} of the universal enveloping algebra $U({\mathfrak n})$, expressed in terms of left or right isometry generators for the action of the isometry group $\mathcal{N}_{\rm L}\times\mathcal{N}_{\rm R}$ on $\NW_6$~\cite{PK1,CFS1,HSz1}. However, in the manner in which we have constructed things, this is no longer the case for the deformed operator $\Box^\star$. Recall that the approximation in which our quantization of the geometry of $\NW_6$ holds is the small time limit $x^+\to0$ in which the plane wave approaches flat six-dimensional Minkowski space ${\mathbb E}^{1,5}$. To incorporate the effects of the curved geometry of $\NW_6$ into our formalism, we have to replace the derivative operators $\widetilde{\partial}{}_\star^{\,a}$ appearing in (\ref{Svarphidef}) with appropriate curved space analogs $\delta_\star^a$~\cite{BehrSyk1,HoMiao1}. Recall that the derivative operators $\partial_\star^{a}$ are {\it not} derivations of the star-product $\star$, but instead obey the deformed Leibniz rules (\ref{defLeibniz}). The deformation arose from twisting the co-action of the bialgebra $U({\mathfrak g})$ so that it generated automorphisms of the noncommutative algebra of functions, i.e. isometries of the noncommutative plane wave. The basic idea is to now ``absorb'' these twistings into derivations $\delta_\star^a$ obeying the usual Leibniz rule \begin{eqnarray} \delta_\star^a\triangleright(f\star g)=\left(\delta_\star^a \triangleright f\right)\star g+f\star\left(\delta_\star^a \triangleright g\right) \ .
\label{deltaLeibniz}\end{eqnarray} These derivations generically act on ${\rm C}^\infty({\mathfrak n}^{\vee\,})$ as the noncommutative $\star$-polydifferential operators \begin{eqnarray} \delta_\star^a\triangleright f=\sum_{n=1}^\infty\,\xi_a{}^{a_1\cdots a_n}\star\left(\partial_{a_1}^\star\triangleright\cdots\triangleright \partial_{a_n}^\star\triangleright f\right) \label{polydiffops}\end{eqnarray} with $\xi_a{}^{a_1\cdots a_n}\in{\rm C}^\infty({\mathfrak n}^{\vee\,})$. Unlike the derivatives $\partial_\star^a$, these derivations will no longer star-commute with one another. There is a one-to-one correspondence~\cite{Kont1} between such derivations $\delta_\star^a$ and Poisson vector fields $E^a=E^a{}_b~\partial^b$ on ${\mathfrak n}^{\vee\,}$ obeying \begin{eqnarray} E^a\circ\Theta(f,g)=\Theta(E^af,g)+\Theta(f,E^ag) \label{Poissonvecfields}\end{eqnarray} for all $f,g\in{\rm C}^\infty({\mathfrak n}^{\vee\,})$. To leading order one has $\delta_\star^a\triangleright f=E^a{}_b\star(\partial_\star^b\triangleright f)+O(\theta)$. By identifying the Lie algebra ${\mathfrak n}$ with the tangent space to $\NW_6$, at this order the vector fields $E^a$ can be thought of as defining a natural local frame with flat metric $\eta_{ab}$ and a curved metric tensor $G^\star_{ab}=\frac12\,\eta_{cd}\,(E^c{}_a\star E^d{}_b+E^d{}_a\star E^c{}_b)$ on the noncommutative space $\NW_6$. However, for our star-products there are always higher order terms in (\ref{polydiffops}) which spoil this interpretation. The noncommutative frame fields $\delta_\star^a$ describe the {\it quantum} geometry of the plane wave $\NW_6$. In particular, the metric tensor $G^\star$ will in general differ from the classical open string metric $G_{\rm open}$. While the operators $\delta_\star^a$ always exist as a consequence of the Kontsevich formality map~\cite{Kont1,BehrSyk1}, computing them explicitly is a highly difficult problem.
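As a small illustration of condition (\ref{Poissonvecfields}) (ours; one complex transverse pair with the constant leading bi-vector, normalization dropped), the rotation vector field $E=z\,\partial-\overline{z}\,\overline{\partial}$ is a Poisson vector field, while a generic vector field is not:

```python
import sympy as sp

z, zb = sp.symbols('z zb')  # zb denotes the conjugate coordinate

def Theta(f, g):
    # Leading constant transverse bi-vector:
    # Theta(f,g) = df/dz dg/dzb - df/dzb dg/dz
    return sp.diff(f, z)*sp.diff(g, zb) - sp.diff(f, zb)*sp.diff(g, z)

def E(h):
    # Rotation generator z d/dz - zb d/dzb
    return z*sp.diff(h, z) - zb*sp.diff(h, zb)

f, g = z**3*zb + zb**2, z*zb**2 + z**2
print(sp.expand(E(Theta(f, g)) - Theta(E(f), g) - Theta(f, E(g))))  # 0

def E2(h):
    # A non-example: z^2 d/dz is not a Poisson vector field
    return z**2*sp.diff(h, z)

print(sp.expand(E2(Theta(z, zb)) - Theta(E2(z), zb) - Theta(z, E2(zb))))  # -2*z
```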
We will see some explicit examples below, as we now begin to tour through our three star-products. Throughout we shall take the natural choice of measure $\kappa=\sqrt{|\det G|}=\frac12$, the constant Riemannian volume density of the $\NW_6$ plane wave geometry. \subsection{Time Ordering\label{ScalarTO}} In the case of time ordering, we use (\ref{TOderivs}) to compute \begin{eqnarray} \Box^*\triangleright\varphi=\left(2\,\partial_+\,\partial_-+ \overline{{\mbf\partial}}\cdot{\mbf\partial}\right)\varphi \label{TOBoxeq}\end{eqnarray} and thus the equation of motion coincides with that of a free scalar particle on flat Minkowski space ${\mathbb E}^{1,5}$ (deviations from flat spacetime can only come about here by choosing a time-dependent measure $\kappa_*$). This illustrates the point made above that the treatment of the present paper tackles only the semi-classical flat space limit of the spacetime $\NW_6$. The appropriate curved geometry for this ordering corresponds to the global coordinate system (\ref{NW4metricNW}) in which the classical Laplace operator is given by \begin{eqnarray} \Box_0^*=2\,\partial_+\,\partial_-+ \left|{\mbf\partial}+\mbox{$\frac\ii2$}\,\theta\,\overline{{\mbf z}}\, \partial_-\right|^2 \ , \label{TOBox0}\end{eqnarray} so that the free wave equation $(\Box_0^*-m^2)\varphi=0$ is equivalent to the Schr\"odinger equation for a particle of charge $p^+$ (the momentum along the $x^-$ direction) in a constant magnetic field of strength~$\theta$. A global pseudo-orthonormal frame is provided by the commutative vector fields \begin{eqnarray} E_-^*&=&\partial_- \ , \nonumber\\ E_+^*&=&\partial_+-{\,{\rm i}\,}\theta\,\left( {\mbf z}\cdot{\mbf\partial}-\overline{{\mbf z}}\cdot\overline{{\mbf\partial}}\,\right) \ , \nonumber\\ E^{i}_*&=&\partial^i \ , \nonumber\\ \overline{E}{}_{*}^{\,i}&=&\overline{\partial}{}^{\,i} \ .
\label{TOorthoframe}\end{eqnarray} Determining the derivations $\delta_*^a$ corresponding to the commuting frame (\ref{TOorthoframe}) on the quantum space is in general rather difficult. Evidently, from the coproduct structure (\ref{TOLeibniz}) the action along the light-cone position is given by \begin{eqnarray} \delta_-^*\triangleright f=\partial_-f \ . \label{TOdeltaminus}\end{eqnarray} This is simply a consequence of the fact that translations along $x^-$ generate an automorphism of the noncommutative algebra of functions, i.e. an isometry of the noncommutative geometry. From the Hopf algebra coproduct (\ref{TOcoprods}) we have \begin{eqnarray} \Delta_*\bigl({\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}\bigr)= {\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}\otimes{\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-} \label{TOcoprodglobal}\end{eqnarray} and consequently \begin{eqnarray} {\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}\triangleright(f*g)=\bigl( {\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}\triangleright f\bigr)*\bigl( {\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}\triangleright g\bigr) \ . \label{xmautoNC}\end{eqnarray} On the other hand, the remaining isometries involve intricate twistings between the light-cone and transverse space directions. For example, let us demonstrate how to unravel the coproduct rule for $\partial_+^*$ in (\ref{TOLeibniz}) into the desired symmetric Leibniz rule (\ref{deltaLeibniz}) for $\delta_+^*$. This can be achieved by exploiting the $*$-product identities \begin{eqnarray} z_i*f&=&\bigl({\,\rm e}\,^{{\,{\rm i}\,}\theta\,\partial_-}f\bigr)*z_i-2{\,{\rm i}\,}\theta\, x^+\,\overline{\partial}{}^{\,i}f \ , \nonumber\\ \overline{z}_i*f&=&\bigl({\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-}f\bigr)* \overline{z}_i+2{\,{\rm i}\,}\theta\,x^+\,\partial^if \label{TOstarprodcomm}\end{eqnarray} along with the commutativity properties $[\partial_-^*,z_i]_*=[\partial_-^*,\overline{z}_i]_*=0$ for $i=1,2$ and for arbitrary functions $f$. 
Using in addition the modified Leibniz rules (\ref{TOLeibniz}) along with the $*$-multiplication properties (\ref{TOxfprods}) we thereby find \begin{eqnarray} \delta_+\triangleright f=\left[x^+\,\partial_++\mbox{$\frac1{2{\,{\rm i}\,}}$}\, \left({\mbf z}\cdot\overline{{\mbf\partial}}+\overline{{\mbf z}}\cdot{\mbf\partial}\right) \right]f \ . \label{TOdeltap}\end{eqnarray} This action mimics the form of the classical frame field $E_+^*$ in (\ref{TOorthoframe}). Finally, for the transversal isometries, one can attempt to seek functions $g^i\in{\rm C}^\infty({\mathfrak n}^{\vee\,})$ such that $g^i*f=({\,\rm e}\,^{-{\,{\rm i}\,}\theta\,\partial_-}f)*g^i$ in order to absorb the light-cone translation in the Leibniz rule for $\partial_*^i$ in (\ref{TOLeibniz}). This would mean that the $x^-$ translations are generated by {\it inner} automorphisms of the noncommutative algebra. If such functions exist, then the corresponding derivations are given by $\delta^i_*\triangleright f=g^i*\partial_*^if$ (no sum over $i$) and similarly for $\overline{\delta}{}^{\,i}_*$. However, it is doubtful that such inner derivations exist, and the transverse space frame fields are more likely to be given by higher-order $*$-polyvector fields. For example, using similar steps to those which led to (\ref{TOdeltap}), one can show that the actions \begin{eqnarray} \delta_*\triangleright f&:=&\left(\,\overline{{\mbf z}}\cdot{\mbf\partial}+2{\,{\rm i}\,} x^+\,\partial_+-{\,{\rm i}\,}\theta\,x^+\,\overline{{\mbf\partial}}\cdot{\mbf\partial}\right)f \ , \nonumber\\ \overline{\delta}{}_*\triangleright f&:=&\left({{\mbf z}}\cdot \overline{{\mbf\partial}}-2{\,{\rm i}\,} x^+\,\partial_++{\,{\rm i}\,}\theta\,x^+\,\overline{{\mbf\partial}}\cdot{\mbf\partial}\right)f \label{TOdeltatransv}\end{eqnarray} define derivations of the $*$-product on $\NW_6$, and hence naturally determine elements of a noncommutative transverse frame.
The action of the corresponding noncommutative Laplacian $\eta_{ab}\,\delta_*^a\triangleright(\delta_*^b\triangleright\varphi)$ deforms the harmonic oscillator dynamics generated by (\ref{TOBox0}) by non-local higher spatial derivative terms. These extra terms will have significant ramifications at large energies for motion in the transverse space. This could have profound physical effects in the interacting noncommutative quantum field theory. In particular, it may alter the UV/IR mixing story~\cite{MVRS1} in an interesting way. For time-dependent noncommutativity with standard tree-level propagators, UV/IR mixing becomes intertwined with violations of energy conservation in an intriguing way~\cite{BG1,RS1}, and it would be interesting to see how our modified free field propagators affect this analysis. It would also be interesting to see if and how these modifications are related to the generic connection between wave propagation on homogeneous plane waves and the Lewis-Riesenfeld theory of time-dependent harmonic oscillators~\cite{BlauOL1}. \subsection{Symmetric Time Ordering\label{ScalarSTO}} The analysis in the case of symmetric time ordering is very similar to that just performed, so we will be very brief and only highlight the essential changes. From (\ref{STOderivs}) we find once again that the Laplacian (\ref{Boxstardef}) coincides with the flat space wave operator \begin{eqnarray} \Box^\bullet\triangleright\varphi=\left(2\,\partial_+\,\partial_-+ \overline{{\mbf\partial}}\cdot{\mbf\partial}\right)\varphi \ . \label{STOBoxeq}\end{eqnarray} The relevant coordinate system in this case is given by the Brinkman metric (\ref{NW4metricBrink}) for which the classical Laplace operator reads \begin{eqnarray} \Box_0^\bullet=2\,\partial_+\,\partial_-+\overline{{\mbf\partial}}\cdot {\mbf\partial}-\mbox{$\frac14$}\,\theta^2\,|{\mbf z}|^2\,\partial_-^2 \ .
\label{STOBox0}\end{eqnarray} A global pseudo-orthonormal frame in this case is provided by the vector fields \begin{eqnarray} E_-^\bullet&=&\partial_- \ , \nonumber\\ E_+^\bullet&=&\partial_++ \mbox{$\frac18$}\,\theta^2\,|{\mbf z}|^2\,\partial_- \ , \nonumber\\ E^{i}_\bullet&=&\partial^i \ , \nonumber\\ \overline{E}{}_{\bullet}^{\,i}&=&\overline{\partial}{}^{\,i} \ . \label{STOorthoframe}\end{eqnarray} The corresponding twisted derivations $\delta^a_\bullet$ which symmetrize the Leibniz rules (\ref{STOLeibniz}) can be constructed analogously to those of the time ordering case in Section~\ref{ScalarTO} above. \subsection{Weyl Ordering\label{ScalarWeyl}} Finally, the case of Weyl ordering is particularly interesting because the effects of curvature are present even in the flat space limit. Using (\ref{Weylderivs}) we find the Laplacian \begin{eqnarray} \Box^\star\triangleright\varphi=\left(2\,\partial_+\,\partial_- +2\,\left[2\,\left(1-\frac{\sin(\theta\,\partial_-)} {\theta\,\partial_-}\right)+\frac{1-\cos(\theta\,\partial_-)} {\theta^2\,\partial_-^2}\right]\,\overline{{\mbf\partial}}\cdot{\mbf\partial} \right)\varphi \label{WeylBoxeq}\end{eqnarray} which coincides with the flat space Laplacian only at $\theta=0$. To second order in the deformation parameter $\theta$, the equation of motion (\ref{NCeom}) thereby yields a correction to the usual flat space Klein-Gordon equation given by \begin{eqnarray} \left[\left(2\,\partial_+\,\partial_-+\overline{{\mbf\partial}}\cdot{\mbf\partial}-m^2 \right)+\mbox{$\frac7{12}$}\,\theta^2\,\partial_-^2\,\overline{{\mbf\partial}}\cdot {\mbf\partial}+O\left(\theta^4\right)\right]\varphi=0 \ . \label{KGeqcorr}\end{eqnarray} Again we find that only the transverse space motion is altered by noncommutativity, but this time through a non-local dependence on the light-cone momentum $p^+$ yielding a drastic modification of the dispersion relation for free wave propagation in the noncommutative spacetime.
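The coefficient $\frac7{12}$ can be verified by a one-line series expansion (ours), with $t$ standing for the scalar $\theta\,\partial_-$ acting on a light-cone momentum mode:

```python
import sympy as sp

t = sp.symbols('t')  # t plays the role of theta*d_- on a momentum mode

# Symbol of the transverse part of the Weyl-ordered Laplacian (WeylBoxeq):
# the full prefactor of dbar.d
bracket = 2*(2*(1 - sp.sin(t)/t) + (1 - sp.cos(t))/t**2)

# Expansion: 1 + (7/12) t^2 + O(t^4), reproducing the correction in (KGeqcorr)
expansion = sp.expand(sp.series(bracket, t, 0, 4).removeO())
```

Only even powers of $t$ appear, so the first correction beyond (\ref{KGeqcorr}) is indeed of order $\theta^4$ in the light-cone momentum $p^+$.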
This dependence is natural. The classical mass-shell condition for motion in the curved background is $2\,p^+\,p^-+|4\,\theta\,p^+\,\mbf\lambda|^2=m^2$, where $\mbf\lambda\in\complex^2$ represents the position and radius of the circular trajectories in the background magnetic field~\cite{CFS1}. Thus the quantity $4\,\theta\,p^+\,\mbf\lambda$ can be interpreted as the momentum for motion in the transverse space. The operator (\ref{WeylBoxeq}) incorporates the appropriate noncommutative deformation of this motion. It illustrates the point that the fundamental quanta governing the interactions in the present class of noncommutative quantum field theories are not likely to involve the particle-like dipoles of the flat space cases~\cite{Sheikh1,BigSuss1}, but more likely string-like objects owing to the nonvanishing $H$-flux in (\ref{NS2formBrink}). These open string quanta become polarized as dipoles experiencing a net force due to their couplings to the non-uniform $B$-field. It is tempting to speculate that, in contrast to the other orderings, the Weyl ordering naturally incorporates the new vacua corresponding to long string configurations which are due entirely to the time-dependent nature of the background Neveu-Schwarz field~\cite{BDAKZ1}. While the Weyl ordered star-product is natural from an algebraic point of view, it does not correspond to a natural coordinate system for the plane wave $\NW_6$ due to the complicated form of the group product rule (\ref{Weylgpprodexpl}) in this case. In particular, the frame fields in this instance will be quite complicated. Computing the corresponding twisted derivations $\delta^a_\star$ directly would again be extremely cumbersome, but luckily we can exploit the equivalence between the star-products $\star$ and $*$ derived in Section~\ref{WOP}. 
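Indeed, if the equivalence (\ref{WeylTOrel}) acts by conjugation, $f\star g=\mathcal{G}^{~}_\Omega\bigl(\mathcal{G}_\Omega^{-1}(f)*\mathcal{G}_\Omega^{-1}(g)\bigr)$, as is standard for cohomologically equivalent star-products, then any derivation $\delta$ of the $*$-product is carried to a derivation $\delta'=\mathcal{G}^{~}_\Omega\circ\delta\circ\mathcal{G}_\Omega^{-1}$ of the $\star$-product, since \begin{eqnarray} \delta'\triangleright(f\star g)&=&\mathcal{G}^{~}_\Omega\Bigl(\,\delta\triangleright \bigl(\mathcal{G}_\Omega^{-1}(f)*\mathcal{G}_\Omega^{-1}(g)\bigr)\Bigr) \nonumber\\ &=&\mathcal{G}^{~}_\Omega\Bigl(\bigl(\delta\triangleright\mathcal{G}_\Omega^{-1}(f)\bigr)* \mathcal{G}_\Omega^{-1}(g)+\mathcal{G}_\Omega^{-1}(f)*\bigl(\delta\triangleright \mathcal{G}_\Omega^{-1}(g)\bigr)\Bigr) \nonumber\\ &=&\bigl(\delta'\triangleright f\bigr)\star g+f\star\bigl(\delta'\triangleright g\bigr) \ . \nonumber \end{eqnarray}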
Given the derivations $\delta^a_*$ constructed in Section~\ref{ScalarTO} above, we may use the differential operator (\ref{Gdiffopexpl}) which implements the equivalence (\ref{WeylTOrel}) to define \begin{eqnarray} \delta^a_\star\triangleright f:=\mathcal{G}^{~}_\Omega\circ \delta^a_*\triangleright\bigl(\mathcal{G}_\Omega^{-1}(f)\bigr) \ . \label{Weyldelta}\end{eqnarray} These noncommutative frame fields will lead to the appropriate curved space extension of the Laplace operator in (\ref{WeylBoxeq}). \setcounter{equation}{0}\section{Worldvolume Field Theories\label{D3Branes}} In this final section we will describe how to build noncommutative field theories on regularly embedded worldvolumes of D-branes in the spacetime $\NW_6$ using the formalism described above. We shall describe the general technique on a representative example by comparing the noncommutative field theory on $\NW_6$ which we have constructed in this paper to that of the noncommutative D3-branes which was constructed in~\cite{HSz1}. We shall do so in a general fashion which illustrates how the construction extends to generic D-branes. This will provide further perspective on the natures of the different quantizations we have used throughout, and also illustrate the overall consistency of our results. As we will now demonstrate, we can view the noncommutative geometry of $\NW_6$, in the manner constructed above, as a collection of all euclidean noncommutative D3-branes taken together. This is done by restricting the geometry to obtain the usual quantization of coadjoint orbits in ${\mathfrak n}^{\vee\,}$ (as opposed to all of ${\mathfrak n}^{\vee\,}$ as described above). This restriction defines an alternative and more geometrical approach to the quantization of these branes which does not rely upon working with representations of the Lie group $\mathcal{N}$, and which is more adapted to the flat space limit $\theta\to0$. 
This procedure can be thought of as somewhat opposite to the philosophy of~\cite{HSz1}, which quantized the geometry of a non-symmetric D5-brane wrapping $\NW_6$~\cite{KNSanjay1} by viewing it as a noncommutative foliation by these euclidean D3-branes. Here the quantization of the spacetime-filling brane in $\NW_6$ has been carried out independently leading to a much simpler noncommutative geometry which correctly induces the anticipated worldvolume field theories on the ${\mathbb E}^4$ submanifolds of $\NW_6$. The euclidean D3-branes of interest wrap the non-degenerate conjugacy classes of the group $\mathcal{N}$ and are coordinatized by the transverse space ${\mbf z}\in\complex^2\cong{\mathbb E}^4$~\cite{SF-OF1}. They are defined by the spacelike hyperplanes of constant time in $\NW_6$ given by the transversal intersections of the null hypersurfaces \begin{eqnarray} x^+&=&{\rm constant} \ , \nonumber\\ x^-+\mbox{$\frac14$}\,\theta\,|{\mbf z}|^2\,\cot\left(\mbox{$\frac12$}\, \theta\,x^+\right)&=&{\rm constant} \ , \label{D3subsps}\end{eqnarray} independently of the chosen coordinate frame. This describes the brane worldvolume as a wavefront expanding in a sphere $\Sphere^3$ in the transverse space. In the semi-classical flat space limit $\theta\to0$, the second constraint in (\ref{D3subsps}) to leading order becomes \begin{eqnarray} C:=2\,x^+\,x^-+|{\mbf z}|^2={\rm constant} \ . \label{Cdefconst}\end{eqnarray} The function $C$ on ${\mathfrak n}^{\vee\,}$ corresponds to the Casimir element (\ref{quadCasimir}) and the constraint (\ref{Cdefconst}) is analogous to the requirement that Casimir operators act as scalars in irreducible representations. Similarly, the constraint on the time coordinate $x^+$ in (\ref{D3subsps}) is analogous to the requirement that the central element ${\sf T}$ act as a scalar operator in any irreducible representation of $\mathcal{N}$. 
Let $\pi:\NW_6\to{\mathbb E}^4$ be the projection of the six-dimensional plane wave onto the worldvolume of the symmetric D3-branes. Let $\pi^\sharp:{\rm C}^\infty({\mathbb E}^4)\to{\rm C}^\infty(\NW_6)$ be the induced algebra morphism defined by pull-back $\pi^\sharp(f)=f\circ\pi$. To consistently reduce the noncommutative geometry from all of $\NW_6$ to its conjugacy classes, we need to ensure that the candidate star-product on ${\mathfrak n}^{\vee\,}$ respects the Casimir property of the functions $x^+$ and $C$, i.e. that $x^+$ and $C$ star-commute with every function $f\in{\rm C}^\infty({\mathfrak n}^{\vee\,})$. Only in that case can the star-product be consistently restricted from all of $\NW_6$ to a star-product $\star_{x^+}$ on the conjugacy classes ${\mathbb E}^4$ defined by \begin{eqnarray} f\star_{x^+}g:=\pi^\sharp(f)\star\pi^\sharp(g) \ . \label{starxpdef}\end{eqnarray} Then one has the compatibility condition \begin{eqnarray} \iota^\sharp(f\star g)=\iota^\sharp(f)\star_{x^+}\iota^\sharp(g) \label{compcondWeyl}\end{eqnarray} where $\iota^\sharp:{\rm C}^\infty(\NW_6)\to{\rm C}^\infty({\mathbb E}^4)$ is the pull-back induced by the inclusion map $\iota:{\mathbb E}^4\hookrightarrow\NW_6$. In this case one has an isomorphism ${\rm C}^\infty({\mathbb E}^4)\cong{\rm C}^\infty(\NW_6)/\mathcal{J}$ of associative noncommutative algebras~\cite{Waldmann1}, where $\mathcal{J}$ is the two-sided ideal of ${\rm C}^\infty(\NW_6)$ generated by the Casimir constraints $(x^+-{\rm constant})$ and $(C-{\rm constant})$. This procedure is a noncommutative version of Poisson reduction, with the Poisson ideal $\mathcal{J}$ implementing the geometric requirement that the Seiberg-Witten bi-vector $\Theta$ be tangent to the conjugacy classes. {}From the star-commutators (\ref{eq:time:comm}), (\ref{eq:symtime:comm}) and (\ref{eq:weyl:comm}) we see that $[x^+,f]_\star=0$ for all three of our star-products. However, the condition $[C,f]_\star=0$ is {\it not} satisfied. 
Although classically one has the Poisson commutation $\Theta(C,f)=0$, one can only consistently restrict the star-products by first defining an appropriate projection of the algebra of functions on ${\mathfrak n}^{\vee\,}$ onto the star-subalgebra $\mathcal{C}$ of functions which star-commute with the Casimir function $C$. One easily computes that $\mathcal{C}$ naturally consists of functions $f$ which are independent of the light-cone position, i.e. $\partial_-f=0$. Then the projection $\iota^\sharp$ above may be applied to the subalgebra $\mathcal{C}$ on which it obeys the requisite compatibility condition (\ref{compcondWeyl}). The general conditions for reduction of Kontsevich star-products to D-submanifolds of Poisson manifolds are described in~\cite{CattFel2,CFal1}. With these projections implicitly understood, one straightforwardly finds that all three star-products (\ref{TOstargen}), (\ref{TOsymstargen}) and (\ref{Weylstargen}) restrict to \begin{eqnarray} f\star_{x^+}g=\mu\circ\exp\left[{\,{\rm i}\,}\theta\,x^+\,\left( {\mbf\partial}^\top\otimes\overline{{\mbf\partial}}- {\overline{{\mbf\partial}}}{}^{\,\top}\otimes{\mbf\partial}\right)\right]f\otimes g \label{fstargrestrict}\end{eqnarray} for functions $f,g\in{\rm C}^\infty({\mathbb E}^4)$. This is just the Moyal product, with noncommutativity parameter $\theta\,x^+$, on the noncommutative euclidean D3-branes. It is cohomologically equivalent to the Voros product which arises from quantizing the conjugacy classes through endomorphism algebras of irreducible representations of the twisted Heisenberg algebra ${\mathfrak n}$, with a normal or Wick ordering prescription for the generators ${\sf P}_\pm^i$~\cite{HSz1}. In this case, the noncommutative euclidean space arises from a projection of $U({\mathfrak n})$ in the discrete representation $V^{p^+,p^-}$ whose second Casimir invariant (\ref{quadCasimir}) is given in terms of light-cone momenta as ${\sf C}=-2\,p^+\,(p^-+\theta)$ and with ${\sf T}=\theta\,p^+$. 
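The restricted product (\ref{fstargrestrict}) is simple enough to implement directly. The following SymPy sketch (ours) realizes it for a single transverse pair, under the assumed conventions that $\partial$, $\overline{\partial}$ act as $\partial/\partial z$, $\partial/\partial\overline{z}$ and with `tp` denoting the combination $\theta\,x^+$; on polynomials the exponential series terminates, so associativity and the Moyal commutator can be checked exactly:

```python
import sympy as sp
from math import factorial

z, zb, tp = sp.symbols('z zb tp')  # tp stands for theta*x^+

def star(f, g, order=8):
    # Restricted product (fstargrestrict) for one transverse pair, assuming
    # d = d/dz and dbar = d/dzb; the series terminates on polynomials
    total = 0
    for n in range(order):
        term = 0
        for k in range(n + 1):
            term += sp.binomial(n, k)*(-1)**(n - k) \
                * sp.diff(sp.diff(f, z, k), zb, n - k) \
                * sp.diff(sp.diff(g, zb, k), z, n - k)
        total += (sp.I*tp)**n/factorial(n)*term
    return sp.expand(total)

# Time-dependent Moyal commutator of the transverse coordinates
print(star(z, zb) - star(zb, z))  # 2*I*tp

# Exact associativity on polynomials
f, g, h = z**2*zb, z + zb**2, z*zb
assert sp.expand(star(star(f, g), h) - star(f, star(g, h))) == 0
```

With these (assumed) sign conventions one finds $[z,\overline{z}\,]_{\star_{x^+}}=2{\,{\rm i}\,}\theta\,x^+$, exhibiting the time-dependent Moyal noncommutativity on the euclidean D3-branes; contrast this with the endomorphism-algebra (Voros) quantization of~\cite{HSz1} recalled above.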
In this approach the noncommutativity parameter is naturally the {\it inverse} of the effective magnetic field $p^+\,\theta$. On the other hand, the present analysis is a more geometrical approach to the quantization of symmetric D3-branes in $\NW_6$ which deforms the euclidean worldvolume geometry by the time parameter $\theta\,x^+$ without resorting to endomorphism algebras. The relationship between the two sets of parameters is given by $x^+=p^+\,\tau$, where $\tau$ is the proper time coordinate for geodesic motion in the pp-wave geometry of $\NW_6$. In contrast to the coadjoint orbit quantization~\cite{HSz1}, the noncommutativity found here matches exactly that predicted from string theory in the semi-classical limit~\cite{DAK1}, which asserts that the Seiberg-Witten bi-vector on the D3-branes is given by $\Theta_{x^+}=\frac\ii2\,\sin(\theta\,x^+)~{\mbf\partial}^\top\wedge \overline{{\mbf\partial}}$. Note that the present analysis also covers as a special case the degenerate cylindrical null branes located at time $x^+=0$~\cite{SF-OF1}, for which (\ref{fstargrestrict}) becomes the ordinary pointwise product $f\star_0g=f\,g$ of worldvolume fields and as expected these branes support a {\it commutative} worldvolume geometry. In contrast, the commutative null branes correspond to the class of continuous representations of the twisted Heisenberg algebra having quantum number $p^+=0$ which must be dealt with separately~\cite{HSz1}. It is elementary to check that the rest of the geometrical constructs of this paper reduce to the standard ones appropriate for a Moyal space. By defining \begin{eqnarray} \partial_{\star_{x^+}}^a\triangleright f:=\iota^\sharp\circ\partial_\star^a\triangleright\bigl(\pi^\sharp(f)\bigr) \ , \label{derivxpdef}\end{eqnarray} one finds that the actions of the derivatives constructed in Section~\ref{Derivatives} all reduce to the standard ones of flat noncommutative euclidean space, i.e. 
$\partial_{\star_{x^+}}^i\triangleright f=\partial^if$, $\overline{\partial}{}^{\,i}_{\star_{x^+}}\triangleright f=\overline{\partial}{}^{\,i}f$ for $f\in{\rm C}^\infty({\mathbb E}^4)$. From Section~\ref{Coprod} one recovers the standard Hopf algebra of these derivatives with trivial coproducts $\Delta_{\star_{x^+}}$ defined by \begin{eqnarray} \Delta_{\star_{x^+}}(\nabla_{\star_{x^+}})\triangleright (f\otimes g):=\bigl(\iota^\sharp\otimes\iota^\sharp\bigr)\circ \Delta_\star(\nabla_\star) \triangleright\bigl(\pi^\sharp(f)\otimes\pi^\sharp(g)\bigr) \ , \label{Deltaxpdef}\end{eqnarray} and hence the symmetric Leibniz rules appropriate to the translational symmetry of field theory on Moyal space. Consistent with the restriction to the conjugacy classes, one also has $\partial_\pm^{\star_{x^+}}\triangleright f=0$. However, from (\ref{TOcoprodtime}), (\ref{STOcoprodtime}) and (\ref{Weylcoprodtime}) one finds a non-vanishing co-action of time translations given by \begin{eqnarray} \Delta_{\star_{x^+}}\bigl(\partial_+^{\star_{x^+}}\bigr)= \theta\,\bigl({\mbf\partial}_{\star_{x^+}}{}^\top\otimes \overline{{\mbf\partial}}_{\star_{x^+}}- \overline{{\mbf\partial}}_{\star_{x^+}}{}^\top\otimes{\mbf\partial}_{\star_{x^+}}\bigr) \ . \label{Moyalcoprodtime}\end{eqnarray} This formula is very natural. The isometries of $\NW_6$ in ${\mathfrak g}={\mathfrak n}_{\rm L}\oplus{\mathfrak n}_{\rm R}$ corresponding to the number operator ${\sf J}$ of the twisted Heisenberg algebra are generated by the vector fields~\cite{HSz1} $J_{\rm L}=\theta^{-1}\,\partial_+$ and $J_{\rm R}=-\theta^{-1}\,\partial_+-{\,{\rm i}\,}({\mbf z}\cdot{\mbf\partial}- \overline{{\mbf z}}\cdot\overline{{\mbf\partial}}\,)=\theta^{-1}\,E_+^*$ (in Brinkman coordinates). The vector field $J_{\rm L}+J_{\rm R}$ generates rigid rotations in the transverse space. Restricted to the D3-brane worldvolume, the time translation isometries thus truncate to rotations of ${\mathbb E}^4$ in ${\rm so}(4)$. 
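Rotations which preserve the complex structure of the transverse space act as genuine automorphisms of the restricted product, since the bi-vector in (\ref{fstargrestrict}) is invariant under $z_i\mapsto{\,\rm e}\,^{{\,{\rm i}\,}\alpha}\,z_i$; it is the generic ${\rm so}(4)$ rotations which require a twisted co-action. A quick SymPy sketch (ours, one transverse pair, with the same assumed conventions $\partial=\partial/\partial z$, $\overline{\partial}=\partial/\partial\overline{z}$ and `tp` $=\theta\,x^+$):

```python
import sympy as sp
from math import factorial

z, zb, tp, a = sp.symbols('z zb tp alpha')

def star(f, g, order=6):
    # Restricted product (fstargrestrict), one transverse pair
    total = 0
    for n in range(order):
        term = 0
        for k in range(n + 1):
            term += sp.binomial(n, k)*(-1)**(n - k) \
                * sp.diff(sp.diff(f, z, k), zb, n - k) \
                * sp.diff(sp.diff(g, zb, k), z, n - k)
        total += (sp.I*tp)**n/factorial(n)*term
    return sp.expand(total)

def rot(f):
    # Rigid phase rotation z -> e^{i alpha} z, zb -> e^{-i alpha} zb
    return f.subs({z: sp.exp(sp.I*a)*z, zb: sp.exp(-sp.I*a)*zb},
                  simultaneous=True)

f, g = z**2 + z*zb, zb**2 + z
print(sp.simplify(rot(star(f, g)) - star(rot(f), rot(g))))  # 0
```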
The coproduct (\ref{Moyalcoprodtime}) gives the standard twisted co-action of rotations for the Moyal algebra which define quantum rotational symmetries of noncommutative euclidean space~\cite{CPT1,CKNT1,Wess1}. This discussion also drives home the point made earlier that our derivative operators $\partial_\star^a$ indeed do generate, through their twisted co-actions (Leibniz rules), quantum isometries of the full noncommutative plane wave. Finally, a trace on ${\rm C}^\infty({\mathbb E}^4)$ is induced from (\ref{ncintdef}) by restricting the integral to the submanifold $\iota:{\mathbb E}^4\hookrightarrow\NW_6$ and using the induced measure $\iota^\sharp(\kappa)$. For the measures constructed in Section~\ref{Integrals}, $\iota^\sharp(\kappa)$ is always a constant function on ${\mathbb E}^4$ and hence the integration measures all restrict to the constant volume form of ${\mathbb E}^4$. Thus noncommutative field theories on the spacetime $\NW_6$ consistently truncate to the anticipated worldvolume field theories on noncommutative euclidean D3-branes in $\NW_6$, together with the correct twisted implementation for the action of classical worldvolume symmetries. The advantage of the present point of view is that many of the novel features of these canonical Moyal space field theories naturally originate from the pp-wave noncommutative geometry when the Moyal space is regarded as a regularly embedded coadjoint orbit in ${\mathfrak n}^{\vee\,}$, as described above. Furthermore, the method detailed in this paper allows a more systematic construction of the deformed worldvolume field theories of {\it generic} D-branes in $\NW_6$ in the semi-classical regime, and not just the symmetric branes analysed here. For instance, the analysis can in principle be applied to describe the dynamics of symmetry-breaking D-branes which localize along products of twisted conjugacy classes in the Lie group $\mathcal{N}$~\cite{Quella1}. 
However, these branes have yet to be classified in the case of the gravitational wave $\NW_6$. \subsection*{Acknowledgments} We thank J.~Figueroa-O'Farrill, L.~Friedel, J.~Gracia-Bond\'{\i}a, P.-M.~Ho, G.~Landi, F.~Lizzi, B.~Schroers and S.~Waldmann for helpful discussions and correspondence. This work was supported in part by the EU-RTN Network Grant MRTN-CT-2004-005104. The work of S.H. was supported in part by an EPSRC Postgraduate Studentship. The work of R.J.S. was supported in part by PPARC Grant PPA/G/S/2002/00478.
\section{Axion dark matter} The story we tell applies to any scalar or pseudo-scalar dark matter produced in the early universe by vacuum realignment and/or the related processes of string and domain wall decay. However, the best motivated particle with those properties is the QCD axion since it is not only a cold dark matter candidate but also solves the strong CP problem of the standard model of elementary particles \cite{PQ,WW}. So, for the sake of definiteness, we discuss the specific case of the QCD axion. The Lagrangian density for the axion field $\phi(x)$ may be written as \begin{equation} {\cal L}_a = {1 \over 2} \partial_\mu \phi \partial^\mu \phi - {1 \over 2} m^2 \phi^2 + {\lambda \over 4!} \phi^4 + ... \label{lagr} \end{equation} where the dots represent interactions of the axion with the known particles. The properties of the axion are mainly determined by one parameter $f$ with dimension of energy, called the `axion decay constant'. In particular the axion mass is \begin{equation} m \simeq {f_\pi m_\pi \over f} {\sqrt{m_u m_d} \over m_u + m_d} \simeq 6 \cdot 10^{-6} {\rm eV}~{10^{12}~{\rm GeV} \over f} \label{axm} \end{equation} in terms of the pion decay constant $f_\pi$, the pion mass $m_\pi$ and the masses $m_u$ and $m_d$ of the up and down quarks, and the axion self-coupling is \begin{equation} \lambda \simeq {m^2 \over f^2} {m_d^3 + m_u^3 \over (m_u + m_d)^3} \simeq 0.35 {m^2 \over f^2}~~\ . \label{axl} \end{equation} All couplings of the axion are inversely proportional to $f$. When the axion was first proposed, $f$ was thought to be of order the electroweak scale, but its value is in fact arbitrary \cite{ZZ}. However the combined limits from unsuccessful searches for the axion in particle and nuclear physics experiments and from stellar evolution imply $f \gtrsim 3 \cdot 10^9$ GeV \cite{axrev}. 
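The estimates in Eqs.~(\ref{axm}) and (\ref{axl}) are easy to cross-check numerically. The input values for $f_\pi$, $m_\pi$, $m_u$ and $m_d$ below are commonly quoted numbers assumed here for illustration, not taken from the text:

```python
import math

# Assumed inputs (standard ballpark values, not from the paper); all in eV
f_pi = 93e6               # pion decay constant
m_pi = 135e6              # neutral pion mass
m_u, m_d = 2.2e6, 4.7e6   # light quark masses
f = 1e21                  # axion decay constant, 10^12 GeV in eV

# Eq. (axm): axion mass
m = (f_pi * m_pi / f) * math.sqrt(m_u * m_d) / (m_u + m_d)
print(f"m ~ {m:.1e} eV")   # ~ 6e-6 eV for f = 10^12 GeV

# Eq. (axl): coefficient of m^2/f^2 in the quartic self-coupling
coeff = (m_d**3 + m_u**3) / (m_u + m_d)**3
print(f"lambda ~ {coeff:.2f} * m^2/f^2")   # ~ 0.35
```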
An upper limit $f \lesssim 10^{12}$ GeV is obtained from the requirement that axions are not overproduced in the early universe by the vacuum realignment mechanism \cite{axdm}, which may be briefly described as follows. The non-perturbative QCD effects that give the axion its mass turn on at a temperature of order 1 GeV. The critical time, defined by $m(t_1) t_1 = 1$, is $t_1 \simeq 2 \cdot 10^{-7}~{\rm sec}(f/10^{12}~{\rm GeV})^{1 \over 3}$. Before $t_1$, the axion field $\phi$ has magnitude of order $f$. After $t_1$, $\phi$ oscillates with decreasing amplitude, consistent with axion number conservation. The number density of axions produced by vacuum realignment is \begin{equation} n(t) \sim {f^2 \over t_1}\left({a(t_1) \over a(t)}\right)^3 = {4 \cdot 10^{47} \over {\rm cm}^3} \left({f \over 10^{12}~{\rm GeV}}\right)^{5 \over 3} \left({a(t_1) \over a(t)}\right)^3 \label{axden} \end{equation} where $a(t)$ is the cosmological scale factor. Their contribution to the energy density today equals the observed density of cold dark matter when the axion mass is of order $10^{-5}$ eV, with large uncertainties. The axions produced by vacuum realignment are a form of cold dark matter because they are non-relativistic soon after their production at time $t_1$. Indeed their typical momenta at time $t_1$ are of order $1/t_1$, and vary as $1/a(t)$, so that their velocity dispersion is \begin{equation} \delta v(t) \sim {1 \over m t_1} {a(t_1) \over a(t)}~~\ . \label{veldis} \end{equation} The average quantum state occupation number of the cold axions is therefore \begin{equation} {\cal N} \sim {(2 \pi)^3~ n(t) \over {4 \pi \over 3} (m \delta v(t))^3} \sim 10^{61} \left({f \over 10^{12}~{\rm GeV}}\right)^{8 \over 3}~~\ . \label{phspden} \end{equation} ${\cal N}$ is time-independent, in agreement with Liouville's theorem. Considering that the axions are highly degenerate, it is natural to ask whether they form a Bose-Einstein condensate \cite{CABEC,therm}. 
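The order-of-magnitude estimates in Eqs.~(\ref{axden}) and (\ref{phspden}) can be reproduced with a short calculation; the natural-unit conversion factors below are standard values chosen here, and order-one factors are dropped as in the text:

```python
import math

# Natural-unit conversions (hbar = c = 1); standard values, my inputs
sec_in_invGeV = 1.519e24   # 1 s  = 1.519e24 GeV^-1
cm_in_invGeV = 5.068e13    # 1 cm = 5.068e13 GeV^-1

f = 1e12                                         # axion decay constant, GeV
t1 = 2e-7 * (f / 1e12)**(1 / 3) * sec_in_invGeV  # critical time, GeV^-1

# Eq. (axden) at t = t1: n ~ f^2 / t1
n1_cm3 = (f**2 / t1) * cm_in_invGeV**3
print(f"n(t1) ~ {n1_cm3:.0e} cm^-3")   # ~ 4e47 cm^-3

# Eq. (phspden): with delta v ~ 1/(m t1), m*delta v = 1/t1, so
# N ~ (2 pi)^3 n / ((4 pi/3)(m delta v)^3) = 6 pi^2 (f t1)^2
N = 6 * math.pi**2 * (f * t1)**2
print(f"N ~ {N:.0e}")   # ~ 5e60, of order 10^61
```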
We discuss the process of Bose-Einstein condensation and its implications in the next section. The thermalization and Bose-Einstein condensation of cold dark matter axions is also discussed in refs. \cite{Davidson,Yamaguchi, Jaeckel,Guth} with conclusions that do not necessarily coincide with ours in all respects. \section{Bose-Einstein condensation} Bose-Einstein condensation occurs in a fluid made up of a huge number of particles if four conditions are satisfied: 1) the particles are identical bosons, 2) their number is conserved, 3) they are highly degenerate, i.e. ${\cal N}$ is much larger than one, and 4) they are in thermal equilibrium. Axion number is effectively conserved because all axion number changing processes, such as axion decay to two photons, occur on time scales vastly longer than the age of the universe. So the axions produced by vacuum realignment clearly satisfy the first three conditions. The fourth condition is not obviously satisfied since the axion is very weakly coupled. In contrast, for Bose-Einstein condensation in atoms, the fourth condition is readily satisfied whereas the third is hard to achieve. The fourth condition is a matter of time scales. Consider a fluid that satisfies the first three conditions and has a finite, albeit perhaps very long, thermal relaxation time scale $\tau$. Then, on time scales short compared to $\tau$ and length scales large compared to a certain Jeans' length (see below) the fluid behaves like cold dark matter (CDM), but on time scales large compared to $\tau$, the fluid behaves differently from CDM. Indeed, on time scales short compared to $\tau$, the fluid behaves as a classical scalar field since it is highly degenerate. In the non-relativistic limit, appropriate for axions, a classical scalar field is mapped onto a wavefunction $\psi$ by \begin{equation} \phi(\vec{r},t) = \sqrt{2} Re[e^{- i m t} \psi(\vec{r},t)]~~\ . 
\label{wvfn} \end{equation} The field equation for $\phi(x)$ implies the Schr\"odinger-Gross-Pitaevskii equation for $\psi$ \begin{equation} i \partial_t \psi = - {1 \over 2m} \nabla^2 \psi + V(\vec{r},t) \psi \label{SGP} \end{equation} where the potential energy is determined by the fluid itself: \begin{equation} V(\vec{r},t) = m \Phi(\vec{r},t) - {\lambda \over 8 m^2}|\psi(\vec{r},t)|^2~~\ . \label{pot} \end{equation} The first term is due to the fluid's gravitational self-interactions. The gravitational potential $\Phi(\vec{r},t)$ solves the Poisson equation: \begin{equation} \nabla^2 \Phi = 4 \pi G m n~~\ , \label{Poisson} \end{equation} where $n=|\psi|^2$. The fluid described by $\psi$ has density $n$ and velocity $\vec{v} = {1 \over m} \vec{\nabla} \arg(\psi)$. Eq.~(\ref{SGP}) implies that $n$ and $\vec{v}$ satisfy the continuity equation and the Euler-like equation \begin{equation} \partial_t \vec{v} + (\vec{v}\cdot\vec{\nabla})\vec{v} = - {1 \over m} \vec{\nabla} V - \vec{\nabla} q \label{Euler} \end{equation} where \begin{equation} q = - {1 \over 2 m^2} {\nabla^2 \sqrt{n} \over \sqrt{n}}~~\ . \label{q} \end{equation} $q$ is commonly referred to as `quantum pressure'. The $\vec{\nabla} q$ term in Eq.~(\ref{Euler}) is a consequence of the Heisenberg uncertainty principle and accounts, for example, for the intrinsic tendency of a wavepacket to spread. It implies a Jeans length \cite{Maxim} \begin{equation} \ell_J = (16 \pi G \rho m^2)^{-1 \over 4} = 1.01 \cdot 10^{14}~{\rm cm} \left({10^{-5}~{\rm eV} \over m}\right)^{1 \over 2} \left({10^{-29}~{\rm gr/cm}^3 \over \rho}\right)^{1 \over 4} \label{Jeans} \end{equation} where $\rho = nm$ is the energy density. On distance scales large compared to $\ell_J$, quantum pressure is negligible. CDM satisfies the continuity equation, the Poisson equation, and Eq.~(\ref{Euler}) without the quantum pressure term.
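The numerical value quoted in Eq.~(\ref{Jeans}) can be verified in natural units; the Planck mass and unit conversions below are standard inputs chosen for this check:

```python
import math

M_Pl = 1.221e19          # Planck mass, GeV (standard value, my input)
G = 1.0 / M_Pl**2        # Newton constant in natural units, GeV^-2
cm_in_invGeV = 5.068e13  # 1 cm = 5.068e13 GeV^-1
g_in_GeV = 5.61e23       # 1 gram = 5.61e23 GeV

m = 1e-14                # axion mass 10^-5 eV, in GeV
rho = 1e-29 * g_in_GeV / cm_in_invGeV**3   # 10^-29 g/cm^3 in GeV^4

# Eq. (Jeans): ell_J = (16 pi G rho m^2)^(-1/4)
ell_J_cm = (16 * math.pi * G * rho * m**2) ** (-0.25) / cm_in_invGeV
print(f"ell_J ~ {ell_J_cm:.2e} cm")   # ~ 1.0e14 cm
```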
So, on distance scales large compared to $\ell_J$ and time scales short compared to $\tau$, a degenerate non-relativistic fluid of bosons satisfies the same equations as CDM and hence behaves as CDM. The wavefunction describing density perturbations in the linear regime is given in ref. \cite{lin}. On time scales large compared to $\tau$, the fluid of degenerate bosons does not behave like CDM since it thermalizes and forms a BEC. Most of the particles go to the lowest energy state available to them through their thermalizing interactions. This behavior is not described by classical field theory and is different from that of CDM. When thermalizing, classical fields suffer from an ultraviolet catastrophe because the state of highest entropy is one in which each field mode has average energy $k_B T$ where $T$ is the temperature. In contrast, for the quantum field, the average energy of each mode is given by the Bose-Einstein distribution, and the ultraviolet catastrophe is removed. To see whether Bose-Einstein condensation is relevant to axions one must estimate the relaxation rate $\Gamma \equiv {1 \over \tau}$ of the axion fluid. We do this in the next section. When the mass is of order $10^{-21}$ eV and smaller, the Jeans length is long enough to affect structure formation in an observable way \cite{VLALP}. Because we are focussed on the properties of QCD axions, we do not consider this interesting possibility here. \section{Thermalization rate} It is convenient to introduce a cubic box of size $L$ with periodic boundary conditions.
In the non-relativistic limit, the Hamiltonian for the axion fluid in such a box has the form \begin{equation} H = \sum_j \omega_j a_j^\dagger a_j + \sum_{j,k,l,m} {1 \over 4} \Lambda_{jk}^{lm} a_j^\dagger a_k^\dagger a_l a_m \label{Hamil} \end{equation} with the oscillator label $j$ being the allowed particle momenta in the box $\vec{p} = {2 \pi \over L}(n_x,n_y,n_z)$, with $n_x$, $n_y$ and $n_z$ integers, and the $\Lambda_{jk}^{lm}$ given by \cite{therm} \begin{equation} \Lambda_{\vec{p}_1,\vec{p}_2}^{\vec{p}_3,\vec{p}_4} = \Lambda_{s~\vec{p}_1,\vec{p}_2}^{~~\vec{p}_3,\vec{p}_4}~~+~~ \Lambda_{g~\vec{p}_1,\vec{p}_2}^{~~\vec{p}_3,\vec{p}_4} \label{Lam} \end{equation} where the first term \begin{equation} \Lambda_{s~\vec{p}_1,\vec{p}_2}^{~~\vec{p}_3,\vec{p}_4} = - {\lambda \over 4 m^2 L^3}~ \delta_{\vec{p}_1 + \vec{p}_2, \vec{p}_3 + \vec{p}_4} \label{selfc} \end{equation} is due to the $\lambda \phi^4$ self-interactions, and the second term \begin{equation} \Lambda_{g~\vec{p}_1,\vec{p}_2}^{~~\vec{p}_3,\vec{p}_4} = - {4 \pi G m^2 \over L^3} \delta_{\vec{p}_1 + \vec{p}_2, \vec{p}_3 + \vec{p}_4}~ \left({1 \over |\vec{p}_1 - \vec{p}_3|^2} + {1 \over |\vec{p}_1 - \vec{p}_4|^2}\right) \label{gravc} \end{equation} is due to the gravitational self-interactions. In the particle kinetic regime, defined by the condition that the relaxation rate $\Gamma \equiv {1 \over \tau}$ is small compared to the energy dispersion $\delta \omega$ of the oscillators, the Hamiltonian of Eq.~(\ref{Hamil}) implies the evolution equation \begin{equation} \dot{\cal N}_l = \sum_{k,i,j= 1} {1 \over 2} |\Lambda_{ij}^{kl}|^2 \left[{\cal N}_i {\cal N}_j ({\cal N}_l + 1)({\cal N}_k + 1) - {\cal N}_l {\cal N}_k ({\cal N}_i + 1)({\cal N}_j + 1)\right] 2 \pi \delta(\omega_i + \omega_j - \omega_k - \omega_l) \label{Bolq} \end{equation} for the quantum state occupation number operators ${\cal N}_l(t) \equiv a_l^\dagger(t)a_l(t)$. 
The thermalization rate in the particle kinetic regime is obtained by carrying out the sums in Eq.~(\ref{Bolq}) and estimating the time scale $\tau$ over which the ${\cal N}_j$ change completely. This yields \cite{CABEC,therm} \begin{equation} \Gamma \sim n~\sigma~\delta v~{\cal N} \label{pkg} \end{equation} where $\sigma$ is the scattering cross-section associated with the interaction, and ${\cal N}$ is the average state occupation number of those states that are highly occupied. The cross-section for scattering by $\lambda \phi^4$ self-interactions is $\sigma_\lambda = {\lambda^2 \over 64 \pi m^2}$. For gravitational self-interactions, one must take the cross-section for large-angle scattering only, $\sigma_g \sim {4 G^2 m^2 \over (\delta v)^4}$, since forward scattering does not change the momentum distribution. However, the axion fluid does not thermalize in the particle kinetic regime. It thermalizes in the opposite ``condensed regime'' defined by $\Gamma \gg \delta \omega$. In the condensed regime, the relaxation rate due to $\lambda \phi^4$ self-interactions is \cite{CABEC,therm} \begin{equation} \Gamma_\lambda \sim {n \lambda \over 4 m^2} \label{gl} \end{equation} and that due to gravitational self-interactions is \begin{equation} \Gamma_g \sim 4 \pi G n m^2 \ell^2 \label{gg} \end{equation} where $\ell = {1 \over m \delta v}$ is, as before, the correlation length of the particles. One can show that the expressions for the relaxation rates in the condensed regime agree with those in the particle kinetic regime at the boundary $\delta \omega = \Gamma$. We apply Eqs.~(\ref{gl}) and (\ref{gg}) to the fluid of cold dark matter axions described at the end of Section 1. One finds that $\Gamma_\lambda(t)$ becomes of order the Hubble rate, and therefore the axions briefly thermalize as a result of their $\lambda \phi^4$ interactions, immediately after they are produced during the QCD phase transition.
This brief period of thermalization has no known implications for observation. However, the axion fluid thermalizes again due to its gravitational self-interactions when the photon temperature is approximately \cite{CABEC,therm} \begin{equation} T_{\rm BEC} \sim 500~{\rm eV} \left({f \over 10^{12}~{\rm GeV}}\right)^{1 \over 2}~~\ . \label{Tbec} \end{equation} The axion fluid then forms a BEC. After BEC formation, the correlation length $\ell$ increases until it is of order the horizon and thermalization occurs on ever shorter time scales relative to the age of the universe. \section{Observational consequences} As was emphasized in Section 3, the axion fluid behaves differently from CDM when it thermalizes. Indeed, when all four conditions for Bose-Einstein condensation are satisfied, almost all the axions go to the lowest energy state available to them. CDM does not do that. One can readily show that in first order of perturbation theory and within the horizon the axion fluid does not rethermalize and hence behaves like CDM. This is important because the cosmic microwave background observations provide very strong constraints in this arena and the constraints are consistent with CDM. In second order of perturbation theory and higher, axions generally behave differently from CDM. The rethermalization of the axion BEC is sufficiently fast that axions that are about to fall into a galactic gravitational potential well go to their lowest energy state consistent with the total angular momentum they acquired from nearby protogalaxies through tidal torquing \cite{therm}. That state is a state of net overall rotation. In contrast, CDM falls into galactic gravitational potential wells with an irrotational velocity field. The inner caustics are different in the two cases. In the case of net overall rotation, the inner caustics are rings \cite{crdm} whose cross-section is a section of the elliptic umbilic $D_{-4}$ catastrophe \cite{sing}, called caustic rings for short.
If the velocity field of the infalling particles is irrotational, the inner caustics have a `tent-like' structure which is described in detail in ref.~\cite{inner} and which is quite distinct from caustic rings. There is observational evidence for caustic rings \cite{MWha}. It was shown \cite{case} that the assumption that the dark matter is axions explains not only the existence of caustic rings but also their detailed properties, in particular the pattern of caustic ring radii and their overall size. Furthermore, it was shown \cite{angmom} that axion BEC solves the galactic angular momentum problem, the tendency of CDM to produce halos that are too concentrated at the center compared to observations. In a recent paper \cite{Newberg}, J. Dumas et al. compare the predictions of the caustic ring model with the rotation curve of the Milky Way and the observations of the Sagittarius satellite's tidal disruption. \section{Acknowledgments} We would like to thank Joerg Jaeckel, Alan Guth, Mark Hertzberg and Chanda Prescod-Weinstein for stimulating discussions. This work was supported in part by the US Department of Energy under grant DE-FG02-97ER41209.
\section{INTRODUCTION} A wealth of information on the partonic structure of the nucleon is encoded in the generalized parton distributions (GPD's) and extensive research programs are being pursued to gain information on nucleon GPD's. Our strategy to this end is to determine the quark-nucleon vertex function from an investigation of nucleon electromagnetic (em) form factors within light-front dynamics, and then to use the obtained vertex function to evaluate the nucleon GPD's. Indeed, light-front dynamics opens a unique possibility to study the hadronic state in both the valence and the nonvalence sector \cite{Bro}, since within a light-front framework no spontaneous pair production occurs and a meaningful Fock state expansion is possible: \begin{eqnarray} && | baryon \rangle = {|qqq \rangle} + {|qqq~q \bar{q} \rangle + |qqq~g \rangle + \dots} \label{Fock} \end{eqnarray} As a first step, in this contribution we present our preliminary results for the unpolarized longitudinal and transverse parton momentum distributions in the nucleon. \section{NUCLEON VERTEX FUNCTION AND NUCLEON ELECTROMAGNETIC FORM FACTORS} We describe the quark-nucleon vertex function through a Bethe-Salpeter amplitude (BSA), whose Dirac structure is suggested by an effective Lagrangian \cite{de}. Then the symmetrized BSA for the nucleon is approximated as follows \begin{eqnarray} && \hspace{-0.6cm}\Phi^\sigma_N(k_1,k_2,k_3,P) =\nonumber \\ && \imath \left [~S(k_1)~\tau_y ~ \gamma^5 ~ S_C(k_2)C ~\otimes~ S(k_3)~+ ~\right.
\nonumber \\ && \hspace{-0.0cm}S(k_3)~ \tau_y ~ \gamma^5 ~S_C(k_1)C ~\otimes~ S(k_2)~+ \nonumber \\ && \left.~S(k_3)~ \tau_y ~ \gamma^5 ~S_C(k_2)C~\otimes~S(k_1) \right ] \nonumber \\ && \times ~~ \Lambda(k_1,k_2,k_3) ~ \chi_{\tau_N} ~ U_N(P,\sigma) \end{eqnarray} where $S(k)$ is the constituent quark (CQ) free propagator, $S_C(k)$ the charge-conjugated propagator, $\Lambda(k_1,k_2,k_3)$ describes the symmetric momentum dependence of the vertex function upon the quark momentum variables, $k_i$, $U_N(P,\sigma)$ is the nucleon spinor and $\chi_{\tau_N}$ the isospin eigenstate. The matrix elements of the {\em{macroscopic}} current in the spacelike (SL) region are approximated {\em{microscopically}} by the Mandelstam formula \cite{mandel} \vspace{0.0cm} \begin{eqnarray} && \hspace{-0.7 cm}\langle \sigma',P'|j^\mu~|P,\sigma\rangle =~3~N_c ~ \times \nonumber \\ && \int {d^4k_1 \over (2\pi)^4}\int {d^4k_2 \over (2\pi)^4} ~ {\Large{\Sigma}} \left \{ \bar \Phi^{\sigma'}_N(k_1,k_2,k'_3,P') \right. ~ \times \nonumber \\ && \hspace{-0.7 cm} \left. S^{-1}(k_1) ~ S^{-1}(k_2)~{\cal I}^\mu(k_3,q)~ \Phi^\sigma_N(k_1,k_2,k_3,P)\right \} \end{eqnarray} where $N_c$ is the number of colors, $k'_{3} = k_{3} + q$, and ${\cal I}^\mu(k_3,q)$ is the quark-photon vertex. An analogous expression holds in the timelike (TL) region. We adopt a Breit reference frame where ${\bf q}_{\perp}=0$ and $q^+=q^0 + q^3=|q^2|^{1/2}$. Our CQ mass is $m=m_u = m_d = 200$ MeV. The Mandelstam formula is projected out by an analytic integration on $k_1^-$ and $k_2^-$, taking into account only the poles of the propagators. Then the current becomes the sum of a purely valence contribution (diagram (a) in Fig. 1) and a nonvalence (NV), pair-production contribution (diagram (b) in Fig. 1). Clearly, after the $k^-$ integrations, the vertex functions depend only upon the light-front three-momenta.
\begin{figure} \includegraphics[width=7.5cm]{nucfig1.eps} \vspace{-0.6cm} \caption{Diagrams for the SL nucleon current: (a) valence (triangle) contribution with $0 < k_{i}^+ < P^+$, and $~0 \le k_{3}^+ + q^+ < {P'}^{+}$; (b) nonvalence, pair-production contribution with $0 > k_{3}^+ \ge -q^+$. A cross indicates a quark on the $k^-$-shell, i.e. $k^- = k^-_{on} = (m^2 + k^2_{\perp})/k^+$. Solid circles and solid square represent valence and NV vertex functions, respectively (after Ref. \cite{nucleon}). } \end{figure} The quark-photon vertex has isoscalar and isovector contributions \begin{eqnarray} && {\cal I}^\mu=~{\cal I}^\mu_{IS} +\tau_z {\cal I}^\mu_{IV} \label{curr} \end{eqnarray} and each term in Eq. (\ref{curr}) contains a purely valence contribution (in the SL region only) and a contribution corresponding to the pair production (Z-diagram). In turn the Z-diagram contribution can be decomposed in a bare term $+$ a Vector Meson Dominance (VMD) term (according to the decomposition of the photon state in bare, hadronic [and leptonic] contributions), viz \begin{eqnarray} && \hspace{-0.7 cm} {\cal I}^\mu_{i}(k,q) = {\cal N}_{i} ~ \theta(P^+-k^+)\theta(k^+) \gamma^\mu+\theta({q}^+ + k^+) \nonumber \\ &&\hspace{-0.7 cm} \times ~ \theta(-k^+)~\left \{ Z_B~{\cal N}_{i} ~ \gamma^\mu+ Z^i_{VM}~\Gamma^\mu[k,q,i]\right\} \end{eqnarray} with $i = IS, IV$, ${\cal N}_{IS}=1/6$ and ${\cal N}_{IV}=1/2$. The constants $Z_B$ (bare term) and $Z^i_{VM}$ (VMD term) are unknown weights to be extracted from the phenomenological analysis of the data. According to $i$, the VMD term $\Gamma^\mu[k,q,i]$ includes isovector or isoscalar mesons. Indeed in \cite{nucleon} we extended to isoscalar mesons the microscopic model for the VMD successfully used in \cite{DFPS} for the pion form factor in the SL and in the TL region and based on the meson mass operator of Ref. \cite{Fred}. As explained in \cite{nucleon}, $\Gamma^\mu[k,q,i]$ does not involve free parameters. 
We consider up to 20 mesons for achieving convergence at high $|q^2|$. In the valence vertexes (solid circles in Fig. 1) the spectator quarks are on the $k^-$-shell, and the BSA momentum dependence is approximated through a nucleon wave function a la Brodsky (PQCD inspired), namely \begin{eqnarray} && \hspace{-0.6cm} \Psi_N(\tilde{k}_1,\tilde{k}_2,P) = P^+ ~ { {\Lambda}(k_1,k_2,k_3)|_{(k_{1on}^-,k_{2on}^-)} \over [m_N^2 - M^2_0(1,2,3)]^{~}} = \nonumber \\ && \nonumber \\ && = ~ P^+ ~ {\cal {N}}~ {~(9~m^2)^{7/2} \over (\xi_1\xi_2\xi_3)^{p}~ \left[\beta^2 + M^2_0(1,2,3)\right]^{7/2}}~ \end{eqnarray} where $\tilde{k}_i \equiv (k_i^+,{\bf k}_{i\perp})$, $M_0(1,2,3)$ is the free mass of the three-quark system, $\xi_i = k^+_i/P^+$ \quad ($i=1,2,3$) and ${\cal N}$ a normalization constant. The power $ 7/2 $ and the parameter $ p = 0.13 $ are chosen to have an asymptotic decrease of the triangle contribution faster than the dipole. Only the triangle diagram determines the magnetic moments, which are weakly dependent on $p$. Then $\beta = 0.645$ can be fixed by the magnetic moments and we obtain $\mu_p = 2.87\pm0.02$ ~ ($\mu_p^{exp}$ = 2.793) and $\mu_n = -1.85\pm0.02$ ~ ($\mu_n^{exp}$ = -1.913). \begin{figure}[htb] \vspace{-0.0cm} \hspace{-.0cm}{\includegraphics[width=7.5cm]{Ratio_prl.eps}} \vspace{-1.3cm} \caption{Ratio between electric and magnetic form factors for the proton vs $-q^2$. Solid line: full calculation, sum of triangle and pair-production terms; dotted line: triangle contribution only (after Ref. \cite{nucleon}). } \end{figure} \begin{figure}[htb] \hspace{+.0cm} {\includegraphics[width=7.5cm]{Gmp_prl.eps} } \vspace{-1.3cm} \caption{Magnetic proton form factor vs $-q^2$. Solid and dotted lines as in Fig. 2. $G_D(|q^2|) = [1 + |q^2|/(0.71$ (GeV/c)$^2]^{-2}$ (after Ref. \cite{nucleon}). } \end{figure} For the Z-diagram contribution, the NV vertex (solid square in Fig. 1) is needed. It can depend on the available invariants, i.e. 
on the free mass, $M_0(1,2)$, of the (1,2) quark pair and on the free mass, $M_0(N,\bar {3})$, of the ( nucleon - quark $\bar {3}$ ) system entering the NV vertex. Then in the SL region we approximate the momentum dependence of the NV vertex $ {\Lambda}_{NV}^{SL} = {\Lambda}(k_1,k_2,k_3)|_{k^-_{1on},k^{'-}_{3on}}$ by \begin{eqnarray} &&\hspace{-.9cm} {\Lambda}_{NV}^{SL}= [g_{12}]^{2}[g_{N\bar {3}}]^{3/2} \left [{k_{12}^+ \over P^{\prime +} } \right ] \left [ P^{\prime +} \over k_{\overline {3}}^+ \right ]^r \left [P^{+} \over k_{\overline {3}}^+ \right ]^{r} \end{eqnarray} with \begin{eqnarray} k_{12}^+ = k_1^+ + k_2^+ ~~ \quad g_{AB} = {(m_A ~ m_B) \over \left [\beta^2+M^2_0(A,B)\right]} \end{eqnarray} The power 2 of $[g_{12}]^{2}$ is suggested from counting rules. The power 3/2 of $[g_{N\bar {3}}]^{3/2}$ and the parameter $r=0.17$ are chosen to have an asymptotic dipole behavior for the NV contribution. \begin{figure}[htb] \vspace{-0.0cm} \hspace{-.0cm} {\includegraphics[width=7.cm]{Gen_prl.eps}} \vspace{-0.9cm} \caption{Electric neutron form factor vs $-q^2$. Solid and dotted lines as in Fig. 2 (after Ref. \cite{nucleon}). } \end{figure} \begin{figure}[htb] \hspace{+.0cm} {\includegraphics[width=7.cm]{Gmn_prl.eps} } \vspace{-0.9cm} \caption{The same as in Fig. 3, but for the neutron (after Ref. \cite{nucleon}). } \end{figure} Analogous expressions with the same parameters are used for the nonvalence vertexes in the TL region (see Ref. \cite{nucleon} for the explicit expressions). We perform a fit of our free parameters, $Z_B$, $Z^i_{VM}$, $p$, $r$ in the SL region and obtain the form factors shown in Figs. 2 - 5, with a $\chi ^2$/datum = 1.7. As a result the weights for the pair-production terms are $Z_B = Z_{VM}^{IV} = 2.283$ and $Z_{VM}^{IS} / Z_{VM}^{IV} = 1.12$, remarkably close to one. The same values of our weight parameters are adopted to evaluate the TL form factors shown in Figs. 6 and 7. 
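The Brodsky-type valence wave function $\Psi_N$ given above can be sketched numerically. The snippet below is an illustration of our own: the CQ mass, $\beta$ and $p$ are the values quoted in the text, while the normalization ${\cal N}$ is set to one and the overall $P^+$ factor is dropped, so only the shape is meaningful:

```python
import math

m, beta, p = 0.200, 0.645, 0.13   # CQ mass (GeV), beta (GeV), power p; from the text

def free_mass_sq(xis, kperps):
    """Light-front free mass squared of the three-quark system:
    M0^2 = sum_i (m^2 + |k_perp_i|^2)/xi_i, with sum xi_i = 1, sum k_perp_i = 0."""
    return sum((m**2 + kx**2 + ky**2) / xi for xi, (kx, ky) in zip(xis, kperps))

def psi(xis, kperps):
    """PQCD-inspired wave function shape, with N = 1 and P^+ dropped (my choice)."""
    M0sq = free_mass_sq(xis, kperps)
    xi1, xi2, xi3 = xis
    return (9 * m**2)**3.5 / ((xi1 * xi2 * xi3)**p * (beta**2 + M0sq)**3.5)

# symmetric point: equal momentum fractions, vanishing transverse momenta,
# where M0^2 reduces to 9 m^2
xis = (1 / 3, 1 / 3, 1 / 3)
kperps = [(0.0, 0.0)] * 3
print(free_mass_sq(xis, kperps))   # 9 m^2 = 0.36 GeV^2 (up to rounding)
print(psi(xis, kperps))
```

The power $7/2$ in the denominator makes $\Psi_N$ fall off steeply as the free mass grows, which is what produces the fast asymptotic decrease of the triangle contribution discussed above.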
Preliminary results of our model for the nucleon form factors were presented in \cite{MFPPS}. The Z-diagram turns out to be essential for the description of the form factors in our reference frame with $q^+ > 0$. In particular the possible zero in $G_E^p/G_M^p$ is strongly related to the pair-production contribution. In the TL region our parameter-free calculations give a fair description of the proton data, apart from some missing strength at $q^2 = 4.5$ (GeV/c)$^2$ ~ and ~ $q^2 = 8$ (GeV/c)$^2$ (as occurs for the pion case \cite{DFPS}), which one could argue to be due to possible unknown vector mesons, missing in the spectrum of Ref. \cite{Fred}. \begin{figure}[htb] \centering \includegraphics[width=7.cm]{TLP_4a.eps} \vspace{-.9cm} \caption{Proton effective form factor $G^p_{eff}(q^2)/G_D(q^2)$ in the timelike region vs $q^2$. Solid line: bare + VMD; dotted line: bare term. $G_{eff}(q^2) = [(|G_M(q^2)|^2 ~ - ~ \eta ~|G_E(q^2)|^2 ) / (1 ~ - ~ \eta)]^{0.5}$ with $\eta = - 2 m_N^2 / q^2$ (after Ref. \cite{nucleon}). } \end{figure} \begin{figure}[htb] \includegraphics[width=7.cm]{TLN_4b.eps} \vspace{-.9cm} \caption{The same as in Fig. 6, but for the neutron. Dashed line: solid line arbitrarily multiplied by 2 (after Ref. \cite{nucleon}). } \end{figure} \section{LONGITUDINAL AND TRANSVERSE QUARK MOMENTUM DISTRIBUTIONS IN THE NUCLEON} The longitudinal distribution $q(x)$ is the limit in the forward case, $P' = P$, of the unpolarized generalized parton distribution ${H}^q(x,\xi,t)$. Indeed one can define the distributions ${H}^q(x,\xi,t)$ and $E^q(x,\xi,t)$ for the quark $q$ through the following relation \begin{eqnarray} \label{gpdisos} \frac{1}{2P^+} \bar{u}(P',\lambda')\left [ {H}^q~ \gamma^+ + {E}^q ~\imath \frac{\sigma^{+\alpha}q_{\alpha}}{2 M} \right ] {u}(P,\lambda)= \nonumber \\ \int \frac{dz^-}{4\pi} e^{i x {\cal{P}}^+ z^-} \left .
\langle P',\lambda' | \bar \psi_q (-\frac{z}{2} ) \gamma ^+ \, \psi_q (\frac{z}{2} ) | P,\lambda \rangle \right |_{\tilde{z}=0} \nonumber \end{eqnarray} where $\lambda$ is the nucleon helicity, $\tilde z\equiv \{z^+, {\bf z}_{\perp}\}$ , $\psi_q(z)$ is the quark field isodoublet, while \begin{eqnarray} && {\cal{P}}=\frac12(P'+P) \;\; \quad \quad t=(P' - P)^2 \label{kin1} \end{eqnarray} \begin{eqnarray} && \xi=-\frac{P'^+ - P^+}{2 {\cal{P}}^+} \quad \quad x=\frac{k^+}{{\cal{P}}^+} \label{kin2} \end{eqnarray} and $k$ is the average momentum of the quark that interacts with the photon, i.e. \begin{eqnarray} && k = \frac{k_3 + (k_3 +q)}{2} \end{eqnarray} For $P' = P$, both $q^+$ and $\xi$ are vanishing and $x = k_3^+ / P^+ = \xi_3$ coincides with the fraction of the longitudinal momentum carried by the active quark, i.e., with the Bjorken variable. As a consequence the function ${H}^q(x,\xi,t)$ reduces to the longitudinal parton distribution function $q(x)$ : \begin{eqnarray} \hspace{-2cm} {H}^q(x,0,0) = q(x) = \int d{\bf k}_{\perp} ~~ f_1^q(x,k_{\perp}) =&& \label{struc} \\ \int \frac{dz^-}{4\pi} e^{i x P^+ z^-} \left . \langle P | \bar \psi_q (-{z\over 2}) \gamma ^+ \, \psi_q ( {z\over 2}) | P \rangle \right |_{\tilde{z}=0} \nonumber \end{eqnarray} where an average on the nucleon helicities is understood. Once all the parameters of the nucleon light-front wave function $\Psi_N(\tilde k_1,\tilde k_2,P)$ have been determined, one can easily define the transverse-momentum-dependent distributions of the active quark in terms of the nucleon light-front wave function : \begin{eqnarray} && \hspace{-0.6cm} f_1^u(x,k_{\perp})=-{9~N_c\over 32~(2 \pi)^6}~{1\over P^{+2}} ~ \times ~\nonumber \\ && \hspace{-0.7cm}\int^{1 - x}_0 d\xi_2 {1 \over (1 - x -\xi_2) \xi_2} {1 \over x^2} \int d{\bf k}_{2 \perp} \left. 
{\cal H}_{u} \right|_{(k^-_{1on},k^-_{2on})} \nonumber \\ && \nonumber \\ && \hspace{-0.3cm} \times ~ |\Psi_N(P^+ ~ \xi_1,{\bf k}_{1\perp},P^+ ~ \xi_2,{\bf k}_{2\perp},P)|^2 \label{partutr} \end{eqnarray} \begin{eqnarray} && \hspace{-0.6cm} f_1^d(x,k_{\perp})={9~N_c\over 8~(2 \pi)^6}~ {1\over P^{+2}} ~ \times ~\nonumber \\ && \hspace{-0.7cm}\int^{1 - x}_0 d\xi_2 {1 \over (1 - x-\xi_2) \xi_2} {1 \over x^2} \int d{\bf k}_{2 \perp} \left. {\cal H}_{d} \right|_{(k^-_{1on},k^-_{2on})} \nonumber \\ && \nonumber \\ && \hspace{-0.3cm}\times ~ |\Psi_N(P^+ ~ \xi_1,{\bf k}_{1\perp},P^+ ~ \xi_2,{\bf k}_{2\perp},P)|^2~~ \quad \label{partdtr} \end{eqnarray} where ${\cal H}_{u}$ and ${\cal H}_{d}$ are proper traces of propagators and of the currents ${\cal {I}}^+_u$ and ${\cal {I}}^+_d$, respectively. From the nucleon light-front wave function $\Psi_N(\tilde k_1,\tilde k_2,P)$ one can easily define through Eq. (\ref{struc}) also the longitudinal distribution of the struck quark and from the isospin symmetry one has \begin{eqnarray} && \hspace{-0.7cm}u_p(x)=d_n(x)=u(x); ~ d_p(x)=u_n(x)= d(x) \end{eqnarray} Our preliminary results for $f_1^{u(d)}(x,k_{\perp})$ in the proton and for $u(x)$ and $d(x)$ are shown in Figs. 8, 9 and in Figs. 10, 11, respectively. \begin{figure}[htb] \vspace{-.8cm} \hspace{-.0cm}{\includegraphics[width=7.5cm]{f1pnew.eps}} \vspace{-2.5cm} \caption{Valence transverse-momentum distributions for a $u$ quark inside the proton. $G(k_{\perp}) = (1 ~ + ~ k_{\perp}^2/m_{\rho}^2)^{-5.5}$, $m_{\rho}$ = 770 MeV and $k_{\perp} = |{\bf k}_{\perp}|$. } \end{figure} \begin{figure}[htb] \vspace{-.8cm} {\includegraphics[width=7.5cm]{f1nnew.eps}} \vspace{-2.5cm} \caption{The same as in Fig. 8, but for a $d$ quark inside the proton. } \end{figure} It can be observed that the decay of our $f_1(x,k_{\perp})$ vs $k_{\perp}$ is faster than in diquark models of the nucleon \cite{Jacob}, while it is slower than in factorization models for the transverse momentum distributions \cite{Anselm}.
As far as the longitudinal momentum distributions are concerned, a reasonable agreement of our $u(x)$ with the CTEQ4 fit to the experimental data \cite{Lai} can be seen in Fig. 10. \begin{figure}[htb] \centering \hspace{.0cm}{\includegraphics[width=7.cm]{PaceE-fig3.eps}} \vspace{-.9cm} \caption{Longitudinal momentum distribution for a $u$ quark inside the proton. Dashed lines: our preliminary results; thick solid lines: our results after evolution to $Q^2$ = 1.6 (GeV/c)$^2$; thin solid lines: CTEQ4 fit to the experimental data \cite{Lai}.} \end{figure} \begin{figure}[htb] \hspace{.0cm}{\includegraphics[width=7.cm]{PaceE-fig4.eps}} \vspace{-.9cm} \caption{The same as in Fig. 10, but for a $d$ quark inside the proton. } \end{figure} \section{CONCLUSIONS} A microscopic model for hadron em form factors in both the SL and TL regions has been proposed. In our model the quark-photon vertex for the process where a virtual photon materializes in a $q \bar{q}$ pair is approximated by a microscopic VMD model plus a bare term. Good results are obtained in the SL region for both the pion and the nucleon. The Z-diagram (i.e., the higher Fock state component) has been shown to be essential in the adopted reference frame ($q^+ \ne 0$). The possible zero in $G^p_E \mu_p/G^p_M$ turns out to be related to the pair-production contribution. In the TL region our calculations give a fair description of the proton and pion data, although some strength is lacking at $q^2 = 4.5$ (GeV/c)$^2$ and $q^2 = 8$ (GeV/c)$^2$. The analysis of nucleon form factors allows us to obtain a phenomenological Ansatz for the nucleon LF wave function, which reflects the asymptotic behaviour suggested by one-gluon-exchange dominance. This LF wave function is then used to evaluate the unpolarized transverse momentum distributions and the longitudinal momentum distributions of the quarks in the nucleon. Our next step will be the calculation of polarized transverse and longitudinal momentum distributions.
\section{Introduction} \subsection{Response-adaptive randomization} Randomization remains a pivotal methodology for advancing medical knowledge. It removes systematic bias and allows direct inference between treatment group and outcome. Traditionally, a fixed randomization scheme (1:1 or 2:1) is used because of its simplicity in the design and execution of the trial. In contrast, a response-adaptive randomization (RAR) design uses accruing information to tilt the randomization ratio toward the better-performing treatment group. Patients enrolled in such trials are not only treated to learn about the effectiveness of the treatments but are also treated in the best way possible \citep{wei1978randomized}. Response-adaptive randomization aims to balance both of these goals at once. The bias in the trial can be greatly affected if one arm is far superior to the other arm(s), because of the resulting drastic alteration of the randomization ratio. Response-adaptive randomization has been highly advocated for ethical reasons: the primary appeal of adaptive designs in clinical trials is that they improve the benefit/risk profile for the patients enrolled in the trial \citep{hey2015outcome}. For instance, in the Zidovudine (AZT) trial conducted to test the reduction of maternal-infant transmission of HIV type 1, patients were randomized 1:1 \citep{connor1994reduction}. This equal randomization scheme placed 239 women in the treatment group (AZT) and 238 women in the placebo group. Among the infants born to women in the placebo group, 60 contracted HIV, while only 20 infants in the AZT group did \citep{connor1994reduction}. The outcome of the trial confirmed that the new treatment works. Given the seriousness of the outcome of this study, it is reasonable to argue that the 50-50 allocation was unethical.
As the outcomes of the trial became available, the randomization ratio should have been tilted in favor of the AZT group. A response-adaptive randomization design could have reduced the number of infants that contracted HIV from their mothers. On the other hand, opponents of RAR have argued that adaptive randomization challenges the whole notion of equipoise \citep{begg2015ethical}. Hey and Kimmelman have also argued that most new treatments offer only a small improvement over standard treatment, so they offer limited benefit while requiring a larger sample size \citep{hey2015outcome}. Hey and Kimmelman further suggested that equal randomization helps reduce the trial size and length, and thus benefits future patients rather than the current patients enrolled in the trial \citep{hey2015outcome}. Korn and Freidlin measured the difference in non-responders under equal and adaptive randomization and found that adaptive randomization required a larger trial to achieve the same power and type I error \citep{korn2011outcome}. A major practical drawback of response-adaptive randomization is that the outcome must be observed quickly enough to inform future randomization \citep{karrison2003group}. \subsection{Time-trend issues} One of the main criticisms of response-adaptive randomization, however, is the time-trend issue. This is a major factor in why RAR is infrequently used. The type I error rate is usually not controlled at the nominal level under the traditional Bayesian RAR design \citep{thall2015statistical}. Besides affecting the type I error, studies have shown that there is a large bias in the estimation of the treatment difference under the traditional RAR design \citep{thall2015statistical}. Figure~\ref{plot:RARplot} shows that an increase in response in the treatment group tilts the randomization ratio in favor of the treatment; however, this effect is directly confounded by time.
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{RAR_plot.pdf} \caption{Effect of time as a blocking factor on the randomization ratio} \label{plot:RARplot} \end{figure} Examples of time-trend issues in response-adaptive randomization (RAR) designs include \citep{chappell2007continuous, karrison2003group}: \begin{itemize} \item The disease itself can change, sometimes radically (e.g., AIDS in the early 1990s). \item Our definition of the disease can change due to new scientific discoveries or diagnostic methods (e.g., stage migration in nasopharyngeal carcinoma due to the introduction of CAT scans to Hong Kong around 2005). \item Inclusion criteria can change, either formally (in which case we can stratify the analysis on before vs. after the change) or informally due to ``recruiting zeal'' or other issues (in which case we cannot). \item Centers can change, such as when VAs enter the trial earlier or later than academic institutions. \item Patients within centers can change, especially but not only in chronic diseases, due to the phenomenon of ``a queue of desperate patients lining up at the door''. \item In addition to these examples, an investigator who wants to game the system could hope that his or her favored treatment arm is ahead and then progressively enroll better-prognosis patients over time. \end{itemize} In long-duration trials, time-trends are more likely to occur. Patients' characteristics might be completely different throughout the trial, or even between the beginning and the end of the trial (also known as ``patient drift'') \citep{karrison2003group}. Since most RAR designs adapt on the fly, one important assumption is made: ``The sequence of patients who arrive for entry into the trial represents samples drawn at random from two homogeneous populations, with no drift in the probabilities of success'' \citep{begg2015ethical, karrison2003group}. However, this assumption is usually violated.
For example, more smokers were enrolled in the latter part of the Biomarker-integrated Approaches of Targeted Therapy for Lung Cancer Elimination (BATTLE) trial than at its beginning \citep{liu2015overview}. Despite this serious flaw in RAR designs, there is not much literature addressing the time-trend issue. A randomization test for adjusting the type I error inflation was proposed by Simon and Simon using different RAR rules for two-arm trials \citep{simon2011using}. Jennison and Turnbull explored a group-sequential method for RAR with continuous outcomes using a multi-stage formulation \citep{jennison1999group}. Karrison et al. introduced a stratified group-sequential method with a simple example of altering the randomization ratio to address this issue \citep{karrison2003group}. Coad used a stratified analysis very similar to that of Karrison et al. \citep{coad1992comparative}. Rosenberger et al. introduced a covariate-adjusted RAR procedure in which a time mechanism can be added as a covariate in the model \citep{rosenberger2001covariate}. Thall et al. investigated the type I error under a linear time-trend induced in the traditional response-adaptive randomization design and showed that the type I error is significantly above the nominal level \citep{thall2015statistical}. Villar et al. explored hypothesis testing procedures and covariate adjustment for correcting the type I error inflation, and the effect on power, in RAR designs with a time-trend effect added to the model for two-armed and multi-armed trials \citep{villar2018response}. Time-trends can not only greatly bias the estimated treatment difference but can also lead to wrongly rejecting a true null hypothesis. We propose a block (group-sequential) design in which the randomization ratio is altered at the block level instead of on a patient-by-patient basis, using both frequentist and Bayesian approaches. In each block, the randomization ratio is kept constant.
The block design is similar to the stratified group design introduced by Karrison et al. \citep{karrison2003group}. We further study the robustness of different block sizes using both frequentist and Bayesian approaches, and we compare these results with the traditional RAR design and with fixed (1:1) randomization. \section{Trial Design and Simulation} \subsection{Block Design for RAR and Why?} For binary outcomes, events (success/failure) are observed within a short period from the beginning of treatment. In a block design, patients are enrolled sequentially. For instance, in a two-arm design (treatments A and B), patients are enrolled in blocks with sample sizes $n_{Ak}$ and $n_{Bk}$, for $k = 1, 2, 3, \ldots, K$, where $n_{Ak}$ and $n_{Bk}$ represent the sample sizes in treatment groups A and B in block $k$. In this design, the randomization ratio is constant for patients within each block and is altered at the block level. Unlike traditional RAR, which alters the randomization ratio on a patient-by-patient basis, this method speeds up RAR trials, since the randomization ratio is modified for a whole group of patients at once. The initial randomization ratio is set to 1:1. Bashir et al. implemented a two-block design in which patients were randomized 1:1 in a group of 10 and then, based on the outcomes, the next ten patients were randomized to the superior treatment \citep{bashir2012randomized}. In the second block, however, they randomized all of the patients to the lower dosage, because the probability of randomizing a patient to the lower dosage was 0.9 compared with 0.1 for the higher dosage. This design should be considered a randomized controlled trial for the first block and an observational study for the second block. As asserted by Karrison et al., the block design eliminates bias due to drift through stratification \citep{karrison2003group}.
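The block-level scheme just described can be sketched as follows; this is a minimal illustration (function and variable names are ours, not from the paper) of holding the allocation probability fixed within a block and updating it only between blocks:

```python
import random

def randomize_block(block_size, pi_A, rng=random):
    """Assign every patient in one block using a fixed probability pi_A
    of receiving treatment A; the ratio changes only between blocks."""
    return ["A" if rng.random() < pi_A else "B" for _ in range(block_size)]

# Example: 200 patients split into K = 10 blocks of 20, starting at 1:1.
rng = random.Random(1)
pi_A = 0.5  # initial 1:1 allocation
assignments = []
for k in range(10):
    assignments.extend(randomize_block(20, pi_A, rng))
    # ...here pi_A would be re-estimated from the outcomes observed so far...
print(len(assignments))  # 200
```

The placeholder comment marks where the allocation rule (frequentist or Bayesian, described below) would update `pi_A` from accumulated outcomes.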
However, the optimal block size remains unclear and is highly dependent on the operating characteristics of the trial. Blocks with a large number of patients also help reduce the probability of a large imbalance in the wrong direction, where more patients are assigned to the inferior treatment, compared with traditional RAR. Blocks with very few patients can reduce the power of the trial, because a block becomes uninformative if its patients are not randomized to both treatments. \subsection{Simulation and Design} We investigated the effect of different numbers of blocks using simulations for both the frequentist and the Bayesian approach. The rules for altering the randomization ratio in both designs are described in the subsections below. A target sample size of $N = 200$ subjects was considered with numbers of blocks (strata) $K$ = 1, 2, 4, 5, 10, 20, 100, 200. When $K = 1$, this corresponds to the traditional equal-allocation design, and when $K = 200$, it corresponds to the traditional RAR design in which stratification is ignored. Upon completion of enrollment and data collection for the subjects in each block, an interim analysis is done to revise the allocation rule. The interim analysis also allows for early stopping; the details of early stopping for both the Bayesian and the frequentist design are included below. 10,000 independent simulations were performed for each design, yielding a Monte Carlo standard error of 0.25\%. In each simulation, the success rate of treatment A (the control group), $p_A$, is set to 0.25. The alternatives (the success rate of treatment B, $p_B$) are set to 0.25 (the null case), 0.35, and 0.45. In addition to simulating both designs for the $p_A$ and $p_B$ specified above, we also simulated RAR with a drift effect. To examine the effects of drift, we increased both $p_A$ and $p_B$ linearly from their initial values toward final values 0.25 larger. The drift was applied at the block level rather than on a patient-by-patient basis.
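The per-block drift just described can be sketched as follows; a minimal illustration (function names are ours) of the linear schedule $p(k) = p_0 + 0.25(k-1)/K$ used in the simulations, in which both arms drift by the same amount so the treatment difference stays constant:

```python
def drifted_rate(p0, k, K, drift=0.25):
    """Success rate in block k (1-indexed) of K under a linear time trend.
    Both arms drift upward by the same schedule, so the treatment
    difference p_B(k) - p_A(k) is constant across blocks."""
    return p0 + drift * (k - 1) / K

# Control arm over K = 10 blocks: starts at 0.25 and rises toward 0.475.
rates = [drifted_rate(0.25, k, 10) for k in range(1, 11)]
print(rates[0], rates[-1])
```

Note that with this schedule the rate in the final block is $p_0 + 0.25(K-1)/K$, i.e., just short of the nominal final value.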
For instance, the success rate in stratum $k$ for treatment A is $p_A(k) = 0.25 + 0.25((k - 1)/K)$, where $K$ is the total number of strata. While both $p_A$ and $p_B$ increase, the treatment effect remains constant throughout the trial. For both the frequentist and the Bayesian design, the final analysis is stratified for numbers of blocks $K$ with $1 < K < 200$. For traditional RAR ($K$ = 200) and the fixed 1:1 allocation ($K$ = 1), the standard unstratified analysis is used. Details of the analyses are given in the frequentist and Bayesian approach subsections. \subsection{Frequentist Approach} The allocation rule for the frequentist approach is based on the optimal allocation ratio for two-armed trials specified by Rosenberger et al. \citep{rosenberger2001optimal}. The allocation probability for treatment A in stratum $j$ is defined as $$\pi_{j, A} = \frac{\sqrt{\hat{p}_A}}{\sqrt{\hat{p}_A} + \sqrt{\hat{p}_B}},$$ where $\hat{p}_A$ is the estimated success rate of treatment A and $\hat{p}_B$ is the estimated success rate of treatment B up to block $j - 1$. The allocation probability is only altered once both events (success and failure) have been observed in both arms. Simulations with early stopping are also included; the alpha-spending approach was used to stop early for success or failure \citep{demets1994interim}. For traditional RAR and fixed allocation, a one-sided chi-square test was used to analyze the outcome, whereas for the block design a one-sided Cochran-Mantel-Haenszel test was used to handle the stratification. Yates's correction was not applied to either the chi-square or the Cochran-Mantel-Haenszel test because of the overly conservative nature of the correction \citep{haviland1990yates}. Treatment B is declared superior to treatment A if the one-sided p-value is $< 0.05$. The mean proportion of the treatment difference for the stratified design is computed as follows.
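The square-root allocation rule above can be sketched as a one-liner; a minimal illustration (names are ours) showing that equal estimated rates give 1:1 allocation and that a better-performing arm B tilts the ratio toward B:

```python
import math

def allocation_prob_A(p_hat_A, p_hat_B):
    """Rosenberger-type optimal allocation probability for arm A:
    proportional to the square root of the estimated success rates."""
    sA, sB = math.sqrt(p_hat_A), math.sqrt(p_hat_B)
    return sA / (sA + sB)

print(allocation_prob_A(0.25, 0.25))         # 0.5 (keeps 1:1)
print(allocation_prob_A(0.25, 0.45) < 0.5)   # True (tilts toward B)
```

In the block design, `p_hat_A` and `p_hat_B` would be pooled estimates from all blocks up to $j - 1$.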
In the two-treatment scenario (treatments A and B), suppose there are $K$ strata, and let $\hat{p}_{Ak}$ and $\hat{p}_{Bk}$ be the observed proportions of success in treatments A and B in stratum $k$. Let $n_{Ak}$ and $n_{Bk}$ be the numbers of patients in treatments A and B in stratum $k$. Let $$\hat{\delta}_k = \hat{p}_{Bk} - \hat{p}_{Ak}$$ be the observed difference in proportions between treatments B and A in stratum $k$. The observed proportion of the treatment difference ($\delta$) is computed as $$\hat{\delta} = \sum^K_{k = 1} w_k \hat{\delta}_k,$$ where $w_k = \frac{(n_{Ak}^{-1} + n_{Bk}^{-1})^{-1}}{\sum_{k = 1}^K (n_{Ak}^{-1} + n_{Bk}^{-1})^{-1}}$ is the weight of stratum $k$ and $\sum_{k = 1}^K w_k = 1$. For the non-stratified designs, where the number of blocks is $K$ = 1 or 200, the difference in proportions between treatments B and A is obtained as $$\hat{\delta} = \hat{p}_B - \hat{p}_A,$$ where $\hat{p}_B$ is the estimated proportion of success in treatment B and $\hat{p}_A$ is the estimated proportion of success in treatment A. \subsection{Bayesian Approach} In the Bayesian approach, the Bayesian adaptive randomization (BAR(1/2)) method introduced by Thall and Wathen is employed \citep{thall2007practical}. The probability of randomizing subjects to treatment A in stratum $j$ is defined as $$\pi_{j, A} = \frac{(p_{A>B}(data))^{1/2}}{(p_{A>B}(data))^{1/2} + (p_{B>A}(data))^{1/2}},$$ where $p_{A>B}(data)$ is the posterior probability that treatment A has a higher success rate than treatment B and $p_{A>B}(data) = 1 - p_{B>A}(data)$. The beta-binomial conjugate prior is used for the estimation of the posterior probability.
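This beta-binomial computation can be sketched as follows; a minimal Monte Carlo illustration (function names are ours), assuming independent Beta(0.5, 0.5) priors as in Thall and Wathen:

```python
import random

def prob_B_beats_A(y_A, N_A, y_B, N_B, a0=0.5, b0=0.5,
                   n_draws=20000, rng=random):
    """Monte Carlo estimate of P(p_B > p_A | data) under independent
    Beta(a0, b0) priors and binomial likelihoods."""
    wins = 0
    for _ in range(n_draws):
        pA = rng.betavariate(y_A + a0, N_A - y_A + b0)
        pB = rng.betavariate(y_B + a0, N_B - y_B + b0)
        wins += pB > pA
    return wins / n_draws

def bar_half_prob_A(p_A_gt_B):
    """BAR(1/2) randomization probability for arm A."""
    a = p_A_gt_B ** 0.5
    b = (1.0 - p_A_gt_B) ** 0.5
    return a / (a + b)

rng = random.Random(7)
p = prob_B_beats_A(10, 50, 20, 50, rng=rng)  # arm B looks better here
print(p > 0.8, bar_half_prob_A(1 - p) < 0.5)  # True True
```

The square-root transform in BAR(1/2) damps the allocation shift relative to using the raw posterior probability, which is the point of the $1/2$ exponent.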
The posterior probability that treatment A has a higher event rate than treatment B is \begin{equation*} \begin{split} p_{A>B}(data) = P\big( & beta(y_A + a_0, N_A - y_A + b_0) \\ & - beta(y_B + a_0, N_B - y_B + b_0) > 0 \big), \end{split} \end{equation*} where $y_A$ and $y_B$ denote the numbers of events in treatments A and B, $N_A$ and $N_B$ denote the numbers of subjects in treatments A and B, and $a_0$ and $b_0$ denote the shape parameters of the beta prior. Following Thall and Wathen's approach, beta(0.5, 0.5) priors were assumed for both treatments A and B \citep{thall2007practical}. Since there is no closed-form solution for the difference of beta-distributed random variables, Monte Carlo simulation was used to estimate the posterior distribution of the treatment difference. As in the frequentist design, simulations with the possibility of early stopping are included: if $P_{B>A}(data) > 0.99$, the trial is stopped early for success, and if $P_{B>A}(data) < 0.01$, the trial is stopped early for failure. For the non-stratified designs, where $K$ = 1 or 200, if the final posterior probability $P_{B>A}(data) > 0.95$, then treatment B is declared superior to treatment A. The mean estimate of the proportion of the treatment difference is obtained by Monte Carlo simulation, using the mean value of $P_{B>A}(data)$. For the stratified designs, Bayesian logistic regression was implemented to estimate the posterior probability of the treatment difference. The logistic regression model is defined as $$\mathrm{logit}(y_{ij}) \sim \beta_0 + \beta_{trt} x_{ij} + \sum_{j = 1}^K \beta_{j} x_{ij},$$ where $y_{ij}$ is the outcome for treatment $i$ in stratum $j$, $x_{ij}$ is the indicator variable for treatment $i$ and stratum $j$, $\beta_0$ is the intercept term, $\beta_{trt}$ is the treatment effect, and $\beta_j$ is the effect of stratum $j$.
The uninformative priors applied in the model are $$\beta_0, \beta_{trt}, \beta_j \sim N(0, \sigma^2_j), \qquad \sigma_j^2 \sim \mathrm{Inv}\text{-}\chi^2(1, 2.5).$$ The posterior distribution of $\beta_{trt}$ is used to estimate the proportion difference between the treatments. The R package \textit{arm} with the \textit{quasi} family and $\mu(1-\mu)$ link was used to fit the Bayesian logistic regression and obtain the posterior samples of $\beta_{trt}$ \citep{arm}. The proportion difference between the treatments is estimated using the mean posterior value of $\beta_{trt}$. When the difference in proportions between B and A is estimated using $\beta_{trt}$, treatment B is declared superior if $P(\beta_{trt} > 0) > 0.95$. \section{Results} For all cases, the simulation was replicated 10,000 times. The simulations were run for both the Bayesian and the frequentist design, with and without time trends. The numbers of blocks chosen are $K$ = 1, 2, 4, 5, 10, 20, 100, 200. For all simulations, we report the power (the probability of declaring treatment B superior to treatment A), the bias, the probability of a sample imbalance of more than 20 patients assigned to treatment A over treatment B, and the mean, 2.5\%, and 97.5\% percentiles of $N_B - N_A$ (the sample-size imbalance favoring treatment B over treatment A). The power reflects the proportion of the 10,000 trials that declare treatment B superior to treatment A. The type I error (false-positive rate) is simulated by setting $p_A = p_B = 0.25$. The bias is the error in the estimated difference in treatment proportions, bias $= \hat{\delta} - \delta$. $\pi_{20}$, the probability of assigning 20 or more subjects to treatment A over treatment B ($P(N_A - N_B > 20)$), is included in the simulation. Due to the nontrivial possibility of assigning more patients to the inferior arm by chance, it is vital to analyze $\pi_{20}$.
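The imbalance summaries just defined can be computed as sketched below; a minimal illustration (names are ours) using the standard-library `statistics.quantiles` with $n = 40$ so that the first and last cut points approximate the 2.5\% and 97.5\% percentiles:

```python
import statistics

def imbalance_summary(diffs, threshold=20):
    """Summarize N_B - N_A across simulated trials: mean, approximate
    2.5%/97.5% percentiles, and pi_20 = P(N_A - N_B > threshold),
    i.e., the chance of a large imbalance favoring the inferior arm A."""
    qs = statistics.quantiles(diffs, n=40)  # cut points in 2.5% steps
    pi = sum(d < -threshold for d in diffs) / len(diffs)
    return statistics.mean(diffs), qs[0], qs[-1], pi

# Toy data: N_B - N_A from 10 hypothetical simulated trials.
diffs = [-30, -10, 0, 5, 10, 15, 20, 25, 30, 40]
mean, lo, hi, pi20 = imbalance_summary(diffs)
print(round(mean, 1), pi20)  # 10.5 0.1
```

Here only one of the ten toy trials has $N_A - N_B > 20$, giving $\pi_{20} = 0.1$.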
To highlight the main advantage of RAR compared with equal randomization, the difference in sample size between the superior and inferior arms is presented. Since the difference between the numbers of subjects assigned to treatments B and A ($N_B - N_A$) is skewed and dispersed, as illustrated in Figure~\ref{plot:ssdiff}, the mean, 2.5\%, and 97.5\% percentiles of $N_B - N_A$ are reported. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{samplesizediff.pdf} \caption{Distribution of the difference in the number of patients assigned to treatment B compared with treatment A ($N_B - N_A$) for a 200-patient trial with no early stopping, using both the Bayesian and the frequentist stratified design with 10 blocks. The sample-size difference is collected over 100 different simulations with $p_A = 0.25$ and $p_B = 0.35$. The black dashed line indicates $\pi_{20}$.} \label{plot:ssdiff} \end{figure} \subsection{Simulation with No Time-Trend} Tables~\ref{freqnoearlystopnodrift} and \ref{bayesnoearlystopnodrift} display the results of the simulations for numbers of blocks $K$ = 1, 2, 4, 5, 10, 20, 100, 200 using the frequentist and Bayesian approaches with no drift applied and no early stopping for success or failure. The frequentist approach (Table~\ref{freqnoearlystopnodrift}) manages to control the type I error, unlike the Bayesian design in Table~\ref{bayesnoearlystopnodrift}. Even though the stratified RAR designs have a higher type I error, it is still controlled at the nominal level (Table~\ref{freqnoearlystopnodrift}). However, the type I error is high for 4, 5, 10, and 20 blocks under the Bayesian design (Table~\ref{bayesnoearlystopnodrift}). The block designs with 2, 4, 5, and 10 blocks provide higher power when $p_B = 0.35, 0.45$ (Table~\ref{freqnoearlystopnodrift}). In the Bayesian design, the fixed randomization ratio provides the best power.
Although the type I error is close to the nominal level under K = 2, 4 and 5, with a slight increase in sample size it could be lowered to the nominal level. A small increase in sample size might still be favorable if more subjects are treated with the best possible cure. Block design with a small number of subjects in a block should not be considered due to low power in the design as shown in Table~\ref{freqnoearlystopnodrift} and \ref{bayesnoearlystopnodrift} with 100 blocks. This poor performance is due to the reality that some of the blocks can be noninformative if subjects in the block are randomly assigned to the small treatment group. Ethically, this design (2 subjects per block) places more subjects at risk without contributing to the advancement in science. \begin{table}[ht] \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|c|} \hline $p_B$ & Block & Power & Bias & $\pi_{20}$ & $N_B - N_A$ \\ \hline 0.25 & 1 & 0.03 & 0.00 & 0.06 & 0.00 (-28, 28) \\ & 2 & 0.05 & 0.00 & 0.03 & 6.97 (-22.00, 36) \\ & 4 & 0.05 & 0.00 & 0.02 & 11.80 (-18, 44) \\ & 5 & 0.05 & 0.00 & 0.01 & 13.01 (-18, 46) \\ & 10 & 0.05 & 0.00 & 0.01 & 14.63 (-16, 48) \\ & 20 & 0.05 & 0.00 & 0.01 & 15.02 (-16, 48) \\ & 100 & 0.05 & 0.00 & 0.01 & 15.24 (-16, 48) \\ & 200 & 0.03 & 0.00 & 0.01 & 15.11 (-16, 48) \\ \hline 0.35 & 1 & 0.33 & 0.00 & 0.07 & 0.15 (-28, 28) \\ & 2 & 0.45 & 0.00 & 0.02 & 9.94 (-20, 42) \\ & 4 & 0.45 & 0.00 & 0.01 & 15.83 (-16, 52) \\ & 5 & 0.46 & 0.00 & 0.01 & 16.83 (-16, 52) \\ & 10 & 0.44 & 0.00 & 0.01 & 19.28 (-14, 58) \\ & 20 & 0.42 & 0.00 & 0.01 & 19.97 (-14, 58) \\ & 100 & 0.20 & 0.00 & 0.01 & 20.26 (-12, 58) \\ & 200 & 0.35 & 0.00 & 0.01 & 20.61 (-14, 58) \\ \hline 0.45 & 1 & 0.85 & 0.00 & 0.07 & 0.04 (-28, 28) \\ & 2 & 0.91 & 0.00 & 0.01 & 14.99 (-16, 48) \\ & 4 & 0.90 & 0.00 & 0.00 & 23.26 (-10, 60) \\ & 5 & 0.91 & 0.00 & 0.00 & 25.26 (-8, 62) \\ & 10 & 0.90 & 0.00 & 0.00 & 28.62 (-6, 66) \\ & 20 & 0.88 & 0.00 & 0.00 & 29.03 (-6, 68) \\ & 100 & 0.47 & 
0.00 & 0.00 & 29.75 (-4, 68) \\ & 200 & 0.84 & 0.00 & 0.00 & 29.80 (-6, 70) \\ \hline \end{tabular} } \caption{RAR using the frequentist approach with no early stopping criteria and no drift applied. $p_A$ is set to 0.25 for all cases. $Bias = \hat{\delta} - \delta$. $\pi_{20}$ denotes the probability of assigning more than 20 patients to the inferior treatment group, $P(N_A - N_B > 20)$. $N_A$ and $N_B$ denote the numbers of patients assigned to treatments A and B. The mean (2.5\%, 97.5\%) of $N_B - N_A$ is reported in the last column. 10,000 simulations were done for each case.} \label{freqnoearlystopnodrift} \end{table} The estimation of the actual treatment effect is unbiased under the frequentist design regardless of the number of blocks used. However, the estimation of the treatment effect is biased for most stratified RAR designs and for traditional RAR, as shown in Table~\ref{bayesnoearlystopnodrift}. Unlike in the Bayesian approach (Table~\ref{bayesnoearlystopnodrift}), the variability in the sample sizes assigned to treatments A and B is smaller in the frequentist approach (Table~\ref{freqnoearlystopnodrift}). The mean difference in treatment assignment is also smaller in the frequentist design than in the Bayesian design. Thus, the frequentist design is more conservative in assigning patients to the better-performing treatment than the Bayesian design. Even at the largest difference in proportions, $p_B = 0.45$, $p_A = 0.25$, there is still room for imbalance in the wrong direction in the frequentist design compared with the Bayesian design (Tables~\ref{freqnoearlystopnodrift} and \ref{bayesnoearlystopnodrift}). The imbalance in sample size favoring the inferior treatment is small under all scenarios, as shown by $\pi_{20}$ in Table~\ref{freqnoearlystopnodrift}. The difference in sample size ($N_B - N_A$) is relatively small and close to 0 for the frequentist design.
On the other hand, the imbalance is large for the Bayesian design, as illustrated by $\pi_{20}$ in Table~\ref{bayesnoearlystopnodrift}. A large difference in sample size (more than half the total sample size) is seen in the Bayesian design for the most extreme difference in the outcomes of treatments A and B. Tables~\ref{freqearlystopwnodrift} and \ref{bayesearlystopnodrift} display the results of the simulations for numbers of blocks $K$ = 1, 2, 4, 5, 10, 20, 100, 200 using the frequentist and Bayesian approaches with early stopping criteria for success or failure implemented. Parallel to the earlier results, the bias is higher in Tables~\ref{freqearlystopwnodrift} and \ref{bayesearlystopnodrift}. Table~\ref{bayesearlystopwdrift} emphasizes that $K$ = 10, 20, 100, and 200 have an inflated type I error (0.12, 0.16, and 0.18), similar to Table~\ref{bayesnoearlystopwdrift}. As seen from the type I error and power in Tables~\ref{freqearlystopwdrift} and \ref{bayesearlystopwdrift}, a Bayesian design with a large number of strata should not be considered for clinical studies.
\begin{table}[ht] \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|} \hline $p_B$ & Block & Power & Bias & $\pi_{20}$ & $N_B - N_A$ \\ \hline 0.25 & 1 & 0.05 & 0.00 & 0.07 & -0.09 (-28, 28) \\ & 2 & 0.06 & 0.00 & 0.31 & -0.01 (-78, 76) \\ & 4 & 0.08 & 0.00 & 0.36 & -1.13 (-106, 104) \\ & 5 & 0.08 & 0.00 & 0.36 & -0.14 (-112, 110) \\ & 10 & 0.13 & 0.00 & 0.37 & 0.63 (-124, 124) \\ & 20 & 0.14 & 0.01 & 0.39 & -1.48 (-132, 128) \\ & 100 & 0.01 & 0.00 & 0.38 & 0.58 (-132, 132) \\ & 200 & 0.07 & 0.00 & 0.39 & -1.05 (-132, 132) \\ \hline 0.35 & 1 & 0.46 & 0.00 & 0.07 & -0.06 (-28, 28) \\ & 2 & 0.46 & 0.01 & 0.06 & 39.73 (-38, 98) \\ & 4 & 0.46 & 0.01 & 0.07 & 57.26 (-48, 136) \\ & 5 & 0.46 & 0.01 & 0.07 & 60.77 (-50, 144) \\ & 10 & 0.50 & 0.02 & 0.07 & 66.64 (-56, 156) \\ & 20 & 0.58 & 0.01 & 0.07 & 69.57 (-62, 162) \\ & 100 & 0.08 & -0.08 & 0.07 & 71.26 (-64, 164) \\ & 200 & 0.44 & 0.01 & 0.07 & 72.48 (-60, 164) \\ \hline 0.45 & 1 & 0.91 & 0.00 & 0.07 & -0.03 (-28, 28) \\ & 2& 0.89 & 0.01 & 0.01 & 69.77 (2, 110) \\ & 4 & 0.87 & 0.03 & 0.01 & 99.58 (12, 150) \\ & 5 & 0.86 & 0.02 & 0.01 & 103.36 (14, 158) \\ & 10 & 0.86 & 0.03 & 0.00 & 113.34 (22, 170) \\ & 20 & 0.88 & 0.02 & 0.01 & 117.73 (26, 176) \\ & 100 & 0.29 & -0.12 & 0.01 & 119.93 (24, 178) \\ & 200 & 0.86 & 0.02 & 0.00 & 120.38 (24, 176) \\ \hline \end{tabular} } \caption{RAR using Bayesian approach with no early stopping criteria and with no drift applied. The $p_A$ is set to 0.25 for all cases. $Bias = \hat{\delta} - \delta$. $\pi_{20}$ denote the probability of assigning more than 20 patients in the inferior treatment group. $N_A$ and $N_B$ denote the number of patient assigned to treatment A and B. The mean (2.5\%, 97.5\%) of $N_B - N_A$ is reported in the last column. 10,000 simulation was done for each case. 
} \label{bayesnoearlystopnodrift} \end{table} \subsection{Simulation with Time-Trend} Tables~\ref{freqnoearlystopwdrift} and \ref{bayesnoearlystopwdrift} display the results of the simulations for numbers of blocks $K$ = 1, 2, 4, 5, 10, 20, 100, 200 using the frequentist and Bayesian approaches with a drift of 0.25 applied and no early stopping for success or failure. With the time drift, the false-positive rate is still below the nominal level for the frequentist design with all block sizes, as seen in Table~\ref{freqnoearlystopwdrift}. However, the type I error is inflated for traditional RAR and for 10 and 20 blocks in the Bayesian design. Designs with 2, 4, 5, 10, and 20 blocks remain the most powerful under the most extreme difference between $p_A$ and $p_B$. In the Bayesian design, a block size of 50 subjects is still comparable to the traditional fixed-randomization-ratio design, since the type I error is controlled below 0.05 and it has the highest power, 0.91, under the maximum difference between $p_A$ and $p_B$. A large number of blocks still performs poorly, as illustrated earlier in Table~\ref{bayesnoearlystopnodrift}. The estimation of the treatment difference remains similar to the simulations with no time-trend, except that the bias is a little higher under the traditional RAR design, as presented in Table~\ref{bayesnoearlystopwdrift}. The difference in sample size, the imbalance in the wrong direction, and the variability between the arms ($N_B - N_A$) remain comparable to the simulations without the time drift applied. Tables~\ref{freqearlystopwdrift} and \ref{bayesearlystopwdrift} display the results of the simulations for numbers of blocks $K$ = 1, 2, 4, 5, 10, 20, 100, 200 using the frequentist and Bayesian approaches with early stopping criteria for success or failure implemented. The output in Table~\ref{freqearlystopwdrift} is similar to Table~\ref{freqnoearlystopwdrift}, except that the treatment difference is slightly biased.
However, it is shown that clinical trials with early stopping are slightly biased compared to trials without early stopping criteria imposed. Table~\ref{bayesearlystopwdrift} emphasizes that K = 10, 20, 100 and 200 have a inflated type I error of 0.14, 0.17 and 0.11 similar to Table~\ref{bayesnoearlystopwdrift}. As seen in type I error and power in Table~\ref{bayesnoearlystopwdrift} and \ref{bayesearlystopwdrift}, large number of stratum design should not be considered for clinical studies. \begin{table}[ht] \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|} \hline $p_B$ & Block & Power & Bias & $\pi_{20}$ & $N_B - N_A$ \\ \hline 0.25 & 1 & 0.03 & 0.00 & 0.07 & 0.00 (-28, 28) \\ & 2 & 0.05 & 0.00 & 0.04 & 5.28 (-24, 34.00)\\ & 4 & 0.05 & 0.00 & 0.02 & 9.56 (-20, 40) \\ & 5 & 0.05 & 0.00 & 0.02 & 10.81(-20, 42) \\ & 10 & 0.05 & 0.00 & 0.01 & 12.90 (-16, 44) \\ & 20 & 0.05 & 0.00 & 0.01 & 13.21 (-16, 46) \\ & 100 & 0.05 & 0.00 & 0.01 & 13.28 (-16, 44) \\ & 200 & 0.03 & 0.00 & 0.01 & 13.23 (-16, 46) \\ \hline 0.35 & 1 & 0.30 & 0.00 & 0.07 & 0.06 (-28, 28) \\ & 2 & 0.40 & 0.00 & 0.03 & 7.09 (-22, 38) \\ & 4 & 0.41 & 0.00 & 0.02 & 12.16 (-18, 44) \\ & 5 & 0.41 & 0.00 & 0.02 & 13.55 (-18, 48) \\ & 10 & 0.40 & 0.00 & 0.01 & 16.36 (-14, 52) \\ & 20 & 0.38 & 0.00 & 0.01 & 16.92 (-14, 50) \\ & 100 & 0.18 & 0.00 & 0.01 & 17.30 (-14, 52) \\ & 200 & 0.29 & 0.00 & 0.01 & 17.61 (-14, 52) \\ \hline 0.45 & 1 & 0.82 & 0.00 & 0.07 & -0.07 (-28, 28) \\ & 2 & 0.89 & 0.00 & 0.02 & 11.02 (-20, 40) \\ & 4 & 0.88 & 0.00 & 0.01 & 18.41 (-14, 52) \\ & 5 & 0.88 & 0.00 & 0.01 & 20.40 (-12, 56) \\ & 10 & 0.88 & 0.00 & 0.00 & 23.70 (-10, 58) \\ & 20 & 0.85 & 0.00 & 0.00 & 24.32 (-8, 60) \\ & 100 & 0.46 & 0.00 & 0.00 & 25.02 (-8, 60) \\ & 200 & 0.80 & 0.00 & 0.00 & 25.31(-8, 62) \\ \hline \end{tabular} } \caption{RAR using frequentist approach with no early stopping criteria and with 0.25 drift applied. The $p_A$ is set to 0.25 for all cases. $Bias = \hat{\delta} - \delta$. 
$\pi_{20}$ denotes the probability of assigning more than 20 patients to the inferior treatment group. $N_A$ and $N_B$ denote the number of patients assigned to treatments A and B. The mean (2.5\%, 97.5\%) of $N_B - N_A$ is reported in the last column. 10,000 simulations were run for each case.} \label{freqnoearlystopwdrift} \end{table}
\begin{table}[ht] \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|} \hline
$p_B$ & Block & Power & Bias & $\pi_{20}$ & $N_B - N_A$ \\ \hline
0.25 & 1 & 0.03 & -0.01 & 0.03 & 0.44 (-21, 28) \\
& 2 & 0.07 & 0.00 & 0.29 & -0.30 (-80, 77) \\
& 4 & 0.05 & -0.01 & 0.41 & -5.84 (-96, 86) \\
& 5 & 0.06 & 0.00 & 0.43 & -1.66 (-117, 113) \\
& 10 & 0.12 & 0.01 & 0.34 & 3.32 (-148, 148) \\
& 20 & 0.20 & 0.02 & 0.33 & 12.40 (-137, 140) \\
& 100 & 0.01 & 0.00 & 0.36 & 4.18 (-130, 141) \\
& 200 & 0.17 & 0.02 & 0.32 & 12.74 (-106, 138) \\ \hline
0.35 & 1 & 0.51 & 0.00 & 0.10 & -1.74 (-32, 26) \\
& 2 & 0.42 & 0.01 & 0.06 & 38.02 (-35, 97) \\
& 4 & 0.35 & 0.01 & 0.07 & 52.78 (-87, 140) \\
& 5 & 0.46 & 0.03 & 0.06 & 68.38 (-48, 148) \\
& 10 & 0.45 & 0.03 & 0.03 & 72.88 (-34, 149) \\
& 20 & 0.51 & 0.02 & 0.11 & 73.24 (-50, 162) \\
& 100 & 0.11 & -0.07 & 0.11 & 74.40 (-80, 162) \\
& 200 & 0.59 & 0.05 & 0.07 & 75.92 (-60, 162) \\ \hline
0.45 & 1 & 0.91 & 0.00 & 0.08 & -0.10 (-35, 28) \\
& 2 & 0.93 & 0.02 & 0.00 & 68.28 (20, 111) \\
& 4 & 0.91 & 0.04 & 0.01 & 105.34 (26, 148) \\
& 5 & 0.79 & 0.03 & 0.01 & 105.96 (5, 160) \\
& 10 & 0.82 & 0.04 & 0.00 & 119.52 (27, 173) \\
& 20 & 0.75 & 0.03 & 0.01 & 119.74 (13, 177) \\
& 100 & 0.22 & -0.13 & 0.00 & 123.12 (34, 175) \\
& 200 & 0.89 & 0.06 & 0.00 & 125.48 (54, 179) \\ \hline
\end{tabular} } \caption{RAR using the Bayesian approach with no early stopping criteria and with drift applied. $p_A$ is set to 0.25 for all cases. $Bias = \hat{\delta} - \delta$. $\pi_{20}$ denotes the probability of assigning more than 20 patients to the inferior treatment group.
$N_A$ and $N_B$ denote the number of patients assigned to treatments A and B. The mean (2.5\%, 97.5\%) of $N_B - N_A$ is reported in the last column. 10,000 simulations were run for each case.} \label{bayesnoearlystopwdrift} \end{table} \section{Conclusion} The RAR design has many appealing properties, chief among them assigning more patients to the better-performing treatment. However, trialists need to be careful with time-trends and with the method used to alter the randomization ratio. Time-trends can significantly inflate the type I error rate, which undermines the validity of clinical studies. As statisticians, we cannot overemphasize the importance of controlling the false-positive rate in a clinical setting. Thall et al. have shown that methods that control the type I error can fail to detect the true treatment difference \citep{thall2015statistical}. Besides controlling the false-positive rate, trialists need to make sure the method of altering the randomization ratio is not so extreme that it inflates the bias and calls into question the notion of randomization in clinical studies. Thall et al. have emphasized the difference in simulations between BAR(1) and BAR(1/2), with BAR(1) having a large imbalance in the wrong direction and a larger false-positive rate. Zelen's play-the-winner rule was implemented in an extracorporeal membrane oxygenation (ECMO) trial, where one patient was initially assigned to each of the control and treatment groups \citep{zelen1969play, mike1993neonatal}. Due to the failure in the control group and the success in the treatment group, all subsequent patients were randomized to the treatment group \citep{mike1993neonatal}. However, it was later discovered that the first patient assigned to the control group was much sicker than all of the patients randomized to the treatment group. On the other hand, scientists need to be aware of the risks that RAR poses, which include assigning more patients to the inferior treatment.
As highlighted by Thall et al., ``\textit{The practical and ethical point is that AR may behave pathologically in that it carries a nontrivial risk of creating a large sample size imbalance in favor of the inferior treatment}'' \citep{thall2015statistical}. A large imbalance in the wrong direction can also be controlled by methods that do not alter the randomization ratio rapidly. It is shown that a small number of blocks (K = 2, 4 and 5) offers a good tradeoff between efficiency and ethically assigning patients to the best-known superior treatment. A large number of blocks should clearly be avoided, both for ethical reasons and because it is a poor design. Traditional RAR not only delays the trial but also affects the clinical conclusions reached. We have not considered multiple-treatment designs with more than two arms; such designs would be much more complex and should be examined further. An R package (blockRAR), implementing the frequentist and Bayesian models, is written in R and released as open-source software under the MIT license. The blockRAR package is available on the Comprehensive R Archive Network (CRAN) and at \href{https://thevaachandereng.github.io/blockRAR/}{https://thevaachandereng.github.io/blockRAR/}. We used blockRAR version 1.0.0 for all analyses. \begin{sm} Supplementary material is available. \end{sm} \begin{dci} None declared. \end{dci} \begin{funding} None declared. \end{funding} \begin{acks} We are grateful to Tom Cook for his helpful comments and suggestions.
\end{acks} \bibliographystyle{SageV} \section{Supplementary Tables}
\begin{table}[H] \centering \resizebox{0.75\textwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|c|} \hline
$p_B$ & Block & Power & Bias & $E(N)$ & $\pi_{20}$ & $N_B - N_A$ \\ \hline
0.25 & 1 & 0.02 & 0.00 & 200.00 & 0.07 & -0.33 (-28, 26) \\
& 2 & 0.05 & 0.00 & 198.87 & 0.03 & 6.98 (-22, 36) \\
& 4 & 0.06 & 0.00 & 197.13 & 0.02 & 11.41 (-18, 42) \\
& 5 & 0.06 & 0.00 & 196.80 & 0.02 & 12.10 (-18, 44) \\
& 10 & 0.06 & 0.00 & 195.50 & 0.01 & 13.64 (-16, 46) \\
& 20 & 0.06 & 0.00 & 194.94 & 0.01 & 14.26 (-16, 44) \\
& 100 & 0.06 & 0.00 & 194.88 & 0.01 & 14.40 (-16, 46) \\
& 200 & 0.05 & 0.00 & 194.75 & 0.01 & 14.36 (-16, 44) \\ \hline
0.35 & 1 & 0.33 & 0.10 & 200.00 & 0.07 & 0.10 (-28, 28) \\
& 2 & 0.46 & 0.01 & 193.04 & 0.03 & 8.04 (-22, 38) \\
& 4 & 0.47 & 0.01 & 183.53 & 0.01 & 12.63 (-18, 42) \\
& 5 & 0.46 & 0.01 & 182.57 & 0.01 & 13.28 (-16, 44) \\
& 10 & 0.48 & 0.02 & 177.72 & 0.01 & 15.57 (-14, 46) \\
& 20 & 0.46 & 0.02 & 176.53 & 0.01 & 16.20 (-14, 46) \\
& 100 & 0.21 & 0.02 & 175.45 & 0.01 & 15.69 (-14, 44) \\
& 200 & 0.43 & 0.02 & 174.71 & 0.01 & 15.90 (-12, 45) \\ \hline
0.45 & 1 & 0.84 & 0.00 & 200.00 & 0.07 & 0.14 (-28, 28) \\
& 2 & 0.91 & 0.02 & 166.63 & 0.02 & 7.65 (-18, 38) \\
& 4 & 0.91 & 0.02 & 147.84 & 0.01 & 12.78 (-14, 40) \\
& 5 & 0.91 & 0.02 & 144.01 & 0.00 & 13.82 (-12, 40) \\
& 10 & 0.91 & 0.03 & 135.66 & 0.00 & 15.86 (-10, 42) \\
& 20 & 0.88 & 0.03 & 132.36 & 0.00 & 16.30 (-8, 42) \\
& 100 & 0.40 & 0.03 & 129.44 & 0.00 & 16.30 (-8, 42) \\
& 200 & 0.88 & 0.03 & 129.05 & 0.00 & 16.33 (-8, 42) \\ \hline
\end{tabular} } \caption{RAR using the frequentist approach with early stopping criteria and no drift applied. $p_A$ is set to 0.25 for all cases. $E(N)$ represents the mean sample size over 10,000 simulations. $Bias = \hat{\delta} - \delta$. $\pi_{20}$ denotes the probability of assigning more than 20 patients to the inferior treatment group, $P(N_A - N_B > 20)$.
$N_A$ and $N_B$ denote the number of patients assigned to treatments A and B. The mean (2.5\%, 97.5\%) of $N_B - N_A$ is reported in the last column. 10,000 simulations were run for each case.} \label{freqearlystopwnodrift} \end{table}
\begin{table}[H] \centering \resizebox{0.75\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|c|} \hline
$p_B$ & Block & Power & Bias & $E(N)$ & $\pi_{20}$ & $N_B - N_A$ \\ \hline
0.25 & 1 & 0.02 & 0.00 & 200.00 & 0.07 & 0.20 (-28, 28) \\
& 2 & 0.05 & 0.00 & 198.81 & 0.04 & 5.07 (-24, 34) \\
& 4 & 0.06 & 0.00 & 197.06 & 0.02 & 9.37 (-20, 38) \\
& 5 & 0.06 & 0.00 & 196.58 & 0.02 & 10.20 (-18, 40) \\
& 10 & 0.06 & 0.00 & 195.63 & 0.01 & 12.04 (-18, 42) \\
& 20 & 0.07 & 0.00 & 194.63 & 0.01 & 12.65 (-18, 42) \\
& 100 & 0.05 & 0.00 & 194.73 & 0.01 & 12.72 (-16, 44) \\
& 200 & 0.05 & 0.00 & 194.43 & 0.01 & 12.88 (-16, 42) \\ \hline
0.35 & 1 & 0.30 & 0.00 & 200.00 & 0.07 & -0.34 (-28, 28) \\
& 2 & 0.42 & 0.01 & 193.08 & 0.03 & 5.77 (-22, 34) \\
& 4 & 0.43 & 0.01 & 184.22 & 0.02 & 10.03 (-18, 40) \\
& 5 & 0.44 & 0.01 & 182.46 & 0.01 & 11.32 (-16, 40) \\
& 10 & 0.44 & 0.02 & 179.00 & 0.01 & 13.23 (-16, 42) \\
& 20 & 0.41 & 0.02 & 177.90 & 0.01 & 14.00 (-16, 42) \\
& 100 & 0.19 & 0.02 & 176.47 & 0.01 & 13.82 (-14, 42) \\
& 200 & 0.40 & 0.02 & 175.74 & 0.01 & 13.82 (-14, 42) \\ \hline
0.45 & 1 & 0.83 & 0.00 & 200.00 & 0.07 & -0.18 (-28, 26) \\
& 2 & 0.89 & 0.02 & 170.19 & 0.02 & 5.86 (-20, 34) \\
& 4 & 0.89 & 0.02 & 149.63 & 0.01 & 10.35 (-16, 36) \\
& 5 & 0.89 & 0.02 & 144.74 & 0.01 & 11.99 (-14, 38) \\
& 10 & 0.88 & 0.03 & 137.67 & 0.00 & 13.82 (-12, 40) \\
& 20 & 0.86 & 0.03 & 134.38 & 0.00 & 14.52 (-10, 40) \\
& 100 & 0.38 & 0.03 & 131.54 & 0.00 & 14.73 (-10, 40) \\
& 200 & 0.86 & 0.03 & 130.83 & 0.00 & 14.71 (-10, 41) \\ \hline
\end{tabular} } \caption{RAR using the frequentist approach with early stopping criteria and with a drift of 0.25 applied. $p_A$ is set to 0.25 for all cases. $E(N)$ represents the mean sample size over 10,000 simulations.
$Bias = \hat{\delta} - \delta$. $\pi_{20}$ denotes the probability of assigning more than 20 patients to the inferior treatment group. $N_A$ and $N_B$ denote the number of patients assigned to treatments A and B. The mean (2.5\%, 97.5\%) of $N_B - N_A$ is reported in the last column. 10,000 simulations were run for each case.} \label{freqearlystopwdrift} \end{table}
\begin{table}[H] \centering \resizebox{0.75\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|c|} \hline
$p_B$ & Block & Power & Bias & $E(N)$ & $\pi_{20}$ & $N_B - N_A$ \\ \hline
0.25 & 1 & 0.05 & 0.00 & 200.00 & 0.07 & -0.11 (-28, 28) \\
& 2 & 0.07 & 0.00 & 197.63 & 0.30 & -0.13 (-72, 72) \\
& 4 & 0.08 & 0.00 & 194.81 & 0.35 & -0.60 (-94, 92) \\
& 5 & 0.09 & 0.00 & 193.40 & 0.34 & 0.39 (-96, 96) \\
& 10 & 0.14 & 0.01 & 187.42 & 0.36 & -0.51 (-100, 100) \\
& 20 & 0.17 & 0.02 & 183.41 & 0.35 & -0.03 (-102, 102) \\
& 100 & 0.03 & 0.01 & 179.80 & 0.36 & -0.84 (-102, 100) \\
& 200 & 0.11 & 0.00 & 178.16 & 0.35 & 0.78 (-100, 102) \\ \hline
0.35 & 1 & 0.46 & 0.00 & 200.00 & 0.07 & 0.01 (-28, 28) \\
& 2 & 0.46 & 0.01 & 188.94 & 0.07 & 29.57 (-42, 86) \\
& 4 & 0.47 & 0.02 & 181.31 & 0.06 & 40.88 (-46, 110) \\
& 5 & 0.47 & 0.02 & 178.76 & 0.07 & 42.38 (-50, 112) \\
& 10 & 0.53 & 0.04 & 171.69 & 0.06 & 45.63 (-50, 118) \\
& 20 & 0.58 & 0.04 & 164.57 & 0.07 & 43.95 (-52, 118) \\
& 100 & 0.14 & -0.04 & 158.05 & 0.07 & 43.38 (-52, 118) \\
& 200 & 0.51 & 0.05 & 154.43 & 0.07 & 41.65 (-50, 116) \\ \hline
0.45 & 1 & 0.91 & 0.00 & 200.00 & 0.07 & -0.09 (-28, 28) \\
& 2 & 0.89 & 0.01 & 158.55 & 0.01 & 31.54 (-18, 90) \\
& 4 & 0.88 & 0.04 & 139.12 & 0.01 & 44.53 (-10, 112) \\
& 5 & 0.88 & 0.04 & 136.40 & 0.00 & 47.81 (-8, 118) \\
& 10 & 0.87 & 0.07 & 124.71 & 0.00 & 48.81 (-6, 120) \\
& 20 & 0.89 & 0.07 & 117.98 & 0.00 & 48.68 (-4, 122) \\
& 100 & 0.35 & -0.04 & 107.96 & 0.01 & 44.84 (-2, 118) \\
& 200 & 0.90 & 0.07 & 106.20 & 0.01 & 43.46 (-2, 118) \\ \hline
\end{tabular} } \caption{RAR using the Bayesian approach with early
stopping criteria and with no drift applied. $p_A$ is set to 0.25 for all cases. $E(N)$ represents the mean sample size over 10,000 simulations. $Bias = \hat{\delta} - \delta$. $\pi_{20}$ denotes the probability of assigning more than 20 patients to the inferior treatment group. $N_A$ and $N_B$ denote the number of patients assigned to treatments A and B. The mean (2.5\%, 97.5\%) of $N_B - N_A$ is reported in the last column. 10,000 simulations were run for each case.} \label{bayesearlystopnodrift} \end{table}
\begin{table}[H] \centering \resizebox{0.75\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|c|} \hline
$p_B$ & Block & Power & Bias & $E(N)$ & $\pi_{20}$ & $N_B - N_A$ \\ \hline
0.25 & 1 & 0.05 & 0.00 & 200.00 & 0.07 & 0.31 (-28, 28) \\
& 2 & 0.06 & 0.00 & 197.99 & 0.30 & -0.01 (-74, 72) \\
& 4 & 0.08 & 0.00 & 191.95 & 0.34 & 0.34 (-96, 94) \\
& 5 & 0.09 & 0.00 & 189.51 & 0.36 & -0.31 (-100, 98) \\
& 10 & 0.12 & 0.01 & 182.17 & 0.35 & 1.12 (-98, 100) \\
& 20 & 0.16 & 0.02 & 177.34 & 0.36 & -0.09 (-102, 100) \\
& 100 & 0.04 & 0.01 & 172.12 & 0.35 & -0.01 (-98, 98) \\
& 200 & 0.18 & 0.00 & 169.73 & 0.34 & -0.11 (-98, 96) \\ \hline
0.35 & 1 & 0.41 & 0.00 & 200.00 & 0.06 & 0.36 (-26, 28) \\
& 2 & 0.41 & 0.01 & 190.52 & 0.08 & 28.20 (-40, 86) \\
& 4 & 0.43 & 0.02 & 175.93 & 0.08 & 37.48 (-58, 108) \\
& 5 & 0.44 & 0.03 & 172.72 & 0.08 & 39.44 (-56, 110) \\
& 10 & 0.46 & 0.04 & 163.12 & 0.07 & 40.31 (-58, 114) \\
& 20 & 0.52 & 0.05 & 154.79 & 0.07 & 39.74 (-54, 114) \\
& 100 & 0.16 & -0.03 & 148.25 & 0.07 & 37.99 (-50, 110) \\
& 200 & 0.60 & 0.06 & 144.84 & 0.07 & 36.12 (-56, 110) \\ \hline
0.45 & 1 & 0.90 & 0.00 & 200.00 & 0.07 & -0.02 (-28, 28) \\
& 2 & 0.86 & 0.01 & 162.58 & 0.01 & 33.17 (-18, 90) \\
& 4 & 0.84 & 0.03 & 135.21 & 0.01 & 41.46 (-10, 110) \\
& 5 & 0.83 & 0.04 & 129.45 & 0.01 & 42.66 (-8, 112) \\
& 10 & 0.82 & 0.06 & 117.76 & 0.01 & 44.40 (-6, 116) \\
& 20 & 0.82 & 0.08 & 108.13 & 0.01 & 42.12 (-4, 114) \\
& 100 & 0.35 & -0.04 & 100.34 & 0.01 & 39.46
(-4, 112) \\ & 200 & 0.93 & 0.09 & 95.66 & 0.01 & 37.27 (-4, 110) \\ \hline \end{tabular} } \caption{RAR using the Bayesian approach with early stopping criteria and with drift applied. $p_A$ is set to 0.25 for all cases. $E(N)$ represents the mean sample size over 10,000 simulations. $Bias = \hat{\delta} - \delta$. $\pi_{20}$ denotes the probability of assigning more than 20 patients to the inferior treatment group. $N_A$ and $N_B$ denote the number of patients assigned to treatments A and B. The mean (2.5\%, 97.5\%) of $N_B - N_A$ is reported in the last column. 10,000 simulations were run for each case.} \label{bayesearlystopwdrift} \end{table}
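For readers who wish to reproduce the flavor of these simulations outside of blockRAR, the following is a minimal, self-contained sketch of one block-randomized RAR trial in Python. This is our own simplified illustration, not the blockRAR implementation: the allocation rule, the Beta(1, 1) priors, the 0.1--0.9 bounds on the allocation probability and the linear time-trend are all assumptions made for the sketch.

```python
import random

def block_rar_trial(p_a, p_b, n=200, k=4, drift=0.0, seed=0):
    """Simulate one two-arm binary trial with block-wise response-adaptive
    randomization: the allocation probability to arm B is updated only
    between the K equal-sized blocks, using posterior mean success rates
    under independent Beta(1, 1) priors.

    Returns (successes_a, n_a, successes_b, n_b).
    """
    rng = random.Random(seed)
    s = {"A": 0, "B": 0}   # successes per arm
    m = {"A": 0, "B": 0}   # patients per arm
    prob_b = 0.5           # allocation probability, updated between blocks
    block = n // k
    for i in range(n):
        if i > 0 and i % block == 0:
            # posterior means under Beta(1, 1) priors
            pa = (s["A"] + 1) / (m["A"] + 2)
            pb = (s["B"] + 1) / (m["B"] + 2)
            # simple adaptive ratio, bounded away from 0 and 1
            prob_b = min(0.9, max(0.1, pb / (pa + pb)))
        arm = "B" if rng.random() < prob_b else "A"
        # optional linear time-trend: both arms improve during the trial
        p = (p_b if arm == "B" else p_a) + drift * i / n
        s[arm] += rng.random() < p
        m[arm] += 1
    return s["A"], m["A"], s["B"], m["B"]
```

Because the allocation probability is frozen within each block, a run with `k=1` reduces to fixed 1:1 randomization, while larger `k` updates the ratio more often; averaged over many seeds with $p_B > p_A$, the sketch assigns more patients to arm B.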
\section{Introduction}\label{intro} Let $(p,q)$ be an interval of the real line and $p_0=p<p_1<\dots<p_{k-1} <p_k=q$ be a finite collection of points that split $(p,q)$ into subintervals $(p_{i-1},p_i)$, $i=1,2,\dots,k$. A transformation $f$ of the interval $(p,q)$ that rearranges the subintervals by translation is called an {\em interval exchange transformation\/} (see Figure \ref{fig1}). To be precise, the restriction of $f$ to any $(p_{i-1},p_i)$ is a translation, the translated subinterval remains within $(p,q)$, and it does not overlap with the other translated subintervals. The definition is still incomplete, as the values of $f$ at the points $p_i$ are not specified. The standard way to do this, which we adopt, is to require that $f$ be right continuous. That is, we consider the half-closed interval $I=[p,q)$ partitioned into smaller half-closed intervals $I_i=[p_{i-1},p_i)$. The interval exchange transformation $f$ is to translate each $I_i$ so that the images $f(I_1),f(I_2),\dots,f(I_k)$ form another partition of $I$. Let $\la$ be a $k$-dimensional vector whose coordinates are the lengths of the intervals $I_1,I_2,\dots,I_k$. Let $\pi$ be a permutation on $\{1,2,\dots,k\}$ that tells how the intervals are rearranged by $f$. Namely, $\pi(i)$ is the position of $f(I_i)$ when the intervals $f(I_1),\dots,f(I_k)$ are ordered from left to right. For the example in Figure \ref{fig1}, $\pi=(1\,2\,4\,3)$. We refer to the pair $(\la,\pi)$ as a combinatorial description of $f$. Given an integer $k\ge1$, a $k$-dimensional vector $\la$ with positive coordinates that add up to the length of $I$, and a permutation $\pi$ on $\{1,2,\dots,k\}$, the pair $(\la,\pi)$ determines a unique interval exchange transformation of $I$. The converse is not true. Any partition of $I$ into subintervals that are translated by $f$ gives rise to a distinct combinatorial description. Clearly, such a partition is not unique.
However, there is a unique partition with the smallest possible number of subintervals. \begin{figure}[t] \centerline{\includegraphics[scale=1.2]{iet-example.eps}} \caption{Interval exchange transformation.}\label{fig1} \end{figure} Interval exchange transformations have been popular objects of study in ergodic theory. First of all, the exchange of two intervals is equivalent to a rotation of the circle (it becomes one when we identify the endpoints of the interval $I$, thus producing a circle). The exchanges of three or more intervals were first considered by Katok and Stepin \cite{KS}. The systematic study started with the paper by Keane \cite{Keane}, who coined the term. For an account of the results, see the survey by Viana \cite{Viana}. All interval exchange transformations of a fixed interval $I=[p,q)$ form a transformation group $G_I$. Changing the interval, one obtains an isomorphic group. Indeed, let $J=[p',q')$ be another interval and $h$ be an affine map of $I$ onto $J$. Then $f\in G_J$ if and only if $h^{-1}fh\in G_I$. We refer to any of the groups $G_I$ as the {\em group of interval exchange transformations}. The present notes are concerned with group-theoretic properties of $G_I$. An important tool here is the {\em scissors congruence invariant\/} or the {\em Sah-Arnoux-Fathi (SAF) invariant\/} of $f\in G_I$ introduced independently by Sah \cite{Sah} and Arnoux and Fathi \cite{Arnoux}. The invariant can be informally defined by $$ \SAF(f)=\int_I 1\otimes\bigl(f(x)-x\bigr)\,dx $$ (the integral is actually a finite sum). The importance stems from the fact that $\SAF$ is a homomorphism of the group $G_I$ onto $\bR\wedge_{\bQ}\bR$. As a consequence, the invariant vanishes on the commutator group, which is the subgroup of $G_I$ generated by the commutators $f^{-1}g^{-1}fg$, where $f$ and $g$ run over $G_I$. In this paper, we establish the following properties of the commutator group of the group of interval exchange transformations $G_I$.
\begin{theorem}\label{main1} The following four groups are the same: \begin{itemize} \item the group of interval exchange transformations with zero SAF invariant, \item the commutator group of the group of interval exchange transformations, \item the group generated by interval exchange transformations of order $2$, \item the group generated by interval exchange transformations of finite order. \end{itemize} \end{theorem} \begin{theorem}\label{main2} The quotient of the group of interval exchange transformations by its commutator group is isomorphic to $\bR\wedge_{\bQ}\bR$. \end{theorem} \begin{theorem}\label{main3} The commutator group of the group of interval exchange transformations is simple. \end{theorem} It has to be noted that most of these results are already known. A theorem by Sah reproduced in Veech's paper \cite{Veech} contains Theorems \ref{main2}, \ref{main3}, and part of Theorem \ref{main1}. Unfortunately, the preprint of Sah \cite{Sah} was never published. Hence we include complete proofs. A new result of the present paper is that the commutator group of $G_I$ is generated by elements of order $2$. This is the central result of the paper as our proofs of the theorems are based on the study of elements of order $2$. The paper is organized as follows. Section \ref{elem} contains some elementary constructions that will be used in the proofs of the theorems. The scissors congruence invariant is considered in Section \ref{inv}. Section \ref{comm} is devoted to the proof of Theorem \ref{main1}. Theorem \ref{main2} is proved in the same section. Section \ref{simp} is devoted to the proof of Theorem \ref{main3}. The author is grateful to Michael Boshernitzan for useful and inspiring discussions. \section{Elementary constructions}\label{elem} Let us choose an arbitrary interval $I=[p,q)$. In what follows, all interval exchange transformations are assumed to be defined on $I$.
Also, all subintervals of $I$ are assumed to be half-closed intervals of the form $[x,y)$. The proofs of Theorems \ref{main1} and \ref{main3} are based on several elementary constructions described in this section. First of all, we introduce two basic types of transformations used in those constructions. An {\em interval swap map of type $a$} is an interval exchange transformation that interchanges two nonoverlapping intervals of length $a$ by translation while fixing the rest of the interval $I$. A {\em restricted rotation of type $(a,b)$} is an interval exchange transformation that exchanges two neighboring intervals of lengths $a$ and $b$ (the interval of length $a$ must be to the left of the interval of length $b$) while fixing the rest of $I$. The type of an interval swap map is determined uniquely, and so is the type of a restricted rotation. Clearly, any interval swap map is an involution. The inverse of a restricted rotation of type $(a,b)$ is a restricted rotation of type $(b,a)$. Any restricted rotation of type $(a,a)$ is also an interval swap map of type $a$. \begin{figure}[t] \centerline{\includegraphics[scale=1]{iet-basic.eps}} \caption{An interval swap map and a restricted rotation.}\label{fig2} \end{figure} \begin{lemma}\label{elem1} Any interval exchange transformation $f$ is a product of restricted rotations. Moreover, if $f$ exchanges at least two intervals and $\cL$ is the set of their lengths, it is enough to use restricted rotations of types $(a,b)$ such that $a,b\in\cL$. \end{lemma} \begin{proof} The exchange of one interval is the identity. In this case, take any restricted rotation $h$. Then $f=hh^{-1}$, which is a product of restricted rotations. Now assume that $f$ exchanges $k\ge2$ intervals. Let $I_1,I_2,\dots,I_k$ be the intervals, ordered from left to right, and $(\la,\pi)$ be the corresponding combinatorial description of $f$. Then $\cL$ is the set of coordinates of the vector $\la$. 
For any permutation $\si$ on $\{1,2,\dots,k\}$, let $f_\si$ denote a unique interval exchange transformation with the combinatorial description $(\la,\si)$. Given two permutations $\si$ and $\tau$ on $\{1,2,\dots,k\}$, let $g_{\si,\tau}=f_{\si}f_\tau^{-1}$. For any $i$, $1\le i\le k$, the transformation $g_{\si,\tau}$ translates the interval $f_\tau(I_i)$ onto $f_\si(I_i)$. It follows that $g_{\si,\tau}$ has combinatorial description $(\la',\si\tau^{-1})$, where $\la'_i=\la_{\tau^{-1}(i)}$ for $1\le i\le k$. Now suppose that $\pi$ is expanded into a product of permutations $\pi=\si_1\si_2\dots\si_m$. For any $j$, $1\le j\le m$, let $\pi_j=\si_j\si_{j+1}\dots\si_m$. Then $f=h_1h_2\dots h_m$, where $h_m=f_{\si_m}$ and $h_j=g_{\pi_j,\pi_{j+1}}$ for $1\le j<m$. By the above each $h_j$ has combinatorial description $(\la^{(j)},\si_j)$, where the vector $\la^{(j)}$ is obtained from $\la$ by permutation of its coordinates. In the case $\si_j$ is a transposition of neighboring numbers, $\si_j=(i,\,i+1)$, the transformation $h_j$ is a restricted rotation of type $\bigl(\la^{(j)}_i,\la^{(j)}_{i+1}\bigr)$. Notice that $\la^{(j)}_i,\la^{(j)}_{i+1}\in\cL$. It remains to observe that any permutation $\pi$ on $\{1,2,\dots,k\}$ is a product of transpositions of neighboring numbers. Indeed, we can represent $\pi$ as a product of cycles. Further, any cycle $(n_1\,n_2\,\dots\,n_m)$ of length $m\ge2$ is a product of $m-1$ transpositions: $(n_1\,n_2\,\dots\,n_m)=(n_1\,n_2)(n_2\,n_3)\dots(n_{m-1}\,n_m)$. A cycle of length $1$ is the identity, hence it equals $(1\,2)(1\,2)$. Finally, any transposition $(n\,l)$, $n<l$, is expanded into a product of transpositions of neighboring numbers: $(n\,l)=\tau_n\tau_{n+1}\dots \tau_{l-2}\tau_{l-1}\tau_{l-2}\dots\tau_{n+1}\tau_n$, where $\tau_i =(i,\,i+1)$. \end{proof} Notice that the set $\cL$ in Lemma \ref{elem1} depends on the combinatorial description of the interval exchange transformation $f$. The lemma holds for every version of this set. 
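As an illustrative aside (not part of the original argument), the combinatorial description $(\la,\pi)$ can be turned into a small numerical sketch. The function name and the $0$-based encoding of the permutation are our own conventions.

```python
def iet(lam, pi):
    """Interval exchange transformation of [0, sum(lam)) built from its
    combinatorial description (lam, pi).

    lam -- lengths of the subintervals I_1..I_k, listed left to right
    pi  -- permutation as a 0-based list: pi[i] is the position of the
           image f(I_i) among f(I_1)..f(I_k), ordered left to right
    """
    k = len(lam)
    # left endpoint of I_i
    left = [sum(lam[:i]) for i in range(k)]
    # left endpoint of f(I_i): total length of the images placed before it
    new_left = [sum(lam[j] for j in range(k) if pi[j] < pi[i])
                for i in range(k)]
    shift = [new_left[i] - left[i] for i in range(k)]  # translation of I_i

    def f(x):
        for i in range(k):
            if left[i] <= x < left[i] + lam[i]:
                return x + shift[i]
        raise ValueError("point outside [0, sum(lam))")
    return f

# Exchange of two intervals of lengths 3 and 1 (a "rotation"):
f = iet([3, 1], [1, 0])
```

Here `f` translates $[0,3)$ by $1$ and $[3,4)$ by $-3$, so it permutes the sample points $0,1,2,3$; an exchange of two intervals of equal length is an interval swap map and hence an involution.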
\begin{lemma}\label{elem2} Any interval exchange transformation of finite order is a product of interval swap maps. \end{lemma} \begin{proof} Suppose $J_1,J_2,\dots,J_k$ are nonoverlapping intervals of the same length $a$ contained in an interval $I$. Let $g$ be an interval exchange transformation of $I$ that translates $J_i$ onto $J_{i+1}$ for $1\le i<k$, translates $J_k$ onto $J_1$, and fixes the rest of $I$. If $k\ge2$ then $g$ is the product of $k-1$ interval swap maps of type $a$. Namely, $g=h_1h_2\dots h_{k-1}$, where each $h_i$ interchanges $J_i$ with $J_{i+1}$ by translation while fixing the rest of $I$. In the case $k=1$, $g$ is the identity. Then $g=h_0h_0$ for any interval swap map $h_0$ on $I$. Let $f$ be an interval exchange transformation of $I$ that has finite order. Since there are only finitely many distinct powers of $f$, there are only finitely many points in $I$ at which one of the powers has a discontinuity. Let $I=I_1\cup I_2\cup\ldots\cup I_m$ be a partition of $I$ into subintervals created by all such points. By construction, the restriction of $f$ to any $I_i$ is a translation and, moreover, the translated interval $f(I_i)$ is contained in another element of the partition. Since the same applies to the inverse $f^{-1}$, it follows that $f(I_i)$ actually coincides with some element of the partition. Hence $f$ permutes the intervals $I_1,I_2,\dots,I_m$ by translation. Therefore these intervals can be relabeled as $J_{ij}$, $1\le i\le l$, $1\le j\le k_i$ ($l$ and $k_1,\dots,k_l$ are some positive integers), so that $f$ translates each $J_{ij}$ onto $J_{i,j+1}$ if $j<k_i$ and onto $J_{i1}$ if $j=k_i$. For any $i\in\{1,2,\dots,l\}$ let $g_i$ be an interval exchange transformation that coincides with $f$ on the union of intervals $J_{ij}$, $1\le j\le k_i$, and fixes the rest of $I$. It is easy to observe that the transformations $g_1,\dots,g_l$ commute and $f=g_1g_2\dots g_l$. 
By the above each $g_i$ can be represented as a product of interval swap maps. Hence $f$ is a product of interval swap maps as well. \end{proof} \begin{lemma}\label{elem3} Any interval swap map is a commutator of two interval exchange transformations of order $2$. \end{lemma} \begin{proof} Let $f$ be an interval swap map of type $a$. Let $I_1=[x,x+a)$ and $I_2=[y,y+a)$ be nonoverlapping intervals interchanged by $f$. We split the interval $I_1$ into two subintervals $I_{11}=[x,x+a/2)$ and $I_{12}=[x+a/2,x+a)$. Similarly, $I_2$ is divided into $I_{21}=[y,y+a/2)$ and $I_{22}=[y+a/2,y+a)$. Now we introduce three interval swap maps of type $a/2$: $g_1$ interchanges $I_{11}$ with $I_{12}$, $g_2$ interchanges $I_{21}$ with $I_{22}$, and $g_3$ interchanges $I_{11}$ with $I_{21}$. The maps $f,g_1,g_2,g_3$ permute the intervals $I_{11},I_{12},I_{21},I_{22}$ by translation and fix the rest of the interval $I$. It is easy to see that $g_1g_2=g_2g_1$. Hence $g=g_1g_2$ is an element of order $2$. Further, we check that $g_3g=g_3g_2g_1$ maps $I_{11}$ onto $I_{12}$, $I_{12}$ onto $I_{21}$, $I_{21}$ onto $I_{22}$, and $I_{22}$ onto $I_{11}$. Therefore the second iteration $(g_3g)^2$ interchanges $I_{11}$ with $I_{21}$ and $I_{12}$ with $I_{22}$, which is exactly how $f$ acts. Thus $f=(g_3g)^2 =g_3^{-1}g^{-1}g_3g$. \end{proof} For the next two constructions, we need another definition. The {\em support\/} of an interval exchange transformation $f$ is the set of all points in $I$ moved by $f$. It is the union of a finite number of (half-closed) intervals. For instance, the support of a restricted rotation of type $(a,b)$ is a single interval of length $a+b$. The support of an interval swap map of type $a$ is the union of two nonoverlapping intervals of length $a$. Note that any interval swap map is uniquely determined by its type and support. The same holds true for any restricted rotation. \begin{lemma}\label{elem4} Let $f_1$ and $f_2$ be interval swap maps of the same type. 
If the supports of $f_1$ and $f_2$ do not overlap then there exists an interval exchange transformation $g$ of order $2$ such that $f_2=gf_1g$. \end{lemma} \begin{proof} Let $a$ be the type of $f_1$ and $f_2$. Let $I_1$ and $I'_1$ be nonoverlapping intervals of length $a$ interchanged by $f_1$. Let $I_2$ and $I'_2$ be nonoverlapping intervals of length $a$ interchanged by $f_2$. Assume that the supports of $f_1$ and $f_2$ do not overlap, i.e., the intervals $I_1,I'_1,I_2,I'_2$ do not overlap with each other. Let us introduce two more interval swap maps of type $a$: $g_1$ interchanges $I_1$ with $I_2$ and $g_2$ interchanges $I'_1$ with $I'_2$. Since the supports of $g_1$ and $g_2$ do not overlap, the transformations commute. Hence the product $g=g_1g_2$ is an element of order $2$. The maps $f_1$, $f_2$, and $g$ permute the intervals $I_1,I'_1,I_2,I'_2$ by translation and fix the rest of the interval $I$. One easily checks that $f_2=gf_1g$. \end{proof} \begin{lemma}\label{elem5} Let $f_1$ and $f_2$ be restricted rotations of the same type. If the supports of $f_1$ and $f_2$ do not overlap then $f_1^{-1}f_2$ is the product of three interval swap maps. \end{lemma} \begin{proof} Let $(a,b)$ be the type of $f_1$ and $f_2$. Let $I_1=[x,x+a+b)$ be the support of $f_1$ and $I_2=[y,y+a+b)$ be the support of $f_2$. The transformation $f_2$ translates the interval $I_{21}=[y,y+a)$ by $b$ and the interval $I_{22}=[y+a,y+a+b)$ by $-a$. The inverse $f_1^{-1}$ is a restricted rotation of type $(b,a)$ with the same support as $f_1$. It translates the interval $I_{11}=[x,x+b)$ by $a$ and the interval $I_{12}=[x+b,x+a+b)$ by $-b$. Assume that the supports $I_1$ and $I_2$ do not overlap. Let $g_1$ be the interval swap map of type $a$ that interchanges the intervals $I_{12}$ and $I_{21}$, let $g_2$ be the interval swap map of type $b$ that interchanges $I_{11}$ and $I_{22}$, and let $g_3$ be the interval swap map of type $a+b$ that interchanges $I_1$ and $I_2$. 
It is easy to check that $f_1^{-1}f_2=g_3g_2g_1=g_3g_1g_2$ (see Figure \ref{fig3}). \end{proof} \begin{lemma}\label{elem6} Suppose $f$ is a restricted rotation of type $(a,b)$, where $a>b$. Then there exist interval swap maps $g_1$ and $g_2$ such that $g_1f=fg_2$ is a restricted rotation of type $(a-b,b)$. \end{lemma} \begin{figure}[t] \centerline{\includegraphics[scale=1]{iet-proof.eps}} \caption{Proof of Lemma \ref{elem5}.}\label{fig3} \end{figure} \begin{proof} Let $I_0=[x,x+a+b)$ be the support of $f$. We define three more transformations with supports inside $I_0$: $g_1$ is an interval swap map of type $b$ that interchanges the intervals $[x,x+b)$ and $[x+a,x+a+b)$, $g_2$ is an interval swap map of type $b$ that interchanges $[x+a-b,x+a)$ and $[x+a,x+a+b)$, and $h$ is a restricted rotation of type $(a-b,b)$ with support $[x,x+a)$. Let us check that $g_1h=f$. Since $a>b$, the points $x+a-b$ and $x+a$ divide $I_0$ into three subintervals $I_1=[x,x+a-b)$, $I_2=[x+a-b,x+a)$, and $I_3=[x+a,x+a+b)$. The map $h$ translates $I_1$ by $b$, $I_2$ by $b-a$, and fixes $I_3$. Then the map $g_1$ translates $I_3$ by $-a$, $[x,x+b)=h(I_2)$ by $a$, and fixes $[x+b,x+a)=h(I_1)$. Therefore the product $g_1h$ translates $I_1$ by $b$, $I_2$ by $b$, and $I_3$ by $-a$. This is exactly how $f$ acts. Similarly, we check that $f=hg_2$. It remains to notice that $g_1f=g_1^2h=h=hg_2^2=fg_2$. \end{proof} \begin{lemma}\label{elem7} Let $f$ be a nontrivial interval exchange transformation. Then there exist $\eps_0>0$ and, for any $0<\eps<\eps_0$, interval swap maps $g_1,g_2$ such that $g_2f^{-1}g_1fg_1g_2$ is an interval swap map of type $\eps$. \end{lemma} \begin{proof} Since $f$ is not the identity, we can find an interval $J=[x,y)$ such that $f$ translates $J$ by some $t\ne0$. Let $\eps_0=\min(y-x,|t|)$. Given any $\eps$, $0<\eps<\eps_0$, we introduce two intervals $I_0=[x,x+\eps)$ and $I_1=[x+t,x+t+\eps)$. By construction, $I_0$ and $I_1$ do not overlap. 
Besides, $f$ translates $I_0$ onto $I_1$. Let $g_0$ be an interval swap map of type $\eps/2$ that interchanges two halves $I_{01}=[x,x+\eps/2)$ and $I_{02}=[x+\eps/2,x+\eps)$ of the interval $I_0$. Let $g_1$ be an interval swap map of type $\eps/2$ that interchanges two halves $I_{11}=[x+t,x+t+\eps/2)$ and $I_{12}=[x+t+\eps/2,x+t+\eps)$ of $I_1$. Since $f$ translates $I_0$ onto $I_1$, it follows that $g_0=f^{-1}g_1f$. Further, let $g_2$ be an interval swap map of type $\eps/2$ that interchanges $I_{02}$ with $I_{11}$. The maps $g_0,g_1,g_2$ permute the nonoverlapping intervals $I_{01},I_{02},I_{11},I_{12}$ by translation and fix the rest of the interval $I$. It is easy to check that $g_2g_0g_1g_2 =g_2f^{-1}g_1fg_1g_2$ is an interval swap map of type $\eps$ that interchanges $I_0$ with $I_1$. \end{proof} \section{Scissors congruence invariant}\label{inv} Let us recall the construction of the tensor product. Suppose $V$ and $W$ are vector spaces over a field $F$. Let $Z(V,W)$ be a vector space over $F$ with basis $\{z[v,w]\}_{(v,w)\in V\times W}$. Let $Y(V,W)$ denote the subspace of $Z(V,W)$ spanned by all vectors of the form $z[v_1+v_2,w]-z[v_1,w]-z[v_2,w]$, $z[v,w_1+w_2]-z[v,w_1]-z[v,w_2]$, $z[\alpha v,w]-\alpha z[v,w]$, and $z[v,\alpha w]-\alpha z[v,w]$, where $v,v_1,v_2\in V$, $w,w_1,w_2\in W$, and $\alpha\in F$. The {\em tensor product\/} of the spaces $V$ and $W$ over the field $F$, denoted $V\otimes_F W$, is the quotient of the vector space $Z(V,W)$ by $Y(V,W)$. For any $v\in V$ and $w\in W$ the coset $z[v,w]+Y(V,W)$ is denoted $v\otimes w$. By construction, $(v,w)\mapsto v\otimes w$ is a bilinear mapping on $V\times W$. In the case $V=W$, for any vectors $v,w\in V$ we define the {\em wedge product\/} $v\wedge w=v\otimes w-w\otimes v$. The subspace of $V\otimes_F V$ spanned by all wedge products is denoted $V\wedge_F V$. By construction, $(v,w)\mapsto v\wedge w$ is a bilinear, skew-symmetric mapping on $V\times V$. 
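As an aside (our own illustration, not part of the original text), the wedge product admits a concrete finite-dimensional model: take the $\bQ$-span of $1$ and an irrational number $\alpha$, record a vector by its pair of rational coordinates, and represent a tensor by the $2\times2$ outer-product matrix of coordinates. The function names below are hypothetical.

```python
from fractions import Fraction as Q

# Model the Q-span of 1 and an irrational alpha inside R: the number
# q0 + q1*alpha is recorded as the coordinate pair (q0, q1).  A tensor
# restricted to this span is then a 2x2 rational matrix, with v (x) w
# represented by the outer product of the coordinate pairs.

def tensor(v, w):
    """Matrix of v (x) w in the basis (1, alpha)."""
    return [[v[i] * w[j] for j in range(2)] for i in range(2)]

def wedge(v, w):
    """Matrix of v ^ w = v (x) w - w (x) v."""
    vw, wv = tensor(v, w), tensor(w, v)
    return [[vw[i][j] - wv[i][j] for j in range(2)] for i in range(2)]

one = (Q(1), Q(0))      # the number 1
alpha = (Q(0), Q(1))    # the number alpha

# 1 ^ alpha is represented by the antisymmetric matrix [[0, 1], [-1, 0]],
# which is nonzero, while v ^ v vanishes for every v.
```

The matrix of $1\wedge\alpha$ is nonzero precisely because $1$ and $\alpha$ are linearly independent over $\bQ$, which is the phenomenon formalized by Lemma \ref{inv1} below.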
\begin{lemma}\label{inv1} Suppose $V$ is a vector space over a field $F$ and $v_1,v_2,\dots,v_k\in V$ are linearly independent vectors. Then the wedge products $v_i\wedge v_j$, $1\le i<j\le k$, are linearly independent in $V\wedge_F V$. \end{lemma} \begin{proof} For any bilinear function $\om:V\times V\to F$ let $\tilde\om$ denote the unique linear function on $Z(V,V)$ such that $\tilde\om(z[v,w])=\om(v,w)$ for all $v,w\in V$. Since $\om$ is bilinear, the function $\tilde\om$ vanishes on the subspace $Y(V,V)$. Hence it gives rise to a linear function $\hat\om:V\otimes_F V\to F$. By construction, $\hat\om(v\otimes w)=\om(v,w)$ for all $v,w\in V$. Let us extend the set $\{v_1,v_2,\dots,v_k\}$ to a basis $S$ for the vector space $V$. For any $l,m\in\{1,2,\dots,k\}$ we denote by $\om_{lm}$ the unique bilinear function on $V\times V$ such that $\om_{lm}(v,w)=1$ if $(v,w)=(v_l,v_m)$ and $\om_{lm}(v,w)=0$ for any other pair $(v,w)\in S\times S$. The function $\om_{lm}$ gives rise to a linear function $\hat\om_{lm}$ on $V\otimes_F V$ as described above. For any $i,j\in\{1,2,\dots,k\}$, $i\ne j$, we have $\hat\om_{lm}(v_i\wedge v_j)=1$ if $i=l$ and $j=m$, $\hat\om_{lm}(v_i\wedge v_j)=-1$ if $i=m$ and $j=l$, and $\hat\om_{lm}(v_i\wedge v_j)=0$ otherwise. Consider an arbitrary linear combination $$ \xi=\sum\nolimits_{1\le i<j\le k}r_{ij}(v_i\wedge v_j) $$ with coefficients $r_{ij}$ from $F$. It is easy to observe that $\hat\om_{lm}(\xi)=r_{lm}$ for any $1\le l<m\le k$. Therefore $\xi\ne0$ unless all $r_{ij}$ are zero. Thus the wedge products $v_i\wedge v_j$, $1\le i<j\le k$, are linearly independent over $F$. \end{proof} Let $f$ be an interval exchange transformation of an interval $I=[p,q)$. Consider an arbitrary partition of $I$ into subintervals, $I=I_1\cup I_2 \cup\ldots\cup I_k$, such that the restriction of $f$ to any $I_i$ is a translation by some $t_i$. Let $\la_i$ be the length of $I_i$, $1\le i\le k$. 
The {\em scissors congruence invariant}, also known as the {\em Sah-Arnoux-Fathi (SAF) invariant}, of $f$ is $$ \SAF(f)=\la_1\otimes t_1+\la_2\otimes t_2+\cdots+\la_k\otimes t_k $$ regarded as an element of the tensor product $\bR\otimes_{\bQ}\bR$. One can easily check that $\SAF(f)=a\wedge b$ for any restricted rotation $f$ of type $(a,b)$ and $\SAF(g)=0$ for any interval swap map $g$. The term `scissors congruence invariant' is partially explained by the following lemma. \begin{lemma}\label{inv2} The scissors congruence invariant $\SAF(f)$ of an interval exchange transformation $f$ does not depend on the combinatorial description of $f$. \end{lemma} \begin{proof} Let $I=I_1\cup\ldots\cup I_k$ be a partition of the interval $I$ into subintervals such that the restriction of $f$ to any $I_i$ is a translation by some $t_i$. Let $I=I'_1\cup\ldots\cup I'_m$ be another partition into subintervals such that the restriction of $f$ to any $I'_j$ is a translation by some $t'_j$. Let $\la_i$ denote the length of $I_i$ ($1\le i\le k$) and $\la'_j$ denote the length of $I'_j$ ($1\le j\le m$). We have to show that $\xi=\la_1\otimes t_1+\cdots+\la_k\otimes t_k$ coincides with $\xi'=\la'_1\otimes t'_1+\cdots+\la'_m\otimes t'_m$ in $\bR\otimes_{\bQ}\bR$. For any $1\le i\le k$ and $1\le j\le m$ the intersection $I_i\cap I'_j$ is either an interval or the empty set. We let $\mu_{ij}$ be the length of the interval in the former case and $\mu_{ij}=0$ otherwise. Further, let $$ \eta=\sum\nolimits_{i=1}^k\sum\nolimits_{j=1}^m \mu_{ij}\otimes t_i, \qquad \eta'=\sum\nolimits_{i=1}^k\sum\nolimits_{j=1}^m \mu_{ij}\otimes t'_j. $$ Clearly, $t_i=t'_j$ whenever $I_i\cap I'_j$ is an interval. Otherwise $\mu_{ij}=0$ and $0\otimes t_i=0=0\otimes t'_j$. In any case, $\mu_{ij}\otimes t_i=\mu_{ij}\otimes t'_j$. Therefore $\eta=\eta'$. For any $i\in\{1,2,\dots,k\}$, nonempty intersections $I_i\cap I'_j$, $1\le j\le m$, form a partition of the interval $I_i$ into subintervals. 
Hence $\la_i=\mu_{i1}+\mu_{i2}+\dots+\mu_{im}$. It follows that $\eta=\xi$. Similarly, we obtain that $\eta'=\xi'$. Thus $\xi=\eta=\eta'=\xi'$. \end{proof} In view of Lemma \ref{inv2}, for any interval $I=[p,q)$ we can consider the invariant $\SAF$ as a function on $\cG_I$, the set of all interval exchange transformations of $I$. \begin{lemma}\label{inv3} The scissors congruence invariant $\SAF$ is a homomorphism of the group $\cG_I$ to $\bR\otimes_{\bQ}\bR$. \end{lemma} \begin{proof} Consider arbitrary interval exchange transformations $f$ and $g$ of the interval $I$. We have to show that $\SAF(fg)=\SAF(f)+\SAF(g)$. Let $I=I_1\cup I_2\cup\ldots\cup I_k$ be a partition of $I$ into subintervals such that the restrictions of both $g$ and $fg$ to any $I_i$ are translations by some $t_i$ and $t'_i$, respectively. Let $\la_i$ be the length of $I_i$, $1\le i\le k$. Then \begin{eqnarray*} \SAF(g) &=& \la_1\otimes t_1+\la_2\otimes t_2+\cdots+\la_k\otimes t_k,\\ \SAF(fg) &=& \la_1\otimes t'_1+\la_2\otimes t'_2+\cdots+\la_k\otimes t'_k. \end{eqnarray*} It is easy to see that for any $1\le i\le k$ the image $g(I_i)$ is an interval of length $\la_i$ and the restriction of $f$ to $g(I_i)$ is the translation by $t'_i-t_i$. Besides, the intervals $g(I_1),g(I_2),\dots, g(I_k)$ form another partition of $I$. It follows that $$ \SAF(f)=\la_1\otimes(t'_1-t_1)+\la_2\otimes(t'_2-t_2)+\cdots +\la_k\otimes(t'_k-t_k). $$ Since $\la_i\otimes(t'_i-t_i)+\la_i\otimes t_i=\la_i\otimes t'_i$ for all $1\le i\le k$, we obtain that $\SAF(fg)=\SAF(f)+\SAF(g)$. \end{proof} In the remainder of this section we show that $\SAF$ is actually a homomorphism of $\cG_I$ onto $\bR\wedge_{\bQ}\bR$. \begin{lemma}\label{inv4} For any $a,b,\eps>0$ there exist pairs of positive numbers $(a_1,b_1)$, $(a_2,b_2),\dots,(a_n,b_n)$ such that \begin{itemize} \item $(a_1,b_1)=(a,b)$, \item $(a_{i+1},b_{i+1})=(a_i-b_i,b_i)$ or $(a_{i+1},b_{i+1})=(a_i,b_i-a_i)$ for $1\le i\le n-1$, \item $a_n+b_n<\eps$ or $a_n=b_n$. 
\end{itemize} \end{lemma} \begin{proof} We define a finite or infinite sequence of pairs inductively. First of all, $(a_1,b_1)=(a,b)$. Further, assume that the pair $(a_i,b_i)$ is defined for some positive integer $i$. If $a_i=b_i$ then this is the last pair in the sequence. Otherwise we let $(a_{i+1},b_{i+1})=(a_i-b_i,b_i)$ if $a_i>b_i$ and $(a_{i+1},b_{i+1})=(a_i,b_i-a_i)$ if $a_i<b_i$. Since $a,b>0$, it follows by induction that $a_i,b_i>0$ for all $i$. If the sequence $(a_1,b_1),(a_2,b_2),\dots$ is finite and contains $n$ pairs, then $a_n=b_n$ and we are done. If the sequence is infinite, it is enough to show that $a_n+b_n\to0$ as $n\to\infty$. For any positive integer $n$ let $c_n=\min(a_n,b_n)$. Since $a_1,a_2,\dots$ and $b_1,b_2,\dots$ are nonincreasing sequences of positive numbers, so is the sequence $c_1,c_2,\dots$. By construction, $a_{i+1}+b_{i+1}=(a_i+b_i)-c_i$ for all $i$. It follows that the series $c_1+c_2+\cdots$ is convergent. In particular, $c_n\to0$ as $n\to\infty$. Note that if $c_{i+1}<c_i$ for some $i$, then $c_i=\max(a_{i+1},b_{i+1})$ so that $a_{i+1}+b_{i+1} =c_i+c_{i+1}$. This implies $a_n+b_n\to0$ as $n\to\infty$. \end{proof} \begin{lemma}\label{inv5} For any $a,b\in\bR$ and $\eps>0$ there exist $a_0,b_0>0$, $a_0+b_0<\eps$, such that $a\wedge b=a_0\wedge b_0$ in $\bR\wedge_{\bQ}\bR$. \end{lemma} \begin{proof} Note that $c\wedge c=0$ for all $c\in\bR$. Therefore in the case $a\wedge b=0$ it is enough to take $a_0=b_0=c$, where $0<c<\eps/2$. Now assume that $a\wedge b\ne0$. Clearly, in this case $a$ and $b$ are nonzero. Since $(-a)\wedge(-b)=a\wedge b$ and $(-a)\wedge b=a\wedge(-b) =b\wedge a$ for all $a,b\in\bR$, it is no loss to assume that $a$ and $b$ are positive. By Lemma \ref{inv4}, there exist pairs of positive numbers $(a_1,b_1)=(a,b)$, $(a_2,b_2),\dots,(a_n,b_n)$ such that $(a_{i+1},b_{i+1})=(a_i-b_i,b_i)$ or $(a_{i+1},b_{i+1})=(a_i,b_i-a_i)$ for $1\le i\le n-1$, and also $a_n+b_n<\eps$ or $a_n=b_n$. 
Since $(a'-b')\wedge b'=a'\wedge b'-b'\wedge b'=a'\wedge b'$ and $a'\wedge(b'-a')=a'\wedge b'-a'\wedge a'=a'\wedge b'$ for all $a',b'\in\bR$, it follows by induction that $a_i\wedge b_i=a\wedge b$, $i=1,2,\dots,n$. Then $a_n\ne b_n$ as $a_n\wedge b_n=a\wedge b\ne0$. Thus $a_n+b_n<\eps$. \end{proof} \begin{lemma}\label{inv6} An element $\xi\in\bR\otimes_{\bQ}\bR$ is the scissors congruence invariant of some interval exchange transformation in $\cG_I$ if and only if $\xi\in\bR\wedge_{\bQ}\bR$. \end{lemma} \begin{proof} As already mentioned before, the SAF invariant of a restricted rotation of type $(a,b)$ is $a\wedge b$. By Lemma \ref{elem1}, any $f\in\cG_I$ is a product of restricted rotations. Since $\SAF$ is a homomorphism of the group $\cG_I$ due to Lemma \ref{inv3}, we obtain that $\SAF(f)$ is a finite sum of wedge products. Hence $\SAF(f)\in\bR\wedge_{\bQ}\bR$. Let $l$ denote the length of the interval $I$. By Lemma \ref{inv5}, for any $a,b\in\bR$ one can find $a_0,b_0>0$, $a_0+b_0<l$, such that $a\wedge b =a_0\wedge b_0$. By the choice of $a_0$ and $b_0$, the group $\cG_I$ contains a restricted rotation of type $(a_0,b_0)$. It follows that any wedge product in $\bR\wedge_{\bQ}\bR$ is the SAF invariant of some interval exchange transformation in $\cG_I$. Since $\SAF$ is a homomorphism of $\cG_I$, any sum of wedge products is also the SAF invariant of some $f\in\cG_I$. Any $\xi\in\bR\wedge_{\bQ}\bR$ is a linear combination of wedge products with rational coefficients. Since $r(a\wedge b)=(ra)\wedge b$ for all $a,b\in\bR$ and $r\in\bQ$, the element $\xi$ can also be represented as a sum of wedge products. By the above, $\xi=\SAF(f)$ for some $f\in\cG_I$. \end{proof} \section{Commutator group}\label{comm} We begin this section with a technical lemma that will be used in the proof of the principal Lemma \ref{comm2} below. \begin{lemma}\label{comm1} Suppose $L_1,L_2,\dots,L_k$ are positive numbers. 
Then there exist positive numbers $l_1,l_2,\dots,l_m$ linearly independent over $\bQ$ such that each $L_i$ is a linear combination of $l_1,l_2,\dots,l_m$ with nonnegative integer coefficients. \end{lemma} \begin{proof} The proof is by induction on the number $k$ of the reals $L_1,L_2,\dots,L_k$. The case $k=1$ is trivial. Now assume that $k>1$ and the lemma holds for the numbers $L_1,L_2,\dots,L_{k-1}$. That is, there exist positive numbers $l_1,l_2,\dots,l_m$ linearly independent over $\bQ$ such that each $L_i$, $1\le i<k$ is a linear combination of $l_1,l_2,\dots,l_m$ with nonnegative integer coefficients. If the reals $l_1,\dots,l_m$ and $L_k$ are linearly independent over $\bQ$, then we are done. Otherwise $L_k$ is a linear combination of $l_1,\dots,l_m$ with rational coefficients. Let us separate positive and negative terms in this linear combination: $L_k=a_1l_{i_1}+\cdots+a_sl_{i_s}-(b_1l_{j_1}+\cdots+ b_pl_{j_p})$, where $a_{i_t},b_{j_t}$ are positive rationals and the indices $i_1,\dots,i_s,j_1,\dots,j_p$ are all distinct. It is possible that there is no negative term at all. Since $l_1,\dots,l_m$ and $L_k$ are positive numbers, we can find positive rationals $r_1,\dots,r_s$ such that $r_1+\cdots+r_s=1$ and $l'_{i_t}=a_tl_{i_t}-r_t(b_1l_{j_1}+\cdots+ b_pl_{j_p})$ is positive for $1\le t\le s$. Let $l'_i=l_i$ for any $1\le i\le m$ different from $i_1,\dots,i_s$. Then $l'_1,\dots,l'_m$ are positive numbers linearly independent over $\bQ$. By construction, $L_k=l'_{i_1}+\cdots+l'_{i_s}$ and $l_{i_t}=a_t^{-1}l'_{i_t} +a_t^{-1}r_t(b_1l'_{j_1}+\cdots+b_pl'_{j_p})$ for $1\le t\le s$. Therefore each of the numbers $l_1,\dots,l_m$ and $L_k$ is a linear combination of $l'_1,\dots,l'_m$ with nonnegative rational coefficients. It follows that each of the numbers $L_1,L_2,\dots,L_k$ is also a linear combination of $l'_1,\dots,l'_m$ with nonnegative rational coefficients. 
Then there exists a positive integer $N$ such that each $L_i$ is a linear combination of $l'_1/N,\dots,l'_m/N$ with nonnegative integer coefficients. \end{proof} Let us call a product of restricted rotations {\em balanced\/} if for any $a,b>0$ the number of factors of type $(a,b)$ in this product matches the number of factors of type $(b,a)$. \begin{lemma}\label{comm2} Any interval exchange transformation with zero SAF invariant can be represented as a balanced product of restricted rotations. \end{lemma} \begin{proof} Consider an arbitrary interval exchange transformation $f$ of an interval $I$. If $f$ is the identity, then for any restricted rotation $h$ on $I$ we have $f=hh^{-1}$, which is a balanced product of restricted rotations. Now assume $f$ is not the identity. Let $I=I_1\cup\ldots\cup I_k$ be a partition of $I$ into subintervals such that the restriction of $f$ to any $I_i$ is a translation. Note that $k\ge2$. Let $L_1,L_2,\dots,L_k$ be the lengths of the intervals $I_1,I_2,\dots,I_k$. By Lemma \ref{comm1}, one can find positive numbers $l_1,l_2,\dots,l_m$ linearly independent over $\bQ$ such that each $L_i$ is a linear combination of $l_1,l_2,\dots,l_m$ with nonnegative integer coefficients. Then each $I_i$ can be partitioned into smaller intervals with lengths in the set $\cL=\{l_1,l_2,\dots,l_m\}$. Clearly, the restriction of $f$ to any of the smaller intervals is a translation, hence Lemma \ref{elem1} applies here. We obtain that $f$ can be represented as a product of restricted rotations, $f=f_1f_2\dots f_n$, such that the type $(a,b)$ of any factor satisfies $a,b\in\cL$. For any $i,j\in\{1,2,\dots,m\}$ let $s_{ij}$ denote the number of factors of type $(l_i,l_j)$ in this product. Then $$ \SAF(f)=\sum\nolimits_{i=1}^n \SAF(f_i)=\sum\nolimits_{i=1}^m \sum\nolimits_{j=1}^m s_{ij}(l_i\wedge l_j) =\sum\nolimits_{1\le i<j\le m}(s_{ij}-s_{ji})(l_i\wedge l_j). 
$$ Since the numbers $l_1,\dots,l_m$ are linearly independent over $\bQ$, it follows from Lemma \ref{inv1} that the wedge products $l_i\wedge l_j$, $1\le i<j\le m$, are linearly independent in $\bR\wedge_{\bQ}\bR$. Therefore $\SAF(f)=0$ only if $s_{ij}=s_{ji}$ for all $i,j$, $i<j$. Then $s_{ij}=s_{ji}$ for all $i,j\in\{1,2,\dots,m\}$, which means that the product $f_1f_2\dots f_n$ is balanced. \end{proof} The next lemma is an extension of Lemma \ref{elem6} that will be used in the proofs of Lemmas \ref{comm4} and \ref{comm5} below. \begin{lemma}\label{comm3} Given $a,b,\eps>0$, there exist $a_0,b_0>0$, $a_0+b_0<\eps$, such that any restricted rotation $f$ of type $(a,b)$ can be represented as $f=hg$, where $h$ is a restricted rotation of type $(a_0,b_0)$ and $g$ is a product of interval swap maps. \end{lemma} \begin{proof} Consider an arbitrary restricted rotation $f$ of type $(a',b')$, where $a'\ne b'$. If $a'>b'$ then Lemma \ref{elem6} implies that $f=hg$, where $h$ is a restricted rotation of type $(a'-b',b')$ and $g$ is an interval swap map. In the case $a'<b'$, we observe that the inverse map $f^{-1}$ is a restricted rotation of type $(b',a')$. The same Lemma \ref{elem6} implies that $f^{-1}=\tilde g\tilde h$, where $\tilde h$ is a restricted rotation of type $(b'-a',a')$ and $\tilde g$ is an interval swap map. Note that $f=\tilde h^{-1}\tilde g^{-1}=\tilde h^{-1}\tilde g$ and $\tilde h^{-1}$ is a restricted rotation of type $(a',b'-a')$. By Lemma \ref{inv4}, there exist pairs of positive numbers $(a_1,b_1),\dots,(a_n,b_n)$ such that \begin{itemize} \item $(a_1,b_1)=(a,b)$, \item $(a_{i+1},b_{i+1})=(a_i-b_i,b_i)$ or $(a_{i+1},b_{i+1})=(a_i,b_i-a_i)$ for $1\le i\le n-1$, \item $a_n+b_n<\eps$ or $a_n=b_n$. \end{itemize} Clearly, $a_i\ne b_i$ for $1\le i<n$. 
By induction, it follows from the above that there exist interval exchange transformations $f_1=f,f_2,\dots,f_n$ and $g_2,\dots,g_n$ such that $f_i$ is a restricted rotation of type $(a_i,b_i)$, $g_i$ is an interval swap map, and $f_{i-1}=f_ig_i$ for $2\le i\le n$. We have $f=f_ng$, where $g=g_ng_{n-1} \dots g_2$ is a product of interval swap maps. If $a_n+b_n<\eps$ then we are done. Otherwise $a_n=b_n$ so that $f_n$ itself is an interval swap map, hence $f$ is a product of interval swap maps. In this case, take an arbitrary restricted rotation $h$ of type $(a_0,b_0)$, where $a_0=b_0<\eps/2$. Since $h$ is also an interval swap map, we obtain $f=h(hf)$, where $hf$ is a product of interval swap maps. \end{proof} \begin{lemma}\label{comm4} Let $f_1$ and $f_2$ be restricted rotations of the same type. Then $f_1^{-1}f_2$ is a product of interval swap maps. \end{lemma} \begin{proof} The lemma has already been proved in one case. If the supports of $f_1$ and $f_2$ do not overlap then $f_1^{-1}f_2$ is the product of three interval swap maps due to Lemma \ref{elem5}. We are going to reduce the general case to this particular one. Let $(a,b)$ be the type of the restricted rotations $f_1$ and $f_2$. First we assume there exists an interval $I_0\subset I$ of length $a+b$ that does not overlap with supports of $f_1$ and $f_2$. Let $f_0$ denote a unique restricted rotation of type $(a,b)$ with support $I_0$. By Lemma \ref{elem5}, both $f_1^{-1}f_0$ and $f_0^{-1}f_2$ are products of three interval swap maps. Hence $f_1^{-1}f_2=(f_1^{-1}f_0)(f_0^{-1}f_2)$ is the product of six interval swap maps. The above assumption always holds in the case when $a+b\le l/5$, where $l$ is the length of $I$. Indeed, let us divide the interval $I$ into $5$ pieces of length $l/5$. Then the support of $f_1$, which is an interval of length $a+b$, overlaps with at most two pieces. The same is true for the support of $f_2$. Therefore we have at least one piece with interior disjoint from both supports. 
This piece clearly contains an interval of length $a+b$. Now consider the general case. It follows from Lemma \ref{comm3} that $f_1=h_1g_1$ and $f_2=h_2g_2$, where $g_1,g_2$ are products of interval swap maps while $h_1,h_2$ are restricted rotations of the same type $(a_0,b_0)$ such that $a_0+b_0<l/5$. Note that $g_1^{-1}$ and $g_2^{-1}$ are also products of interval swap maps. By the above, $h_1^{-1}h_2$ is the product of six interval swap maps. Then $f_1^{-1}f_2= g_1^{-1}(h_1^{-1}h_2)g_2$ is a product of interval swap maps as well. \end{proof} \begin{lemma}\label{comm5} Let $f$ be a restricted rotation and $g$ be an arbitrary interval exchange transformation. Then the commutator $f^{-1}g^{-1}fg$ is a product of interval swap maps. \end{lemma} \begin{proof} Let $(a,b)$ be the type of the restricted rotation $f$ and $J$ be the support of $f$. First assume that the restriction of the transformation $g^{-1}$ to $J$ is a translation. Then $g^{-1}fg$ is also a restricted rotation of type $(a,b)$, with support $g^{-1}(J)$. Therefore $f^{-1}g^{-1}fg$ is a product of interval swap maps due to Lemma \ref{comm4}. In the general case, we choose an interval $I_0\subset I$ such that $g^{-1}$ is a translation when restricted to $I_0$. Let $\eps$ denote the length of $I_0$. According to Lemma \ref{comm3}, we have $f=f_0g_0$, where $g_0$ is a product of interval swap maps and $f_0$ is a restricted rotation of some type $(a_0,b_0)$ such that $a_0+b_0<\eps$. Obviously, $g_0^{-1}$ is also a product of interval swap maps. Since $a_0+b_0<\eps$, there exists a restricted rotation $f_1$ of type $(a_0,b_0)$ with support contained in $I_0$. By the above, the commutator $f_1^{-1}g^{-1}f_1g$ is a product of interval swap maps. By Lemma \ref{comm4}, $f_0^{-1}f_1$ and $f_1^{-1}f_0$ are also products of interval swap maps. Note that $$ f^{-1}g^{-1}fg=g_0^{-1}f_0^{-1}g^{-1}f_0g_0g =g_0^{-1}(f_0^{-1}f_1)(f_1^{-1}g^{-1}f_1g)g^{-1}(f_1^{-1}f_0)g_0g. 
$$ Therefore $f^{-1}g^{-1}fg=g_1g^{-1}g_2g$, where $g_1$ and $g_2$ are products of interval swap maps. Consider an arbitrary factorization $g_2=h_1h_2\dots h_n$ such that each $h_i$ is an interval swap map. Then $g^{-1}g_2g=(g^{-1}h_1g)(g^{-1}h_2g)\dots(g^{-1}h_ng)$. Clearly, each $g^{-1}h_ig$ is an interval exchange transformation of order $2$ and hence a product of interval swap maps due to Lemma \ref{elem2}. It follows that $f^{-1}g^{-1}fg$ can also be represented as a product of interval swap maps. \end{proof} \begin{lemma}\label{comm6} Any balanced product of restricted rotations is also a product of interval swap maps. \end{lemma} \begin{proof} The proof is by strong induction on the number $n$ of factors in a balanced product. Let $f=f_1f_2\dots f_n$ be a balanced product of $n$ restricted rotations and assume that the lemma holds for any balanced product of less than $n$ factors. Let $(a,b)$ be the type of $f_1$. First consider the case $a=b$. In this case, $f_1$ is an interval swap map. If $n=1$ then we are done. Otherwise $f=f_1g$, where $g=f_2\dots f_n$ is a balanced product of $n-1$ restricted rotations. By the inductive assumption, $g$ is a product of interval swap maps, and so is $f$. Now consider the case $a\ne b$. In this case, there is also a factor $f_k$ of type $(b,a)$. Let $g_1$ be the identity if $k=2$ and $g_1=f_2\dots f_{k-1}$ otherwise. Let $g_2$ be the identity if $k=n$ and $g_2=f_{k+1}\dots f_n$ otherwise. We have $$ f=f_1g_1f_kg_2=(f_1f_k)(f_k^{-1}g_1f_kg_1^{-1})(g_1g_2). $$ Since $f_1^{-1}$ is a restricted rotation of type $(b,a)$, it follows from Lemma \ref{comm4} that $f_1f_k=(f_1^{-1})^{-1}f_k$ is a product of interval swap maps. Since $f_k^{-1}g_1f_kg_1^{-1}$ is the commutator of the restricted rotation $f_k$ and the interval exchange transformation $g_1^{-1}$, it is a product of interval swap maps due to Lemma \ref{comm5}. If $n=2$ then $g_1g_2$ is the identity and we are done. 
Otherwise we observe that $g_1g_2$ is a balanced product of $n-2$ restricted rotations. By the inductive assumption, $g_1g_2$ is a product of interval swap maps, and so is $f$. \end{proof} \begin{proofof}{Theorem \ref{main1}} Let $\cG=\cG_I$ be the group of interval exchange transformations of an arbitrary interval $I=[p,q)$. Let $\cG_0$ be the set of all elements in $\cG$ with zero SAF invariant. $\cG_0$ is a normal subgroup of $\cG$ as it is the kernel of the homomorphism $\SAF$ (see Lemma \ref{inv3}). Let $\cG_1$ denote the commutator group of $\cG$, i.e., the subgroup of $\cG$ generated by commutators $f^{-1}g^{-1}fg$, where $f,g\in\cG$. Also, let $\cG_2$ be the subgroup of $\cG$ generated by all elements of order $2$ and $\cG_3$ be the subgroup generated by all elements of finite order. We have to prove that the groups $\cG_0$, $\cG_1$, $\cG_2$, and $\cG_3$ coincide. Since the scissors congruence invariant $\SAF$ is a homomorphism of $\cG$ to an abelian group, it vanishes on every commutator. It follows that $\cG_1\subset\cG_0$. Lemmas \ref{comm2} and \ref{comm6} imply that any element of $\cG_0$ is a product of interval swap maps, which are elements of order $2$. Therefore $\cG_0\subset\cG_2$. The inclusion $\cG_2\subset \cG_3$ is trivial. By Lemma \ref{elem2}, any element of $\cG_3$ is a product of interval swap maps, which are commutators due to Lemma \ref{elem3}. Hence $\cG_3\subset\cG_1$. We conclude that $\cG_0=\cG_1 =\cG_2=\cG_3$. \end{proofof} \begin{proofof}{Theorem \ref{main2}} According to Lemma \ref{inv3}, the SAF invariant $\SAF$, regarded as a function on the group $\cG_I$ of interval exchange transformations of an interval $I$, is a homomorphism to $\bR\otimes_{\bQ}\bR$. Therefore the quotient of $\cG_I$ by the kernel of this homomorphism is isomorphic to its image. By Lemma \ref{inv6}, the image of the homomorphism is $\bR\wedge_{\bQ}\bR$. By Theorem \ref{main1}, the kernel is the commutator group of $\cG_I$. 
\end{proofof} \section{Simplicity}\label{simp} Let $\cG=\cG_I$ be the group of interval exchange transformations of an arbitrary interval $I=[p,q)$. In this section we show that the commutator group $[\cG,\cG]$ of $\cG$ is simple. \begin{lemma}\label{simp1} For any $\eps>0$ the commutator group of $\cG$ is generated by interval swap maps of types less than $\eps$. \end{lemma} \begin{proof} Let $f$ be an arbitrary interval swap map in $\cG$. Denote by $a$ the type of $f$. Let $[x,x+a)$ and $[y,y+a)$ be the nonoverlapping intervals interchanged by $f$. We choose a sufficiently large positive integer $N$ such that $a/N<\eps$. For any $i\in\{1,2,\dots,N\}$ let $f_i$ denote the interval exchange transformation that interchanges intervals $[x+(i-1)a/N,x+ia/N)$ and $[y+(i-1)a/N,y+ia/N)$ by translation while fixing the rest of the interval $I$. It is easy to see that $f=f_1f_2\dots f_N$. Note that each $f_i$ is an interval swap map of type $a/N<\eps$. Let $H_\eps$ be the subgroup of $\cG$ generated by all interval swap maps of types less than $\eps$. By the above, the group $H_\eps$ contains all interval swap maps in $\cG$. In view of Lemma \ref{elem2}, $H_\eps$ coincides with the subgroup of $\cG$ generated by all elements of finite order. By Theorem \ref{main1}, $H_\eps=[\cG,\cG]$. \end{proof} \begin{lemma}\label{simp2} There exists $\eps>0$ such that any two interval swap maps in $\cG$ of the same type $a<\eps$ are conjugate in $[\cG,\cG]$. \end{lemma} \begin{proof} Let $l$ be the length of the interval $I$. Consider arbitrary interval swap maps $f_1,f_2\in\cG$ of the same type $a<l/10$. Let us divide the interval $I$ into $10$ pieces of length $l/10$. The support of $f_1$ is the union of two intervals of length $a$. Since $a<l/10$, each interval of length $a$ overlaps with at most two of the ten pieces. Hence the support of $f_1$ overlaps with at most $4$ pieces. The same is true for the support of $f_2$. 
Therefore we have at least two pieces with interior disjoint from both supports. Clearly, one can find two nonoverlapping intervals $I_1$ and $I_2$ of length $a$ in these pieces. Let $f_0$ be the interval swap map of type $a$ that interchanges $I_1$ and $I_2$ by translation and fixes the rest of $I$. By construction, the support of $f_0$ does not overlap with the supports of $f_1$ and $f_2$. It follows from Lemma \ref{elem4} that $f_1=g_1f_0g_1$ and $f_0=g_2f_2g_2$ for some elements $g_1,g_2\in\cG$ of order $2$. By Theorem \ref{main1}, the commutator group $[\cG,\cG]$ contains all elements of order $2$ in $\cG$. In particular, it contains $f_1$, $f_2$, $g_1$, and $g_2$. Then $g_2g_1\in[\cG,\cG]$ as well. Since $f_1=g_1(g_2f_2g_2)g_1 =(g_2g_1)^{-1}f_2(g_2g_1)$, the elements $f_1$ and $f_2$ are conjugate in $[\cG,\cG]$. \end{proof} \begin{proofof}{Theorem \ref{main3}} Suppose $H$ is a nontrivial normal subgroup of $[\cG,\cG]$. Let $f$ be an arbitrary element of $H$ different from the identity. By Lemma \ref{elem7}, there exist $\eps_1>0$ and, for any $0<\eps<\eps_1$, interval swap maps $g_1,g_2\in\cG$ such that $g_2f^{-1}g_1fg_1g_2$ is an interval swap map of type $\eps$. The interval swap maps $g_1$ and $g_2$ are involutions. They belong to $[\cG,\cG]$ due to Lemma \ref{elem3}. Since $H$ is a normal subgroup of $[\cG,\cG]$ that contains $f$, it also contains the interval exchange transformations $f^{-1}$, $g_1^{-1}fg_1=g_1fg_1$, $f^{-1}g_1fg_1$, and $g_2^{-1}(f^{-1}g_1fg_1)g_2=g_2f^{-1}g_1fg_1g_2$. We obtain that for any $0<\eps<\eps_1$ the subgroup $H$ contains an interval swap map of type $\eps$. By Lemma \ref{simp2}, there exists $\eps_2>0$ such that any two interval swap maps in $\cG$ of the same type $\eps<\eps_2$ are conjugate in $[\cG,\cG]$. It follows that all interval swap maps in $\cG$ of types less than $\min(\eps_1,\eps_2)$ are also in $H$. According to Lemma \ref{simp1}, the commutator group of $\cG$ is generated by these maps. Hence $H=[\cG,\cG]$. 
Thus the only nontrivial normal subgroup of $[\cG,\cG]$ is $[\cG,\cG]$ itself. That is, $[\cG,\cG]$ is a simple group. \end{proofof}
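As an illustration, the subtractive reduction of pairs used in Lemmas \ref{inv4} and \ref{comm3} can be run directly. The following Python sketch (the function name reduce\_pair and the sample inputs are our own illustrative choices) records the pairs $(a_i,b_i)$ until $a_n+b_n<\eps$ or $a_n=b_n$:

```python
def reduce_pair(a, b, eps):
    """Subtractive reduction from Lemma inv4: starting from (a, b),
    replace the pair by (a - b, b) if a > b, or by (a, b - a) if a < b,
    stopping as soon as a == b or a + b < eps.
    Returns the list of pairs produced."""
    pairs = [(a, b)]
    while a != b and a + b >= eps:
        if a > b:
            a -= b
        else:
            b -= a
        pairs.append((a, b))
    return pairs

# For commensurable a and b the process ends with a_n == b_n,
# the analogue of reaching the gcd in the Euclidean algorithm:
# (10,3) -> (7,3) -> (4,3) -> (1,3) -> (1,2) -> (1,1)
assert reduce_pair(10, 3, 1)[-1] == (1, 1)
# With a larger eps the loop stops once a + b drops below eps:
assert sum(reduce_pair(10, 3, 6)[-1]) < 6
```

When the ratio $a/b$ is irrational the pairs never become equal, and the content of Lemma \ref{inv4} is precisely that $a_n+b_n\to0$, so the loop still terminates for any $\eps>0$.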
\section{Introduction} Graph pebbling has its origin in number theory. It is a model for the transportation of resources. Starting with a pebble distribution on the vertices of a simple connected graph, a \emph{pebbling move} removes two pebbles from a vertex and adds one pebble at an adjacent vertex. We can think of the pebbles as fuel containers. The loss of a pebble during a move is then the cost of transportation. A vertex is called \emph{reachable} if a pebble can be moved to that vertex using pebbling moves. There are several questions we can ask about pebbling. How many pebbles will guarantee that every vertex is reachable, or that all vertices are reachable at the same time? How can we place the smallest number of pebbles such that every vertex is reachable? For a comprehensive list of references to the extensive literature, see the survey papers \cite{Hurlbert_survey1,Hurlbert_survey2}. In the current paper we propose the study of an extension of pebbling called \emph{rubbling}. In this version we also allow a move that removes one pebble from each of two distinct vertices $v$ and $w$ adjacent to a vertex $u$, and adds one pebble at $u$. We find rubbling versions of some of the well-known pebbling tools such as the transition digraph, the No Cycle Lemma, squishing and smoothing. We use these tools to find the rubbling number and the optimal rubbling number for some families of graphs including complete graphs, complete bipartite graphs, paths, wheels and cycles. \section{Preliminaries} Let $G$ be a simple graph. We use the notation $V(G)$ for the vertex set and $E(G)$ for the edge set. A \emph{pebble function} on a graph $G$ is a function $p:V(G)\to{\bf Z}$ where $p(v)$ is the number of pebbles placed at $v$. A \emph{pebble distribution} is a nonnegative pebble function. The \emph{size} of a pebble distribution $p$ is the total number of pebbles $\sum_{v\in V(G)}p(v)$. 
We are going to use the notation $p(v_{1},\ldots,v_{n},*)=(a_{1},\ldots,a_{n},q(*))$ to indicate that $p(v_{i})=a_{i}$ for $i\in\{1,\ldots,n\}$ and $p(w)=q(w)$ for all $w\in V(G)\setminus\{ v_{1},\ldots,v_{n}\}$. \begin{defn} Consider a pebble function $p$ on the graph $G$. If $\{ v,u\}\in E(G)$ then the \emph{pebbling move} $(v,v\to u)$ removes two pebbles at vertex $v$ and adds one pebble at vertex $u$ to create a new pebble function\[ p_{(v,v\to u)}(v,u,*)=(p(v)-2,p(u)+1,p(*)).\] If $\{ v,u\},\{ w,u\}\in E(G)$ and $v\not=w$ then the \emph{strict rubbling move} $(v,w\to u)$ removes one pebble each at vertices $v$ and $w$ and adds one pebble at vertex $u$ to create a new pebble function\[ p_{(v,w\to u)}(v,w,u,*)=(p(v)-1,p(w)-1,p(u)+1,p(*)).\] A \emph{rubbling move} is either a pebbling move or a strict rubbling move. \end{defn} Note that the rubbling moves $(v,w\to u)$ and $(w,v\to u)$ are the same. Also note that the resulting pebble function might not be a pebble distribution even if $p$ is. \begin{defn} A \emph{rubbling sequence} is a finite sequence $s=(s_{1},\ldots,s_{k})$ of rubbling moves. The pebble function obtained from the pebble function $p$ after applying the moves in $s$ is denoted by $p_{s}$. \end{defn} The concatenation of the rubbling sequences $r=(r_{1},\ldots,r_{k})$ and $s=(s_{1},\ldots,s_{l})$ is denoted by $rs=(r_{1},\ldots,r_{k},s_{1},\ldots,s_{l})$. \begin{defn} A rubbling sequence $s$ is \emph{executable} from the pebble distribution $p$ if $p_{(s_{1},\ldots,s_{i})}$ is nonnegative for all $i$. A vertex $v$ of $G$ is \emph{reachable} from the pebble distribution $p$ if there is an executable rubbling sequence $s$ such that $p_{s}(v)\ge1$. The \emph{rubbling number} $\rho(G)$ of a graph $G$ is the minimum number $m$ such that every vertex of $G$ is reachable from any pebble distribution of size $m$. \end{defn} In other words, a vertex is reachable if a pebble can be moved to it by rubbling moves performed with actual pebbles, without ever running out of pebbles. 
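These definitions are small enough to simulate. The following Python sketch applies rubbling moves and decides reachability by exhaustive search over the finitely many pebble distributions obtainable from $p$; the three-vertex path, the dictionary encoding, and the function names are our own illustrative choices, not notation from the paper.

```python
from itertools import combinations_with_replacement

def apply_move(p, move, adj):
    """One rubbling move (v, w -> u): remove one pebble each at v and w
    (two pebbles at v when v == w, i.e. a pebbling move) and add one
    pebble at u.  Both v and w must be adjacent to u.  Returns the new
    distribution, or None if the move is not executable from p."""
    v, w, u = move
    if u not in adj[v] or u not in adj[w]:
        return None
    q = dict(p)
    q[v] -= 1
    q[w] -= 1
    q[u] += 1
    return q if min(q.values()) >= 0 else None

def reachable(p, target, adj):
    """Exhaustive search: is `target` reachable from distribution `p`
    by a sequence of executable rubbling moves?  The search terminates
    because every move strictly decreases the total number of pebbles."""
    seen, frontier = set(), [tuple(sorted(p.items()))]
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        q = dict(state)
        if q[target] >= 1:
            return True
        for v, w in combinations_with_replacement(sorted(adj), 2):
            if q[v] < 1 or q[w] < 1 or (v == w and q[v] < 2):
                continue
            for u in adj[v]:  # u must be adjacent to v; apply_move checks w
                nq = apply_move(q, (v, w, u), adj)
                if nq is not None:
                    frontier.append(tuple(sorted(nq.items())))
    return False

# Path a - b - c: with one pebble on each endpoint the middle vertex is
# reachable via the strict rubbling move (a, c -> b), although no
# pebbling move at all is executable from this distribution.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
assert reachable({'a': 1, 'b': 0, 'c': 1}, 'b', adj)
assert not reachable({'a': 1, 'b': 0, 'c': 0}, 'b', adj)
```

The first example is exactly what distinguishes rubbling from pebbling: from $p(a,b,c)=(1,0,1)$ no vertex holds two pebbles, so no pebbling move applies, yet $b$ is reachable by a strict rubbling move.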
Changing the order of moves in an executable rubbling sequence $s$ may result in a sequence $r$ that is no longer executable. On the other hand, the ordering of the moves has no effect on the resulting pebble function, that is, $p_{s}=p_{r}$. This justifies the following definition. \begin{defn} Let $S$ be a multiset of rubbling moves. The pebble function obtained from the pebble function $p$ after applying the moves in $S$ in any order is denoted by $p_{S}$. \end{defn} \section{The transition digraph and the No Cycle Lemma} \begin{defn} Given a multiset $S$ of rubbling moves on $G$, the \emph{transition digraph} $T(G,S)$ is a directed multigraph whose vertex set is $V(G)$, and each move $(v,w\to u)$ in $S$ is represented by two directed edges $(v,u)$ and $(w,u)$. The transition digraph of a rubbling sequence $s=(s_{1},\ldots,s_{n})$ is $T(G,s)=T(G,S)$, where $S=\{ s_{1},\ldots,s_{n}\}$ is the multiset of moves in $s$. Let $d_{T(G,S)}^{-}$ represent the in-degree and $d_{T(G,S)}^{+}$ the out-degree in $T(G,S)$. We simply write $d^{-}$ and $d^{+}$ if the transition digraph is clear from context. \end{defn} The transition digraph only depends on the rubbling moves and the graph but not on the pebble distribution or on the order of the moves. It is possible that $T(G,S)=T(G,R)$ even if $S\not=R$. If $T(G,S)=T(G,R)$ then $p_{S}=p_{R}$, so the effect of a rubbling sequence on a pebble function only depends on the transition digraph. In fact, we have the following. \begin{lem} If $p$ is a pebble function on $G$ and $S$ is a multiset of rubbling moves then\[ p_{S}(v)=p(v)+d^{-}(v)/2-d^{+}(v)\] for all $v\in V(G)$. \end{lem} \begin{proof} The three terms on the right hand side represent the original number of pebbles, the number of pebbles that arrived at $v$, and the number of pebbles moved away from $v$. \end{proof} We are often interested in the value of $q_{R}(v)-p_{S}(v)$. The function $\Delta$ defined in the following lemma is going to simplify our notation. 
The three parameters of $\Delta$ represent the change in the number of pebbles, the change in the in-degree and the change in the out-degree. The proof is a trivial calculation. \begin{lem} Define $\Delta(a,b,c)=a+b/2-c$. Then\[ q_{R}(v)-p_{S}(v)=\Delta(q(v)-p(v),d_{T(G,R)}^{-}(v)-d_{T(G,S)}^{-}(v),d_{T(G,R)}^{+}(v)-d_{T(G,S)}^{+}(v)).\] \end{lem} If the rubbling sequence $s$ is executable from a pebble distribution $p$ then we must have $p_{s}\ge0$. This motivates the following terminology. \begin{defn} A multiset $S$ of rubbling moves on $G$ is \emph{balanced} with a pebble distribution $p$ \emph{at vertex} $v$ if $p_{S}(v)\ge0$. We say $S$ is \emph{balanced} with $p$ if $S$ is balanced with $p$ at all $v\in V(G)$, that is, $p_{S}\ge0$. We say that a rubbling sequence $s$ is balanced with $p$ if the multiset of moves in $s$ is balanced with $p$. \end{defn} $S$ is trivially balanced with a pebble distribution at $v$ if $d_{T(G,S)}^{+}(v)=0$. The balance condition is necessary but not sufficient for a rubbling sequence to be executable. The pebble distribution $p(u,v,w)=(1,1,1)$ on the cycle $C_{3}$ is balanced with $s=((u,u\to v),(v,v\to w),(w,w\to u))$, but $s$ is not executable. The problem is caused by the cycle in the transition digraph. The goal of this section is to overcome this difficulty. \begin{defn} A multiset of rubbling moves or a rubbling sequence is called \emph{acyclic} if the corresponding transition digraph has no directed cycles. Let $S$ be a multiset of rubbling moves. An acyclic multiset $R\subseteq S$ is called an \emph{untangling} of $S$ if $p_{R}\ge p_{S}$. \end{defn} \begin{prop} \label{pro:unfolding}Every multiset of rubbling moves has an untangling. \end{prop} \begin{figure} \begin{center}~\input{cycle.inc}\end{center} \caption{\label{cap:Arrows-representing-moves} Arrows of $T(G,Q)$. The solid arrows belong to $C$. \protect \\ } \end{figure} \begin{proof} Let $S$ be the multiset of rubbling moves.
Suppose that $T(G,S)$ has a directed cycle $C$. Let $Q$ be the multiset of elements of $S$ corresponding to the arrows of $C$; see Figure~\ref{cap:Arrows-representing-moves}. We show that $p_{R}\ge p_{S}$ where $R=S\setminus Q$. If $v\in V(C)$ then there is an $a\le-1$ such that\[ p_{R}(v)-p_{S}(v)=\Delta(0,-2,a)=-1-a\ge0.\] If $v\in V(G)\setminus V(C)$ then there is an $a\le0$ such that\[ p_{R}(v)-p_{S}(v)=\Delta(0,0,a)\ge0.\] We can repeat this process on $R$ until we eliminate all the cycles. This can be finished in finitely many steps since every step decreases the number of edges in $R$. The resulting multiset is an untangling of $S$. \end{proof} Note that a multiset of moves can have several untanglings. Also note that if a pebble distribution $p$ is balanced with $S$ and $R$ is an untangling of $S$ then $p_{R}\ge p_{S}\ge0$ and so $p$ is also balanced with $R$. \begin{lem} \label{lem:sourse}If $p$ is a pebble distribution on $G$ that is balanced with the multiset $S$ of moves and $t=(v,w\to u)\in S$ such that $d^{-}(v)=0=d^{-}(w)$ then $t$ is executable from $p$. \end{lem} \begin{proof} If $v\not=w$ then $p(v)\ge d^{+}(v)\ge1$ and $p(w)\ge d^{+}(w)\ge1$. If $v=w$ then $p(v)\ge d^{+}(v)\ge2$. In both cases $t$ is executable from $p$. \end{proof} \begin{prop} \label{pro:orderability}If the pebble distribution $p$ on $G$ is balanced with the acyclic multiset $S$ of rubbling moves then there is a sequence $s$ of the elements of $S$ such that $s$ is executable from $p$. \end{prop} \begin{proof} We define $s$ recursively. Let $R_{1}=S$. Since $R_{1}$ is acyclic, we must have a move $s_{1}=(v_{1},w_{1}\to u_{1})\in R_{1}$ such that $d_{T(G,R_{1})}^{-}(v_{1})=0=d_{T(G,R_{1})}^{-}(w_{1})$. Then $s_{1}$ is executable from $p$ by Lemma~\ref{lem:sourse}. Let $R_{i}=R_{i-1}\setminus\{ s_{i-1}\}$. Then $R_{i}$ is acyclic so we must have a move $s_{i}=(v_{i},w_{i}\to u_{i})\in R_{i}$ such that $d_{T(G,R_{i})}^{-}(v_{i})=0=d_{T(G,R_{i})}^{-}(w_{i})$.
Then $p_{(s_{1},\ldots,s_{i-1})}$ is balanced with $R_{i}$ since $(p_{(s_{1},\ldots,s_{i-1})})_{R_{i}}=p_{S}\ge0$ and so $s_{i}$ is executable from $p_{(s_{1},\ldots,s_{i-1})}$. The sequence $s=(s_{1},\ldots,s_{|S|})$ is an ordering of the elements of $S$ that is executable from $p$. \end{proof} The following is the rubbling version of the No-Cycle Lemma for pebbling \cite{Betsy,Milans,Moews}. \begin{lem} \emph{(No Cycle)} Let $p$ be a pebble distribution on $G$ and $v\in V(G)$. The following are equivalent. \end{lem} \begin{enumerate} \item $v$ is reachable from $p$. \item There is a multiset $S$ of rubbling moves such that $S$ is balanced with $p$ and $p_{S}(v)\ge1$. \item There is an acyclic multiset $R$ of rubbling moves such that $R$ is balanced with $p$ and $p_{R}(v)\ge1$. \item $v$ is reachable from $p$ through an acyclic rubbling sequence. \end{enumerate} \begin{proof} If $v$ is reachable from $p$ then there is an executable sequence $s$ of rubbling moves. The multiset $S$ of rubbling moves of $s$ is balanced with $p$ and $p_{S}(v)\ge1$. So (1) implies (2). If $S$ satisfies (2) then an untangling $R$ of $S$ satisfies (3). Suppose $R$ satisfies (3). By Proposition~\ref{pro:orderability}, there is an executable ordering $r$ of the moves of $R$. This $r$ is acyclic and $v$ is reachable through $r$ since $p_{r}(v)=p_{R}(v)\ge1$. So (3) implies (4). Finally, (4) clearly implies (1). \end{proof} \begin{cor} \label{cor:no-flip-flop}If a vertex is reachable from a pebble distribution $p$ on $G$ then it is also reachable by a rubbling sequence in which no move of the form $(v,a\to u)$ is followed by a move of the form $(u,b\to v)$. \end{cor} \section{Basic results} It is clear from the definition that for all graphs $G$ we have $\rho(G)\le\pi(G)$ where $\pi$ is the pebbling number. For the pebbling number we have $2^{\text{{\rm diam}}(G)}\le\pi(G)$. This is also true for the rubbling number. To see this we need to find the rubbling number of a path first. 
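For very small graphs the rubbling number can also be computed by brute force straight from the definition, which gives a useful check on the results of this section. The sketch below (helper names and encodings are ours; the search is exponential and only meant for tiny graphs) recovers $\rho(K_{3})=2$ and $\rho(P_{4})=2^{3}=8$, in agreement with the propositions that follow.

```python
from itertools import combinations_with_replacement

def rubbling_successors(adj, p):
    """Distributions obtainable from p by one executable rubbling move."""
    out = set()
    for u, nbrs in enumerate(adj):
        for v in nbrs:                       # pebbling move (v, v -> u)
            if p[v] >= 2:
                q = list(p); q[v] -= 2; q[u] += 1
                out.add(tuple(q))
        for i, v in enumerate(nbrs):         # strict rubbling move (v, w -> u)
            for w in nbrs[i + 1:]:
                if p[v] >= 1 and p[w] >= 1:
                    q = list(p); q[v] -= 1; q[w] -= 1; q[u] += 1
                    out.add(tuple(q))
    return out

def reachable_vertices(adj, p):
    """Vertices on which some executable rubbling sequence places a pebble."""
    seen, stack = {tuple(p)}, [tuple(p)]
    while stack:
        for q in rubbling_successors(adj, stack.pop()):
            if q not in seen:
                seen.add(q)
                stack.append(q)
    return {v for v in range(len(adj)) if any(q[v] >= 1 for q in seen)}

def distributions(n, m):
    """All pebble distributions of size m on vertices 0..n-1."""
    for spots in combinations_with_replacement(range(n), m):
        p = [0] * n
        for v in spots:
            p[v] += 1
        yield tuple(p)

def rubbling_number(adj):
    """Least m such that every size-m distribution reaches every vertex."""
    n, m = len(adj), 1
    while True:
        if all(reachable_vertices(adj, p) == set(range(n))
               for p in distributions(n, m)):
            return m
        m += 1

path = lambda n: [[j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)]
complete = lambda n: [[j for j in range(n) if j != i] for i in range(n)]

print(rubbling_number(complete(3)))  # 2
print(rubbling_number(path(4)))      # 8 = 2^{4-1}
```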
\begin{prop} \label{pro:path}The rubbling number of the path with $n$ vertices is $\rho(P_{n})=2^{n-1}$. \end{prop} \begin{proof} Let $v_{1},\ldots,v_{n}$ be the consecutive vertices of $P_{n}$. Let $p(v_{n},*)=(m,0)$ be a pebble distribution from which $v_{1}$ is reachable through the acyclic rubbling sequence $s$. We show that $m\ge2^{n-1}$. Since $v_{1}$ is reachable and $p(v_{1})=0$, the balance condition at $v_{1}$ implies that $T(G,s)$ has at least 2 arrows from $v_{2}$ to $v_{1}$ and so $d^{+}(v_{2})\ge2$. Since $T(G,s)$ has no cycles, there are no arrows from $v_{1}$ to $v_{2}$. The balance condition at $v_{2}$ now implies that $T(G,s)$ has at least 4 arrows from $v_{3}$ to $v_{2}$ and so $d^{+}(v_{3})\ge2^{2}$. An inductive argument shows that $d^{+}(v_{n})\ge2^{n-1}$ and $d^{-}(v_{n})=0$. The balance condition at $v_{n}$ implies that $m\ge d^{+}(v_{n})\ge2^{n-1}$. This shows that $2^{n-1}\le\rho(P_{n})$. It is known \cite{Hurlbert_survey1} that $\pi(P_{n})=2^{n-1}$. The result now follows from the inequality $2^{n-1}\le\rho(P_{n})\le\pi(P_{n})=2^{n-1}$. \end{proof} \begin{figure} \begin{center}~\input{quotient.inc}\end{center} \caption{\label{cap:quotient}Arrows in $T(G,S)$ representing the possible types of rubbling moves in $E$. The vertices in the same box are equivalent. The solid arrows connect equivalent vertices. The calculation on the left shows the change in $\sum_{i}(\frac{1}{2}d^{-}(v_{i})-d^{+}(v_{i}))$ after the removal of one of the rubbling moves. } \end{figure} \begin{prop} If the graph $G$ has diameter $d$ then $2^{d}\le\rho(G)$. \end{prop} \begin{proof} Let $v_{0}$ and $v_{d}$ be vertices at distance $d$. Let $p(v_{0},*)=(m,0)$ be a pebble distribution from which $v_{d}$ is reachable through the rubbling sequence $s$. We now build a quotient rubbling problem. Let $[v]$ be the equivalence class of $v$ in the partition of the vertices of $G$ according to their distances from $v_{0}$. 
The quotient simple graph $H$ is isomorphic to $P_{d+1}$ with leaves $[v_{0}]=\{ v_{0}\}$ and $[v_{d}]$. Let $q([v])=\sum_{w\in[v]}p(w)$ for all $[v]\in V(H)$ and note that $q([v_{0}],*)=(m,0)$. The rubbling sequence $s$ induces a multiset $R$ of rubbling moves on $H$. We construct this $R$ from the multiset $S$ of rubbling moves of $s$. Let $E$ be the multiset of moves of $S$ of the form $(v,w\to u)$ where $v\in[u]$ or $w\in[u]$. Define $R$ to be the multiset of moves of the form $([v],[w]\to[u])$ where $(v,w\to u)$ runs through the elements of $S\setminus E$. We show that $R$ is balanced with $q$. Figure~\ref{cap:quotient} shows the possible types of moves in $E$. The removal of any of these moves does not decrease the value of $\sum_{v_{i}\in[v]}(\frac{1}{2}d^{-}(v_{i})-d^{+}(v_{i}))$ and so\[ q_{R}([v])=\sum_{v_{i}\in[v]}p_{S\setminus E}(v_{i})\ge\sum_{v_{i}\in[v]}p_{S}(v_{i})\ge0\] since $p$ is balanced with $S$. We also have $q_{R}([v_{d}])\ge1$ since $v_{d}$ is reachable and so $p_{S}(v_{d})\ge1$. Thus $[v_{d}]$ is reachable from $q$ and so the result now follows from Proposition~\ref{pro:path}. \end{proof} For the pebbling number we have $\pi(G)\ge|V(G)|$. This inequality does not hold for the rubbling number as we can see in the next result. \begin{prop} We have the following values for the rubbling number: \emph{a.} $\rho(K_{n})=2$ for $n\ge2$ where $K_{n}$ is the complete graph with $n$ vertices; \emph{b.} $\rho(W_{n})=4$ for $n\ge4$ where $W_{n}$ is the wheel with $n$ spokes; \emph{c.} $\rho(K_{m,n})=4$ for $m,n\ge2$ where $K_{m,n}$ is a complete bipartite graph; \emph{d.} $\rho(Q^{n})=2^{n}$ for $n\ge1$ where $Q^{n}$ is the $n$-dimensional hypercube; \emph{e.} $\rho(G)=2^{s+1}$ where $s$ is the number of vertices in the spine of the caterpillar $G$. \end{prop} \begin{proof} a. A single pebble is clearly not sufficient but any vertex is reachable with two pebbles using a single move. b.
If we have 4 pebbles then we can move 2 pebbles to the center using two moves. Then any other vertex is reachable from the center in a single move. On the other hand $\rho(W_{n})\ge2^{\text{diam}(W_{n})}=2^{2}=4$. c. It is easy to see that from any pebble distribution of size 4 any vertex is reachable in at most 3 moves. On the other hand we have $\rho(K_{m,n})\ge2^{\text{diam}(K_{m,n})}=2^{2}=4$. d. We know \cite{Chung} that $\pi(Q^{n})=2^{n}$. The result now follows from the inequality $2^{n}=2^{\text{diam}(Q^{n})}\le\rho(Q^{n})\le\pi(Q^{n})=2^{n}$. e. The result follows easily from Proposition~\ref{pro:path}. \end{proof} \begin{figure} \begin{center}~\input{petersen.inc}\end{center} \caption{\label{cap:The-Petersen-graph}The Petersen graph $P$.\protect \\ } \end{figure} \begin{prop} The rubbling number of the Petersen graph $P$ is $\rho(P)=5$. \end{prop} \begin{proof} Consider Figure~\ref{cap:The-Petersen-graph}. It is easy to see that vertex $w$ is not reachable from the pebble distribution $p(r,s,*)=(3,1,0)$ and so $\rho(P)>4$. To show that $\rho(P)\le5$, assume that a vertex is not reachable from a pebble distribution $p$ of size 5. Since $P$ is vertex-transitive, we can assume that this vertex is $w$. Then we must have \[ p(a)+p(b)+p(c)+\left\lfloor \frac{p(q)+p(r)}{2}\right\rfloor +\left\lfloor \frac{p(s)+p(t)}{2}\right\rfloor +\left\lfloor \frac{p(u)+p(v)}{2}\right\rfloor \le1,\] otherwise we could make the total number of pebbles at vertices $a$, $b$ and $c$ at least 2, after which $w$ is reachable. This inequality forces $p(a)=p(b)=p(c)=0$ and two of the remaining terms to be 0 as well. So by symmetry we can assume that the last term is 1 and all the other terms are 0. Then we must have $p(u)+p(v)=3$ and $p(q)+p(r)=1=p(s)+p(t)$. A simple case analysis shows that $w$ is reachable from this $p$, which is a contradiction. \end{proof} \section{Squishing} The following terms are needed for the rubbling version of the squishing lemma of \cite{Bunde_optimal}.
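Before developing squishing, the Petersen lower bound above can be verified mechanically with a brute-force reachability search. The sketch below uses the standard labeling (outer cycle $0,\dots,4$, inner pentagram $5,\dots,9$, spokes joining $i$ to $i+5$), which need not match the labels of Figure~\ref{cap:The-Petersen-graph}; vertices $6$ and $3$ play the roles of $r$ and $s$, lying at distance two from the target vertex $0$ through different neighbors. Helper names are ours.

```python
def rubbling_successors(adj, p):
    """Distributions obtainable from p by one executable rubbling move."""
    out = set()
    for u, nbrs in enumerate(adj):
        for v in nbrs:                       # pebbling move (v, v -> u)
            if p[v] >= 2:
                q = list(p); q[v] -= 2; q[u] += 1
                out.add(tuple(q))
        for i, v in enumerate(nbrs):         # strict rubbling move (v, w -> u)
            for w in nbrs[i + 1:]:
                if p[v] >= 1 and p[w] >= 1:
                    q = list(p); q[v] -= 1; q[w] -= 1; q[u] += 1
                    out.add(tuple(q))
    return out

def reachable_vertices(adj, p):
    """Vertices on which some executable rubbling sequence places a pebble."""
    seen, stack = {tuple(p)}, [tuple(p)]
    while stack:
        for q in rubbling_successors(adj, stack.pop()):
            if q not in seen:
                seen.add(q)
                stack.append(q)
    return {v for v in range(len(adj)) if any(q[v] >= 1 for q in seen)}

# Petersen graph: outer 5-cycle 0..4, inner pentagram 5..9, spokes i -- i+5.
petersen = [[] for _ in range(10)]
for i in range(5):
    for a, b in ((i, (i + 1) % 5), (i, i + 5), (i + 5, 5 + (i + 2) % 5)):
        petersen[a].append(b)
        petersen[b].append(a)

# The shape p(r, s, *) = (3, 1, 0): three pebbles and one pebble on two
# vertices at distance two from vertex 0, reached through different neighbors.
p = [0] * 10
p[6], p[3] = 3, 1
print(0 in reachable_vertices(petersen, tuple(p)))  # False

# Four pebbles on one vertex reach everything, since the diameter is 2.
q = [0] * 10
q[0] = 4
print(reachable_vertices(petersen, tuple(q)) == set(range(10)))  # True
```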
A \emph{thread} in a graph is a path in which every vertex has degree 2. A pebble distribution is \emph{squished} on a thread $P$ if all the pebbles on $P$ are placed on a single vertex of $P$ or on two adjacent vertices of $P$. \begin{lem} \label{lem:notype2}Let $P$ be a thread in $G$. If vertex $x\not\in V(P)$ is reachable from the pebble distribution $p$ then $x$ is reachable from $p$ through a rubbling sequence in which there is no strict rubbling move of the form $(v,w\to u)$ where $u\in V(P)$. \end{lem} \begin{proof} Let $S$ be an acyclic multiset of rubbling moves balanced with $p$ such that $p_{S}(x)\ge1$. Let $E$ be the multiset of strict rubbling moves of $S$ of the form $(v,w\to u)$ where $u\in V(P)$. If $e=(v,w\to u)\in E$ then we have $d_{T(G,S\setminus\{ e\})}^{+}(u)=d_{T(G,S)}^{+}(u)=0$ since $S$ is acyclic and so $S\setminus\{ e\}$ is balanced with $p$ at $u$. It is clear that $p_{S\setminus\{ e\}}(y)\ge p_{S}(y)$ for all $y\in V(G)\setminus\{ u\}$ and so $S\setminus\{ e\}$ is balanced with $p$. We still know that $S\setminus\{ e\}$ is acyclic and $p_{S\setminus\{ e\}}(x)\ge1$, so induction shows that $R=S\setminus E$ is balanced with $p$. By Proposition~\ref{pro:orderability}, there is an ordering $r$ of the elements of $R$ that is executable from $p$. Then $x$ is reachable through $r$ since $p_{r}(x)=p_{R}(x)\ge p_{S}(x)\ge1$. \end{proof} The following is the rubbling version of the Squishing Lemma for pebbling \cite{Bunde_optimal}. \begin{lem} \emph{(Squishing)} If vertex $v$ is not reachable from a pebble distribution of size $n$ then there is a pebble distribution $r$ of size $n$ that is squished on each thread not containing $v$ such that $v$ is not reachable from $r$ either. \end{lem} \begin{proof} The result follows from \cite[Lemma 4]{Bunde_optimal} and Lemma~\ref{lem:notype2}. \end{proof} \section{Rubbling $C_{n}$} The Squishing Lemma allows us to find the rubbling numbers of cycles. For the pebbling numbers of $C_{n}$ see \cite{Pachter,Bunde_optimal}.
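The cycle values derived below can be confirmed for the smallest cases by exhaustive search over all distributions (a sketch; helper names are ours): the search returns $\rho(C_{4})=2^{2}=4$ and, for $k=2$, $\rho(C_{5})=\lfloor(7\cdot2^{1}-2)/3\rfloor+1=5$.

```python
from itertools import combinations_with_replacement

def rubbling_successors(adj, p):
    """Distributions obtainable from p by one executable rubbling move."""
    out = set()
    for u, nbrs in enumerate(adj):
        for v in nbrs:                       # pebbling move (v, v -> u)
            if p[v] >= 2:
                q = list(p); q[v] -= 2; q[u] += 1
                out.add(tuple(q))
        for i, v in enumerate(nbrs):         # strict rubbling move (v, w -> u)
            for w in nbrs[i + 1:]:
                if p[v] >= 1 and p[w] >= 1:
                    q = list(p); q[v] -= 1; q[w] -= 1; q[u] += 1
                    out.add(tuple(q))
    return out

def reachable_vertices(adj, p):
    """Vertices on which some executable rubbling sequence places a pebble."""
    seen, stack = {tuple(p)}, [tuple(p)]
    while stack:
        for q in rubbling_successors(adj, stack.pop()):
            if q not in seen:
                seen.add(q)
                stack.append(q)
    return {v for v in range(len(adj)) if any(q[v] >= 1 for q in seen)}

def distributions(n, m):
    """All pebble distributions of size m on vertices 0..n-1."""
    for spots in combinations_with_replacement(range(n), m):
        p = [0] * n
        for v in spots:
            p[v] += 1
        yield tuple(p)

def rubbling_number(adj):
    """Least m such that every size-m distribution reaches every vertex."""
    n, m = len(adj), 1
    while True:
        if all(reachable_vertices(adj, p) == set(range(n))
               for p in distributions(n, m)):
            return m
        m += 1

cycle = lambda n: [[(i - 1) % n, (i + 1) % n] for i in range(n)]

print(rubbling_number(cycle(4)))  # 4
print(rubbling_number(cycle(5)))  # 5
```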
\begin{prop} The rubbling number of an even cycle is $\rho(C_{2k})=2^{k}$. \end{prop} \begin{proof} It is well known \cite{Pachter} that $\pi(C_{2k})=2^{k}$. The result now follows since\[ 2^{k}=2^{\text{diam}(C_{2k})}\le\rho(C_{2k})\le\pi(C_{2k})=2^{k}.\] \end{proof} \begin{prop} The rubbling number of an odd cycle is $\rho(C_{2k+1})=\lfloor\frac{7\cdot2^{k-1}-2}{3}\rfloor+1$. \end{prop} \begin{proof} Let $C_{2k+1}$ be the cycle with consecutive vertices \[ x_{k},x_{k-1},\ldots,x_{1},v,y_{1},y_{2},\ldots,y_{k},x_{k}.\] First we show that $\rho(C_{2k+1})\le\lfloor\frac{7\cdot2^{k-1}-2}{3}\rfloor+1$. Let $p$ be a pebble distribution on $C_{2k+1}$ from which not every vertex is reachable. It suffices to show that $p$ contains at most $\lfloor\frac{7\cdot2^{k-1}-2}{3}\rfloor$ pebbles. By symmetry, we can assume that $v$ is the vertex that is not reachable from $p$. By the Squishing Lemma, we can assume that $p$ is squished on the thread with consecutive vertices $y_{1},\ldots,y_{k},x_{k},\ldots,x_{1}$. First we consider the case when all the pebbles are at distance $k$ from $v$, that is, $p(x_{k},y_{k},*)=(a,b,0)$. By symmetry, we can assume that $0\le a\le b$. Then we must have\begin{equation} \left\lfloor \frac{a}{2}\right\rfloor +b\le2^{k}-1,\label{eq:1}\end{equation} otherwise we could move $\lfloor\frac{a}{2}\rfloor$ pebbles from vertex $x_{k}$ to vertex $y_{k}$ and then reach $v$ from $y_{k}$. Hence $\frac{a}{2}<\left\lfloor \frac{a}{2}\right\rfloor +1\le2^{k}-1-b+1=2^{k}-b$ and so\begin{equation} a+2b\le2^{k+1}-1.\label{eq:1a}\end{equation} We also must have \begin{equation} \left\lfloor \frac{b-2^{k-1}}{2}\right\rfloor +a\le2^{k-1}-1,\label{eq:2}\end{equation} otherwise we could move $\lfloor\frac{b-2^{k-1}}{2}\rfloor$ pebbles from vertex $y_{k}$ to vertex $x_{k}$ after which $x_{1}$ is reachable from $x_{k}$ and $y_{1}$ is reachable from $y_{k}$, and so $v$ would be reachable by the move $(x_{1},y_{1}\to v)$.
Hence $\frac{b-2^{k-1}}{2}<\left\lfloor \frac{b-2^{k-1}}{2}\right\rfloor +1\le2^{k-1}-1-a+1=2^{k-1}-a$ and so \begin{equation} b+2a\le2^{k}+2^{k-1}-1.\label{eq:2a}\end{equation} Adding (\ref{eq:1a}) and (\ref{eq:2a}) gives\[ 3(a+b)\le2^{k+1}-1+2^{k}+2^{k-1}-1=7\cdot2^{k-1}-2,\] which shows that $|p|=a+b\le\lfloor\frac{7\cdot2^{k-1}-2}{3}\rfloor$. Now we consider the case when some pebbles are closer to $v$ than $k$, that is, $p(x_{i},x_{i+1},*)=(b,a,0)$ with $b\ge1$ and $a\ge0$ for some $1\le i<k$. Then we must have $\left\lfloor \frac{a}{2}\right\rfloor +b\le2^{i}-1\le2^{k-1}-1$ otherwise $v$ is reachable. Hence\begin{eqnarray*} |p| & = & a+b\le a-\left\lfloor \frac{a}{2}\right\rfloor +\left\lfloor \frac{a}{2}\right\rfloor +b\\ & \le & \left\lfloor \frac{a}{2}\right\rfloor +1+2^{k-1}-1\le2^{k-1}-1-b+1+2^{k-1}-1\\ & = & 2\cdot2^{k-1}-2<\left\lfloor \frac{7\cdot2^{k-1}-2}{3}\right\rfloor .\end{eqnarray*} Now we show that we can always distribute $\lfloor\frac{7\cdot2^{k-1}-2}{3}\rfloor$ pebbles so that $v$ is unreachable and so $\rho(C_{2k+1})\ge\lfloor\frac{7\cdot2^{k-1}-2}{3}\rfloor+1$. Let $a=\lfloor\frac{2^{k}}{3}\rfloor$ and $b=\lfloor\frac{5\cdot2^{k-1}}{3}\rfloor$. It is easy to check that \[ a=\begin{cases} \frac{2^{k}-2}{3}, & \text{$k$ odd}\\ \frac{2^{k}-1}{3}, & \text{$k$ even}\end{cases},\ b=\begin{cases} \frac{5\cdot2^{k-1}-2}{3}, & \text{$k$ odd}\\ \frac{5\cdot2^{k-1}-1}{3}, & \text{$k$ even}\end{cases},\ \left\lfloor \frac{7\cdot2^{k-1}-2}{3}\right\rfloor =\begin{cases} \frac{7\cdot2^{k-1}-4}{3}, & \text{$k$ odd}\\ \frac{7\cdot2^{k-1}-2}{3}, & \text{$k$ even}\end{cases}\] and so $a+b=\lfloor\frac{7\cdot2^{k-1}-2}{3}\rfloor$. We show that $v$ is unreachable from the pebble distribution $p(x_{k},y_{k},*)=(a,b,0)$. It is easy to see that $a$ and $b$ satisfy (\ref{eq:1a}) and (\ref{eq:2a}). Suppose that $v$ is reachable from $p$, that is, there is an acyclic multiset $S$ of rubbling moves that is balanced with $p$ satisfying $p_{S}(v)\ge1$. 
The balance condition at $v$ shows that $d^{-}(v)\ge2$. Hence $S$ must have at least one of $(x_{1},y_{1}\to v)$, $(x_{1},x_{1}\to v)$ or $(y_{1},y_{1}\to v)$. First assume that $(x_{1},y_{1}\to v)\in S$. The argument used in the proof of Proposition~\ref{pro:path} shows that then $T(G,S)$ has at least $2^{i-1}$ arrows from $x_{i}$ to $x_{i-1}$ and from $y_{i}$ to $y_{i-1}$ for all $i\in\{2,\ldots,k\}$. Since $S$ is acyclic, any arrow in $T(G,S)$ pointing to $x_{k}$ must come from $y_{k}$. So the balance condition at $x_{k}$ requires $m$ arrows from $y_{k}$ to $x_{k}$ satisfying $2^{k-1}\le a+\frac{m}{2}$. The balance condition at $y_{k}$ gives $2^{k-1}+m\le b$. Combining the two inequalities gives $2^{k}+2^{k-1}\le b+2a$, which contradicts (\ref{eq:2a}). Next assume that $(y_{1},y_{1}\to v)\in S$. Then $T(G,S)$ has at least $2^{i}$ arrows from $y_{i}$ to $y_{i-1}$ for all $i\in\{2,\ldots,k\}$. The balance condition at $y_{k}$ requires $m$ arrows from $x_{k}$ to $y_{k}$ satisfying $2^{k}\le b+\frac{m}{2}$. We must have $d^{-}(x_{k})=0$, otherwise there is a directed path from $v$ to $x_{k}$ which is impossible since $S$ is acyclic. The balance condition at $x_{k}$ gives $m\le a$. Combining the two inequalities gives $2^{k+1}\le a+2b$, which contradicts (\ref{eq:1a}). A similar argument shows that $(x_{1},x_{1}\to v)\in S$ is also impossible. \end{proof} \section{Optimal rubbling} Optimal pebbling was studied in \cite{Pachter,Moews_optimal,Fu,Bunde_optimal}. In this section we investigate the optimal rubbling number of certain graphs. \begin{defn} The \emph{optimal rubbling number} $\rho_{\text{opt}}(G)$ of a graph $G$ is the minimum number $m$ for which there is a pebble distribution of size $m$ from which every vertex of $G$ is reachable.
\end{defn} \begin{prop} We have the following values for the optimal rubbling number: \emph{a.} $\rho_{\text{{\rm opt}}}(K_{n})=2$ for $n\ge2$ where $K_{n}$ is the complete graph with $n$ vertices; \emph{b.} $\rho_{{\rm opt}}(W_{n})=2$ for $n\ge4$ where $W_{n}$ is the wheel with $n$ spokes; \emph{c.} $\rho_{\text{{\rm opt}}}(K_{m,n})=3$ for $m,n\ge3$ where $K_{m,n}$ is the complete bipartite graph; \emph{d.} $\rho_{{\rm opt}}(P)=4$ where $P$ is the Petersen graph. \end{prop} \begin{proof} a. Not every vertex of $K_{n}$ is reachable from a distribution of size 1 since $n\ge2$. On the other hand any vertex is reachable by a single move from any distribution of size 2. b. Again, not every vertex of $W_{n}$ is reachable from a distribution of size 1. On the other hand, every vertex is reachable from the distribution that has 2 pebbles at the center of $W_{n}$. c. Let $A$ and $B$ be the natural partition of the vertex set of $K_{m,n}$. Let $p$ be a pebble distribution of size 2. If $p$ places both pebbles on vertices in $A$ then there is a vertex in $A$ that is not reachable from $p$. If $p$ places both pebbles on vertices in $B$ then there is a vertex in $B$ that is not reachable from $p$. If $p$ places one pebble on a vertex in $A$ and one pebble on a vertex in $B$ then both $A$ and $B$ have vertices that are unreachable from $p$. On the other hand any vertex is reachable in at most two moves from a pebble distribution that places one pebble on a vertex in $A$ and two pebbles on a vertex in $B$. d. Every vertex is reachable from the pebble distribution that has 4 pebbles on any of the vertices. A simple case analysis shows that 3 pebbles are not sufficient to make every vertex reachable. \end{proof} Rolling moves serve the same purpose as the smoothing move of \cite{Bunde_optimal}.
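Before introducing rolling moves, note that the values in the proposition above can be recomputed by exhaustive search, now minimizing over distributions rather than taking the worst case (a sketch; the helper names and the standard Petersen labeling are ours).

```python
from itertools import combinations_with_replacement

def rubbling_successors(adj, p):
    """Distributions obtainable from p by one executable rubbling move."""
    out = set()
    for u, nbrs in enumerate(adj):
        for v in nbrs:                       # pebbling move (v, v -> u)
            if p[v] >= 2:
                q = list(p); q[v] -= 2; q[u] += 1
                out.add(tuple(q))
        for i, v in enumerate(nbrs):         # strict rubbling move (v, w -> u)
            for w in nbrs[i + 1:]:
                if p[v] >= 1 and p[w] >= 1:
                    q = list(p); q[v] -= 1; q[w] -= 1; q[u] += 1
                    out.add(tuple(q))
    return out

def reachable_vertices(adj, p):
    seen, stack = {tuple(p)}, [tuple(p)]
    while stack:
        for q in rubbling_successors(adj, stack.pop()):
            if q not in seen:
                seen.add(q)
                stack.append(q)
    return {v for v in range(len(adj)) if any(q[v] >= 1 for q in seen)}

def distributions(n, m):
    for spots in combinations_with_replacement(range(n), m):
        p = [0] * n
        for v in spots:
            p[v] += 1
        yield tuple(p)

def optimal_rubbling_number(adj):
    """Least m such that SOME size-m distribution reaches every vertex."""
    n, m = len(adj), 1
    while True:
        if any(reachable_vertices(adj, p) == set(range(n))
               for p in distributions(n, m)):
            return m
        m += 1

complete = lambda n: [[j for j in range(n) if j != i] for i in range(n)]
k33 = [[3, 4, 5]] * 3 + [[0, 1, 2]] * 3
wheel5 = [[1, 2, 3, 4, 5],                    # hub of the wheel W_5
          [0, 2, 5], [0, 1, 3], [0, 2, 4], [0, 3, 5], [0, 4, 1]]
petersen = [[] for _ in range(10)]
for i in range(5):
    for a, b in ((i, (i + 1) % 5), (i, i + 5), (i + 5, 5 + (i + 2) % 5)):
        petersen[a].append(b)
        petersen[b].append(a)

print(optimal_rubbling_number(complete(4)))  # 2
print(optimal_rubbling_number(wheel5))       # 2
print(optimal_rubbling_number(k33))          # 3
print(optimal_rubbling_number(petersen))     # 4
```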
\begin{defn} Let $v_{1},\ldots,v_{n}$ be the consecutive vertices of a path such that the degree of $v_{1}$ is 1 and the degrees of $v_{2},v_{3},\ldots,v_{n-1}$ are all 2. The subgraph induced by $\{ v_{1},\ldots,v_{n}\}$ is called an \emph{arm} of the graph. Let $p$ be a pebble distribution such that $p(v_{i})\ge2$ for some $i\in\{1,\ldots,n-1\}$, $p(v_{n})=0$, and $p(v_{j})\ge1$ for all $j\in\{1,\ldots,n-1\}$. A \emph{single rolling move} creates a new pebble distribution $q$ by taking one pebble from $v_{i}$ and placing it on $v_{n}$, that is, $q(v_{i},v_{n},*)=(p(v_{i})-1,1,p(*))$. See Figure~\ref{cap:rollvis}. \begin{figure} ~\input{rollvis.inc} \caption{\label{cap:rollvis}Visualization of a single rolling move with $i=2$ and $n=5$. An arrow indicates the transfer of a single pebble.} \end{figure} \end{defn} \begin{lem} \label{lem:roll}Let $q$ be a pebble distribution on $G$ gotten from the pebble distribution $p$ by applying a single rolling move from $v_{i}$ to $v_{n}$ on the arm with vertices $v_{1},\ldots,v_{n}$. If vertex $u\in G$ is reachable from $p$ then $u$ is also reachable from $q$. \begin{figure} \begin{center}~\input{roll.inc}\end{center} \caption{\label{cap:roll}Four possible configurations for $T(G,S\setminus R)$. The solid arrows represent the arrows of $P$.} \end{figure} \end{lem} \begin{proof} If $u$ is a vertex of the arm then it is clearly reachable from $q$, so we can assume that $u$ is not on the arm. Let $S$ be an acyclic multiset of rubbling moves balanced with $p$ such that $p_{S}(u)\ge1$. Let $P$ be a maximum length directed path in $T(G,S)$ starting at $v_{i}$ and not going further than $v_{n}$. Then $P$ has consecutive vertices $v_{i}=v_{n_{0}},v_{n_{1}},\ldots,v_{n_{k}}$ on the arm. Let $R$ be the multiset containing the elements of $S$ without the moves corresponding to the arrows of $P$. We show that $R$ is balanced with $q$ and so $u$ is reachable from $q$ since $q_{R}(u)=p_{S}(u)\ge1$.
Figure ~\ref{cap:roll} shows the possible configurations for $T(G,S\setminus R)$. We have $d_{T(G,S)}^{+}(v_{n_{k}})=0$ even if $n_{k}=1$. If $n_{k}=n$ then \[ q_{R}(v_{n_{k}})=p_{S}(v_{n_{k}})+\Delta(1,-2,0)=p_{S}(v_{n_{k}})\ge1\ge0,\] while if $n_{k}\not=n$ then\[ q_{R}(v_{n_{k}})=p_{S}(v_{n_{k}})+\Delta(0,-2,0)\ge p_{S}(v_{n_{k}})-1\ge2-1\ge0.\] So $R$ is balanced with $q$ at $v_{n_{k}}$. If $d_{T(G,S)}^{+}(v_{n_{0}})=0$ then $n_{0}=n_{k}$, otherwise there is an $a\in\{-1,-2\}$ such that\[ q_{R}(v_{n_{0}})=p_{S}(v_{n_{0}})+\Delta(-1,0,a)\ge p_{S}(v_{n_{0}})\ge0\] and so $R$ is balanced with $q$ at $v_{n_{0}}$. If $0<j<k$ then there is an $a\in\{-1,-2\}$ such that\[ q_{R}(v_{n_{j}})=p_{S}(v_{n_{j}})+\Delta(0,-2,a)\ge p_{S}(v_{n_{j}})\ge0\] and so $R$ is balanced with $q$ at $v_{n_{j}}.$ It is clear that $R$ is balanced with $q$ at every other vertex. \end{proof} \begin{defn} Let $v_{1},\ldots,v_{n}$ be the consecutive vertices of a path such that the degrees of $v_{2},v_{3},\ldots,v_{n-1}$ are all 2. Let $p$ be a pebble distribution such that $p(v_{1})=0=p(v_{n})$, $p(v_{i})\ge2$ for some $i\in\{2,\ldots,n-1\}$ and $p(v_{j})\ge1$ for all $j\in\{2,\ldots,n-1\}$. A \emph{double rolling move} creates a new pebble distribution $q$ by taking two pebbles from $v_{i}$ and placing one pebble on $v_{1}$ and one pebble on $v_{n}$, that is $q(v_{i},v_{1},v_{n},*)=(p(v_{i})-2,1,1,p(*))$. See Figure~\ref{cap:drollvis}. \begin{figure} ~\input{drollvis.inc} \caption{\label{cap:drollvis}Visualization of a double rolling move with $i=2$ and $n=5$. An arrow indicates the transfer of a single pebble.} \end{figure} \end{defn} \begin{lem} \label{lem:droll}Let $q$ be a pebble distribution on $G$ gotten from the pebble distribution $p$ by applying a double rolling move from vertex $v_{i}$ to vertices $v_{1}$ and $v_{n}$ on the path with consecutive vertices $v_{1},\ldots,v_{n}$. If vertex $u\in G$ is reachable from $p$ then $u$ is also reachable from $q$. 
\end{lem} \begin{proof} If $u\in\{ v_{1},\ldots,v_{n}\}$ then it is clearly reachable from $q$, so we can assume that $u\not\in\{ v_{1},\ldots,v_{n}\}$. Let $S$ be an acyclic multiset of rubbling moves balanced with $p$ such that $p_{S}(u)\ge1$. Let $P$ be a maximum length directed path in $T(G,S)$ starting at $v_{i}$ and not going further than $v_{1}$ or $v_{n}$. Then $P$ has consecutive vertices $v_{i}=v_{n_{0}},v_{n_{1}},\ldots,v_{n_{k}}\in\{ v_{1},\ldots,v_{n}\}$. Let $R$ be the multiset containing the elements of $S$ without the moves corresponding to the arrows of $P$. An argument similar to the one in the proof of Lemma~\ref{lem:roll} shows that $R$ is balanced with $q$ at every vertex except maybe at $v_{i}$. If $n_{k}=n_{0}$ or the arrow $(v_{n_{0}},v_{n_{1}})$ in $P$ corresponds to a pebbling move, then $R$ is balanced with $q$ at $v_{i}$ as well. Then $u$ is reachable from $q$ since $q_{R}(u)=p_{S}(u)\ge1$. So we can assume that $(v_{n_{0}},v_{n_{1}})$ corresponds to a strict rubbling move and that $k=1$. Let $\tilde{P}$ be a maximum length path in $T(G,R)$. Since $k=1$, the length of $\tilde{P}$ is either 0 or 1. If this length is 0, then $q$ is balanced with $R$ at $v_{i}$ since $d_{T(G,R)}^{+}(v_{i})=0$ and we are done. If the length of $\tilde{P}$ is 1, then let $\tilde{R}$ be the multiset containing the elements of $R$ without the moves corresponding to the arrows of $\tilde{P}$. Figure~\ref{cap:droll} shows the possibilities for $T(G,S\setminus\tilde{R})$. It is easy to check that $\tilde{R}$ is balanced with $q$ in each case. Thus $u$ is reachable from $q$ since $q_{\tilde{R}}(u)\ge p_{S}(u)$. \end{proof} \begin{figure} \begin{center}~\input{droll.inc}\end{center} \caption{\label{cap:droll}The four possible configurations for $T(G,S\setminus\tilde{R})$. The solid arrows represent the moves corresponding to the arrows of $\tilde{P}$.
The dotted arrows represent the moves corresponding to the arrows of $P$.} \end{figure} Rolling moves make it possible to find the optimal rubbling number of paths and cycles. \begin{prop} The optimal rubbling number of the path is $\rho_{\text{{\rm opt}}}(P_{n})=\lceil\frac{n+1}{2}\rceil$. \end{prop} \begin{proof} Let $P_{n}$ be the path with consecutive vertices $v_{1},\ldots,v_{n}$. It is clear that every vertex is reachable from the pebble distribution\[ p(v_{i})=\begin{cases} 1, & \text{$i$ is odd or $i=n$}\\ 0, & \text{else}\end{cases}\] which has size $\lceil\frac{n+1}{2}\rceil$. Now assume that there is a pebble distribution of size $\lceil\frac{n+1}{2}\rceil-1$ from which every vertex of $P_{n}$ is reachable. Let us apply all available rolling moves (single or double). The process ends in finitely many steps since a rolling move reduces the number of pebbles on vertices with more than one pebble by at least one. If there is a vertex with more than one pebble and a vertex with no pebbles, then a rolling move is available. The number of pebbles is not larger than the number of vertices, so the resulting pebble distribution $q$ has at most one pebble on each vertex. Every vertex of $P_{n}$ still must be reachable from $q$ by Lemmas~\ref{lem:roll} and \ref{lem:droll}. The only moves executable directly from $q$ are strict rubbling moves. By Corollary~\ref{cor:no-flip-flop} we can assume that every vertex is reachable by a sequence of moves in which a strict rubbling move $(x,y\to z)$ is not followed by a move of the form $(z,z\to x)$ or $(z,z\to y)$. So we can assume that every vertex is reachable through strict rubbling moves. Then we must have $q(v_{1})=1=q(v_{n})$, otherwise $v_{1}$ or $v_{n}$ is not reachable. A pigeonhole argument shows that there must be two adjacent vertices $u$ and $w$ such that $q(u)=0=q(w)$. But then neither $u$ nor $w$ is reachable from $q$, which is a contradiction.
\end{proof} \begin{prop} The optimal rubbling number of the cycle is $\rho_{\text{{\rm opt}}}(C_{n})=\lceil\frac{n}{2}\rceil$ for $n\ge3$. \end{prop} \begin{proof} Let $C_{n}$ be the cycle with consecutive vertices $v_{1},\ldots,v_{n}$. It is clear that every vertex is reachable from the pebble distribution\[ p(v_{i})=\begin{cases} 1, & \text{$i$ is odd}\\ 0, & \text{else}\end{cases}\] which has size $\lceil\frac{n}{2}\rceil$. Now assume that there is a pebble distribution of size $\lceil\frac{n}{2}\rceil-1$ from which every vertex of $C_{n}$ is reachable. Let us apply all available double rolling moves. The process ends in finitely many steps since a double rolling move reduces the number of pebbles on vertices with more than one pebble by two. If there is a vertex with more than one pebble and two vertices with no pebbles, then a double rolling move is available. The number of pebbles is smaller than the number of vertices, so the resulting pebble distribution $q$ has at most one pebble on each vertex. Every vertex of $C_{n}$ still must be reachable from $q$. The only moves executable directly from $q$ are strict rubbling moves. The No Cycle Lemma implies that we can assume that every vertex is reachable through strict rubbling moves. A pigeonhole argument shows that there must be two adjacent vertices $u$ and $w$ such that $q(u)=0=q(w)$. But then neither $u$ nor $w$ is reachable from $q$, which is a contradiction. \end{proof} \section{Further questions} There are plenty of unanswered questions. The following might not be too hard to answer. \begin{itemize} \item What is the optimal rubbling number for the hypercube $Q^{n}$? It is fairly easy to get answers for small $n$ with a computer.
The known values are listed in Table~\ref{cap:Known-rho-opt-hyper}.% \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline $n$& 2 & 3 & 4 & 5 \tabularnewline \hline $\rho(B_{n})$& 4& 16& $>23$& \tabularnewline \hline $\rho_{{\rm opt}}(B_{n})$& 2& 4& 6& \tabularnewline \hline $\rho_{{\rm opt}}(Q^{n})$& 2 & 3 & 4 & 6 \tabularnewline \hline \end{tabular} ~ \caption{\label{cap:Known-rho-opt-hyper}Rubbling values without a known general formula.\protect \\ } \end{table} \item Does Graham's conjecture hold for the rubbling number? \item Is the cover rubbling number the same as the cover pebbling number for every graph? \item We have $\pi(P_{n})=\rho(P_{n})$, $\pi(Q^{n})=\rho(Q^{n})$ and it is easy to check that $\pi(L)=8=\rho(L)$ where $L$ is the Lemke graph \cite{Hurlbert_survey2}. This is not always the case though. Is it possible to characterize those graphs for which the pebbling and the rubbling numbers are the same? \item Let $f(d,n)=\max\{\rho(G)\mid|V(G)|=n\text{ and diam}(G)=d\}$. It is not hard to check that $f(2,n)\le5$ and $f(3,n)\le9$ for $n\in\{1,\ldots,7\}$. Do these upper limits hold for all $n$? Is it true that $f(d,n)\le2^{d}+1$ for all $d$ and $n$? \end{itemize} \bibliographystyle{amsplain}
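As a closing computational remark, the smallest $\rho_{{\rm opt}}(Q^{n})$ entries of Table~\ref{cap:Known-rho-opt-hyper} can be reproduced by exhaustive search (a sketch; helper names are ours, vertices of $Q^{n}$ are encoded as bit strings, adjacent when they differ in exactly one bit). The naive state-space search becomes infeasible beyond $n=3$.

```python
from itertools import combinations_with_replacement

def rubbling_successors(adj, p):
    """Distributions obtainable from p by one executable rubbling move."""
    out = set()
    for u, nbrs in enumerate(adj):
        for v in nbrs:                       # pebbling move (v, v -> u)
            if p[v] >= 2:
                q = list(p); q[v] -= 2; q[u] += 1
                out.add(tuple(q))
        for i, v in enumerate(nbrs):         # strict rubbling move (v, w -> u)
            for w in nbrs[i + 1:]:
                if p[v] >= 1 and p[w] >= 1:
                    q = list(p); q[v] -= 1; q[w] -= 1; q[u] += 1
                    out.add(tuple(q))
    return out

def reachable_vertices(adj, p):
    seen, stack = {tuple(p)}, [tuple(p)]
    while stack:
        for q in rubbling_successors(adj, stack.pop()):
            if q not in seen:
                seen.add(q)
                stack.append(q)
    return {v for v in range(len(adj)) if any(q[v] >= 1 for q in seen)}

def distributions(n, m):
    for spots in combinations_with_replacement(range(n), m):
        p = [0] * n
        for v in spots:
            p[v] += 1
        yield tuple(p)

def optimal_rubbling_number(adj):
    """Least m such that SOME size-m distribution reaches every vertex."""
    n, m = len(adj), 1
    while True:
        if any(reachable_vertices(adj, p) == set(range(n))
               for p in distributions(n, m)):
            return m
        m += 1

# Q^n: vertices 0 .. 2^n - 1, adjacent when they differ in exactly one bit.
hypercube = lambda n: [[i ^ (1 << b) for b in range(n)] for i in range(1 << n)]

print(optimal_rubbling_number(hypercube(2)))  # 2
print(optimal_rubbling_number(hypercube(3)))  # 3
```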
\section{Introduction} Let $k$ be a field of characteristic zero and $R=k[x_1,\dots,x_n]$ the ring of polynomials in $n$ variables. For any ideal $I\subseteq R$, the local cohomology modules $H_I^r(R)$ have a natural finitely generated module structure over the Weyl algebra $A_n$. Recently, there has been an effort made towards effective computation of these modules by using the theory of Gr\"obner bases over rings of differential operators. Algorithms given by U.~Walther \cite{Wa99} and T.~Oaku and N.~Takayama \cite{OT01} provide a utility for such computation and are both implemented in the package {\tt D-modules} \cite{LST} for {\tt Macaulay 2} \cite{GS}. \vskip 2mm Walther's algorithm is based on the construction of the \v{C}ech complex in the category of $A_n$-modules. So it is necessary to give a description of the localization $R_f$ at a polynomial $f\in R$. An algorithm to compute these modules was given by T.~Oaku in \cite{Oa97}. The main ingredient of the algorithm is the computation of the Bernstein-Sato polynomial of $f$, which turns out to be a major bottleneck due to its complexity. \vskip 2mm To give a presentation of $R_f$ or $H_I^r(R)$ as $A_n$-modules is beyond the scope of this work. Our aim is to provide an algorithm to compute an invariant that can be associated to a finitely generated $A_n$-module, the characteristic cycle. This invariant gives a description of the support of the $A_n$-module as an $R$-module, so it is a useful tool to prove the vanishing of local cohomology modules. Moreover, the characteristic cycles of local cohomology modules also give us some extra information since their multiplicities are a set of numerical invariants of the quotient ring $R/I$ (see \cite{Al02}). Among these invariants we may find Lyubeznik numbers that were introduced in \cite{Ly93}. \vskip 2mm We will present an algorithm to compute the characteristic cycle of any local cohomology module. 
It comes naturally from the structure of the \v{C}ech complex and the additivity of the characteristic cycle with respect to short exact sequences. The requirement is that we first have to compute the characteristic cycle of the localizations appearing in the \v{C}ech complex. To do so, we present a method based on a geometric formula given by V.~Ginsburg in \cite{Gi86} and reinterpreted by J.~Brian\c{c}on, P.~Maisonobe and M.~Merle in \cite{BMM94}. The advantage of this approach is that we will not have to compute the Bernstein-Sato polynomial of $f$ and we will be operating in a commutative graded ring in $2n$ variables instead of operating in the Weyl algebra $A_n$. The algorithm we will present is an elaboration of \cite[Thm. 3.4.2]{BMM94}, which -- we have to point out -- is stated in the complex analytic context. For our computational purposes we are interested in the algebraic context since we will need (absolute) primary decomposition, so we have to make sure that, at least for the examples we will develop, this result is also true in the algebraic setting. The complex algebraic case may already be found in \cite{Gi86}, but the approach we will use in this work is through flat base change. It allows us to work over any field of characteristic zero for a large sample of examples since localization modules and local cohomology modules have a good behavior with respect to this operation. Since absolute primary decomposition is not implemented in {\tt Macaulay 2}, we compute over the field of rational numbers. This replacement, however, is not an issue for the examples that we present. \vskip 2mm The scripts of the source codes we use in this work, as well as the output in full detail of the examples, are available at the web page http://www2.math.uic.edu/$\sim$leykin/CC. In the future, we will explore the possibility of using numerical primary decomposition -- a technique for varieties over $\mathbb{C}$ that is being developed by the second author. 
\section{Basics on the theory of ${\mathcal D}$-modules} Let $X=\mathbb{C}^n$ be the complex analytic space with coordinate system $x_1,\dots,x_n$. Given the convergent series ring $R=\mathbb{C}\{x_1,\dots,x_n\}$ consider the associated ring of differential operators $D_n:=R\langle {\partial}_1,\dots,{\partial}_n\rangle$, i.e. the ring extension generated by the partial derivatives ${\partial}_i=\frac{{\partial}}{{\partial} x_i}$, with the relations given by ${\partial}_i{\partial}_j={\partial}_j{\partial}_i$ and ${\partial}_i r - r {\partial}_i=\frac{{\partial} r}{{\partial} x_i}$, where $r\in R$. For any unexplained terminology concerning the theory of rings of differential operators we shall use \cite{Bj79}, \cite{Co95}. \vskip 2mm The ring $D_n$ has a natural increasing filtration given by the total order; the corresponding associated graded ring $gr(D_n)$ is isomorphic to the polynomial ring $R[a_1,\dots,a_n]$. A finitely generated $D_n$-module $M$ has a good filtration, i.e. an increasing sequence of finitely generated $R$-submodules such that the associated graded module $gr(M)$ is a finitely generated $gr(D_n)$-module. The { characteristic ideal} of $M$ is the ideal in $gr(D_n)=R[a_1,\dots,a_n]$ given by the radical ideal $J(M):= \mbox{\rm{rad}} \, ( \mbox{\rm{Ann}} \,_{gr(D_n)} (gr(M)))$. The ideal $J(M)$ is independent of the good filtration on $M$. The { characteristic variety} of $M$ is the closed algebraic set given by: $$C(M):= V(J(M))\subseteq \mathrm {Spec\, } (gr(D_n))=\mathrm {Spec\, } (R[a_1,\dots,a_n]).$$ The characteristic variety describes the support of a finitely generated $D_n$-module as an $R$-module. Let $\pi: \mathrm {Spec\, }(R[a_1,\dots,a_n])\longrightarrow \mathrm {Spec\, }(R)$ be the map defined by $\pi(x,a)= x$. Then $\mbox{\rm{Supp}} _R(M)=\pi(C(M)).$ \vskip 2mm We single out the important class of regular holonomic $D_n$-modules. Namely, a finitely generated $D_n$-module $M$ is holonomic if $M=0$ or $\dim C(M)=n$. 
It is regular if there exists a good filtration on $M$ such that $\mbox{\rm{Ann}} \,_{gr(D_n)} (gr(M))$ is a radical ideal (\cite{BK}, see also \cite[\S 3]{Gi86}, \cite{Co95}). \vskip 2mm The { characteristic cycle} of $M$ is defined as:$$CC(M)= \sum m_i \hskip 2mm \Lambda_i$$ where the sum is taken over all the irreducible components $\Lambda_i=V(\mathfrak{q}_i)$ of the characteristic variety $C(M)$, where $\mathfrak{q}_i \in \mathrm {Spec\, } (gr(D_n))$ and $m_i$ is the multiplicity of $gr(M)$ at a generic point along each component $\Lambda_i$. These multiplicities can be computed via Hilbert functions (see \cite{Ca84},\cite{Le}). Notice that the contraction of $\mathfrak{q}_i$ to $R$ is a prime ideal so the variety $\pi(\Lambda_i)$ is irreducible. These components can be described in terms of conormal bundles to $X_i:=\pi(\Lambda_i)$ in $X$, i.e. $$CC(M)=\sum m_i \hskip 2mm T_{X_i}^*X.$$ In particular, the support of $M$ is $\mbox{\rm{Supp}} _{R}(M)= \bigcup X_i $. For details we refer, among others, to \cite[\S10]{Ph79}, \cite[\S7.5]{Ki88}. \subsection{Characteristic cycle of a localization} Let $M$ be a regular holonomic $D_n$-module. Then the localization $M_f$ at a polynomial $f\in R$ is a regular holonomic $D_n$-module as well. A geometric formula that provides the characteristic cycle of $M_f$ in terms of the characteristic cycle of $M$ is given by V.~Ginsburg in \cite{Gi86} and became known to us through the interpretation of J.~Brian\c{c}on, P.~Maisonobe and M.~Merle in~\cite{BMM94}. \vskip 2mm First we will recall how to compute the conormal bundle relative to $f$. Let $Y^\circ$ be the smooth part of a subvariety $Y\subseteq X$ where $f|_{Y}$ is a submersion. Set: $$W=\{(x,a)\in T^{\ast}X \hskip 2mm | \hskip 2mm x \in Y^\circ \hskip 2mm {\rm and} \hskip 2mm a \hskip 2mm {\rm annihilates} \hskip 2mm T_x(f|_{Y})^{-1}(f(x))\}.$$ The conormal bundle relative to $f$, denoted by $T_{f|_{Y}}^{\ast}$, is then the closure of $W$ in ${T^{\ast} X}|_{Y}$. 
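As a sanity check of the definition above, the following Python/SymPy sketch computes the fiber equations of $T_{f|_{Y}}^{\ast}$ in the special case where $Y$ and $f$ are linear, so the Jacobian of $(f,g_1,\dots,g_d)$ is a constant matrix, its kernel is the tangent space $T_x(f|_{Y})^{-1}(f(x))$ at every smooth point, and no closure operation is needed. The choice of $Y$ and $f$ is purely illustrative.

```python
import sympy as sp

# Coordinates on T*X for X = C^3: base (x, y, z), fiber (a, b, c).
x, y, z, a, b, c = sp.symbols('x y z a b c')

def relative_conormal_fiber_eqs(grad_f, grads_g, fiber):
    """Fiber equations of the conormal relative to f, for linear Y and f:
    the Jacobian (grad f; grad g_1; ...; grad g_d) is constant, its kernel
    spans the tangent space to the fibers of f|_Y, and a covector lies on
    T*_{f|Y} iff it annihilates that kernel."""
    jac = sp.Matrix([grad_f] + grads_g)
    fiber_vec = sp.Matrix(fiber)
    return [sp.expand(fiber_vec.dot(v)) for v in jac.nullspace()]

# Y = {x = 0} in C^3 and f = y: grad f = (0, 1, 0), grad g_1 = (1, 0, 0).
eqs = relative_conormal_fiber_eqs([0, 1, 0], [[1, 0, 0]], [a, b, c])
# Together with the equation x = 0 of Y this cuts out
# T*_{f|Y} = {x = 0, c = 0}; adding f = y = 0 gives T*_{{x=y=0}} X.
print(eqs)
```

This reproduces the second computation in the example below; for nonlinear $Y$ or $f$ one needs the kernel and saturation computations of the algorithm in the next section.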
\begin{theorem}\label{propBMM}{\rm (\cite[Thm. 3.4.2]{BMM94})} Let $M$ be a regular holonomic $D_n$-module with characteristic cycle $CC(M)=\sum_i m_i \hskip 2mm T_{X_i}^*X$ and let $f\in R$ be a polynomial. Then $$CC(M_f)=\sum_{f(X_i)\neq 0} m_i(\Gamma_i+T_{X_i}^*X)$$ with $\Gamma_i=\sum_j m_{ij} \Gamma_{ij}$, where $\Gamma_{ij}$ are the irreducible components of the divisor defined by $f$ in $T^*_{f|_{X_i}}$ and $m_{ij}$ are the corresponding multiplicities. \end{theorem} \begin{remark} Assume for simplicity that $M$ is a regular holonomic $D_n$-module such that $CC(M)= T_{Y}^*X$ and let $f\in R$ be a polynomial such that $f(Y)\neq 0$. By the formula above we have $CC(M_f)=T_{Y}^*X + \Gamma$. It is worthwhile to point out that the reduced variety associated to $\Gamma$ is the characteristic variety of the local cohomology module $H_{(f)}^1(M)$. \end{remark} \begin{example} Set $R=\mathbb{C}\{x,y,z\}$, $M=H^1_{(x)}(R)$, $f=x$ and $g=y$. \vskip 2mm We have $CC(R)=T^*_{X}X$. Then $T_{f|_{X}}^{\ast}= \{(x,y,z,a,b,c)\in T^{\ast}X \hskip 2mm | \hskip 2mm b=0,c=0\}$ and the divisor defined by $f$ in $T^*_{f|_{X}}$ is $\Gamma = \{(x,y,z,a,b,c)\in T^{\ast}X \hskip 2mm | \hskip 2mm b=0,c=0,x=0\}=T^*_{\{x=0\}}X$. Thus $$CC(R_x)=T^*_{X}X+T^*_{\{x=0\}}X$$ \vskip 2mm We have $CC(M)=T^*_{\{x=0\}}X$. Then $T_{g|_{\{x=0\}}}^{\ast}= \{(x,y,z,a,b,c)\in T^{\ast}X \hskip 2mm | \hskip 2mm c=0,x=0\}$ and the divisor defined by $g$ in $T^*_{g|_{\{x=0\}}}$ is $\Gamma = \{(x,y,z,a,b,c)\in T^{\ast}X \hskip 2mm | \hskip 2mm c=0,x=0,y=0\}=T^*_{\{x=y=0\}}X$. Thus $$CC(M_y)=T^*_{\{x=0\}}X+T^*_{\{x=y=0\}}X$$ \end{example} The multiplicities $m_{ij}$ appearing in the formula are the multiplicities of a generic point $x$ along each component $\Gamma_{ij}$ of $\Gamma_i$ and can be computed via Hilbert functions as in \cite{Le}. 
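In the simplest situation, when $\Gamma$ is a divisor on affine space cut out by a single polynomial $g$, the multiplicity along a component $V(p)$ is just the exponent of the irreducible factor $p$ in $g$. The following SymPy sketch (the polynomial is illustrative) records this toy case; the general case requires the Hilbert-function computation of the lemma below.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def divisor_components_with_multiplicities(g):
    """For a hypersurface divisor V(g) in affine space, each irreducible
    factor p of g defines a component V(p) whose multiplicity at a generic
    point is the exponent of p.  This is the simplest instance of
    m = e(Gamma, x) / e(Gamma^red, x): for g = p^m the scheme V(p^m) has
    multiplicity m at a generic point of V(p), while V(p) has 1."""
    _, factors = sp.factor_list(g)
    return {p: m for p, m in factors}

print(divisor_components_with_multiplicities(x**2 * (y + z)))
```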
\begin{lemma}\label{mult} Let $e(\Gamma,x)$ denote the multiplicity of the variety $\Gamma \subseteq T^*X$ defined by the ideal $I\subseteq R[a_1,\dots,a_n]$ at a point $x$. Then, the multiplicity $m$ of a generic point $x$ along $\Gamma$ is $$m=e(\Gamma,x)/e(\Gamma^{red},x),$$ where $\Gamma^{red}$ is the variety defined by $\mbox{\rm{rad}} \,(I)$. \end{lemma} \begin{proof} A reformulation of \cite[Prop. 3.11]{HIO} for the particular case of $x$ being a generic point gives us the desired result, i.e. $e(\Gamma,x)= e(\Gamma^{red},x)\cdot m$. \end{proof} \subsection{Algebraic ${\mathcal D}$-modules} Let $X=\mathbb{C}^n$ be the complex affine space with coordinate system $x_1,\dots,x_n$. Given the polynomial ring $R=\mathbb{C}[x_1,\dots,x_n]$ consider the associated ring of differential operators $A_n:=R\langle {\partial}_1,\dots,{\partial}_n\rangle$, i.e. the Weyl algebra. The theories of algebraic ${\mathcal D}$-modules and analytic ${\mathcal D}$-modules are very closely related. If one mimics the constructions given for the ring $D_n$, one can check that the results we have considered before, conveniently reformulated, remain true for $A_n$. In particular we may construct an algebraic characteristic cycle as a counterpart to the analytic characteristic cycle described before. Our aim is to explain how both cycles are related. \vskip 2mm Set $\mathbb{C}\{x\}:=\mathbb{C}\{x_1,\dots,x_n\}$ and $\mathbb{C}[x]:=\mathbb{C}[x_1,\dots,x_n]$. Let $M$ be a regular holonomic $A_n$-module. The $D_n$-module $M^{an}:=\mathbb{C}\{x\}\otimes_{\mathbb{C}[x]}M$ is also regular holonomic. For a good filtration $\{M_i\}_{i\geq 0}$ on $M$ the filtration $\{M_i^{an}:= \mathbb{C}\{x\}\otimes_{\mathbb{C}[x]}M_i\}_{i\geq 0}$ is also good due to the fact that $\mathbb{C}\{x\}$ is flat over ${\mathbb{C}[x]}$. Therefore $gr(M^{an})\simeq \mathbb{C}\{x\}\otimes_{\mathbb{C}[x]} gr(M)$ so the characteristic variety of $M^{an}$ is the extension of the characteristic variety of $M$, i.e. 
$C(M^{an})= C(M)^{an}$. However, we should notice that the components of the characteristic variety may differ depending on the ring we are considering. In particular we may have algebraically irreducible components that are analytically reducible. \vskip 2mm The regular holonomic $A_n$-modules we will consider in this work, i.e. the polynomial ring $R=\mathbb{C}[x]$, the localization $R_f$ for a polynomial $f\in R$, and the local cohomology modules $H_I^r(R)$, all have a good behavior with respect to flat base change. Roughly speaking, the formulas for the algebraic and analytic characteristic varieties of these modules are the same, but the components and multiplicities of the corresponding characteristic cycle may differ. \begin{remark} The results of this section can be stated in general for $X$ being any smooth algebraic variety over $\mathbb{C}$. It is worth pointing out that $M\rightarrow M^{an}$ gives an equivalence between the category of regular holonomic ${\mathcal D}_X$-modules and the category of regular holonomic ${\mathcal D}_X^{an}$-modules when $X$ is projective (see \cite[\S 3]{Gi86}). \end{remark} \section{Algorithmic approach to Brian\c{c}on-Maisonobe-Merle's Theorem} From now on we will assume that $R=\mathbb{C}[x_1,\dots,x_n]$ is the polynomial ring, so we will be working in the algebraic context. Let $M$ be a regular holonomic $A_n$-module with algebraic characteristic cycle $CC(M)=\sum m_i \hskip 2mm T_{X_i}^*X$ and let $f\in R$ be a polynomial. Our aim is to compute the characteristic cycle of the localization $M_f$ operating in the commutative graded ring $gr(A_n)=R[a_1,\dots,a_n]$. We are going to provide two algorithms that are an elaboration of Theorem \ref{propBMM}. The first one computes the part $\Gamma_i$ of the formula in Theorem \ref{propBMM} corresponding to each irreducible component $T_{X_i}^*X$ in the characteristic cycle of $M$. 
The second one computes the components and the corresponding multiplicities of the varieties $\Gamma_i$. \vskip 2mm Theorem \ref{propBMM} is a geometric reformulation of a result given by V.~Ginsburg \cite[Thm. 3.3]{Gi86}. Even though it is stated in the analytic context, we may find in Ginsburg's paper the algebraic counterpart of the same result, see \cite[Thm. 3.2]{Gi86}. We may interpret it as in Section 2.2 through flat base change. \vskip 2mm \begin{algorithm} \label{alg1}{ (Divisor defined by $f$ in $T^*_{f|_{Y}}$, the conormal relative to $f$)} \vskip 2mm {\rm \noindent {\sc Input:} Generators $g_1,...,g_d$ of an ideal $I\subset R$ defining the algebraic variety $Y=V(I)\subseteq X$ and a polynomial $f\in R$. \noindent {\sc Output:} Divisor defined by $f$ in the conormal $T^*_{f|_{Y}}$ relative to $f$. \vskip 2mm {\bf Compute the smooth part $Y^\circ$ of $Y$ where $f|_{Y}$ is a submersion:} \begin{itemize} \item [\textbf{(0a)}] Compute $\nabla f=(\frac{\partial f}{\partial x_1},...,\frac{\partial f}{\partial x_n})$ \item [\textbf{(0b)}] Compute $Y^{\circ} = Y\setminus V(I^\circ)$, where $I^\circ\subset R$ is the defining ideal of $ \{x \in Y \ | \ \nabla f(x)=0 \}$ and the singular locus of $Y$. \end{itemize} {\bf Compute the conormal relative to $f$} \begin{itemize} \item[\textbf{(1a)}] Compute $K = \ker \phi$, where the map $\phi: R^n\to R^{d+1}/I$ sends $$ s\mapsto (\nabla f, \nabla g_1, ..., \nabla g_d)\cdot s \in R^{d+1}/I. $$ \item[\textbf{(1b)}] Let $J\subset gr(A_n) = R[a_1,...,a_n]$ be the ideal generated by $\{ (a_1,...,a_n)\cdot b \ |\ b \in K \}.$ \item[\textbf{(1c)}] Compute $J_{sat}=J:(gr(A_n)I^\circ)^\infty$. (Note: $I(T^*_{f|_{Y}}) = \sqrt{J_{sat}}$.) \end{itemize} {\bf Compute the divisor defined by $f$ in $T^*_{f|_{Y}}$ } \begin{itemize} \item[\textbf{(2a)}] Compute $K_f = \ker \phi_f$, where the map $\phi_f: R^n\to R^{d+1}/(I+(f))$ sends $$ s\mapsto (\nabla f, \nabla g_1, ..., \nabla g_d)\cdot s \in R^{d+1}/(I+(f)). 
$$ \item[\textbf{(2b)}] Let $J_f \subset gr(A_n) = R[a_1,...,a_n]$ be the ideal generated by $\{ (a_1,...,a_n)\cdot b \ |\ b \in K_f \}$. \item[\textbf{(2c)}] $C = J_{sat} + (f) + J_f\subset gr(A_n)$. \end{itemize} \noindent {\sc Return:} The ideal $C$ that defines the divisor of $f$ in $T^*_{f|_{Y}}$. } \end{algorithm} \begin{proof}(Correctness of the algorithm) The steps (0a), (0b) follow from the definition of $f|_{Y}$ being a submersion. The relative conormal $T^*_{f|_{Y}}$ is the closure of $$W=\{(x,a)\in T^*X \ | \ x\in Y^\circ , \forall s\in K,\ a(s)=0 \}.$$ For every point $ x\in Y^\circ$, the tangent space $T_{x}Y^\circ$ is a specialization of $V(K)$, where $K$ is computed in step (1a). A defining ideal of $W$ is produced in (1b) and, finally, taking the closure amounts to the saturation in (1c). In order to restrict to $f=0$, it is not enough to compute $J_{sat} + (f)$. However, steps (2a) and (2b), which closely follow the ideas of (1a) and (1b), provide the necessary correction term in (2c). \vskip 2mm Recall that the analytic extension of the ideal $C$ we obtain with the algorithm is what we would obtain applying Theorem \ref{propBMM} in order to compute the analytic characteristic cycle of the localization module (see Section 2.2). In our case, the ideal $C$ will give us the components of the algebraic characteristic variety. \end{proof} \begin{algorithm} \label{alg2} { (Components and multiplicities of the characteristic cycle)} \vskip 2mm {\rm \noindent {\sc Input:} The characteristic cycle $CC(M)=\sum m_i \hskip 2mm T_{X_i}^*X$ of a regular holonomic $A_n$-module $M$ and a polynomial $f\in R$. \noindent {\sc Output:} The characteristic cycle $CC(M_f)=\sum_{f(X_i)\neq 0} m_i(\Gamma_i+T_{X_i}^*X)$. \vskip 2mm For every component $Y=X_i$ we have to compute the ideal $C_i$ corresponding to the divisor defined by $f$ in $T^*_{f|_{Y}}$ using Algorithm \ref{alg1}. 
Then: \vskip 2mm {\bf Compute the components of $C_i$} \begin{itemize} \item[\textbf{(1a)}] Compute the associated primes $C_{ij}$ of $C_i$. \item[\textbf{(1b)}] Compute $I_{ij}= C_{ij} \cap R$ (if you need to know the defining ideal of $X_{ij}= \pi(\Gamma_{ij})$ in Theorem \ref{propBMM}). \end{itemize} {\bf Compute the multiplicities} \begin{itemize} \item[\textbf{(2)}] Compute the multiplicity $m_{ij}$ in Theorem \ref{propBMM} as the multiplicity of a generic point $x$ along each component $C_{ij}$ of $C_i$ as in Lemma \ref{mult} via Hilbert functions. \end{itemize} \noindent {\sc Return:} The components of $CC(M_f)$ and their corresponding multiplicities.} \end{algorithm} \begin{proof} The correctness of the algorithm is straightforward and follows from Lemma \ref{mult}. \end{proof} The algorithm we propose requires the computation of the associated primes of an ideal; primary decomposition is also needed in the implementation if we want to avoid choosing generic points when computing the multiplicities (see Lemma \ref{mult}). Therefore, we have to restrict ourselves to computations in the polynomial ring $R={\mathbb{Q}}[x_1,\dots,x_n]$ as we implemented the algorithm in the computer system {\tt Macaulay~2}. What we are going to construct is the characteristic cycle of a regular holonomic $A_n$-module where now $A_n$ stands for the Weyl algebra with rational coefficients. By flat base change we can extend the ideal $C$ we obtain with Algorithm \ref{alg1} to any ring of polynomials over a field of characteristic zero or to the convergent series ring over $\mathbb{C}$. As we stated in Section 2.2, the primary components may differ depending on the ring we are considering. \vskip 2mm In order to construct the algebraic characteristic cycle over ${\mathbb{Q}}$ we would need to find the absolute primary decomposition of the ideal $C$ we obtain with Algorithm \ref{alg1}. 
Even though the {\tt Macaulay 2} command for primary decomposition is not implemented over the algebraic closure of ${\mathbb{Q}}$, it suffices for the examples we treat in the next section. \vskip 2mm Another fine point in the implementation is the treatment of embedded components of the ideal $C$ output by Algorithm \ref{alg1}. The ideal $C$ contains the complete information about maximal components of the divisor defined by $f$ on $T^*_{f|_{Y}}$, in particular, we can compute their multiplicities. However, the primary ideals in the decomposition of $C$ that correspond to an embedded component may not lead to the correct multiplicity due to the global nature of our computations. In order to obtain this multiplicity we restrict the divisor to the embedded component, which amounts to rerunning Algorithm \ref{alg1} `modulo' its defining ideal. The top-level routine in our implementation processes components recursively, ``descending'' to, i.e., localizing at, the embedded components when needed. \section{Characteristic cycle and \v{C}ech complex} Let $I=(f_1,\dots,f_s)\subseteq R=\mathbb{C}[x_1,\dots,x_n]$ be an ideal and $M$ be a holonomic $A_n$-module. In this section we are going to compute the characteristic cycle of the local cohomology modules $H^r_I(M)$ using the \v{C}ech complex $$ {\check{C}}^{\bullet}(f_1,\dots,f_s;M) : \hskip 5mm 0 \longrightarrow M \stackrel{d_0}\longrightarrow \bigoplus_{i=1}^s M_{f_i} \stackrel{d_1}\longrightarrow \cdots \longrightarrow M_{f_1\cdots f_s}\longrightarrow 0. $$ For simplicity we will assume from now on that $M$ is indecomposable. Otherwise, if $M=M_1 \oplus M_2$, then $$ {\check{C}}^{\bullet}(f_1,\dots,f_s;M) = {\check{C}}^{\bullet}(f_1,\dots,f_s;M_1) \oplus {\check{C}}^{\bullet}(f_1,\dots,f_s;M_2)$$ and $ H^r_I(M)= H^r_I(M_1)\oplus H^r_I(M_2)$ for all $r$, so we can compute the characteristic cycle of both local cohomology modules separately. 
Sometimes we will denote the localization modules appearing in the \v{C}ech complex by $M_{f_{\alpha}}$, where $f_{\alpha}=\prod_{\alpha_i=1} f_i$ for all $\alpha\in \{0,1\}^s$. We will also denote $|\alpha|= \alpha_1 +\cdots+\alpha_s$, and $\varepsilon_1,\dots, \varepsilon_s$ will be the natural basis of ${\mathbb{Z}}^s$. \vskip 2mm From the characteristic cycles of the localization modules in the complex we develop an algorithm to extract the precise information needed to describe the characteristic cycles of the local cohomology modules. The algorithm comes naturally from the structure of the \v{C}ech complex and the additivity of the characteristic cycle with respect to short exact sequences. However, the following assumption will be required: \vskip 2mm $(\dagger)$ For all $\alpha\in \{0,1\}^s$ such that $\alpha_i=0$, the localization map $M_{f_{\alpha}} {\longrightarrow} M_{f_{\alpha + \varepsilon_i}}$ is either a natural inclusion, i.e., $M_{f_{\alpha}}$ is saturated with respect to $f_{\varepsilon_i}$, or $M_{f_{\alpha + \varepsilon_i}}=0$. \vskip 2mm For unexplained terminology on the theory of complexes we refer to \cite{Ro}. To shed some light on the process we first present the case of $I$ being generated by one and two elements. \subsection{The case $s=1$} We have the short exact sequence $$ 0 \longrightarrow H^0_I(M) \longrightarrow M \stackrel{d_0}\longrightarrow M_{f_1} \longrightarrow H^1_I(M){\longrightarrow} 0 $$ Under the assumption $(\dagger)$ either $CC(H^1_I(M))= CC(M_{f_1})-CC(M)$ if $M$ is $f_1$-saturated or $CC(H^0_I(M))= CC(M)$ if $M_{f_1}= 0$. The algorithm we propose to compute the characteristic cycle works in both cases and boils down to the following step: \vskip 2mm {\it Prune} the characteristic cycles of $M_{f_1}$ and $M$, where `prune' means remove the components (counting multiplicities) that both modules have in common. \vskip 2mm The characteristic cycle of $H^0_I(M)$ (resp. $H^1_I(M)$) is the formal sum of components of $M$ (resp. 
$M_{f_1}$) that survived this process. \subsection{ The case $s=2$} The vertical sequences of the following diagram are exact $${\xymatrix { {\check{C}}^{\bullet}(f_1;M): & 0 \ar[r] & M\ar[r] & M_{f_1}\ar[r] & 0 & \\ {\check{C}}^{\bullet}(f_1,f_2;M): & 0 \ar[r] & M \ar[u] \ar[r] & M_{f_1} \oplus M_{f_2} \ar[u] \ar[r] & M_{f_1f_2} \ar[r]& 0 \\ {\check{C}}^{\bullet}(f_1;M_{f_2})[-1]: & & 0 \ar[r] & M_{f_2} \ar[u] \ar[r]&M_{f_1f_2} \ar[u] \ar[r]& 0 }} $$ so we have an exact sequence of \v{C}ech complexes $$ (i) \hskip 5mm 0 \longrightarrow {\check{C}}^{\bullet}(f_1;M_{f_2})[-1] \longrightarrow {\check{C}}^{\bullet}(f_1,f_2;M) \longrightarrow {\check{C}}^{\bullet}(f_1;M){\longrightarrow} 0$$ where $[-1]$ stands for the result of shifting the complex one place to the right. Analogously, i.e. switching the roles played by $f_1$ and $f_2$, we also have $$ (ii) \hskip 5mm 0 \longrightarrow {\check{C}}^{\bullet}(f_2;M_{f_1})[-1] \longrightarrow {\check{C}}^{\bullet}(f_1,f_2;M) \longrightarrow {\check{C}}^{\bullet}(f_2;M){\longrightarrow} 0$$ \vskip 2mm Notice that the vanishing of any localization module reduces the \v{C}ech complex to the case $s=1$. So we are going to consider the only case remaining under the assumption $(\dagger)$, i.e. $M$ is saturated with respect to $f_1f_2$. Consider the long exact sequence of cohomology modules associated to $(i)$ $$ 0 {\longrightarrow} H^{-1}_{(f_1)}(M_{f_2}) \longrightarrow H^0_I(M) \longrightarrow H^0_{(f_1)}(M)\stackrel{\delta^0}{\longrightarrow} H^0_{(f_1)}(M_{f_2}) \longrightarrow H^1_I(M) \longrightarrow \cdots $$ where $\delta^j$ are the connecting maps. 
Non-vanishing may occur only in the sequence $$ 0 \longrightarrow H^1_I(M) \longrightarrow H^1_{(f_1)}(M)\stackrel{\delta^1}{\longrightarrow} H^1_{(f_1)}(M_{f_2}) \longrightarrow H^2_I(M) \longrightarrow 0 $$ which breaks down into two short exact sequences with $C_1 = \mathrm {Im\, } \delta^1$: \vskip 2mm \hskip 5mm $0\longrightarrow H^1_I(M) \longrightarrow H^1_{(f_1)}(M){\longrightarrow} C_1 \longrightarrow 0 $ \hskip 2mm and \hskip 2mm $ 0\longrightarrow C_1 {\longrightarrow} H^1_{(f_1)}(M_{f_2}) \longrightarrow H^2_I(M) \longrightarrow 0$ \vskip 2mm In order to get $CC(H^1_I(M))$ and $CC(H^2_I(M))$ we only have to compute the characteristic cycle of $C_1$ since we already know that $CC(H^1_{(f_1)}(M))=CC(M_{f_1})- CC(M)$ and $CC(H^1_{(f_1)}(M_{f_2}))=CC(M_{f_1f_2})- CC(M_{f_2})$. \vskip 2mm {\bf Claim:} $CC(C_1)= \sum m_i \hskip 1mm T_{X_i}^{\ast} X$ where the sum is taken over the components (counting multiplicities) that $CC(H^1_{(f_1)}(M))$ and $CC(H^1_{(f_1)}(M_{f_2}))$ have in common. \begin{proof} Assume that there is a component $T_{X_i}^{\ast} X$ in $CC(H^1_{(f_1)}(M))$ and $CC(H^1_{(f_1)}(M_{f_2}))$ not appearing in $CC(C_1)$, i.e. $T_{X_i}^{\ast} X$ is a component of $CC(H^1_I(M))$ and $CC(H^2_I(M))$. This component shows up in the computation of the characteristic cycle of the cohomology of the subcomplex ${\check{C}}^{\bullet}(f_2;M_{f_1})[-1]$, since it is in fact a component of $ CC(M_{f_1})$ and $CC(M_{f_1f_2})$, but is not a component of $CC(M)$ and $CC(M_{f_2})$. 
Consider the long exact sequence of cohomology modules associated to the complex $(ii)$: $$ \cdots {\longrightarrow} H^0_I(M) \longrightarrow H^0_{(f_2)}(M)\stackrel{\delta^0}{\longrightarrow} H^0_{(f_2)}(M_{f_1}) \longrightarrow H^1_I(M) \longrightarrow H^1_{(f_2)}(M) \longrightarrow \cdots $$ It follows that the component $T_{X_i}^{\ast} X$ should belong to $CC(H^0_{(f_2)}(M_{f_1}))$ and $CC(H^1_{(f_2)}(M_{f_1}))$ in order to fulfill the hypothesis of being a component of $CC(H^1_I(M))$ and $CC(H^2_I(M))$. Thus we get a contradiction since $H^0_{(f_2)}(M_{f_1})=0$ as $M_{f_1}$ has no $f_2$-torsion. \end{proof} To summarize the computation of the characteristic cycle of $H^r_I(M)$ using the sequence of \v{C}ech complexes $(i)$ we propose the following algorithm: \vskip 2mm $(1)$ {\it Prune} the characteristic cycles of $M_{f_1}$ and $M$. \hskip .6cm {\it Prune} the characteristic cycles of $M_{f_1f_2}$ and $M_{f_2}$. \vskip 2mm $(2)$ {\it Prune} the characteristic cycles of $M_{f_2}$ and $M$. \hskip .6cm {\it Prune} the characteristic cycles of $M_{f_1f_2}$ and $M_{f_1}$. \vskip 2mm `Prune' means remove the components (counting multiplicities) that both characteristic cycles still have in common at that step of the algorithm. The characteristic cycle of $H^0_I(M)$ (resp. $H^1_I(M)$, $H^2_I(M)$) is the formal sum of components of $M$ (resp. $M_{f_1}$ and $M_{f_2}$, $M_{f_1f_2}$), i.e., components of the characteristic cycles of ${\check{C}}^{0}(f_1,f_2;M)$ (resp. ${\check{C}}^{1}(f_1,f_2;M)$, ${\check{C}}^{2}(f_1,f_2;M)$) that survived the process. \vskip 2mm \begin{remark} Naturally, one may permute the steps (1) and (2) in the above algorithm. 
\end{remark} \subsection{The general case} Let $I=(f_1,\dots,f_s)\subseteq R$ be an ideal and $M$ be a holonomic $A_n$-module satisfying \vskip 2mm $(\dagger)$ For all $\alpha\in \{0,1\}^s$ such that $\alpha_i=0$, the localization map $M_{f_{\alpha}} {\longrightarrow} M_{f_{\alpha + \varepsilon_i}}$ is either a natural inclusion, i.e. $M_{f_{\alpha}}$ is saturated with respect to $f_{\varepsilon_i}$, or $M_{f_{\alpha + \varepsilon_i}}=0$. \vskip 2mm Our aim is to proceed inductively in order to compute the characteristic cycle of the local cohomology modules $H_I^r(M)$. For this purpose it is useful to visualize the \v{C}ech complex ${\check{C}}^{\bullet}(f_1,\dots,f_s;M)$ as an $s$-hypercube where the edges are the localization maps that, with the corresponding sign, describe the differentials of the complex. The exact sequence of complexes $$ 0 \longrightarrow {\check{C}}^{\bullet}(f_1,\dots,f_{s-1};M_{f_{s}})[-1] \longrightarrow {\check{C}}^{\bullet}(f_1,\dots,f_{s};M) \longrightarrow {\check{C}}^{\bullet}(f_1,\dots,f_{s-1};M){\longrightarrow} 0$$ can be easily identified in the $s$-hypercube. 
For the case $s=3$ we visualize the \v{C}ech complexes ${\check{C}}^{\bullet}(f_1,f_2,f_3;M)$, ${\check{C}}^{\bullet}(f_1,f_{2};M)$ and ${\check{C}}^{\bullet}(f_1,f_2;M_{f_{3}})[-1]$ as follows {\tiny $${\xymatrix { &M_{f_1f_2f_3} & \\ M_{f_1f_2} \ar[ur] & M_{f_1f_3} \ar[u] & M_{f_2f_3} \ar[ul] \\ M_{f_1} \ar[ur]|\hole \ar[u]&M_{f_2} \ar[ul] \ar[ur]& M_{f_3} \ar[ul]|\hole \ar[u] \\& M \ar[ul] \ar[u] \ar[ur]& }} \hskip 1cm {\xymatrix { &M_{f_1f_2f_3} & \\ M_{f_1f_2} & M_{f_1f_3} \ar[u] & M_{f_2f_3} \ar[ul] \\ M_{f_1} \ar[u]& M_{f_2} \ar[ul] & M_{f_3} \ar[ul] \ar[u] \\& M \ar[ul] \ar[u] & }} $$} Chasing the diagrams, one may check that the localization map with respect to $f_s$, i.e., the edges $M_{f_\alpha}{\longrightarrow} M_{f_{\alpha +\varepsilon_s}}$ in the $s$-hypercube, induces the connecting maps $\delta^j$ in the long exact sequence of cohomology modules $$ 0 {\longrightarrow} H^{-1}_{(f_1,...,f_{s-1})}(M_{f_s}) \longrightarrow H^0_I(M) \longrightarrow H^0_{(f_1,...,f_{s-1})}(M)\stackrel{\delta^0}{\longrightarrow} H^0_{(f_1,...,f_{s-1})}(M_{f_s}) \longrightarrow H^1_I(M) \longrightarrow \cdots$$ We are not going to give a precise description of the connecting maps, since we are only interested in the data given by the characteristic cycle. The formula we obtain in Theorem \ref{T} for the characteristic cycle of the local cohomology modules $H_I^r(M)$ is given just in terms of the components of the characteristic cycle of the localizations $M_{f_{\alpha}}$. The precise information we need to extract is given by the following algorithmic procedure. \begin{algorithm} \label{alg3} { (Characteristic cycle and \v{C}ech complex )} \vskip 2mm {\rm \noindent {\sc Input:} Characteristic cycles $CC(M_{f_{\alpha}})= \sum m_{\alpha,i} \hskip 1mm T_{X_i}^{\ast} X$ for $\alpha \in \{0,1\}^s$. \noindent {\sc Output:} (Pruned) characteristic subcycles $\overline{CC} (M_{f_{\alpha}}) \subseteq CC(M_{f_{\alpha}})$ for $\alpha \in \{0,1\}^s$. 
\vskip 2mm {\bf Prune the extra components} \vskip 2mm For $j$ from $1$ to $s$, incrementing by $1$ \vskip 2mm \begin{itemize} \item[\textbf{(j)}] ${\it Prune}$ the localizations $M_{f_{\alpha}} $ and $M_{f_{\alpha + \varepsilon_j}}$ for all $\alpha \in \{0,1\}^s$ such that $\alpha_j=0$, where `prune' means remove the components (counting multiplicities) that both modules still have in common after step $(j-1)$. \end{itemize} \vskip 2mm \noindent {\sc Return:} The components and the corresponding multiplicities of a characteristic subcycle of $CC(M_{f_{\alpha}})$ for $\alpha \in \{0,1\}^s$. } \end{algorithm} For the case $s=3$ we visualize the steps of the algorithm as follows \hskip -1cm{\tiny $${\xymatrix { &M_{f_1f_2f_3} & \\ M_{f_1f_2} \ar@{.>}[ur] & M_{f_1f_3} \ar@{.>}[u] & M_{f_2f_3} \ar[ul] \\ M_{f_1} \ar@{.>}[ur]|\hole \ar@{.>}[u]&M_{f_2} \ar[ul] \ar@{.>}[ur]& M_{f_3} \ar[ul]|\hole \ar@{.>}[u] \\& M \ar[ul] \ar@{.>}[u] \ar@{.>}[ur]& }} \hskip .51cm {\xymatrix { &M_{f_1f_2f_3} & \\ M_{f_1f_2} \ar@{.>}[ur] & M_{f_1f_3} \ar[u] & M_{f_2f_3} \ar@{.>}[ul] \\ M_{f_1} \ar@{.>}[ur]|\hole \ar[u]&M_{f_2} \ar@{.>}[ul] \ar@{.>}[ur]& M_{f_3} \ar@{.>}[ul]|\hole \ar[u] \\& M \ar@{.>}[ul] \ar[u] \ar@{.>}[ur]& }} \hskip .51cm {\xymatrix { &M_{f_1f_2f_3} & \\ M_{f_1f_2} \ar[ur] & M_{f_1f_3} \ar@{.>}[u] & M_{f_2f_3} \ar@{.>}[ul] \\ M_{f_1} \ar[ur]|\hole \ar@{.>}[u]&M_{f_2} \ar@{.>}[ul] \ar[ur]& M_{f_3} \ar@{.>}[ul]|\hole \ar@{.>}[u] \\& M \ar@{.>}[ul] \ar@{.>}[u] \ar[ur]& }} $$} The solid arrows indicate the modules we must prune at each step. \begin{remark} {\rm As in the case $s=2$, the order we propose in the algorithm depends on the \v{C}ech subcomplexes we consider when computing the characteristic cycle of the cohomology of the \v{C}ech complex. We can obtain equivalent algorithms permuting the generators of the ideal $I$. 
It is also worth mentioning that the algorithm can be used in the algebraic context over any field of characteristic zero and in the analytic context.} \end{remark} \begin{theorem}\label{T} Let $I=(f_1,\dots,f_s)\subseteq R$ be an ideal and $M$ be an indecomposable holonomic $A_n$-module satisfying $(\dagger)$. Then $$ CC(H_I^r(M))= \sum_{|\alpha|=r} \overline{CC}(M_{f_\alpha}), $$ where the pruned characteristic subcycles are obtained with Algorithm \ref{alg3}. \end{theorem} \begin{proof} We proceed by induction on the number of generators $s$ of the ideal. The cases $s=1,2$ have already been done. For $s>2$ we have an exact sequence of complexes $$ 0 \longrightarrow {\check{C}}^{\bullet}(f_1,\dots,f_{s-1};M_{f_{s}})[-1] \longrightarrow {\check{C}}^{\bullet}(f_1,\dots,f_{s};M) \longrightarrow {\check{C}}^{\bullet}(f_1,\dots,f_{s-1};M){\longrightarrow} 0$$ Splitting the associated long exact sequence of cohomology modules $$ 0 {\longrightarrow} H^{-1}_{(f_1,...,f_{s-1})}(M_{f_s}) \longrightarrow H^0_I(M) \longrightarrow H^0_{(f_1,...,f_{s-1})}(M)\stackrel{\delta^0}{\longrightarrow} H^0_{(f_1,...,f_{s-1})}(M_{f_s}) \longrightarrow H^1_I(M) \longrightarrow \cdots$$ into short exact sequences we obtain \vskip 2mm \hskip 1cm $0{\longrightarrow} A_r {\longrightarrow} H^r_{I}(M){\longrightarrow} B_r {\longrightarrow} 0$ \vskip 2mm \hskip 1cm $0{\longrightarrow} B_r {\longrightarrow} H^r_{(f_1,\dots,f_{s-1})}(M){\longrightarrow} C_{r} {\longrightarrow} 0$ \vskip 2mm \hskip 1cm $0{\longrightarrow} C_{r} {\longrightarrow} H^{r}_{(f_1,\dots,f_{s-1})}(M_{f_s}){\longrightarrow} A_{r+1} {\longrightarrow} 0$ \vskip 2mm The characteristic cycle of $H^r_{(f_1,\dots,f_{s-1})}(M)$ (resp. $H^{r}_{(f_1,\dots,f_{s-1})}(M_{f_s})$) is the formal sum of components of $M_{f_{\alpha}}$ satisfying $\alpha_s=0$ and $|\alpha|=r$ (resp. $\alpha_s=1$ and $|\alpha|=r+1$) that survived to step $(s-1)$ of the algorithm.
Thus, for every $r$, in order to get $CC(H^r_{I}(M))$ we only have to compute $CC(C_r)$ and use additivity of the characteristic cycle with respect to short exact sequences. \vskip 2mm {\bf Claim:} $CC(C_r)= \sum m_i \hskip 1mm T_{X_i}^{\ast} X$ where the sum is taken over the components (counting multiplicities) that $CC(H^r_{(f_1,\dots,f_{s-1})}(M))$ and $CC(H^{r}_{(f_1,\dots,f_{s-1})}(M_{f_s}))$ have in common. \vskip 2mm Assume that there is a component $T_{X_i}^{\ast} X$ in $CC(H^r_{(f_1,\dots,f_{s-1})}(M))$ and $CC(H^{r}_{(f_1,\dots,f_{s-1})}(M_{f_s}))$ not appearing in $CC(C_r)$, i.e. it has not been pruned in step $(s)$. In particular, this component belongs to the characteristic cycle of some localization modules $M_{f_{\alpha}}$ and $M_{f_{\alpha+\varepsilon_s}}$ satisfying $\alpha_s=0$ and $|\alpha|=r$. Then, this component is not pruned in the computation of the characteristic cycle of the cohomology of a convenient proper \v{C}ech subcomplex of $ {\check{C}}^{\bullet}(f_1,\dots,f_s;M)$ containing $M_{f_{\alpha}}$ and $M_{f_{\alpha+\varepsilon_s}}$. Thus we get a contradiction. \end{proof} If $M$ is not indecomposable we only have to apply Theorem \ref{T} to each component. \begin{example} Consider $R=\mathbb{C}[x]$ and the holonomic $A_1$-module $M= R \oplus H_{(x)}^1(R)$. We have: \vskip 2mm $CC(M)= T^*_{X}X + T^*_{\{x=0\}}X$ $CC(M_x)= T^*_{X}X + T^*_{\{x=0\}}X$. \vskip 2mm \noindent Applying the pruning algorithm to each component, both of which satisfy $(\dagger)$, we get $$ CC(H^0_{(x)}(M))= CC(H^1_{(x)}(M))= T^*_{\{x=0\}}X. $$ One may be tempted to apply the pruning algorithm to $M$ itself; however, this is misleading, as it would wrongly suggest that the local cohomology modules $H^r_{(x)}(M)$ vanish. Notice that $M$ is not saturated with respect to $f=x$. \end{example} The question whether Theorem \ref{T} still holds for indecomposable $A_n$-modules not satisfying $(\dagger)$ is open. For some examples we may give an affirmative answer.
\begin{example} Set $R=\mathbb{C}[x,y]$. The holonomic $A_2$-module $M=H_{(xy)}^1(R)$ is not saturated with respect to $f=y$. We have: \vskip 2mm $CC(M)= T^*_{\{x=0\}}X +T^*_{\{y=0\}}X + T^*_{\{x=y=0\}}X$ $CC(M_y)= T^*_{\{x=0\}}X + T^*_{\{x=y=0\}}X$. \vskip 2mm \noindent Pruning the components they have in common we get $CC(H^0_{(y)}(M))= T^*_{\{y=0\}}X$. It agrees with the fact that $H^0_{(y)}(M)\cong H_{(y)}^1(R)$. \end{example} \begin{remark} Let $I=(f_1,\dots,f_s)\subseteq R$ be an ideal and $M$ be an indecomposable holonomic $A_n$-module saturated with respect to $f_1\cdots f_s$. The first step of Algorithm \ref{alg3} also comes from the fact that the \v{C}ech complex ${\check{C}}^{\bullet}(f_1,\dots,f_s;M)$ is quasi-isomorphic to $$ \hskip 5mm 0 \longrightarrow 0 \stackrel{d_0}\longrightarrow M_{f_1}/M \stackrel{d_1}\longrightarrow \bigoplus_{i=2}^{s} M_{f_1f_i}/ M_{f_i} \stackrel{d_2}\longrightarrow \cdots \longrightarrow M_{f_1\cdots f_s}/ M_{f_2\cdots f_{s}}\longrightarrow 0. $$ It would be interesting to continue the pruning process for the whole complex (not just for the components of the characteristic cycle) in order to find a complete description of a minimal complex quasi-isomorphic to the \v{C}ech complex. \vskip 2mm A canonical \v{C}ech complex is introduced in \cite{MS05} for the case of $I$ being a monomial ideal and $M=R$. This complex is associated to a minimal free resolution of $R/I$ in the same way as the usual \v{C}ech complex is associated to the Taylor resolution of $R/I$. The difference with the pruned \v{C}ech complex we propose is that, roughly speaking, they only prune when the localization map $R_{f_\alpha}{\longrightarrow} R_{f_{\alpha+\varepsilon_i}}$ is the identity. \end{remark} \section{Examples} In this section we want to present some examples where we will apply our algorithm to compute the characteristic cycle of local cohomology modules. 
First we have to study localizations $R_f$ of the polynomial ring $R={\mathbb{Q}}[x_1,\dots,x_n]$ at a polynomial $f\in R$. To compute its characteristic cycle directly one needs to: \vskip 2mm \hskip 1cm $\cdot$ Construct a presentation of the $A_n$-module $R_f$, \hskip 1cm $\cdot$ Compute the characteristic ideal $J(R_f)$, \hskip 1cm $\cdot$ Compute the primary decomposition of $J(R_f)$ and its corresponding multiplicities. \vskip 2mm The first two steps require expensive computations in the Weyl algebra $A_n$ since we have to compute the Bernstein-Sato polynomial of $f$. For some short examples we can do the job just using the {\tt Macaulay2} commands {\tt Dlocalize} and {\tt charIdeal}. \vskip 2mm Following the approach of this work we have developed some scripts written in {\tt Macaulay 2} that compute and print out the list of components and the corresponding multiplicities showing up in the characteristic cycles of the localizations $R_f$ in the examples we present in this section. In fact we develop two different strategies that we may use depending on the examples we want to treat. \vskip 2mm {\it $\cdot$ Single localization:} \hskip 2mm Since the characteristic cycle of $R$ is $CC(R)= T_X^*X$, the characteristic cycle of $R_f$ is $CC(R_f)=T_X^*X+\Gamma$, where $\Gamma$ is computed according to Theorem \ref{propBMM} so we may compute it in one step. Notice that the defining ideal of $\Gamma$ may be quite large so computing its primary decomposition can be expensive. \vskip 2mm {\it $\cdot$ Iterative localization:} \hskip 2mm We can apply Theorem \ref{propBMM} in an iterative way on the components of the polynomial $f$. This strategy is useful to treat large examples, since, usually, it leads to computing primary decompositions of ideals of lower degrees compared to the former strategy. \vskip 2mm For the examples we present in this work both strategies can be applied. 
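Before turning to the examples, the pruning procedure of Algorithm \ref{alg3} can be sketched in a few lines of Python (a language-agnostic illustration, not our {\tt Macaulay 2} scripts); components are modeled as multisets via {\tt Counter}, and the toy input below is ours:

```python
from collections import Counter
from itertools import product

def prune(cc):
    """Algorithm 3: for j = 1..s, remove the components (with multiplicity)
    that M_{f_alpha} and M_{f_{alpha+e_j}} still have in common."""
    cc = {alpha: Counter(c) for alpha, c in cc.items()}
    s = len(next(iter(cc)))
    for j in range(s):
        for alpha in product((0, 1), repeat=s):
            if alpha[j] == 0:
                beta = alpha[:j] + (1,) + alpha[j + 1:]
                common = cc[alpha] & cc[beta]  # multiset intersection
                cc[alpha] -= common
                cc[beta] -= common
    return cc

def cc_local_cohomology(cc):
    """CC(H^r_I(M)) = sum over |alpha| = r of the pruned CC(M_{f_alpha})."""
    pruned = prune(cc)
    out = {}
    for alpha, c in pruned.items():
        out.setdefault(sum(alpha), Counter()).update(c)
    return out

# Toy input: R = C[x,y], I = (x,y), M = R; components named by their support.
cc = {
    (0, 0): Counter({'X': 1}),
    (1, 0): Counter({'X': 1, 'x=0': 1}),
    (0, 1): Counter({'X': 1, 'y=0': 1}),
    (1, 1): Counter({'X': 1, 'x=0': 1, 'y=0': 1, 'origin': 1}),
}
```

On this toy input the only surviving component is the conormal to the origin, in cohomological degree $2$, as expected for $H^2_{(x,y)}(R)$.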
\subsection{Local cohomology modules} \label{subsecLocalCohomology} Consider the ideal $I\subset R={\mathbb{Q}}[x_1,...,x_6]$ generated by the minors of the matrix $$ \left( \begin{array}{lll} x_1 & x_2 & x_3 \\ x_4 & x_5 & x_6 \end{array}\right).$$ It is a nontrivial problem to show that the local cohomology module $H^3_I(R)$ is nonzero (see \cite[Remark 3.13]{HL90}, \cite{KL02}). For example, {\tt Macaulay 2} runs out of memory before computing this module with the command {\tt localCohom}. U.~Walther \cite[Example 6.1]{Wa99} gives a complete description of this module using a tailor-made implementation of his algorithm which is based on the construction of the \v{C}ech complex. The difference with the implementation of the {\tt Macaulay 2} command is that he uses iterative localization to reduce the complexity in the computation of Bernstein-Sato polynomials. \vskip 2mm Our method makes it possible to prove algorithmically that $H^3_I(R)\neq 0$ from the computation of the characteristic cycles of the localization modules in the \v{C}ech complex which for this particular example looks like $$(\star) \hskip 5mm 0 \rightarrow R \rightarrow R_{f_1}\oplus R_{f_2}\oplus R_{f_3} \rightarrow R_{f_1f_2}\oplus R_{f_1f_3}\oplus R_{f_2f_3} \rightarrow R_{f_1f_2f_3} \rightarrow 0,$$ where $f_1= x_1x_5-x_2x_4, \hskip 2mm f_2= x_1x_6-x_3x_4 \hskip 2mm {\rm and} \hskip 2mm f_3= x_2x_6-x_3x_5$. \vskip 2mm \begin{remark} By flat base change we can also deduce the non-vanishing of the local cohomology module $H^3_I(R)$ where $R=k[x_1,...,x_6]$ is the polynomial ring over any field $k$ of characteristic zero. \end{remark} The list of components and their corresponding multiplicities showing up in the characteristic cycles of the chains in the \v{C}ech complex $(\star)$ and different from the whole space $X$ that we get with our script contains 14 elements. 
A sample entry is as follows: \begin{Macaulay2} \small \begin{verbatim} Component = V(ideal (x x - x x , x x - x x , x x - x x )) 3 5 2 6 3 4 1 6 2 4 1 5 entries-> HashTable{{0, 1, 2} => 2} {0, 1} => 1 {0, 2} => 1 {0} => 0 {1, 2} => 1 {1} => 0 {2} => 0 \end{verbatim} \end{Macaulay2} \noindent Namely, the component corresponding to the ideal $I$ is present with multiplicity one in $R_{f_1f_2}$, $R_{f_2f_3}$, $R_{f_1f_3}$ and with multiplicity two in $R_{f_1f_2f_3}$. The following is the complete list of 14 components: \vskip 2mm $\begin{array}{ccc} \hskip -1.3cm \hskip -1.8cm A_1 = V(f_1),& \hskip -1.8cm A_2 = V(f_2),& \hskip -1.8cm A_3 = V(f_3),\\ \hskip -1.3cm \hskip -1.1cm B_1 = V(x_3,x_6),& \hskip -1.1cm B_2 = V(x_2,x_5),&\hskip -1.1cm B_3 = V(x_1,x_4),\\ \hskip -1.3cm \hskip -0.6cm C_1 = V(x_3,x_6,f_1),& \hskip -0.6cm C_2 = V(x_2,x_5,f_2),& \hskip -0.6cm C_3 = V(x_1,x_4,f_3),\\ \hskip -1.3cm D_1 = V(x_1,x_2,x_4,x_5),& D_2 = V(x_1,x_3,x_4,x_6),& D_3 = V(x_2,x_3,x_5,x_6),\\ \hskip -1.3cm \hskip 1cm E = V(x_1,x_2,x_3,x_4,x_5,x_6),& \hskip -2cm F = V(I).& \end{array}$ \vskip 2mm \noindent Piecing the results of our computation together we can draw the $3$-hypercube in Figure $1$. 
\vskip 2mm \begin{figure}[h]\label{cocellular2} {\small $${\xymatrix { &{\begin{array}{cccc} X & B_1&C_1&D_1 \\ A_1&B_2&C_2&D_2\\ A_2&B_3&C_3&D_3 \\ A_3&\textbf{F}[2]&\textbf{E}& \end{array} } & \\ {\begin{array}{cccc} X & A_1 & A_2 &B_3 \\ C_3&D_1 &D_2 &\textbf{F} \end{array}} \ar[ur] & {\begin{array}{cccc} X & A_1 & A_3 &B_2 \\ C_2&D_1 &D_3 &\textbf{F} \end{array}} \ar[u] & {\begin{array}{cccc} X & A_2 & A_3 &B_1 \\ C_1&D_2 &D_3 &\textbf{F} \end{array}} \ar[ul] \\ X,A_1,D_1 \ar[ur]|\hole \ar[u]&X,A_2,D_2 \ar[ul] \ar[ur]& X,A_3,D_3 \ar[ul]|\hole \ar[u] \\& X \ar[ul] \ar[u] \ar[ur]& }}$$} \caption{Components of characteristic cycles for the \v{C}ech complex $(\star)$ (multiplicity~$>1$ is specified in square brackets).} \end{figure} To compute the characteristic cycle of the cohomology modules we have to apply Theorem \ref{T}, which has been implemented in the routine {\tt PruneCechComplexCC}. According to the output \begin{Macaulay2} \small \begin{verbatim} {} => {} {0} => {} {1} => {} {2} => {} {0, 1} => {ideal (x x - x x , x x - x x , x x - x x ) => 1} 3 5 2 6 3 4 1 6 2 4 1 5 {0, 2} => {} {1, 2} => {} {0, 1, 2} => {ideal (x , x , x , x , x , x ) => 1} 6 5 4 3 2 1 \end{verbatim} \end{Macaulay2} \noindent we get $CC(H^2_I(R)) = T^{\ast}_F X$ and $CC(H^3_I(R)) = T^{\ast}_E X$. Finally, it is worth pointing out that this result is consistent with the fact that the local cohomology module $H^3_I(R)$ is isomorphic to the injective hull of the residue field $E_R(R/(x_1,...,x_6))$. \subsection{Lyubeznik numbers} Let $R=k[x_1,...,x_n]$ be the polynomial ring over a field $k$ of characteristic zero. Let $I\subseteq R$ be an ideal and ${\mathfrak m}=(x_1,...,x_n)$ be the homogeneous maximal ideal.
G.~Lyubeznik \cite{Ly93} has defined a new set of numerical invariants of the quotient ring $R/I$ by means of the Bass numbers $${\lambda}_{p,i}(R/I):=\mu_p({\mathfrak m},H_I^{n-i}(R)) := {\mathrm {dim }}_k \hskip 1mm {\mbox{\rm{Ext}} }_R^p(k,H_I^{n-i}(R)).$$ These invariants can be described as the multiplicities of the characteristic cycle of the local cohomology modules $H_{{\mathfrak m} }^p(H_{I}^{n-i}(R))$ (see \cite{Al02}). Namely, $$CC(H_{{\mathfrak m} }^p(H_{I}^{n-i}(R)))= {\lambda}_{p,i} \hskip 1mm T^{\ast}_E X $$ \vskip 2mm Lyubeznik numbers carry interesting topological information about the quotient ring $R/I$, as is pointed out in \cite{Ly93} and \cite{GS98}. To compute them for a given ideal $I\subseteq R$ and arbitrary $i,p$ we refer to U.~Walther's algorithm \cite[Algorithm 5.3]{Wa99}, even though it has not been implemented yet. When $I$ is a squarefree monomial ideal, a description of these invariants is given in \cite{Al00}. Some other particular computations may also be found in \cite{GS98} and \cite{Wa01}. \vskip 2mm Let $I\subset R={\mathbb{Q}}[x_1,...,x_6]$ be the ideal generated by the minors of the matrix $$ \left( \begin{array}{lll} x_1 & x_2 & x_3 \\ x_4 & x_5 & x_6 \end{array}\right)$$ considered above, i.e. $I= (x_1x_5-x_2x_4, x_1x_6-x_3x_4, x_2x_6-x_3x_5)$. We want to compute the characteristic cycle of the local cohomology modules $H_{{\mathfrak m} }^p(H_{I}^{i}(R))$ for $i=2,3$ and all $p$, so we have to construct the \v{C}ech complex $$(\star \star) \hskip 5mm 0 \rightarrow M \rightarrow \bigoplus_{i=1}^{6} M_{x_i} \rightarrow \cdots \rightarrow M_{x_1\cdots x_6} \rightarrow 0,$$ where $M$ is either $H_{I}^{2}(R)$ or $H_{I}^{3}(R)$. Then we have to compute the characteristic cycles of the localization modules and use Theorem \ref{T}. \vskip 2mm $\bullet$ For $M=H^3_I(R)$ we know that its characteristic cycle is $T^{\ast}_E X$ so, applying Theorem \ref{propBMM}, the \v{C}ech complex $(\star \star)$ reduces to the first term.
Then, $$CC(H^0_{{\mathfrak m} }(H^3_I(R))) = T^{\ast}_E X$$ and the other local cohomology modules vanish. \vskip 2mm $\bullet$ For $M=H^2_I(R)$ we obtain $$CC(H^2_{{\mathfrak m} }(H^2_I(R))) = T^{\ast}_E X$$ $$CC(H^4_{{\mathfrak m} }(H^2_I(R))) = T^{\ast}_E X$$ and the other local cohomology modules vanish. We are not going to present the complete output with all the components as in Figure $1$ for this case but at least we are going to show the multiplicities of the component $T^{\ast}_E X$ appearing in the \v{C}ech complex $(\star \star)$ in Figure $2$. We point out that the components $T^{\ast}_E X$ that survive to Algorithm \ref{alg3} belong to $CC(M_{x_1x_2})$ and $CC(M_{x_1x_4x_5x_6})$. \begin{figure}[h] {\small $$\hskip -1cm {\xymatrix { &{\begin{array}{cccccccccccccccccccc} M: \emptyset &M_{x_1}: \emptyset &M_{x_1x_2}: E& M_{x_1x_2x_3}: E[2]& M_{x_1x_2x_3x_4}: E[3] &M_{x_1x_2x_3x_4x_5}: E[3]& M_{x_1x_2x_3x_4x_5x_6}: E[3]\\ &M_{x_2}: \emptyset&M_{x_1x_3}: E&M_{x_1x_2x_4}: E&M_{x_1x_2x_3x_5}: E[3]&M_{x_1x_2x_3x_4x_6}: E[3]& \\ &M_{x_3}: \emptyset&M_{x_1x_4}: \emptyset&M_{x_1x_2x_5}: E&M_{x_1x_2x_3x_6}: E[3]&M_{x_1x_2x_3x_5x_6}: E[3]& \\ &M_{x_4}: \emptyset&M_{x_1x_5}: E&M_{x_1x_2x_6}: E[3]&M_{x_1x_2x_4x_5}: E&M_{x_1x_2x_4x_5x_6}: E[3]& \\ &M_{x_5}: \emptyset&M_{x_1x_6}: E&M_{x_1x_3x_4}: E&M_{x_1x_2x_4x_6}: E[3]&M_{x_1x_3x_4x_5x_6}: E[3]& \\ &M_{x_6}: \emptyset&M_{x_2x_3}: E&M_{x_1x_3x_5}: E[3]&M_{x_1x_2x_5x_6}: E[3]&M_{x_2x_3x_4x_5x_6}: E[3]& \\ &&M_{x_2x_4}: E&M_{x_1x_3x_6}: E&M_{x_1x_3x_4x_5}: E[3]&& \\ &&M_{x_2x_5}: \emptyset &M_{x_1x_4x_5}: E&M_{x_1x_3x_4x_6}: E&& \\ &&M_{x_2x_6}: E&M_{x_1x_4x_6}: E&M_{x_1x_3x_5x_6}: E[3]&& \\ &&M_{x_3x_4}: E&M_{x_1x_5x_6}: E[3]&M_{x_1x_4x_5x_6}: E[3]&& \\ &&M_{x_3x_5}: E&M_{x_2x_3x_4}: E[3]&M_{x_2x_3x_4x_5}: E[3]&& \\ &&M_{x_3x_6}: \emptyset &M_{x_2x_3x_5}: E&M_{x_2x_3x_4x_6}: E[3]&& \\ &&M_{x_4x_5}: E&M_{x_2x_3x_6}: E&M_{x_2x_3x_5x_6}: E&& \\ &&M_{x_4x_6}: E&M_{x_2x_4x_5}: E&M_{x_2x_4x_5x_6}: E[3]&& \\ &&M_{x_5x_6}: 
E&M_{x_2x_4x_6}: E[3]&M_{x_3x_4x_5x_6}: E[3]&& \\ &&&M_{x_2x_5x_6}: E&&& \\ &&&M_{x_3x_4x_5}: E[3]&&& \\ &&&M_{x_3x_4x_6}: E&&& \\ &&&M_{x_3x_5x_6}: E&&& \\ &&&M_{x_4x_5x_6}: E[2]&&& \end{array} }}}$$} \caption{Component $T^{\ast}_E X$ appearing in the \v{C}ech complex $(\star \star)$ (multiplicity $>1$ is specified in square brackets).} \end{figure} \vskip 2mm Using the properties that Lyubeznik numbers satisfy (see \cite[Section 4]{Ly93}), we can collect the multiplicities in a triangular matrix as follows: $$\Lambda(R/I)=\begin{pmatrix} 0 & 0 & 0 & 1 & 0 \\ & 0 & 0& 0 & 0 \\ & & 0 & 0 & 1 \\ & & & 0 & 0 \\ & & & & 1 \end{pmatrix}$$ The complex variety $V$ defined by $I$ has an isolated singularity at the origin. The singular cohomology groups of $V$ with complex coefficients and support at the origin can be described from Lyubeznik numbers (see \cite{GS98}). In our case we get \vskip 2mm \hskip 1cm $1=\lambda_{4,4}= \dim_{\mathbb{C}} H^8_{\{0\}}(V,\mathbb{C})$, \hskip 1cm $1=\lambda_{2,4}= \dim_{\mathbb{C}} H^6_{\{0\}}(V,\mathbb{C})$, \hskip 1cm $1=\lambda_{0,3}= \dim_{\mathbb{C}} H^3_{\{0\}}(V,\mathbb{C})$. \section{Conclusion and possible developments} We have shown that characteristic cycles of local cohomology modules can be computed by an algorithm operating in commutative polynomial rings, as opposed to the direct computation of these modules, which requires Gr\"{o}bner basis techniques in noncommutative Weyl algebras. \vskip 2mm The computational engine of our method is primary decomposition, from which we extract only geometrical information. This prompts a natural interest in \emph{numerical primary decomposition}, an algorithm that would produce just that -- the descriptions of reduced components and their multiplicities -- by means of numerical computations.
\section{Introduction} Neural language generation models optimized by likelihood have a tendency towards `safe' word choice. This lack of output diversity has been noted in NMT \citep{vanmassenhove-etal-2019-lost} and throughout NLP \citep{li-etal-2016-diversity,sultan-etal-2020-importance}. Model-generated language may be repetitive or stilted. More insidiously, generating the most likely output based only on corpus statistics can amplify any existing biases in the corpus \citep{zhao-etal-2017-men}. Potential harms arise when biases around word choice or grammatical gender inflections reflect demographic or social biases \citep{sun-etal-2019-mitigating}. The resulting gender mistranslations could involve implicit misgendering of a user or other referent, or perpetuation of social stereotypes about the `typical' gender of a referent in a given context. Past approaches to the problem almost exclusively involve retraining \citep{vanmassenhove-etal-2018-getting, escude-font-costa-jussa-2019-equalizing,bergmanis2020mitigating} or tuning \citep{saunders-byrne-2020-reducing,basta-etal-2020-towards} on gender-adjusted data. Such approaches are often computationally expensive and risk introducing new biases \citep{shah-etal-2020-predictive}. Instead, we seek to improve translations from existing models. \citet{roberts2020decoding} highlight beam search's tendency to amplify gender bias and \citet{renduchintala-etal-2021-gender} show that very shallow beams degrade gender translation accuracy; we instead guide beam search towards better gender translations further down the n-best list. Our contributions are as follows: we rerank NMT n-best lists, demonstrating that we can extract better gender translations from the \emph{original model's} beam. We also generate new n-best lists subject to gendered inflection constraints, and show this makes correctly gendered entities more common in n-best lists. 
We make no changes to the NMT model or training data, and require only monolingual resources for the source and target languages. \subsection{Related work} Prior work mitigating gender bias in NLP often involves adjusting training data, directly \cite{zhao-etal-2018-gender} or via embeddings \cite{bolukbasi2016man}. Our inference-only approach is closer to work on controlling or `correcting' gendered output. Controlling gender translation generally involves introducing external information into the model. \citet{miculicich-werlen-popescu-belis-2017-using} integrate cross-sentence coreference links into reranking to improve pronoun translation. \citet{vanmassenhove-etal-2018-getting} and \citet{moryossef-etal-2019-filling} incorporate sentence-level gender features into training data and during inference respectively. Token-level source gender tags are used by \citet{bergmanis2020mitigating} and \citet{saunders-etal-2020-neural}. As in this prior work, our focus is applying linguistic gender-consistency information, rather than obtaining it. A separate line of work treats gender-related inconsistencies as a search and correction problem. \citet{roberts2020decoding} find that beam search amplifies gender bias compared to sampling search. \citet{saunders-byrne-2020-reducing} rescore translations with a model fine-tuned for additional gender sensitivity, constraining outputs to gendered-reinflections of the original. Related approaches for monolingual tasks reinflect whole-sentence gender \citep{habash-etal-2019-automatic, alhafni-etal-2020-gender, sun2021they}. An important difference in our work is use of the same model for initial translation and reinflection, reducing computation and complexity. \section{Finding consistent gender in the beam} There are two elements to our proposed approach. First, we \emph{produce an n-best list} of translations using our single model per language pair. 
We use either standard beam search or a two-pass approach where the second pass searches for differently-gendered versions of the highest likelihood initial translation. We then \emph{select a translation} from the list, either by log likelihood or by how well the target language gender features correspond to the source sentence. \subsection{Gender-constrained n-best lists} \label{ss:n-best} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{simplelattice2.png} \caption{Constraints for a toy initial hypothesis.} \label{fig:lattice} \end{figure} We produce n-best lists in two ways. One option is standard beam search. Alternatively, we synthesize n-best lists using the gendered constraint scheme of \citet{saunders-byrne-2020-reducing}, illustrated in Figure \ref{fig:lattice}. This involves a second \emph{gender-constrained} beam search pass to reinflect an initial hypothesis, producing a synthesized n-best list containing gendered alternatives of that hypothesis. The second reinflection pass uses a target language \emph{gender inflection transducer} which defines grammatically gendered reinflections. For example, the Spanish definite article \emph{el} could be unchanged or reinflected to \emph{la}, and the profession noun \emph{médico} could be reinflected to \emph{médica} (and vice versa). Composing the reinflections with the original hypothesis generates a \emph{constrained hypothesis lattice}. We can now perform constrained beam search, which can encourage NMT to output specific vocabulary \citep{stahlberg-etal-2016-syntactically, khayrallah-etal-2017-neural}. The only difference from standard beam search is that gender-constrained search only expands translations forming paths in the constrained hypothesis lattice. In the Figure \ref{fig:lattice} example, beam-$n$ search would produce the $n$ most likely translations, while the gender-constrained pass would only produce the 4 translations in the lattice.
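The constrained hypothesis lattice for a toy example can be sketched as follows (an illustrative Python enumeration; in the actual system the transducer composition constrains beam search rather than enumerating paths, and the reinflection table below is a toy example, not our full transducer):

```python
from itertools import product

def constrained_lattice(hypothesis, reinflections):
    """All gendered variants of an initial hypothesis: each token may be
    replaced by any form in its reinflection set; tokens without an entry
    are kept unchanged (identity arcs in the transducer)."""
    choices = [sorted(reinflections.get(tok, {tok})) for tok in hypothesis]
    return [" ".join(path) for path in product(*choices)]

# Toy en-es hypothesis with two gender-reinflectable tokens
paths = constrained_lattice(
    ["el", "médico", "llegó"],
    {"el": {"el", "la"}, "médico": {"médico", "médica"}},
)
# 2 x 2 x 1 = 4 lattice paths, as in the toy example of Figure 1
```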
Importantly, for each language pair we use just one NMT model to produce gendered variations of its \emph{own} hypotheses. Unlike \citet{saunders-byrne-2020-reducing} we do not reinflect translations with a separate gender-sensitive model. This removes the complexity, potential bias amplification and computational load of developing the gender-translation-specific models central to their approach. While we perform two full inference passes to simplify implementation, further efficiency improvements are possible. For example, the source sentence encoding could be reused for the reinflection pass. In principle, some beam search constraints could be applied in the first inference pass, negating the need for two passes. These potential efficiency gains would not be possible if using a separate NMT model to reinflect the translations. \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{workflow.png} \caption{Complete workflow for a toy en-es example. We have two options for producing an n-best list - standard or gender-constrained search - and can then either take the highest likelihood output from the list, or rerank it.} \label{fig:workflow} \end{figure*} \subsection{Reranking gendered translations} \label{ss:rerank} \begin{algorithm} \caption{Gender-reranking an n-best list}\label{alg:cap} \textbf{Input:} $x$: Source sentence; $Y$: set of translation hypotheses for $x$; $L$: Log likelihoods for all $y \in Y$; $A$: word alignments between $x$ and all $y$ \begin{algorithmic} \State $p, p_g \gets \text{pronoun\_and\_gender}(x)$ \Comment{Or oracle} \State $e \gets \text{get\_entity}(x, p)$ \Comment{Or oracle} \ForAll{$y \in Y$} \State $y_{score} \gets 0$ \ForAll{$t \in A_y(e)$} \Comment{Translated entity} \State $t_g \gets \text{get\_gender(t)}$ \If{$t_g = p_g$} \State $y_{score}\mathrel{+}= 1$ \EndIf \EndFor \EndFor \State $\hat{Y}=\{\text{argmax}_{y}(y_{score}, y \in Y)\}$ \State $\hat{y}=\text{argmax}_{y}(L(y), y \in \hat{Y})$\\ \Return $\hat{y}$ 
\end{algorithmic} \end{algorithm} We select an output translation from an n-best list in two ways, regardless of whether the list was produced by beam search or the two-pass approach. One option selects the highest-likelihood translation under the NMT model. Alternatively, we rerank for gender consistency with the source sentence. We focus on either \emph{oracle} or \emph{inferred} entities coreferent with a source pronoun. The \emph{oracle} case occurs in several scenarios. Oracle entity labels could be provided as for the WinoMT challenge set \citep{stanovsky-etal-2019-evaluating}. They could also be user-defined for known entities \citep{vanmassenhove-etal-2018-getting}, or if translating the same sentence with different entity genders to produce multiple outputs \citep{moryossef-etal-2019-filling}. The \emph{inferred} case determines entities automatically given a source pronoun\footnote{In \ref{sec:names} we show this could also be a source named entity.} and its grammatical gender. We find coreferent entities using a target language coreference resolution tool in get\_entity. For brevity Algorithm 1 is written for one entity per sentence: in practice there is no such limit. For each entity we find the aligned translated entity, similar to \citet{bergmanis2020mitigating}. We determine the translated entity's grammatical gender by target language morphological analysis in get\_gender. Finally we rerank, first by source gender agreement, tie-breaking with log likelihood\footnote{Reranking code and n-best lists at \url{https://github.com/DCSaunders/nmt-gender-rerank}}. \section{Experimental setup} \label{ss:data} We translate English into German, Spanish and Hebrew using Transformers \citep{vaswani2017attention}. We train the en-de model on WMT19 newstask data including filtered Paracrawl \cite{barrault-etal-2019-findings}, en-es on UNCorpus data \citep{ziemski-etal-2016-united}, and en-he on the IWSLT corpus \citep{cettolo2014report}. 
For further training details see Appendix \ref{appendix-experimental}. Some proposed steps require tools or resources: 1) For gender-constrained search, creating gender inflection transducers; 2) For inferred-reranking, finding source gendered entities 3) For all reranking, finding translated gendered entities; 4) For all reranking, getting translated entity genders. For 1) we use Spacy \citep{honnibal2017spacy} and DEMorphy \citep{altinok2018demorphy} morphological analysis for Spanish and German, and fixed rules for Hebrew, on large vocabulary lists to produce gender transducers, following \citet{saunders-byrne-2020-reducing}\footnote{Scripts and data for lattice construction as in \citet{saunders-byrne-2020-reducing} provided at \url{https://github.com/DCSaunders/gender-debias}}. The highest likelihood outputs from beam-4 search form the original hypothesis lattices. For 2) we use a RoBERTa model \citep{liu2019roberta} tuned for coreference on Winograd challenge data\footnote{Model from \url{https://github.com/pytorch/fairseq/tree/master/examples/roberta/wsc}}. For 3) we use fast\_align \citep{dyer-etal-2013-simple}. For 4) we use the same morphological analysis as in 1, now on translated entities. We evaluate gender translation on WinoMT \citep{stanovsky-etal-2019-evaluating} via accuracy and $\Delta$G (F1 score difference between masculine and feminine labelled sentences, closer to 0 is better). As WinoMT lacks references we assess cased BLEU on WMT18 (en-de), WMT13 (en-es) and IWSLT14 (en-he) using SacreBLEU \citep{post-2018-call}. 
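These tools fit together as in the following simplified, single-entity sketch of the reranking step of Algorithm 1 (function names and the toy data are ours, for illustration only; word alignment and morphological analysis are stubbed out):

```python
def gender_rerank(hypotheses, aligned_entity, source_gender, get_gender):
    """Pick the hypothesis whose translated entity best agrees with the
    source entity's gender, tie-breaking by log likelihood.

    hypotheses:     list of (translation, log_likelihood), best-first
    aligned_entity: per hypothesis, target tokens aligned to the source entity
    get_gender:     stub morphological analyser, token -> 'M'/'F'/None
    """
    def key(i):
        score = sum(1 for tok in aligned_entity[i]
                    if get_gender(tok) == source_gender)
        return (score, hypotheses[i][1])  # agreement first, then likelihood
    best = max(range(len(hypotheses)), key=key)
    return hypotheses[best][0]

# Toy en-de n-best list (scores and words illustrative)
hyps = [("... der ...", -12.3), ("... die ...", -14.6)]
toy_gender = {"der": "M", "die": "F"}.get
picked = gender_rerank(hyps, [["der"], ["die"]], "F", toy_gender)
# picked == "... die ...": the feminine hypothesis wins despite lower likelihood
```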
\section{Results and discussion} \begin{table*}[t] \centering \small \begin{tabular}{p{0.05cm}|c|cc|ccc|ccc|ccc|} \cline{2-13} & \multirow{2}{*}{\textbf{Beam}} & \textbf{Gender} & \textbf{Oracle}& \multicolumn{3}{c|}{\textbf{en-de}} & \multicolumn{3}{c|}{\textbf{en-es}} & \multicolumn{3}{c|}{\textbf{en-he}}\\ & & \textbf{constrain} & \textbf{rerank} &BLEU & Acc & $\Delta$G & BLEU & Acc & $\Delta$G& BLEU & Acc & $\Delta$G\\ \cline{2-13} \footnotesize{1} & \multirow{4}{*}{4} & $\times$ & $\times$ & \textbf{42.7} & 60.1 & 18.6 & 27.5 & 47.8& 38.4 & 23.8&47.5 &21.1\\ \footnotesize{2} & & \checkmark & $\times$ & \textbf{42.7} & 59.1 & 20.1 & \textbf{27.8} & 48.3 & 36.2 & 23.8 & 47.4& 21.5 \\ \footnotesize{3} & & $\times$ & \checkmark & - & 66.5 &10.1 & - & 53.9 & 25.9 & - &52.0&16.8\\ \footnotesize{4} & & \checkmark& \checkmark & - & 77.9&\textbf{-0.6}& - &55.7& 22.3& -& 54.5& 13.7\\ \cline{2-13} \footnotesize{5} & \multirow{4}{*}{20} & $\times$ & $\times$ & 42.3 & 59.0 & 20.1 & 27.3 & 46.4 & 40.7 & \textbf{24.0} & 46.8&22.5\\ \footnotesize{6} & & \checkmark & $\times$ & \textbf{42.7} & 59.0 & 20.3& \textbf{27.8} & 48.3 & 36.2& 23.8 & 47.3 & 21.7\\ \footnotesize{7} & & $\times$ & \checkmark & - & 74.3 &2.4 & - &63.5&11.0 & - &59.3&11.2\\ \footnotesize{8} & & \checkmark & \checkmark & - & \textbf{84.2} & -3.6 & - &\textbf{66.3} & \textbf{8.1} & - &\textbf{65.3}&\textbf{4.9}\\ \cline{2-13} \end{tabular} \caption{Accuracy (\%) and masculine/feminine F1 difference $\Delta$G, oracle-reranking WinoMT. BLEU scores are for en-de WMT18, en-es WMT13, and en-he IWSLT14, which lack gender labels so cannot be oracle-reranked. 
} \label{tab:mfresults} \end{table*} \begin{table*}[t] \centering \small \begin{tabular}{p{0.05cm}|c|cc|ccc|ccc|ccc|} \cline{2-13} & \multirow{2}{*}{\textbf{Beam}} & \textbf{Gender} & \textbf{Inferred} & \multicolumn{3}{c|}{\textbf{en-de}} & \multicolumn{3}{c|}{\textbf{en-es}} & \multicolumn{3}{c|}{\textbf{en-he}}\\ & & \textbf{constrain} & \textbf{rerank} &BLEU & Acc & $\Delta$G & BLEU& Acc & $\Delta$G& BLEU& Acc & $\Delta$G\\ \cline{2-13} \footnotesize{1}& \multirow{2}{*}{4} & $\times$ & \checkmark &42.7 & 65.9 & 10.7 & 27.5 &52.6 &28.1 &23.8&51.3&17.0 \\ \footnotesize{2} & & \checkmark& \checkmark &42.7 & 76.4 & 0.5 & 27.8 & 53.9 & 24.6 & 23.8 &53.6&14.4 \\ \cline{2-13} \footnotesize{3}& \multirow{2}{*}{20} & $\times$ & \checkmark& 42.2& 72.9 &3.3 & 27.3& 60.2 & 15.3 &24.0 &57.8& 11.9\\ \footnotesize{4} & & \checkmark & \checkmark &42.6 & 81.8 & -2.6 & 27.8& 63.5& 10.9&23.8 &62.8& 6.2\\ \cline{2-13} \end{tabular} \caption{Accuracy (\%) and masculine/feminine F1 difference $\Delta$G. Inferred-reranking with genders and entities for WinoMT and generic test sets determined by a RoBERTa model. Non-reranked results unchanged from Table \ref{tab:mfresults}.} \label{tab:roberta} \end{table*} \subsection{Oracle entities} We first describe oracle-reranking n-best lists in Table \ref{tab:mfresults}, before proceeding to the more general scenario of inferred-reranking. Comparing lines 1 vs 2, gender-constrained beam-4 search taking the highest likelihood output scores similarly to standard beam-4 search for all metrics and language pairs. For beam-20 (5 vs 6) en-de and en-es, constraints do mitigate the BLEU degradation common with larger beams \citep{stahlberg-byrne-2019-nmt}. In lines 1 vs 3, 5 vs 7, we oracle-rerank beam search outputs instead of choosing by highest likelihood. We see about 10\% accuracy improvement relative to non-reranked beam-4 across languages, and over 25\% relative improvement for beam-20. 
Combining oracle-reranking and constraints further boosts accuracy. This suggests constraints encourage the presence of better gender translations in n-best lists, but that reranking is needed to extract them. Using beam-20 significantly improves the performance of reranking. With constraints, beam-20 oracle-reranking gives \emph{absolute} accuracy gains of about 20\% over the highest-likelihood beam search output. However, beam-4 already captures most of that improvement. We find diminishing returns as beam size increases (Appendix \ref{appendix-beamsizes}), suggesting large, expensive beams are not necessary. \subsection{Inferred entities} We have shown accuracy improvements with oracle reranking, indicating that the synthesized n-best lists often contain a gender-accurate hypothesis. In Table \ref{tab:roberta}, we explore inferred-reranking using a RoBERTa model, investigating whether that hypothesis can be found automatically. We find very little degradation in WinoMT accuracy when inferring entities compared to the oracle (Table \ref{tab:mfresults}). We hypothesise that difficult sentences are hard for both coreference resolution and NMT, so cases where RoBERTa disambiguates wrongly are also mistranslated with oracle information. We are unable to oracle-rerank the generic test sets, since they have no oracle gender labels. However, we can tag them using RoBERTa for inferred-reranking. In Table \ref{tab:roberta} we find this has little or no impact on BLEU score, unsurprising for sets not designed to highlight potentially subtle gender translation effects. This is a positive indication that our scheme does not impact general translation quality.
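The reranking heuristic can be illustrated with a minimal sketch (ours, not the authors' implementation; the gender-agreement check is reduced to a toy lookup of German forms, and all names are hypothetical):

```python
def rerank_nbest(nbest, target_gender, gendered_forms):
    """Rerank an n-best list of (log_likelihood, hypothesis) pairs:
    prefer hypotheses whose gendered tokens agree with the oracle or
    inferred entity gender, breaking ties by model score."""
    wanted = gendered_forms[target_gender]

    def agreement(hyp):
        # Count tokens carrying the desired grammatical gender.
        return sum(1 for tok in hyp.split() if tok in wanted)

    # Sort by (gender agreement, log-likelihood), both descending.
    return sorted(nbest, key=lambda sh: (agreement(sh[1]), sh[0]), reverse=True)


# Toy lookup of German relative pronouns (illustrative only).
forms = {"masculine": {"der"}, "feminine": {"die"}}
nbest = [
    (-12.3, "Calderon , der vor dem Wahltag Wahlen gefuehrt hatte"),
    (-14.6, "Calderon , die vor dem Wahltag Wahlen gefuehrt hatte"),
]
best = rerank_nbest(nbest, "feminine", forms)[0]  # picks the `die' hypothesis
```

With a feminine target the lower-likelihood but correctly gendered hypothesis is promoted; with a masculine target the original top hypothesis is kept.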
\begin{table}[t] \centering \small \begin{tabular}{|c|c|ccc|} \hline \textbf{Beam} & \textbf{System} & \textbf{en-de} & \textbf{en-es} & \textbf{en-he}\\ \hline \multirow{2}{*}{4} &S\&B & 79.4 & 62.2 & 53.1 \\ & S\&B + rerank & 81.9 & 68.9 & 56.6 \\ \hline \multirow{2}{*}{20} & S\&B & 79.6 &62.1& 52.8 \\ & S\&B + rerank & 83.6 & 73.9 & 62.9 \\\hline \end{tabular} \caption{WinoMT accuracy inferred-reranking the adaptation scheme of \citet{saunders-byrne-2020-reducing}.} \label{tab:sandb} \end{table} \begin{table*}[ht] \centering \begin{small} \begin{tabular}{|c|p{12cm}|} \hline \multicolumn{2}{|p{13cm}|}{Vallejo appears to have only narrowly edged out Calderon, \textbf{who} had led polls before election day} \\ \hline -12.3 &Vallejo scheint nur knapp ausgegrenzt Calderon, \textbf{der} vor dem Wahltag Wahlen geführt hatte. \\ -14.6 & $\ast$ Vallejo scheint nur knapp ausgegrenzt Calderon, \textbf{die} vor dem Wahltag Wahlen geführt hatte.\\ -24.3 & Vallejo scheint nur knapp ausgegrenzt Calderon, \textbf{der} vor dem Wahltag Wahlern geführt hatte.\\ -26.5 & Vallejo scheint nur knapp ausgegrenzt Calderon, \textbf{die} vor dem Wahltag Wahlern geführt hatte.\\ \hline \end{tabular} \caption{Sentence from WMT newstest12 with gender-constrained n-best list and NLL scores. Words like `who' coreferent with `Calderon' become entities for Algorithm 1, which finds a better gendered translation ($\ast$).} \label{tab:nonpronounexamples} \end{small} \end{table*} So far we have not changed the NMT model at all. 
In Table \ref{tab:sandb}, for comparison, we investigate the approach of \citet{saunders-byrne-2020-reducing}: tuning a model on a dataset of gendered profession sentences, then constrained-rescoring the original model's hypotheses.\footnote{Different scores from the original work may be due to variations in hyperparameters, or WinoMT updates.} We do indeed see strong gender accuracy improvements with this approach, but inferred-reranking the resulting models' n-best lists further improves scores. We also note that inferred reranking the baseline with beam size 20 (Table \ref{tab:roberta} line 4) outperforms non-reranked S\&B, without requiring specialized profession-domain tuning data or any change to the model. \subsection{Reranking with named entities} \label{sec:names} At time of writing, published gender translation test sets focus on profession nouns, a domain we evaluate with WinoMT. However, our approach can also improve other aspects of gender translation. One of these is consistently gendering named entities. Sentences may contain gendered terminology with no pronouns, only named entities. Generic name-gender mappings are unreliable: many names are not gendered, and a name with a `typical' gender may not correspond to an individual's gender. However, we may know the appropriate gendered terms to use for a \emph{specific} named entity, for example from other sentences, a knowledge base, or user preference. With this information we can gender-rerank. An example is given in Table \ref{tab:nonpronounexamples}. The English sentence contains no gendered pronoun, so is not covered by our default reranking algorithm. We know from previous sentences that Calderon should be referred to with the linguistic feminine, so we can rerank with known $p_g$. 
The `entities' $e$ are the words referring to Calderon, including `who', `had' and `led'.\footnote{Extracted using the RoBERTa coreference model; future work might explore use of a lightweight dependency parser.} Algorithm 1 proceeds over these entities, of which only `who' is gendered in German, to extract a better gendered translation. \subsection{Reranking with new gendered language} Another benefit of our approach is its flexibility in introducing new gendered vocabulary, e.g. as used by non-binary people. Developing a system to correctly produce new terms like neopronouns is itself an open research problem \citep{saunders-etal-2020-neural}. However, we can simulate such a system by editing existing WinoMT translations to contain gendered-term placeholders instead of binary gendered terms, and shuffling these translations into n-best lists. For example, where a German translation includes \emph{der Mitarbeiter}, the employee (masculine), we substitute \emph{DEFNOM MitarbeiterNEND}. This allows later replacement of \emph{DEFNOM} by e.g. \emph{dier} or \emph{NEND} by \emph{\_in} \citep{heger2020xier}, but remains flexible to preferences for new gendered language. We then define the new patterns for identification by the reranker. To evaluate reranking with new gendered language, we use 1826 neutral WinoMT sentences with they/them pronouns on the English side. We initialise the corresponding n-best lists with the masculine WinoMT German 20-best lists, and shuffle one `placeholder' translation into each, giving it the average log-likelihood of the whole list. We find that the reranker successfully extracts the correct placeholder-style sentences in 92\% of cases. This demonstrates that if a system can generate some new gendered term, reranking can extract it from an n-best list with minimal adjustments. \section{Conclusions} This paper attempts to improve gender translation without a single change to the NMT model.
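The placeholder substitution and later realisation steps can be sketched as follows (a toy illustration under a single made-up rule, not the authors' full pattern set):

```python
import re

# Toy mapping from a binary gendered pattern to neutral placeholders
# (one illustrative rule only; real rule sets would be much larger).
TO_PLACEHOLDER = [(r"\bder Mitarbeiter\b", "DEFNOM MitarbeiterNEND")]

# Later realisation of placeholders with preferred new gendered forms,
# here following the `xier' system of Heger (2020).
REALISE = [("DEFNOM", "dier"), ("NEND", "_in")]


def to_placeholder(sentence):
    # Replace binary gendered terms with neutral placeholders.
    for pattern, repl in TO_PLACEHOLDER:
        sentence = re.sub(pattern, repl, sentence)
    return sentence


def realise(sentence, prefs=REALISE):
    # Substitute placeholders according to the preferred new forms.
    for placeholder, form in prefs:
        sentence = sentence.replace(placeholder, form)
    return sentence
```

Under these toy rules, `realise(to_placeholder("der Mitarbeiter kam"))` yields `"dier Mitarbeiter_in kam"`; swapping the preference list changes the realised forms without touching the reranker.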
We demonstrate that gender-constraining the target language during inference can encourage models to produce n-best lists with correct hypotheses. Moreover, we show that simple reranking heuristics can extract more accurate gender translations from the n-best lists using oracle or inferred information. Unlike other approaches to this problem we do not attempt to counter unidentified and potentially intractable sources of bias in the training data, or produce new models. However, our approach does significantly boost the accuracy of a prior data-centric bias mitigation technique. In general we view our scheme as orthogonal to such approaches: if a model ranks diverse gender translations higher in the beam initially, finding better gender translations during beam search becomes simpler. \section*{Acknowledgments} This work was supported by EPSRC grants EP/M508007/1 and EP/N509620/1 and performed using resources from the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service\footnote{\url{http://www.hpc.cam.ac.uk}} funded by EPSRC Tier-2 capital grant EP/P020259/1. \section*{Impact statement} Where machine translation is used in people's lives, mistranslations have the potential to misrepresent people. This is the case when personal characteristics like social gender conflict with model biases towards certain forms of grammatical gender. As mentioned in the introduction, the result can involve implicit misgendering of a user or other human referent, or perpetuation of social biases about gender roles as represented in the translation. A user whose words are translated with gender defaults that imply they hold such biased views will also be misrepresented. We attempt to avoid these failure modes by identifying translations which are at least consistent within the translation and consistent with the source sentence. 
This is dependent on identifying grammatically gendered terms in the target language -- however, this element is very flexible and can be updated for new gendered terminology. We note that models which do not account for variety in gender expression such as neopronoun use may not be capable of generating appropriate gender translations. However, we demonstrate that, if definable, a variety of gender translations can be extracted from the beam. By avoiding the data augmentation, tuning and retraining elements in previously proposed approaches to gender translation, we simplify the process and remove additional stages where bias could be introduced or amplified \citep{shah-etal-2020-predictive}. In terms of compute time and power, we minimize impact by using a single GPU only for training the initial NMT models exactly once for the iterations listed in Appendix \ref{appendix-experimental}. All other experiments involve inference or rescoring the outputs of those models and run in parallel on CPUs in under an hour, except the experiments following \citet{saunders-byrne-2020-reducing}, an approach itself involving only minutes of GPU fine-tuning. \bibliographystyle{acl_natbib}
\section{Introduction} Fluid-particle systems are ubiquitous in nature and industry. Sediment transport and erosion are important in many environmental studies and the interaction between particles and interstitial fluid affects the rheology of avalanches, slurry flows and soils. In industry, the efficiency of a fluidised bed process (e.g. Fluidized Catalytic Cracking) is completely determined by the complex two-way interaction between the injected gas flow and the solid granular material. Also, the dispersion of solid particles in a fluid is of broad industrial relevance to the food, chemical and painting industries, which involves in most cases three phases: a granular medium, the air initially present in its pores and an injected liquid. The length-scale of interest determines the method of simulation for fluid-particle systems. For very small scale processes it is feasible to fully resolve the interstitial fluid between the particles (see \citet{zhu99pore,pereira10sph,potapov01liquid,wachmann98collective} for a few examples of particle or pore-scale simulations). However, for many applications the dynamics of interest occur over length scales much larger than the particle diameter and the computational effort required to resolve the pore-scale is too great. It then becomes necessary to use unresolved, or mesoscale, fluid simulations. This mesoscale is the focus of this paper and the domain of applicability for the SPH-DEM method. At even larger length scales of interest (macroscale) it becomes infeasible to model the granular material as a discrete collection of grains and instead a continuum model is used in a two-fluid model. However, it must be noted that while this approach might be computationally necessary in many cases, it can fail for some systems involving dense granular flow, where existing continuum models for granular material do not adequately reproduce important material properties such as anisotropy, history dependency, jamming and segregation. 
Fluid-particle simulations at the mesoscale are often termed Discrete Particle Models (DPMs). These models fully resolve the individual solid particles using a Lagrangian model for the solid phase. The fluid phase does not resolve the interstitial fluid, but instead models the locally averaged Navier-Stokes equations and is coupled to the solid particles using appropriate drag closures. Most of the prior work on DPMs has been done using grid-based methods for the fluid phase, and a few relevant examples can be seen in the papers by \citet{tsuji93discrete}, \citet{xu97numerical,xu00numerical}, \citet{hoomans96discrete,hoomans00granular} or \citet{chu08numerical}. Fixed pore flow simulations (where the geometry of the solid particles is unchanging over time) using SPH for the (unresolved) fluid phase have been described by \citet{li07saturated} and \citet{jiang07mesoscale}, but these do not allow for the motion and collision of solid grains. \citet{cleary06prediction} and \citet{fernandez11using} simulate slurry flow at the mesoscale using SPH and DEM in SAG mills and through industrial banana screens, but only perform a one-way coupling between the solid and fluid phases. The DPM model presented in this paper is based on the locally averaged Navier-Stokes (AVNS) equations that were first derived by Anderson and Jackson in the sixties \citep{anderson67fluid}, and have been used with great success to model the complex fluid-particle interactions occurring in industrial fluidized beds \citep{deen07review}. Anderson and Jackson defined a smoothing operator identical to that used in SPH and used it to reformulate the NS equations in terms of smoothed variables and a local porosity field (porosity refers to the fraction of fluid in a given volume). Given its theoretical basis in kernel interpolation, it is natural to consider the use of the SPH method to solve the AVNS equations, coupled with a DEM model for the solid phase.
The coupling of SPH and DEM results in a purely particle-based solution method and therefore enjoys the flexibility that is inherent in these methods. This is the primary advantage of this method over existing grid-based DPMs. In particular, the model described in this paper is well suited for applications involving a free surface, including (but not limited to) debris flows, avalanches, landslides, sediment transport or erosion in rivers and beaches, slurry transport in industrial processes (e.g. SAG mills) and liquid-powder dispersion and mixing in the food processing industry. Another advantage of using a DPM, or mesoscale simulation, is of course the reduced computational requirements over a fully resolved simulation. We have found that in general a fluid resolution of $h = 2d$ minimises the error in the SPH-DEM method, where $d$ is the solid particle diameter. For a fully resolved simulation the interstitial fluid must be resolved, and therefore the fluid resolution would need to be at least $h = 0.2d$, which scales the number of computational nodes (for the fluid) by a factor of 1000. \begin{figure}[htp] \centering \includegraphics[width=0.6\textwidth]{figure1} \caption{Example of a two-phase SPH-DEM simulation of a water jet (bottom) injected into a granular bed. On the left is shown the cell geometry along with spheres representing the solid grains and the water surface (coloured blue). On the right is shown the porosity profile along the plane given by x=0. Black indicates no fluid. } \label{fig:dispersionExample} \end{figure} Figure \ref{fig:dispersionExample} shows a SPH-DEM simulation applied to a liquid-powder mixing problem in the food processing industry, taken from a simulation of a water jet injected in a granular bed whose pores are initially filled with air. To predict the shape of the front correctly, one has to consider the free surface and the absence of dissipation on the air side, both in the SPH-DEM model. 
Even more complex (realistic) injection geometries are easily incorporated into the simulation with no additional effort. Moreover, using DEM enables studying the effect (on the initial liquid front propagation) of packing and top surface inhomogeneities that can be generated during pouring, unlike simpler ``porous media''-like approaches. Polydispersity can also be included by altering the radius of the simulated grains and using a suitable drag term (e.g., see \citet{hoef05lattice}). Sections \ref{sec:GovEq}-\ref{sec:SPHDEMModel} describe the AVNS equations and the SPH and DEM models for the fluid and solid phases and the coupling between them. Section \ref{sec:ValidationTestCases} introduces the test cases, Section \ref{sec:SPS} describes the results for the Single Particle Sedimentation test case, Section \ref{sec:CPB} the results for Multiple Particle Sedimentation and Section \ref{sec:RTI} describes the inhomogeneous Rayleigh-Taylor Instability test using solid particles sedimenting into a clear fluid. \section{Governing Equations}\label{sec:GovEq} \subsection{The Locally Averaged Navier-Stokes Equations} \label{sec:AVNS} Here we describe the governing equations for the fluid phase, the locally averaged Navier-Stokes equations derived by \citet{anderson67fluid}. Anderson and Jackson defined a local averaging based on a radial smoothing function $g(r)$. The function $g(r)$ is greater than zero for all $r$ and decreases monotonically with increasing $r$; it possesses derivatives $g^{n}(r)$ of all orders and is normalised so that $\int g(r)dV = 1$. The local average of any field $a'$ defined over the fluid domain can be obtained by convolution with the smoothing function \begin{equation}\label{eq:AVNSvariable} \epsilon(x) a(x) = \int_{V_{f}} a'(y)g(x-y)dV_y, \end{equation} where $x$ and $y$ are position coordinates (here one dimensional for simplicity). The integral is taken over the volume of interstitial fluid $V_f$ and $\epsilon(x)$ is the porosity, given by
\begin{equation}\label{eq:AVNSporosity} \epsilon(x) = 1 - \int_{V_{s}} g(x-y)dV_y, \end{equation} where $V_s$ is the volume of the solid particles. In a similar fashion, the local average of any field $a'(x)$ defined over the solid domain is given by \begin{equation}\label{eq:AVNSvariable2} (1 - \epsilon(x)) a(x) = \int_{V_{s}} a'(y)g(x-y)dV_y, \end{equation} where the integral is taken over the volume of the solid particles. Applying this averaging method to the Navier-Stokes equations, \citet{anderson67fluid} derived the following continuity equation in terms of locally averaged variables \begin{equation}\label{eq:aveContinuity} \frac{\partial (\epsilon \rho_f)}{\partial t} + \nabla \cdot (\epsilon \rho_f \mathbf{u} ) = 0, \end{equation} where $\rho_f$ is the fluid mass density and $\mathbf{u}$ is the fluid velocity. The corresponding momentum equation is \begin{equation}\label{eq:aveMomentum2} \epsilon \rho_f \left ( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right ) = -\nabla P + \nabla \cdot \boldsymbol{\tau} - n \mathbf{f} + \epsilon \rho_f \mathbf{g}, \end{equation} where $P$ is the fluid pressure, $\boldsymbol{\tau}$ is the viscous stress tensor and $n \mathbf{f}$ is the fluid-particle coupling term. We use a Newtonian fluid with $\boldsymbol{\tau}=\mu \nabla \mathbf{u}$. We neglect Reynolds-like terms and do not consider turbulent flow. The coefficient of the coupling term, $n$, is the local average of the number of particles per unit volume and $\mathbf{f}$ is the local mean value of the force exerted on the particles by the fluid. This force includes all effects, both static and dynamic, of the particles on the fluid, the details of which can be seen in Eq.\ (\ref{Eq:demCouplingForce}).
\subsection{Smoothed Particle Hydrodynamics} Smoothed Particle Hydrodynamics \citep{gingold77smoothed, lucy77numerical, monaghan05SPH} is a Lagrangian scheme, whereby the fluid is discretised into ``particles'' that move with the local fluid velocity. Each particle is assigned a mass and can be thought of as the same volume of fluid over time. The fluid variables and the equations of fluid dynamics are interpolated over each particle and its nearest neighbours using a smoothing kernel $W(r,h)$, where $h$ is the smoothing length scale. Like $g(r)$ in the AVNS equations, the SPH kernel is a radial function that decreases monotonically and is normalised so that $\int W(r,h)dV = 1$. Unlike $g(r)$ and to reduce the computational burden of the method, the SPH kernel is normally defined with a compact support and a finite number of derivatives. In SPH, a fluid variable $A(\mathbf{r})$ (such as momentum or density) is interpolated using the kernel $W$ \begin{equation}\label{Eq:integralInterpolant} A(\mathbf{r}) = \int A(\mathbf{r'})W(\mathbf{r}-\mathbf{r'},h)d\mathbf{r'}. \end{equation} To apply this to the discrete SPH particles, the integral is replaced by a sum over all particles, commonly known as the \emph{summation interpolant}. To estimate the value of the function $A$ at the location of particle $a$ (denoted as $A_a$), the summation interpolant becomes \begin{equation}\label{Eq:summationInterpolant} A_a = \sum_b m_b \frac{A_b}{\rho_b} W_{ab}(h_a), \end{equation} where $m_b$ and $\rho_b$ are the mass and density of particle $b$. The volume element $d\mathbf{r'}$ of Eq.\ (\ref{Eq:integralInterpolant}) has been replaced by the volume of particle $b$ (approximated by $\frac{m_b}{\rho_b}$), equivalent to the normal trapezoidal quadrature rule. The kernel function is denoted by $W_{ab}(h) = W(\mathbf{r}_a-\mathbf{r}_b,h)$. The dependence of the kernel on the difference in particle positions is not explicitly stated for readability. 
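As a concrete illustration (ours, not from the paper), the summation interpolant can be written in a few lines, here in 1D with the standard cubic spline kernel:

```python
import numpy as np

def cubic_spline_1d(r, h):
    """1D cubic spline kernel, normalised so that its integral over x is 1."""
    q = np.abs(r) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return (2.0 / (3.0 * h)) * w

def summation_interpolant(x, positions, A, m, rho, h):
    """A(x) ~ sum_b (m_b / rho_b) A_b W(x - x_b, h)."""
    return np.sum((m / rho) * A * cubic_spline_1d(x - positions, h))
```

On a uniform particle distribution with $m_b/\rho_b$ equal to the particle spacing, a constant field is recovered to within a fraction of a percent in the interior, consistent with the kernel normalisation.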
Due to the limited support of $W$, particle neighbourhood search methods, standard in SPH and DEM, can be applied to optimize the summation in Eq. (\ref{Eq:summationInterpolant}). The accuracy of the SPH interpolant depends on the particle positions within the radius of the kernel. If there is not a homogeneous distribution of particles around particle $a$ (for example, it is on a free surface), then the interpolation can be compromised. The interpolation can be improved by using a Shepard correction \citep{shepard682Dinterp}, originally devised as a low-cost improvement to data fitting. This correction divides the interpolant by the volume-weighted sum of kernel values at the SPH particle positions, so the summation interpolant becomes \begin{equation} A_a = \frac{1}{\sum_b \frac{m_b}{\rho_b} W_{ab}(h_a)} \sum_b m_b \frac{A_b}{\rho_b} W_{ab}(h_a). \end{equation} This correction ensures that a constant field will always be interpolated exactly, even close to boundaries, and improves the interpolation accuracy of other, non-constant fields. \section{SPH-DEM Model}\label{sec:SPHDEMModel} \subsection{SPH implementation of the AVNS equations}\label{sec:SPH} SPH is based on a local averaging technique similar to that of the AVNS equations, so it is natural to convert the interpolation integrals in Eqs.\ (\ref{eq:AVNSvariable}) and (\ref{eq:AVNSporosity}) to SPH sums using a smoothing kernel $W(r,h)$ in place of $g(r)$. To calculate the porosity $\epsilon_a$ at the center position of SPH/DEM particle $a$, the integral in Eq.\ (\ref{eq:AVNSporosity}) is converted into a sum over all DEM particles within the kernel radius and becomes \begin{equation}\label{eq:epsilonCalculation} \epsilon_a = 1 - \sum_{j} W_{aj}(h_c) V_j, \end{equation} where $V_j$ is the volume of DEM particle $j$. For readability, sums over SPH particles use the subscript $b$, while sums over surrounding DEM particles use the subscript $j$.
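The porosity sum of Eq. (\ref{eq:epsilonCalculation}) can be sketched directly (our illustrative Python, using a normalised 3D cubic spline kernel; variable names are ours):

```python
import numpy as np

def kernel_3d(r, h):
    """3D cubic spline kernel with normalisation 1/(pi h^3)."""
    q = r / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return w / (np.pi * h**3)

def porosity(x_a, x_dem, d_dem, h_c):
    """eps_a = 1 - sum_j W(|x_a - x_j|, h_c) V_j over nearby DEM particles."""
    V = (np.pi / 6.0) * d_dem**3                 # solid sphere volumes
    r = np.linalg.norm(x_dem - x_a, axis=1)      # distances to DEM centres
    return 1.0 - np.sum(kernel_3d(r, h_c) * V)
```

With no DEM particles inside the kernel radius the porosity is exactly one; each overlapping solid particle reduces it by its kernel-weighted volume.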
Note that we have used a coupling smoothing length $h_c$ to evaluate the porosity, which sets the length scale for the coupling terms between the phases. Here we set $h_c$ to be equal to the SPH smoothing length, but in practice this can be set within a range such that $h_c$ is large enough that the porosity field is smooth but small enough to resolve the important features of the porosity field. For more details on this point please consult the numerical results of the test cases and the conclusions of this paper. Applying the local averaging method to the Navier-Stokes equations, Anderson and Jackson derived the continuity and momentum equations shown in Eqs. (\ref{eq:aveContinuity}) and (\ref{eq:aveMomentum2}) respectively. To convert these to SPH equations, we first define a superficial fluid density $\rho$ equal to the intrinsic fluid density scaled by the local porosity $\rho=\epsilon \rho_f$. Substituting the superficial fluid density into the averaged continuity and momentum equations reduces them to the normal Navier-Stokes equations. Therefore, our approach is to use the standard weakly compressible SPH equations, see \citep{robinson11direct}, using the superficial density for the SPH particle density and adding terms to model the fluid-particle drag. The rate of change of superficial density is calculated using the variable smoothing length terms derived by \citet{price12smoothed}. \begin{equation} \label{Eq:changeInDensity} \frac{D\rho_a}{Dt} = \frac{1}{\Omega_a}\sum_b m_b \mathbf{u}_{ab} \cdot \nabla_a W_{ab}(h_a), \end{equation} where $\mathbf{u}_{ab}=\mathbf{u}_a-\mathbf{u}_b$. The derivative on the lhs of Eq.\ (\ref{Eq:changeInDensity}) denotes the time derivative of the superficial fluid density for each SPH particle $a$. Since the SPH particles move with the flow, this is equivalent to a material derivative of the superficial density $\rho=\epsilon \rho_f$. 
For more details of the derivation of Eq.\ (\ref{Eq:changeInDensity}) from Eq.\ (\ref{eq:aveContinuity}), the reader is referred to \citet{monaghan05SPH} or \citet{price12smoothed}. The term $\Omega_a$ is a correction factor accounting for the gradient of the smoothing length and is given by \begin{equation} \Omega_a = 1 - \frac{\partial h_a}{\partial \rho_a} \sum_b m_b \frac{\partial W_{ab}(h_a)}{\partial h_a}. \end{equation} Neglecting gravity, the SPH acceleration equation becomes \begin{equation}\label{Eq:sphJustPressureForce} \frac{d\mathbf{u}_a}{dt} = -\sum_b m_b \left [ \left ( \frac{P_a}{\Omega_a \rho_a^2} + \Pi_{ab} \right ) \nabla_a W_{ab}(h_a) + \left ( \frac{P_b}{\Omega_b \rho_b^2} + \Pi_{ab} \right ) \nabla_a W_{ab}(h_b) \right ] + \mathbf{f}_a/m_a, \end{equation} where $\mathbf{f}_a$ is the coupling force on the SPH particle $a$ due to the DEM particles (see Section \ref{sec:fluidParticleCoupling}). The viscous term $\Pi_{ab}$, which models the divergence of the viscous stress tensor in Eq.\ (\ref{eq:aveMomentum2}), is calculated using the term proposed by \citet{monaghan97SPHRiemannSolvers}, which is analogous to the dissipative term in shock solutions based on Riemann solvers. For this viscosity \begin{equation}\label{Eq:monaghansViscousTerm} \Pi_{ab} = - \alpha \frac{u_{sig} u_n }{2 \overline{\rho}_{ab} |\mathbf{r}_{ab}|}, \end{equation} where $u_{sig} = c_s + u_n / |\mathbf{r}_{ab}| $ is a signal velocity that represents the speed at which information propagates between the particles. The normal velocity difference between the two particles is given by $u_n = \mathbf{u}_{ab} \cdot \mathbf{r}_{ab}$. The constant $\alpha$ can be related to the dynamic viscosity of the fluid $\mu$ using \begin{equation}\label{eq:alphaToMu} \mu = \rho \alpha h c_s / S, \end{equation} where $S=112/15$ in two dimensions and $S=10$ in three \citep{monaghan05SPH}.
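For instance, Eq. (\ref{eq:alphaToMu}) can be inverted to choose $\alpha$ for a target physical viscosity (a small sketch under our own naming):

```python
def alpha_from_mu(mu, rho, h, c_s, dim=3):
    """Invert mu = rho * alpha * h * c_s / S to obtain the viscosity
    parameter alpha for a target dynamic viscosity mu, with
    S = 112/15 in two dimensions and S = 10 in three."""
    S = 112.0 / 15.0 if dim == 2 else 10.0
    return S * mu / (rho * h * c_s)
```

For example, water-like values ($\mu = 10^{-3}$ Pa s, $\rho = 1000$ kg/m$^3$, $h = 0.01$ m, $c_s = 10$ m/s) give $\alpha = 10^{-4}$ in 3D, well below the artificial value $\alpha_{art} = 0.1$ discussed next.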
For some of the reference fluids we have chosen to simulate in this paper it was found that the physical viscosity was not sufficient to stabilise the results (see Section \ref{sec:effectOfPorosity}), and it was necessary to add an artificial viscosity term with $\alpha_{art} = 0.1$. However, this viscosity term is only applied when the SPH particles are approaching each other (i.e.\ $\mathbf{u}_{ab}\cdot \mathbf{r}_{ab} < 0$), so that the dissipation due to the artificial viscosity is reduced while still stabilising the results. The fluid pressure in Eq.\ (\ref{Eq:sphJustPressureForce}) is calculated using the weakly compressible equation of state. This equation of state defines a reference density $\rho_0$ at which the pressure vanishes, which must be scaled by the local porosity to ensure that the pressure is constant over varying porosity. \begin{equation}\label{Eq:sphEquationOfState} P_a = B \left ( \left ( \frac{\rho_a}{\epsilon_a \rho_0} \right )^\gamma - 1 \right ). \end{equation} The scaling factor $B$ is free a priori and is set so that the density variation from the local reference density is less than 1 percent, ensuring that the fluid is close to incompressible. In terms of $B$, the local sound speed is \begin{equation} c_s^2 = \left. \frac{\partial P}{\partial \overline{\rho}} \right |_{\overline{\rho}=\epsilon_a \rho_0} = \frac{\gamma B}{\epsilon_a \rho_0}, \end{equation} and the fluctuations in density can be related to the sound speed and velocity of the SPH particles \citep{monaghan05SPH}: \begin{equation} \frac{| \delta \rho |}{\rho} = \frac{u^2}{c_s^2}. \end{equation} Therefore, in order to keep these fluctuations less than 1\% in a flow where the maximum velocity is $u_m$ and the maximum porosity is $\epsilon_m=1$, $B$ is set to \begin{equation} B = \frac{100 \rho_0 u_m^2}{\gamma}.
\end{equation} As the superficial density will vary according to the local porosity, care must be taken to update the smoothing length for all particles in order to maintain a sufficient number of neighbour particles. This is referred to as ``variable-h'' in this study. The smoothing length $h_a$ is calculated using \begin{equation}\label{Eq:variableh} h_a = \sigma \left ( \frac{m_a}{\rho_a} \right )^{1/d}, \end{equation} where $d$ is the number of dimensions and $\sigma$ determines the resolution of the summation interpolant. The value used in all the simulation results presented here is $\sigma = 1.5$. Recall that the SPH density is given by $\rho = \epsilon \rho_f$. Assuming a constant intrinsic fluid density $\rho_f$, the smoothing length thus scales with the local porosity as $h \propto (1/\epsilon)^{1/d}$. Setting $\epsilon=1$ gives the minimum smoothing length possible in the simulation. One of the key assumptions of the SPH-DEM method is that the smoothing length scale $h$ is sufficiently larger than the solid particle diameter, and the results for the Single Particle Sedimentation test case (Section \ref{sec:SPSresolution}) indicate that the minimum $h$ should always be greater than two times the solid particle diameter (or much smaller, which is not considered here). \subsection{Discrete Element Model (DEM)}\label{sec:DEM} In DEM, Newton's equations of motion are integrated for each individual solid particle. Interactions between the particles involve explicit force expressions that are used whenever two particles come into contact. Given a DEM particle $i$ with position $\mathbf{r}_i$, the equation of motion is \begin{equation} m_i \frac{d^2 \mathbf{r}_i}{dt^2} = \sum_j \mathbf{c}_{ij} + \mathbf{f}_i + m_i\mathbf{g}, \end{equation} where $m_i$ is the mass of particle $i$, $\mathbf{c}_{ij}$ is the contact force between particles $i$ and $j$ (acting from $j$ to $i$) and $\mathbf{f}_i$ is the fluid-particle coupling force on particle $i$.
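Returning briefly to the fluid phase, the equation of state, the choice of $B$, and the variable smoothing length described above can be sketched together (our illustrative Python; $\gamma = 7$ is assumed here as a common weakly-compressible choice, while the paper leaves $\gamma$ symbolic):

```python
def stiffness_B(rho0, u_max, gamma=7.0):
    """B = 100 * rho0 * u_max^2 / gamma keeps density fluctuations
    near 1% at the maximum flow speed (evaluated at porosity 1)."""
    return 100.0 * rho0 * u_max**2 / gamma

def pressure(rho, eps, rho0, B, gamma=7.0):
    """Weakly compressible EOS with porosity-scaled reference density."""
    return B * ((rho / (eps * rho0))**gamma - 1.0)

def smoothing_length(m, rho, dim=3, sigma=1.5):
    """h = sigma * (m / rho)^(1/d); since rho = eps * rho_f, h grows
    as (1/eps)^(1/d), with eps = 1 giving the minimum h."""
    return sigma * (m / rho)**(1.0 / dim)
```

As a sanity check, the pressure vanishes whenever $\rho = \epsilon \rho_0$, and halving the porosity (hence the superficial density) increases $h$ by a factor of $2^{1/3}$ in 3D.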
For the simulations presented below, we have used the linear spring-dashpot contact model \begin{equation} \mathbf{c}_{ij} = -(k \delta -\beta \dot{\delta})\mathbf{n}_{ij}, \end{equation} where $\delta$ is the overlap between the two particles (positive when the particles are overlapping, zero when they are not) and $\mathbf{n}_{ij}$ is the unit normal vector pointing from $j$ to $i$. The simulation timestep is calculated based on a typical contact duration $t_c$ and is given by $\Delta t = \frac{1}{50}t_c$, with $t_c=\pi/\sqrt{(2k/m_i)-(\beta/m_i)^2}$. The timestep for the SPH method is set by a CFL condition \begin{equation}\label{Eq:CFL} \delta t_1 \le \min_a \left ( 0.6 \frac{h_a}{u_{sig}} \right ), \end{equation} where the minimum is taken over all the particles. This is normally much larger than the DEM contact time, so the DEM timestep usually sets the minimum timestep for the SPH-DEM method. See Table \ref{Tab:parameters} in Section \ref{sec:ValidationTestCases} for all the parameters and time-scales used in these simulations. \subsection{Fluid-Particle Coupling Forces}\label{sec:fluidParticleCoupling} The force on each solid particle by the fluid is \citep{anderson67fluid} \begin{equation}\label{Eq:demCouplingForce} \mathbf{f}_i = V_i (-\nabla P + \nabla \cdot \mathbf{\tau})_i + \mathbf{f}_d(\epsilon_i,\mathbf{u}_s), \end{equation} where $V_i$ is the volume of particle $i$. The first two terms model the effect of the resolved fluid forces (buoyancy and shear stress) on the particle. For a fluid in hydrostatic equilibrium, the pressure gradient will reduce to the buoyancy force on the particle. The divergence of the shear stress is included for completeness and ensures that the movement of a neutrally buoyant particle will follow the fluid streamlines. For the simulations considered in this paper this term will not be significant.
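The linear spring-dashpot contact law and the derived DEM timestep from earlier in this section can be sketched as follows (our illustration, not the authors' code; we write the spring force with an explicitly repulsive sign convention, and read the contact duration as that of an underdamped equal-mass pair with effective mass $m/2$):

```python
import numpy as np

def contact_force(x_i, x_j, v_i, v_j, r_i, r_j, k, beta):
    """Normal linear spring-dashpot force on particle i from particle j
    (repulsive sign convention made explicit; conventions vary)."""
    sep = x_i - x_j
    dist = np.linalg.norm(sep)
    delta = (r_i + r_j) - dist           # overlap, positive in contact
    if delta <= 0.0:
        return np.zeros(3)               # no contact, no force
    n_ij = sep / dist                    # unit normal pointing from j to i
    v_n = np.dot(v_i - v_j, n_ij)        # normal relative velocity
    return (k * delta - beta * v_n) * n_ij

def dem_timestep(k, beta, m):
    """Delta_t = t_c / 50, with t_c = pi / sqrt(2k/m - (beta/m)^2),
    the underdamped contact duration for an equal-mass pair."""
    t_c = np.pi / np.sqrt(2.0 * k / m - (beta / m) ** 2)
    return t_c / 50.0
```

Two particles of radius 0.5 with centres 0.9 apart overlap by 0.1, giving a purely repulsive spring force of magnitude $0.1\,k$ when at rest; particles out of contact feel no force.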
The force $\mathbf{f}_d$ is a particle drag force that depends on the local porosity $\epsilon_i$ and the superficial velocity $\mathbf{u}_s$ (defined in the following section). This force models the drag effects of the unresolved fluctuations in the fluid variables and is normally defined using both theoretical arguments and fits to experimental data. For a single particle in 3D creeping flow this term would be the standard Stokes drag force. For higher Reynolds numbers and multiple particle interactions this term is determined using fits to numerical or experimental data \citep{hoef05lattice}. See Section \ref{sec:DragLaws} for further details. The pressure gradient and the divergence of the stress tensor are evaluated at each solid particle using a Shepard corrected \citep{shepard682Dinterp} SPH interpolation. Using the SPH acceleration equation already given, Eq. (\ref{Eq:sphJustPressureForce}), this becomes \begin{align} &(-\nabla P + \nabla \cdot \mathbf{\tau})_i = \frac{1}{\sum_b \frac{m_b}{\rho_b} W_{ib}(h_b)} \sum_b m_b \theta_b W_{ib}(h_b), \\ &\theta_a = -\sum_b m_b \left [ \left ( \frac{P_a}{\Omega_a \rho_a^2} + \Pi_{ab} \right ) \nabla_a W_{ab}(h_a) + \left ( \frac{P_b}{\Omega_b \rho_b^2} + \Pi_{ab} \right ) \nabla_a W_{ab}(h_b) \right ]. \end{align} In order to satisfy Newton's third law (i.e.\ the action = reaction principle), the fluid-particle coupling force on the fluid must be equal and opposite to the force on the solid particles. Each DEM particle is contained within multiple SPH interaction radii, so care must be taken to ensure that the two coupling forces are balanced. The coupling force on SPH particle $a$ is determined by a weighted average of the fluid-particle coupling force on the surrounding DEM particles. The contribution of each DEM particle to this average is scaled by the value of the SPH kernel.
\begin{equation}\label{Eq:SPHCoupleForce} \mathbf{f}_a = - \frac{m_a}{\rho_a} \sum_j \frac{1}{S_j} \mathbf{f}_j W_{aj}(h_c), \end{equation} where $\mathbf{f}_j$ is the coupling force calculated for each DEM particle using Eq.\ (\ref{Eq:demCouplingForce}). The scaling factor $S_j$ is added to ensure that the force on the fluid phase exactly balances the force on the solid particles. It is given by \begin{equation}\label{Eq:SPHCoupleForce2} S_j = \sum_b{\frac{m_b}{\rho_b} W_{jb}(h_c)}, \end{equation} where the sum is taken over all the SPH particles surrounding DEM particle $j$. For a DEM particle immersed in the fluid this will be close to unity. \subsection{Fluid-Particle Drag Laws}\label{sec:DragLaws} The drag force $\mathbf{f}_d$ depends on the superficial velocity $\mathbf{u}_s$, which is proportional to the relative velocity between the phases. If $\mathbf{u}_f$ and $\mathbf{u}_i$ are the fluid and particle velocities respectively, then the superficial velocity is defined as \begin{equation} \mathbf{u}_s = \epsilon_i (\mathbf{u}_f-\mathbf{u}_i). \end{equation} This term is used as the dependent variable in many drag laws, as it is easily measured in experiments by dividing the fluid flow rate by the cross-sectional area. In the SPH-DEM model, the fluid velocity $\mathbf{u}_f$ used to calculate the superficial velocity is found at each DEM particle position using a Shepard corrected SPH interpolation. The value of the porosity field at each DEM particle position $\epsilon_i$ is found in an identical way. The simplest drag law is the Stokes drag force \begin{equation}\label{eq:stokesDrag} \mathbf{f}_d = 3 \pi \mu d \mathbf{u}_s, \end{equation} where $d$ is the particle diameter. This is valid for a single particle in creeping flow. \citet{coulson93chemical} proposed a drag law valid for a single particle falling under the full range of particle Reynolds numbers $Re_p = \rho_f |\mathbf{u}_s| d / \mu$.
\begin{equation}\label{eq:coulson_and_richardson} \mathbf{f}_d = \frac{\pi}{4} d^2 \rho_f |\mathbf{u}_s| \left (1.84 Re_p^{-0.31}+0.293 Re_p^{0.06} \right )^{3.45}. \end{equation} For higher Reynolds numbers and multiple particles, the drag law can be generalised to \begin{equation}\label{eq:singleParticleInInfiniteDomain} \mathbf{f}_d = \frac{1}{8} C_d f(\epsilon_i) \pi d^2 \rho_f |\mathbf{u}_s|\mathbf{u}_s, \end{equation} where $C_d$ is a drag coefficient that varies with the particle Reynolds number $Re_p$, and $f(\epsilon_i)$ is the voidage function that models the interactions between multiple particles and the fluid. A popular definition for the drag coefficient was proposed by \citet{dallavalle48micromeritics} \begin{equation}\label{eq:DallavalleDrag} C_d = \left [ 0.63 + \frac{4.8}{\sqrt{Re_p}} \right ]^2. \end{equation} Di Felice proposed a voidage function based on experimental data of fluid flow through packed spheres \citep{difelice94voidage} \begin{align}\label{eq:DiFeliceDrag} &f(\epsilon_i) = \epsilon_i^{-\xi}, \\ &\xi = 3.7 - 0.65 \exp \left [ -\frac{(1.5 - \log_{10}Re_p)^2}{2} \right ].\label{eq:DiFeliceDrag2} \end{align} Both the Stokes drag term (as the simplest reference case) and the combination of Dallavalle's and Di Felice's drag terms are used in the simulations presented in this paper. Another commonly used drag term is given by a combination of the drag terms of \citet{ergun52fluid} and \citet{wen66mechanics}. For $\epsilon_i \rightarrow 1$ this term and Di Felice's are identical (over all $Re$). As the porosity decreases both drag terms generally follow the same trend, although the Ergun and Wen \& Yu model gives a larger drag force for dense systems. \section{Validation Test Cases} \label{sec:ValidationTestCases} In this section, three different sedimentation test cases are proposed and used to verify that SPH-DEM correctly models the dynamics of the two phases (fluid and solid particles) and their interactions.
\begin{enumerate} \item Single Particle Sedimentation (SPS) \item Sedimentation of a constant porosity block (CPB) \item Rayleigh Taylor Instability (RTI) \end{enumerate} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{figure2} \caption{Setup for test case SPS, single particle sedimentation in a fluid column. (Left) Perspective view, showing the fluid domain, the no-slip bottom boundary and the single spherical DEM particle. (Right) Top view; the grey area is the bottom no-slip boundary.} \label{fig:singleDiagram} \end{figure} These test cases were designed to test the particle-fluid coupling mechanics in order of increasing complexity. The first test case simply requires the correct calculation and integration of the drag force on the single particle, the single particle being too small to noticeably alter the surrounding fluid velocity. The second requires that the drag on both phases and the displacement of fluid by the particles be correctly modelled for a simple velocity field and constant porosity. The third test case does the same but with a more complicated and time-varying velocity and porosity field due to the moving particle phase. The first test case (SPS) models a single particle sedimenting in a fluid column under gravity. Figure \ref{fig:singleDiagram} shows a diagram of the simulation domain. The water column has a height of $h=0.006\text{ m}$ and the bottom boundary is constructed using Lennard-Jones repulsive particles (these particles are identical to those used by \citet{monaghan03fluid}). The boundaries in the $x$ and $y$ directions are periodic with a width of $w=0.004\text{ m}$ and gravity acts in the negative $z$ direction. The single DEM particle is initialised at $z=0.8h$. It has a diameter of $d = 1\times 10^{-4}\text{ m}$ and a density of $\rho_p = 2500\text{ kg/m}^3$. For the initial conditions of the simulation, the position of the DEM particle is fixed and the fluid is allowed to reach hydrostatic equilibrium.
The particle is then released at $t=0\text{ s}$. \begin{figure} \centering \includegraphics[width=0.37\textwidth]{figure3} \caption{Setup for test cases CPB and RTI, multiple particle sedimentation in a fluid column.} \label{fig:multipleDiagram} \end{figure} Most fluid-particle systems of interest will involve large numbers of particles, and therefore the second test case (CPB) involves the sedimentation of multiple particles through a water column. In this case, a layer of sedimenting particles is placed above a clear fluid region. Figure \ref{fig:multipleDiagram} shows the setup geometry. The fluid column is identical to the previous test case, but now the upper half of the column is occupied by regularly distributed DEM particles on a cubic lattice, with a given porosity $\epsilon$. The separation between adjacent DEM particles on the lattice is given by $\Delta r = (V/(1-\epsilon))^{1/3}$, where $V$ is the (constant) particle volume. The diameter and density of the particles are identical to the single particle case. In order to maintain a constant porosity as the layer of particles falls, the DEM particles are restricted from moving relative to each other and the layer of particles falls as a block (only translation, no rotation of the layer). The third test case (RTI) uses the same simulation domain and initial conditions as CPB, but now the particles are allowed to move freely. This setup is similar in nature to the classical Rayleigh-Taylor (RT) instability, where a dense fluid is accelerated (normally via gravity) into a less dense fluid. The combination of particles and fluid can be modelled as a two-fluid system with the upper ``fluid'' having an effective density $\rho_d$, and an effective viscosity $\mu_d$, both higher than the properties of the fluid without particles. From this an expected growth rate can be calculated for the instability and compared with the simulated growth rate. See Section \ref{sec:RTI} for more details.
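As a hedged sketch (the function name and domain handling are ours, not from the original code), the initial cubic lattice for a target porosity can be generated directly from the spacing $\Delta r = (V/(1-\epsilon))^{1/3}$:

```python
import numpy as np

def cubic_lattice(eps, d, w, z_lo, z_hi):
    """Place DEM particles of diameter d on a cubic lattice with porosity eps,
    filling the box [0, w] x [0, w] x [z_lo, z_hi].

    The lattice spacing follows dr = (V / (1 - eps))**(1/3), with V the
    (constant) particle volume, so that the solid fraction is 1 - eps.
    """
    V = np.pi * d**3 / 6.0                # volume of one spherical particle
    dr = (V / (1.0 - eps))**(1.0 / 3.0)   # spacing for the target porosity
    xs = np.arange(dr / 2.0, w, dr)       # offset by dr/2 from the boundaries
    zs = np.arange(z_lo + dr / 2.0, z_hi, dr)
    X, Y, Z = np.meshgrid(xs, xs, zs, indexing="ij")
    return np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
```

For the CPB parameters ($\epsilon=0.8$, $d=10^{-4}$ m, $w=4\times10^{-3}$ m, upper half of the column) the realised porosity of the generated block is within a fraction of a percent of the target.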
For all three test cases, three different model fluids are used to evaluate the SPH-DEM model at different fluid viscosities and particle Reynolds numbers. The densities and viscosities of these fluids correspond to the physical properties of air, water and a 10\% glycerol-water solution. \subsection{Simulation Parameters, Analytical Solutions and Timescales} \begin{sidewaystable*} \caption{Relevant parameters and timescales for the simulations using different fluids. Parameters appearing only in one column are kept constant for all fluids.} \renewcommand{\arraystretch}{1.3} \label{Tab:parameters} \footnotesize{ \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \bf{} & \bf{Notation} & \bf{Units} & \bf{Air} & \bf{Water}& \bf{Water + 10\% Glycerol}\\ \hline Box Width& $w$ &$\text{m}$& $4 \times 10^{-3}$ &&\\ Box Height& $h$ &$\text{m}$& $6 \times 10^{-3}$ &&\\ \hline Fluid Density & $\rho$ &$\text{kg/m}^3$& $1.1839$ & $1000$ & $1150$ \\ Fluid Viscosity & $\mu$ &$ \text{Pa} \cdot \text{s}$&$1.86 \times 10^{-5}$ & $8.9 \times 10^{-4}$ & $8.9\times 10^{-3} $ \\ \hline Particle Density & $\rho_p$ &$ \text{kg/m}^3$&$2500$ &&\\ Particle Diameter & $d$ &$\text{m}$& $1.0 \times 10^{-4}$ &&\\ Spring Stiffness& $k$ &$ \text{kg/s}^2$& $1.0 \times 10^{-4}$ &&\\ Spring Damping& $\beta$ &$\text{kg/s}$& $0 $ &&\\ \hline Porosity & $\epsilon$ && 0.6-1.0 &&\\ Calculated Terminal Velocity (Eq.\ \ref{eq:expectedDiFeliceTermVel})& $|\mathbf{u}_t|$& $\text{m/s}$&0.102-0.5 &$1.3\times 10^{-3}$-$7.6\times 10^{-3}$ &$ 1.3\times 10^{-4}$-$8.4\times 10^{-4}$ \\ Calculated Terminal Re Number (Eq.\ \ref{eq:expectedDiFeliceTermVel})& $Re_p$ && 0.65-3.19 & 0.15-0.85 & 0.002-0.011 \\ Archimedes Number (single particle)& $Ar$ && 83.89 &18.57& 0.192 \\ \hline Particle Contact Duration& $t_c$ &$\text{s}$& $2.54\times 10^{-3}$ && \\ Fluid CFL Condition& $t_f$ &$\text{s}$& 1.4-4.5 $\times 10^{-5}$ && \\ Fluid-particle Relaxation Time& $t_d$ &$\text{s}$& $7.47 \times 10^{-2} $ & $1.56 \times 10^{-3}$ 
& $1.56 \times 10^{-4}$ \\ \hline \end{tabular} \end{center} } \end{sidewaystable*} Table \ref{Tab:parameters} shows the parameters used in the three test cases. Each column corresponds to a different model fluid. Where a value appears only in one column, this indicates that the parameter is constant for all the fluids. The particle Reynolds number is calculated using the expected terminal velocity of either the single particle or porous block. The standard Stokes law, Eq.\ (\ref{eq:stokesDrag}), can be used to calculate the vertical speed of a single particle falling in a quiescent fluid. \begin{equation}\label{Eq:fallingParticleVel} v(t) = \frac{(\rho_p-\rho) V g}{b} \left ( 1-e^{-bt/m} \right ), \textrm{ with constant } b = 3 \pi \mu d. \end{equation} Since we are interested in a range of particle Reynolds numbers, not just at the Stokes limit, we also consider the Di Felice drag force, Eq.\ (\ref{eq:DiFeliceDrag}), which is valid for higher Reynolds numbers and varying porosity (i.e.\ it considers the interaction of multiple particles). When the buoyancy and gravity force on the falling particle balance out the drag force, the particle is falling at its terminal velocity. Equating these terms leads to a polynomial equation in terms of the particle Reynolds number at terminal velocity \begin{equation}\label{eq:expectedDiFeliceTermVel} 0.392Re_p^2 + 6.048Re_p^{1.5} + 23.04Re_p - \frac{4}{3}Ar\epsilon^{1+\xi} = 0, \end{equation} where $\xi$ is given in Eq.\ (\ref{eq:DiFeliceDrag2}) and $Ar = d^3\rho(\rho_p - \rho)g/\mu^2$ is the Archimedes number. The Archimedes number gives the ratio of gravitational forces to viscous forces. A high $Ar$ means that the system is dominated by convective flows generated by density differences between the fluid and solid particles. A low $Ar$ means that viscous forces dominate and the system is governed by external forces only. 
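The root of this polynomial can be found numerically; a minimal sketch using bisection (our own helper, with $\xi$ evaluated self-consistently at each trial $Re_p$) is:

```python
import numpy as np

def xi(Re):
    """Di Felice exponent as a function of the particle Reynolds number."""
    return 3.7 - 0.65 * np.exp(-(1.5 - np.log10(Re))**2 / 2.0)

def terminal_Re(Ar, eps=1.0, lo=1e-8, hi=1e4, n_iter=200):
    """Terminal particle Reynolds number from
    0.392 Re^2 + 6.048 Re^1.5 + 23.04 Re - (4/3) Ar eps^(1 + xi(Re)) = 0,
    solved by bisection (the drag side grows monotonically with Re)."""
    def f(Re):
        return (0.392 * Re**2 + 6.048 * Re**1.5 + 23.04 * Re
                - (4.0 / 3.0) * Ar * eps**(1.0 + xi(Re)))
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0.0 else (mid, hi)
    return 0.5 * (lo + hi)
```

With the single-particle Archimedes numbers of Table \ref{Tab:parameters} ($Ar = 83.89$ for air, $18.57$ for water and $\epsilon = 1$) this reproduces the tabulated terminal Reynolds numbers of about $3.19$ and $0.85$.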
Solving for $Re_p$, one can find the expected terminal velocity using $Re_p = \rho |\mathbf{u}_t| d / \mu$. Note that a range of porosities is used for test cases CPB and RTI, and this results in a range of particle Reynolds numbers as the terminal velocity depends on the porosity. Also included in Table \ref{Tab:parameters} are the relevant timescales for the simulations. The particle contact duration $t_c$ and fluid CFL condition $t_f$ are described in Sections \ref{sec:DEM} and \ref{sec:SPH} respectively. The fluid-particle relaxation time is the characteristic time over which a falling particle in Stokes flow approaches its terminal velocity. This is given by $t_d=m/b$ from Eq.\ (\ref{Eq:fallingParticleVel}). This relaxation time provides another minimum timestep for the SPH-DEM simulation, given by \begin{equation}\label{eq:relaxationCondition} \Delta t_{relax} \le \frac{1}{20} \frac{m}{b}. \end{equation} The physical properties of the solid DEM particles are constant over all the simulated cases. Since the results of the test cases are insensitive to the particle-particle contacts, a relatively low spring stiffness of $k = 10^{-4}\text{ kg/s}^2$ was used. This value ensures that the timestep is limited by the fluid CFL condition, rather than the DEM timestep, significantly speeding up the simulations. \section{Single Particle Sedimentation (SPS)}\label{sec:SPS} This section describes the results from SPH-DEM simulations using the first test case (SPS). We tested one- and two-way coupling between the phases, the effect of different drag laws (Stokes and Di Felice), different fluid properties (air, water and water-glycerol) and the effect of varying the fluid resolution. \subsection{One and two-way coupling in Stokes flow} For a single particle falling in Stokes flow the standard Stokes drag equation, Eq.\ (\ref{eq:stokesDrag}), can be used.
Since the Stokes drag law assumes a quiescent fluid, the force on the fluid due to the particle is set to zero ($\mathbf{f}_a=0$ in Eq.\ (\ref{Eq:SPHCoupleForce})). This implements a one-way coupling between the phases. Note that the SPH particles can still interact with the DEM particles through the porosity field, but for a single particle this effect will be negligible. \begin{figure} \centering \includegraphics[height=0.9\textwidth, angle=-90]{figure4} \caption{Normalised sedimentation velocity as a function of scaled time for a single particle in different fluids falling from rest with both one-way and two-way coupling. The dashed line is the theoretical result integrating Stokes law. The particle's vertical velocity is scaled by the expected terminal velocity $|\mathbf{u}_t|$ and time is scaled by the drag relaxation time $t_d$. The inset shows the percentage error between the SPH-DEM and the expected trajectory. The fluid resolution is set to $h=6d$, where $d$ is the particle diameter.} \label{fig:SPSoneway_water} \end{figure} In Figure \ref{fig:SPSoneway_water} the evolution of a DEM particle's vertical speed in water is shown for one-way and two-way coupling. Also shown is the expected analytical prediction using Eq.\ (\ref{Eq:fallingParticleVel}). The falling DEM particle reproduces the analytical velocity very well for both one-way and two-way coupling, and the error between the two curves is less than 1\% for the vast majority of the simulation. Note that the initial error curve reaches 5\% when the particle is first released, but this is a short-lived effect and the error drops below 1\% after a time of about $t_d$, the relaxation time for the drag force. These results indicate that the pressure gradient, calculated from the SPH model, very accurately reproduces the buoyancy force on the particle, balancing out the drag force at the correct terminal velocity.
The results are close for both one-way and two-way coupling, indicating that the drag force on the fluid has a negligible effect here. This is true as long as the fluid resolution is sufficiently larger than the DEM particle diameter (this is explored in more detail in Section \ref{sec:SPSresolution}). Figure \ref{fig:SPSoneway_water} also shows the same result for a DEM particle falling in air and in the water-glycerol mixture. For air, the drag force on the particle is much lower than for water, and the particles do not have time to reach their terminal velocity before reaching the bottom boundary, where the simulation ends. As for the previous simulation with water, there is initially a larger (approximately 4\%) underestimation of the particle vertical speed, but once again this occurs only for a very small time period and does not affect the long term motion of the particle. For the majority of the simulation the error is less than 1\% for both one-way and two-way coupling. The results for the water-glycerol fluid are qualitatively similar to water. Here the drag force on the particle is much higher than for water and the particle reaches terminal velocity very quickly. As long as the simulation timestep is modified to resolve the drag force relaxation time $t_d$ as per Eq.\ (\ref{eq:relaxationCondition}), the results are accurate. For both the one-way and two-way coupling, the simulated velocity matches the analytical velocity very well and the error remains less than 1\% for the duration of the simulation. In summary, the results for the one-way and two-way coupling between the fluid and particle for all the reference fluids are very accurate, and reproduce the analytical velocity curve within 1\% error apart from short-lived larger deviations at the initial onset of motion. All data collapse when scaled by $|\mathbf{u}_t|$ and $t_d$ for velocity and time, respectively.
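The analytical reference curve, Eq.\ (\ref{Eq:fallingParticleVel}), is straightforward to evaluate; the following sketch (parameter values for water taken from Table \ref{Tab:parameters}) also returns the terminal velocity and the drag relaxation time $t_d = m/b$ used for the scaling:

```python
import numpy as np

def stokes_sedimentation(t, d, rho_p, rho_f, mu, g=9.81):
    """Vertical speed of a sphere released from rest in Stokes flow,
    v(t) = (rho_p - rho_f) V g / b * (1 - exp(-b t / m)), with b = 3 pi mu d.

    Returns (v(t), terminal velocity, relaxation time t_d = m / b)."""
    V = np.pi * d**3 / 6.0           # particle volume
    m = rho_p * V                    # particle mass
    b = 3.0 * np.pi * mu * d         # Stokes drag coefficient
    t_d = m / b                      # drag relaxation time
    v_t = (rho_p - rho_f) * V * g / b
    return v_t * (1.0 - np.exp(-t / t_d)), v_t, t_d
```

For the water parameters ($d = 10^{-4}$ m, $\rho_p = 2500$ kg/m$^3$, $\mu = 8.9\times10^{-4}$ Pa$\cdot$s) this gives $t_d \approx 1.56\times10^{-3}$ s, matching the fluid-particle relaxation time listed in Table \ref{Tab:parameters}.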
\subsection{Effect of Fluid Resolution} \label{sec:SPSresolution} \begin{figure} \centering \includegraphics[height=0.9\textwidth, angle=-90]{figure5} \caption{The effect of fluid resolution for the SPS test case, with water as the surrounding fluid. The average percentage error between the particle terminal velocity and the analytical value is plotted against $h/d$, where $h$ is the SPH resolution and $d$ is the DEM particle diameter. The errorbars show one standard deviation from the mean.} \label{fig:termVelTwoWay_res} \end{figure} In this section we vary the fluid resolution to see its effect on the SPS results. Using water as the reference fluid, four different simulations were performed, with the number of SPH particles ranging from $10\times10\times15$ to $40\times40\times60$. Using the SPH smoothing length $h$ as the resolution of the fluid, this gives a range of $1.5d \le h \le 6d$, where $d$ is the DEM particle diameter. Figure \ref{fig:termVelTwoWay_res} shows the percentage difference between the average terminal velocity of the particle and the expected Stokes law. The error bars in this plot show one standard deviation of the fluctuations in the terminal velocity around the average, taken over a time period of $0.34$ s after the terminal velocity has been reached. The $h/d=6$ resolution corresponds to that used in the previous one- and two-way coupled simulations, and the percentage error here is similar to the one-way case, which has a mean of 0.2\% and a standard deviation of 0.8\%. As the fluid resolution is increased there is no clear trend in the average terminal velocity, but there is an obvious increase in the fluctuation of the terminal velocity around this mean. For $h/d \ge 2$, the standard deviation of these fluctuations is less than 1\%, but this quickly grows to 3\% for $h/d=1.5$. The increased error as the fluid resolution approaches the particle diameter is due to one of the main assumptions of the AVNS equations, i.e.
that the fluid resolution length scale is sufficiently larger than the solid particle diameter. In this case the smoothing operator used to calculate the porosity field is also much greater than the particle diameter and this will result in a smooth porosity field. As the fluid resolution is reduced towards the particle diameter the calculated porosity field will become less smooth, and local fluctuations in the porosity will emerge at the locations of the DEM particles. Therefore, the fluctuations in the porosity field become greater, which causes greater fluctuations in the forces on the SPH particles and leads to a noisier velocity field. Another trend (not clear in Figure \ref{fig:termVelTwoWay_res}, but visible for higher-density solid particles) is that the terminal velocity of the particle increases with finer fluid resolution. Due to the two-way coupling, the drag force on the particle will be felt by the fluid as an equal and opposite force. This will accelerate the fluid particles by an amount proportional to the relative mass of the SPH and DEM particles. For higher resolutions the mass of the SPH particles is lower, leading to an increase in vertical velocity of the affected fluid particles. Since the DEM particle's drag force depends on the velocity difference between the phases, which is now smaller, this will lead to an increase in the particle's terminal velocity. For the SPS test case shown here, the single particle exerts only a small force on the fluid, so this is not a very large effect. As the fluid resolution is increased from $h/d=6$ to $2$, there is a slight increase (on the order of 1-2\%) in the terminal velocity; for lower $h/d$ the trend is lost, likely because of the increasing noise from the fluctuations in the porosity field. \subsection{The effect of fluid properties and particle Reynolds number} We have used three different reference fluids in the simulations, corresponding to air, water and a water-glycerol mixture.
Using the SPS test case, this results in a range of particle Reynolds numbers between $0.011$ (water-glycerol) and $3.19$ (air), allowing us to explore a realistic range of particle Reynolds numbers. We have further extended this range by considering two additional (artificial) fluids with the density of water but lower viscosities, resulting in a range of $0.011 \le Re_p \le 9$. Rather than assuming Stokes flow as in the previous sections, here we will use the Di Felice drag law ($\epsilon=1$), which is assumed to be valid for all Reynolds numbers. This will be compared against fully resolved simulations using COMSOL Multiphysics (finite element analysis and simulation software, \url{http://www.comsol.com/}). \begin{figure} \centering \includegraphics[height=0.9\textwidth,angle=-90]{figure6} \caption{Error in the SPH-DEM average SPS terminal velocity at different terminal Re numbers. The fully resolved COMSOL simulation is used as reference for the error calculation. The solid red and dashed green lines show the results using either the Stokes or the Di Felice drag law. The dotted blue line shows the reference terminal velocity calculated using the Coulson and Richardson drag law \citep{coulson93chemical}. The SPH-DEM results use a fluid resolution of $h/d=6$.} \label{fig:diFeliceCompare} \end{figure} Figure \ref{fig:diFeliceCompare} shows the average error in the terminal velocity measured from the SPH-DEM simulations using both the Stokes and Di Felice drag laws, with the COMSOL results as the reference terminal velocity. Since the two drag laws are equivalent at low $Re_p$, they give the same result at $Re_p = 0.01$. As $Re_p$ increases, the plots diverge, and the simulated terminal velocity using the Stokes drag quickly becomes much larger than the COMSOL prediction (as expected since the Stokes drag law is only valid for low $Re_p$). In contrast, the Di Felice drag law results in a simulated terminal velocity that follows the same trend as the COMSOL results.
At low $Re_p$ the DEM particle falls slightly ($\sim 5$\%) faster; at higher $Re_p$ it falls slightly (3-6\%) slower. For further comparison, the COMSOL results have also been compared with the analytical drag force model proposed in \cite{khan1987resistance,coulson93chemical} and reproduced in Eq.\ (\ref{eq:coulson_and_richardson}). The expected terminal velocity was calculated using this model and plotted alongside the SPH-DEM results in Figure \ref{fig:diFeliceCompare}. As shown, the COMSOL results agree with this analytical terminal velocity to within 3.5\% over the range of $Re_p$ considered. While the results in the previous SPS sections have shown that the SPH-DEM model can accurately (within 1\%) reproduce the expected terminal velocity assuming a given drag law (Stokes), this subsection has illustrated that the final accuracy is still largely determined by the suitability of the chosen drag law. However, a full comparison of the numerous drag laws currently in the literature is beyond the scope of this paper, and for the purposes of validating the SPH-DEM model we can assume that the chosen drag law (from here on, the Di Felice law) approximates the true drag on the particles well. \section{Sedimentation of a Constant Porosity Block (CPB)}\label{sec:CPB} This section shows the results from the Constant Porosity Block (CPB) test case. In a similar fashion to the SPS case, we explore the effect of fluid resolution and fluid properties. In addition, we consider the influence of a new parameter, the porosity of the block, on the results. All the simulations in this section use two-way coupling, as the hindered fluid flow due to the presence of the solid particles is an important component of the simulation. As the porous block falls, the fluid will be displaced and flow upward through the block, affecting the terminal velocity.
All the simulations use the Di Felice drag law, which is necessary to incorporate the effects of moderate $Re$ and of neighbouring particles (lower porosity) on the drag force. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{figure7} \caption{Visualisation of the DEM particles for the Constant Porosity Block test case. On the left the DEM particles are shown coloured by porosity $\epsilon_i$, together with a transparent box representing the simulation domain. On the right the corresponding fluid velocity field is shown at $x=0$, with the arrows scaled and coloured by velocity magnitude.} \label{fig:CPBimage} \end{figure} Figure \ref{fig:CPBimage} shows an example visualisation during the simulation of a block with porosity $\epsilon=0.8$ falling in water. On the left hand side of the image are shown the DEM particles (coloured by porosity $\epsilon_i$) falling in the fluid column. The porosity of most of the DEM particles is $\epsilon=0.8$, as expected, except near the edge of the block where the discontinuity in the particle distribution is smoothed out by the kernel (with smoothing length $h_c \cong 6d$) in Eq.\ (\ref{eq:epsilonCalculation}). This results in a porosity greater than 0.8 for DEM particles whose distance from the edge of the block is less than $h_c$. We will show in subsection \ref{sec:CPBresolution} that this effect can be limited or avoided by choosing a smaller smoothing length. On the right hand side a vector plot of the velocity field at $x=0$ shows the upward flow of fluid due to the displacement of fluid by the particles as they fall. Also noticeable are the fluctuations in velocity near the edges of the block, which are discussed in more detail in subsection \ref{sec:effectOfPorosity}. Shortly after release, the vertical velocity of the CPB converges to a terminal velocity that is consistent with the expected terminal velocity, although it is slightly (less than 5\%) higher than expected.
The systematically increased terminal velocity is caused by reduced drag at the edges of the block, which arises from the finite width of the smoothing kernel. As the width of the smoothing kernel $h$ used to calculate the porosity field is larger (by a factor of 2-6, see Figure \ref{fig:multiResolution_water} for details) than the particle diameter $d$, the porosity field near the edges of the CPB will be smoothed out according to the width of the kernel. This results in a slightly higher apparent local porosity, and hence a lower drag, than would be expected with $\epsilon=0.8$. \subsection{The effect of fluid resolution}\label{sec:CPBresolution} \begin{figure} \centering \includegraphics[height=0.9\textwidth,angle=-90]{figure8} \caption{Average percentage error in the terminal velocity and average porosity of the Constant Porosity Block (CPB), with $\epsilon=0.8$ in water, for varying fluid resolution. Errorbars in the terminal velocity points show one standard deviation of the vertical velocity data from the average, taken over a time period of $0.34$ s ($\approx 50 t_d$) after the terminal velocity has been reached.} \label{fig:multiResolution_water} \end{figure} Figure \ref{fig:multiResolution_water} shows the percentage difference between the vertical velocity of the block and the expected terminal velocity. The results from five different simulations are shown, each with a different fluid resolution ranging from $h/d=6$ to $h/d=2$. The porosity is set to $\epsilon=0.8$. The $h/d=6$ simulation suffers from excessive smoothing of the porosity field near the edges of the block. Integrating the porosity field over the volume of the CPB leads to a porosity of 0.85, about 6\% higher than the true porosity of the block. This results in an increase of 22\% in the terminal velocity of the block.
Increasing the fluid resolution to $h/d=5$ causes the error to decrease to 15\%, since the interpolated porosity at the edge of the block is now closer to the set value of $\epsilon=0.8$. Further increases in the fluid resolution consistently decrease the measured terminal velocity until at $h/d=2$ the error is only 5\%. These results illustrate how the smoothing applied to the porosity field can have a dramatic effect on the accuracy of the simulations. This is largely due to the fact that the modelled drag depends only on the local (smoothed) porosity, which does not properly consider sharp porosity gradients. Thus, the accuracy of the drag law near large changes in porosity is highly dependent on the magnitude of smoothing applied to the porosity field. This is true for the Di Felice law and most other drag laws proposed in the literature. Recent work by \citet{xu07discrete} attempts to account for the influence of the porosity gradient, but we do not study this further here. \subsection{The effect of porosity}\label{sec:CPBporosity} \begin{figure} \centering \includegraphics[height=0.9\textwidth,angle=-90]{figure9} \caption{Average terminal velocity of the Constant Porosity Block (CPB) in water and water-glycerol for varying porosity and $h/d = 2$. Errorbars show one standard deviation of the vertical velocity data from the average, taken over a time period of $0.34$ s ($\approx 50 t_d$). The y-axis is scaled by $|\mathbf{u}_t|$, the expected terminal velocity of a single DEM particle given by Eq.\ (\ref{eq:expectedDiFeliceTermVel}), which corresponds to the SPS test case.} \label{fig:multiPorosity_water} \end{figure} Varying the porosity of the CPB allows us to evaluate the accuracy of the SPH-DEM model at different porosities when $h/d=2$.
Figure \ref{fig:multiPorosity_water} shows the average terminal velocity of the block, as measured from SPH-DEM simulations of the CPB over a range of porosities from $\epsilon=0.6$ to $1.0$. Results using both water and water-glycerol as the interstitial fluid are shown on the same plot by scaling the y-axis by the expected terminal velocity of a single DEM particle. The average velocity is taken after the block has reached a steady terminal velocity, and the error bars show one standard deviation of the vertical velocity from the average. Shown with the SPH-DEM results is the expected terminal velocity computed using Eq.\ (\ref{eq:expectedDiFeliceTermVel}) and the input porosity of the block. The SPH-DEM results for both water and water-glycerol match this reference line very well over the range of porosities tested. At lower porosities the vertical velocity of the CPB suffers from increasing fluctuations around the mean. This is a consequence of fluctuations seen in the surrounding fluid velocity, and will be described further in Section \ref{sec:effectOfPorosity}. In summary, the simulated terminal velocity of the CPB matched the expected value over the range of resolutions and porosities considered, as long as the resolution of the fluid phase (set by $h$) is sufficient to resolve the porosity field of the given problem. For the CPB there is a discontinuous jump at the edges of the block from the given porosity of the block to the surrounding $\epsilon=1$. We found that, as long as the fluid resolution was kept at $h=2d$, where $d$ is the DEM particle diameter (i.e., the length scale of the porosity jump), the results matched the theoretical predictions to within a 5\% overprediction. Using $h < 2d$ is not recommended due to errors caused by a non-smooth porosity field, as shown by the SPS test results in Section \ref{sec:SPS}.
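The edge-smoothing effect quantified above can be illustrated with a minimal one-dimensional sketch. The Gaussian kernel, block width, and grid used below are illustrative stand-ins (the actual simulations use a 3D SPH kernel), but the mechanism is the same: convolving a step porosity profile with a kernel of width $h$ leaks porosity deficit out of the block, raising the apparent porosity inside it, and the overprediction shrinks as $h$ decreases.

```python
import numpy as np

def apparent_block_porosity(h, d=1.0, eps_in=0.8, half_width=10.0):
    """Average apparent porosity inside a 1-D 'block' after convolving the
    step porosity field (eps_in inside, 1.0 outside) with a Gaussian kernel
    of width h*d (an illustrative stand-in for the 3D SPH kernel)."""
    dx = 0.025
    x = np.arange(-60.0, 60.0 + dx, dx)         # domain much larger than block
    eps = np.where(np.abs(x) <= half_width, eps_in, 1.0)
    s = np.arange(-4.0*h*d, 4.0*h*d + dx, dx)   # kernel support truncated at 4h
    w = np.exp(-0.5*(s/(h*d))**2)
    w /= w.sum()                                # normalize to unit integral
    eps_smooth = np.convolve(eps, w, mode='same')
    inside = np.abs(x) <= half_width
    return eps_smooth[inside].mean()

for h in (6, 4, 2):
    print(h, apparent_block_porosity(h))
```

With these illustrative numbers the widest kernel yields an apparent block porosity near 0.85, comparable in size to the 6\% overprediction reported above, and narrowing the kernel brings the apparent porosity back toward the set value of 0.8.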
\subsection{Effect of Porosity Gradients on Fluid Solution}\label{sec:effectOfPorosity} \begin{figure} \centering \includegraphics[height=0.9\textwidth,angle=-90]{figure10} \caption{Scatter plot of the vertical velocity (red dots) and porosity (green line) versus height for all the SPH particles. The test case was the CPB with a porosity of $\epsilon = 0.8$ in water as the surrounding fluid, the fluid resolution was $h/d = 2$ and $\alpha_{art}=0.1$. The snapshot is taken once the CPB has reached terminal velocity.} \label{fig:velAndPor} \end{figure} In the previous section it was shown how the smoothing of the porosity discontinuity of the block slightly affected the drag on the DEM particles and the final terminal velocity of the block. In this section we show how the high porosity gradients near the edge of the block also give rise to further effects on the SPH solution for the fluid. Figure \ref{fig:velAndPor} shows the vertical velocity and porosity for all the SPH particles in a CPB simulation with fluid resolution $h/d=2$ and porosity $\epsilon=0.8$, plotted against the vertical position of the SPH particles. The porosity field is rather smooth and clearly shows the location of the CPB. However, there are fluctuations in the vertical velocity of the SPH particles near the edges of the block, much larger than the rather small average (positive) velocity inside the block. These fluctuations are present to different degrees in all of the SPH-DEM simulations, and their magnitude is proportional to the local porosity gradient. Therefore, their effect is strongest for the simulations with low porosity or fine fluid resolution (i.e., small $h$). Given the correlation of these fluctuations with high porosity gradients, they likely originate from errors in the SPH pressure field. It is well known, e.g.\ \citep{colagrossi03numerical}, that SPH solutions can exhibit spurious fluctuations in the pressure field, which normally have little or no effect on the fluid velocity.
For our simulations the pressure of each SPH particle is proportional to $(\rho/\epsilon \rho_0)^7$ and is therefore very sensitive to changes in $\epsilon$. It is likely that for high porosity gradients the pressure variations that are normally present are amplified and generate correspondingly large fluctuations in the velocity field. As long as the fluctuations do not grow too large, they do not affect the mean flow of the fluid, as evidenced by the reproduction of the expected terminal velocity in the previous sections. To ensure simulation accuracy, it was found that the application of an artificial viscosity with strength $\alpha_{art} = 0.1$, see Eq.\ (\ref{Eq:monaghansViscousTerm}), was enough to damp out the velocity fluctuations so that they did not have a significant effect on the results. This value of $\alpha_{art}$ was used in all of the CPB simulations shown here. The artificial viscosity has little effect on the settling velocity of the SPS or CPB, since this viscosity is only applied between SPH particles and is not included in the fluid-particle coupling term (Eq.\ \ref{Eq:demCouplingForce}). However, for systems where the fluid viscosity plays an important role (e.g., the Rayleigh-Taylor instability), it does have an effect, which will be described in the next section. \section{Rayleigh-Taylor Instability (RTI)} \label{sec:RTI} The classic Rayleigh-Taylor fluid instability is seen when a dense fluid is accelerated into a less dense fluid, for example, under the action of gravity. Consider a fluid column in which a dense fluid with density $\rho_d$ and viscosity $\nu_d$ is located above a lighter fluid with parameters $\rho_f$ and $\nu_f$. For the RTI test case, the lower and higher density fluids are represented by the pure fluid and the suspension, respectively.
If the height of the interface between the two fluids is perturbed by a normal mode disturbance with a certain wave number $k$ (see Figure \ref{fig:rayleightaylor_diagram} and Eq.\ (\ref{eq:normal_mode_disterbance})), then this disturbance will grow exponentially with time. \begin{figure} \centering \includegraphics[height=0.6\textwidth]{figure11} \caption{Diagram showing a cross-section of the initial setup for the Rayleigh-Taylor Instability (RTI) test case. The upper grey area is the particle-fluid suspension with effective density and viscosity $\rho_d$ and $\nu_d$, the lower white region is clear fluid with density and viscosity $\rho_f$ and $\nu_f$. The suspension is given an initial vertical perturbation with wave number $k$ and amplitude $d/4$.} \label{fig:rayleightaylor_diagram} \end{figure} The two-fluid model of the Rayleigh-Taylor instability was derived in the authoritative text by \citet{chandrasekhar61hydrodynamic}. The exponential growth rate $n(k)$ of a normal mode disturbance with wave number $k$ at the interface between the two fluids (with zero surface tension) is characterised by the dispersion relation \citep{chandrasekhar61hydrodynamic} \begin{align} &- \left [ \frac{gk}{n^2} (\alpha_f - \alpha_d) + 1 \right ] (\alpha_d q_f + \alpha_f q_d - k) - 4k \alpha_f \alpha_d \nonumber \\ &+ \frac{4k^2}{n} (\alpha_f \nu_f - \alpha_d \nu_d) [\alpha_d q_f - \alpha_f q_d + k(\alpha_f - \alpha_d)] \nonumber \\ &+ \frac{4k^3}{n^2} (\alpha_f \nu_f - \alpha_d \nu_d)^2 (q_f - k)(q_d-k) = 0, \label{eq:dispersionRT} \end{align} where $\nu_{f,d}=\mu_{f,d}/\rho_{f,d}$ is the kinematic viscosity of the two phases, $\alpha_{f,d} = \rho_{f,d}/(\rho_f+\rho_d)$ is a density factor, and $q^2_{f,d}=k^2+n/\nu_{f,d}$ is a convenient abbreviation. For this test case, we use the same initial condition as in the CPB test case, with a block of particles immersed in the fluid with an initial porosity of $\epsilon=0.8$.
Using the density of the surrounding fluid $\rho_f$, the effective density of the fluid-particle suspension is $\rho_d=\epsilon \rho_f + (1-\epsilon) \rho_p$. The effective viscosity of the suspension $\mu_{d}$ is estimated here using Krieger's hard sphere model \citep{krieger59mechanism} (assumed to be valid for both dilute and dense suspensions) \begin{equation}\label{Eq:krieger} \mu_{d} = \mu_{f} \left ( \frac{\epsilon-\epsilon_{min}}{1-\epsilon_{min}} \right )^{-2.5(1-\epsilon_{min})}, \end{equation} where $\epsilon_{min} = 0.37$ is the porosity at the maximum packing of the solid particles. We generate an initial disturbance in the interface between the two ``fluids'' by adding a small perturbation to the vertical position of every DEM particle \begin{equation}\label{eq:normal_mode_disterbance} \Delta z_i = -\frac{d}{4} (1-\cos(k_x x_i))(1-\cos(k_y y_i)), \end{equation} where $k_x=k_y=2\pi/w$ and $x_i$ and $y_i$ are the coordinates of particle $i$. This yields a symmetric disturbance in the interface with a wave length equal to the box width $w$, identical to that of the dominant mode. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{figure12} \caption{Visualisation of the DEM particles (left) and the fluid velocity field (right) at $x=0$ in the y-z plane, for the Rayleigh-Taylor (RT) test case at $t=0.37$, using $\epsilon=0.8$ and water-glycerol as the surrounding fluid. The growth rate for this simulation versus time can be seen in Figure \ref{fig:multiGrowthRate_water_glycerol}.} \label{fig:RTVis} \end{figure} Figure \ref{fig:RTVis} shows the positions of the DEM particles during the growth of the instability, along with the fluid velocity field at $x=0$. At this time there is a strong fluid circulation that moves downward in the centre of the domain and upward at the corners (not visible in this cut).
This causes the growth of the instability by increasing the sedimentation speed of the DEM particles near the centre, while reducing or even reversing the sedimentation of those particles near the outer boundaries of the domain. The movement of the DEM particles matches the expected behaviour of the instability. Next, we attempt to compare the SPH-DEM results quantitatively with the growth rate predicted by the analytical two-fluid model. \begin{figure} \centering \includegraphics[height=0.9\textwidth,angle=-90]{figure13} \caption{Growth of the Rayleigh-Taylor instability using water. The red pluses and green crosses show the position of the lowest DEM particle when the artificial viscosity is either added or not. The two reference lines show the growth rate predicted by the two-fluid model, using the lowest and highest porosity of the CPB.} \label{fig:multiGrowthRate_water} \end{figure} Figure \ref{fig:multiGrowthRate_water} shows the growth of the RT instability versus time for $\epsilon=0.8$ and fluid resolution $h/d=2$, using water as the surrounding fluid. The symbols give the vertical position of the lowest DEM particle, which provides an approximate measure of the instability amplitude relative to an initially unperturbed situation. The vertical displacement of this point over time can be compared with the estimated growth rate for the RT instability as given by the two-fluid model in Eq.\ (\ref{eq:dispersionRT}). The growth rate of the instability is added to the expected sedimentation speed given by Eq.\ (\ref{eq:expectedDiFeliceTermVel}) to calculate the expected trajectory of the lowest DEM particle. Using the parameters of the simulation and solving for the growth rate leads to the growth curve given by the lowest blue dashed line.
While a constant porosity of $0.8$ is used for the two-fluid RTI model, the porosity of the DEM particles ranges over $0.8 \le \epsilon \le 0.86$ at $t=0$ (initial conditions), and the porosity at the leading front of the instability grows over time, reaching a value of 0.93 at the time shown in Figure \ref{fig:RTVis} and a maximum value of 0.95 before the instability reaches the bottom boundary. We use the analytical model to obtain upper and lower bounds on the instability growth. The upper bound is calculated using $\epsilon=0.8$ (the blue dashed line) and the lower bound (slower growth) is calculated using $\epsilon=0.93$, which gives the purple dashed line. The two-fluid model is included here as a benchmark, but it should be noted that this model contains some significant approximations in treating the particle suspension as an equivalent fluid, and is not necessarily more accurate than the SPH-DEM results. The SPH-DEM results are shown for the cases where the artificial viscosity is either applied ($\alpha_{art}=0.1$) or not used ($\alpha_{art}=0.0$). In both cases there is a clear exponential growth of the RT instability, and only the quantitative growth rate differs between the two simulations. Without the artificial viscosity, the (exponential) growth rate lies between the two bounds. After $t=0.15$ s the growth rate becomes slower than the upper bound, but by this time the bottom of the instability is close to the bottom boundary, and we do not expect the two-fluid model (which assumes small perturbations and an unbounded domain) to apply. With artificial viscosity, the growth rate of the instability is decreased and becomes slower than both reference bounds. \begin{figure} \centering \includegraphics[angle=-90,width=0.9\textwidth]{figure14} \caption{Growth of the Rayleigh-Taylor instability using water-glycerol. The red pluses and green crosses show the position of the lowest DEM particle when the artificial viscosity is either added or not.
The two reference lines show the growth rate predicted by the two-fluid model, using the lowest and highest porosity of the CPB.} \label{fig:multiGrowthRate_water_glycerol} \end{figure} Figure \ref{fig:multiGrowthRate_water_glycerol} shows the same results, but using water-glycerol as the interstitial fluid. In this case the physical viscosity of the fluid is proportionally greater than the artificial viscosity applied, and therefore the addition of the artificial viscosity has a smaller effect. For both $\alpha_{art}=0.1$ and $\alpha_{art}=0.0$ the growth rate of the instability lies between the two bounds, except when the DEM particles reach the bottom of the domain and large-amplitude and wall effects dominate. While it is encouraging that the SPH-DEM results closely match the expected growth of the RT instability, the results highlight the negative effect of the artificial viscosity when it is used in problems where the fluid or suspension viscosity is important. It is therefore desirable to develop other approaches to reduce the velocity fluctuations near high porosity gradients, and this is the subject of current work. However, it is important to note that for the majority of applications the addition of a small amount of artificial viscosity has no significant effect on the results and is successful in eliminating the problematic velocity fluctuations. See \citet{colagrossi03numerical,gomez2010state,monaghan1994simulating} for further examples where a similar SPH artificial viscosity has been successfully applied. In summary, the results from the RTI simulations using water-glycerol show that the SPH-DEM simulation can accurately reproduce the Rayleigh-Taylor instability. The addition of an artificial viscosity, while successful in damping the spurious velocity fluctuations, increases the effective viscosity of the system and slightly reduces the growth rate of the instability.
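The two-fluid benchmark above requires the effective suspension properties as inputs. As a minimal numerical sketch (the material parameters below, roughly glass beads in water, and the box width are illustrative assumptions, not the exact values of the simulations), one can evaluate the effective density, the Krieger viscosity of Eq.\ (\ref{Eq:krieger}), and the inviscid-limit growth rate $n=\sqrt{gk(\alpha_d-\alpha_f)}$, which bounds the viscous growth rate of Eq.\ (\ref{eq:dispersionRT}) from above:

```python
import math

def suspension_properties(eps, rho_f, mu_f, rho_p, eps_min=0.37):
    """Effective density and Krieger effective viscosity of the suspension."""
    rho_d = eps*rho_f + (1.0 - eps)*rho_p
    mu_d = mu_f*((eps - eps_min)/(1.0 - eps_min))**(-2.5*(1.0 - eps_min))
    return rho_d, mu_d

# illustrative parameters: glass beads (rho_p ~ 2500 kg/m^3) in water
rho_f, mu_f, rho_p = 1000.0, 1.0e-3, 2500.0
rho_d, mu_d = suspension_properties(0.8, rho_f, mu_f, rho_p)

# inviscid upper bound on the RT growth rate for the dominant mode k = 2*pi/w
g, w = 9.81, 0.01                          # gravity and an assumed box width
k = 2.0*math.pi/w
atwood = (rho_d - rho_f)/(rho_d + rho_f)   # equals alpha_d - alpha_f
n_inviscid = math.sqrt(g*k*atwood)
print(rho_d, mu_d, n_inviscid)
```

Viscosity enters only as a correction: solving the full dispersion relation (e.g., by root finding in $n$) always returns a growth rate below this inviscid estimate, consistent with the slower growth observed when the artificial viscosity raises the effective viscosity.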
\section{Conclusion} We have presented an SPH implementation of the locally averaged Navier-Stokes equations and coupled it with a DEM model in order to provide a simulation tool for two-way coupled fluid-particle systems. One notable property of the resulting method is that it is completely particle-based and avoids the use of a mesh. It is therefore suitable for applications where a mesh presents additional problems, for example, free surface flow or flow around complex, moving and/or intermeshed geometries \citep{robinson2012dispersion}. Furthermore, as the second main contribution of this study, we proposed a validation procedure with test cases of increasing complexity (which can also be applied to other methods). The SPH-DEM formulation was used for 3D single and multiple particle sedimentation problems and compared against analytical solutions for validation. For single particle sedimentation (SPS) the simulations reproduced the analytical solutions very well, with less than 1\% error over a wide range of Particle Reynolds Numbers $0.011 \le Re_p \le 9$ and fluid resolutions. Only when the fluid resolution became less than two times the particle diameter did the results start to diverge from the expected solution. For the multiple particle sedimentation test case using the Constant Porosity Block (CPB), the SPH-DEM method reproduced the expected terminal velocity of the block to within a 5\% overprediction, over a range of porosities $0.5< \epsilon <1.0$ and Particle Reynolds Numbers $0.002 \le Re_p \le 0.85$. The overprediction of the terminal velocity is due to smoothing of the porosity field near the edges of the block and reduces with a finer fluid resolution. This error can be considered acceptable, given the much lower computational cost of SPH-DEM with respect to the more accurate simulations that can be obtained using a finely resolved FEM or Lattice Boltzmann method.
Further results from the CPB test case showed fluctuations in the velocity of the SPH particles near the edges of the block, which are likely due to fluctuations in the pressure field being amplified by sudden changes in porosity. Adding a small amount of artificial viscosity to the simulations was sufficient to damp these fluctuations and prevent them from affecting the terminal velocity of the block. The Rayleigh-Taylor Instability (RTI) test case successfully reproduced the instability and its growth rate for both water and water-glycerol. For this test case the addition of artificial viscosity was not necessary for stability, owing to the relatively high porosity $\epsilon=0.8$ and the lower porosity gradients at the interface between the suspension and the clear fluid. Overall, the SPH-DEM model successfully reproduced the expected results for the analytical test cases over a wide range of Reynolds Numbers and porosities, and promises to be a flexible and accurate tool for modelling particle-fluid systems. Current work is addressing the SPH velocity fluctuations near high porosity gradients, and promising results have already been obtained by either calculating the drag separately on the fluid or re-deriving the SPH equations from a Lagrangian formulation. In the future, the method will be applied to the dispersion of solids in liquid or liquid-gas environments \citep{robinson2012dispersion}. Other relevant directions for future development are: (i) the choice of appropriate drag laws (e.g., for polydisperse flows) and the inclusion of added mass and lift forces; (ii) more realistic DEM particle contact forces; (iii) the inclusion of contact friction and lubrication forces; and (iv) the inclusion of surface tension effects. \section*{Acknowledgment} This work was supported by the PARDEM (www.pardem.eu) collaboration, which is an EU Funded Framework 7, Marie Curie Initial Training Network.
We acknowledge the use of the cluster supported by the two STW grants ``A Numerical Wave Tank for Complex Wave and Current Interactions'' of Bokhove and Van der Vegt and ``Polydispersed Granular Flows through Inclined Channels'' of Bokhove, Kuipers, Van der Vegt and Luding. \bibliographystyle{model2-names}
\section{Introduction} Magnetohydrodynamic (MHD) turbulence in the early Universe can be a powerful source of gravitational waves (GWs) that could be observable as a stochastic background today \citep{1994PhRvD..49.2837K,PhysRevLett.85.2044,2002PhRvD..66b4030K,Dolgov+02,2018CQGra..35p3001C}. The frequency spectrum of these waves is related to the spectrum of the underlying turbulence. Such turbulence could be induced by the various phase transitions in the early Universe \citep{PhysRevD.30.272,10.1093/mnras/218.4.629,Mazumdar_2019,10.21468/SciPostPhysLectNotes.24} or by the possible presence of primordial magnetic fields \citep{1988PhRvD..37.2743T,tanmay1991, 1992ApJ...391L...1R,durrer2013,subramanian2016,tanmay2021,Bran+He+Shar21}. GWs produced by turbulence at the epoch of the electroweak phase transition lie in the sensitivity range of the proposed Laser Interferometer Space Antenna, while those produced by turbulence around the epoch of the quantum chromodynamics (QCD) phase transition lie in the sensitivity range of pulsar timing arrays. Recently, various pulsar timing arrays \citep{NANOGrav2020, Goncharov_2021, Chen2021, Antoniadis2022} have reported evidence for the presence of a common spectrum process across the analyzed pulsars in searches for an isotropic stochastic GW background. This evidence has been used to constrain the strength and correlation length of the magnetic fields generated at the QCD epoch \citep{2021PhRvD.103L1302N,Sharma21,RoperPol+22}. However, the presence of a quadrupolar spatial correlation \citep{Hellings&Downs}, a characteristic of a GW background, is yet to be claimed. Numerical simulations have confirmed that there is indeed a direct connection between the slopes of the turbulence and GW spectra \citep{RoperPol+20}, except that at low frequencies, below the peak of the spectrum, the GW spectrum was found to be shallower in the simulations than what was previously expected from analytical calculations.
However, there is a concern that this shallow tail could be caused by unknown numerical artifacts such as the finite size of the computational domain and the way the turbulence is initiated in the simulations. To address the problem of a limited computational domain, it is important to inspect the detailed temporal dynamics of the different spatial Fourier modes of the stress. In an alternative approach, the authors of Ref.~\cite{RoperPol+22} have recently compared numerical simulations with an analytic model, in which the stress is constant for a certain interval of time that is related to the eddy turnover time corresponding to the peak wavenumber at the initial time. Their model predicts a flat spectrum whose extent depends on the duration over which the stress is held constant. In this way, it was possible to determine an effective duration for a given numerical simulation. Their model is therefore descriptive rather than predictive. In another recent approach, the authors of Ref.~\cite{Auclair:2022jod} have focused on the importance of unequal-time correlation functions of the Fourier components of the velocity field for purely hydrodynamic turbulence. While the authors acknowledge the potential importance of the initial growth phase of the turbulence, they also show that there is no inverse cascade in their simulations. This is different from MHD turbulence, which can display inverse cascading even in the absence of net magnetic helicity. This will be crucial to the approach discussed in the present paper. In the simulations of Ref.~\cite{RoperPol+22}, the wavenumber corresponding to the peak of the GW spectrum, as well as the wavenumbers below it down to the one corresponding to the horizon size at the initial time, are well resolved. Since the stress appears explicitly in the linearized GW equation, we also provide the evolution of the stress spectrum for these simulations.
Second, we develop a simple model, motivated by the stress evolution seen in simulations, to explain the GW spectrum obtained in simulations. In this model, our main focus is to understand the nature of the GW spectrum below the wavenumber corresponding to the peak of the spectrum. We call this part the low frequency tail of the GW spectrum. We emphasize that the Hubble horizon wavenumber poses an ultimate cutoff for the flat spectrum toward low wavenumbers. This paper is organized as follows. In \Sec{TheModel}, we discuss the evolution of the magnetic field, stress, and GW spectrum in our new runs. In this section, we also discuss how the stress spectrum evolves in the inverse transfer and inverse cascade turbulence regimes, which correspond to the evolution of nonhelical and helical magnetic fields in the early Universe, respectively. In \Sec{simplemodel}, we discuss the model to explain the low frequency tail of the GW spectrum. Further, in \Sec{comparison}, we compare the GW spectrum obtained from our numerical simulations with that of our model. We conclude in \Sec{conclusion}. \section{Nonhelical and helical cascades} \label{TheModel} Various phenomena, such as primordial magnetic fields and phase transitions, can lead to the generation of turbulence in the early Universe. The stress associated with magnetic fields and turbulence leads to the production of GWs. This has been studied in the literature both analytically \citep{Dolgov+02,kosowsky2002,Gogo+07,tina2008} and numerically \citep{RoperPol+20,RoperPol+21,Kahniashvili+21,RoperPol+22,Auclair:2022jod}. In the present paper, we perform new simulations of decaying MHD turbulence, in which we resolve scales larger than the Hubble horizon size at the initial time. Before explaining the simulations in detail, let us begin by summarizing the basic equations.
\subsection{GWs from MHD turbulence} \label{GWfromCME} We follow here the formalism of Refs.~\cite{RoperPol+20b, RoperPol+20}, where conformal time is normalized to unity at the initial time. One could associate this with the electroweak phase transition, for example. The velocity $\mathbf{u}$ is normalized to the speed of light. The magnetic field $\mathbf{B}=\mbox{\boldmath $\nabla$} {}\times\mathbf{A}$ is written in terms of the magnetic vector potential $\mathbf{A}$, and the current density is written as $\mathbf{J}=\mbox{\boldmath $\nabla$} {}\times\mathbf{B}$. Following Ref.~\cite{BEO96}, the energy density $\rho$ includes the rest-mass density, so its evolution equation obeys a continuity equation that also includes magnetic energy terms. As in \cite{RoperPol+20b}, $\rho$ is normalized to the critical energy density for a flat Universe. We solve for the Fourier transformed plus and cross polarizations of the gravitational strain, $\tilde{h}_+$ and $\tilde{h}_\times$, which are driven by the corresponding projections of the stress, which, in turn, is composed of kinetic and magnetic contributions, \begin{equation} {\sf T}_{ij} =\frac{4}{3}\gamma_{\rm Lor}^2\rho u_i u_j-B_i B_j+..., \end{equation} where $\gamma_{\rm Lor}=(1-\mathbf{u}^2)^{-1/2}$ is the Lorentz factor, and the ellipsis denotes terms proportional to $\delta_{ij}$, which do not contribute to the projected source $\tilde{T}_{+/\times}$. Assuming the Universe to be conformally flat, its expansion can be scaled out by working with conformal time $t$ and comoving variables \citep{BEO96}. We use the fact that in the radiation-dominated era, the scale factor grows linearly with conformal time. The only explicit occurrence of conformal time is then in the GW equation, where a $6/t$ factor occurs in the source term \citep{RoperPol+20b}.
The full set of equations is therefore \begin{eqnarray} &&\frac{\partial\mathbf{B}}{\partial t}= \mbox{\boldmath $\nabla$} {}\times[\mathbf{u}\times\mathbf{B}-\eta\mbox{\boldmath $\nabla$} {}\times\mathbf{B}],\label{dAdt}\\ &&{{\rm D} {}\mathbf{u}\over{\rm D} {} t}= {1\over\rho}\mbox{\boldmath $\nabla$} {}\cdot\left(2\rho\nu\mbox{\boldmath ${\sf S}$} {}\right)-{1\over4}\mbox{\boldmath $\nabla$} {}\ln\rho +{\mathbf{u}\over3}\left(\mbox{\boldmath $\nabla$} {}\cdot\mathbf{u}+\mathbf{u}\cdot\mbox{\boldmath $\nabla$} {}\ln\rho\right) \nonumber \\ &&\qquad\quad-{\mathbf{u}\over\rho} \left[\mathbf{u}\cdot(\mathbf{J}\times\mathbf{B})+\eta \mathbf{J}^2\right] +{3\over4\rho}\mathbf{J}\times\mathbf{B}, \label{dudt} \\ &&{\partial\ln\rho\over\partial t} =-\frac{4}{3}\left(\mbox{\boldmath $\nabla$} {}\cdot\mathbf{u}+\mathbf{u}\cdot\mbox{\boldmath $\nabla$} {}\ln\rho\right) +{1\over\rho}\left[\mathbf{u}\cdot(\mathbf{J}\times\mathbf{B})+\eta \mathbf{J}^2\right]\!, \nonumber \\ &&\frac{\partial^2}{\partial t^2} \tilde{h}_{+/\times} (\mathbf{k}, t) +k^2\tilde{h}_{+/\times} (\mathbf{k}, t) = {6\over t} \tilde{T}_{+/\times}(\mathbf{k},t), \label{GW4} \end{eqnarray} where ${\rm D} {}/{\rm D} {} t\equiv\partial/\partial t+\mathbf{u}\cdot\mbox{\boldmath $\nabla$} {}$ is the advective derivative, $\eta$ is the magnetic diffusivity, $\nu$ is the kinematic viscosity, and ${\sf S}_{ij}={\textstyle{1\over2}}(u_{i,j}+u_{j,i})-{\textstyle{1\over3}}\delta_{ij}\mbox{\boldmath $\nabla$} {}\cdot\mathbf{u}$ are the components of the rate-of-strain tensor $\mbox{\boldmath ${\sf S}$} {}$, with commas denoting partial derivatives. Fourier transformation in space is denoted by a tilde. In all cases studied in this paper, the initial conditions are such that $\mathbf{B}$ consists of a weak Gaussian-distributed seed magnetic field, $\mathbf{u}=0$, and $\rho=1$. We work with spectra that are defined as integrals over concentric shells in wavenumber space $\mathbf{k}$ with $k=|\mathbf{k}|$.
They are normalized such that their integrals over $k$ give the mean square of the corresponding quantity, i.e., $\int\mbox{\rm Sp}(\mathbf{B})\,{\rm d} {} k=\bra{\mathbf{B}^2}$, where $\mbox{\rm Sp}(\mathbf{B})=\mbox{\rm Sp}(B_x)+\mbox{\rm Sp}(B_y)+\mbox{\rm Sp}(B_z)$. Likewise, $\mbox{\rm Sp}(\mbox{\boldmath ${\sf h}$} {})=\mbox{\rm Sp}(h_+)+\mbox{\rm Sp}(h_\times)$ is defined as the sum over the two polarization modes. Of particular interest will also be the stress spectrum $\mbox{\rm Sp}(\mbox{\boldmath ${\sf T}$} {})$, which is defined analogously through $\mbox{\rm Sp}(\mbox{\boldmath ${\sf T}$} {})=\mbox{\rm Sp}(T_+)+\mbox{\rm Sp}(T_\times)$. To study the evolution of the stress at selected Fourier modes, we compute $|T(k,t)|\equiv\sqrt{\mbox{\rm Sp}(\mbox{\boldmath ${\sf T}$} {})/4\pi k^2}$, which scales the same way as $|\tilde{T}_+(k,t)|$ and $|\tilde{T}_\times(k,t)|$. \subsection{Evolution of the stress and strain spectra} \label{EvolutionOfStress} To put our results into perspective and compare with earlier work, we study cases of suddenly initiated turbulence. We perform simulations similar to those of Ref.~\cite{RoperPol+20} by using as initial condition for the magnetic field a random Gaussian-distributed magnetic field with a $k^4$ spectrum for $k<k_{\rm p}$ and a $k^{-5/3}$ spectrum for $k>k_{\rm p}$. For details of such a magnetic field, see Ref.~\cite{Bran+17}. As initial condition for the GW field, we assume that $h$ and $\dot{h}$ vanish. The strength of the GW field is then strongly determined by the sudden initialization of a fully developed turbulence spectrum. The details of the simulations are given in \Tab{table1}.
In this table, the first column gives the name of the run, ${\cal E}_{\rm M}^{i}$ is the initial magnetic energy density normalized by the background energy density, $k_{\rm p}$ is the wavenumber at which the magnetic energy spectrum peaks, normalized by the wavenumber corresponding to the Hubble horizon size at the initial time, ${\cal E}_{\rm GW}^{\rm sat}$ is the GW energy density after saturation, again normalized by the background energy density, and ${\Omega}_{\rm GW}^{\rm sat}$ is the density parameter of GWs, i.e., the ratio of the GW energy density to the critical energy density at present. ${\Omega}_{\rm GW}^{\rm sat}$ has been calculated assuming that the GWs are produced around the electroweak phase transition. \begin{figure*}\begin{center} \includegraphics[width=\textwidth]{pstress_etc_LowFreq_sig1} \end{center}\caption{ Spectra of the magnetic field, the TT-projected stress, the strain derivative, and the strain for suddenly initiated turbulence with magnetic helicity. }\label{pstress_etc_LowFreq_sig1}\end{figure*} \begin{figure*}\begin{center} \includegraphics[width=\textwidth]{pstress_etc_LowFreq_sig0} \end{center}\caption{ Same as \Fig{pstress_etc_LowFreq_sig1}, but for the nonhelical case. }\label{pstress_etc_LowFreq_sig0}\end{figure*} In \Figs{pstress_etc_LowFreq_sig1}{pstress_etc_LowFreq_sig0}, we show spectra of the magnetic field, the TT-projected stress, the strain derivative, and the strain for runs with and without magnetic helicity, respectively. Inverse cascading is seen in the magnetic energy spectra, which leads to the expected increase of the spectral stress at small $k$; see \Figsp{pstress_etc_LowFreq_sig1}{a}{b} for the helical case. We also see in \Figp{pstress_etc_LowFreq_sig1}{c} that the GW energy spectrum has a maximum at $k\sim20$, which is not present in the nonhelical case; cf.\ \Figp{pstress_etc_LowFreq_sig0}{c}.
Their spectra fall off toward smaller $k$ proportional to $k$ and $k^{1.5}$ in the helical and nonhelical cases, respectively. \begin{table}\caption{ Summary of simulation parameters. }\begin{center} \begin{tabular}{ccccc} \hline Run & ${\cal E}_{\rm M}^i$ &$k_{\rm p}$ & ${\cal E}_{\rm GW}^{\rm sat}$ & ${\Omega}_{\rm GW}^{\rm sat}$\\ \hline hel & $5.4\times10^{-3}$ &$10$& $3.7\times10^{-7}$ & $5.9\times10^{-12}$\\ \hline nonhel & $5.5 \times 10^{-3}$ &$10$& $3.5\times10^{-7}$ & $5.6\times10^{-12}$\\ \hline \label{table1}\end{tabular} \end{center} \end{table} \begin{figure*}\begin{center} \includegraphics[width=\textwidth]{rslice_stress_plot_LowFreq.eps} \end{center}\caption{ Modulus and phase of $\tilde{T}(k,t)$ and $\dot{\tilde{h}}(k,t)$ for the helical case for $\mathbf{k}=(k,0,0)$ with $k=0.3$ (orange), 0.4 (red), 0.5 (green), 0.6 (blue), and 0.7 (black). The inset shows the phase with a linear abscissa. }\label{rslice_stress_plot_LowFreq}\end{figure*} \begin{figure*}\begin{center} \includegraphics[width=\textwidth]{rslice_stress_plot_LowFreq_nohel} \end{center}\caption{ Same as \Fig{rslice_stress_plot_LowFreq}, but for the nonhelical case. }\label{rslice_stress_plot_LowFreq_nohel}\end{figure*} \begin{figure*}\begin{center} \includegraphics[scale=0.7]{helical_with_and_wo_phase.eps} \includegraphics[scale=0.7]{nonhelical_with_and_wo_phase.eps} \end{center}\caption{$\mbox{\rm Sp}(\dot{\tilde{h}})(k,t)$ vs $k$: (a) The solid black and blue curves represent $\mbox{\rm Sp}(\dot{\tilde{h}})(k,t)$ at times $t=1.5$ and $t=37$ for run hel. The dashed red and orange curves show $\mbox{\rm Sp}(\dot{\tilde{h}})(k,t)$ for the case when the stress spectrum has been replaced by its modulus in the GW evolution equation.
(b) Same as (a), but for run nonhel.} \label{with_and_wo_phase} \end{figure*} In \Figs{rslice_stress_plot_LowFreq}{rslice_stress_plot_LowFreq_nohel}, we compare stress, strain derivative, and strain spectra for helical and nonhelical runs with similar values of the initial Alfv\'en speed of about 0.1. In the helical case with inverse cascading, the stress increases with time at small $k$. By contrast, in the nonhelical case, the stress always decreases at small $k$. In spite of these differences, the GW spectra are not very different in the two cases. Both show a drop for $k<1$ and a nearly flat spectrum in the interval $1<k<10$, which is below the peak at $2k_{\rm p}=20$. In \Figs{rslice_stress_plot_LowFreq}{rslice_stress_plot_LowFreq_nohel}, we also show the evolution of the phase, $\arg(\tilde{T})$, for different $k$ values. From these figures, it is evident that $\arg(\tilde{T})$ remains constant for some time and starts evolving more rapidly after that. It is also interesting to note that the amplitude $|\dot{\tilde{h}}|$ increases as long as $\arg(\dot{\tilde{h}})$ remains roughly constant. After this time, $|\dot{\tilde{h}}|$ enters an oscillatory regime and its amplitude does not change much. This conclusion applies to both runs shown in \Figs{rslice_stress_plot_LowFreq}{rslice_stress_plot_LowFreq_nohel} and leads us to develop a simple model, discussed in \Sec{simplemodel}, in which we replace the stress $\tilde{T}$ by its magnitude $|\tilde{T}|$. Further, to understand the role of the phases of the stress tensor in the production of GWs, we run two new simulations analogous to Runs~hel and nonhel, in which we replace $\tilde{T}(k,t)$ with its modulus at each time step. The final GW spectra in these new runs turn out to be the same as in Runs~hel and nonhel; they are shown in \Fig{with_and_wo_phase}.
The comparisons for the helical and nonhelical runs are shown in parts (a) and (b) of this figure, respectively. The dashed red and orange curves, at times $t=1.5$ and $t=37$, respectively, are for the case when $\tilde{T}(\mathbf{k},t)$ has been replaced by its modulus. It is evident from the figure that there is hardly any difference between the actual $\mbox{\rm Sp}(\dot{\tilde{h}})$ and the spectra obtained after replacing the stress with its modulus. On the basis of this observation, we develop a model to obtain the GW spectrum from the time evolution of the spectrum of the stress tensor. A striking difference between the helical and nonhelical cases is the more pronounced peak in the spectral GW energy in the helical case. As we show in \App{appendixa}, this is because the stress spectrum in the helical case differs from that in the nonhelical case through additional helical contributions to the two-point correlation of the magnetic field. This difference is shown in \Fig{stress_spectrum_helvsnonhel} and the details are explained in the next section. \begin{figure} \begin{center} \includegraphics[scale=0.6]{helical_vs_nonhelical.eps} \end{center}\caption{ Magnetic energy spectra $E_{\rm M}(k)$ (dashed curves) and $\mbox{\rm Sp}(\tilde{T})$ (solid curves) for the helical and nonhelical cases. The blue and red curves are for the nonhelical and helical cases, respectively. }\label{stress_spectrum_helvsnonhel} \end{figure} \begin{figure*}\begin{center} \includegraphics[scale=0.7]{helical1.eps} \includegraphics[scale=0.7]{helical2.eps} \end{center}\caption{ Left: solutions for $\mbox{\rm Sp}(\mbox{\boldmath ${\sf T}$} {}(k))$ (red) for different $E_{\rm M}(k)$ (blue) for three values of $k_{\rm p}$. Right: solutions for $\mbox{\rm Sp}(\mbox{\boldmath ${\sf T}$} {}(k))$ scaled by $k_{\rm p}$ (blue) and $k_{\rm p}^{-8/3}$ (red), to see its scalings in the subinertial and inertial ranges, respectively.
}\label{pcascade}\end{figure*} \begin{figure*}\begin{center} \includegraphics[scale=0.7]{nonhelical1.eps} \includegraphics[scale=0.7]{nonhelical2.eps} \end{center}\caption{ Similar to \Fig{pcascade}, but for a case with $\beta=1$. On the right, the solutions for $\mbox{\rm Sp}(\mbox{\boldmath ${\sf T}$} {}(k))$ are scaled by $k_{\rm p}$ (blue) and $k_{\rm p}^{14/3}$ (red), to see its scalings in the subinertial and inertial ranges, respectively. }\label{pcascade_beta2}\end{figure*} \subsection{Overall behavior of the stress} \label{stress_evolution} At the most minimalistic level, we can say that the magnetic field shows an approximately self-similar evolution at late times, where for the helical case, the peak value of $E_{\rm M}(k,t)$ is unchanged, but the position of the peak $k_{\rm p}$ goes to progressively smaller values as $k_{\rm p}\sim t^{-2/3}$. To understand the consequences for the evolution of the stress, let us now consider an idealized model, where $E_{\rm M}(k)\equiv\mbox{\rm Sp}(\mathbf{B})/2$ has a $k^4$ subinertial range for $k<k_{\rm p}$, with $k_{\rm p}(t)$ being the peak wavenumber, and a $k^{-5/3}$ inertial range spectrum for $k>k_{\rm p}$. The spectrum of the transverse traceless part of the stress, $\mbox{\rm Sp}(\mbox{\boldmath ${\sf T}$} {})/2$, can be computed analytically using the expressions given in \App{appendixa} \citep[for details, see][]{CDK04,Sharma+20} and is shown in \Figs{pcascade}{pcascade_beta2} for the helical and nonhelical cases, respectively. In these figures, we take three instances where the magnetic peaks are at wavenumbers $k_{\rm p}=1$, 0.3, and $0.1$. For the helical case, the peak value of $E_{\rm M}(k)$ is unchanged. We see that, in agreement with earlier work \citep{RoperPol+20}, the positions of the peak of $\mbox{\rm Sp}(\mbox{\boldmath ${\sf T}$} {})$ are always at $2k_{\rm p}$.
However, even though the peak values of $E_{\rm M}(k)$ are unchanged, those of $\mbox{\rm Sp}(\mbox{\boldmath ${\sf T}$} {})$ are not, but decay. Nevertheless, at small $k$, $\mbox{\rm Sp}(\mbox{\boldmath ${\sf T}$} {})$ still increases proportional to $k_{\rm p}^{-1}$. If $k_{\rm p}\propto t^{-2/3}$, as expected for helical turbulence \citep{Hat84,BM99,BK17}, we find that $\mbox{\rm Sp}(\mbox{\boldmath ${\sf T}$} {})\propto t^{2/3}$ for small $k$. For the nonhelical case, as shown in Ref.~\cite{BK17}, the peak of the spectrum decreases with decreasing values of $k_{\rm p}$ proportional to $k_{\rm p}^\beta$, where $\beta$ is an exponent that can be between one and four. In \Fig{pcascade_beta2}, we present the case with $\beta=1$ and find that now $\mbox{\rm Sp}(\mbox{\boldmath ${\sf T}$} {})(k)\propto k_{\rm p}$ for small $k$ and $\propto k_{\rm p}^{14/3}$ for large $k$. If $k_{\rm p}\propto t^{-1/2}$, as expected for the nonhelical case with $\beta=1$, then $\mbox{\rm Sp}(\tilde{T})\propto t^{-1/2}$ for small $k$. Recently, it has been found that the Saffman helicity invariant is well conserved in nonhelical magnetically dominated decaying turbulence \citep{hosking20}, which implies $\beta=1.5$. For the general case, we write $\mbox{\rm Sp}(\tilde{T})\propto k_{\rm p}^{2\beta-1}$ (for $k<k_{\rm p}$), which implies $\mbox{\rm Sp}(\tilde{T})\propto t^{-8/9}$ for $\beta=1.5$ and $k_{\rm p}\propto t^{-4/9}$. It is also interesting to note that, for a given magnetic field spectrum (blue and red dashed curves in \Fig{stress_spectrum_helvsnonhel}), $\mbox{\rm Sp}(\mbox{\boldmath ${\sf T}$} {})$ is different for the helical and nonhelical cases. The blue and red curves are for the nonhelical and helical cases, respectively. In the helical case, $\mbox{\rm Sp}(\mbox{\boldmath ${\sf T}$} {})$ has smaller values than in the nonhelical case at wavenumbers below $k_{\rm p}$, but larger values around the peak and above.
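The exponent chains above can be tallied with exact fractions; the following quick check (pure arithmetic, no new physics) confirms the $t^{2/3}$, $t^{-1/2}$, and $t^{-8/9}$ scalings:

```python
from fractions import Fraction as F

def stress_time_exponent(beta, kp_exp):
    """Given Sp(T) ~ k_p^(2*beta-1) at small k and k_p ~ t^kp_exp,
    return the exponent of t in Sp(T)."""
    return (2*beta - 1) * kp_exp

# helical case: Sp(T) ~ k_p^(-1) and k_p ~ t^(-2/3)  ->  Sp(T) ~ t^(2/3)
print((-1) * F(-2, 3))                            # 2/3
# nonhelical, beta = 1: k_p ~ t^(-1/2)             ->  Sp(T) ~ t^(-1/2)
print(stress_time_exponent(F(1), F(-1, 2)))       # -1/2
# nonhelical, beta = 3/2 (Saffman): k_p ~ t^(-4/9) ->  Sp(T) ~ t^(-8/9)
print(stress_time_exponent(F(3, 2), F(-4, 9)))    # -8/9
```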
Such a feature of the stress spectrum also translates to the GW spectrum, which is why we see a difference in the final GW spectra produced in the helical and nonhelical cases discussed in the previous section. \section{Predictions from algebraically growing stress}\label{simplemodel} With the detailed information above, we are now in a position to compare with the predictions from a simple time-dependent model. In this section, we compute GW spectra by considering a simple model for the time evolution of the stress. The stress is assumed to evolve algebraically, as a power law with index $p$, during the time interval from $t=1$ to $t_{\rm e}$. \subsection{The model} We model the $+$ and $\times$ polarizations of the Fourier-transformed stress, $\tilde{T}(k,t)$, as \begin{equation}\label{stresscase2} \tilde{T}(k,t)= \left\{\begin{array}{ll} \tilde{T}_0(k) \, t^{p} , \quad & 1\le t \le t_{\rm e}, \\ 0, & t > t_{\rm e}, \end{array}\right. \end{equation} where $\tilde{T}_0(k)\equiv\sqrt{\langle\tilde{T}_{ij}^{TT}(\mathbf{k})\tilde{T}_{*ij}^{TT}(\mathbf{k})\rangle}$ represents $|\tilde{T}(k,t)|$ at the initial time and is obtained for given energy and helicity spectra of the magnetic field; see \App{appendixa} for details. We note that the authors of Ref.~\cite{RoperPol+22} have developed an analytical model for the GW spectrum on the basis of the time evolution of the stress, which, unlike in our case, they assumed to be constant during a certain interval. The authors explain the location of certain breaks in their GW spectrum as a consequence of the finite duration over which the stress is constant. This duration is an empirical input parameter. In our model, by contrast, the stress evolves as a power law with an index that is in principle known from MHD theory, although we can get even better agreement with the simulations when we take the actual power-law index that is realized in the simulations.
\begin{figure*} \includegraphics[scale=0.7]{model.eps} \includegraphics[scale=0.7]{model_fig2.eps} \caption{ (a) $\mbox{\rm Sp}(\dot{\tilde{h}})(k,t)$ at different times. Here, we assume $t=t_{\rm e}$ and $E_{\rm M}=c (k/k_{\rm p})^4/(1+(k/k_{\rm p})^{17/3})$, where $k_{\rm p}=10$, $c=10^{-4}$, and $p=-1/4$. The red, blue, and black curves are for $t_{\rm e}=2$, $4$, and $10$, respectively. The two black vertical lines correspond to $k_{*}$ and $2 k_{\rm p}$. (b) $\mbox{\rm Sp}(\dot{\tilde{h}})$ at times $t_{\rm e}=10$, $20$, and $30$. }\label{results_from_model} \end{figure*} To obtain the GW spectrum for our model, we first solve \Eq{GW4} for a case when the source is active during the interval $1<t<t_{\rm e}$ and thus obtain $\tilde{h}(k,t)$ and $\dot{\tilde{h}}(k,t)$. The solution for $t\ge t_{\rm e}$ is given by \begin{align} \tilde{h}(k,t)&=\int_1^{t} \frac{\sin k(t-t')}{k} \, \frac{6\tilde{T}(k,t')}{t'} \, dt', \\ \dot{\tilde{h}}(k,t)&=\int_1^{t} \cos k(t-t') \, \frac{6\tilde{T}(k,t')}{t'} \, dt'. \label{hdot} \end{align} Using \Eq{hdot} and our model for $\tilde{T}(k,t)$, we obtain \begin{align} \dot{\tilde{h}}(k,t)&=\frac{-3 \tilde{T}_0(k)}{(k t_0)^p}\Big\{e^{i(kt-p \pi/2)}\big[\Gamma(p,ikt_{\rm e})-\Gamma(p,ikt_0)\big]\nonumber\\ &+e^{-i(kt-p \pi/2)}\big[\Gamma(p,-ikt_{\rm e})-\Gamma(p,-ikt_0)\big]\Big\}.\label{full_solution} \end{align} In the above expression, $t_0=1$ represents the initial time. In \Fig{results_from_model}(a), we show $\mbox{\rm Sp}(\dot{\tilde{h}})$ at different times for this model with $p=-1/4$. The red, blue, and black curves represent $\mbox{\rm Sp}(\dot{\tilde{h}})$ at $t=2$, $4$, and $10$, respectively. It is evident from this figure that $\mbox{\rm Sp}(\dot{\tilde{h}})$ is almost flat for $1\la k\la 2 k_{\rm p}$ and declines as $\propto k^{-11/3}$ for $k> 2k_{\rm p}$.
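As a numerical consistency check of \Eq{full_solution} (an illustrative sketch with arbitrary parameter values, using mpmath because the incomplete gamma function is needed for complex arguments), the closed form for $t\ge t_{\rm e}$ can be compared against direct quadrature of \Eq{hdot}:

```python
import mpmath as mp

def hdot_closed(k, p, t, t0=1.0, te=10.0, T0=1.0):
    """Closed form (Eq. full_solution) for T(k,t') = T0*t'^p on [t0, te];
    the second term of the equation is the complex conjugate of the first."""
    phase = mp.exp(1j*(k*t - p*mp.pi/2))
    g = mp.gammainc(p, 1j*k*te) - mp.gammainc(p, 1j*k*t0)
    return mp.re(-3*T0/(k*t0)**p * (phase*g + mp.conj(phase*g)))

def hdot_quad(k, p, t, t0=1.0, te=10.0, T0=1.0):
    """Direct quadrature of Eq. (hdot) with the source active on [t0, te]."""
    f = lambda tp: mp.cos(k*(t - tp)) * 6*T0*tp**(p - 1)
    return mp.quad(f, [t0, te])

k, p, t = 2.0, -0.25, 15.0
print(hdot_closed(k, p, t), hdot_quad(k, p, t))   # the two values agree
```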
$\mbox{\rm Sp}(\dot{\tilde{h}})$ is proportional to $k^2$ for $k<k_{\rm H}$, where $k_{\rm H}$ represents the wavenumber corresponding to the Hubble horizon size at $t=t_{\rm e}$. Further, as time increases, $\mbox{\rm Sp}(\dot{\tilde{h}})$ at low wavenumbers ($k_{\rm H} < k < 1$) grows and saturates, as is evident from \Fig{results_from_model}(b). To understand the role of the power-law index of the algebraically evolving stress on $\mbox{\rm Sp}(\dot{\tilde{h}})$, we calculate $\mbox{\rm Sp}(\dot{\tilde{h}})$ for different values of $p$. These are shown in \Fig{sphdot_for_different_gamma}. Here, $\mbox{\rm Sp}(\dot{\tilde{h}})$ is rapidly oscillating, so we plot in this figure only its envelope. From this figure, we conclude that $\mbox{\rm Sp}(\dot{\tilde{h}})$ can be divided into three regimes. We first discuss the high-wavenumber regime ($k>k_{0}$, regime~I), where $k_{0}$ represents the wavenumber corresponding to the Hubble horizon at the initial time. Here, $\mbox{\rm Sp}(\dot{\tilde{h}})$ is flat and steepens to $k^{-11/3}$ for $k>2k_{\rm p}$. For very low wavenumbers corresponding to the superhorizon range ($k<k_{\rm H}$, regime~III), $\mbox{\rm Sp}(\dot{\tilde{h}})$ is proportional to $k^2$. In the intermediate regime ($k_{\rm H}\lesssim k\lesssim (1-p)/t_0$, regime~II), $\mbox{\rm Sp}(\dot{\tilde{h}})$ changes from a flat spectrum to a $k^2$ spectrum as the wavenumber decreases. Note that, as the wavenumber decreases, the transition from a flat spectrum to a $k^2$ spectrum is faster for $p=-1/4$ than for $p=1/3$. The wavenumber at which this transition occurs depends on the value of $p$ and can be understood as follows.
In the algebraically evolving phase, the typical time scale over which $\tilde{T}/t$ decays is $\delta t_T\sim t/(1-p)$, and the typical time scale for sourcing GWs at a given wavenumber $k$ just after $t=1$ is $\delta t_{\rm GW}\sim 1/k$, as can be inferred from the cosine function in \Eq{hdot}. The value of $\tilde{T}/t$ does not change much when $\delta t_{\rm GW}/\delta t_T\le 1$. This implies that, for $k>(1-p)/t_0$, there will be a finite interval during which $\tilde{T}/t$ can be assumed constant. However, for $k<(1-p)/t_0$, $\tilde{T}/t$ always changes. The wavenumber $k\sim (1-p)/t_0$ is therefore where $\mbox{\rm Sp}(\dot{\tilde{h}})$ starts to deviate from a flat spectrum. \begin{figure} \begin{center} \includegraphics[scale=0.8]{sp_hdot_for_different_p.eps} \caption{$\mbox{\rm Sp}(\dot{\tilde{h}})$ for different values of $p$. Here, the left, middle, and right black vertical lines represent the wavenumbers corresponding to the horizon size at the final time $t$ and at the initial time $t_0=1$, and the wavenumber $2k_{\rm p}$, respectively.} \label{sphdot_for_different_gamma} \end{center} \end{figure} The nature of $\mbox{\rm Sp}(\dot{\tilde{h}})$ can also be understood by writing the expression for $\dot{\tilde{h}}(k,t)$, given in \Eq{full_solution}, in different limits depending on the values of $kt_0$ and $kt_{\rm e}$. For $t_{\rm e}\gg t_0$, which is indeed the case, and $p<1$, \Eq{full_solution} reduces to \begin{equation} \frac{\dot{\tilde{h}}(k,t)}{6 \tilde{T}_0(k)}\approx \left\{\begin{array}{lr} \frac{\sin{k(t-t_0)}}{kt_0} \quad \text{(I)},\\[5pt] \frac{\Gamma[p]}{(kt_0)^p}\cos\left({kt-\frac{p\pi}{2}}\right)-\frac{\cos{kt}}{p} \quad \text{(II)},\\[5pt] \frac{\cos{kt}}{p}\left[\left(\frac{t_{\rm e}}{t_0}\right)^p-1\right] \quad \text{(III)}. \end{array}\right.
\end{equation} Using this, we calculate the spectrum of $\dot{\tilde{h}}$; it is given by \begin{equation} \frac{\mbox{\rm Sp}(\dot{\tilde{h}}(k,t))}{36 \tilde{T}_0^2(k)}\approx \left\{\begin{array}{lr} \left[\frac{\sin{k(t-t_0)}}{t_0}\right]^2 \quad \text{(I)}, \\[5pt] k^2\Big[-\frac{\cos{kt}}{p}+\frac{\Gamma[p]}{(kt_0)^p}\cos({kt-\frac{p\pi}{2}})\Big]^2 \, \text{(II)}, \\[5pt] k^2\Big\{\frac{\cos{kt}}{p}\left[\left(\frac{t_{\rm e}}{t_0}\right)^p-1\right]\Big\}^2 \quad \text{(III)}. \end{array}\right.\label{approx_solution} \end{equation} From the above expression, we conclude that the break points for the different slopes of $\mbox{\rm Sp}(\dot{\tilde{h}})$ are determined by $\tilde{T}_0^2(k)$ for $k>t_0^{-1}$. Here, $\tilde{T}_0^2(k)$ is flat for $k<2 k_{\rm p}$ and proportional to $k^{-11/3}$ for $k>2 k_{\rm p}$. For the superhorizon modes, i.e., $k<1/t_{\rm e}$, $\mbox{\rm Sp}(\dot{\tilde{h}})$ is proportional to $k^2$, and for wavenumbers $t_{\rm e}^{-1}<k<t_0^{-1}$, $\mbox{\rm Sp}(\dot{\tilde{h}})$ changes from a flat spectrum to $k^2$, as shown by the blue curves in \Fig{sphdot_for_different_gamma}. \begin{figure*} \begin{center} \includegraphics[scale=0.7]{t_evolution_helical.eps} \includegraphics[scale=0.7]{t_evolution_nonhelical.eps} \caption{ $\tilde{T}(k,t)$ vs.\ $t$. (a) The blue curve shows the time evolution of $\tilde{T}(k,t)$ obtained from the simulation at $k=3$ for the helical case, and the red curve shows a broken power-law fit to the blue curve. (b) Same as (a), but for the nonhelical case. \label{t_evolution} }\end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.7]{LowFreq1024sig1_comparison_broken.eps} \includegraphics[scale=0.7]{LowFreq1024sig0_comparison_broken.eps} \caption{ $\mbox{\rm Sp}(\dot{\tilde{h}})$ vs $k$: (a) The blue curve represents $\mbox{\rm Sp}(\dot{\tilde{h}})$ corresponding to the run shown in \Fig{pstress_etc_LowFreq_sig1}.
The red and green curves represent the spectra obtained in the model for the time evolution of $\tilde{T}(k,t)$ given in column 2 of \Tab{table2}. The orange curves represent $\mbox{\rm Sp}(\dot{\tilde{h}})$ for the values of $p$ given in column~1 of \Tab{table2}. The red curves are for the case when $|\tilde{T}_0(k)|$ is obtained using \Eq{tijnh} of \App{appendixa}. For the green curves, $|\tilde{T}_0(k)|$ is taken from the simulation. (b) Same as (a), but for the nonhelical case. }\label{helical_comparison} \end{center} \end{figure*} In this model, we take the same algebraic evolution with a constant power-law index for all wavenumbers. In general, however, the time evolution of $\tilde{T}(k,t)$ is different for wavenumbers below and above the peak of $\mbox{\rm Sp}(\tilde{T})(k,t)$. For the case of helical magnetic fields discussed in \Fig{pcascade}, the value of $\mbox{\rm Sp}(\tilde{T})$ at a particular $k<2k_{\rm p}$ at the initial time grows as $t^{2/3}$ until the time at which $\mbox{\rm Sp}(\tilde{T})$ peaks at this particular wavenumber. After this time, the value of $\mbox{\rm Sp}(\tilde{T})$ at this particular $k$ starts decreasing as $t^{-16/9}$. This would amount to assuming $\tilde{T}(k)\propto t^{1/3}$ for $k<k_{\rm p}$ and $\tilde{T}(k)\propto t^{-8/9}$ for $k>k_{\rm p}$. For the nonhelical magnetic field shown in \Fig{pcascade_beta2}, $\mbox{\rm Sp}(\tilde{T})$ is always decreasing. For $k<2 k_{\rm p}$ at the initial time, it first decreases as $t^{-1/2}$ and later switches to $t^{-7/3}$. We have studied $\mbox{\rm Sp}(\dot{\tilde{h}})$ incorporating this more detailed evolution and find that the final $\mbox{\rm Sp}(\dot{\tilde{h}})$ does not differ from the one obtained in our model. For this reason, we kept the model simple and did not include the aforementioned evolution of the stress.
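The superhorizon limit, regime~III of \Eq{approx_solution}, can be verified directly against the quadrature of \Eq{hdot} for $kt_{\rm e}\ll1$; the parameter values in the following sketch are arbitrary:

```python
import mpmath as mp

# Regime III of Eq. (approx_solution): for k*te << 1,
# hdot/(6*T0) ~ (cos(k*t)/p) * ((te/t0)^p - 1)
k, p, t, t0, te = 1e-3, -0.25, 50.0, 1.0, 10.0
exact = mp.quad(lambda tp: mp.cos(k*(t - tp)) * tp**(p - 1), [t0, te])
approx = mp.cos(k*t)/p * ((te/t0)**p - 1)
print(exact, approx)   # agree to O(k*te)
```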
\section{Comparison of the analytical model with simulation results} \label{comparison} In the above section, we discussed $\mbox{\rm Sp}(\dot{\tilde{h}})$ in a model inspired by the fact that the GW spectrum does not change if we replace the stress tensor by its modulus for decaying MHD turbulence in the early Universe. In this model, we approximate the stress tensor by $|T(k,t)|\equiv\sqrt{\mbox{\rm Sp}(\mbox{\boldmath ${\sf T}$} {})/4\pi k^2}$ and its time evolution is parameterized as a power law with index $p$; see \Fig{t_evolution}. In this section, we compare $\mbox{\rm Sp}(\dot{\tilde{h}})$ obtained in this model with the simulation results discussed in \Sec{EvolutionOfStress}. To compare, we show $\mbox{\rm Sp}(\dot{\tilde{h}})$ obtained from the model and the different simulations together. In \Figsp{helical_comparison}{a}{b}, we plot the spectra for the runs shown in \Figs{pstress_etc_LowFreq_sig1}{pstress_etc_LowFreq_sig0} and discussed in \Sec{EvolutionOfStress}. For the dashed orange curve, $\tilde{T}_0$ is obtained by using \Eq{tijh} of \App{appendixa}, where we take $E_{\rm M}=c (k/k_{\rm p})^4/(1+(k/k_{\rm p})^{17/3})$ and the value of the constant $c$ is determined such that the obtained stress spectrum matches that of the simulation at $t=1$. For this case, the evolution of the stress spectrum is modeled as a single power law, and the value of $p$ is $1/3$ and $-1/4$ for the helical and nonhelical cases, respectively. The value of $p$ is determined from the time evolution of the low-wavenumber tail of $\mbox{\rm Sp}(\tilde{T})$, as discussed in \Sec{stress_evolution}. From this figure, we conclude that the spectral shape of $\mbox{\rm Sp}(\dot{\tilde{h}})$ is well matched by the prediction from the model. However, there is a difference at small wavenumbers, especially for the nonhelical case.
This is because modeling $\mbox{\rm Sp}(\tilde{T})$ by a single power law does not provide a good fit to the evolution obtained in the simulation for the nonhelical case. A double power law of the form $t^{-1/3}/(1+(t-1)^n)^{5/7n}$, where $n$ regulates the transition (here $n=10$), provides a better fit to the low-frequency tail of $\mbox{\rm Sp}(\tilde{T})$ for the nonhelical case; see \Fig{t_evolution}. In this figure, the blue curve shows $\tilde{T}(k,t)$ obtained from the simulation at wavenumber $k=3$, and the red curve shows the double power-law fit to this curve. The double power law that fits $\tilde{T}(k,t)$ in the helical case is $1/(1+(t-0.2)^n)^{5/24n}$, where $n=20$. After considering such a time evolution, the obtained $\mbox{\rm Sp}(\dot{\tilde{h}})$ is shown as the dashed red curve in \Fig{helical_comparison}(b). For the dashed green curve, we consider $\tilde{T}_0(k)=\sqrt{\mbox{\rm Sp}(\mbox{\boldmath ${\sf T}$} {})/4\pi k^2}$, where $\mbox{\rm Sp}(\mbox{\boldmath ${\sf T}$} {})$ is obtained from the simulation at $t=1$. These different forms of the time evolution of $\tilde{T}(k,t)$ are tabulated in \Tab{table2}. We notice that the GW spectrum in the helical case is different from that in the nonhelical case. For the same strength of the initial magnetic field, there is more power around the peak of the GW spectrum in the helical case than in the nonhelical case. This is due to an additional term in the stress spectrum arising from the helicity spectrum.
\begin{table}\caption{ Time dependence of $\tilde{T}(k,t)$ taken in our analysis. }\begin{center} \begin{tabular}{c|c|c} &\multicolumn{1}{c|}{from MHD theory}& \multicolumn{1}{c}{from MHD simulations}\\ Run& $\tilde{T}/\tilde{T}_0$& $\tilde{T}/\tilde{T}_0$ \\ \hline hel& $\left(\frac{t}{t_0}\right)^{1/3}$& $\frac{1}{(1+(t-0.2)^n)^{5/24n}}$ \\ \hline nonhel& $\left(\frac{t}{t_0}\right)^{-1/4}$& $\frac{t^{-1/6}}{(1+(t-1)^n)^{5/14n}}$ \\ \hline \label{table2}\end{tabular} \end{center} \end{table} \section{Conclusions}\label{conclusion} In this work, we have suggested a simple model to understand the GW spectrum obtained for decaying MHD turbulence in the early Universe. The Fourier-transformed stress is taken to be $|\tilde{T}(k,t)|$, i.e., we ignore changes in the phase, and its time evolution is parameterized by a power law. Such a time evolution of the stress is motivated by the low-wavenumber behavior in the simulations of decaying MHD turbulence discussed in \Sec{EvolutionOfStress}. We find that the spectral shape of the GW spectrum is well represented by this simple model. In this work, we also show that the GW spectra in the helical case are different from those in the nonhelical case. Apart from the polarization of GWs, this spectral difference may also be important in distinguishing the helical and nonhelical nature of the primordial magnetic field. In this work, we have developed a model to understand the low-frequency tail of the GW spectrum in cases where turbulence is initiated suddenly. However, it will be more interesting to study cases where the magnetic field is generated self-consistently, such as through the chiral magnetic effect in the early Universe \citep{Roga_etal17,Schober+18}. It would be interesting to see whether a model such as the one discussed in this paper can also explain the GW spectra obtained through the chiral magnetic effect. We hope to report on this in a future study.
\begin{acknowledgements} RS would like to thank Hongzhe Zhou for help with analyzing the output data with the Mathematica routines of the {\sc Pencil Code}. This work was supported by the Swedish Research Council (Vetenskapsr\aa det, 2019-04234). Nordita is sponsored by Nordforsk. We acknowledge the allocation of computing resources provided by the Swedish National Allocations Committee at the Center for Parallel Computers at the Royal Institute of Technology in Stockholm and Link\"oping. \end{acknowledgements} \vspace{2mm} {\bf Data availability}---The source code used for the simulations of this study, the {\sc Pencil Code}, is freely available from Ref.~\cite{JOSS}. The simulation setups and the corresponding data are freely available from Ref.~\cite{DATA}.
\section{Modeling error suppression through PIS and PIP} Thermal fluctuations are significant on the molecular scale, and we describe transcription as a stochastic hopping process between well defined states, with transition rates set by the intervening free-energy barriers~\cite{risken_fokker-planck_1996}. Following Hopfield~\cite{hopfield_kinetic_1974}, we take the error suppression to be achieved through a sequence of serially connected, energy-consuming, molecular-scale, error-correcting checkpoints. The quality of a checkpoint is judged by its error fraction $r$, and the quality of several sequential checkpoints is given by the product of individual error fractions $r_1\cdot r_2\cdot r_3\cdot \ldots$ (see supplemental information). Error suppression in transcription involves several checkpoints, divided into two classes: PIS and PIP~\cite{sydow_rna_2009}. Contrary to the situation for the DNA polymerase, both types of checkpoints are controlled by the same multifunctional active region inside the RNA polymerase (RNAP)~\cite{kettenberger_architecture_2003, opalka_structure_2003}. The PIS process likely involves several steps~\cite{sydow_rna_2009} before the incoming NTP establishes the correct Watson-Crick base pairing with the DNA template and is catalyzed onto the growing RNA molecule~\cite{sydow_rna_2009, cramer_gene_2007}. As discrimination in the states prior to catalysis is limited by the free-energy cost $\Delta G_{\rm act}$ of binding the wrong base to the template DNA strand within the polymerase, $r_{\rm PIS}\ge \exp(-\Delta G_{\rm act}/k_{\mathrm{B}}T)$. From direct nucleotide discrimination studies, $r_{\rm PIS}$ has been shown to be $1/10^3-1/10^2$~\cite{Svetlov:2004dc}, corresponding to an average $\Delta G_{\rm act}\approx 6k_{\mathrm{B}}T$. Utilizing PIS alone, sequences of no more than a few hundred base pairs (bp) can be reliably transcribed without errors.
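These orders of magnitude can be reproduced in a few lines (the 300-bp gene length below is an illustrative choice, not a value from the text):

```python
import math

dG_act = 6.0                       # Delta G_act in units of k_B T
r_pis = math.exp(-dG_act)          # lower bound on the PIS error fraction
print(r_pis)                       # ~2.5e-3, inside the measured 1e-3..1e-2

# probability that an L-bp transcript is error-free with PIS alone
L = 300                            # hypothetical gene length
print((1 - r_pis) ** L)            # ~0.47: a few hundred bp is the limit

# serial checkpoints multiply: PIS followed by PIP
print(1e-3 * 1e-2)                 # combined error fraction ~1e-5
```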
To increase fidelity past $r_{\rm PIS}$, and be able to faithfully transcribe longer genes, RNAP has evolved the ability to proofread the transcript by selectively removing already incorporated bases~\cite{sydow_rna_2009,thomas_transcriptional_1998, erie_multiple_1993}. The successive action of both PIS and PIP is known to bring the combined error fraction $r_{\rm PIS}r_{\rm PIP}$ down to around $1/10^{5}$~\cite{rosenberger_frequency_1983, blank_rna_1986, mercoyrol_accuracy_1992}. From the estimates of the PIS efficiency mentioned above, we expect half of the error suppression to reside in PIP: $r_{\rm PIP}=1/10^3-1/10^2$. Led by experimental results, we now set out to quantitatively explain how this is achieved in a physiologically relevant setting through the use of extended, backtracked pauses. To highlight the benefits and implications of an extended backtracked state space, we first consider the case of only one backtracked state, and later contrast it with the physiologically more relevant case of multiple states. \subsection{Proofreading through backtracking} It is well established that an erroneous base can be cleaved from the growing transcript once the polymerase has entered what is known as a backtracked state\footnote{There is some evidence in the literature for an intermediate state between elongation and backtracking~\cite{herbert_E._2010}. However, the rates for traversing this state are similar to those for entering the backtrack, and adding such a state does not change the general dynamics of the model.} (see Figure~\ref{fig:back}A): an off-pathway state where the whole polymerase is displaced backward along the transcript~\cite{shaevitz_backtracking_2003, galburt_backtracking_2007}. Within the polymerase, the template DNA and nascent RNA strands form an 8-9 bp hybrid.
As the polymerase shifts backward, this hybrid remains in register by breaking the last formed bond and reforming an old bond at the opposite end of the hybrid~\cite{shaevitz_backtracking_2003, galburt_backtracking_2007,wang_structural_2009} (see Figure~\ref{fig:back}B and C). This exposes already incorporated bases to the active site, blocking further elongation but enabling cleavage of the most recently added base (catalyzed by the transcription factor IIS in eukaryotes and GreA and GreB in prokaryotes)~\cite{fish_promoting_2002,borukhov_bacterial_2005,jeon_fidelity_1996,awrey_yeast_1998,opalka_structure_2003, kettenberger_architecture_2003, sosunov_unified_2003, thomas_transcriptional_1998}. If cleaved, a potential error is removed, the active site is cleared, and elongation can resume. The cleavage process competes with spontaneous recovery from the backtrack~\cite{galburt_backtracking_2007}, by which the polymerase returns to the elongation-competent state without removing the potential error (see Figure~\ref{fig:back}C). In order for cleavage from the backtracked state to lower the error content, the cleavage reaction must select for erroneous bases. The inability of incorrectly matched bases to form proper Watson-Crick base pairing within the RNA-DNA hybrid provides this selectivity. If an error has been catalyzed onto the 3'-end of the nascent RNA molecule, the total energy of the transcription complex is lowered if the RNAP moves into a backtrack (see Figure~\ref{fig:back}D). Doing this, the RNAP extrudes the unmatched base pair from the hybrid and so returns to the low-energy state of perfect Watson-Crick base pairing within the entire hybrid (see Figure~\ref{fig:back}C). When the polymerase is in a backtracked state, the last added base is exposed to the active site and can be cleaved off. \begin{figure}[htb!] \begin{center} \includegraphics[width=\columnwidth]{Figure1.pdf} \end{center} \caption{\label{fig:back} {\bf Single-state backtracking}.
A) The basic hopping model coupling one-step backtracking to elongation. The repetitive unit is highlighted, with the off-pathway backtracked state indicated as BT. After entering a backtrack, elongation can resume either through cleavage out to a previous state of the chain (NMP)$_{n-1}$ or by recovery without cleavage to the entrance state (NMP)$_{n}$. B) Schematic illustration of the repeat unit with a correct base incorporated last. The template strand, the nascent transcript, and the hybrid region of the polymerase are shown. The polymerase can enter a backtrack with rate $k_{\rm bt}$ or add a base to the transcript with rate $k_{\rm cat}$. From the backtracked state, recovery by cleavage occurs with rate $k_{\rm clv}$, while realigning without cleavage occurs at a rate $k_{\rm rec}$. C) Same as B, but with an incorrect base at the growing 3'-end of the transcript. The corresponding rates are indicated with the superscript ${\rm I}$. D) Sketch of the free-energy landscape corresponding to B and C. Solid black line corresponds to the last base correct; dashed red line corresponds to the last base incorrect. $\Delta G_{\rm act}$ refers to the free-energy increase at the active site when the last incorporated base is wrong, while $\Delta G_{\rm cat}$ denotes the corresponding increase in the barrier to catalysis (cat). Recovery without cleavage occurs at a rate $k_{\rm bt}$, which places all selectivity in the entrance step to the backtrack (see text). E) Three traces simulated with a Gillespie algorithm: a typical polymerizing RNAP ($k_{\rm cat}=10/$s, $k_{\rm bt}=1/$s, $k_{\rm clv}=0.1/$s, see main text), a stalled polymerase ($k_{\rm cat}=1/$s, $k_{\rm bt}=(10/9)/$s, $k_{\rm clv}=10/$s), and a depolymerase ($k_{\rm cat}=1/$s, $k_{\rm bt}=10/$s, $k_{\rm clv}=10/$s).
Traces are black when the polymerase is elongating, and red when backtracked.} \end{figure} How much cleavage from backtracked states contributes to error suppression depends on the effect of misincorporations on the transition rates in and out of backtracks. Specifically, the manner in which a misincorporation affects the transition state to backtracking determines whether fidelity is enhanced through an increased entrance rate into the backtrack (no shift of the transition state) or a lowered exit rate out of the backtrack (transition state shifts with the hybrid energy). For the latter case to have an appreciable proofreading capability, every single base must at some point be extruded out of the polymerase through backtracking, such that the base can be proofread and removed if it happens to be incorrectly matched to the template strand. The required high backtracking frequency would render the polymerization process inefficient---even reverse it (see below)---which is clearly not what is observed in experiments~\cite{galburt_backtracking_2007,abbondanzieri_direct_2005}. We thus take the selectivity to reside in the entrance step of the backtrack (see Figure~\ref{fig:back}D). For rates as illustrated in Figure~\ref{fig:back}B and C, this corresponds to $k_{\rm rec}=k_{\rm rec}^{\rm I}=k_{\rm bt}$ and $k_{\rm bt}^{\rm I}=k_{\rm bt}\exp(\Delta G_{\rm act}/k_{\mathrm{B}}T)$ (rates corresponding to incorrect bases are denoted with the superscript I). We will simply refer to $k_{\rm bt}$ as the backtracking rate, and the resulting form of the free-energy landscape is illustrated in Figure~\ref{fig:back}D. \subsection{Physiological rate estimates} Although single-molecule traces give us direct access to many of the individual rates introduced in Figure~\ref{fig:back}B and C, the spread even between individual enzymes of any specific type of polymerase is substantial~\cite{neuman_ubiquitous_2003, toli-nrrelykke_diversity_2004}.
On top of this, not all rates are known for any one type of polymerase, so we are here content with relying on the structural homology between polymerases~\cite{Ebright:2000,Hirata:2008} and take {\it in vitro} rates from the different domains as representing order-of-magnitude estimates of a generic enzyme. We use $k_{{\rm cat}}=10$/s~\cite{neuman_ubiquitous_2003,toli-nrrelykke_diversity_2004} (prokaryotic) \cite{galburt_backtracking_2007} (eukaryotic), backtracking rate $k_{{\rm bt}}=1$/s~\cite{depken_origin_2009} (prokaryotic), and cleavage rate $k_{{\rm clv}}=0.1$/s \cite{galburt_backtracking_2007} (eukaryotic). Though this will not cover every scenario, the analytical nature of our work enables direct application of our results to other relevant situations. In a development largely parallel to the theory of kinetic proofreading through PIS~\cite{hopfield_kinetic_1974}, the error suppression of PIP can be calculated as (see supplemental information) \begin{equation} \label{eq:pip0} r\simeq \frac{k_{\rm cat}}{k_{\rm cat}+k_{\rm clv} e^{(\Delta G_{\rm act}+\Delta G_{\rm cat})/k_{\mathrm{B}}T}}. \end{equation} Here $\Delta G_{\rm cat}$ denotes the change in barrier height for the transition to catalysis when trying to incorporate a base directly after an error (see Figure~\ref{fig:back}D). We can get an estimate of $\Delta G_{\rm cat}$ from published experiments that use ``non-hydrolyzable'' nucleotide substitutes. These substitutes are thought not to influence binding affinities, but to change the catalysis rate to an extent comparable to that of an erroneous base~\cite{thomas_transcriptional_1998}. From this we estimate $\Delta G_{\rm cat}\approx 2k_{\mathrm{B}}T$. For our typical polymerase this implies proofreading capabilities amounting to a modest $r_{\rm PIP}\approx 1/30$: off by an order of magnitude from the experimentally determined fidelity ($1/10^3-1/10^2$).
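The quoted estimate $r_{\rm PIP}\approx 1/30$ follows from Equation~\ref{eq:pip0} with the typical rates above; the short numerical sketch below (ours, not from the paper) uses $\Delta G_{\rm act}\approx 6\,k_{\mathrm{B}}T$, an assumed value chosen to be consistent with that estimate (the text only fixes $\Delta G_{\rm cat}\approx 2\,k_{\mathrm{B}}T$ here):

```python
import math

def r_pip_single(k_cat, k_clv, dG_act, dG_cat):
    """Error suppression of single-state backtrack proofreading,
    Eq. (pip0): r = k_cat / (k_cat + k_clv * exp((dG_act + dG_cat)/kT)).
    Free energies are given in units of k_B*T."""
    return k_cat / (k_cat + k_clv * math.exp(dG_act + dG_cat))

# Typical rates quoted in the text (per second); dG_act = 6 kT is assumed.
r = r_pip_single(k_cat=10.0, k_clv=0.1, dG_act=6.0, dG_cat=2.0)
print(f"single-state r_PIP = 1/{1/r:.0f}")  # roughly 1/30
```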
Note that the error ratio is insensitive to $k_{\rm bt}$ for our typical polymerase ($k_{\rm cat} \gg k_{\rm bt} \gg k_{\rm clv}$). Further, a comparison of the regular traces (see Figure~\ref{fig:back}E) resulting from this model (see Figure~\ref{fig:back}A) with those from single-molecule experiments~\cite{galburt_backtracking_2007,depken_origin_2009, ibarra_proofreading_2009} demonstrates that the model does not adequately capture the observed irregular transcription dynamics (see also below). Although much of the observed dynamical heterogeneity has been attributed to structural heterogeneity through sequence-specific pauses~\cite{herbert_sequence-resolved_2006,neuman_ubiquitous_2003, ibarra_proofreading_2009}, we here show that this is not {\em necessarily} the dominant contribution. \subsection{Entropic fidelity enhancements} It is clear from Equation~\ref{eq:pip0} that apart from increasing the energy penalty for a bad base pair, a low error ratio can be achieved through a relative increase of the transcript cleavage rate compared to the elongation rate. Given their reverse arrangement ($k_{\rm cat}\simeq 10/{\rm s} \gg k_{\rm clv}\simeq 0.1/{\rm s}$), we speculate that the evolution of these rates has been strongly limited by external constraints pertaining to nucleotide chemistry and the intracellular environment. To mediate these external constraints, the polymerase has had to find alternative {\em internal} paths to increase error suppression. One such internal path could be to reduce the free energy of the backtracked state. This would suppress spontaneous reversal of the backtrack and thereby increase the probability of cleavage and error removal. Since a substantial part of the free energy relates to the energetics of base matching within the hybrid, the energy level of the backtracked state is likely constrained by the structure of the hybrid---again presumably fixed by early evolutionary choices.
However, nature appears to have come up with a different solution: an effective entropic reduction in the free-energy level of the backtracked state is achieved by extending the number of accessible states. RNAP is able to backtrack by more than just one base, and to thermally move between the different backtracking states that are available~\cite{nudler_rna_1997, komissarova_rna_1997, komissarova_transcriptional_1997,shaevitz_backtracking_2003,galburt_backtracking_2007} (see Figure~\ref{fig:entropy}A). With $N$ off-pathway and backtracked proofreading states, the free energy associated with the backtracked state would, in an equilibrium setting, be reduced by the entropic term $k_{\mathrm{B}}T \ln(N)$. Even in our out-of-equilibrium setting this mechanism delays spontaneous recovery and raises the chance of cleavage and error removal (see supplemental information). \begin{figure}[htb!] \begin{center}\includegraphics[width=\columnwidth]{Figure2.pdf}\end{center} \caption{\label{fig:entropy} {\bf Multi-state backtracking.} A) The basic repeat unit of multi-state backtracking in a nested scheme. For visual clarity, only the backtracked states in the highlighted repeat unit are drawn. B) Sketch of the free-energy landscape of a multi-state backtrack. Solid black line corresponds to the last base correct; dashed red line corresponds to the last base incorrect. Also illustrated are the multiple backtracked states and the effect of cleavage. See caption to Figure~\ref{fig:back}B for a description of the rates. C) Three traces simulated with a Gillespie algorithm: a typical polymerizing RNAP ($k_{\rm cat}=10/$s, $k_{\rm bt}=1/$s, $k_{\rm clv}=0.1/$s), a stalled complex ($k_{\rm cat}=10/$s, $k_{\rm bt}=10/$s, $k_{\rm clv}=0.1/$s), and a depolymerizing one ($k_{\rm cat}=1/$s, $k_{\rm bt}=10/$s, $k_{\rm clv}=0.1/$s)---all in accordance with the theoretical predictions derived in the supplemental information.
A section of the trace for our typical polymerase has been magnified, showing two backtracks, one rescued to elongation by cleavage and one by diffusion. Only the backtrack reentering elongation through cleavage would have corrected an error at the end of the transcript. Traces are black when the polymerase is elongating, and red when backtracked.} \end{figure} With an extended backtracking space\footnote{Even with infinite room for backtracking, our typical polymerase would only take around $k_{\rm bt}/k_{\rm clv}=10$ diffusive backtracking steps before being cleaved off, and would reach a typical backtracking depth of around $N\approx \sqrt{k_{\rm bt}/k_{\rm clv}}\approx 3$. This is below the lower estimates for the distance to RNA hairpin barriers in the trailing RNA strand~\cite{klopper_influence_2010}. We are thus justified in assuming the available backtracking distance to be effectively infinite (see Figure~\ref{fig:entropy}A).}, it is now clear from simulated traces (Figure~\ref{fig:entropy}C) that the irregular dynamics of our typical polymerase qualitatively matches the irregular dynamics observed in single-molecule experiments~\cite{herbert_sequence-resolved_2006,neuman_ubiquitous_2003, ibarra_proofreading_2009} (see below for a quantitative assessment). By comparing the experimental effects of cleavage-stimulating factors and simulated traces for increased cleavage rates, we provide further support for our kinetic scheme in the supplemental information. We also show that our model can capture the stalling dynamics of a polymerase as it transcribes against an increasing force~\cite{galburt_backtracking_2007}.
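The competition between diffusive recovery and cleavage in a multi-state backtrack can be illustrated with a minimal Monte Carlo sketch (ours, not from the paper): a pause starts one step into the backtrack, hops symmetrically at rate $k_{\rm bt}$, recovers upon reaching the elongation-competent position, and can be cleaved at rate $k_{\rm clv}$ from any depth. For $k_{\rm clv}\ll k_{\rm bt}$ the fraction of pauses resolved by cleavage approaches $\sim\sqrt{k_{\rm clv}/k_{\rm bt}}$, roughly three times the single-state value $k_{\rm clv}/(k_{\rm clv}+k_{\rm bt})$:

```python
import random

def backtrack_fate(k_bt=1.0, k_clv=0.1, rng=random):
    """Follow one backtracked pause to its end. The polymerase starts one
    step into the backtrack, hops one step deeper or shallower (rate k_bt
    each way), and can be cleaved from any depth (rate k_clv). Returns
    True if the pause ends by cleavage, False if by diffusive recovery."""
    depth = 1
    while depth > 0:
        u = rng.random() * (2 * k_bt + k_clv)
        if u < k_clv:
            return True           # transcript cleaved: error removed
        elif u < k_clv + k_bt:
            depth -= 1            # hop toward the elongation-competent state
        else:
            depth += 1            # hop deeper into the backtrack
    return False                  # recovered without cleavage

rng = random.Random(1)
trials = 20000
cleaved = sum(backtrack_fate(rng=rng) for _ in range(trials)) / trials
print(f"fraction of pauses ending in cleavage: {cleaved:.2f}")
# Single-state comparison: k_clv/(k_clv + k_bt) ~ 0.09; the extended
# state space roughly triples it, toward sqrt(k_clv/k_bt) ~ 0.3.
```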
When acting through extended backtracked states, the error suppression of PIP can be calculated as (see supplemental information) \begin{eqnarray} \label{eq:r} r_{\rm 1:PIP}&\simeq&\frac{k_{\rm cat}}{k_{\rm cat}+\sqrt{k_{\rm clv}k_{\rm bt}} e^{(\Delta G_{\rm act}+\Delta G_{\rm cat})/k_{\mathrm{B}}T}}\\ &=& \frac{k_{\rm cat}}{k_{\rm cat}+k_{\rm bt} e^{\Delta G_{\rm 1:PIP}/k_{\mathrm{B}}T}},\nonumber\\ \Delta G_{\rm 1:PIP}&=&\Delta G_{\rm act}+\Delta G_{\rm cat}-\frac{1}{2}k_{\mathrm{B}}T\ln(k_{\rm bt}/k_{\rm clv}).\label{eq:r2} \end{eqnarray} Comparing Equation~\ref{eq:r} to Equation~\ref{eq:pip0}, we see that fidelity is increased by extending the space available for backtracking: the low cleavage rate $k_{\rm clv}$ is replaced by the geometric mean $\sqrt{k_{\rm clv}k_{\rm bt}}$. This increases the fidelity by about a factor of three for our typical polymerase, and provides an error reduction of $r_{\rm 1:PIP}\simeq 1/100$. The notation in Equation~\ref{eq:r2} is introduced to facilitate the extension to several PIP checkpoints presented in the next section. The error suppression now depends on the additional parameter $k_{\rm bt}$ (cf.\ Equation~\ref{eq:pip0})---a parameter independent of nucleotide chemistry and susceptible to change through evolutionary pressures. Although the extension of the backtracking space does provide for fidelity enhancements, the total fidelity is still at the lower end of what is experimentally observed. However, our extended backtracking space gives further proofreading benefits by supplying the polymerase with additional inherent PIP checkpoints, as we now discuss. \begin{figure}[htb!] \begin{center} \includegraphics[width=\columnwidth]{Figure3.pdf} \end{center} \caption{\label{fig:CPIII} {\bf A second PIP checkpoint.} The polymerase is expected to be sensitive also to errors incorporated next to last. The magnitudes of the rates are illustrated by the relative thickness of the transition arrows; bad base stackings are indicated in red.
$G$ indicates the free energy of the complex with respect to the elongation-competent state.} \end{figure} \subsection{Second PIP checkpoint and beyond} Even when additional bases have been added to the transcript after an erroneous incorporation, the error can in principle still be corrected through an extensive backtrack and cleavage~\cite{voliotis_backtracking_2009}. For this to lead to an appreciably increased likelihood of error removal, the random walk must be biased towards entering further into the backtrack. With an error at the {\em penultimate} 3'-position of the transcript, the polymerase experiences such bias, since moving into a backtrack will eliminate a bad base-pair stacking within the hybrid (see Figure~\ref{fig:CPIII}). This is followed by another heavily biased step to completely extrude the error from the hybrid, making it amenable to cleavage. We know of no direct measurement of the penultimate bias $\Delta G_{\rm 2:PIP}$, but as the typical stacking energy in an RNA-DNA hybrid is $1.5-4.5\,k_{\mathrm{B}}T$~\cite{sugimoto_thermodynamic_1995} we assume $\Delta G_{\rm 2:PIP}\approx 3k_{\mathrm{B}}T$. This second PIP checkpoint provides an error ratio (see supplemental information) of \begin{equation}\label{eq:rp} r_{\rm 2:PIP}\simeq\frac{k_{\rm cat}}{k_{\rm cat}+k_{\rm bt} e^{\Delta G_{\rm 2:PIP}/k_{\mathrm{B}}T}}. \end{equation} For our typical polymerase $r_{\rm 2:PIP}\simeq 1/3$, and the total PIP-induced error reduction $r_{\rm PIP}=r_{\rm 1:PIP} r_{\rm 2:PIP}\simeq 1/300$ falls well within the experimentally observed range. The suggested scheme thus quantitatively accounts for the typically observed error suppression, but there could in principle be additional inherent PIP checkpoints that would enable the polymerase to reach even higher fidelities.
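The numbers quoted above can be reproduced in a few lines (a sketch, not code from the paper; $\Delta G_{\rm act}\approx 6\,k_{\mathrm{B}}T$ is our assumed value, consistent with the single-state estimate $r_{\rm PIP}\approx 1/30$):

```python
import math

k_cat, k_bt, k_clv = 10.0, 1.0, 0.1   # /s, typical polymerase
dG_act, dG_cat = 6.0, 2.0             # k_B T units; dG_act is our assumption
dG_2pip = 3.0                         # penultimate stacking bias, k_B T

# First checkpoint, Eq. (r): k_clv replaced by the geometric mean
r1 = k_cat / (k_cat + math.sqrt(k_clv * k_bt) * math.exp(dG_act + dG_cat))
# Second checkpoint, Eq. (rp)
r2 = k_cat / (k_cat + k_bt * math.exp(dG_2pip))

print(f"r_1:PIP = 1/{1/r1:.0f}")          # about 1/100
print(f"r_2:PIP = 1/{1/r2:.1f}")          # about 1/3
print(f"r_PIP   = 1/{1/(r1*r2):.0f}")     # about 1/300
```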
An increasing free-energy penalty for moving the error further into the hybrid would incur a longer-range bias for backtracking, and additional fidelity gains according to (see supplemental information) \begin{eqnarray}\label{eq:mp} r_{\rm PIP} &=& r_{\rm 1:PIP}r_{\rm 2:PIP}\cdots r_{n\rm :PIP}\cdots\quad\nonumber \\ r_{n:\rm PIP}&\simeq&\frac{k_{\rm cat}}{k_{\rm cat}+k_{\rm bt} e^{\Delta G_{n\rm :PIP}/k_{\mathrm{B}}T}}. \end{eqnarray} Based on structural considerations of base pairing within the RNA-DNA hybrid, we conclude that PIP-proofreading of RNAP includes at least two serial checkpoints that account for the typical fidelities observed in transcription. The polymerase could in principle select and remove an error as long as it remains within the hybrid. Intriguingly, the 8-9-bp hybrid might thus not only serve the purpose of stabilizing the ternary complex~\cite{nudler_rna-dna_1997}, but also provide enhanced fidelity. \subsection{Power-law pause distributions and spatial heterogeneity} We next illustrate the consequences of the proofreading states on pause duration and frequency. To this end we simulate our typical polymerase transcribing a long sequence and compare it to a simulation of an otherwise identical polymerase with PIP turned off ($k_{\rm bt}=0$/s). In Figure~\ref{fig:sim}A we show a particular realization of incorporation errors for our generic polymerase (only PIS in red), together with the errors left after the section has been proofread (PIS and PIP in black). The fidelity enhancements are clearly visible, but they come at the cost of both a decreased velocity and an increased spatial heterogeneity. These effects are qualitatively visible already at the level of individual traces, but are quantitatively best seen in the changes of the dwell-time distribution (see Figure~\ref{fig:sim}B) or in the transition-rate (inverse dwell-time) distribution (see Figure~\ref{fig:sim}C).
In the dwell-time distribution, proofreading introduces a power-law regime, throughout which the probability of a long pause falls off with duration $t$ as $t^{-3/2}$~\cite{depken_origin_2009}, until it drops off exponentially beyond $t\sim 1/k_{\rm clv}$. In Figure~\ref{fig:sim}B we see a clear exponential behavior of the dwell-time distribution for both processes at around $t\sim 1/k_{\rm cat}=0.1$~s, while the proofreading polymerase also has the above-mentioned power-law decay extending out to $t\sim 1/k_{\rm clv}=10$~s. Similarly, considering the transition-rate distributions we see a narrow but significant low-velocity peak develop around the transition rate $\sim k_{\rm clv}=0.1/$s, diminishing the bare elongation peak situated around the rate $\sim k_{\rm cat}=10/$s (see Figure~\ref{fig:sim}C). To further elucidate the effects of the power-law regime, we consider another important observable: the pause-time distribution, or the total time a polymerase spends at each position along the DNA molecule. In Figure~\ref{fig:sim}D we show pause density plots along a sequence of 500 bp, with darker bands indicating longer total time spent at that position during the transcription process. Comparing transcription with and without PIP it is clear that PIP leads to greater spatial heterogeneity, exhibiting distinct regions of markedly increased occupation density even where there are no incorporation errors. Thus, our model accounts for both the observed spatial heterogeneity as well as the broad pause-time distributions~\cite{neuman_ubiquitous_2003,galburt_backtracking_2007} without the need to introduce additional assumptions about the effects of sequence heterogeneity~\cite{depken_origin_2009, galburt_backtracking_2007}. Having shown that external constraints can be mediated through accessing an internal extended backtracked space---resulting in irregular transcription dynamics---we now turn our attention to the specific level of irregularity observed in experiments.
Irregularity is tuned by the backtracking rate, and considering that increasing $k_{\rm bt}$ would render all proofreading checkpoints more effective (see Equation~\ref{eq:mp}), one might wonder why the backtracking rate is kept moderate (1/s) and not made much larger~\cite{depken_origin_2009,abbondanzieri_direct_2005}. \begin{figure}[htb!] \begin{center} \includegraphics[width=\columnwidth]{Figure4.pdf} \end{center} \caption{\label{fig:sim} {\bf The effects of proofreading.} A) On top we show a realization of incorporation errors according to our free-energy estimates (only PIS in red), and below we show the errors that survive (or, possibly, are inserted by) the proofreading mechanisms (PIS and PIP in black). B) The dwell-time distribution from a process without proofreading and one with proofreading. Proofreading gives rise to a power-law regime, significantly increasing the fraction of long pauses. C) The transition-rate (inverse dwell-time) distribution for the same processes as in B, where the effects of proofreading can be seen through a shift from a unimodal to a bimodal distribution as many excessively slow transitions involving backtracks start influencing the kinetics. D) The pause density, or the total occupation time, plotted for a 500 bp sequence transcribed by the same two polymerases as used in B and C. The darker the bands, the longer the total occupation time at that position. The scales are individually normalized to cover the range of occupation times for each polymerase. Two incorporation errors are indicated with red markers.} \end{figure} \section{Transcription performance} We have here suggested that by utilizing {\em extended} backtracked states, the polymerase has overcome external constraints to suppress errors. This introduces the backtracking rate $k_{\rm bt}$ as a variable susceptible to evolutionary pressures.
In order to understand the underlying reasons for why the backtracking rate is kept moderate, we now consider the phenotypic space made available through the extended backtracking space. The quantities needed to assess polymerase performance---as it varies with the level of PIP---are calculated in the supplemental information by using continuous-time random walk theory~\cite{montroll_fluctuation_1987}. Starting with instantaneous transcriptional efficiency measures on the level of the individual base pairs, we then consider the efficiency on extended sequences or genes. Importantly, we investigate how much faster the polymerase can produce perfect transcripts of extended sequences with PIP as compared to without PIP. \subsection{Performance on the level of a base pair} We are interested in the effective elongation rate, and thus calculate the average elongation rate $1/\tau_{\rm el}$ (see supplemental information). Since there is only about one error passing through the PIS checkpoint every 500 bases, we can ignore the effect of errors on the overall elongation dynamics. We now construct the efficiency measure $\eta_{\rm el}$, \begin{equation} \label{eq:tau} \eta_{\rm el}=\frac{1/\tau_{\rm el}}{k_{\rm cat}}\simeq \frac{1-k_{\rm bt}/k_{\rm cat}}{1/2+\sqrt{1/4+k_{\rm bt}/k_{\rm clv}}}, \end{equation} which describes the relative slowdown due to PIP. With no PIP ($k_{\rm bt}=0$) the efficiency is appropriately $\eta_{\rm el}=1$, while it vanishes at the transition between polymerization and depolymerization $k_{\rm cat}=k_{\rm bt}$. At this point, elongation stops proceeding with a well-defined velocity, and behaves diffusively on large length scales. For $k_{\rm cat}<k_{\rm bt}$ net depolymerization sets in. This situation is pathological, and shows that backtracking cannot dominate the dynamics even though this would be judged optimal in terms of fidelity calculated within the Hopfield kinetic proofreading scheme.
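The limiting behavior of Equation~\ref{eq:tau} can be checked numerically (a sketch using the typical rates above, not code from the paper):

```python
import math

def eta_el(k_bt, k_cat=10.0, k_clv=0.1):
    """Elongation efficiency of Eq. (tau): relative slowdown due to PIP."""
    return (1 - k_bt / k_cat) / (0.5 + math.sqrt(0.25 + k_bt / k_clv))

print(eta_el(0.0))              # 1.0: no PIP, no slowdown
print(eta_el(10.0))             # 0.0: stall at k_bt = k_cat
print(round(eta_el(1.0), 3))    # typical polymerase, k_bt = 1/s
```

For the typical polymerase the proofreading-induced slowdown is thus roughly a factor of four.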
The transition to non-functional polymerases can be seen in the single-molecule transcription traces presented in\footnote{See Figure 3D in~\cite{galburt_backtracking_2007}, where an opposing force was used to increase the entrance rate into the backtrack, bringing the system to stall around 14~pN.}~\cite{galburt_backtracking_2007}, and in the simulated traces presented in Figure~\ref{fig:entropy}C (see also supplemental Figure S4C). Also note that the overall elongation rate increases with increasing cleavage rate, as is observed experimentally~\cite{fish_promoting_2002, herbert_E._2010,proshkin_cooperation_2010}. We next introduce an efficiency parameter for PIP, \begin{equation*} \eta_{\rm PIP}=1-r_{\rm PIP}, \end{equation*} which is $0$ in the absence of PIP and $1$ for perfect PIP. Finally, we parameterize the nucleotide efficiency of the transcription process by the ratio of final transcript length and the average number of nucleotides consumed in its production. This ratio is given by the simple expression (see supplemental information) \begin{equation*} \eta_{\rm NTP}=1-k_{\rm bt}/k_{\rm cat}. \end{equation*} The measure is unity without PIP, and vanishes at stall ($\eta_{\rm el} = 0$). Figure~\ref{fig:evo} shows the three efficiency measures $\eta_{\rm el}$, $\eta_{\rm PIP}$ and $\eta_{\rm NTP}$ as functions of the backtracking rate $k_{\rm bt}$ (within the operational range $0\le k_{\rm bt}\le k_{\rm cat}\approx10$/s), for an otherwise typical polymerase. We see that while transcription velocity and nucleotide efficiency correlate positively, they both correlate negatively with fidelity, directly illustrating the cost of enhancing fidelity. This hints at an underlying competition, which we now explore by considering transcription of extended sequences. \begin{figure}[htb!]
\begin{center} \includegraphics[width=\columnwidth]{Figure5.pdf} \end{center} \caption{\label{fig:evo} {\bf Polymerase performance.} Proofreading efficiency $\eta_{\rm PIP}$ (red dot-dashed), elongation efficiency $\eta_{\rm el}$ (black solid) and nucleotide efficiency $\eta_{\rm NTP}$ (blue dashed) as a function of the backtracking rate, for an otherwise typical polymerase with $k_{\rm cat}=10$/s and $k_{\rm clv}=0.1$/s. Values indicated by diamonds were obtained numerically, through Gillespie simulations.} \end{figure} \subsection{Performance on the level of the gene} Here we demonstrate that a moderate rate of backtracking is necessary for rapidly generating transcripts with few mistakes from extended sequences. This becomes apparent when noting that the longer the sequence, the less likely it is for a polymerase to produce an error-free transcript. It is instructive to introduce the probability $P_l$ of producing a long error-free sequence\footnote{This sequence length $l$ should not necessarily be interpreted as the complete gene length $l_{\rm gene}$, but instead as the typical error-free length $l=l_{\rm gene}/n$ that is required, where $n$ is the number of errors acceptable during transcription of the gene.} of length $l$. For each attempt, the probability of transcribing a sequence of length $l$ without an error is given by $P_l(r)=(1+r)^{-l}\simeq\exp(-l r)$, with $r=r_{\rm PIP}r_{\rm PIS}$ representing the total error fraction. The production-rate gain $\chi_{\rm el}$ on extended sequences is obtained by comparing the rate at which error-free transcripts are produced with PIP, to the rate with which they are produced without PIP ($k_{\rm bt}=0$). Thus, $\chi_{\rm el}=\eta_{\rm el}P_l\left(r_{\rm PIS}r_{\rm PIP}\right)/P_l\left(r_{\rm PIS}\right)\simeq \eta_{\rm el} \exp(lr_{\rm PIS}\eta_{\rm PIP})$. 
Similarly, we introduce the NTP-efficiency gain on extended genes $\chi_{\rm NTP}$ by comparing the number of error-free transcripts produced per nucleotide used with and without PIP, giving $\chi_{\rm NTP}=\eta_{\rm NTP} P_l\left(r_{\rm PIS}r_{\rm PIP}\right)/P_l\left(r_{\rm PIS}\right)\simeq \eta_{\rm NTP}\exp(lr_{\rm PIS}\eta_{\rm PIP})$. From both these quantities it is clear that even moderate PIP provides enormous gains in the rate of perfectly transcribing long ($l>1/r_{\rm PIS}$) sequences. \begin{figure}[htb!] \begin{center} \includegraphics[width=\columnwidth]{Figure6.pdf} \end{center} \caption{\label{fig:evon} {\bf High fidelity transcript production.} A) On the left vertical axis we mark the production-rate gain on extended sequences $\chi_{\rm el}$ as a function of the backtracking rate (black solid line). On the right vertical axis we mark the NTP-efficiency gain $\chi_{\rm NTP}$ as a function of the backtracking rate (red dashed line), all for a sequence of length $l=10^4$ bp. The region between the two peaks is where one might expect the optimal value of $k_{\rm bt}$ to lie. Note there is a gain of 13 orders of magnitude in the rate of producing error-free transcripts when transcribing with PIP as compared to without PIP, with similar gains in nucleotide efficiency. B) The backtracking rate that optimizes the production-rate gain (black solid) or the energy-efficiency gain (red dashed) as a function of sequence length. Gray shading indicates a region of compromise between both gains. Inset, a magnification of the region around $k_{\rm bt}=1/$s indicates that PIP is optimal with $k_{\rm bt}=1/$s for gene lengths of $10^4-4 \cdot10^4$~bp. The vertical blue line indicates the sequence length used in A. } \end{figure} With the two sequence-wide measures that we have introduced, it is now possible to address transcriptional efficiencies on the level of transcription of whole genes. 
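As a numerical sketch (ours, not from the paper) of the two gains at the physiological backtracking rate, using the typical rates above and an assumed $r_{\rm PIS}\approx 3\times10^{-3}$ (chosen so that $r_{\rm PIS}r_{\rm PIP}\approx 10^{-5}$, the quoted combined error fraction):

```python
import math

k_cat, k_clv, k_bt = 10.0, 0.1, 1.0   # /s, typical polymerase
l = 1e4                               # sequence length, bp
r_pis = 3e-3                          # assumed, so that r_PIS*r_PIP ~ 1e-5
r_pip = 1.0 / 300.0                   # combined two-checkpoint PIP suppression

eta_el = (1 - k_bt / k_cat) / (0.5 + math.sqrt(0.25 + k_bt / k_clv))
eta_ntp = 1 - k_bt / k_cat
eta_pip = 1 - r_pip

gain = math.exp(l * r_pis * eta_pip)  # P_l(r_PIS*r_PIP) / P_l(r_PIS)
chi_el = eta_el * gain
chi_ntp = eta_ntp * gain
print(f"chi_el ~ 10^{math.log10(chi_el):.1f}, "
      f"chi_ntp ~ 10^{math.log10(chi_ntp):.1f}")
```

With these assumptions both gains come out at roughly twelve to thirteen orders of magnitude, in line with the figure quoted in the text; the exact exponent depends on the assumed $r_{\rm PIS}$.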
As an example we consider a sequence of a length comparable to the typical human gene, $l=10^{4}$ bp, and in Figure~\ref{fig:evon}A we plot the efficiencies $\chi_{\rm el}$ and $\chi_{\rm NTP}$ as a function of the backtracking rate $k_{\rm bt}$ (within the operational limits $0<k_{\rm bt}<k_{\rm cat}=10$/s). Each measure has a definite optimal value, and we see that the gains in both rate of perfect transcript production and nucleotide efficiency can be enormous, here reaching thirteen orders of magnitude. If RNAP were optimized to transcribe this particular sequence length, then we would expect the true value of the backtracking rate to lie somewhere in the intermediate region between the peaks: representing a compromise between NTP efficiency and production rate. For the intermediate value of $k_{\rm bt}=1/$s---coinciding with our estimate of the physiologically relevant backtracking rate---it would take a polymerase of the order of one hour to produce an error-free transcript, which should be compared to $10^{13}$ hours without PIP. Finally, it is interesting to ask how the region of optimal backtracking rate changes as the transcribed sequence length varies. Figure~\ref{fig:evon}B shows the $k_{\rm bt}$ that optimizes $\chi_{\rm el}$ (black solid line) and $\chi_{\rm NTP}$ (red dashed line) as a function of sequence length $l$. The inset in Figure~\ref{fig:evon}B highlights the backtracking rate for our typical polymerase ($k_{\rm bt}=1$/s), and the implied sequence lengths ($\simeq 10^4-4\cdot 10^4$~bp) for which this backtracking rate would be optimal. A complete discussion would need to consider relaxed fidelity constraints due to e.g.
codon redundancy~\cite{alberts_essential_1998}, but considering that the average gene length in eukaryotes lies in the range $10^4-10^5$~bp~\cite{xu_average_2006}, it is thought-provoking to speculate that the moderate observed backtracking rates of around $1/$s are the result of an evolutionary optimization for rapidly and efficiently producing functional transcripts from genes in the tens-of-kbp range. \section{Discussion} By analytically studying a model of backtracking coupled to chain elongation and cleavage, we have shown that irregular transcription dynamics is likely a result of maintaining transcriptional efficiency, not at the level of individual nucleotides but rather at the level of extended sequences and genes. Our work suggests that proofreading relies on an entropic enhancement of fidelity, where an extended state space reduces the chance of spontaneous recovery. This ensures low error rates even with low rates of transcript cleavage. Through backtracking, an incorporated error can be proofread at least twice through biasing the entry into backtracks, but could in principle be proofread as many times as there are bases in the RNA-DNA hybrid within the elongation complex. To what extent there are additional proofreading checkpoints beyond the two discussed here is an interesting line of future research, providing a potential link between the structure of the elongation complex and overall transcriptional efficiency and fidelity. Such work might offer additional clues as to why the RNA-DNA hybrid has a length of about 8-9 bp~\cite{kent_maintenance_2009}. Considering both the effects of proofreading on NTP consumption and the production rate of extended functional transcripts, our investigation suggests that the internal hopping rate in the backtracked state is not optimized for fidelity alone. Instead, it is kept moderate in order to enable rapid production of extended transcripts that are of high fidelity.
That there will be many more backtracks than there are errors to remove is a direct consequence of undetected errors being costly, since they have the potential to render the whole transcript dysfunctional. A certain level of paranoia is thus desirable on the part of the polymerase. Even though such paranoia decreases the instantaneous average transcription rate, the observed level of backtracking---perhaps counterintuitively---drastically increases the rate at which high-fidelity transcripts are produced. Interestingly, the backtracking rate and the number of backtracks in cells of a particular organism would be expected to correlate positively with the sequence lengths that have induced the highest evolutionary pressures on transcription (see Figure~\ref{fig:evon}B, gray region). In other words, genomes with genes of increasing length should be transcribed with increasingly irregular dynamics to maintain transcriptional efficiency. It would be interesting to determine if an overall trend in backtracking rate~\cite{galburt_backtracking_2007, depken_origin_2009}, and consequent irregularity of dynamics, could be found for polymerases originating in organisms with varying genetic complexity. To conclude, our model highlights the enormous gains offered by post-incorporation proofreading when transcribing long sequences, illustrating how important this basic mechanism has become for the sustenance of life. \begin{acknowledgments} We thank Eric Galburt, Justin Bois and Abigail Klopper for fruitful discussions and suggestions. JMRP acknowledges financial support from grants MOSAICO (Spanish Government) and MODELICO (Comunidad de Madrid). SWG acknowledges funding by the EMBO young investigator program and the Paul Ehrlich Foundation. MD acknowledges partial support from FOM, which is financially supported by the ``Nederlandse Organisatie voor Wetenschappelijk Onderzoek''. This research was supported in part by the National Science Foundation under Grant No. NSF PHY05-51164.
\end{acknowledgments}
\section{PROSPECT-II: unique inputs for the resolution of short baseline anomalies}
\begin{figure}[h]
\centering
\includegraphics[trim = 0cm 0.0cm 0.5cm 0.0cm, clip=true, height=2.7in]{figures/PROSPECT_Sensitivity_Mev1.pdf}
\includegraphics[trim = 0cm 0.0cm 0.5cm 0.0cm, clip=true, height=2.7in]{figures/PROSPECT_Sensitivity_Gev1.pdf}
\caption{Sensitivity contours from one year~(black, solid) and two years~(pink, solid) of PROSPECT-II~\cite{Andriamirado:2021qjc} data-taking compared to: \textbf{Left:} parameter space already excluded by the relative reactor spectral experiments~(gray, dashed) and the allowed region~(blue, solid) from the RAA (KI model); \textbf{Right:} parameter space suggested by the GA and the Neutrino-4 experiment~(pink) and the CP-violation ambiguity limit (red, dashed). PROSPECT-II can significantly increase the global sensitivity in the 1--10~eV$^2$ range. Additionally, PROSPECT-II in conjunction with the projected sensitivity~(dashed, teal) from KATRIN~\cite{Aker:2022ldk} will be able to exclude all of the GA-suggested parameter space and resolve the CP-violation ambiguity.}
\label{fig:sensitivity}
\end{figure}
In light of these recent developments, the physics opportunities for PROSPECT-II become even more tantalizing. In 2021, the PROSPECT collaboration published a detailed summary of the physics opportunities with an upgraded detector which can be rapidly deployed~\cite{Andriamirado:2021qjc}. As detailed above, both MicroBooNE{} and the gallium experiments point to preferred parameter space in the few-eV$^2$ region, with oscillation amplitudes just beyond what has been probed by the current generation of SBL reactor experiments. Reactor experiments are highly complementary to accelerator- and source-based measurements and feature a flavor-pure, high-intensity source of \ensuremath{\overline{\nu}_{e}}{}.
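In a 3+1 framework (an illustrative parametrization of standard oscillation phenomenology, not a result specific to this work), the short-baseline \ensuremath{\overline{\nu}_{e}}{} disappearance these searches probe takes the familiar two-flavor form
\begin{equation}
P_{\bar{\nu}_e \rightarrow \bar{\nu}_e}(L,E) \simeq 1 - \sin^2 2\theta_{14}\,\sin^2\!\left(1.267\,\frac{\Delta m^2_{41}\,[\mathrm{eV}^2]\;L\,[\mathrm{m}]}{E\,[\mathrm{MeV}]}\right),
\end{equation}
so for energies of a few MeV and baselines of a few meters the oscillation phase is of order unity precisely for $\Delta m^2_{41} \sim 1$--$10$~eV$^2$.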
While there have been questions about the uncertainties in absolute flux predictions, segmented detectors at short baselines are able to directly search for the energy-dependent, oscillation-induced spectral distortions that are the `smoking gun' of sterile neutrinos. This model-independent technique is crucial to positively identify neutrino oscillations, as opposed to an ambiguous flux deficit that could be caused by a mismatch between data and theoretical predictions. The energies ($\sim$few MeV) and baselines (7--9~m) available to PROSPECT-II at HFIR are uniquely suited to searching for oscillations in the 1--10~eV$^2$ region. The projected PROSPECT-II sensitivity will surpass the precision of the current global analysis at all $\Delta m^2$ above $\sim$2~eV$^2$, with as much as a 2--4 times improvement for mass splittings in the 5--10~eV$^2$ region. The Neutrino-4 collaboration reports a $\sim$3$\sigma$ oscillation-like signal with a best-fit point of $\sim$7.3~eV$^2$ and an amplitude of $\sin^2 2\theta = 0.36$. The allowed region is shown in Fig.~\ref{fig:sensitivity}. This best-fit point is in tension with results from PROSPECT and STEREO. By probing this broad region of parameter space, PROSPECT-II can play a valuable role in the resolution of the aforementioned confusing experimental and theoretical landscape. While testing the presence of an additional sterile state is an important BSM study in its own right, it is also crucial for future studies of Standard Model neutrino parameters. Results from upcoming long baseline~(LBL) experiments designed to measure CP violation remain ambiguous~\cite{deGouvea:2014aoa,Gandhi:2015xza,deGouvea:2016pom,Klop:2014ima} if sterile neutrinos are not fully excluded for mixing angles $\sin^2 2\theta \gtrapprox 0.03$~\cite{Dutta:2016glq}.
Thus a combination of PROSPECT-II, tritium beta endpoint measurements, and medium baseline neutrino experiments will play a complementary role in the interpretation of future LBL results.
\section{PROSPECT-II: Benchmark measurements for precise understanding of reactor \ensuremath{\overline{\nu}_{e}}{} emission}
There are additional scientific goals that the PROSPECT-II upgrade can achieve. These primarily relate to greatly improving our understanding of reactors as an antineutrino source, which would benefit neutrino physics, BSM studies, and safeguards applications using neutrinos. Comparisons of experimental and predicted \ensuremath{\overline{\nu}_{e}}{} energy spectra measured at LEU-fueled reactors show sizable disagreements, most prominently in the 4--6~MeV energy range. PROSPECT-II will help to address this situation by further improving the precision of the world-leading PROSPECT measurement of the $^{235}$U \ensuremath{\overline{\nu}_{e}}{} energy spectrum. As described in Ref.~\cite{Andriamirado:2021qjc}, PROSPECT-II will produce a spectrum measurement that approaches or exceeds the precision of current prediction approaches, providing a stringent test of the underlying models and nuclear data. Furthermore, a joint analysis of spectrum measurements from Daya Bay and PROSPECT-II would produce purely data-driven reactor $\overline{\nu}_e$ spectrum models for future particle physics measurements and potential applications. Benchmark spectra have been identified as a high-priority ``nuclear data'' need during a recent community workshop~\cite{bib:WoNDRAM}. PROSPECT-II will also perform a precise measurement of the \ensuremath{\overline{\nu}_{e}}{} flux produced in $^{235}$U{} fission.
By performing a modern $^{235}$U{}~\ensuremath{\overline{\nu}_{e}}{} flux measurement, PROSPECT-II can increase the reliability of the global flux picture, similarly benefiting the particle physics and nuclear science communities. A flux precision of 2.5\% is anticipated, with the dominant systematic being knowledge of the HFIR power ($\sim$2\%). When combined with flux measurements at LEU-fueled reactors that have a more complex fuel mix, the pure $^{235}$U{}~\ensuremath{\overline{\nu}_{e}}{} flux measurement performed by PROSPECT-II would improve the precision of IBD yields from all major fissioning isotopes~\cite{Andriamirado:2021qjc}.
\section{PROSPECT-II: An evolutionary detector upgrade with physics results within 2 years}
The original PROSPECT detector initially met all design requirements, as laid out in Ref.~\cite{PROSPECT:2015iqr}. Unprecedented background rejection, provided by detector segmentation and particle identification via Pulse Shape Discrimination using a $^6$Li-doped liquid scintillator~\cite{PROSPECT:2018dnc,PROSPECT:2019enz}, allowed precision reactor antineutrino measurements to be conducted near the Earth's surface with very little overburden. Excellent energy resolution, precision energy calibration and reconstruction, and event position reconstruction~\cite{PROSPECT:2018hzo} in a compact detector enabled model-independent short baseline oscillation searches and modern antineutrino energy spectrum measurements from $^{235}$U fission~\cite{PROSPECT:2018dtt, PROSPECT:2018snc, PROSPECT:2020sxr}. As in the original PROSPECT design, the PROSPECT-II detector will contain a segmented $^6$Li-doped liquid scintillator volume optimized for inverse beta decay detection with minimal cosmic-ray shielding. The PROSPECT-II detector design addresses technical issues encountered during the initial data-taking period that caused a fraction of the detector PMTs to become inoperable.
The principal design change moves the PMTs outside the liquid scintillator volume, thereby eliminating the possibility of liquid scintillator affecting voltage divider operation. Additionally, this change reduces the range of materials in contact with the liquid scintillator, providing an improved environment for long-term stability and operation. Completing the upgrade involves rebuilding the inner scintillator containment vessel, producing new liquid scintillator, and revamping the calibration deployment scheme. Components outside this inner region, including an outer liquid containment vessel, an extensive shielding package, and the data acquisition electronics, are largely unchanged. These evolutionary changes require modification of a minority of subsystems and are expected to maintain the demonstrated performance achieved during initial PROSPECT operation. Based on the demonstrated construction timeline of PROSPECT, the PROSPECT-II detector can be built and deployed within one calendar year of project start. This ability to leverage existing components and expertise makes it possible for PROSPECT-II to rapidly begin collecting the largest-ever data set from an isotopically pure source of $^{235}$U fissions at the High Flux Isotope Reactor. Impactful physics results can then be produced with as little as one calendar year of data, with full sensitivity being reached after 14 reactor cycles (Fig.~\ref{fig:sensitivity}). With a timely start, this can be comfortably achieved prior to a long reactor outage planned for 2028~\cite{HFIR-schedule}.
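The 2.5\% flux precision quoted for the benchmark flux measurement above can be sketched as a quadrature combination of independent uncertainties; the $\sim$2\% HFIR power term is taken from the text, while the 1.5\% remainder is a hypothetical placeholder for the combined detector and selection systematics:

```python
import math

def in_quadrature(components_pct):
    """Combine independent uncertainty components (in percent) in quadrature."""
    return math.sqrt(sum(c ** 2 for c in components_pct))

hfir_power = 2.0  # % dominant systematic: knowledge of HFIR power (from the text)
other = 1.5       # % hypothetical remainder for all other systematics (assumed)

print(f"projected flux precision: {in_quadrature([hfir_power, other]):.1f}%")
# -> projected flux precision: 2.5%
```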
\section{Complementary Experimental Approaches to resolve potential phenomenological explanations}
\begin{figure}[h]
\centering
\includegraphics[trim = 0cm 0.0cm 0.5cm 0.0cm, clip=true, height=2.7in]{figures/Exclusion_MeV1.pdf}
\includegraphics[trim = 0cm 0.0cm 0.5cm 0.0cm, clip=true, height=2.7in]{figures/Exclusion_GeV1.pdf}
\caption{\textbf{Left:} Comparison of the suggested parameter space from the RAA (HM model)~\cite{Giunti:2021kab} and Neutrino-4~\cite{Serebrov:2020kmd} to the allowed regions from the RAA (KI model)~\cite{Giunti:2021kab} and excluded parameter regions from global fits of spectral-ratio reactor measurements~\cite{Berryman:2021yan} and the KATRIN experiment~\cite{Aker:2022ldk}. \textbf{Right:} Comparison of the suggested parameter space from the gallium anomaly~\cite{Barinov:2021mjj} and two $\nu_{e}$-disappearance analyses using MicroBooNE{} data, one hinting~\cite{Denton:2021czb} at oscillations and the other~\cite{Arguelles:2021meu} excluding a small portion of the parameter space, to the excluded parameter regions from global fits of spectral-ratio reactor measurements~\cite{Berryman:2021yan}. Both cases show regions of interesting parameter space with $\Delta m^2 > 5$~eV$^{2}$ yet to be explored.}
\label{fig:exclusions}
\end{figure}
It is worth considering the aforementioned experimental results in a broader phenomenological context to inform future experimental efforts. There is an increasing amount of evidence suggesting that the RAA is, at least in part, due to mismodeling of the reactor \ensuremath{\overline{\nu}_{e}}{} spectra--primarily driven by $^{235}$U{}.
This interpretation is supported by improved agreement between the measured isotopic IBD yields and the new updated summation model~(the Estienne-Fallot or EF model) based on revised nuclear databases~\cite{Estienne:2019ujo}, along with the Daya Bay~\cite{DayaBay:2017jkb,DayaBay:2019yxq} and RENO~\cite{RENO:2018pwo} fuel evolution results and the re-evaluated KI-based conversion model. Combined fits of the reactor antineutrino yields and the Daya Bay and RENO evolution datasets suggest a persistent RAA at $\sim 3\sigma$ when compared to the ILL/HM model, while the anomaly reduces to $\sim 1\sigma$ when compared to the KI and EF models~\cite{Giunti:2021kab}. When considered in the context of a 3+1 sterile neutrino hypothesis, the EF and KI models have no statistically significant preference for eV-scale oscillations. Though the sterile neutrino explanation of the RAA is diminished with the updated models, the combined reactor rate and evolution data do not preclude the presence of sterile neutrinos in this region, as shown in the left panel of Fig.~\ref{fig:exclusions}. Viable hybrid models exist that could accommodate incorrect reactor neutrino flux predictions while also allowing oscillations to sterile neutrinos~\cite{Giunti:2019qlt}. Rate and flux-evolution measurements alone are not sufficient to unambiguously resolve the reactor anomaly, due in part to reactor power uncertainties and the complex uncertainties in predicting neutrino spectra from fission. Relative spectral measurements, such as those deployed in SBL reactor experiments, are needed for a definitive resolution. Over the past five years, SBL reactor experiments performing oscillation searches using relative spectral measurements have collected a considerable amount of valuable data.
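The robustness of such relative measurements can be illustrated with a toy numerical sketch (the flux model, normalization values, and oscillation parameters below are all hypothetical): the ratio of event rates at two baselines cancels any overall flux mis-normalization, leaving only the oscillation-induced distortion.

```python
import math

def p_ee(L_m, E_MeV, sin2_2theta, dm2_eV2):
    """Two-flavor (3+1) electron-antineutrino survival probability."""
    return 1.0 - sin2_2theta * math.sin(1.267 * dm2_eV2 * L_m / E_MeV) ** 2

def flux(E_MeV, norm):
    """Toy reactor spectrum with an arbitrary (possibly wrong) normalization."""
    return norm * math.exp(-E_MeV / 2.0)

E = 4.0                   # MeV, typical IBD prompt-energy scale
L_near, L_far = 7.0, 9.0  # m, roughly the PROSPECT-II baseline span
amp, dm2 = 0.1, 3.0       # hypothetical oscillation parameters

for norm in (1.0, 0.94, 0.5):  # different assumed flux normalizations
    near = flux(E, norm) * p_ee(L_near, E, amp, dm2)
    far = flux(E, norm) * p_ee(L_far, E, amp, dm2)
    # The baseline-to-baseline ratio is independent of the normalization:
    print(f"norm={norm:4.2f}  far/near = {far / near:.6f}")
```

All three normalizations yield the identical far/near ratio, which is why spectral-ratio searches do not rely on the absolute flux prediction.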
A combined analysis~\cite{Berryman:2021yan} using data from these SBL experiments--including PROSPECT~\cite{PROSPECT:2018dtt}, STEREO~\cite{STEREO:2019ztb}, NEOS~\cite{NEOS:2016wee}, DANSS~\cite{DANSS:2018fnn}, and Neutrino-4~\cite{NEUTRINO-4:2018huq}--shows no strong evidence of sterile neutrino oscillations at the eV scale. The use of relative oscillation searches for this combined fit makes it robust against reactor modeling uncertainties. As shown in Fig.~\ref{fig:exclusions}, while the combination of these experiments excludes major portions of the sterile neutrino parameter space, a sizeable fraction of the RAA still persists. Moreover, these results are compatible with the gallium results under a 3+1 sterile neutrino model at similar oscillation frequencies. Hence, more data covering $\Delta m^2${} $>$ 1~eV$^2$ are needed to fully explore this parameter space. The first data release of the MicroBooNE{} experiment has not shown indications of $\nu_e$ appearance from $\nu_{\mu} \rightarrow \nu_{e}$ oscillations. Nevertheless, the presence of intrinsic $\nu_{e}$ in the beam allows for a $\nu_{e}$ disappearance search with this dataset. One such preliminary analysis~\cite{Denton:2021czb} performed using the MicroBooNE{} data release hints at 2.2$\sigma$ evidence of sterile neutrinos in a similar parameter space as the RAA and the GA, with a best-fit point at $\sin^2 2\theta_{14} = 0.30$ and $\Delta m^2_{41} = 1.42$~eV$^2$. A more rigorous, fully consistent 3+1 neutrino oscillation approach~\cite{Arguelles:2021meu} using the same MicroBooNE{} dataset but including the official MicroBooNE{} covariance matrix\footnote{Note that neither of these analyses was performed by the MicroBooNE{} collaboration and the outcomes of the official analysis may vary slightly.} that accounts for correlated systematic uncertainties sees no hints of oscillations. The results from both analyses are shown in the right panel of Fig.~\ref{fig:exclusions}.
This analysis excludes portions of the parameter space suggested by the MiniBooNE{}, reactor, and gallium anomalies. The final MicroBooNE{} dataset, planned to be roughly twice the size of the currently analyzed dataset, is expected to improve the experiment's coverage, but significant portions of the $\sin^2\theta_{ee}$ parameter space will remain unexplored. The presence of $\nu_{e}$ in the MicroBooNE{} $\nu_{\mu}$ beam produces degenerate effects between $\nu_{e}$ disappearance and $\nu_{e}$ appearance, complicating the interpretations of sterile neutrino oscillation searches. These results highlight the importance of a flavor-pure neutrino source and the need for complementary sterile neutrino searches that can fully address the parameter space suggested by all the anomalies shown in Fig.~\ref{fig:exclusions}. Looking at the broader picture, the MicroBooNE{} results so far do not resolve the decades-long MiniBooNE{}~\cite{MicroBooNE:2021zai} and LSND~\cite{LSND:1997vun,LSND:2001aii} anomalous results. Moreover, the reconciliation of the LSND, MiniBooNE{}, and MicroBooNE{} results demands invoking a combination of multiple non-minimal BSM models. The picture gets even more complicated when datasets from reactor and gallium experiments are included. A key point to note is that while the gallium experiments have so far only been able to probe a rate deficit and cannot disambiguate between effects arising from oscillations or an unknown production effect, relative reactor searches have the powerful capability to directly search for the propagation effect induced by neutrino oscillations. Ultimately, the consolidation of these paradoxical results necessitates multiple complementary probes to disentangle multiple competing BSM effects. Despite significant experimental, theoretical, and phenomenological progress in the reactor, gallium, and long baseline sectors, a consistent description of the neutrino picture has not yet emerged.
The combined picture of all the anomalous results cannot be fully explained using a 3+1 sterile neutrino picture, highlighting the need for multiple complementary efforts to comprehensively probe the anomalies.
\section{Summary}
Short-baseline reactor experiments have been very successful in probing low-mass ($<$1~eV$^2$) sterile neutrinos, though sensitivity to the high-$\Delta m^2${} region remains limited. MicroBooNE{} and BEST have generated renewed excitement about the possibility of high-$\Delta m^2${} sterile neutrinos. Efforts to interpret these results have demonstrated the need for new and enhanced data that can probe this region. The KATRIN experiment is beginning to probe the $>10$~eV$^2$ region and the $\theta_{13}$ reactor experiments have effectively covered the low-$\Delta m^2$ region, leaving an opportunity for short-baseline reactor-based experiments to probe for 1--10~eV$^2$ mass splittings. We highlight the unique contributions that the recently proposed PROSPECT-II physics program can make to this exciting landscape. By rapidly deploying a robust detector, it is possible to explore this region for new physics on a two-year timeline.
\section*{Acknowledgements}
The PROSPECT experiment is supported by the following sources: US Department of Energy (DOE) Office of Science, Office of High Energy Physics under Award Nos. DE-SC0016357 and DE-SC0017660 to Yale University, under Award No. DE-SC0017815 to Drexel University, under Award No. DE-SC0010504 to University of Hawaii, under Award No. DE-SC0008347 to Illinois Institute of Technology, under Award No. DE-SC0016060 to Temple University, under Contract No. DE-SC0012704 to Brookhaven National Laboratory, and under Work Proposal Number SCW1504 to Lawrence Livermore National Laboratory. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and by Oak Ridge National Laboratory under Contract DE-AC05-00OR22725.
Additional funding for the experiment was provided by the Heising-Simons Foundation under Award No. \#2016-117 to Yale University. J.G. is supported through the NSF Graduate Research Fellowship Program and A.C. performed work under appointment to the Nuclear Nonproliferation International Safeguards Fellowship Program sponsored by the National Nuclear Security Administration’s Office of International Nuclear Safeguards (NA-241). This work was also supported by the Canada First Research Excellence Fund (CFREF), the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery program under grant \#RGPIN-2018-04989, and the Province of Ontario. We further acknowledge support from Yale University, the Illinois Institute of Technology, Temple University, Brookhaven National Laboratory, the Lawrence Livermore National Laboratory LDRD program, the National Institute of Standards and Technology, and Oak Ridge National Laboratory. We gratefully acknowledge the support and hospitality of the High Flux Isotope Reactor and Oak Ridge National Laboratory, managed by UT-Battelle for the U.S. Department of Energy.
\section*{Executive Summary}
Nuclear reactors provide the highest-intensity source of pure electron-type neutrinos available on earth. Reactor neutrino experiments have played a central role in developing our current understanding of the three-neutrino paradigm and in establishing the current era of precision neutrino physics. Precision measurements of the flavor-pure antineutrino flux from reactors are one way to search for new physics by probing both the physics of neutrino oscillations and the production mechanism of reactor antineutrinos. Unique features of reactor neutrinos mean that, in the years to come, these experiments will continue to play a leading role in resolving the global neutrino picture.
The PROSPECT experiment has substantially addressed the original `Reactor Antineutrino Anomaly' by performing a high-resolution spectrum measurement from an enriched compact reactor core and a reactor model-independent sterile neutrino oscillation search based on the unique spectral distortions that eV$^2$-scale sterile neutrinos would impart. But as the field has evolved, the current short-baseline (SBL) landscape supports many complex phenomenological interpretations, establishing a need for complementary experimental approaches to resolve the situation. While the global suite of SBL reactor experiments, including PROSPECT, has probed much of the sterile neutrino parameter space, a large region above 1~eV$^2$ remains unaddressed. Recent results from BEST confirm the Gallium Anomaly, increasing its significance to nearly 5$\sigma$, with sterile neutrinos providing a possible explanation of this anomaly. Separately, the MicroBooNE{} exclusion of electron-like signatures causing the MiniBooNE{} low-energy excess does not eliminate the possibility of sterile neutrinos as an explanation. In fact, MicroBooNE{} potentially indicates an oscillation-based deficit in the electron neutrino channel. Focusing specifically on the future use of reactors as a neutrino source for beyond-the-Standard-Model (BSM) physics and applications, higher-precision spectral measurements still have a role to play. These recent results have created a confusing landscape, and new data are required to disentangle these seemingly contradictory measurements. To directly probe \ensuremath{\overline{\nu}_{e}}{} disappearance from high-$\Delta m^2${} sterile neutrinos, the PROSPECT collaboration proposes to build an upgraded and improved detector, PROSPECT-II. It features an evolutionary detector design which can be constructed and deployed within one year and can deliver impactful physics results with as little as one calendar year of data.
\section{A Precision Reactor Oscillation and SPECTrum Experiment}
The search for eV$^2$-scale sterile neutrinos is an active area of neutrino physics that is well motivated by theory and by experimental data. In the reactor neutrino sector in the early 2010s, a roughly 6\% deficit was found in experiments measuring inverse beta decay (IBD) \ensuremath{\overline{\nu}_{e}}{} interactions from reactors compared to the then recently improved reactor neutrino flux models~\cite{Huber:2011wv,Mueller:2011nm}. This discrepancy, referred to as the Reactor Antineutrino Anomaly (RAA), hinted at the possible existence of a sterile neutrino flavor~\cite{Mention:2011rk}. Additionally, radiochemical solar neutrino experiments based on gallium found a $\sim$3$\sigma$ deficit of detected $\nu_e$ interactions from nearby intense radioactive sources, referred to as the Gallium Anomaly (GA)~\cite{Giunti:2010zu}. These two anomalies have prompted an intense global experimental campaign using MeV-scale neutrino sources to test for the existence of sterile neutrinos. Built in part to provide a definitive search for sterile neutrino oscillations at very short baselines, the PROSPECT experiment was supported by the Intermediate Neutrino Research Program~\cite{Adams:2015ogl} that followed from the 2014 P5 report. PROSPECT provided strong constraints on sterile neutrinos over significant portions of the phase space suggested by the RAA as well as a high-resolution measurement of neutrinos from an HEU core. While successful in supporting many of the collaboration's science goals, the PROSPECT detector suffered from technical problems which cut short its useful life.
As described below, the short baseline oscillation landscape continues to evolve, motivating the PROSPECT collaboration to preparing for an evolutionary detector upgrade (PROSPECT-II) that builds from the success of the experiment so far and leverages that existing investment. The PROSPECT-II upgrade, which is described in detail in Ref.~\cite{Andriamirado:2021qjc}, resolves technical issues that abbreviated the first run, introduces design features that improve robustness and time-stability, and extends both the depth and the scope of the experiment’s physics reach. \section{Recent results further complicate the Short Baseline Oscillation Landscape} A number of experiments including PROSPECT, STEREO, NEOS, DANSS, and Neutrino-4 have probed $\ensuremath{\overline{\nu}_{e}}$ oscillations at very short baselines from reactors~\cite{PROSPECT:2020sxr,STEREO:2019ztb,NEOS:2016wee,DANSS:2018fnn,NEUTRINO-4:2018huq,Serebrov:2020kmd}. Each experiment uses model-independent spectral ratio measurements which directly search for energy and baseline dependent spectral distortions that are unique to sterile neutrino oscillations. With the exception of Neutrino-4, the experiments' results have been found to be statistically consistent with the three neutrino model. Neutrino-4 reports evidence for sterile oscillation with 2.9$\sigma$ significance\footnote{For consistency, this note uses the published Neutrino-4 results from Ref.~\cite{Serebrov:2020kmd}.}, but is in direct tension with the other reactor experiments and the analysis has drawn criticism~\cite{Giunti:2021iti,PROSPECT:2020raz,Coloma:2020ajw}. Overall, these direct oscillometry experiments have excluded large portions of low-$\Delta$m$^2$ preferred regions for RAA and GA. As the RAA is based on an observed deficit between the predicted and measured \ensuremath{\overline{\nu}_{e}}{} fluxes at multiple reactor sites, it depends on the accuracy of reactor flux predictions. 
These predictions are based on neutron-induced fission beta spectra collected by Schreckenbach et al. at the Institut Laue-Langevin (ILL) and converted into neutrino spectra by Huber~\cite{Huber:2011wv} and Mueller~\cite{Mueller:2011nm} (referred to as the HM model). A recent Kurchatov Institute (KI) measurement of the ratio of the cumulative $\beta$-decay spectrum between $^{235}$U{} and $^{239}$Pu{} is lower than the ILL/HM value by 5.4\%~\cite{Kopeikin:2021rnb}. It was suggested by Kopeikin et al., that this discrepancy is likely due to an overestimation of the absolute normalization of $^{235}$U{} at ILL. Under this assumption, the re-evaluated flux (KI model) based on the modified normalization produces IBD yields that agree with reactor flux and evolution measurements within 1$\sigma$~\cite{Kopeikin:2021ugh}, reducing the significance of the original motivation for the RAA. Though is appears likely that a normalization error contributed to the original RAA, the KI model does not preclude sterile neutrinos from existing in this parameter space as shown in Fig.~\ref{fig:exclusions}. New results from SBL reactor experiments, BEST, and MicroBooNE{} have brought new information and interest in a potential eV-scale sterile neutrino. In contrast to the RAA, the GA requires no reactor flux prediction or knowledge. The initial gallium experiments SAGE and GALLEX were not purpose-built to probe for sterile neutrinos. To directly probe the GA, the Baksan Experiment on Sterile Transitions (BEST) measured the rate of neutrino interactions in a layered gallium detector, with a high intensity $\nu_e$ source in the center~\cite{Barinov:2021asz}. To search for oscillations, the rate of production of $^{71}$Ge is measured in inner and outer volumes and compared to expected results. The BEST results show a $\sim$20\% deficit in both volumes, strengthening the significance of the GA, but not providing any indication as to whether the deficit is oscillatory in nature. 
\section*{Executive Summary} Nuclear reactors provide the highest intensity source of pure electron-type neutrinos available on earth. Reactor neutrino experiments have played a central role in developing our current understanding of the three neutrino paradigm and in establishing the current era of precision neutrino physics. Precision measurements of the flavor-pure antineutrino flux from reactors are one way to search for new physics by probing both the physics of neutrino oscillations and the production mechanism of reactor antineutrinos. In the years to come reactor neutrino experiments will continue to play an important role in resolving the global neutrino picture. 
Unique features of reactor neutrinos mean that these experiments can continue to play a leading role in this effort. The PROSPECT experiment has substantially addressed the original `Reactor Antineutrino Anomaly' by performing a high-resolution spectrum measurement from an enriched compact reactor core and a reactor model-independent sterile neutrino oscillation search based on the unique spectral distortions the existence of eV$^2$-scale sterile neutrinos would impart. But as the field has evolved, the current short-baseline (SBL) landscape supports many complex phenomenological interpretations, establishing a need for complementary experimental approaches to resolve the situation. While the global suite of SBL reactor experiments, including PROSPECT, has probed much of the sterile neutrino parameter space, a large region above 1 eV$^2$ remains unaddressed. Recent results from BEST confirm the Gallium Anomaly, increasing its significance to nearly 5$\sigma$, with sterile neutrinos providing a possible explanation of this anomaly. Separately, the MicroBooNE{} exclusion of electron-like signatures causing the MiniBooNE{} low-energy excess does not eliminate the possibility of sterile neutrinos as an explanation. In fact, MicroBooNE{} potentially indicates an oscillation-based deficit in the electron neutrino channel. Focusing specifically on the future use of reactors as a neutrino source for beyond-the-standard-model (BSM) physics and applications, higher-precision spectral measurements still have a role to play. These recent results have created a confusing landscape which requires new data to disentangle these seemingly contradictory measurements. To directly probe \ensuremath{\overline{\nu}_{e}}{} disappearance from high-$\Delta m^2${} sterile neutrinos, the PROSPECT collaboration proposes to build an upgraded and improved detector, PROSPECT-II. 
It features an evolutionary detector design which can be constructed and deployed within one year and can deliver impactful physics with as little as one calendar year of data. \section{A Precision Reactor Oscillation and SPECTrum Experiment} The search for eV$^2$-scale sterile neutrinos is an active area of neutrino physics that is well motivated by theory and by experimental data. In the reactor neutrino sector in the early 2010s, a roughly 6\% deficit was found in the experiments measuring inverse beta decay (IBD) \ensuremath{\overline{\nu}_{e}}{} interactions from reactors compared to the then recently improved reactor neutrino flux models~\cite{Huber:2011wv,Mueller:2011nm}. This discrepancy, referred to as the Reactor Antineutrino Anomaly (RAA), hinted at the possible existence of a sterile neutrino flavor~\cite{Mention:2011rk}. Additionally, radiochemical solar neutrino experiments based on gallium found a $\sim$3$\sigma$ deficit of detected $\nu_e$ interactions from nearby intense radioactive sources, referred to as the Gallium Anomaly (GA)~\cite{Giunti:2010zu}. These two anomalies have prompted an intense global experimental campaign using MeV-scale neutrino sources to test for the existence of sterile neutrinos. Built in part to provide a definitive search for sterile neutrino oscillations at very short baselines, the PROSPECT experiment was supported by the Intermediate Neutrino Research Program~\cite{Adams:2015ogl} that followed from the 2014 P5 report. PROSPECT provided strong constraints on sterile neutrinos over significant portions of the phase space suggested by the RAA as well as a high-resolution measurement of neutrinos from an HEU core. While successful in supporting many of the collaboration's science goals, the PROSPECT detector suffered from technical problems which cut short its useful life. 
As described below, the short baseline oscillation landscape continues to evolve, motivating the PROSPECT collaboration to prepare for an evolutionary detector upgrade (PROSPECT-II) that builds from the success of the experiment so far and leverages that existing investment. The PROSPECT-II upgrade, which is described in detail in Ref.~\cite{Andriamirado:2021qjc}, resolves technical issues that abbreviated the first run, introduces design features that improve robustness and time-stability, and extends both the depth and the scope of the experiment's physics reach. \section{Recent results further complicate the Short Baseline Oscillation Landscape} A number of experiments, including PROSPECT, STEREO, NEOS, DANSS, and Neutrino-4, have probed $\ensuremath{\overline{\nu}_{e}}$ oscillations at very short baselines from reactors~\cite{PROSPECT:2020sxr,STEREO:2019ztb,NEOS:2016wee,DANSS:2018fnn,NEUTRINO-4:2018huq,Serebrov:2020kmd}. Each experiment uses model-independent spectral ratio measurements which directly search for energy- and baseline-dependent spectral distortions that are unique to sterile neutrino oscillations. With the exception of Neutrino-4, the experiments' results have been found to be statistically consistent with the three neutrino model. Neutrino-4 reports evidence for sterile oscillation with 2.9$\sigma$ significance\footnote{For consistency, this note uses the published Neutrino-4 results from Ref.~\cite{Serebrov:2020kmd}.}, but is in direct tension with the other reactor experiments and the analysis has drawn criticism~\cite{Giunti:2021iti,PROSPECT:2020raz,Coloma:2020ajw}. Overall, these direct oscillometry experiments have excluded large portions of the low-$\Delta$m$^2$ preferred regions for the RAA and GA. As the RAA is based on an observed deficit between the predicted and measured \ensuremath{\overline{\nu}_{e}}{} fluxes at multiple reactor sites, it depends on the accuracy of reactor flux predictions. 
These predictions are based on neutron-induced fission beta spectra collected by Schreckenbach et al. at the Institut Laue-Langevin (ILL) and converted into neutrino spectra by Huber~\cite{Huber:2011wv} and Mueller~\cite{Mueller:2011nm} (referred to as the HM model). A recent Kurchatov Institute (KI) measurement of the ratio of the cumulative $\beta$-decay spectrum between $^{235}$U{} and $^{239}$Pu{} is lower than the ILL/HM value by 5.4\%~\cite{Kopeikin:2021rnb}. Kopeikin et al. suggested that this discrepancy is likely due to an overestimation of the absolute normalization of $^{235}$U{} at ILL. Under this assumption, the re-evaluated flux (KI model) based on the modified normalization produces IBD yields that agree with reactor flux and evolution measurements within 1$\sigma$~\cite{Kopeikin:2021ugh}, reducing the significance of the original motivation for the RAA. Though it appears likely that a normalization error contributed to the original RAA, the KI model does not preclude sterile neutrinos from existing in this parameter space, as shown in Fig.~\ref{fig:exclusions}. New results from SBL reactor experiments, BEST, and MicroBooNE{} have brought new information and renewed interest in a potential eV-scale sterile neutrino. In contrast to the RAA, the GA requires no reactor flux prediction or knowledge. The initial gallium experiments SAGE and GALLEX were not purpose-built to probe for sterile neutrinos. To directly probe the GA, the Baksan Experiment on Sterile Transitions (BEST) measured the rate of neutrino interactions in a layered gallium detector with a high-intensity $\nu_e$ source in the center~\cite{Barinov:2021asz}. To search for oscillations, the rate of production of $^{71}$Ge is measured in the inner and outer volumes and compared to expected results. The BEST results show a $\sim$20\% deficit in both volumes, strengthening the significance of the GA, but not providing any indication as to whether the deficit is oscillatory in nature. 
Neutrino experiments using accelerators have provided intriguing short-baseline anomalies, and remain a highly active avenue for probing sterile oscillations. In the 1990s and 2000s, accelerator neutrino measurements by LSND and MiniBooNE{} found an excess of $\nu_e$-like and \ensuremath{\overline{\nu}_{e}}{}-like events from predominantly $\nu_\mu$ sources, with the MiniBooNE{} excess eventually established with 4.8$\sigma$ statistical significance~\cite{LSND:2001aii,MiniBooNE:2020pnu}. Potential explanations of these results have involved sterile neutrinos, other BSM physical phenomena, or some combination of the two. The corresponding sterile oscillation for this anomaly is in a similar region of the $\Delta m^2${} parameter space as the RAA and GA, increasing interest in a sterile neutrino of this scale. Recent results from MicroBooNE{} using a beam-line and baseline very similar to MiniBooNE{} show no such excess~\cite{MicroBooNE:2021rmx}, though their initial sensitivity does not cover the entirety of the MiniBooNE{} suggested region. Interestingly, MicroBooNE{} observes a modest deficit in measured $\nu_e$~\cite{MicroBooNE:2021zai}, which some interpret as a hint of BSM physics~\cite{Denton:2021czb}. \section{Summary} Short-baseline reactor experiments have been very successful in probing low-mass ($<$1~eV$^2$) sterile neutrinos, though sensitivity to the high-$\Delta m^2${} region remains limited. MicroBooNE{} and BEST have generated renewed excitement about the possibility of high-$\Delta m^2${} sterile neutrinos. Efforts to interpret these results have demonstrated the need for new and enhanced data that can probe this region. The KATRIN experiment is beginning to probe the $>10$~eV$^2$ region and the $\theta_{13}$ reactor experiments have effectively covered the low-$\Delta m^2$ region, leaving an opportunity for short-baseline reactor-based experiments to probe for 1--10~eV$^2$ mass-splittings. 
We highlight the unique contributions that the recently proposed PROSPECT-II physics program can make to this exciting landscape. By rapidly deploying a robust detector, it is possible to explore this region for new physics in a two-year timeline. \section*{Acknowledgements} The PROSPECT experiment is supported by the following sources: US Department of Energy (DOE) Office of Science, Office of High Energy Physics under Award No. DE-SC0016357 and DE-SC0017660 to Yale University, under Award No. DE-SC0017815 to Drexel University, under Award No. DE-SC0010504 to University of Hawaii, under Award No. DE-SC0008347 to Illinois Institute of Technology, under Award No. DE-SC0016060 to Temple University, under Contract No. DE-SC0012704 to Brookhaven National Laboratory, and under Work Proposal Number SCW1504 to Lawrence Livermore National Laboratory. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and by Oak Ridge National Laboratory under Contract DE-AC05-00OR22725. Additional funding for the experiment was provided by the Heising-Simons Foundation under Award No. \#2016-117 to Yale University. J.G. is supported through the NSF Graduate Research Fellowship Program and A.C. performed work under appointment to the Nuclear Nonproliferation International Safeguards Fellowship Program sponsored by the National Nuclear Security Administration’s Office of International Nuclear Safeguards (NA-241). This work was also supported by the Canada First Research Excellence Fund (CFREF), and the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery program under grant \#RGPIN-2018-04989, and Province of Ontario. 
We further acknowledge support from Yale University, the Illinois Institute of Technology, Temple University, Brookhaven National Laboratory, the Lawrence Livermore National Laboratory LDRD program, the National Institute of Standards and Technology, and Oak Ridge National Laboratory. We gratefully acknowledge the support and hospitality of the High Flux Isotope Reactor and Oak Ridge National Laboratory, managed by UT-Battelle for the U.S. Department of Energy. \section{Complementary Experimental Approaches to resolve potential phenomenological explanations} \begin{figure}[h] \centering \includegraphics[trim = 0cm 0.0cm 0.5cm 0.0cm, clip=true, height=2.7in]{figures/Exclusion_MeV1.pdf} \includegraphics[trim = 0cm 0.0cm 0.5cm 0.0cm, clip=true, height=2.7in]{figures/Exclusion_GeV1.pdf} \caption{\textbf{Left:} Comparison of the suggested parameter space from RAA (HM model)~\cite{Giunti:2021kab} and Neutrino-4~\cite{Serebrov:2020kmd} to the allowed regions from the RAA (KI model)~\cite{Giunti:2021kab} and excluded parameter regions from global fits of spectral-ratio reactor measurements~\cite{Berryman:2021yan} and KATRIN experiment~\cite{Aker:2022ldk}. \textbf{Right:} Comparison of the suggested parameter space from the gallium anomaly~\cite{Barinov:2021mjj} and two $\nu_{e}$-disappearance analyses using MicroBooNE{} data, one hinting~\cite{Denton:2021czb} at oscillations and the other~\cite{Arguelles:2021meu} excluding a small portion of the parameter space, to the excluded parameter regions from global fits of spectral-ratio reactor measurements~\cite{Berryman:2021yan}. Both cases show regions of interesting parameter space with $\Delta m^2 > 5 $ eV$^{2}$ yet to be explored.} \label{fig:exclusions} \end{figure} It is worth considering the aforementioned experimental results in a broader phenomenological context to inform future experimental efforts. 
There is an increasing amount of evidence suggesting that the source of the RAA is, at least in part, mismodeling of the reactor \ensuremath{\overline{\nu}_{e}}{} spectra--primarily driven by $^{235}$U{}. This interpretation is supported by the improved agreement between the measured isotopic IBD yields and the updated summation model (the Estienne-Fallot or EF model) based on the revised nuclear databases~\cite{Estienne:2019ujo}, along with the Daya Bay~\cite{DayaBay:2017jkb,DayaBay:2019yxq} and RENO~\cite{RENO:2018pwo} fuel evolution results and the re-evaluated KI-based conversion model. Combined fits of the reactor antineutrino yields and the Daya Bay and RENO evolution data-sets suggest a persistent RAA at $\sim 3\sigma$ when compared to the ILL/HM model, while the anomaly reduces to $\sim 1\sigma$ when compared to the KI and EF models~\cite{Giunti:2021kab}. When considered in the context of a 3+1 sterile neutrino hypothesis, the EF and KI models have no statistically significant preference for eV-scale oscillations. Though the sterile neutrino explanation of the RAA is diminished with the updated models, the combined reactor rate and evolution data do not preclude the presence of sterile neutrinos in this region, as shown on the left panel of Fig.~\ref{fig:exclusions}. Viable hybrid models exist that could accommodate incorrect reactor neutrino flux predictions while also allowing oscillations to sterile neutrinos~\cite{Giunti:2019qlt}. Rate and flux-evolution measurements alone are not sufficient to unambiguously resolve the reactor anomaly, due in part to reactor power uncertainties and the complex uncertainties in predicting neutrino spectra from fission. Relative spectral measurements, such as those deployed in SBL reactor experiments, are needed for a definitive resolution. Over the past five years, SBL reactor experiments performing oscillation searches using relative spectral measurements have collected a considerable amount of valuable data. 
A combined analysis~\cite{Berryman:2021yan} using data from these SBL experiments--including PROSPECT~\cite{PROSPECT:2018dtt}, STEREO~\cite{STEREO:2019ztb}, NEOS~\cite{NEOS:2016wee}, DANSS~\cite{DANSS:2018fnn}, and Neutrino-4~\cite{NEUTRINO-4:2018huq}--shows no strong evidence of sterile neutrino oscillations at the eV scale. The use of relative oscillation searches for this combined fit makes it robust against reactor modeling uncertainties. As shown in Fig.~\ref{fig:exclusions}, while the combination of these experiments excludes major portions of the sterile neutrino parameter space, a sizeable fraction of the RAA-preferred region still persists. Moreover, these results are compatible with the gallium results under a 3+1 sterile neutrino model at similar oscillation frequencies. Hence, more data covering $\Delta m^2${} $>$ 1 eV$^2$ are needed to fully explore this parameter space. The first data release of the MicroBooNE{} experiment has not shown indications of $\nu_e$ appearance from $\nu_{\mu} \rightarrow \nu_{e}$ oscillations. Nevertheless, the presence of intrinsic $\nu_{e}$ in the beam allows for a $\nu_{e}$ disappearance search with this dataset. One such preliminary analysis~\cite{Denton:2021czb} performed using MicroBooNE{}'s data release hints at 2.2$\sigma$ evidence for sterile neutrinos in a similar parameter space as the RAA and the GA, with a best-fit point at $\sin^2(2\theta_{14}) = 0.30$ and $\Delta m^2_{41} = 1.42$~eV$^2$. A more rigorous, fully consistent 3+1 neutrino oscillation analysis~\cite{Arguelles:2021meu} using the same MicroBooNE{} dataset, but including the official MicroBooNE{} covariance matrix\footnote{Note that neither of these analyses was performed by the MicroBooNE{} collaboration and the outcomes from the official analysis may vary slightly.} that accounts for correlated systematic uncertainties, sees no hints of oscillations. The results from both analyses are shown in the right panel of Fig.~\ref{fig:exclusions}. 
This analysis excludes portions of parameter space suggested by the MiniBooNE{}, reactor, and gallium anomalies. The final MicroBooNE{} dataset, planned to be $\sim$2$\times$ the size of the currently analyzed dataset, is expected to improve the experiment's coverage, but significant portions of the $\sin^2\theta_{ee}$ parameter space will remain unexplored. The presence of $\nu_{e}$ in the MicroBooNE{} $\nu_{\mu}$ beam produces degenerate effects between $\nu_{e}$ disappearance and $\nu_{e}$ appearance, complicating the interpretation of sterile neutrino oscillation searches. These results highlight the importance of a flavor-pure neutrino source and the need for complementary sterile neutrino searches that can fully address the parameter space suggested by all anomalies shown in Fig.~\ref{fig:exclusions}. Looking at the broader picture, the MicroBooNE{} results so far do not resolve the decades-long MiniBooNE{}~\cite{MicroBooNE:2021zai} and LSND~\cite{LSND:1997vun,LSND:2001aii} anomalous results. Moreover, the reconciliation of the LSND, MiniBooNE{}, and MicroBooNE{} results demands a combination of multiple non-vanilla BSM models. The picture gets even more complicated when datasets from reactor and gallium experiments are included. A key point to note is that while the gallium experiments have so far only been able to probe the deficit and cannot disambiguate between effects arising from oscillations and an unknown production effect, relative reactor searches have the powerful capability to directly search for the propagation effect induced by neutrino oscillations. Ultimately, the consolidation of these paradoxical results necessitates multiple complementary probes to disentangle the competing BSM effects. Despite significant experimental, theoretical, and phenomenological progress in the reactor, gallium, and long-baseline sectors, a consistent description of the neutrino picture has not yet emerged. 
The combined picture of all the anomalous results cannot be fully explained using a 3+1 sterile neutrino model, highlighting the need for multiple complementary efforts to comprehensively probe the anomalies.
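The energy- and baseline-dependent distortion that all of these disappearance searches target follows, in the two-flavor short-baseline approximation, $P_{ee} = 1 - \sin^2(2\theta_{14})\,\sin^2(1.27\,\Delta m^2_{41} L/E)$. A minimal sketch, using the preliminary best-fit values quoted above and purely illustrative reactor-like baselines and energies (not any experiment's actual geometry):

```python
import numpy as np

def p_ee(L_m, E_MeV, sin2_2theta14, dm2_41_eV2):
    """3+1 electron-(anti)neutrino survival probability, two-flavor SBL limit."""
    return 1.0 - sin2_2theta14 * np.sin(1.27 * dm2_41_eV2 * L_m / E_MeV) ** 2

# Best-fit values from the preliminary MicroBooNE-based analysis quoted
# above; baselines and energy below are illustrative reactor-like numbers.
sin2_2t, dm2 = 0.30, 1.42        # mixing amplitude, splitting in eV^2
L = np.array([7.0, 9.0, 12.0])   # baselines in metres
E = 4.0                          # prompt-energy scale in MeV
probs = p_ee(L, E, sin2_2t, dm2)
# A ratio of spectra measured at two baselines cancels the reactor flux
# model, isolating this L/E-dependent distortion.
print(probs)
```

Relative (baseline-to-baseline) measurements of this quantity are what make the SBL reactor searches independent of the flux models discussed above.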
\section{Introduction}\label{sec:introduction} \IEEEPARstart{W}{e} take \textit{lifestyle} to be \textit{the way in which a person or group lives, including the interests, opinions, behaviors, and behavioral orientations.} Understanding lifestyle is key to gaining insight into the physical and mental aspects of individuals, social groups and cultures. Health, for example, is highly related to one's lifestyle~\cite{cox1987health,lee1999learning}. Cultural boundaries can be discovered from people's ways of living, such as pace of life, eating and drinking habits and so on~\cite{garcia2013cultural,silva2014you}. Researchers have also discovered correlations between health and individuals' daily movements as estimated from cellphone GPS tags on social media~\cite{sadilek2013modeling}. In this work, we study the differences in lifestyles between cities of different sizes. A popular stereotype is that life in big cities is fast-paced, high-pressure, and consistently exciting, while life in small cities is calmer and less varied due to a lower population density and a more limited selection of recreational venues. We select the Greater New York City area (NYC) as being representative, for our purposes, of big cities in the US. For smaller cities, we select the Greater Rochester area (ROC) as representative for two main reasons: First, the size of Rochester (0.2 million) is close to the median size (0.16 million) of cities in the US, approximately 40 times smaller than NYC. Second, these two areas are located close to each other (both in the north-eastern US). Geographic closeness generally leads to similarity of climate and culture, which helps eliminate confounding factors that may lead to differences in lifestyle behaviors unrelated to city size. 
In contrast to traditional research investigating lifestyle patterns, where data collection methods include questionnaires and telephone interviews~\cite{budesa2008gender,randler2008morningness,singapore}, we leverage data from social media to make inferences about people's lifestyles. The wide adoption of social media brings researchers a new opportunity to study natural, unconstrained human behavior at very large scales. Foursquare is one of the most popular Location Based Social Networks (LBSNs), holding 5 billion check-in records for 55 million users worldwide\footnote{\url{https://foursquare.com/about}}. This offers us a rich data source for conducting mobility, behavior and lifestyle studies. We consider temporal and spatial lifestyle in this work. The temporal dimension of a person's lifestyle is assumed to correlate with his/her work-rest ratio in daily activities. In the primary literature on circadian typology (CT), people are classified into one of three categories: morning-types, evening-types, and neither-types~\cite{horne1975self}. In the CT literature, individuals are modeled by just one of these \textit{types}. In our present work, work-rest behavioral patterns are instead considered to be a weighted combination of all three temporal \textit{lifestyles}: ``Night Owl'', ``Early Bird'' and ``Intermediate''. To avoid assigning a person to a lifestyle in an arbitrary or qualitative fashion, we employ non-negative matrix factorization (NMF) to discover three latent patterns of temporal activity. The extracted patterns offer precise definitions of activity levels associated with specific lifestyles and align with our assumptions about human work-rest habits. A spatial dimension is used to describe lifestyles according to locational behavior. For example, one primitive lifestyle pattern is defined by frequent visits to POIs (points of interest) such as bars and music venues, while another is defined by visits to parks, art galleries and museums. 
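The temporal factorization described above can be illustrated with a minimal sketch: scikit-learn's NMF applied to a synthetic user-by-hour check-in matrix. The data, rank, and peak hours below are invented for illustration and are not the dataset used in this work:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic check-in counts (300 users x 24 hours): each user mixes three
# invented ground-truth diurnal profiles; none of this is real data.
hours = np.arange(24)
profiles = np.stack([
    np.exp(-0.5 * ((hours - 8) / 2.0) ** 2),   # "Early Bird": morning peak
    np.exp(-0.5 * ((hours - 22) / 2.0) ** 2),  # "Night Owl": late-night peak
    np.exp(-0.5 * ((hours - 15) / 4.0) ** 2),  # "Intermediate": broad afternoon
])
weights = rng.dirichlet(np.ones(3), size=300)
X = rng.poisson(20 * weights @ profiles).astype(float)

# Rank-3 NMF: rows of H are the latent temporal lifestyles (activity level
# per hour); rows of W give each user's non-negative mix of the three.
model = NMF(n_components=3, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)
H = model.components_
print(W.shape, H.shape)
```

Because both factors are constrained to be non-negative, each user's row of $W$ can be read directly as a weighted combination of the three lifestyles, which is exactly the soft assignment motivated above.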
We then apply a clustering method to group these primitive latent patterns into more complex lifestyles that are representative of a group of individuals (e.g. students or stay-at-home parents). We find significant variance between the distributions of lifestyles in NYC and ROC. Additionally, we use third-order tensor decomposition to find composite patterns across both spatial and temporal dimensions. We extract clearly identifiable patterns of behavior, for example high school students posting during school hours, and college students frequently visiting or living on campus. This method offers promise as an efficient way of extracting complicated patterns across multiple high-dimensional spaces. The main contributions of this work are:
\begin{enumerate}
\item Use of open-source geo-tagged social media data for analyzing lifestyle patterns as a low-cost, large-scale alternative to traditional survey methods.
\item Application of matrix factor analysis to extract persistent and salient human mobility and work-rest patterns over a large population of users.
\item Application of CP tensor decomposition to discover composite spatial-temporal lifestyle patterns which are useful for understanding fluctuations in people's activity across different time ranges and locations.
\item Confirming intuitive knowledge and previous research in human activity patterns with quantitative, unsupervised data analysis.
\item Shedding light on the differences and similarities between life in big cities and life in smaller cities, quantitatively confirming many of the common perceptions about them. For example, life in big cities is more work-focused, while life in smaller cities is more home-focused; life in large cities is also more fast-paced and diverse. Furthermore, we have discovered fine-grained lifestyle descriptions that previous small-scale survey-based studies have failed to illuminate.
\end{enumerate}
For example, we extracted three types of temporal lifestyles, and report the activity level of each lifestyle over time quantitatively. \section{Related Work} \subsection{Sociology and Chronobiology} Lifestyle is well studied in sociology. The work of~\cite{singapore} suggests that lifestyle choices such as residential location, mode options, destination choices, and trip timing are constrained by household considerations. Gender differences in lifestyles also attract the interest of many researchers. Budesa et al. study the influence of gender on perceived healthy and unhealthy lifestyles, finding that gender is not an important determinant of individual perceptions about health~\cite{budesa2008gender}. Merritt et al. in~\cite{merritt2003gender} suggest that men and women have no significant difference in motor ability in daily activities. Finally, a study~\cite{von2005gender} on university students finds that female students are healthier due to less alcohol consumption and more healthy habits. Much work has been done on human work-rest habits in chronobiology and circadian typology (CT). The traditional method of studying how work-rest patterns relate to aspects of physical and mental well-being has been the morningness-eveningness questionnaire (MEQ) of~\cite{horne1976self} and variations of it~\cite{smith1989evaluation}. In the work of Horne et al., Morning-type subjects (MTs) are found to wake at a mean of 7:24am, Neither-type subjects (NTs) at 8:07am, and Evening-type subjects (ETs) at 9:18am; mean bed times for the three types are 11:26pm, 11:30pm, and 1:05am, respectively. These specific times vary across studies, leading to differing assertions about how much of the population is a member of each CT type~\cite{Taillard1999needsleep, adan2012review}. 
Randler finds a significant positive correlation between ``morningness'' tendencies and satisfaction in life~\cite{randler2008morningness}, and Monk~\cite{monk2004morningness} finds that MT individuals appear to have more regular lifestyles than ETs. A positive correlation between eveningness and depression level is reported by Hasler et al.~\cite{hasler2010morningness}. A thorough review of contemporary CT literature is available in~\cite{adan2012review}. \subsection{Social Media Analytics} In recent years researchers have successfully utilized social media in research related to lifestyle analysis. Noulas et al.~\cite{noulas2011empirical} use Foursquare data to discover the behavioral habits of residents of London. The work presented in this paper is strongly inspired by that research: we contribute stacked plots similar to those of Noulas et al., representing the relative visit frequencies of the most frequent POIs, comparing NYC (Fig.~\ref{fig:nyc_stacked}) with ROC (Fig.~\ref{fig:roc_stacked}) and weekends with weekdays within each city. Based on the contents of tweets, Sadilek et al. build a language model to detect the health condition of individuals~\cite{sadilek2013nemesis}. By relating a user's health level to other attributes, such as environmental features of the places where the user spends time as estimated from his or her tweet geotags, they estimate the influence of lifestyle on health conditions~\cite{sadilek2013modeling}. Eating and drinking habits are also key to understanding human life. In~\cite{abbar2014you}, Abbar et al. identify the names of foods in people's tweets and use them to estimate the calories people likely consume. Cranshaw et al.~\cite{cranshaw2010bridging} construct a metric called Location Entropy to measure the diversity of a POI. Sang et al. discuss people's movement session patterns~\cite{sang2015activity} based on LBSN data from China.
The eating and drinking habits of different countries and regions are investigated in~\cite{silva2014you} based on Foursquare data; the authors find that geographic closeness usually leads to closeness in eating and drinking habits. Wu et al. report an approach to modeling temporal dynamics in~\cite{wu2016unfolding}; their work shows that, besides user-item factors, temporal factors are equally important in social media popularity prediction. Other aspects of lifestyle, such as pace of life and power distance, are discussed in~\cite{garcia2013cultural}, where each life-related index is estimated via tweets collected all over the world. In~\cite{golder2011diurnal}, Golder et al. find that the negative affect (NA) of tweets sent in winter is higher than that of tweets sent in summer. Similar results are reported in~\cite{park2013mood}, in which the influence of weather on human sentiment is studied using tweets. Tensor decomposition was applied in~\cite{zheng2014diagnosing}, in which Zheng et al. decompose third-order tensors to extract noise-location compound patterns in an urban area. We employ a similar method in our work to find temporal-spatial compound patterns of lifestyles. \section{Data Set and Preprocessing} The large number of self-reported location records and wide geographic coverage make Foursquare a valuable data source for analyzing behavioral tendencies across groups of individuals. However, direct collection of users' check-ins is a nontrivial task due to the strict limits on Foursquare data download rates. As an alternative, many researchers collect Foursquare data through other social media sources that connect with Foursquare, such as Twitter~\cite{noulas2011empirical}. If a user links his or her Foursquare account with a Twitter account, then whenever s/he performs a check-in on Foursquare, a geo-tagged tweet is posted automatically. This tweet contains a link to the webpage of the venue where the user checked in via Foursquare.
In the present work, we use this method to collect users' check-in data. To avoid tweets from possible tourists, we filter out users whose tweets appeared exclusively within a period of less than 7 days. \subsection{Lifestyle Study through Social Media} We collected 233,046 Foursquare check-ins from 49,744 POIs from geo-tagged tweets in NYC, and 99,466 check-ins from 13,483 POIs in ROC. Foursquare also provides around 600 POI categories, such as Arts \& Entertainment, Home, etc. A venue can be assigned several categories, and one category can be a subset of another; for example, Foursquare may assign both American Restaurant and Restaurant to a single venue. Due to the sparsity of direct Foursquare check-ins, we chose to extend these activity records by applying a method used in~\cite{sadilek2013nemesis}: each geo-tagged tweet located within a small distance (30 meters) of a POI is counted as a check-in at that POI. Through this process, we extend the number of check-ins to 1,028,016 for NYC and 971,660 for ROC. In order to study the effect of gender on lifestyles, we employ the genderize.io API to assign a gender tag to each user~\cite{abbar2014you}. Genderize.io gives the probability of an individual being male or female given his or her username. We first filter out users who sent fewer than 10 tweets during our sampling period, obtaining 12,960 users in NYC and 10,576 users in ROC. We then feed the handles of these users' Twitter accounts into the genderize.io API and discard gender tags with low confidence (probability $< 0.8$). From this, we aggregate a total of 3,493 male- and 3,508 female-labeled users (see Table~\ref{tab:table2}).
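The 30-meter extension step described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the record formats (tuples of ids and coordinates) and function names are assumed, and the brute-force scan would be replaced by a spatial index at the real data's scale.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def extend_checkins(tweets, pois, radius_m=30.0):
    """Count every geo-tagged tweet within radius_m of a POI as a check-in.

    tweets: iterable of (user_id, lat, lon); pois: iterable of (poi_id, lat, lon).
    Returns a list of (user_id, poi_id) records.  Brute force for clarity;
    a k-d tree or geohash index would be needed for millions of tweets.
    """
    checkins = []
    for user_id, tlat, tlon in tweets:
        for poi_id, plat, plon in pois:
            if haversine_m(tlat, tlon, plat, plon) <= radius_m:
                checkins.append((user_id, poi_id))
                break  # attribute each tweet to at most one POI
    return checkins
```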
Other work has predicted users' gender using tweet contents~\cite{schwartz2013personality}, profile information, and profile pictures~\cite{quercia2011our}; however, given the complexity of these methods and the reasonable accuracy of our approach, we exclusively used the genderize.io API to assign gender labels. \begin{table} \center \begin{tabular}{ | l || c | c | } \hline & NYC & ROC \\ \hline Foursquare check-ins & 233,046 & 99,466 \\ \hline Foursquare venues & 49,744 & 13,483\\ \hline Extended check-ins & 1,028,016 & 971,660\\ \hline \hline Total \# of users & 12,960 & 10,576 \\ \hline Male users & 1,690 & 1,491 \\ \hline Female users & 1,803 & 2,017\\ \hline \end{tabular} \caption{Number of check-ins and users in our data set, by city and gender. We only assign a gender label to a user when the classification confidence is high, so the genders of many users remain unknown.}~\label{tab:table2} \vspace{-2em} \end{table} Two key points should be verified to ensure the quality of our data set: \begin{figure}[!htbp] \centering \includegraphics[width=0.9\columnwidth]{fig2/long_tailed_II.pdf} \caption{Complementary cumulative distribution functions (CCDFs) of check-in counts per POI. The plot reports, for each of the three data sets, the probability that a POI's check-in volume exceeds a given amount.}~\label{fig:long_tailed} \vspace{-1em} \end{figure} 1) Only 20\% to 25\% of Foursquare accounts are linked with Twitter~\cite{foursquare_link}, so check-ins collected from tweets form a subset of all Foursquare records, and we need to ensure that our data set has a distribution similar to that of the original Foursquare data. To validate the applicability of our extension method, we plot the CCDF of check-ins per POI, as Noulas et al. did in~\cite{noulas2011empirical}, in Fig.~\ref{fig:long_tailed}.
It shows that the extended data set not only preserves the long-tailed characteristic but also narrows the gap between the original Foursquare data and the subset extracted from tweets. \begin{figure}[!htbp] \centering \includegraphics[width=1\columnwidth]{fig2/12_month.pdf} \caption{Percentages of check-ins from the top 10 most visited POI categories.}~\label{fig:12_month} \vspace{-1em} \end{figure} 2) The tweet-collection periods differ between the two data sets: in order to obtain datasets of comparable size, we collect tweets in NYC for a one-month period and in ROC for a one-year period. Tweets from NYC were posted during June 2012, while tweets from ROC were posted from July 2012 to June 2013. In Fig.~\ref{fig:12_month}, we plot the percentage of check-ins from the top 10 most frequent check-in categories. Note that we eliminate duplicate categories; for example, we omit ``Restaurant'' (ranked 3rd) since ``American Restaurant'' is already in first place. The proportions of check-ins from most categories remain stable throughout the year, which implies that the distribution in one month approximately represents the remaining months of the year. One exception is the University category, which shows a decrease from May to August, coinciding with universities' summer break. \section{Lifestyle difference at city level} \begin{figure}[!htbp] \centering \includegraphics[width=0.9\columnwidth]{fig2/return_range.pdf} \caption{Box plot of the visiting frequencies of Bar, Church, Drugstore, Gas Station, Grocery, Park, Restaurant and Supermarket, aggregated over both cities.}~\label{fig:return_range} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=0.9\columnwidth]{fig2/return_range_compare-crop-2.pdf} \caption{Comparison between the visiting frequencies of 9 POI categories in big cities and small cities.
The yellow boxes are the frequencies for NYC and the green ones are for ROC.}~\label{fig:return_range_compare} \end{figure} \subsection{Visiting Frequency of POIs} The visiting frequency of a location is defined as the number of visits (check-ins) divided by the number of unique visitors; in other words, visiting frequency is the average number of visits per visitor to a location. This metric measures the degree of relevance of a POI to people's daily life: the higher the visiting frequency, the more relevant the POI is to a person's lifestyle. ``Home'', for example, is one of the most important locations in individuals' lives and has a very high visiting frequency, since most check-ins at a home are performed repeatedly by the same family members or friends. On the contrary, a public location such as a bar usually has a lower visiting frequency. Regarding visiting frequency, we have two interesting observations: \begin{itemize} \item Each POI category has a specific range of visiting frequency, which is clearly indicative of the differing functions of POIs in people's daily life. \item Some categories show different ranges of visiting frequency in cities of different sizes. This helps us examine lifestyle differences at the city level. \end{itemize} \subsubsection{Visiting frequency range of POI categories} We plot the visiting frequencies of several popular POI categories as a box plot in Figure~\ref{fig:return_range}. The plot shows that categories highly related to daily life are visited repeatedly: Church, Grocery, Drugstore and Supermarket all have high visiting frequencies. The median visiting frequency across these categories is approximately 4, though some venues show much higher values; for example, the visiting frequency of some churches reaches 12, indicating a high prevalence of Church in some people's lives.
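The visiting frequency metric defined above is straightforward to compute; the sketch below assumes check-ins are available as hypothetical pairs of user id and POI id.

```python
from collections import defaultdict

def visiting_frequency(checkins):
    """Visiting frequency per POI: total check-ins / number of unique visitors.

    checkins: iterable of (user_id, poi_id) pairs.
    """
    counts = defaultdict(int)    # total check-ins per POI
    visitors = defaultdict(set)  # distinct visitors per POI
    for user_id, poi_id in checkins:
        counts[poi_id] += 1
        visitors[poi_id].add(user_id)
    return {poi: counts[poi] / len(visitors[poi]) for poi in counts}

# A home visited 4 times by one user has frequency 4.0; a bar visited
# once each by three users has frequency 1.0.
freq = visiting_frequency(
    [("a", "home")] * 4 + [("a", "bar"), ("b", "bar"), ("c", "bar")]
)
```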
As we expected, the highest visiting frequencies appear for Home (with a median of approximately 20 for both cities) and Office (with a median of approximately 10 for both cities), as shown in Figure~\ref{fig:return_range_compare}. These two are the most frequently visited locations for most people. The visiting frequencies of Bar and Restaurant are much lower, with medians around 2. \subsubsection{Difference in visiting frequencies between NYC and ROC} In this section, we compare the visiting frequencies of categories in big cities and small cities (see Figure~\ref{fig:return_range_compare}). Interestingly, the visiting frequencies of some categories, such as Restaurant, Bar, Supermarket and Drugstore, are larger in small cities than in big cities. This may imply a higher regularity of life in smaller cities; in other words, people in smaller cities are more localized, with stricter routines. In big cities, people have more options when eating out (Restaurant), having fun (Bar) and purchasing daily necessities (Supermarket and Drugstore), so these places are generally visited less frequently in larger cities. For other categories, such as Home, Office and Church, the visiting frequencies in the two types of cities are roughly the same. This makes sense, because the routines of working, returning home and religious life should be similar within the same cultural atmosphere. \subsection{Basic mobility patterns in big cities and small cities} It is interesting to study the fluctuation of residents' activity over time in terms of occurrence at POIs. We plot the 10 most popular POI categories on weekdays and weekends separately for ROC and NYC. On weekends, the mobility patterns of the two cities are similar (Figure~\ref{fig:roc_subim2} and Figure~\ref{fig:subim2}): the total check-in volume climbs rapidly to a high level around 10am in ROC and 12pm in NYC.
The activity levels then remain constant until 9pm, when a peak of check-ins appears in both cities, indicating a sudden increase of mobility on weekend nights. After the peak, the activity level in ROC drops quickly, while it remains high through 2am in NYC. The 3 most frequently visited POI categories on weekends for both cities are Bar, American Restaurant, and Home. Obvious divergence is present between the weekday mobility patterns of the two cities (Figure~\ref{fig:roc_subim1} and Figure~\ref{fig:subim1}). In big cities, there are three peaks during the day, appearing around 8am, 1pm and 9pm. A similar pattern also appears in London according to~\cite{noulas2011empirical}, indicating roughly the same mobility pattern in London and NYC. Among the three peaks, the highest is at night, which implies that night is the most active period in large cities. In contrast, there is only one peak during the day, at 10am, in smaller cities, and the check-in volume drops significantly after that. This reveals that during weekday nights, people in small cities are not as active as those in large cities. On weekday nights, people in small cities prefer visiting Home, American Restaurant and Cafeteria, while in large cities Bar is much more popular at night, suggesting that people in large cities are more prone to indulge in recreation even on weekday nights. \begin{figure}[!htbp] \begin{subfigure}{0.5\textwidth} \includegraphics[width=0.9\linewidth, height=6cm]{fig2/roc_weekday.pdf} \caption{Rochester weekdays} \label{fig:roc_subim1} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[width=0.9\linewidth, height=6cm]{fig2/roc_weekend.pdf} \caption{Rochester weekends} \label{fig:roc_subim2} \end{subfigure} \caption{Stacked plot of the 10 most popular categories over weekdays and weekends in Rochester. Categories are listed in order of increasing probability from the top down.
The width of each band indicates the percentage of check-ins from a given POI category at a given time of day.} \label{fig:roc_stacked} \end{figure} \begin{figure}[!htbp] \begin{subfigure}{0.5\textwidth} \includegraphics[width=0.9\linewidth, height=6cm]{fig2/nyc_weekday.pdf} \caption{New York City weekdays} \label{fig:subim1} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[width=0.9\linewidth, height=6cm]{fig2/nyc_weekend.pdf} \caption{New York City weekends} \label{fig:subim2} \end{subfigure} \caption{Stacked plot of the 10 most popular categories over weekdays and weekends in New York City. Categories are listed in order of increasing probability from the top down. The width of each band indicates the percentage of check-ins from a given POI category at a given time of day.} \label{fig:nyc_stacked} \end{figure} \section{Mining Lifestyles with Matrix and Tensor Decomposition} \subsection{Matrix Decomposition} The activities of a user, $a$, can be described with an $M$-dimensional vector. For temporal patterns we set $M$ to 24, and the values in the vector are the numbers of check-ins the user performed in each hour. When we examine spatial patterns latent in individuals' activities, $M$ is set equal to the number of POI categories, and each element indicates the number of check-ins the person performed in a single POI category. We refer to the vector $a$ as an \textit{``activity vector''} of a user. We assume that a person's activities are determined by the lifestyle(s) that person follows. Formally, $$a = w \times L$$ where $L$ is a $k$ by $M$ matrix recording $k$ latent lifestyles, and $w$ is a coefficient vector of $k$ dimensions indicating the user's preference for each lifestyle. To uncover and compare lifestyles that are commonly followed in different cities, we first assemble the activity vectors of residents into a single matrix for each city.
We define $$A_{roc} = (a_{1}, a_{2}, ..., a_{N_{roc}})^T$$ where $a_{i}$ indicates the activity vector of a resident, and $N_{roc}$ is the number of samples we collected from the Greater Rochester area. Similarly, $$A_{nyc} = (a_{1}, a_{2}, ..., a_{N_{nyc}})^T$$ where $N_{nyc}$ is the number of samples we collected from the Greater New York area. Second, we concatenate $A_{roc}$ and $A_{nyc}$ to obtain a complete matrix $$A = (A_{roc}, A_{nyc})^T$$ where $A$ is a $(N_{roc} + N_{nyc})$ by $M$ matrix. Third, we decompose $A$ into two matrices $W$ and $L$: $W$ is a $(N_{roc} + N_{nyc})$ by $k$ coefficient matrix, while $L$ is the lifestyle matrix explained above. Since non-negative matrix factorization (NMF) usually leads to interpretable results~\cite{lee1999learning}, we apply it to perform the decomposition. Formally, we solve the following optimization problem: $$ \underset{W,L}{\min} \frac{1}{2}\|A- WL\|^2_F \quad s.t. \quad L\geq 0, W\geq 0 $$ where $A\in\mathbb{R}^{(N_{roc} + N_{nyc})\times M}$, $W\in\mathbb{R}^{(N_{roc} + N_{nyc})\times k}$, $L\in\mathbb{R}^{k\times M}$. $\|X\|_F =(\sum_{i,j}|X_{ij}|^2)^{\frac{1}{2}}$ is the Frobenius norm, and $L \geq 0$ (or $W \geq 0$) requires that all components of $L$ (or $W$) be nonnegative. $L$ uncovers the lifestyles that people follow, while $W$ provides information about individuals' preferences across these lifestyles. \begin{figure}[!htbp] \centering \includegraphics[width=0.9\columnwidth]{fig2/weekday_3.pdf} \caption{Active time ranges of night owls, early birds and intermediates over weekdays.}~\label{fig:time_weekday} \vspace{-1.5em} \end{figure} After decomposition, we split $W$ into smaller matrices, each of which records the lifestyle preferences of a particular group of people. At the city level, $W$ is split into two smaller matrices, $W = (W_{roc}, W_{nyc})^T$, where $W_{roc}\in\mathbb{R}^{N_{roc}\times k}$ and $W_{nyc}\in\mathbb{R}^{N_{nyc}\times k}$.
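As an illustration, the NMF problem above can be solved with the classic multiplicative updates associated with~\cite{lee1999learning}. The sketch below is a minimal NumPy implementation with an invented toy matrix, not the exact solver used in our experiments:

```python
import numpy as np

def nmf(A, k, n_iter=500, eps=1e-9, seed=0):
    """Non-negative matrix factorization A ~= W @ L via multiplicative updates.

    A is the (users x features) activity matrix, W the (users x k) preference
    matrix, and L the (k x features) lifestyle matrix; eps avoids division by zero.
    """
    rng = np.random.default_rng(seed)
    n, m = A.shape
    W = rng.random((n, k))
    L = rng.random((k, m))
    for _ in range(n_iter):
        # Lee-Seung updates keep W and L nonnegative at every step.
        L *= (W.T @ A) / (W.T @ W @ L + eps)
        W *= (A @ L.T) / (W @ L @ L.T + eps)
    return W, L

# Toy check: a rank-2 non-negative matrix (row 3 = 2 * row 1) is fit closely.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0],
              [2.0, 0.0, 4.0]])
W, L = nmf(A, k=2)
err = np.linalg.norm(A - W @ L)  # small residual
```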
At a finer granularity, $W$ consists of four even smaller matrices: $W = (W_{roc \ male}, W_{roc \ female}, W_{nyc \ male}, W_{nyc \ female})^T$. For a particular group of people, i.e. a component matrix, the degree of preference for a lifestyle is defined as the average of the coefficients of the people in the group for that lifestyle. For example, the preference of New York City residents for the $i$-th lifestyle is calculated by averaging the $i$-th column of matrix $W_{nyc}$. In the following sections, we report the temporal and spatial lifestyles found in people's activities and compare the preferences for these lifestyles in the two cities. \subsection{Third-Order Tensor Decomposition} User activities may be analyzed across multiple dimensions simultaneously using higher-order tensors. Tensors are a natural way to aggregate data across multiple factors. Vectors and matrices are special cases of tensors: for each vector $v$ of dimensionality $D$, $v \in \mathbb{R}^{D}$, and for each matrix $M$ of dimensionality $D_1$ by $D_2$, $M \in \mathbb{R}^{D_1 \times D_2}$. Tensors generalize this to data structures of arbitrary order, where vectors are of order 1 and matrices of order 2. A tensor $T$ of order $d$ may be concisely described as $T \in \mathbb{R}^{D_1 \times D_2 \times \ldots \times D_d}$. To learn temporal-spatial patterns of human activities, for example, we can aggregate the data into a third-order tensor. In such a tensor, a person's activities are recorded as a matrix whose dimensions are POI categories and hours of a day. Decomposing the tensor produces multidimensional knowledge of lifestyles. In Fig.~\ref{fig:tensor-viz}, we illustrate the tensor decomposition process. The most commonly used technique for tensor decomposition is CANDECOMP/PARAFAC (CP) decomposition~\cite{kolda2009tensor}.
This algorithm decomposes a tensor of order $d$ into $d$ separate matrices, each of dimensionality $k \times D_j$, where $k$ is the number of components selected a priori and $D_j$ is the dimension of the tensor's $j^{th}$ order. \begin{figure}[!htbp] \centering \includegraphics[width=1.0\columnwidth]{fig2/tensor.pdf} \caption{Visualization of tensor cube $T$, decomposed into component matrices $W$, $L_M$, and $L_P$.}~\label{fig:tensor-viz} \end{figure} The formal optimization problem for this decomposition is: $$ \underset{W,L_M,L_P}{\min} \|T - W (L_M \odot L_P)^\top \|$$ In this equation, $\odot$ represents the Khatri-Rao product. The Khatri-Rao product may be considered a column-wise Kronecker product $\otimes$ between two matrices with equal numbers of columns, $A = [a_1, a_2, a_3]$ and $B = [b_1, b_2, b_3]$, where $A \odot B = [a_1 \otimes b_1, a_2 \otimes b_2, a_3 \otimes b_3]$. While $A$ and $B$ both have 3 columns here, this generalizes to any number of columns. Assuming that $A \in \mathbb{R}^{M \times K}$ and $B \in \mathbb{R}^{N \times K}$, the Khatri-Rao product matrix will be of dimensionality $MN \times K$. To solve the optimization problem for CP decomposition, we use the alternating least-squares (ALS) algorithm, originally proposed in~\cite{harshman1970foundations, carroll1970analysis}. The specific implementation of CP-ALS used is provided in the \textit{scikit-tensor toolkit}.\footnote{https://github.com/mnick/scikit-tensor} At a high level, ALS uses $W$ and $L_M$ to estimate $L_P$, then $L_M$ and $L_P$ to estimate $W$, and so on, improving the estimate of one matrix in each iteration. Although ALS monotonically decreases the objective, the algorithm is subject to getting trapped in local minima; thus ALS is not guaranteed to find an optimal solution, and results may depend heavily on initialization.
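A minimal NumPy sketch of the Khatri-Rao product and the CP-ALS loop described above is given below. It is illustrative only (factor updates via pseudo-inverses, no normalization or convergence check), not the scikit-tensor code actually used; the function names are our own.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x K) and B (J x K) -> (I*J) x K."""
    I, K = A.shape
    J, K2 = B.shape
    assert K == K2, "factor matrices must share the number of columns"
    return np.einsum("ik,jk->ijk", A, B).reshape(I * J, K)

def cp_als(T, k, n_iter=100, seed=0):
    """Rank-k CP decomposition of a 3rd-order tensor T (N x M x P) via ALS.

    Returns factor matrices W (N x k), LM (M x k), LP (P x k) such that
    T[i, j, l] is approximated by sum_r W[i, r] * LM[j, r] * LP[l, r].
    """
    rng = np.random.default_rng(seed)
    N, M, P = T.shape
    W, LM, LP = rng.random((N, k)), rng.random((M, k)), rng.random((P, k))
    # Mode-n unfoldings of T (C-order flattening of the remaining modes).
    T1 = T.reshape(N, M * P)
    T2 = np.moveaxis(T, 1, 0).reshape(M, N * P)
    T3 = np.moveaxis(T, 2, 0).reshape(P, N * M)
    for _ in range(n_iter):
        # Each step is an exact least-squares solve with the others fixed.
        W = T1 @ np.linalg.pinv(khatri_rao(LM, LP).T)
        LM = T2 @ np.linalg.pinv(khatri_rao(W, LP).T)
        LP = T3 @ np.linalg.pinv(khatri_rao(W, LM).T)
    return W, LM, LP
```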
In our experiments, both singular-vector and random initializations converged to similar decompositions with similar error rates when using a termination condition of $10^{-5}$ error improvement between iterations. Similar to our matrix decomposition methodology, we assume that an individual's check-in activity may be decomposed into a weighted combination of lifestyle factors stored in a matrix: $$t = w\ (L_M \odot L_P)^\top$$ where $L_M \in\mathbb{R}^{k \times M}$ and $L_P \in \mathbb{R}^{k \times P}$ each record $k$ latent lifestyles, and where again $w \in \mathbb{R}^{k}$ is a coefficient vector for a single user. $L_M$ reveals the characterization of each lifestyle component along the first dimension, and $L_P$ along the second dimension. As with our matrix decomposition framework, we consider the weight matrix $W$ as a concatenation of four smaller matrices according to city and gender. However, in the work presented here, we found no significant differences in mean component weights across these demographics. The full tensors we consider, $T_i=\{t_1, t_2, \ldots, t_N\}$, stack the lifestyle matrices $t$ across all users. In this work, we present an analysis of two third-order tensors $T_1$ and $T_2$, such that $T_i \in \mathbb{R}^{N \times M \times P}$. $N$ indexes check-in counts by user id and $P$ indexes by category. Only the 100 categories with the highest numbers of check-ins are used, so $P=100$ for both $T_1$ and $T_2$. $T_1$ indexes by time of day as well, so $M=24$; $T_2$ indexes instead by day of the week, so $M=7$. The first tensor, $T_1\in\mathbb{R}^{N \times 24 \times 100}$, allows us to examine joint spatial-temporal lifestyles, indicative of users' locational behavior at various times of the day. Trivially, we might find components of users checking in at bars and pubs, with greater weight assigned to night hours than to the morning or afternoon.
The second tensor, $T_2\in\mathbb{R}^{N \times 7 \times 100}$, is conducive to locational lifestyles with distinct trends across the work week and through the weekend. For example, we might see lifestyles of individuals visiting restaurants and entertainment venues later in the week, with less weight assigned to Monday, Tuesday, and Wednesday. \section{Temporal Aspects of Lifestyle} \begin{figure}[!htbp] \centering \includegraphics[width=0.9\columnwidth]{fig2/weekend_3.pdf} \caption{Active time ranges of night owls, early birds and intermediates over weekends.}~\label{fig:time_weekend} \vspace{-1.5em} \end{figure} Analogous to ETs, MTs, and NTs in the circadian typology literature, we classify people's work and rest habits into three categories: \textit{night owls}, people who tend to stay up until late at night; \textit{early birds}, people who usually get up early in the morning and go to bed early in the evening; and \textit{intermediates}, people whose schedules fall between night owls and early birds~\cite{horne1975self}. Interestingly but not surprisingly, our approach provides support for these three common temporal lifestyles. Moreover, we are able to provide a precise description of the activity level over the time of day for each lifestyle. We study weekdays and weekends separately to gain a better understanding of people's lives. Let $A_{weekday}$ be a $(N_{roc} + N_{nyc})$ by $M$ matrix, where $M$ equals 24. A component $a_{ij}$ of the matrix denotes the $i$-th user's total number of check-ins during the $j$-th hour of weekdays. Similarly, $A_{weekend}$, also a $(N_{roc} + N_{nyc})$ by $M$ matrix, records the activities of users on weekends. We set $k$ to 3 to align with the number of predefined categories, and then apply matrix decomposition to $A_{weekday}$ and $A_{weekend}$, respectively. The results are $L_{weekday}$ and $W_{weekday}$ for $A_{weekday}$, and $L_{weekend}$ and $W_{weekend}$ for $A_{weekend}$.
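The construction of $A_{weekday}$ can be sketched as follows, assuming check-ins are available as hypothetical pairs of user id and timestamp (the function and variable names are illustrative):

```python
from datetime import datetime

def hourly_activity_matrix(checkins, users):
    """Build the (num_users x 24) weekday activity matrix described above.

    checkins: iterable of (user_id, timestamp) pairs with datetime timestamps;
    users: list of user ids fixing the row order.  Entry (i, j) is user i's
    total number of weekday check-ins during hour j.
    """
    row = {u: i for i, u in enumerate(users)}
    A = [[0] * 24 for _ in users]
    for user_id, ts in checkins:
        if ts.weekday() < 5:  # Monday=0 .. Friday=4 are weekdays
            A[row[user_id]][ts.hour] += 1
    return A

# A weekday check-in at 9 am is counted; a Saturday check-in is not.
A = hourly_activity_matrix(
    [("u", datetime(2012, 6, 4, 9)), ("u", datetime(2012, 6, 2, 9))], ["u"]
)
```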
We first plot the result matrices $L_{weekday}$ and $L_{weekend}$ in Fig.~\ref{fig:time_weekday} and Fig.~\ref{fig:time_weekend}. \begin{figure}[!htbp] \centering \includegraphics[width=0.9\columnwidth]{fig2/temporal_weekend.pdf} \caption{Average weight on night owls, early birds and intermediates of male and female residents of a small and big city over weekends.}~\label{fig:habit_weekend} \end{figure} \textit{Early birds}: On weekdays, early birds' days start around 7 am, and they are most active around noon. After that, their activity decreases, gradually vanishing around 10 pm. The distribution of activity over time on weekends is similar for early birds, except that the rise and fall of activity is faster, leading to a sharper peak around 12 pm. \textit{Night owls}: For night owls, we observe two active periods in a day. Their activity starts from 10 am on both weekdays and weekends. A first small peak appears at 2 pm, but it is not comparable to their activity level at night. After the relatively inactive daytime, night owls' activity rockets from 10 pm and reaches its maximum late at night (1 am on weekdays and 2 am on weekends), vanishing in the early morning (6 am). \textit{Intermediates}: People who are neither early birds nor night owls usually start their day in the late morning (10 am). On weekdays, their activity level increases gradually in the afternoon, reaches a peak around 10 pm, and rapidly decreases to zero around 2 am. On weekends, they are more active during the afternoon than on weekdays: instead of a gradual increase, the activity level grows quickly after 11 am and remains high until the nightly peak (11 pm). The results extracted from the data agree with the time ranges defined by traditional studies. We compare our results with those from previous human circadian rhythm research~\cite{shahid2012morningness}.
In Table~\ref{tab:table3} we list the time ranges of get-up time, sleep time and most active time for the corresponding types (morning-type, evening-type, neither-type) in the morningness-eveningness questionnaire (MEQ)~\cite{horne1976self}. To compare with these time ranges, we list the percentage of activity during the same time range for each lifestyle decomposed from our data using NMF. We define these three time ranges as follows: starting from the early morning (5 am), the time range of the first $\sim 15\%$ of activity is defined as ``get up'', the next $\sim 70\%$ as ``most active'', and the final $\sim 15\%$ as ``go to bed''. These percentages are not exact, and a small amount of activity is present between the ``go to bed'' and ``get up'' time ranges. For most time ranges, our results generally agree with those from previous work, e.g. the ``get up'' time range, and the ``go to bed'' time range for the early bird and intermediate lifestyles. All the ``most active'' ranges in our findings are later than the previous assessments, and the ``go to bed'' range for night owls is much later than that of the evening-type in previous work. Our explanation for these differences is twofold. First, we believe that, as a general trend, people's activities have shifted substantially into the night in modern times compared with the year in which the seminal previous work was done (1976)~\cite{horne1975self}. Second, individuals' behaviors in our model are modeled by a weighted combination of ``lifestyles'', whereas in the MEQ paradigm an individual is assigned to a single, discrete ``type''. As a consequence, our ``lifestyle'' patterns should capture the more distinctive work-rest activities a single individual might follow as a subset of all his or her behaviors, whereas each MEQ ``type'' should capture the aggregate work-rest patterns of all of an individual's behaviors.
For example, an individual in our model might be a ``night owl'' on weekends and an ``early bird'' on weekdays, while in the MEQ model this individual would be classified as either a morning-type or an evening-type. \begin{table} \center \begin{tabular}{|l||c|c|} \hline & MEQ & Our results\\ \hline Early Bird & \multicolumn{2}{c|}{} \\ \hline Get up& 5:00 am - 7:45 am& 6:00 am - 8:00 am\\ Most active& 5:00 am - 10:00 am& 7:00 am - 2:00 pm\\ Go to bed& 8:00 pm - 10:15 pm& 8:00 pm - 10:00 pm\\ \hline Intermediate & \multicolumn{2}{c|}{} \\ \hline Get up&7:45 am - 9:45 am & 8:00 am - 10:00 am\\ Most active& 10:00 am - 5:00 pm& 2:00 pm - 8:00 pm\\ Go to bed& 10:00 pm - 12:30 am& 10:00 pm - 12:00 am\\ \hline Night Owl& \multicolumn{2}{c|}{} \\ \hline Get up& 9:45 am - 12 pm & 10:00 am - 12:00 pm\\ Most active& 5:00 pm - 5:00 am& 9:00 pm - 1:00 am\\ Go to bed& 12:00 am - 3:00 am& 3:00 am - 6:00 am\\ \hline \end{tabular} \caption{Comparison of the time ranges from the traditional MEQ assessment with those extracted by our method.} ~\label{tab:table3} \end{table} $W_{weekday}$ and $W_{weekend}$ indicate each user's preference for the three lifestyles on weekdays and weekends, respectively. For each matrix, we first split it into 4 smaller matrices according to users' city (ROC or NYC) and gender. Second, we calculate the average preference of each group for each lifestyle. We plot the results in Fig.~\ref{fig:habit_weekend} and Fig.~\ref{fig:habit_weekday}. \begin{figure}[!htbp] \centering \includegraphics[width=0.9\columnwidth]{fig2/temporal_weekday.pdf} \caption{Average weight on night owls, early birds and intermediates of male and female residents of a small and big city over weekdays.}~\label{fig:habit_weekday} \end{figure} Generally speaking, the average preference for the night owl lifestyle is significantly higher on weekends than on weekdays. Correspondingly, the average weights of the early bird and intermediate styles are significantly lower on weekends.
This indicates that people in both cities are more willing to stay active late on weekends, while getting up early on weekdays. People living in big cities usually have a higher preference for the night owl type, for both males and females, while people in small cities tend to have higher weights on the early bird type. This observation suggests that big cities are more active than small cities at night. We did not observe a significant difference between the two genders in their preference for temporal lifestyles in either city. This agrees with previous work~\cite{merritt2003gender}, in which the authors verify that daily activity levels are generally not biased by gender. \subsection{Spatial Aspects of Lifestyles} Individuals' preferences towards specific locations are another important indicator of their lifestyles. These lifestyles can be described as combinations of several specific POI categories. For example, we observed the co-occurrence of Home, Grocery Store and Gas Station in many people's visiting records; the people exhibiting this pattern can be regarded as ``home-oriented'', since the places they visit are closely related to daily life. Another commonly observed combination is Bar, Pub and Music Venue; people following this pattern clearly tend to seek excitement (and alcohol) in their daily life. These movement patterns are conducive to understanding individuals' lifestyle preferences. \begin{table*} \centering \begin{tabular}{ | l || c | c | c | c | c | } \hline Hidden Patterns & 1st category & 2nd category & 3rd category & 4th category & 5th category\\ \hline 1, College & Residence Hall & Co-working Space & Lab & Rec Center & Wine Bar\\ \hline 2, Restaurant & American Restaurant & Grocery Store & Supermarket & Fast Food & Diner\\ \hline 3, Bar \& Pub & Bar & Music Venue & Nightclub & Lounge & Rock Club\\ \hline 4, Office & Office & Co-working Space & Building & Conf. Room & Bar\\ \hline 5, Home \& Grocery & Home (private) & Supermarket & Grocery Store & Drugstore & Church\\ \hline 6, Entertainment & Arts \& Entertainment & Baseball Stadium & Bar & Burger Joint & Concert Hall\\ \hline 7, Sports & Gym & Yoga Studio & Athletics \& Sports & Spa & Fitness Center\\ \hline 8, Park \& Outdoor & Park & Neighborhood & Scenic Lookout & Plaza & Beach\\ \hline 9, Hotel \& Bar & Hotel & Lounge & Cocktail Bar & Roof Deck & Airport\\ \hline 10, Commute & Train Station & Subway & Train & Platform & Bus Station\\ \hline \end{tabular} \caption{Ten hidden patterns with their assigned names and the top 5 weighted categories of POIs in each pattern.}~\label{tab:table1} \vspace{-1em} \end{table*} We also employ the NMF method to detect these hidden patterns. Instead of a temporal activity matrix, in this case we decompose a spatial activity matrix $A$. $A$ is a $(N_{roc} + N_{nyc})$ by $M$ matrix, where $M$ is the number of categories of POIs. The decomposition generates two result matrices, $L$ and $W$: $L$ encodes the spatial lifestyles of the people in the two cities, while $W$ contains each resident's preference for these spatial lifestyles. $k$ is empirically set to 10 to achieve a good tradeoff between granularity and interpretability. We report the lifestyles extracted from the data in Table~\ref{tab:table1}. For each pattern we list the top 5 weighted categories of POIs and assign a name to the pattern. There is a clear connection between the POI categories within a hidden pattern. Take pattern one as an example: the top three weighted categories are College Residence Hall, Co-working Space and College Lab, a common mobility pattern of college students. Pattern seven describes people who like to exercise, with Gym, Yoga Studio and Athletics \& Sports as the top three categories. For pattern ten, the top three weighted categories are Train Station, Subway and Train.
This is a typical movement pattern of people who commute a lot. It is natural to see one's behaviors as a combination of several lifestyles. For example, a college student may follow the first lifestyle in Table~\ref{tab:table1} (College) as well as the third (Bar \& Pub) and the seventh (Sports), with different weights. By ``weight'' we mean the importance of a certain lifestyle in one's daily life. Rows of $W$, denoted $w_{i}$, are such weight vectors: $w_{i}$ is a 10-dimensional vector quantifying the preferences of the $i$-th user for the 10 spatial lifestyles. In order to gain a group-level understanding of lifestyles, we apply a clustering method to the $w_{i}$s. The center of each cluster denotes the mean lifestyle combination for a group of individuals. Moreover, by analyzing the composition of a group we are able to characterize the lifestyle preferences of residents of cities of different sizes. We set the number of clusters to 5 empirically. \begin{figure}[!htbp] \centering \includegraphics[width=1.0\columnwidth]{fig2/lifestyle1.pdf} \caption{Components and corresponding percentages of lifestyle group 1 and the people in this group.}~\label{fig:lifestyle1} \end{figure} We plot the components of two of the groups and the percentages of males and females from both cities. Note that all ratios are normalized by the number of users in each city. For the people in the first group (Fig.~\ref{fig:lifestyle1}), home is the absolute center of their life. Additionally, this group is comprised more of people from small cities (56\%) than of those from large cities (44\%). People in the second group (Fig.~\ref{fig:lifestyle2}) tend to visit office and entertainment venues more often. People in large cities (73\%) prefer this lifestyle more than those in small cities (27\%). These results suggest that for people in small cities home is a prominent location in life, while in large cities people tend to spend more time at the office.
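The factorization and grouping steps described above can be sketched in a few lines. The following is a minimal, self-contained illustration (Lee--Seung multiplicative updates on a tiny synthetic user-by-category matrix; the matrices, $k$, and the clustering method used in our analysis are of course larger, and the function names here are ours, for illustration only):

```python
import random

def matmul(A, B):
    # naive matrix product for small list-of-lists matrices
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(A, k, iters=300, seed=0, eps=1e-9):
    """Nonnegative factorization A ~ W H via Lee-Seung multiplicative updates.
    Rows of A are users, columns are POI categories (or hours of the day);
    rows of H are the extracted 'lifestyles', rows of W the per-user weights."""
    rng = random.Random(seed)
    n, m = len(A), len(A[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        # H <- H * (W^T A) / (W^T W H)
        Wt = transpose(W)
        num, den = matmul(Wt, A), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)] for i in range(k)]
        # W <- W * (A H^T) / (W H H^T)
        Ht = transpose(H)
        num, den = matmul(A, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)] for i in range(n)]
    return W, H

def frobenius_error(A, W, H):
    WH = matmul(W, H)
    return sum((a - b) ** 2 for ra, rb in zip(A, WH) for a, b in zip(ra, rb))

# toy activity matrix: users 0-1 visit mostly the first two categories,
# users 2-3 mostly the last two
A = [[5.0, 4.0, 0.0, 0.0],
     [4.0, 5.0, 1.0, 0.0],
     [0.0, 0.0, 5.0, 4.0],
     [1.0, 0.0, 4.0, 5.0]]
W, H = nmf(A, k=2)
# each user's dominant lifestyle = argmax over the weight vector w_i
groups = [max(range(2), key=lambda c: w[c]) for w in W]
```

In this toy example the two extracted components separate the two blocks of users, which is the part-based behavior that makes the NMF weights usable as clustering features.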
\begin{figure}[!htbp] \centering \includegraphics[width=1.0\columnwidth]{fig2/lifestyle2.pdf} \caption{Components and corresponding percentages of lifestyle group 2 and the people in this group.}~\label{fig:lifestyle2} \end{figure} \section{Composite Aspects of Lifestyles} Some lifestyle patterns may be seen as a combination of individuals' daily, weekly, and spatial habits. In this section, we consider the analysis of the tensors $T_1$ and $T_2$. Each tensor $T_i\in\mathbb{R}^{N \times M \times P}$ may be factorized into any number of components $k \in [2, \min\{N,M,P\}]$. Recall that for both $T_1$ and $T_2$, $N$ indexes check-in counts by user id and $P=100$ indexes by category. For $T_1$, $M=24$ indexes by time of day, and for $T_2$, $M=7$ indexes by day of the week. A significant trade-off exists in choosing the tuning parameter $k$: with a smaller $k$, fewer lifestyle patterns may be identified, and some components may be mixtures of multiple theoretically distinct lifestyle patterns. With a higher $k$, we run into issues of redundancy, where multiple highly similar lifestyle patterns are extracted, and low interpretability, where some extracted patterns include very disparate behaviors. For both $T_1$ and $T_2$, we tested a wide range of possible $k$ values. Individuals with fewer check-ins than some threshold $h$ were pruned to remove outlier noise. In our experiments, we found very little difference between most components obtained with $h=5$ and with $h=30$. However, with $h=5$, some patterns emerge more clearly, and the components are more distinct overall. \begin{figure}[ht!] \includegraphics[width=1.0\columnwidth]{fig2/A1_plot.pdf} \caption{Component weights $L_M$ by hour for tensor $T_1$; 5 out of $k=12$ total components are shown. Component labels are added a posteriori, and weights shown are normalized by min-max normalization.
Each curve in the upper part represents the trend of a POI category over the 24 hours of a day.} \label{fig:T1-plot} \end{figure} \begin{figure}[ht!] \includegraphics[width=1.0\columnwidth]{fig2/A2_plot3} \caption{Component weights $L_M$ by day for tensor $T_2$ with $k=5$. Component labels are added a posteriori, and weights shown are normalized by min-max normalization. Each curve in the upper part represents the trend of a POI category over the 7 days of a week.} \label{fig:T2-plot} \end{figure} \subsection{Time-of-Day \& Location} $T_1$ offers the most interpretable results of third-order tensor decomposition, most clearly when $k=12$, as shown in Fig.~\ref{fig:T1-plot}. Across a wide range of values for $k$, we see a few distinct lifestyle patterns emerge. One component assigns the highest weight to the Arts \& Entertainment category and significantly lower weights to all others, with time-of-day activity beginning around 10am, peaking at 9pm, and tailing off in the hours following midnight. This matches the intuitive assumption that individuals usually visit these sorts of venues later in the day, primarily around evening and night times. The next component assigns the highest weight to the High School category and significantly lower weights to all others, with time-of-day activity peaking sharply at 7am, and some additional weight over the hours of 8am - 3pm. This range directly corresponds to the standard school day for high school students in the U.S. Although NYC check-ins are only collected for the month of June, the school year for New York public schools continues through June 26th. Two ``college student'' lifestyle patterns consistently emerge. The first assigns the most weight to College Residence Hall, with time-of-day activity gradually peaking around 6am and decreasing significantly from 8pm through 2am. The second has the highest weight on College Rec. Center, with time-of-day activity increasing gradually from 9am to 9pm, peaking from 10pm to 2am, and tailing off quickly thereafter.
Both of these lifestyles assign some weight to other college-related POIs, for example College Lab, Co-working Space, and College Cafeteria. The former seems to model ``early bird'' college students, while the latter models ``night owls''. Finally, we also see a Gym lifestyle emerge, where Gym is assigned a very high weight and all other categories are assigned low weights. This lifestyle pattern also gives high weight to most hours of the day, with a significant dip from the hours of 10pm through 5am. This also makes intuitive sense, since it is unlikely that many people go to the gym during these hours. \subsection{Day-of-Week \& Location} We find noteworthy patterns when decomposing $T_2$ into 5 components, shown in Fig.~\ref{fig:T2-plot}. We see one pattern with the highest weight assigned to the category Bar, high weights to Cafe, American Restaurant, Private Home, and Music Venue, and moderate weights to a number of similar categories such as Rock Club; this pattern assigns very little weight to Monday and Tuesday, and the highest weight to Sunday. Common sense tells us that people work harder during the first few weekdays and usually visit recreational venues such as these later in the week. It is not surprising that the highest weight is assigned to a weekend day, since this includes check-ins both from the night of that day and from activities past midnight of the night before. We also see a component with very high weight assigned to Arts \& Entertainment and low weight assigned to other categories; very low weight is assigned to Monday, no weight to Sunday, and relatively uniform weights across the other days of the week. Many entertainment venues are closed on Sunday, and this fits the common notion that people recreate less on Mondays. Two distinct ``college student'' patterns emerge with as few as 4 components, both with considerably higher weights assigned to weekends than weekdays.
It is plausible that college students, especially in NYC, go off-campus to engage in other lifestyle behaviors during the weekdays when they are not in class, and stay on campus to study during the weekends. The fifth and final component assigns the highest weight to the Office category, with weights an order of magnitude below for a few categories: American Restaurant, Pub, and Deli. This pattern has a less clear interpretation; one might speculate that individuals who go to the office on weekends develop a habit of checking into social media outlets during these irregular visits, but not during their weekday routine. \section{Conclusion and Future Work} In this paper, we extensively study the differences between the lifestyles of a big city (NYC) and a smaller city (ROC) using social media data. We extract work-rest habits and lifestyles from user activities. Instead of assigning people to qualitatively defined work-rest classes, we apply NMF techniques to discover latent patterns of human diurnal preference. The extracted latent patterns correspond well to the intuitively defined classes. Also using NMF, we find hidden features of human movement preference. We then group the residents of the two cities into lifestyle clusters based on the weights of the hidden features and analyze the differences between the two distinctive cities. Moreover, tensor decomposition techniques are applied to find composite life patterns. Clear and quantifiable differences are found between the lifestyles of large and small cities. Lifestyle is a broad, imprecise concept that covers a multitude of aspects of human behavior, and social media is a flawed representation of individuals' daily behaviors.
These challenges present a number of exciting avenues for future work: investigating what sorts of lifestyles can be categorized culturally, or by other sociological factors such as occupation and age group; establishing which behaviors are most characteristic of specific lifestyles; and exploring the relationship of individuals' social media activities to their daily behaviors. Regarding the last point, it may be that certain individuals' social media postings are strongly indicative of their behaviors, whereas other individuals' postings may be a biased sample of their activities. We would like to introduce more dimensions in future work, such as other demographic dimensions including income, age and race, as well as adding more cities to the investigation. Presently, we are collecting data from the San Francisco Bay Area so that we may compare lifestyle behaviors between inhabitants of east and west coast cities, and relate our findings to previous research~\cite{fincher1998cities}. We also plan to combine multiple social media data sources to gain a more comprehensive understanding of human behaviors in the big data era. \section{Acknowledgment} We would like to thank New York State for its support through the Goergen Institute for Data Science, as well as the Xerox Foundation. \balance{} \bibliographystyle{IEEEtran}
\section{Introduction and background} The properties of granular materials, in particular sand, are a constant source of fascination for children and adults alike, and are intrinsically related to the ability of such systems to exist in either solid or fluid states under very similar conditions. For the scientist this fascination may arise from the apparent contradiction between the rigidity of the individual grains and the fragility of the assembly as a whole. This means that, for example, small changes in the loading conditions (such as changing the inclination angle of the support) can lead to large-scale structural rearrangements (``avalanches'') or even to the complete fluidization of the material. A few years ago, Liu and Nagel~\cite{liuNATURE1998} suggested a ``phase diagram'' for this type of solid-liquid transition (``jamming''). At zero temperature the axes relate to the ways an unjamming transition can be triggered, either by increasing the external driving (e.g.\ the shear stress) or by decreasing the density of the material. The present study will probe the vicinity of this ``unjamming line'' using quasistatic and finite strain-rate simulations of a model granular system. If one applies a shear stress below a certain threshold (the ``yield-stress''), the material will respond as an elastic solid. Increasing the stress above the yield-stress, the particles will unjam and start to flow. This flow behavior is called ``plastic flow'', as the material will not revert to its original shape when the stress is removed. In the following we will assume that the system can flow at arbitrarily small strain-rates without showing flow localization. That this is possible is by no means guaranteed, as in some instances a coexistence of flowing and jammed states is observed~\cite{denninJPhysCondMatt2008}. We have not observed such persistent strain localization in the simulations presented here.
Plastic flow is observed in a large number of glassy materials that are a priori very different from the athermal granular systems close to point J (see below) studied in this paper. Nevertheless, all these materials display rather universal behavior, as illustrated already in early studies on the plastic flow of metallic glasses~\cite{argonACTA1979,argonACTA1983}. These studies have given indications that in the flowing phase the main plastic activity is spatially localized in so-called shear transformation zones~\cite{falklangerPRE1998,bouchbinderPRE2007a}. These zones are non-persistent, localized in space, and presumably consist of a few atoms that undergo the irreversible rearrangements responsible for the observed plastic flow. Recently, this plasticity has been further analyzed in simulations with a focus on the quasistatic dynamics at small strain rates, close to the flow arrest~\cite{malandroJCP1999,MaloneyPRE2006,lemaitreCaroliPRE2007,tanguyEPJE2006,tsamadosEPJE2008,tsamadosPRE2009}. With these studies it was possible to trace the origin of plastic activity back to the softening of a vibrational mode and the vanishing of the associated frequency~\cite{malandroJCP1999}. In real space, this softening is associated with the formation of distinct, localized zones where the plastic failure is nucleated~\cite{MaloneyPRE2006}. In turn, this can trigger the failure of nearby zones, such that avalanches of plastic activity form that may ``propagate'' through the entire system. It has been argued that the macroscopic extent of these avalanches is a signature of the quasistatic dynamics, which gives the system enough time to propagate the failure throughout the system. Beyond the quasistatic regime, i.e.\ farther away from the jammed state, the size of these events is expected to be finite.
Thus, one naturally finds an increasing length scale connected with the flow arrest upon reducing the stress towards the threshold value~\cite{bocquetPRL22009,PicardPRE2005,lemaitrePRL2009}. Without external drive, an (un)jamming transition can occur when the particle volume fraction decreases below a critical value, $\phi_c$. This special point, which is only present in systems with purely repulsive steric interactions, has been given the name ``point J''~\cite{ohern03,majmudarPRL2007}. At this point the average number of particle contacts jumps from a finite value $z_0$ to zero just below the transition. The value of $z_0$ is given by Maxwell's estimate for the rigidity transition~\cite{maxwell1864,calladine78} and signals the fact that at point J each particle has just enough contacts for a rigid/solid state to exist. This marginally rigid state is called ``isostatic''. Compressing the system above its isostatic state, a number of non-trivial scaling properties emerge~\cite{ohern03,durianPRL1995}. As the volume fraction is increased, additional contacts are generated according to $\delta\! z\sim\delta\!\phi^{1/2}$. The shear modulus scales as $G\sim p/\delta\! z$ and vanishes at the transition (unlike the bulk modulus)~\cite{ohern03}. This scaling is a consequence of the non-affine deformation response of the system~\cite{wyart05c}, with particles preferring to rotate around rather than press into each other~\cite{EllenbroekPRL2006}. Associated with the breakdown of rigidity at point J is the length scale $l^\star\sim \delta z^{-1}$~\cite{wyart05b,wyart05a}, which quantifies the size over which additional contacts stabilize the marginally rigid isostatic state. In this article we present results from quasistatic and small strain-rate flow simulations {of a two-dimensional system} in the vicinity of point J.
Together with the linear elastic shear modulus, the yield-stress $\sigma_y$ also vanishes at point J~\cite{peyneauPRE2008,olssonPRL2007,hatanoJPSJ2008,heussingerPRL2009,otsukiPRE2009,xuPRE2006}. Thus, point J is connected with a transition from plastic-flow behavior ($\phi>\phi_c$, $\sigma_y>0$) to normal fluid flow ($\phi<\phi_c$, $\sigma_y=0$), with either Newtonian~\cite{olssonPRL2007} or Bagnold rheology~\cite{hatanoJPSJ2008,otsukiPRE2009} at small strain-rates. In consequence, both (un)jamming mechanisms described above are present at the same time: the flow arrest, as experienced by lowering the stress towards the threshold, is combined with the vanishing of the threshold itself. In this study we want to address two questions: To what extent do the general plastic-flow properties carry over to this situation of small or, indeed, vanishing yield-stress? Is the vicinity of point J and its isostatic state at all relevant for the flow properties? {It will be shown that while the stress fluctuations reflect the critical properties of point J, the dynamical correlations are typical of those observed in the flow of elasto-plastic solids}. We will approach these questions starting with the quasistatic-flow regime. The advantage of quasistatic simulations is that they provide a clean way of accessing the transition region between elastic, solid-like behavior and the onset of flow. In the quasistatic regime flow is generated by a succession of (force-)equilibrated solid states. Thus, one can connect a liquid-like flow with the ensemble of solid states that are visited along the trajectory through phase-space. In Section~\ref{sec:qs-sim}, we study the instantaneous statistical properties of the configurations generated by this flow trajectory at zero strain rate, and show that they display large fluctuations in several quantities that are associated with the proximity to the jamming point.
In Section~\ref{sec:chi4}, we follow the analysis of recent experiments~\cite{lechenault} and use a ``four-point correlation'' tool to define a dynamical correlation length that characterizes the extension of the dynamical heterogeneities observed in the flow process. This dynamical length scale is shown to scale as the system size in the zero strain-rate limit, independently of the distance to point J. The heterogeneity in the system is maximal for strains that correspond to the typical duration between the plastic avalanches described above. We complement this analysis with preliminary results from dissipative molecular-dynamics simulations that access strain-rates above the quasistatic regime. This allows us to assess the importance of dynamic effects in limiting access to certain regions of the landscape. Indeed, the results at larger strain rate are system-size independent, and reveal a surprising growth of the strength of the heterogeneities with increasing packing fraction away from $\phi_c$. \section{Simulations} Our system consists of $N$ soft spherical particles with harmonic contact interactions \begin{equation}\label{eq:harmInteraction} E(r) = k(r-r_c)^2\,. \end{equation} Two particles, having radii $r_i$ and $r_j$, interact only when they are ``in contact'', i.e.\ when their distance $r$ is less than the interaction diameter $r_c=r_i+r_j$. This system has been studied in several contexts, for example in~\cite{ohern03,olssonPRL2007,haxtonPRL2007}. The mixture consists of two types of particles ($50:50$) with radii $r_1=0.5d$ and $r_2=0.7d$ in two dimensions. Three different system sizes have been simulated, with $N=900$, $1600$ and $2500$ particles, respectively. The unit of length is the diameter, $d$, of the smaller particle; the unit of energy is $kd^2$, where $k$ is the spring constant of the interaction potential.
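To make the contact law concrete, the pair force derived from Eq.~(\ref{eq:harmInteraction}) can be sketched in a few lines (a minimal Python illustration in the reduced units of the text; the function name is ours and is not part of any simulation package):

```python
def pair_force(pos_i, pos_j, r_i, r_j, k=1.0):
    """Repulsive harmonic contact force on particle i due to particle j:
    F = -dE/dr = -2 k (r - r_c) along the center line, nonzero only when
    the particles overlap (r < r_c = r_i + r_j)."""
    dx, dy = pos_i[0] - pos_j[0], pos_i[1] - pos_j[1]
    r = (dx * dx + dy * dy) ** 0.5
    rc = r_i + r_j
    if r >= rc or r == 0.0:
        return (0.0, 0.0)          # no contact -> no force
    mag = -2.0 * k * (r - rc)      # positive for overlap: pushes i away from j
    return (mag * dx / r, mag * dy / r)

# two small particles (radius 0.5 d) overlapping by 0.2 d
fx, fy = pair_force((0.0, 0.0), (0.8, 0.0), 0.5, 0.5)
```

For the overlapping pair above the force has magnitude $2k\,(r_c-r)=0.4$ (in units of $kd$) and pushes the particles apart; beyond the contact distance the interaction vanishes identically, which is the purely repulsive steric character referred to in the discussion of point J.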
We use quasistatic shear simulations, and compare some of the results with those obtained from dissipative molecular-dynamics simulations at zero temperature. Quasistatic simulations consist of successively applying small steps of shear, each followed by a minimization of the total potential energy. The shear is implemented with Lee-Edwards boundary conditions with an elementary strain step of $\Delta\gamma=5\cdot10^{-5}$. After each change in boundary conditions the particles are moved affinely to define the starting configuration for the minimization, which is performed using conjugate-gradient techniques~\cite{lammps}. The minimization is stopped when the nearest energy minimum is found. Thus, as the energy landscape evolves under shear, the system always remains at a local energy minimum, characterized by a potential energy, a pressure $p$ and a shear stress $\sigma$. The molecular-dynamics simulations were performed by integrating Newton's equations of motion with elastic forces as deduced from Eq.~(\ref{eq:harmInteraction}) and dissipative forces \begin{equation}\label{eq:friction} {\vec F}_{ij} = - b\left[\left( {\vec v}_i - {\vec v_j}\right)\cdot {\hat r}_{ij}\right]{\hat r}_{ij}, \end{equation} proportional to the velocity difference along the direction ${\hat r}_{ij}$ that connects the particle pair. The damping coefficient is chosen to be $b=1$. Rough boundaries are used during the shear, built by freezing some particles at the extreme ends in the $y$-direction of a quenched liquid configuration at a given $\phi$. The system is sheared by driving one of the walls at a fixed velocity in the $x$-direction, using periodic boundary conditions in this direction. For all system sizes, the distance between the top and bottom boundaries is $52.8d$ and each of the boundaries has a thickness of $4.2d$. The system size is changed by modifying the length of the box in the $x$-direction.
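A single iteration of the quasistatic protocol (affine shear step, then relaxation to the nearest energy minimum) can be sketched as follows. This is a deliberately simplified stand-in: plain gradient descent with numerical gradients replaces the conjugate-gradient minimizer, and the Lee-Edwards periodic images are omitted, so it illustrates the protocol rather than reproducing the production code:

```python
def total_energy(pos, radii, k=1.0):
    """Sum of harmonic contact energies k (r - r_c)^2 over overlapping pairs."""
    E = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            r = (dx * dx + dy * dy) ** 0.5
            rc = radii[i] + radii[j]
            if r < rc:
                E += k * (r - rc) ** 2
    return E

def quasistatic_step(pos, radii, dgamma=5e-5, steps=500, lr=1e-2, h=1e-6):
    """One quasistatic shear step: affine displacement x -> x + dgamma * y,
    followed by relaxation towards the nearest local energy minimum
    (gradient descent with central-difference gradients as a stand-in
    for the conjugate-gradient minimizer)."""
    pos = [[x + dgamma * y, y] for x, y in pos]   # affine shear
    for _ in range(steps):
        grad = []
        for i in range(len(pos)):
            g = []
            for c in range(2):
                old = pos[i][c]
                pos[i][c] = old + h
                Ep = total_energy(pos, radii)
                pos[i][c] = old - h
                Em = total_energy(pos, radii)
                pos[i][c] = old
                g.append((Ep - Em) / (2 * h))
            grad.append(g)
        for i in range(len(pos)):                 # descend towards the minimum
            pos[i][0] -= lr * grad[i][0]
            pos[i][1] -= lr * grad[i][1]
    return pos

# relaxing two overlapping disks (no shear applied) removes the overlap
relaxed = quasistatic_step([[0.0, 0.0], [0.8, 0.0]], [0.5, 0.5], dgamma=0.0)
```

With purely repulsive contacts the relaxation simply pushes the two disks out of contact; in the actual simulations the minimizer instead finds the mechanically balanced, jammed configuration at fixed volume fraction.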
\section{Results} \subsection{Quasistatic simulations}\label{sec:qs-sim} As is readily apparent from Fig.~\ref{fig:stress_strain}, a typical feature of quasistatic stress-strain relations is the interplay of ``elastic branches'' and ``plastic events''. During elastic branches the stress grows linearly with strain and the response is reversible. In plastic events the stress drops rapidly and energy is dissipated. {In setting the elementary strain step, $\Delta\gamma$, care must be taken to properly resolve these events. Too large a strain step would make the simulations miss certain relaxation events. We chose a strain step small enough that most minimization steps do not involve any plastic relaxation. In consequence, the elastic branches are well resolved, each consisting of many individual strain steps.} \begin{figure}[h] \begin{center} \includegraphics[width=0.4\columnwidth,angle=-90]{figures/sigma.gamma.50.8470.eps} \hfill \includegraphics[width=0.4\columnwidth,angle=-90]{figures/sigma.gamma.50.8433.eps} \end{center} \caption{Stress-strain relation for two different volume-fractions, $\phi=0.847$ (top) and $\phi=\phi_c=0.8433$ (bottom). At volume-fractions close to $\phi_c$ the signal is intermittent, showing long quiescent regions where the system flows without building up stress.}\label{fig:stress_strain} \end{figure} The succession of elastic branches and plastic events defines the flow of the material just above its yield-stress $\sigma_y(\phi)$. The value of the yield-stress depends on the volume-fraction and nominally vanishes at $\phi_c$ (see Fig.~\ref{fig:yieldstress}). For finite systems, however, finite-size effects dominate close to $\phi_c$, such that one cannot observe a clear vanishing of $\sigma_y$. Rather, as Fig.~\ref{fig:stress_strain} shows, one enters an intermittent regime, {i.e.\ a finite interval in volume-fraction in which} the stress signal shows a coexistence between jammed and ``freely-flowing'' states.
This is evidence of a distribution of jamming thresholds, $P(\phi_c)$, which sharpens with increasing system size~\cite{ohern03,heussingerPRL2009}. A finite-size scaling analysis of this distribution allows one to extract the critical volume-fraction. {To this end we count the number of jamming events that lead from the freely-flowing state to the jammed state and back~\footnote{For a closer illustration of a jamming event see the supporting material to our previous paper~\cite{prl2009supplmat}}. We find a maximum number of events at a certain $\phi_c(L)$, which can be extrapolated to $L=\infty$ to define the critical volume fraction of our simulation.} The value we find, $\phi_c=0.8433$, is slightly higher than what has been obtained previously; however, evidence is mounting that $\phi_c$ is non-universal~\cite{chaudhuriCM2009} and depends on the details of the ensemble preparation. Scaling properties in the vicinity of a jamming threshold, on the other hand, appear to be universal~\cite{chaudhuriCM2009}. \begin{figure}[h] \begin{center} \includegraphics[width=0.49\columnwidth]{figures/sigma.phi.eps} \hfill \includegraphics[width=0.49\columnwidth]{figures/sigma.p.eps}\end{center} \caption{Average yield-stress as a function of volume-fraction (left) and pressure (right). The yield-stress is determined as an average over stress values just before plastic events occur, i.e.\ at the top end of each elastic branch.}\label{fig:yieldstress} \end{figure} In Fig.~\ref{fig:yieldstress} we display the yield-stress $\sigma_y$ as a function of volume-fraction, $\delta\phi=\phi -\phi_c$, and pressure $p$. Finite-size effects are particularly strong when using $\phi$ as the control variable. In the intermittent regime the average stress levels off to a system-size dependent value. Much better scaling behavior is obtained when using pressure as the control variable, as it is characterized by the same finite-size effects as the shear stress.
In the following we will therefore use pressure as the control variable. Be aware, however, that we do \emph{not} run pressure-controlled simulations as, for example, Peyneau and Roux~\cite{peyneauPRE2008}, but use the average pressure, $\langle p \rangle(\phi)$, only to plot our simulation results. The value $\sigma/p\approx0.1$ obtained from Fig.~\ref{fig:yieldstress} is consistent with these pressure-controlled simulations. On the other hand, the scaling with volume-fraction, $p\sim \delta\phi^{1.1}$, is slightly stronger than in linear elasticity at zero stress~\cite{ohern03,durianPRL1995}, where the pressure simply scales as $\delta\phi$. In view of the strong finite-size effects, the scaling with volume-fraction should, however, be taken with care. In the following we show results from five different volume-fractions, $\phi=0.846$, $0.848$, $0.85$, $0.86$ and $\phi=0.9$, which are all above $\phi_c$ and \emph{outside} the intermittent regime {(i.e.\ no freely-flowing zero-stress states occur)}. For each volume-fraction we study three different system sizes with $N=900$, $1600$ and $2500$ particles. \subsubsection{Elastic properties} As reviewed in the introduction, a hallmark of the elasticity of solids in the vicinity of point J is the scaling of the linear elastic shear modulus, $g\sim p^{1/2}$. Similarly, the number of inter-particle contacts scales as $z=z_0+Ap^{1/2}$. We have analyzed the elastic branches in the steady-state flow and find (Figs.~\ref{fig:g.p} and \ref{fig:z.p}) that the same scaling properties characterize the average nonlinear elastic modulus $g_{\rm avg}$, which we define as the local slope of the stress-strain curve, and also the associated contact numbers $z_{\rm avg}$. If we take these scaling properties as a signature of the criticality of point J, we can conclude that for the range of volume-fractions considered we are in the ``critical regime''.
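Extracting $z_0$ and the amplitude $A$ from data of the form $z=z_0+Ap^{1/2}$ amounts to an ordinary linear least-squares fit in the variable $s=\sqrt{p}$; a minimal sketch (run here on exact synthetic data, not the simulation values):

```python
def fit_sqrt_scaling(pressures, z_values):
    """Least-squares fit of z = z0 + A * sqrt(p), which is linear in
    s = sqrt(p). Returns (z0, A)."""
    s = [p ** 0.5 for p in pressures]
    n = len(s)
    s_mean = sum(s) / n
    z_mean = sum(z_values) / n
    cov = sum((si - s_mean) * (zi - z_mean) for si, zi in zip(s, z_values))
    var = sum((si - s_mean) ** 2 for si in s)
    A = cov / var                   # slope in the sqrt(p) variable
    return z_mean - A * s_mean, A   # intercept is z0

# synthetic check: data generated exactly on z = 4 + 2 sqrt(p)
ps = [1e-4, 1e-3, 1e-2, 1e-1]
zs = [4.0 + 2.0 * p ** 0.5 for p in ps]
z0, A = fit_sqrt_scaling(ps, zs)
```

On noisy simulation data the same fit yields the best-fit isostatic contact number $z_0$ and the square-root amplitude; the quality of the linearity in $\sqrt{p}$ is itself a check of the critical scaling.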
\begin{figure}[h] \begin{center} \includegraphics[width=0.6\columnwidth,angle=-90]{figures/g.p.eps} \includegraphics[width=0.6\columnwidth,angle=-90]{figures/g.distr.eps} \end{center} \caption{(Top) Average nonlinear elastic shear modulus $g_{\rm avg}$ as a function of pressure $p$. (Bottom) Probability distribution $P(g)$ centered around the average value $g_{\rm avg}$, with the width rescaled according to $\Delta g= p^{0.25}/N^{0.5}$. The black solid line is a Gaussian pdf. }\label{fig:g.p} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.6\columnwidth,angle=-90]{figures/z.p.eps} \hfill \includegraphics[width=0.6\columnwidth,angle=-90]{figures/z.distr.eps} \end{center} \caption{(Top) Average contact number $z$ as a function of pressure $p$. The values for $z_0\equiv z(p=0)$ are determined from a best fit. They are smaller than $z_{c}=4$ due to the presence of rattlers, which have not been accounted for. (Bottom) Probability distribution $P(z)$ centered around the average value $z_{\rm avg}$, with the width rescaled according to $\Delta z= p^{-0.35}/N^{0.5}$}\label{fig:z.p} \end{figure} As an additional characterization of the ensemble of elastic states, we report the probability distributions of shear moduli and contact numbers, respectively. Perhaps surprisingly, all the obtained distributions have approximately the same shape and can be superimposed onto a single master curve. To achieve this we center each distribution around its average value and rescale the width by a factor $p^\alpha N^\beta$. By looking carefully at the individual distributions, we do observe a slight trend towards the development of non-Gaussian tails close to $\phi_c$. While non-Gaussian distributions are to be expected close to critical points~\cite{binderZPhysikB1981}, the effect is quite small and all distributions have a well developed Gaussian core. The pronounced small-$g$ tail of $P(g)$ is due to shear moduli that extend down to zero.
Similar tails have been observed in \cite{MaloneyPRE2006} and related to a softening of the response upon the approach towards plastic instabilities. Indeed, we found that manually suppressing states close to plastic events reduces the weight in the small-$g$ tail. For the width of the $g$-distribution we obtain $\Delta g = p^{0.25}/N^{0.5}$. Thus, the absolute width of the distribution decreases with decreasing pressure, while the relative width, $\Delta g/g_{\rm avg}$, diverges at point J. For the contact numbers, on the other hand, we find a divergence of the absolute width itself, $\Delta z = p^{-0.35}/N^{0.5}$. These enhanced fluctuations certainly support the view of $\delta z$ as an order parameter for a continuous jamming transition. The quantity $\Delta z^2N$ would then be analogous to a susceptibility, $\chi\sim p^{-\gamma}$, diverging with an exponent $\gamma=0.7$. Our results differ from those of Henkes and Chakraborty~\cite{henkesPRE2009}, where fluctuations of $z$ are found to be independent of pressure, $\Delta z\sim p^0$. Note, however, the subtle difference in the ensembles. These authors study a pressure-ensemble, {in which states are generated by quenching \emph{random} particle configurations to the local minimum of the potential energy landscape (similar to the procedure in Ref.~\cite{ohern03}). One may view these states as the inherent structures of a high temperature liquid. Our ensemble then corresponds to the inherent structures of a driven glassy material. We fix the volume-fraction and sample only states that are connected by the trajectory of the system in phase-space. The ensemble therefore reflects the dynamics of the system and the region of phase-space to which it is guided.} \subsubsection{Yield properties} We now go beyond the properties of the elastic states and discuss aspects related to their failure during the plastic events. As indicated in the introduction, plastic events can be viewed as bifurcations in the energy landscape.
A local energy minimum vanishes and the system has to search for a new minimum at lower energy and stress. In quasistatic dynamics this process is instantaneous. The associated stress-drop is therefore visible as a vertical line in the stress-strain relation (Fig.~\ref{fig:stress_strain}). \begin{figure}[t] \begin{center} \includegraphics[width=0.8\columnwidth,angle=0]{figures/stressdrops.distr.eps} \hfill \includegraphics[width=0.6\columnwidth,angle=-90]{figures/avg.stressdrop.p.eps} \end{center} \caption{(Top) Distribution of stress-drops normalized with the average values $\Delta\sigma_{\rm avg}$. The inset shows the same figure in a log-log representation. (Bottom) Scaling of $\Delta\sigma_{\rm avg}$ with pressure $p$. }\label{fig:distrDrops} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.6\columnwidth,angle=-90]{figures/el.length.distr.eps} \hfill \includegraphics[width=0.6\columnwidth,angle=-90]{figures/avg.el.length.p.eps} \end{center} \caption{(Top) Distribution of elastic branch lengths normalized with the average values $\Delta\gamma_{\rm avg}$. (Bottom) Scaling of $\Delta\gamma_{\rm avg}$ with pressure $p$. The ratio $g=\Delta\sigma_{\rm avg}/\Delta\gamma_{\rm avg}$, which defines a shear-modulus, is consistent with the scaling in Fig.~\ref{fig:g.p}. }\label{fig:elBranches} \end{figure} In the following we characterize the amount of dissipation during a plastic event by the associated stress-drop, $\Delta\sigma$. The frequency of plastic events is discussed in terms of the length of elastic branches, $\Delta\gamma$. Just like the probability distributions within the elastic states, the functions $P(\Delta\sigma)$ and $P(\Delta\gamma)$ (Figs.~\ref{fig:distrDrops} and \ref{fig:elBranches}) are universal and can be rescaled onto a single master-curve. Here, it is sufficient to use the first moment of the distribution, i.e. the ensemble-averaged stress drops and elastic-branch lengths, respectively.
As the logarithmic scale in the inset of Fig.~\ref{fig:distrDrops} shows, the collapse for the stress-drop distribution is quite good for large as well as for small stress drops. The black line represents a fit of the form $\Delta\sigma^{-1}\exp(-\Delta\sigma/\sigma_L)$ with the stress-scale $\sigma_L\approx 5\Delta\sigma_{\rm avg}$. The intermediate power-law behaviour $P\sim\Delta\sigma^{-1}$ reflects the lack of a scale related to a typical event size. The only relevant scale is the exponential cut-off at $\sigma_L$. Tewari {\it et al.} \cite{tewariPRE1999} have reported an exponent of $-0.7$ in the energy-drop distribution at finite strain-rates. The simulated systems are somewhat smaller, however. Kabla {\it et al.} \cite{kablaJFM2007} have found an exponent of $-1.5$ in a vertex model for foams, in agreement with renormalization group arguments~\cite{dahmenPRL2009}. The exponential tail has been observed in several different studies \cite{tsamadosEPJE2008,lernerPRE2009,MaloneyPRE2006,baileyPRL2007} in two and in three spatial dimensions. Tsamados {\it et al.} \cite{tsamadosEPJE2008} have furthermore related this feature of the stress-drop distribution to the diversity of local flow-defects causing the plastic event. A similar universality has been observed by Maloney and Lema\^{i}tre~\cite{MaloneyPRE2006}. Their simulations are conducted with three different interaction potentials but without changing the density, which is set to high values far away from the rigidity transition. The authors have argued for a universal value of the ``flow-strain'' $\sigma_y/g$ of a few percent. Apparently, this can only be true far away from $\phi_c$. As the yield-stress vanishes faster than the shear modulus, one finds a ratio $\sigma_y/g\sim \delta\phi^{1/2}$ that vanishes at point J. Thus, particle configurations at the onset of jamming are highly fragile and susceptible to even minute changes in the boundary conditions.
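As a plausibility check of the fitted form, one can verify numerically that $P(\Delta\sigma)\sim\Delta\sigma^{-1}\exp(-\Delta\sigma/\sigma_L)$ with $\sigma_L=5$ (in units of $\Delta\sigma_{\rm avg}$) indeed has unit mean, once a small-drop cutoff is supplied; the cutoff value below is an assumption chosen for illustration, not a measured quantity:

```python
import numpy as np

# Illustrative consistency check of the fitted form
# P(x) ~ x^(-1) * exp(-x / sigma_L), with x = Delta_sigma measured in
# units of the average drop, so the distribution should have unit mean.
# The x^(-1) part is not normalizable without a lower cutoff x_min,
# whose value here is an assumed illustration parameter; sigma_L = 5
# as quoted in the text.
sigma_L = 5.0
x_min = 0.019                       # assumed small-drop cutoff
x = np.linspace(x_min, 100.0, 1_000_000)
P = np.exp(-x / sigma_L) / x        # unnormalized fit form
mean = (x * P).sum() / P.sum()      # uniform grid: the spacing cancels
print(f"mean drop in units of the average: {mean:.3f}")
```

The mean is dominated by the exponential cut-off, while the $\Delta\sigma^{-1}$ part only enters through the (logarithmic) normalization.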
The average stress-drop as well as the average length of elastic branches change with pressure and system-size as displayed in Figs.~\ref{fig:distrDrops} and \ref{fig:elBranches}~\footnote{Note that these values also depend on the elementary strain step used in the simulations. For any finite step-size there will be some small plastic events that cannot be resolved but only lead to an apparent reduction of the stress increase. This will artificially increase the length of elastic branches and decrease the weight of the small-$\Delta\sigma$ tail of the stress-drop distribution.}. As a function of pressure one observes an increase, but with a slope that depends on system-size. The average stress-drops increase somewhat more slowly with pressure than the yield-stress~\footnote{The effective exponents range from $0.8$ to $0.9$, as compared to an exponent ${0.95}$ for the yield-stress.}. The relative stress fluctuations $\Delta\sigma_{\rm avg}/\sigma_y$ are thus slightly enhanced at small pressures close to $\phi_c$. The same trend is also visible in the total stress fluctuations as calculated by $\left\langle (\sigma-\langle \sigma\rangle)^2\right \rangle$. The overall scale of both $\Delta\sigma_{\rm avg}$ and $\Delta\gamma_{\rm avg}$ decreases with system-size to give a smooth stress-strain relation in the thermodynamic limit. Previous studies~\cite{tanguyEPJE2006,MaloneyPRE2006,tsamadosEPJE2008} have observed a scaling of the stress-drops with $N^{-1/2}$. This includes~\cite{MaloneyPRE2006} a system of harmonically interacting particles, as studied here, but at a rather high pressure. In general, we observe a weaker dependence on system-size, with an effective exponent that increases with pressure. Our data are consistent, however, with the value of $1/2$ being the relevant high-pressure limit. \subsection{Dynamical correlations} \label{sec:chi4} Let us now turn to the dynamics of the system.
In particular we want to characterize dynamic correlations in the motion of particles. While at volume-fractions above $\phi_c$ the isostaticity length-scale $l^\star$ is clearly finite~\footnote{If we assume for the isostaticity length $l^\star=1/\delta z$, we would have values $l^\star \approx5$ at $\phi=0.846$ and $l^\star\approx 1$ at $\phi=0.9$ ($z$-values taken from Fig.~\ref{fig:z.p}).}, there is nevertheless a large dynamical length-scale related to the flow arrest. This has, for example, been evidenced in a system of Lennard-Jones particles with dissipative dynamics~\cite{lemaitrePRL2009}. We will show below that a similar length-scale occurs in our system of purely repulsively interacting particles, independent of the distance to $\phi_c$. Let us start by presenting the results from the quasistatic simulations. To define a dynamical correlation length we study heterogeneities in the particle mobilities. To this end we use the overlap-function~\cite{glotzerJCP2000,franzJPhysCM2000} \begin{equation}\label{eq:Q_definition} \langle Q(\gamma,a)\rangle =\left\langle \frac{1}{N}\sum_{i=1}^N\exp\left[-\frac{u_{i\rm na}(\gamma)^2}{2a^2}\right] \right\rangle\,, \end{equation} of particles undergoing nonaffine displacements $u_{i\rm na}$ during a strain interval of $\gamma$~\footnote{{To calculate the overlap function we use the non-affine displacements in the gradient direction (irrespective of the distance to the wall).}}. Particles moving farther than the distance $a$ (``mobile''), have $Q\approx 0$, while those that stay within this distance (``immobile'') have $Q\approx 1$. As a function of strain $\gamma$, the average overlap $\langle Q\rangle$ will decay, when particle displacements $u_{\rm na}$ are comparable to the probing length-scale $a$. The overlap function is similar to the intermediate scattering function with wave-vector $q\sim 1/a$. Thus, $a$ sets the probing length-scale. 
The decay of $Q(\gamma,a)$ then gives an associated structural relaxation strain, $\gamma^\star(a)$, on which particle positions decorrelate. In the following we are interested in the dynamical heterogeneity of $Q$ and the fluctuations around its average value \begin{equation}\label{eq:chi4} \chi_4(a,\gamma) = N\left(\left\langle Q(\gamma,a)^2\right\rangle - \left\langle Q(\gamma,a)\right\rangle^2\right)\,, \end{equation} which defines the (self-part of the) dynamical susceptibility $\chi_4$. This is displayed in Fig.~\ref{fig:chi4} as a function of both strain $\gamma$ and probing length-scale $a$. For each $\gamma$ it has a well-defined peak (at $a^\star(\gamma)$) that parallels the decay of the overlap function $\langle Q\rangle$~\cite{lechenault}. \begin{figure}[h] \begin{center} \includegraphics[width=0.8\columnwidth]{figures/chi4.50.9000.eps} \end{center} \caption{Dynamical susceptibility $\chi_4$ as a function of probing length-scale $a$ for various strains $\gamma=(5,20,50,200,500,2000)\cdot 10^{-5}$ (from left to right) and $\phi=0.9$. The maxima of the curves (black circles) define the amplitude $h(\gamma)$.}\label{fig:chi4} \end{figure} The strength of the correlations is encoded in the peak-height, $h(\gamma)\equiv\chi_4(a^\star(\gamma),\gamma)$ (black circles in Fig.~\ref{fig:chi4}). As $\chi_4$ can be written as the integral over a correlation function, it is connected to the correlation volume, or to the number of correlated particles. Assuming that this volume forms a compact region in space~\cite{dauchot2005PRL,droccoPRL2005} we can relate the amplitude of $\chi_4$ to a dynamic correlation length via $\xi^2(\gamma)= h(\gamma)$. Following the maxima in Fig.~\ref{fig:chi4} from left to right, one sees that the amplitude first increases and then quickly drops to small values. This implies that there is a finite strain $\gamma$ at which $\chi_4$ presents an \emph{absolute} maximum.
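The definitions of $Q$ and $\chi_4$ translate directly into a few lines of code. The sketch below evaluates both for synthetic, uncorrelated Gaussian nonaffine displacements (an assumption made purely for illustration, not data from the simulations described here):

```python
import numpy as np

# Minimal sketch of the overlap function Q and the susceptibility chi_4.
# The nonaffine displacements are drawn as independent Gaussians here,
# purely to illustrate the definitions; real sheared packings show
# correlated rearrangements and hence much larger chi_4.
rng = np.random.default_rng(1)
N, n_samples, a = 400, 200, 0.01
u_na = rng.normal(0.0, 0.01, size=(n_samples, N))   # one row per configuration

Q = np.exp(-u_na**2 / (2.0 * a**2)).mean(axis=1)    # per-configuration overlap
Q_avg = Q.mean()
chi4 = N * (np.mean(Q**2) - Q_avg**2)
print(f"<Q> = {Q_avg:.3f}, chi4 = {chi4:.3f}")
```

For independent particles $\chi_4$ stays of order one; collective rearrangements raise it to the much larger values reported in the quasistatic regime.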
To extract this maximum we plot in Fig.~\ref{fig:chi4_amplitude} the amplitude $h(\gamma)$ for various volume-fractions $\phi$ and system-sizes $N$. There are two surprising features in this plot. First, by rescaling the strain-axis with the average length of elastic branches, $\Delta\gamma_{\rm avg}$ (see Fig.~\ref{fig:elBranches}), we find a reasonable scaling collapse for all studied volume-fractions and system-sizes. {This implies that cooperativity, as measured by the amplitude of $\chi_4$, and the length of elastic branches are intimately related}. The frequency of plastic events sets the strain-scale for dynamical heterogeneities. As the length of the elastic branches decreases with system size, the absolute maximum shifts towards smaller strains, with $h(\gamma)$ becoming effectively a decreasing function of $\gamma$ in the thermodynamic limit. \begin{figure}[t] \begin{center} \includegraphics[width=0.8\columnwidth]{figures/chi4_amplitude.rescaled.eps} \end{center} \caption{Amplitude $h(\gamma)$ as taken from the quasistatic simulations. Data for different volume-fractions and system-sizes. The axes are normalized according to the scaling form $h(\gamma)=N\tilde h(\gamma/\Delta\gamma_{\rm avg})$, with $\Delta\gamma_{\rm avg}$ taken from Fig.~\ref{fig:elBranches}.}\label{fig:chi4_amplitude} \end{figure} The second surprising feature in Fig.~\ref{fig:chi4_amplitude} is the system-size dependence of $h$. It turns out that $h/N$ rather than $h$ itself is independent of system-size, indicating a finite variance of the distribution of $Q$ values in the thermodynamic limit (see Eq.~(\ref{eq:chi4})). Assuming the connection with the correlation length to hold, $\xi^2\sim h$, this implies a correlation length that is proportional to the length of the simulation box, $\xi\approx 0.3L$, independent of volume-fraction and distance to point J.
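The proportionality $\xi\propto L$ is elementary arithmetic: if $h=cN$ with $\xi^2=h$, and $N\propto L^2$ at fixed $\phi$ in two dimensions, then $\xi/L$ is constant. A short sketch (the prefactor $c$ and the monodisperse estimate for $N(L)$ are assumptions chosen to reproduce the reported ratio of $0.3$):

```python
import numpy as np

# Arithmetic behind xi ~ 0.3 L: if h = c*N with xi^2 = h, and in two
# dimensions N ~ phi * L^2 / (pi r^2) at fixed volume-fraction
# (monodisperse estimate with r = 0.5), then xi/L is independent of L.
# The prefactor c is hypothetical, chosen to reproduce the ratio 0.3.
phi, r, c = 0.9, 0.5, 0.08
for N in (900, 1600, 2500):
    L = np.sqrt(N * np.pi * r**2 / phi)      # box length at fixed phi
    ratio = np.sqrt(c * N) / L               # = sqrt(c*phi/(pi*r^2))
    print(f"N={N}: xi/L = {ratio:.3f}")
```

The ratio is thus fixed by $\phi$ and the prefactor alone, independent of the system size.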
This illustrates the fact that quasistatic dynamics is inherently dominated by system-size effects, as already shown in previous works~\cite{lemaitrePRL2009,PicardPRE2005,heussingerPRL2009,maloneyPRL2006}. {Note that this system-size dependence is of different origin than the finite-size effects present within the above mentioned intermittent regime, which occurs close to $\phi_c$. The intermittency can be avoided by staying away from $\phi_c$. In contrast, the system-size dependence encountered here is quite independent of volume-fraction, but rather a generic feature of the quasistatic regime, as we show now.} To this end let us turn to the molecular-dynamics simulations. We will show that the dependence on system-size indeed reflects the saturation of a length-scale that is finite for larger strain-rates and increases towards the quasistatic regime~\footnote{A more detailed account of these simulations will be presented in: P. Chaudhuri and L. Bocquet, in preparation (2010).}. As Fig.~\ref{fig:chi4_strainrate} shows, the amplitude of $\chi_4$ increases when reducing the strain-rate ($a=0.01$, $\phi=0.9$) and approaches the quasistatic limit for small strain-rates. Also the strain $\gamma_m$ at which $\chi_4$ is maximal is very well reproduced in the dynamic simulation. \begin{figure}[t] \begin{center} \includegraphics[width=0.8\columnwidth]{figures/chi4.50.900.strainrate.eps} \end{center} \caption{$\chi_4(\gamma)$ for different strain-rates $\dot\gamma$ and $a=0.01$. Comparison with quasistatic (`qs') simulations. Note the different boundary conditions used: MD simulations are with walls, while quasistatic simulations have periodic boundary conditions.
This may explain the difference in the amplitude.}\label{fig:chi4_strainrate} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.95\columnwidth]{figures/chi4_amplitude.gammadot.phi.eps} \end{center} \caption{(Left) Peak-height $\chi_m=\chi_4(\gamma_m)$ as determined from Fig.~\ref{fig:chi4_strainrate}. The saturation at small strain-rates is an indication of the quasistatic limit, in which $\chi_m\sim N$. In contrast, at high strain-rates no significant dependence on system-size is observed. (Right) Peak-height $\chi_m$ as a function of volume-fraction.}\label{fig:xi_dotgamma_phi} \end{figure} Fig.~\ref{fig:xi_dotgamma_phi} demonstrates a saturation of the amplitude $\chi_m\equiv \chi_4(\gamma_m)$ at small strain-rates, indicating that the quasistatic regime is entered. Comparing with the quasistatic simulation, we find somewhat smaller values for the amplitude. It should be remembered, however, that different boundary conditions have been used. The rough walls used in the molecular dynamics simulations are likely responsible for the reduction of the peak-height as compared to the quasistatic simulations (which are performed with periodic boundary conditions). The presence of the quasistatic regime is also evidenced by the fact that the amplitude $\chi_m$ within the plateau depends on system-size, just as in the quasistatic simulations. Outside this regime, on the other hand, no significant $N$-dependence is observed. {In effect this means that the quasistatic regime shrinks with increasing system-size. The strain-rate $\dot\gamma_{\rm qs}(N)$ that describes the crossover to the quasistatic regime decreases with $N$. This is in line with Refs.~\cite{lemaitrePRL2009,PicardPRE2005}, where a power-law dependence $\dot\gamma_{\rm qs}\sim1/N$ is reported. From our data we cannot make any definitive statement about this dependence.} The results are furthermore consistent with those of Ono {\it et al.} \cite{onoPRE2003}.
The lowest strain-rate accessible in the latter study was $\dot\gamma=0.0001$. At this strain-rate the correlation length was observed to be on the order of $3$, in agreement with our data. We also probed the volume-fraction dependence by performing runs at $\phi=0.85,0.87,0.9,0.95$ and $1$. The resulting amplitude of $\chi_4$ is given in Fig.~\ref{fig:xi_dotgamma_phi}. Interestingly, we observe a mild increase in the amplitude with volume-fraction, signalling enhanced correlations \emph{away} from $\phi_c$. This is not due to the special choice of the parameter $a=0.01$. We have found the same trend when fixing $\gamma$ and viewing $\chi_4$ as a function of $a$, as for the quasistatic simulations in Fig.~\ref{fig:chi4}. Finally, we have also calculated the \emph{absolute maximum} of $\chi_4$, viewed as a function of both $a$ and $\gamma$. In all cases, the amplitude increases with volume-fraction. Both the increase of the length-scale upon lowering the strain-rate and the increase with volume-fraction are consistent with the recently proposed elasto-plastic model of Bocquet {\it et al.} \cite{bocquetPRL22009}. Given the trend in Fig.~\ref{fig:xi_dotgamma_phi}, one may speculate about a vanishing dynamical correlation length (taken at constant strain-rate $\dot\gamma=10^{-4}$ outside the quasistatic regime) as $\phi_c$ is approached. Such a behavior is indeed compatible with our data and has recently been observed in the rheology of a concentrated emulsion confined in gaps of different thickness~\cite{goyonNature2008}. {Here we interpret this surprising feature in the following way: at a given packing fraction, dynamical correlations increase with decreasing $\dot{\gamma}$, and saturate at an $N$-dependent value in the quasistatic regime. This cross-over to the quasistatic regime does not only depend on system-size but also on packing fraction: $\dot{\gamma}_{\rm qs} (N,\phi )$. For $\phi$ closer to $\phi_c$ the energy landscape becomes increasingly flat.
Particle relaxations take longer~\cite{hatanoPRE2009} and smaller strain-rates are needed to allow for full relaxation into the local energy minimum. Thus, smaller strain-rates are needed to reach the quasistatic behavior and $\dot\gamma_{\rm qs}$ decreases towards $\phi_c$. Hence, reducing $\phi$ towards $\phi_c$ {\it at a fixed strain rate} is ``equivalent'' to increasing the strain rate relative to $\dot{\gamma}_{\rm qs}$, and results in a decrease of dynamical correlations. In contrast, correlations taken at a strain rate $\dot\gamma=\dot\gamma_{\rm qs}(\phi)$ are independent of $\phi$ and given by their quasistatic values, as displayed in Fig.~\ref{fig:chi4_amplitude}.} \section{Discussion and Conclusion} We have discussed the small strain-rate elasto-plastic flow of an athermal model system of soft harmonic spheres. In particular, we were interested in the flow properties at and above a critical volume-fraction (point J), at which the yield-stress of the material vanishes. This regime combines the more traditional elasto-plastic flow of solids above their yield-stress with the breakdown of the rigidity of the solid state at point J. We found that this breakdown is visible in the ensemble of states visited during a flow simulation in a similar way as in the linear elasticity of the solid. For example (Fig.~\ref{fig:z.p}), we showed that the average number of particle contacts scales with the square-root of pressure, just as in linear elasticity. In contrast, the fluctuations around this average value show a distinct behavior that has not been observed previously. We showed that the contact-number fluctuations actually diverge upon approaching the critical volume-fraction from above, making the contact number an ideal candidate for an order parameter of a continuous jamming transition as observed under steady shear. The relative fluctuations of the shear modulus and those of the shear stress also diverge in the same limit.
Going beyond the characterization of the average elastic properties we have studied the statistics of plastic events (Figs.~\ref{fig:distrDrops} and \ref{fig:elBranches}). It seems that all distributions have universal scaling forms reminiscent of standard critical phenomena. {F}rom all these results, it would be tempting to say that it is the energy landscape as a whole that becomes critical at point J. Isostatic elasticity would then be just one aspect of this criticality, another one could be the intermediate power-law tail in the stress-drop distribution. This critical aspect is also illustrated by the intermittency in the stress response of finite-size systems (see Fig.\ref{fig:stress_strain}) and by the growth of an isostatic correlation length in the quasistatic response when point J is approached from below \cite{heussingerPRL2009}. At strain-rates above the quasistatic regime, the dynamics limits access to certain regions of the energy landscape. While the dynamics is still highly correlated, the dynamical correlation length, as measured by the amplitude of the four-point susceptibility $\chi_4$, remains finite and actually \emph{decreases} with lowering the volume-fraction towards $\phi_c$. In the quasistatic regime we have shown that $\chi_4$ reflects, in two ways, the interplay of elastic loading and plastic energy release (Fig.~\ref{fig:chi4_amplitude}). First, the typical strain-scale of heterogeneity is set by the frequency of plastic events. Second, the amplitude of $\chi_4$ scales with system-size, which highlights the fact that the quasistatic, plastic flow regime is, in fact, a finite-size dominated regime with a correlation length that is limited by system size. This behavior should be contrasted with the one observed below $\phi_c$, where a large but finite correlation length has been identified, which is governed by the approach to point J~\cite{heussinger10epl}. 
Upon increasing the strain-rate we have shown that the correlation length starts to decrease outside the finite-size scaling regime (Fig.~\ref{fig:xi_dotgamma_phi}). Olsson and Teitel~\cite{olssonPRL2007} infer from their flow simulations that shear-stress should be viewed as a ``relevant perturbation'' to point J, such that a different fixed-point and indeed different physics is relevant for the flow behaviour at finite stress. Our findings support this picture for the dynamical correlations, which appear to behave similarly to those observed in models of elasto-plastic flow \cite{PicardEPJE2004,PicardPRE2005,bocquetPRL22009} or in low temperature glasses \cite{tanguyEPJE2006,lemaitrePRL2009}: {correlations increase upon lowering the strain-rate and saturate at a system-size dependent value in the quasistatic regime.} The flow behaviour in the vicinity of point J is therefore influenced by a complex combination of two critical behaviours. Large stress fluctuations (relative to the yield stress) and geometrical changes (number of neighbours) reflect the enhanced sensitivity of the material to small changes in external conditions at point J, and are specific properties of the energy landscape at this point. On the other hand, dynamical correlations above point J are dominated by the system size and build up progressively as the strain rate is decreased, as in any elasto-plastic system; they are not particularly sensitive to the proximity of point J. {\bf Acknowledgments} The authors acknowledge fruitful discussions with Ludovic Berthier, Lyd\'eric Bocquet, Erwin Frey, Craig Maloney and Michel Tsamados, and thank the von-Humboldt Feodor-Lynen, the Marie-Curie Eurosim and the ANR Syscom programs for financial support.
\section{Models and problem formulation} Various problems in fluid mechanics, contact mechanics, heat transfer or diffusion across membranes lead to parabolic or coupled elliptic-parabolic systems of partial differential equations (or inequations) with nonlinear, dynamical conditions prescribed on a Riemannian manifold $\Gamma$ (see \cite{Li69}). We consider in this article the problem \begin{equation}\label{eq:mod1} \begin{split} -\Delta u(t,x) & = 0 \quad \text{in } {\rm I\hspace{-0.50ex}R} ^+ \times (\Omega_i\cup\Omega_e) , \\ \partial_t\jump{u}+s(\jump{u}) -\sigma_e \, \partial_{n_e} u_e & = 0 \quad \text{on } {\rm I\hspace{-0.50ex}R} ^+ \times\Gamma ,\\ \jump{\sigma \partial_n u} & = 0 \quad \text{on } {\rm I\hspace{-0.50ex}R} ^+ \times \Gamma , \\ u_i & = g_i \quad \text{on } {\rm I\hspace{-0.50ex}R} ^+ \times (\partial\Omega_i \setminus \Gamma ), \\ u_e & = g_e \quad \text{on } {\rm I\hspace{-0.50ex}R} ^+ \times (\partial\Omega_e \setminus \Gamma ) , \\ u(0,\cdot ) & = u_0 \quad \text{in } \Omega_i\cup\Omega_e . \end{split} \end{equation} Here, $s$ is a given real function, $\Gamma$ is a Lipschitz regular manifold, $\Omega_i$ and $\Omega_e$ are two disjoint, open sets with Lipschitz regular boundary such that \begin{align*} & \Gamma \subseteq \partial \Omega_i \cap \partial\Omega_e , \end{align*} and \[ [u] = u_i|_\Gamma - u_e|_\Gamma \] is the difference of the traces of ${u}_i := {u}|_{\Omega_i}$ and ${u}_e := {u}|_{\Omega_e}$ on the part of the common boundary $\Gamma$. Moreover, $g\in H^1 (\Omega_i\cup\Omega_e )$, and we denote by ${g}_i := {g}|_{\Omega_i}$ and ${g}_e := {g}|_{\Omega_e}$ the restrictions of the function $g$, as well as their traces on $\partial\Omega_i\setminus\Gamma$ and $\partial\Omega_e\setminus\Gamma$, respectively; there will be no danger of confusion when we denote the functions in the interiors and on the boundaries by the same letter. 
We denote by $n_i$ and $n_e$ the outer unit normals on the boundaries of $\Omega_i$ and $\Omega_e$, respectively, and we denote by \[ [\sigma \partial_n u] = \sigma_i \partial_{n_i} u_i + \sigma_e \partial_{n_e} u_e = \sigma_i \partial_{n_i} u_i - \sigma_e \partial_{n_i} u_e \] the jump of the outer normal derivatives on $\Gamma$; note that $n_i = -n_e$ almost everywhere on $\Gamma$. Here, $\sigma_i$, $\sigma_e >0$ are the (constant) conductivities in $\Omega_i$ and $\Omega_e$, respectively. In the applications which we have in mind, $\Omega_i$ plays the role of an interior domain, $\Omega_e$ is an exterior domain, $\partial\Omega_i = \Gamma$ and $\partial\Omega_e = \Gamma \dot\cup\partial\Omega$. When the manifold $\Gamma$ is the external boundary of a set $\Omega$, a gradient system structure has already been identified for similar problems, namely for problems involving the Dirichlet-to-Neumann (Steklov-Poincar\'e) operator. By applying a recent approach of Chill, Hauer \& Kennedy \cite{ChHaKe14}, we identify an abstract gradient system structure for the problem \eqref{eq:mod1}, and thus provide a unified framework to solve it. The point of this approach is that the gradient structure is identified on the boundary space $L^2 (\Gamma )$, where the actual evolution takes place, while we work with an energy defined on $H^1 (\Omega_i\cup\Omega_e)$. We emphasize that in the gradient system framework, a standard and complete theory for well-posedness, regularity and asymptotic behavior, as well as a large choice of efficient numerical methods for computing solutions, are well established; in particular, a large class of steepest descent methods and optimization approaches with well-known properties are ready to use. Following the seminal work of Hodgkin \& Huxley \cite{HoHu52}, many systems of equations like problem \eqref{eq:mod1} have been considered in the study of the electrical cell activity in biological tissues \cite{Fi55, Fi83, Fi81}.
As a particular example, we consider a revisited version of a model introduced recently by Kavian, Legu\`ebe, Poignard \& Weynans \cite{KLPW14} for the electropermeabilisation (or electroporation) of the membrane of a cell subjected to a short electric pulse. Roughly speaking, under a high transmembrane (electric) potential, the membrane becomes more permeable, thus allowing the diffusion of some molecules; we refer the interested reader to \cite{NeKr99, TeGoRo05, IvViMi10, PePo13, KLPW14} and the references therein for more details on the modelling and the numerous applications of this problem. In their article, Kavian et al. proposed and analysed a mathematical problem to describe qualitatively the electropermeabilisation of a single cell. They considered a static and a dynamical model with a function $s$ ensuring a smooth transition between two states of the membrane conductivity. We emphasize that their dynamical model does not fit into our approach, which is limited to autonomous systems like \eqref{eq:mod1} with the function $s$ independent of time; however, we have less restrictive assumptions on $s$, $\Omega_i$ and $\Omega_e$, which enlarge the class of problems for which we can identify the abstract gradient structure. In principle, it is straightforward to generalise the theory to quasilinear equations, for example, to equations where the Laplace operator is replaced by the nonlinear $p$-Laplace operator (see Remark \ref{rem.extensions} below). The article is organised as follows. In Section \ref{sec.gradient.structure} we present the theoretical background which leads to the observation that the coupled elliptic-parabolic system \eqref{eq:mod1} is a gradient system. Well-posedness and regularity of solutions then follow from classical results. In Section \ref{sec.discretisation} we discuss the discretisation of the problem \eqref{eq:mod1}, relying on the theoretical framework and results obtained in Section \ref{sec.gradient.structure}.
In Section \ref{sec.numerics} we present numerical experiments based on the abstract results. We compare the numerical solution with an analytical solution in the context of a simple geometry and a linear transmission law, and provide numerical solutions in the context of nonlinear transmission laws or more complicated geometries. \section{Gradient structure} \label{sec.gradient.structure} Before turning our attention to the evolution problem \eqref{eq:mod1}, we consider the stationary problem \begin{equation} \label{eq.stationary} \begin{split} -\Delta {u} & = 0 \quad \text{in } \Omega_i \cup \Omega_e , \\ [\sigma \partial_n {u}] & = 0 \quad \text{on } \Gamma , \\ s([{u}]) - \sigma_e \, \partial_{n_e} {u}_e & = f \quad \text{on } \Gamma , \\ {u}_i & = g_i \quad \text{on } \partial\Omega_i\setminus\Gamma , \\ {u}_e & = g_e \quad \text{on } \partial\Omega_e\setminus\Gamma , \end{split} \end{equation} with a given right-hand side $f\in L^2 (\Gamma )$ and a given function $g\in H^1 (\Omega_i\cup\Omega_e)$. Here again \[ [{u}] = {u}_i|_\Gamma - {u}_e|_\Gamma \] is the difference of the traces of ${u}_i := {u}|_{\Omega_i}$ and ${u}_e := {u}|_{\Omega_e}$ on the common part of the boundary $\Gamma$, and $[\sigma \partial_n u] = \sigma_i \partial_{n_i} u_i - \sigma_e \partial_{n_i} u_e$ is the jump of the outer normal derivatives. Let \[ H^1_{0,\Gamma} (\Omega_i\cup\Omega_e ) := \{ u\in H^1 (\Omega_i\cup\Omega_e ) : u_i|_{\partial\Omega_i\setminus\Gamma} = 0 \text{ and } u_e|_{\partial\Omega_e\setminus\Gamma} = 0 \} . 
\] We say that a function ${u}\in H^1 (\Omega_i \cup\Omega_e )$ is a {\em weak solution} of the stationary problem \eqref{eq.stationary} if $u-g\in H^1_{0,\Gamma} (\Omega_i\cup\Omega_e )$ and, for every ${v}\in H^1_{0,\Gamma} (\Omega_i \cup\Omega_e )$, \[ \int_\Omega \sigma \, \nabla {u} \nabla {v} + \int_\Gamma s([{u}]) \, [{v}] = \int_\Gamma f \, [{v}] , \] where $\sigma$ is piecewise constant, namely $\sigma := \sigma_i$ on $\Omega_i$ and $\sigma := \sigma_e$ on $\Omega_e$. Observe that if ${u}$ is a weak solution of the stationary problem, then it satisfies the boundary conditions on $\partial\Omega_i\setminus\Gamma$ and $\partial\Omega_e\setminus\Gamma$ in a weak sense, and \[ -\Delta {u} = 0 \text{ in } {\mathcal D} (\Omega_i \cup \Omega_e )' , \] as one can see by considering test functions ${v}\in{\mathcal D} (\Omega_i \cup \Omega_e )$ in the definition of a weak solution. Then the Gau{\ss}-Green formula implies, at least if ${u}$ is regular enough, that for every ${v}\in H^1_{0,\Gamma} (\Omega_i \cup \Omega_e )$, \begin{align*} \int_\Gamma f [{v}] & = \int_\Gamma \sigma_i\, \partial_{n_i} {u}_i {v}_i + \int_\Gamma \sigma_e\, \partial_{n_e} {u}_e {v}_e + \int_\Gamma s([{u}]) \, [{v}] \\ & = \int_\Gamma [\sigma \partial_n {u}] {v}_i - \int_\Gamma \sigma_e\, \partial_{n_e} {u}_e [{v}] + \int_\Gamma s([{u}]) \, [{v}] , \end{align*} and from here one sees that the two remaining boundary conditions on $\Gamma$ are satisfied, too. 
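Before passing to the evolution problem, the structure of the transmission conditions can be illustrated on a one-dimensional analogue, where harmonic functions are affine and the stationary problem reduces to a $2\times 2$ linear system. The following script is our own illustration (with a linear transmission law $s(\lambda)=S_L\lambda$ and hypothetical parameter values, not taken from the paper); it checks that the resulting solution satisfies flux continuity and the transmission condition on $\Gamma=\{1\}$:

```python
# 1D analogue of the stationary problem: Omega_i = (0,1), Omega_e = (1,2),
# interface Gamma = {1}, linear transmission law s(l) = S_L * l.
# Harmonic functions are affine: u_i(x) = a*x (so that u_i(0) = 0),
# u_e(x) = b*(x - 2) + G (so that u_e(2) = G).  Unknowns: the slopes a, b.

sigma_i, sigma_e, S_L, G = 2.0, 1.0, 5.0, 1.0  # hypothetical values

# Flux continuity on Gamma:  sigma_i * a - sigma_e * b = 0.
# Transmission condition:    S_L * [u] + sigma_e * b = 0, where
#   [u] = u_i(1) - u_e(1) = a + b - G  and  d_{n_e} u_e = -u_e' = -b,
# i.e.                       S_L * a + (S_L + sigma_e) * b = S_L * G.
# Solve the 2x2 system by Cramer's rule.
det = sigma_i * (S_L + sigma_e) + sigma_e * S_L
a = (sigma_e * S_L * G) / det
b = (sigma_i * S_L * G) / det

# Residuals of the two interface conditions should vanish.
flux_residual = sigma_i * a - sigma_e * b
jump = a + b - G
transmission_residual = S_L * jump + sigma_e * b

assert abs(flux_residual) < 1e-12
assert abs(transmission_residual) < 1e-12
```

In higher dimensions the same two interface conditions couple the harmonic pieces, which is exactly what the weak formulation above encodes through the test functions $[v]$ on $\Gamma$.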
Accordingly, we call a function $u\in L^2_{loc} ({\mathbb R}_+; H^1 (\Omega_i\cup\Omega_e ))$ a {\em weak solution} of the evolution problem \eqref{eq:mod1} if $u-g\in H^1_{0,\Gamma} (\Omega_i\cup\Omega_e )$ for almost every $t\in{\mathbb R}_+$, $[u]\in C({\mathbb R}_+ ; L^2 (\Gamma ))\cap H^1_{loc} ((0,\infty ); L^2 (\Gamma ))$, $[u]|_{t=0} = u_0$, and for every $v\in H^1_{0,\Gamma} (\Omega_i\cup\Omega_e)$ one has \[ \int_\Omega \sigma \, \nabla {u} \nabla {v} + \int_\Gamma s([{u}]) \, [{v}] = - \int_\Gamma \partial_t [{u}] \, [{v}] \text{ for almost every } t\in{\mathbb R}_+ . \] As pointed out in the Introduction, we show existence and uniqueness of weak solutions by showing that the evolution problem \eqref{eq:mod1} has a gradient structure. For this, we follow the approach which has recently been developed in Chill, Hauer \& Kennedy \cite{ChHaKe14} and which is in some sense hidden in the definition of weak solution of the stationary problem or the evolution problem. More precisely, we consider the energy space $V:= H^1 (\Omega_i \cup \Omega_e )$, the reference Hilbert space $H:= L^2 (\Gamma )$, the bounded, linear operator \begin{align*} j: H^1 (\Omega_i \cup \Omega_e) & \to L^2 (\Gamma ) , \\ {u} & \mapsto [ {u} ] , \end{align*} and the energy ${\mathcal E} : H^1 (\Omega_i \cup \Omega_e ) \to {\mathbb R}\cup\{+\infty\}$ given by \[ {\mathcal E} ({u}) = \begin{cases} \frac12 \int_\Omega \sigma\, |\nabla {u}|^2 + \int_\Gamma S([{u}] ) & \text{if } u -g \in H^1_{0,\Gamma} (\Omega_i\cup\Omega_e) , \\[2mm] +\infty & \text{else} , \end{cases} \] where $S$ is a primitive of $s$. For the effective domain one has the equality $D({\mathcal E} ) = g+H^1_{0,\Gamma} (\Omega_i\cup\Omega_e)$, and the energy is continuously differentiable on this affine subspace, as one easily verifies. 
Moreover, ${\mathcal E}$ is globally {\em $j$-quasiconvex} and {\em $j$-quasicoercive} in the sense that the ``shifted'' energy \begin{align*} {\mathcal E}_\omega : H^1 (\Omega_i \cup \Omega_e ) & \to {\mathbb R}\cup\{+\infty\} , \\ {u} & \mapsto {\mathcal E} ({u} ) + \frac{\omega}{2} \, \int_\Gamma [{u}]^2 \end{align*} is convex and coercive for every $\omega$ large enough; in fact, $\omega >L$ is sufficient, where $L\geq 0$ is the Lipschitz constant of $s$. Recall that coercivity of ${\mathcal E}_\omega$ means that the sublevels $\{{\mathcal E}_\omega \leq c\}$ are bounded for every $c\in{\mathbb R}$; it follows in this special case by an application of the first Poincar\'e inequality. We then define the {\em $j$-subgradient} of ${\mathcal E}$ by \begin{equation} \label{def.subgradient} \begin{split} \partial_j{\mathcal E} & := \{ (w,f) \in L^2(\Gamma ) \times L^2 (\Gamma ) : \text{there exists } {u}\in D({\mathcal E} ) \text{ s.t.} \\ & \phantom{:= \{ (w,f) \in L^2(\Gamma )} w = [{u}] \text{ and for every } {v}\in H^1_{0,\Gamma} (\Omega_i \cup \Omega_e ) \text{ one has} \\ & \phantom{:= \{ (w,f) \in L^2(\Gamma )} \liminf_{t\searrow 0} \frac{{\mathcal E} ({u}+t{v}) - {\mathcal E} ({u})}{t} \geq \int_\Gamma f \, [{v}] \} \\ & = \{ (w,f) \in L^2(\Gamma ) \times L^2 (\Gamma ) : \text{there exists } {u}\in H^1 (\Omega_i \cup \Omega_e ) \text{ s.t.} \\ & \phantom{:= \{ (w,f) \in L^2(\Gamma )} u-g\in H^1_{0,\Gamma} (\Omega_i\cup\Omega_e), \, w = [{u}] \text{, and} \\ & \phantom{:= \{ (w,f) \in L^2(\Gamma )} \text{for every } {v}\in H^1_{0,\Gamma} (\Omega_i \cup \Omega_e ) \text{ one has} \\ & \phantom{:= \{ (w,f) \in L^2(\Gamma )} \int_\Omega \sigma \nabla {u} \nabla {v} + \int_\Gamma s([{u}]) \, [{v}] = \int_\Gamma f \, [{v}] \} \end{split} \end{equation} The equality between the first and the second line follows from the identification of the effective domain, from the fact that ${\mathcal E}$ is continuously differentiable in the affine subspace $D({\mathcal E} )$, and the special 
form of its derivative (in fact, G\^{a}teaux differentiable would be sufficient). The following important and at the same time almost trivial lemma is an immediate consequence of the definition of weak solution of the stationary problem \eqref{eq.stationary} and of the definition of the $j$-subgradient. \begin{lemma} One has $(w,f)\in\partial_j{\mathcal E}$ and $w=[{u}]$ as in the definition of $\partial_j{\mathcal E}$, if and only if ${u}$ is a weak solution of the stationary problem \eqref{eq.stationary}. \end{lemma} Note that the definition of the $j$-subgradient differs from the usual variational setting in the sense that the energy is not defined on the space $L^2 (\Gamma )$ itself, so that the $j$-subgradient is not a classical subgradient as defined, for example in \cite{Br73}. Moreover, we are also not in the usual variational setting of a Gelfand triple in which one has, in particular, a dense embedding of the energy space $V$ into the Hilbert space $H = L^2 (\Gamma )$. Our operator $j$ has dense range in $L^2 (\Gamma )$, but it is clearly not injective since the space of test functions on $\Omega_i\cup\Omega_e$ belongs to the kernel of $j$. Note also that the $j$-subgradient may be a multi-valued operator even if the energy on the energy space $V$ is smooth. By \cite[Corollary 2.6]{ChHaKe14}, and since the energy is $j$-quasi\-convex and $j$-quasi\-coercive, the $j$-subgradient $\partial_j{\mathcal E}$ is a maximal quasimonotone operator on $L^2 (\Gamma )$, that is, the ``shifted'' operator $\omega I + \partial_j{\mathcal E}$ is maximal monotone on $L^2 (\Gamma )$. Moreover, by \cite[Corollary 2.6]{ChHaKe14} again, the $j$-subgradient is already a subgradient, that is, there exists a quasiconvex, lower semicontinuous functional ${\mathcal E}^H : L^2 (\Gamma ) \to {\mathbb R} \cup \{+\infty \}$ on the reference Hilbert space such that \[ \partial_j{\mathcal E} = \partial{\mathcal E}^H , \] where $\partial{\mathcal E}^H$ is a classical subgradient. 
Theoretically, \cite[Theorem 2.8]{ChHaKe14} provides a description of this energy defined on $L^2 (\Gamma )$, but this description seems not to be useful for the discretisation considered below. For the purpose of this section, it is only important to know that such a functional ${\mathcal E}^H$ exists. Moreover, by \cite[Theorem 2.8]{ChHaKe14}, the effective domain of the functional ${\mathcal E}^H$ can be characterised as follows: \begin{align*} D({\mathcal E}^H ) & := \{ {\mathcal E}^H <+\infty \} \\ & = j (H^1 (\Omega_i\cup \Omega_e )) \\ & = H^\frac12 (\Gamma ) . \end{align*} Here, the second equality is actually \cite[Theorem 2.8]{ChHaKe14}, while the third equality follows from the theory of traces of Sobolev functions \cite{Ad75}. In particular, the effective domain is dense in $L^2 (\Gamma )$, and hence the same is true for the domain of the $j$-subgradient. From these observations we conclude that our system \eqref{eq:mod1} can be rewritten as an abstract, nonautonomous gradient system of the form \begin{equation} \label{eq.gradient.system} \dot w + \partial_j{\mathcal E} (w) \ni f , \quad w(0) = u_0 , \end{equation} where $w := [u]$ is the unknown function from which one has to compute the original solution $u$ by solving, at each time $t$, an elliptic problem. The identification of the effective domain and the classical theory of maximal monotone operators and subgradients of convex, lower semicontinuous energies (see, for example, Brezis \cite[Th\'eor\`emes 3.2, 3.6]{Br73}) yield well-posedness of this problem in the following sense. 
\begin{theorem}[Existence and uniqueness for the abstract gradient system] \label{thm.existence.w} For every right-hand side $f\in L^2_{loc} ({\mathbb R}_+; L^2 (\Gamma ))$ and every initial value $u_0\in L^2 (\Gamma )$ the gradient system \eqref{eq.gradient.system} admits a unique solution $w\in C({\mathbb R}_+ ; L^2 (\Gamma )) \cap H^1_{loc} ((0,\infty ); L^2 (\Gamma ))$ and $w(t)\in D(\partial_j{\mathcal E} )$ for almost every $t\in{\mathbb R}_+$. If, in addition, $u_0\in H^\frac12 (\Gamma )$ (and $f\in L^2_{loc} ({\mathbb R}_+;L^2 (\Gamma ))$), then $w\in H^1_{loc} ({\mathbb R}_+;L^2 (\Gamma ))$. Finally, if $u_0\in L^2 (\Gamma )$ and $f=0$, then $w\in C({\mathbb R}_+ ;L^2 (\Gamma )) \cap W^{1,\infty}_{loc} ((0,\infty ) ;L^2 (\Gamma ))$. \end{theorem} \begin{remark} Strictly speaking, \cite[Th\'eor\`emes 3.2, 3.6]{Br73} only apply to {\em convex}, lower semicontinuous energies, but the proof easily carries over to the case of quasiconvex energies. This is actually true for each of the following methods which may be employed in order to prove the above well-posedness result: the proof by time discretisation (implicit Euler scheme), the proof by space discretisation (the Faedo-Galerkin method), and the proof by Yosida approximations of the subgradient / Moreau-Yosida approximations of the energy, which reduces the gradient system to an ordinary differential equation. \end{remark} A lifting then yields that the problem \eqref{eq:mod1} admits for every $u_0\in L^2 (\Gamma )$ a unique weak solution, and this weak solution has the regularity described above. \begin{theorem}[Existence and uniqueness of weak solutions of \eqref{eq:mod1}] For every initial value $u_0\in L^2 (\Gamma )$ the problem \eqref{eq:mod1} admits a unique weak solution $u\in L^2_{loc} ({\mathbb R}_+ ; H^1 (\Omega_i \cup \Omega_e ))$. 
\end{theorem} \begin{proof} By Theorem \ref{thm.existence.w}, we already have the existence of a solution $w\in W^{1,\infty}_{loc} ((0,\infty );L^2 (\Gamma ))$ of the abstract gradient system \eqref{eq.gradient.system}. Choose $\omega\in{\mathbb R}$ large enough such that ${\mathcal E}_\omega$ is convex and coercive. Then the differential inclusion in \eqref{eq.gradient.system} (with $f=0$) can be rewritten as \[ \omega w + \partial_j{\mathcal E} (w) \ni \omega w - \dot w . \] One easily verifies that $\omega w + \partial_j{\mathcal E} (w) = \partial_j {\mathcal E}_\omega (w)$, where, as before, ${\mathcal E}_\omega$ is the shifted energy functional. By definition of the subgradient (see \eqref{def.subgradient}), and by the convexity of ${\mathcal E}_\omega$, we have \begin{align*} \partial_j{\mathcal E}_\omega & = \{ (w,f) \in L^2(\Gamma ) \times L^2 (\Gamma ) : \text{there exists } {u}\in D({\mathcal E}_\omega ) = D({\mathcal E} ) \text{ s.t.} \\ & \phantom{:= \{ (w,f) \in L^2(\Gamma )} w = [{u}] \text{ and for every } {v}\in H^1 (\Omega_i \cup \Omega_e ) \text{ one has} \\ & \phantom{:= \{ (w,f) \in L^2(\Gamma )} {\mathcal E}_\omega ({u}+{v}) - {\mathcal E}_\omega ({u}) \geq \int_\Gamma f \, [{v}] \} \\ & = \{ (w,f) \in L^2(\Gamma ) \times L^2 (\Gamma ) : \text{there exists } {u}\in D({\mathcal E}_\omega ) = D({\mathcal E} ) \text{ s.t.} \\ & \phantom{:= \{ (w,f) \in L^2(\Gamma )} w = [{u}] \text{ and for every } {v}\in H^1 (\Omega_i \cup \Omega_e ) \text{ one has} \\ & \phantom{:= \{ (w,f) \in L^2(\Gamma )} {\mathcal E}_\omega ({u}+{v}) - \int_\Gamma f \, [u+v] \geq {\mathcal E}_\omega ({u}) - \int_\Gamma f \, [{u}] \} . \end{align*} As a consequence of this identification, if $(w,f)\in \partial_j{\mathcal E}_\omega$, then there exists $u\in H^1 (\Omega_i\cup\Omega_e)$ such that $u-g\in H^1_{0,\Gamma} (\Omega_i\cup\Omega_e)$ and \begin{equation} \label{eq.argmin} u = \argmin \, ({\mathcal E}_\omega (v) - \int_\Gamma f [v] ) . 
\end{equation} By choosing $\omega$ even larger, if necessary, we see from the special form of the energy ${\mathcal E}$ that the function \begin{align*} H^1 (\Omega_i\cup\Omega_e ) & \to {\mathbb R}\cup\{+\infty\} , \\ v & \mapsto {\mathcal E} (v) + \frac{\omega}{2} \int_\Gamma [v]^2 - \int_\Gamma f [v] \end{align*} is {\em strictly} convex. Hence, the minimiser in \eqref{eq.argmin} is uniquely determined. Standard arguments for classical subgradients and inverses of strictly monotone operators yield that there exists a constant $C\geq 0$ such that for any pair $u_1$, $u_2\in H^1 (\Omega_i\cup\Omega_e )$ of solutions of the minimisation problem \eqref{eq.argmin} for given functions $f_1$, $f_2\in L^2 (\Gamma )$ one has \[ \| u_1 -u_2\|_{H^1 (\Omega_i\cup\Omega_e)} \leq C\, \| f_1-f_2\|_{L^2 (\Gamma )} . \] Applying these observations to the differential inclusion above, we find that there exists a unique $u\in L^2 (0,T; H^1 (\Omega_i\cup\Omega_e))$ such that $u-g\in H^1_{0,\Gamma} (\Omega_i\cup\Omega_e)$ and $[u] = w$ almost everywhere. By construction, $u$ is the unique weak solution of \eqref{eq:mod1}. \end{proof} \begin{remark} \label{rem.extensions} We repeat that our approach to proving well-posedness of the system \eqref{eq:mod1} is formally restricted to the case when the energy does not depend on time, but the framework we are working in allows us to consider several possible generalisations. (a) The theory works in the same way if we choose $s$ to be a function of the form $s=s_0 +s_1$, where $s_0$ is monotone (nondecreasing) and $s_1$ is globally Lipschitz continuous. The energy ${\mathcal E}$ is defined in the same way, with a primitive $S$ of $s$, but its effective domain is in general no longer an affine subspace, at least if $s_0$ has superlinear growth. In this case ${\mathcal E}$ is no longer G\^ateaux differentiable on $g+H^1_{0,\Gamma} (\Omega_i\cup\Omega_e )$, but merely lower semicontinuous. 
The $j$-subgradient $\partial_j{\mathcal E}$ is then only defined by the first line in \eqref{def.subgradient}. However, the energy will still be quasiconvex and quasicoercive, so that the abstract problem \eqref{eq.gradient.system} is still well-posed in the sense described above. (b) Similarly, as in the case of the Dirichlet-to-Neumann operator considered in \cite{ArEl12, AEKS14} (linear case) and \cite{ChHaKe14} (nonlinear case), the regularity assumptions on $\Omega_i$ and $\Omega_e$ may be considerably relaxed. It suffices to assume that $\partial\Omega_i$, $\partial\Omega_e$ and $\Gamma$ have locally finite $(d-1)$-dimensional Hausdorff measure. Traces are then to be understood in a weaker sense; see \cite{Da00, ChHaKe14} for the definition which goes back to Mazya \cite{Mz85}. (c) The method shows that the Laplace operator may be replaced by the $p$-Laplace operator or any other nonlinear elliptic operator with variational structure. This might be of importance if in the applications described in the Introduction it becomes necessary to consider a larger class of models with nonlinear diffusion operators. In the present work we shall show some numerical experiments, and we have therefore restricted ourselves to the case of semilinear problems with the Laplace operator as leading operator. \end{remark} \section{Discretisation} \label{sec.discretisation} In this section we propose to find approximate solutions of the problem \eqref{eq:mod1} by using a semi-discrete implicit time scheme, that is, given a time step $h>0$, we are seeking a sequence $(z^n)_{n=0}^{[T/h]}$, thought to be an approximation of $([u](nh))_{n=0}^{[T/h]}$, where $u$ is a solution of \eqref{eq:mod1}. More precisely, $(z^n)$ is a solution of the discrete system \begin{align*} & \frac{z^{n+1}-z^n}{h}+\partial_j{\mathcal E}(z^{n+1})\ni 0 , \\ & z^0 = u_{0,h} . 
\end{align*} Recalling that $\partial_j{\mathcal E}$ is actually a subgradient of some energy ${\mathcal E}^H$ defined on $L^2 (\Gamma )$, it is well known that this system is equivalent to solving in each step a minimisation problem, and so we obtain the so-called {\em proximal algorithm} \cite{BaCo11,BoVa04,Le89}: \begin{align*} z^0 & = u_{0,h} , \\ z^{n+1} & =\argmin\, ({\mathcal E}^H(w)+\frac{1}{2h}\Vert w-z^n\Vert_{L^2(\Gamma )}^2) \\ & = \argmin\, \inf_{[u]=w}({\mathcal E}(u)+\frac{1}{2h}\Vert [u]-z^n\Vert_{L^2(\Gamma )}^2), \end{align*} where in the last equality we have used an identification of ${\mathcal E}^H$ from \cite[Corollary 2.9]{ChHaKe14}. Thus, instead of solving a minimisation problem for the energy ${\mathcal E}^H$, which is difficult to identify or to handle in practical situations, we solve the modified proximal algorithm \begin{equation}\label{eq:var0} \begin{split} z^0 & = u_{0,h} , \\ \hat z^{n+1} & = \argmin\, ({\mathcal E}(u)+\frac{1}{2h}\Vert [u]-z^n\Vert_{L^2 (\Gamma )}^2), \\ z^{n+1} & = [ \hat{z}^{n+1} ], \end{split} \end{equation} where now the minimisation is performed for the energy ${\mathcal E}$ in the reference energy space $H^1 (\Omega_i \cup\Omega_e )$ (which contains the effective domain of ${\mathcal E}$). This energy is explicitly given, but we have to pay a price by passing from a minimisation problem in the space $L^2 (\Gamma )$ to a minimisation problem in the reference energy space $H^1 (\Omega_i \cup\Omega_e )$, that is, from a function space over $\Gamma$ to a function space over $\Omega_i\cup\Omega_e$, which adds one space dimension in the domain. However, at the same time, the structure of the problem \eqref{eq:mod1}, which couples a parabolic equation on $\Gamma$ with an elliptic equation in $\Omega_i\cup\Omega_e$, suggests that it is necessary to pass through $\Omega_i\cup\Omega_e$ anyhow. 
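The proximal iteration can be tested on a scalar caricature of \eqref{eq.gradient.system}: we take $H={\mathbb R}$, a monotone, Lipschitz nonlinearity, and compute each step by solving the optimality condition $(w-z^n)/h + s(w) = 0$ by bisection. This is our own toy illustration of the scheme (with hypothetical data), not the paper's FreeFem++ code:

```python
import math

# Scalar gradient flow  w' + s(w) = 0  with a monotone, Lipschitz
# nonlinearity s; the proximal (implicit Euler) step solves
#   (w - z_n)/h + s(w) = 0,
# which is the Euler-Lagrange equation of  w -> S(w) + |w - z_n|^2/(2h),
# S being a primitive of s.
def s(w):
    return w + 0.5 * math.tanh(w)  # monotone, Lipschitz constant 1.5

def prox_step(z, h, lo=-100.0, hi=100.0, tol=1e-12):
    # Bisection on the strictly increasing optimality condition F.
    F = lambda w: (w - z) / h + s(w)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if F(mid) > 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

h, z = 0.05, 2.0  # time step and initial value z^0
trajectory = [z]
for n in range(100):
    z = prox_step(z, h)
    trajectory.append(z)

# The flow decays monotonically towards the unique equilibrium w = 0.
assert all(t1 < t0 for t0, t1 in zip(trajectory, trajectory[1:]))
assert 0.0 < trajectory[-1] < 1e-2
```

In the PDE setting the scalar root-finding step is replaced by the minimisation of ${\mathcal E}(u)+\frac{1}{2h}\Vert [u]-z^n\Vert^2_{L^2(\Gamma)}$ over $H^1(\Omega_i\cup\Omega_e)$, but the structure of the iteration is the same.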
\begin{remark} In the case of the example considered below, it is possible to express the problem \eqref{eq:mod1} on the manifold $\Gamma$ by \begin{equation} \label{eq:absform} \dot{U}+\Lambda_\sigma U + {\mathbf S}(U)=0,\quad U(0)=U_0, \end{equation} with $U=(u_i|_{\Gamma},u_e|_{\Gamma})$, \[ \Lambda_\sigma=\begin{pmatrix} \Lambda_{\sigma_i} & 0 \\ 0 & \Lambda_{\sigma_e} \end{pmatrix}\quad \text{ and } \quad \mathbf{S}(U)=\begin{pmatrix} s(u_i-u_e) \\ -s(u_i-u_e) \end{pmatrix} , \] where $\Lambda_{\sigma_i}$ and $\Lambda_{\sigma_e}$ denote appropriate Dirichlet-to-Neumann operators on $\Gamma$. When the geometry is simple (that is, for example, when $\Omega_i$ and $\Omega_i\cup\Omega_e\cup\Gamma$ are concentric balls) and when the diffusion coefficients $\sigma_i$ and $\sigma_e$ are constant, these operators are easy to compute (the first one admits in fact an explicit representation \cite[Section 36.2]{La02}), and one might solve the gradient system directly on $\Gamma$. However, such geometries do not seem realistic for cells and biological tissues. That is why we prefer to have a more general approach for solving problem \eqref{eq:mod1}. \end{remark} Existence and uniqueness for the problem \eqref{eq:var0} are well known, at least if the time step $h$ is small enough ($h<\frac{1}{L}$ is sufficient, where $L$ is the Lipschitz constant of $s$), and the sequence $(z^n)_n$ is then well defined. Note that the variational Euler-Lagrange equation corresponding to \eqref{eq:var0} is \begin{equation}\label{eq:var1} \int_{\Omega_i\cup\Omega_e} \sigma \nabla\hat z^{n+1} \nabla v +\int_\Gamma s([\hat z^{n+1}])\,[{v}]+\int_\Gamma\frac{1}{h}([\hat z^{n+1}]-z^n)\,[{v}]=0. \end{equation} Thus the algorithm reads as follows: \begin{itemize} \item[-] Choose $z^0$ ($=u_{0,h}$), an approximation of the exact initial value $u_0$. 
\item[-] Given $z^n\in H^\frac12 (\Gamma )$, compute ${\hat z}^{n+1}$, solution of \eqref{eq:var0} or, equivalently, \eqref{eq:var1}. \item[-] Set $z^{n+1} := [\hat z^{n+1}]$. \end{itemize} Note that $z^0$ is any element in the closure of $j(V) = H^\frac12 (\Gamma )$, that is, $z^0 \in L^2 (\Gamma )$, and after one iteration $(z^n)_n$ remains in $H^\frac12 (\Gamma )$, the effective domain of ${\mathcal E}^H$. We emphasize that the gradient structure of the system \eqref{eq:var0} allows one to use any optimization method to solve the minimisation step. However, since the reference energy space $H^1(\Omega_i\cup\Omega_e)$ contains functions with a jump on the manifold $\Gamma$, a natural approach might be based on an alternating algorithm of minimisation in the sub-domains. More precisely, the method consists of a non-overlapping Schwarz algorithm to solve the problem \eqref{eq:var1}. For each time step $n$, and given $z^n$, we write $z^{n+1} = u_i^{n+1}-u_e^{n+1}$ on $\Gamma$ and we drop the index $n+1$ for simplicity. Then, we compute a sequence $(u^k)_k$ in the following way: given $u^k$, we solve \begin{equation}\label{eq:sch1} \begin{cases} -\Delta {u}_i^{k+1}=0 & \text{in } \Omega_i , \\[2mm] \frac{{u}_i^{k+1}-{u}_e^{k}-z^n}{h}+s({u}_i^{k+1}-{u}_e^{k})+\sigma_i \, \partial_{n_i} {u}_i^{k+1}=0 & \text{on } \Gamma , \\[2mm] u_i^{k+1} = g_i & \text{on } \partial\Omega_i \setminus \Gamma \end{cases} \end{equation} \begin{equation}\label{eq:sch2} \begin{cases} -\Delta {u}_e^{k+1}=0 & \text{in } \Omega_e , \\[2mm] \frac{{u}_i^{k^*}-{u}_e^{k+1}-z^n}{h}+s({u}_i^{k^*}-{u}_e^{k+1})+ \sigma_e\, \partial_{n_e} {u}_e^{k+1}=0 & \text{on } \Gamma , \\[2mm] u_e^{k+1}=g_e & \text{on } \partial\Omega_e\setminus\Gamma , \end{cases} \end{equation} with $k^*=k$ or $k^*=k+1$. The existence of solutions for the sub-problems \eqref{eq:sch1}-\eqref{eq:sch2} follows from the assumptions on $s$ and the condition $h<\frac{1}{L}$. 
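In one space dimension and for a linear transmission law, one implicit time step reduces to a $2\times 2$ linear system for the interface slopes, and the alternating algorithm \eqref{eq:sch1}-\eqref{eq:sch2} becomes a scalar fixed-point iteration. The following sketch is our own toy example (hypothetical parameter values, taking $k^*=k+1$; for these values the iteration is a contraction, which need not hold in general); it compares the Schwarz iterates with the direct solution of one time step:

```python
# One implicit time step of a 1D analogue: Omega_i = (0,1), Omega_e = (1,2),
# u_i(x) = a*x, u_e(x) = b*(x-2) + G, linear law s(l) = S_L*l, datum z = z^n.
# On Gamma = {1}:  d_{n_i} u_i = a  and  d_{n_e} u_e = -b.
sigma_i, sigma_e, S_L, G, z, h = 2.0, 1.0, 1.0, 1.0, 0.0, 0.1
m = 1.0 / h + S_L  # the common factor (1/h + S_L)

# Direct solve of the coupled step (Cramer's rule):
#   (m + sigma_i)*a + m*b             = m*G + z/h   (interior condition)
#   m*a             + (m - sigma_e)*b = m*G + z/h   (exterior condition)
rhs = m * G + z / h
det = (m + sigma_i) * (m - sigma_e) - m * m
a_direct = (rhs * (m - sigma_e) - m * rhs) / det
b_direct = ((m + sigma_i) * rhs - m * rhs) / det

# Alternating (non-overlapping) Schwarz iteration with k* = k + 1.
a, b = 0.0, 0.0
for k in range(300):
    a = (m * (G - b) + z / h) / (m + sigma_i)  # solve interior sub-problem
    b = (m * (G - a) + z / h) / (m - sigma_e)  # solve exterior sub-problem

assert abs(a - a_direct) < 1e-6
assert abs(b - b_direct) < 1e-6
```

Here the per-sweep contraction factor is $m^2/((m+\sigma_i)(m-\sigma_e))$, which is smaller than $1$ for the chosen values; this is only an illustration of the iteration, not a convergence proof.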
The convergence of the Schwarz algorithm with nonlinear transmission conditions is not obvious and is beyond the scope of this paper. We emphasize that several choices of the coupling terms on $\Gamma$ are possible, for example nonlinear coupling terms which are implicit for both the interior and the exterior domain, nonlinear coupling terms which are implicit in one of the domains, and nonlinear coupling terms which are explicit for both the interior and the exterior domain. \subsection*{A remark on a linear version of the algorithm} A variant of the Schwarz algorithm consists in linearizing the transmission conditions. For this, we set $s(\left[ u\right])=a(\left[ u\right])\left[ u\right]$ (in particular, we assume $s(0) = 0$, which is a reasonable assumption). Then we can rewrite the internal sub-problem \eqref{eq:sch1} as \begin{equation}\label{eq:sch1l} \begin{cases} -\Delta {u}_i^{k+1}=0 & \text{in } \Omega_i , \\[2mm] \frac{{u}_i^{k+1}-{u}_e^{k}-z^n}{h}+a({u}_i^{k}-{u}_e^{k})({u}_i^{k+1}-{u}_e^{k})+\sigma_i\,\partial_{n_i} {u}_i^{k+1}=0 & \text{on } \Gamma , \\[2mm] u_i^{k+1} = g_i & \text{on } \partial\Omega_i\setminus\Gamma . 
\end{cases} \end{equation} If we set $a_k:=a(\left[ {u}^k\right]) = a({u}_i^{k}-{u}_e^{k})$ and \[ B_{k}(u)= (\frac{1}{h}+a_k)\,u , \] then, since \eqref{eq:sch1l} is equivalent to $(\frac{1}{h}+a_k)\,{u}_i^{k+1}+\sigma_i\,\partial_{n_i}{u}_i^{k+1}=(\frac{1}{h}+a_k)\,{u}_e^{k}+\frac{z^n}{h}$ on $\Gamma$, we may rewrite the nonlinear Schwarz algorithm as a linear implicit method of the form \begin{equation}\label{eq:sch1ll} \begin{cases} -\Delta {u}_i^{k+1}=0 & \text{in } \Omega_i ,\\[2mm] B_{k}({u}_i^{k+1}) + \sigma_i\, \partial_{n_i} u_i^{k+1} = B_{k} ({u}_e^k ) + \frac{z^n}{h} & \text{on } \Gamma , \\ u_i^{k+1} = g_i & \text{on } \partial\Omega_i\setminus\Gamma , \end{cases} \end{equation} \begin{equation}\label{eq:sch2ll} \begin{cases} -\Delta {u}_e^{k+1}=0 & \text{in } \Omega_e ,\\[2mm] B_{k}({u}_e^{k+1}) + \sigma_e\, \partial_{n_e} u_e^{k+1} = B_{k} ({u}_i^k ) + \frac{z^n}{h} & \text{on } \Gamma , \\[2mm] {u}_e^{k+1}=g_e & \text{on } \partial\Omega_e \setminus\Gamma .\\ \end{cases} \end{equation} For $\epsilon>0$, we recast this Schwarz method in the form of linear Robin transmission conditions \begin{equation}\label{eq:sch1lleps} \begin{cases} -\Delta {u}_i^{\epsilon,k+1}=0 & \text{in } \Omega_i,\\[2mm] B_{k}({u}_i^{\epsilon,k+1}) + \sigma_i\, \partial_{n_i} u_i^{\epsilon,k+1} = B_{k} ({u}_e^{\epsilon,k}) + \epsilon \partial_{n_e} u_e^{\epsilon,k}& \text{on } \Gamma, \\ u_i^{\epsilon,k+1} = g_i & \text{on } \partial\Omega_i\setminus\Gamma , \end{cases} \end{equation} \begin{equation}\label{eq:sch2lleps} \begin{cases} -\Delta {u}_e^{\epsilon,k+1}=0 & \text{in } \Omega_e ,\\[2mm] B_{k}({u}_e^{\epsilon,k+1}) + \sigma_e\, \partial_{n_e} u_e^{\epsilon,k+1} = B_{k} ({u}_i^{\epsilon,k} ) + \epsilon \partial_{n_i} u_i^{\epsilon,k} & \text{on } \Gamma , \\[2mm] {u}_e^{\epsilon,k+1}=g_e & \text{on } \partial\Omega_e\setminus\Gamma . \end{cases} \end{equation} Note that this is a slight generalization of the Schwarz method considered in \cite[Theorem 1 and Section V]{Li88I} and the convergence of this algorithm may be obtained following the same lines. 
In particular, for general geometries and domain decompositions, or for non-convex energies ${\mathcal E}$, the linearization of the algorithm might be suitable. \begin{remark} The Schwarz method is not the unique possible choice to solve problem \eqref{eq:var1}, but it is a quite natural approach. In fact, for many classical problems (for example, domain decomposition), the Schwarz method is an elegant approach, although it may have some shortcomings such as high computational cost or slow convergence. When it is used with state-of-the-art scientific computing methods (parallel programming, preconditioning), it becomes a very attractive tool \cite{GlPaPe95}. For the problem considered here, it is feasible even for more than one cell, for example, a network of cells. \end{remark} \section{Numerics} \label{sec.numerics} In this section we consider three examples to test our approach. We emphasize that our numerical simulations are presented as a proof of concept rather than the results of an optimized computing code for solving general problems of $j$-gradient type. In particular, we do not choose physically realistic parameters for the model of electropermeabilisation and do not try to make any comparison with existing models. The computations are done on a MacBook Pro laptop (Intel Core i5, 2.5\,GHz) with the open source software FreeFem++ \cite{He12}. We use the nonlinear algorithm, and the Schwarz iterations are performed with the nonlinear optimization library Ipopt \cite{WaBi06}. The first example treats a simple geometry of the cell and linear transmission conditions on $\Gamma$ where actually an analytic solution is available (see \cite{KLPW14}); we may thus compare the analytical and the numerical solution. The second and the third examples treat more complex transmission conditions at the membrane $\Gamma$, namely a nonlinear, monotone transmission law proposed by Kavian et al. \cite{KLPW14}, and a condition of double-well type. 
The two nonlinearities are of a rather different nature and might serve as representatives of various other transmission conditions. We recall that in our approach several generalizations are possible, and we end this section with some nontrivial geometries. \subsection{Example 1} In our first example we let $0<R_1<R_2$ and put $\Omega_i := B(0,R_1)$, $\Omega_e := B(0,R_2)\setminus \overline{B(0,R_1)}$, and $\Gamma := \partial B(0,R_1)$, that is, $\Omega_i$ is the disk of radius $R_1$, $\Omega_e$ is a concentric annulus with radii $R_1$ and $R_2$, and $\Gamma$ is the circle of radius $R_1$. We assume given two constant conductivities, $\sigma_i$ in $\Omega_i$ and $\sigma_e$ in $\Omega_e$, respectively, and a Dirichlet boundary condition $g=E\,R_2\cos(\theta)$ on $\partial B(0,R_2)$, where $E$ is a given constant electrical field intensity. The function $s$ is assumed to be linear, that is, $s(\lambda)=S_L\cdot \lambda$, where $S_L$ is a constant. An explicit solution for these data is given in \cite{KLPW14} in polar coordinates, namely \begin{align*} & u(r,\theta)=(\alpha_e\,r+\beta_e r^{-1})\cos(\theta) && \text{ for } (r,\theta)\in\left[ R_1,R_2\right]\times\left[ 0,2\pi\right] , \text{ and} \\ & u(r,\theta)=\alpha_i\,r\cos(\theta) && \text{ for } (r,\theta)\in\left[ 0,R_1\right]\times\left[ 0,2\pi\right] , \end{align*} where, if we set $A=\frac{1}{2}(\frac{\sigma_i}{S_LR_1}+1+\frac{\sigma_i}{\sigma_e})$ and $B=\frac{1}{2}(\frac{\sigma_i}{S_LR_1}+1-\frac{\sigma_i}{\sigma_e})$, \[ \alpha_e=A\alpha_i,\quad \beta_e=B\alpha_iR_1^2,\quad \alpha_i=\frac{E}{(A+B(\frac{R_1}{R_2})^2)}. \] For the simulation we take $\sigma_i=\sigma_e=1$ and $R_1=1$, $R_2=2$. In Figure \ref{fig1}, we plot the convergence curve of the $L^2$-error of the solution at the final time $T=1$, as a function of the space discretization parameter $h_x$ in log-log scale, for a fixed time step $h=0.1$. 
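The explicit coefficients can be checked directly against the Dirichlet datum at $r=R_2$, the flux continuity and the linear transmission law at $r=R_1$. The following short script is our own sanity check with illustrative (not physical) values; it takes the factor $\frac12$ in both $A$ and $B$, as the derivation of the interface conditions requires:

```python
# Verify the explicit solution of example 1:
#   u_i = alpha_i * r * cos(theta),  u_e = (alpha_e * r + beta_e / r) * cos(theta).
sigma_i, sigma_e, R1, R2, S_L, E = 1.0, 1.0, 1.0, 2.0, 10.0, 1.0

A = 0.5 * (sigma_i / (S_L * R1) + 1.0 + sigma_i / sigma_e)
B = 0.5 * (sigma_i / (S_L * R1) + 1.0 - sigma_i / sigma_e)
alpha_i = E / (A + B * (R1 / R2) ** 2)
alpha_e = A * alpha_i
beta_e = B * alpha_i * R1 ** 2

# Dirichlet condition on r = R2:  u_e(R2) = E * R2 * cos(theta).
assert abs(alpha_e * R2 + beta_e / R2 - E * R2) < 1e-12

# Flux continuity on Gamma:  sigma_i * u_i' = sigma_e * u_e'  at r = R1.
assert abs(sigma_i * alpha_i - sigma_e * (alpha_e - beta_e / R1 ** 2)) < 1e-12

# Transmission law on Gamma (n_e points towards the origin, so
# d_{n_e} u_e = -d_r u_e):  S_L * [u] + sigma_e * d_r u_e = 0  at r = R1.
jump = alpha_i * R1 - alpha_e * R1 - beta_e / R1
assert abs(S_L * jump + sigma_e * (alpha_e - beta_e / R1 ** 2)) < 1e-12
```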
\begin{figure}[h!] \begin{center} {\includegraphics[width = 0.7\textwidth]{errL2.eps}} \end{center} \caption{Convergence curve for the $L^2$ error in log-log scale; $S_L=10^8$, rate of convergence $1.97$.}\label{fig1} \end{figure} We note that the algorithm converges very quickly in this example and the solution is accurately computed, as formally expected from theoretical considerations. This supports the expectation that the Schwarz method converges even with nonlinear transmission conditions. Moreover, as we work in a variational setting, we may consider more general geometries and boundary conditions without supplementary efforts. Note that the convergence rate, in this example, decreases with $S_L$, whatever the mesh size $h_x$, for a fixed time step $h$. This might be justified by the fact that when $S_L$ decreases, the solution becomes more singular. In addition, for smaller $S_L$, the time step should be chosen small, too, to ensure the coercivity of the energy. \begin{remark} For the electropermeabilisation problem, the dynamical transmission condition is \[ C_m\partial_t [{u}]+s_m([{u}])+\sigma_i\partial_{n_i} {u}_i=0 \text{ on } \Gamma , \] where $\sigma_i$ is the internal constant conductivity (a typical value is $0.455\ \mathrm{S/m}$) and $C_m$ is the capacitance (a typical value is $9.5\times 10^{-3}\ \mathrm{F/m^{2}}$ \cite{NeKr99}). We have not taken exactly these values in this example, because we are only interested in the qualitative behaviour of the system. Nevertheless, the large difference between $S_L$ and $S_R$ allows for an optimal rate of convergence of the algorithm. \end{remark} \subsection{Example 2} In the second example, we choose $\Omega_i$, $\Omega_e$ and $\Gamma$ as in example 1, and we consider the nonlinear function $s$ to be the derivative of a double-well potential with equilibrium points $S_L<S_a<S_R$, that is, \[ s(t)=-\epsilon^2\,A_m\,(t-S_R)(t-S_a)(t-S_L) , \] with $\epsilon>0$, $A_m\geq 0$. We assume that $S_a< \frac{S_L+S_R}{2}$. 
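The three equilibria of this cubic nonlinearity and the sign pattern of $s'$ at these points can be checked numerically; the following short script is ours (using the parameter values chosen later for the simulation of this example):

```python
# Double-well nonlinearity of example 2:
#   s(t) = -eps^2 * A_m * (t - S_R) * (t - S_a) * (t - S_L),  S_L < S_a < S_R.
S_L, S_a, S_R, A_m, eps = 1.9, 10.0, 100.0, 1.0, 1e-3

def s(t):
    return -eps ** 2 * A_m * (t - S_R) * (t - S_a) * (t - S_L)

def ds(t, d=1e-6):
    # Central finite-difference approximation of s'; exact up to O(d^2)
    # for a cubic polynomial, which is far below the sizes checked here.
    return (s(t + d) - s(t - d)) / (2 * d)

# s vanishes exactly at the three equilibrium points ...
assert s(S_L) == 0.0 and s(S_a) == 0.0 and s(S_R) == 0.0
# ... with the sign pattern s'(S_L) < 0, s'(S_a) > 0, s'(S_R) < 0
# assumed for this class of nonlinearities:
assert ds(S_L) < 0 and ds(S_a) > 0 and ds(S_R) < 0
```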
This is a particular example for a more general choice of functions satisfying \[ s'(S_R)<0,\quad s'(S_a)>0,\quad s'(S_L)<0. \] In this example we consider the same boundary conditions as in the first example and a zero initial condition. In Figures \ref{fig2-1}-\ref{fig2-6} we plot the solution $u=(u_i,u_e)$ at times $T=0.5$ and $T=1.0$. The time step is $0.05$ and the mesh size $h_x= 0.07$. In this example, we have set $S_L=1.9$, $S_R=10^2$, $S_a=10$, $A_m=1$ and $\epsilon=10^{-3}$. Note that the colour scale is computed separately for each single image, so that the scale used for the solution in the entire domain $\Omega_i\cup\Omega_e$ does not necessarily coincide with the scales of the two images in the sub-domains (compare, for example, Figures \ref{fig2-4}, \ref{fig2-5}, and \ref{fig2-6}). \begin{remark} Since we may expect the solutions to be very smooth except in a neighborhood of $\Gamma$, we may use different meshes on $\Omega_i$ and $\Omega_e$ and refine the meshes close to $\Gamma$ (see example 3). \end{remark} \begin{figure}[h!] \begin{center} \subfigure[Solution in $\Omega_i$\label{fig2-1}] {\includegraphics[width = 0.32\textwidth]{DWPInt-n4-u-5}} \subfigure[Solution in $\Omega_e$\label{fig2-2}] {\includegraphics[width = 0.32\textwidth]{DWPEx-n4-u-5}} \subfigure[Solution in $\Omega$\label{fig2-3}] {\includegraphics[width = 0.32\textwidth]{DWP-n4-u-5}}\\[1mm] \subfigure[Solution in $\Omega_i$\label{fig2-4}] {\includegraphics[width = 0.32\textwidth]{DWPInt-n16-u-10}} \subfigure[Solution in $\Omega_e$\label{fig2-5}] {\includegraphics[width = 0.32\textwidth]{DWPEx-n16-u-10}} \subfigure[Solution in $\Omega$\label{fig2-6}] {\includegraphics[width = 0.32\textwidth]{DWP-n16-u-10}} \end{center} \caption{Computed solution $(u_i,u_e)$ at $T=0.5$ and $T=1$.} \end{figure} \subsection{Example 3} In the third example, we take the function $s$ which has been considered in Kavian et al. \cite{KLPW14}. 
It is the globally monotone function \[ s(t)=S_L+\frac{(S_R-S_L)}{2}(1+\tanh (K_e\,(\vert t\vert-V_r))) , \] where $V_r$, $K_e$, $S_L$, $S_R$ are given constants. To make the problem differentiable, we replace $\vert t\vert$ by $\sqrt{t^2+\epsilon^2}$. We consider the same boundary condition on $\partial\Omega$ and a zero initial condition, as in Example 2. In Figures \ref{fig3}-\ref{fig3-1one}, we plot the solution $u=(u_i,u_e)$ at times $T=0.5$ and $T=1$. The time step is $0.05$ and the mesh size $h_x= 0.07$. We take the constants $K_e=10$, $S_L=1.9$, $S_R=10^2$, $V_r=2.9$, and $E=1$. \begin{figure}[h!] \begin{center} \subfigure[Solution in $\Omega_i$\label{fig3}] {\includegraphics[width = 0.32\textwidth]{VKaPInt-n8-u-6}} \subfigure[Solution in $\Omega_e$\label{fig3-0}] {\includegraphics[width = 0.32\textwidth]{VKaPEx-n8-u-6}} \subfigure[Solution in $\Omega$\label{fig3-1}] {\includegraphics[width = 0.32\textwidth]{VKaP-n8-u-6}}\\[1mm] \subfigure[Solution in $\Omega_i$\label{fig3one}] {\includegraphics[width = 0.32\textwidth]{VVKaPInt-n8-u-10}} \subfigure[Solution in $\Omega_e$\label{fig3-0one}] {\includegraphics[width = 0.32\textwidth]{VVKaPEx-n8-u-10}} \subfigure[Solution in $\Omega$\label{fig3-1one}] {\includegraphics[width = 0.32\textwidth]{VVKaP-n8-u-10}} \end{center} \caption{Computed solution $(u_i,u_e)$ at $T=0.5$ and $T=1$} \end{figure} \begin{remark} The numerical results obtained with the two different nonlinearities $s$ are quite similar in this example, since both functions ensure a transition from the left state, characterized by the potential $S_L$, to the right state $S_R$. The main difference is the smoothness of the transition from the left to the right. Note also the role of the constants $V_r$ and $K_e$ in the profile of this transition in Example 3, which has no counterpart in Example 2, even if $\epsilon$ tends to sharpen the profile.
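As a quick illustrative check (outside the discretization itself), one can verify numerically that this regularized nonlinearity interpolates monotonically, in $\vert t\vert$, between the left state $S_L$ and the right state $S_R$. The Python sketch below uses the constants of Example 3 and an assumed smoothing parameter $\epsilon=10^{-3}$ for $\sqrt{t^2+\epsilon^2}$.

```python
# Illustrative check of the regularized sigmoidal nonlinearity
# s(t) = S_L + (S_R - S_L)/2 * (1 + tanh(Ke*(sqrt(t^2 + eps^2) - Vr))).
# Constants follow Example 3; eps is an assumed smoothing parameter.
import math

Ke, S_L, S_R, Vr, eps = 10.0, 1.9, 1e2, 2.9, 1e-3

def s(t):
    return S_L + 0.5 * (S_R - S_L) * (1.0 + math.tanh(Ke * (math.hypot(t, eps) - Vr)))

# well below the threshold Vr the function sits at the left state S_L
assert abs(s(0.0) - S_L) < 1e-6
# far above the threshold it saturates at the right state S_R
assert abs(s(10.0) - S_R) < 1e-6
# monotone in t >= 0
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
assert all(s(a) <= s(b) for a, b in zip(xs, xs[1:]))
```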
We emphasize that our main concern in this article is the possibility of using several kinds of nonlinearities, geometries, etc.\ in the framework of the $j$-gradient theory, and not to validate any particular choice of the electropermeabilisation model. \end{remark} We end this numerical section by considering data in a range close to the physical parameters, namely $K_e=10$, $S_L=1.9$, $S_R=10^6$, $V_r=1.5$, and $E=4$. The results are plotted in Figures \ref{fig3b}-\ref{fig3b-2} at time $T=0.5$. One may observe that the large and rapid variations of the electrical potential are located close to $\Gamma$. \begin{figure}[h!] \begin{center} \subfigure[Solution $u_i$, $T=0.5$ \label{fig3b}] {\includegraphics[width = 0.32\textwidth]{KaPInt-n8-u-10}} \subfigure[Solution $u_e$, $T=0.5$ \label{fig3b-1}] {\includegraphics[width = 0.32\textwidth]{KaPEx-n8-u-10}} \subfigure[Solution $u$, $T=0.5$ \label{fig3b-2}] {\includegraphics[width = 0.32\textwidth]{KaP-n8-u-10}} \end{center} \caption{Solution with nearly physical parameters} \end{figure} The last results correspond to Example 3 in the sense that we take the same nonlinearity $s$, but for different geometries. We have set $K_e=10$, $S_L=1.9$, $S_R=10^3$, $V_r=2.01$, and $E=1$. It is interesting to note how the shape of $\Gamma$ changes the solution, namely both its profile and its magnitude. \begin{figure}[h!] \begin{center} \subfigure[Cassini egg mesh\label{fig4-1}] {\includegraphics[width = 0.35\textwidth]{Cassini-no-th}} \subfigure[Solution in $\Omega$ \label{fig4-2}] {\includegraphics[width = 0.35\textwidth]{NL-Egg-schwarz-no-u-0116}}\\[1mm] \subfigure[Solution $u_e$\label{fig4-3}] {\includegraphics[width = 0.32\textwidth]{NL-Egg-Ext-no-u-0116}} \subfigure[Solution $u_i$ \label{fig4-4}] {\includegraphics[width = 0.32\textwidth]{NL-Egg-Int-no-u-0116}} \end{center} \caption{Cassini egg shape} \end{figure} \begin{figure}[h!]
\begin{center} \subfigure[Snail mesh \label{fig4-1snail}] {\includegraphics[width = 0.35\textwidth]{Escargot-no-th}} \subfigure[Solution in $\Omega$ \label{fig4-2snail}] {\includegraphics[width = 0.35\textwidth]{NL-Esc-schwarz-no-u-0116}}\\[1mm] \subfigure[Solution $u_e$ at $T=0.025$\label{fig4-3snail}] {\includegraphics[width = 0.32\textwidth]{NL-Esc-Ext-no-u-0116}} \subfigure[Solution $u_i$ at $T=0.025$\label{fig4-4snail}] {\includegraphics[width = 0.32 \textwidth]{NL-Esc-Int-no-u-0116}} \end{center} \caption{A snail cell} \end{figure} \newpage \bibliographystyle{amsplain}
\section{Introduction} \subsection{Motivation} Motivated by the celebrated Weyl's law, we aim to study the asymptotic behavior of eigenvalues of the Kohn Laplacian $\square_b$ (also referred to as the complex Laplacian) on compact Heisenberg manifolds, specifically compact quotients of the Heisenberg group by lattice subgroups. Much of our work is inspired by \cite{REU2020Weyl}, where the authors compute the leading coefficient of the eigenvalue counting function for $\square_b$ on functions on the $(2n-1)$-dimensional sphere $S^{2n-1}$. As in the original Weyl's law, here the leading coefficient is proportional to the volume of $S^{2n-1}$, multiplied by a constant that depends only on the dimension $n$. This constant is expressed as an integral and is similar to the constant that appears in \cite[Theorem 6.1]{Stanton1984TheHE}. Note that the result in \cite{Stanton1984TheHE} concerns Weyl's law for the Kohn Laplacian on $\left(p,q\right)$-forms, where $0 < q < n - 1$, on compact strongly pseudoconvex embedded CR manifolds of hypersurface type in $\mathbb{C}^n$ for $n\geq 3$. A similar analog of Weyl's law for the Kohn Laplacian on functions on such general CR manifolds is an open problem; \cite{REU2020Weyl} gives an answer on spheres. In this paper, we obtain an analog of Weyl's law for the Kohn Laplacian on functions and on differential forms on compact Heisenberg manifolds. We first note that the Heisenberg group has two distinguished left-invariant differential operators: $\mathcal{L}_0$ and $i^{-1} T$. For $\alpha \in \mathbb{R}$, the family of left-invariant operators given by $\mathcal{L}_\alpha = \mathcal{L}_0 + i \alpha T$ is also of importance due to its relation to the Kohn Laplacian. In fact, the spectral analysis of $\square_b$ reduces to understanding $\mathcal{L}_\alpha$. We note that every positive real number is an eigenvalue of $\mathcal{L}_\alpha$ on the Heisenberg group (see \cite{STRICHARTZ1991350}); therefore, the spectrum is not discrete.
Thus, it is not a suitable manifold on which to count eigenvalues. However, on compact quotients $M$, the operators $\mathcal{L}_\alpha$ have discrete spectra, as noted in \cite{Folland2004CompactHM}. Thus, we can count the eigenvalues on these compact Heisenberg manifolds. We note that obtaining the asymptotics of a counting function for a given positive sequence of numbers is not always straightforward. In \cite{Strichartz2015} and \cite{Taylor1986}, the authors study the distribution of eigenvalues on compact quotients for the single operator $\mathcal{L}_0$. In particular, they obtain the asymptotic result $N \left(\lambda\right)\sim C_{d,0} \operatorname{vol} \left(M\right) \lambda^{ d + 1}$, where $N(\lambda)$ is the eigenvalue counting function for $\mathcal{L}_0$ and $C_{d,0}$ is a constant that depends only on the dimension $d$. In \cite{Strichartz2015}, Strichartz obtains his result by using Folland's explicit spectrum for $\mathcal{L}_0$ and a careful analysis of the asymptotics of binomial coefficients. On the other hand, in \cite{Taylor1986}, Taylor uses asymptotics of the trace of the heat kernel and Karamata's Tauberian theorem, with no reference to an explicit spectrum. In this note, by combining the explicit spectrum from \cite{Folland2004CompactHM} with Karamata's Tauberian theorem, we obtain asymptotics for a family of second order differential operators $\mathcal{L}_\alpha$, for $-d \leq \alpha \leq d$. As a corollary, we obtain an analog of Weyl's law for the Kohn Laplacian on functions and differential forms on $M$. We note that our result on $\left(p,q\right)$-forms, up to a simple dimensional constant, matches the Weyl's law analog in \cite{Stanton1984TheHE}. Furthermore, the Weyl's law analog we obtain for functions matches, up to the same dimensional constant as before, the Weyl's law for spheres in \cite{REU2020Weyl}. These observations provide more insight into the open problem mentioned above.
\subsection{Preliminaries} We follow the exposition of the Heisenberg group and the Kohn Laplacian in \cite{Folland2004CompactHM} closely and refer the reader to that paper. We also refer the reader to \cite{CanarecciMasterthesis} and \cite[Chapter XIII]{Stein} for further definitions and details, and to \cite{CS01} for a detailed introduction to the Kohn Laplacian on CR manifolds. \begin{definition} \label{def:Heisenberg} The $d$-dimensional {\it Heisenberg group}, $\mathbb{H}_d$, is the set $\mathbb{C}^d \times \mathbb{R}$ along with the group law defined by \[\left(z,t\right) \cdot \left(z',t'\right) = \left(z + z', t + t' + 2\operatorname{Im}\left\langle z, z'\right\rangle\right),\] where $z,z' \in \mathbb{C}^d$; $t, t'\in \mathbb{R}$; and $\left\langle z, z'\right\rangle = z_1 \overline{z}'_1 + \cdots + z_d \overline{z}'_d$. \end{definition} Note that $\mathbb{H}_d$ embeds naturally in $\mathbb{C}^{d+1}$ under the identification \[\left(z,t\right) \mapsto \left(z,t + i\left|z\right|^2\right)\] and therefore it is an embedded CR manifold of hypersurface type. The Heisenberg group can alternatively be described in polarized coordinates. That is, $\mathbb{H}_d$ is the set $\mathbb{R}^d \times \mathbb{R}^d \times \mathbb{R}$ with the group law \[\left(p,q,s\right) \cdot \left(p',q',s'\right) = \left(p + p', q + q', s + s' + p\cdot q'\right).\] For $-d \leq \alpha \leq d$, define the second order differential operator \[\mathcal{L}_\alpha = -\frac{ 1}{2} \sum_{j=1}^d \left( Z_j \overline{Z}_j + \overline{Z}_j Z_j\right) + i \alpha T,\] where \[Z_j = \frac{\partial }{\partial z_j} + i \overline{z}_j \frac{\partial }{\partial t}, \quad \overline{Z}_j = \frac{\partial }{\partial \overline{z}_j} - i z_j \frac{\partial }{\partial t}, \quad \text{ and } \quad T = \frac{\partial }{\partial t}.\] The following properties of $\mathcal{L}_\alpha$ and $\square_b$ are well-known and documented in \cite{Folland2004CompactHM}.
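As an illustrative aside, the polarized group law can be checked to define a group. The Python sketch below verifies associativity, the identity $\left(0,0,0\right)$, and the inverse formula $\left(p,q,s\right)^{-1}=\left(-p,-q,-s+p\cdot q\right)$ (the inverse formula is our own elementary computation, not quoted from \cite{Folland2004CompactHM}), here for $d=2$ and arbitrary integer test elements.

```python
# Illustrative check that the polarized group law
# (p,q,s)·(p',q',s') = (p+p', q+q', s+s'+p·q') defines a group (d = 2).
def mul(a, b):
    (p, q, s), (pp, qq, ss) = a, b
    return (tuple(x + y for x, y in zip(p, pp)),
            tuple(x + y for x, y in zip(q, qq)),
            s + ss + sum(x * y for x, y in zip(p, qq)))

def inv(a):
    # candidate inverse: (p,q,s)^{-1} = (-p, -q, -s + p·q)
    p, q, s = a
    return (tuple(-x for x in p), tuple(-x for x in q),
            -s + sum(x * y for x, y in zip(p, q)))

g = ((1, 2), (3, 4), 5)
h = ((0, 1), (1, 0), 2)
k = ((2, 2), (1, 3), 1)
e = ((0, 0), (0, 0), 0)

assert mul(mul(g, h), k) == mul(g, mul(h, k))   # associativity
assert mul(g, e) == g and mul(e, g) == g        # identity element
assert mul(g, inv(g)) == e and mul(inv(g), g) == e  # two-sided inverse
```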
To investigate the spectral asymptotics of $\mathcal{L}_\alpha$, it is convenient to study $\mathcal{L}_0$ and $i^{-1}T$ separately, as they are essentially self-adjoint strongly commuting operators. The connection between $\mathcal{L}_\alpha$ and $\square_b$ is then given by the diagonal action of $\square_b$ on $\left(0,q\right)$-forms, $0 \leq q \leq d$: \[\square_b\left( \sum_{\left|J\right| = q} f_J d \overline{z}^J\right) = \sum_{\left|J\right| = q} \mathcal{L}_{d - 2q} f_J d \overline{z}^J,\] where $f_J$ are functions, $J=\left(j_1, \ldots, j_q\right)$ with $1 \leq j_1 < \cdots <j_q \leq d$, and $d\overline{z}^J=d\overline{z}_{j_1} \wedge \cdots \wedge d\overline{z}_{j_q}$. Let $\Gamma$ be a lattice subgroup of $\mathbb{H}_d$, that is, a discrete subgroup such that $M = \Gamma\setminus \mathbb{H}_d$ is a compact manifold. Note that the CR structure and the operators $\mathcal{L}_\alpha$ and $T$ descend onto $M$. This makes $M$ a strongly pseudoconvex CR manifold. Furthermore, the diagonal action of $\square_b$ descends onto $M$. Importantly, for all $\alpha$, $\mathcal{L}_\alpha$ on $M$ has discrete eigenvalues, which are given explicitly in \cite{Folland2004CompactHM}. To obtain an analog of Weyl's law, we use a generating-function argument and invoke Karamata's Tauberian theorem (see \cite[Theorem 1.1, page 57]{ANPS09}). \begin{theorem*}[Karamata]\label{thm:tauberian} Let $\left\{\lambda_j\right\}_{j \in \mathbb{N}}$ be a sequence of positive real numbers such that $\sum_{j \in \mathbb{N}}e^{-\lambda_j t}$ converges for every $t>0$. Then for $n>0$ and $a \in \mathbb{R}$, the following are equivalent: \begin{enumerate} \item $\lim_{t \to 0^+} t^n\sum_{j \in \mathbb{N}}e^{-\lambda_j t} =a$; \item $\lim_{\lambda \to \infty} \frac{N\left(\lambda\right)}{\lambda^n}=\frac{a}{\Gamma\left(n+1\right)}$; \end{enumerate} where $N\left(\lambda\right)=\#\left\{j:\lambda_j\leq \lambda\right\}$ is the counting function.
\end{theorem*} \subsection{Main Results} Putting these ideas together yields the following analog of Weyl's law for $\mathcal{L}_\alpha$. \begin{theorem}\label{mainT} Let $N(\lambda)$ be the eigenvalue counting function for $\mathcal{L}_\alpha$ on $L^2\left(M\right)$ for $-d \leq \alpha \leq d$. For $-d < \alpha < d$, \[\lim_{\lambda\to\infty} \frac{N \left(\lambda\right)}{\lambda^{d + 1}} =\operatorname{vol}\left(M\right) \frac{2}{\pi^{d + 1}\Gamma\left(d + 2\right)} \int_{-\infty}^\infty \left(\frac{x}{\sinh x}\right)^d e^{-\alpha x}\,dx\] and for $\alpha = \pm d$, \[\lim_{\lambda\to\infty} \frac{N \left(\lambda\right)}{\lambda^{d + 1}} =\operatorname{vol}\left(M\right) \frac{2d}{\left(d+1\right)\pi^{d + 1}\Gamma\left(d+2\right)} \int_{-\infty}^\infty \left(\frac{x}{\sinh x}\right)^{d + 1} e^{-\left(d - 1\right)x }\,dx.\] \end{theorem} From this statement and the diagonal action of $\square_b$, we obtain the following statement on $\left(p,q\right)$-forms. Note that we require $d\geq 2$ to obtain nontrivial $\left(p,q\right)$-forms. \begin{corollary}\label{mainC} Fix $d \geq 2$. Let $N(\lambda)$ be the eigenvalue counting function for $\square_b$ on $M$ acting on $\left(p,q\right)$-forms, where $0 \leq p < d + 1$ and $0 < q < d$. We have that \[\lim_{\lambda\to\infty} \frac{N \left(\lambda\right)}{\lambda^{d + 1}} = \operatorname{vol} \left(M\right) \binom{d}{p}\binom{d}{q} \frac{2}{\pi^{d + 1} \Gamma \left( d + 2\right)} \int_{-\infty}^\infty \left( \frac{x}{\sinh x}\right)^d e^{- \left(d - 2q\right)x}\,dx.\] \end{corollary} In the remainder of the paper, we prove the main theorem and its corollary. \section{Proofs} \subsection{Compact Quotients} For the proof of our theorem, we first establish some notation and properties of lattice subgroups of $\mathbb{H}_d$. \begin{definition} Let $\ell = \left(\ell_1,\ell_2,\ldots,\ell_d\right)$ be a $d$-tuple of positive integers such that $\ell_1 \mid \ell_2 \mid \cdots \mid \ell_d$.
Define \[\Gamma_\ell = \left\{\left(p,q,s\right): p,q\in \mathbb{Z}^d, s\in \mathbb{Z}, \ell_j \mid q_j\text{ for all } 1 \leq j \leq d\right\}.\] $\Gamma_\ell$ is a lattice subgroup of the polarized Heisenberg group. Importantly, for a given $\Gamma_\ell$, define the constant $L = \ell_1 \ell_2 \cdots \ell_d$. \end{definition} \begin{theorem*} \cite[Proposition 2.1]{Folland2004CompactHM} Given any lattice subgroup $\Gamma$ of $\mathbb{H}_d$, there exist a unique $\ell$ and an automorphism $\Phi$ of $\mathbb{H}_d$ such that $\Phi \left(\Gamma\right) = \Gamma_\ell$. \end{theorem*} Thus, we can associate to every lattice subgroup the constant $L$ given by $\Gamma_\ell$. Another useful property is that the center of a lattice subgroup is of the form $\left(0,0,c\mathbb{Z}\right)$ for some $c>0$. With this information, the volume of $M$ can be computed as $L c^{d + 1}$, as shown in \cite{Strichartz2015}. We can now state Folland's result on the joint spectrum of $\mathcal{L}_0$ and $i^{-1} T$. \begin{definition} Let $A$ and $B$ be two operators on a vector space $V$. The {\it joint spectrum} of $A$ and $B$ is \[\sigma \left(A,B\right) = \left\{\left(\lambda,\mu\right): Av = \lambda v, Bv = \mu v \text{ for some } v\in V\setminus\left\{0\right\}\right\},\] counting multiplicities of $\left(\lambda,\mu\right)$.
\end{definition} \begin{theorem*} \cite[Theorem 3.2]{Folland2004CompactHM} Given a lattice with center $\left(0,0,c\mathbb{Z}\right)$, the joint spectrum of $\mathcal{L}_0$ and $i ^{-1} T$ on $L^2 \left( M\right)$ is \[\left\{ \left( \frac{ \pi \left| n \right|}{2c} \left( d + 2j \right), \frac{ \pi n }{2c} \right): j \in \mathbb{Z}_{\geq 0}, n \in \mathbb{Z}\setminus \left\{ 0 \right\}\right\} \cup \left\{ \left( \frac{ \pi}{2} \left| \xi \right|^2,0 \right): \xi \in \Lambda'\right\}\] and the multiplicity of $ \left( \frac{ \pi \left| n \right|}{2c} \left( d + 2j \right), \frac{ \pi n }{2c} \right)$ is \[\left| n\right|^d L \binom{j + d - 1}{d - 1}.\] $\Lambda'$ is the dual lattice of the lattice $\Lambda = \pi \left(\Gamma\right)$, where $\pi : \mathbb{H}_d \to \mathbb{C}^d$ is the quotient map $\pi\left(z,t\right) = z$. The multiplicity of an eigenvalue coming from the second set is dependent on the structure of $\Lambda'$. \end{theorem*} Since $\mathcal{L}_0$ and $i ^{-1} T$ are self-adjoint strongly commuting operators, as shown in \cite{STRICHARTZ1991350}, we have the following corollary of the theorem above. \begin{corollary*}\cite[Corollary 3.3]{Folland2004CompactHM} For $\alpha\in \mathbb{R}$, the spectrum of $\mathcal{L}_\alpha$ on $M$ is \[ \underbrace{\left\{ \frac{ \pi \left| n \right|}{2c} \left( d + 2j - \alpha \operatorname{sgn} n \right): j \in \mathbb{Z}_{\geq 0}, n \in \mathbb{Z}\setminus \left\{ 0 \right\}\right\}}_{\text{type } \left(a\right)} \cup \underbrace{\left\{ \frac{ \pi}{2} \left| \xi \right|^2: \xi \in \Lambda'\right\}}_{\text{type } \left(b\right)}.\] \end{corollary*} We label the eigenvalues in the first set as type $\left(a\right)$, and the eigenvalues in the second set as type $\left(b\right)$. 
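As an illustrative aside, the multiplicities $\binom{j + d - 1}{d - 1}$ in the joint spectrum above have the negative binomial generating function $\sum_{j\geq 0}\binom{j+d-1}{d-1}z^j=\left(1-z\right)^{-d}$, which is how they are resummed later in the proofs. A quick numerical check (the values of $d$, $z$, and the truncation level are arbitrary choices for the check):

```python
# Illustrative check of the negative binomial series
# sum_{j>=0} C(j+d-1, d-1) z^j = (1 - z)^(-d)  for |z| < 1.
from math import comb

d, z, terms = 3, 0.5, 200  # arbitrary test values; 200 terms suffice at z=0.5
series = sum(comb(j + d - 1, d - 1) * z**j for j in range(terms))
closed = (1 - z) ** (-d)
assert abs(series - closed) < 1e-10
```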
Moreover, since the operators are self-adjoint and strongly commuting, the total multiplicity of a type $\left(a\right)$ eigenvalue $\lambda$ is the sum of the multiplicities of the elements of the joint spectrum, coming from distinct $n,j,\xi$, that produce the value $\lambda$. For example, if \[\lambda = \frac{\pi \left|n\right|}{2c} \left(d + 2j - \alpha \operatorname{sgn} n\right) = \frac{\pi \left|n'\right|}{2c} \left(d + 2j' - \alpha \operatorname{sgn} n'\right) = \frac{\pi}{2}\left|\xi\right|^2\] for some $\xi$ and $\left(n,j\right) \neq \left(n',j'\right)$, then the multiplicity of $\lambda$ is exactly \[\left|n\right|^d L \binom{j + d - 1}{d - 1} + \left|n'\right|^d L \binom{j' + d - 1}{d - 1} + \operatorname{mult} \left(\frac{\pi}{2} \left|\xi\right|^2\right).\] The corollary above allows us to define the generating function $\sum_{j} e^{-\lambda_j t}$, where the terms are repeated according to the multiplicity of $\lambda_j$. This function appears in Section \ref{sect:proof_of_main}, where we invoke a Tauberian theorem to understand the distribution of eigenvalues. We now decompose the eigenvalue counting function $N\left(\lambda\right)$ for $\mathcal{L}_\alpha$ into two parts. Let $N_a\left(\lambda\right)$ and $N_b\left(\lambda\right)$ be the positive eigenvalue counting functions of type $\left(a\right)$ and $\left(b\right)$ respectively for $\mathcal{L}_\alpha$. Formally, \[N_a\left(\lambda\right)=\#\left\{j: 0< \lambda_j\leq \lambda, \lambda_j \text{ is of type }\left(a\right)\right\} \text{ and }N_b\left(\lambda\right)=\#\left\{j: 0<\lambda_j\leq \lambda, \lambda_j \text{ is of type }\left(b\right)\right\}.\] Therefore, to study $N(\lambda)$, it suffices to analyze $N_a(\lambda)$ and $N_b(\lambda)$ separately. \begin{remark} Finally, before we provide the details of the proofs, we make a note on isospectral quotients.
As noted in \cite{Folland2004CompactHM}, the automorphisms of the Heisenberg group decompose into three categories: symplectic automorphisms, inner automorphisms, and dilations. We suspect that few automorphisms of $\mathbb{H}_d$ yield isospectral quotient manifolds. That is, if $\varphi\in \operatorname{Aut}\left(\mathbb{H}_d\right)$, then $\Gamma\setminus \mathbb{H}_d$ and $\varphi\left(\Gamma\right)\setminus \mathbb{H}_d$ are unlikely to be isospectral. Our reasoning is based on the following observations. Dilations by $r$ change both type $\left(a\right)$ and $\left(b\right)$ eigenvalues by a factor of $r^2$. Similarly, we see that inner automorphisms by $\left(w,t\right)$, though they preserve the lattice structure, are unlikely to preserve the center for generic $w$. Thus, the symplectic matrices that preserve the lengths and multiplicities in the dual lattice are the only building block automorphisms of $\mathbb{H}_d$ that can reasonably yield isospectral manifolds. However, such a statement does not yield a rich class of examples. We leave a formal statement and investigation of isospectral Heisenberg manifolds to another study. \end{remark} \subsection{Sums to Integrals} In this part, we provide the analytical details of the proofs. Taking a cue from \cite{REU2020Weyl}, we first define the scaled ceiling function. \begin{definition}\label{def:scaledceil} For $t> 0$, the {\it scaled ceiling function} $\left\lceil \cdot \right\rceil_t:\mathbb{R}\rightarrow \mathbb{R}$ is \[\left\lceil x \right\rceil_t =t \left\lceil x/t \right\rceil.\] \end{definition} Note that \[\left\lceil x \right\rceil_t=t \min\left\{n \in \mathbb{Z} : n \geq x/t\right\}=t \min\left\{n \in \mathbb{Z} : tn \geq x\right\}.\] Therefore, $\left\lceil x \right\rceil_t$ can be thought of as $x$ rounded up to the nearest integer multiple of $t$. This implies that for a fixed $x \in \mathbb{R}$ and $t>0$, we have $0 \leq \left\lceil x \right\rceil_t -x < t $.
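As an illustrative aside, the defining properties of the scaled ceiling function are easy to confirm numerically; the Python sketch below uses a few arbitrary sample points and small tolerances to absorb floating-point rounding.

```python
# Illustrative check of the scaled ceiling function ceil_t(x) = t*ceil(x/t):
# it rounds x up to the nearest integer multiple of t, so
# 0 <= ceil_t(x) - x < t, and ceil_t(x) -> x as t -> 0+.
import math

def ceil_t(x, t):
    return t * math.ceil(x / t)

for t in (1.0, 0.5, 0.1, 0.01):
    for x in (0.0, 0.3, 1.0, 2.71, 10.5):
        r = ceil_t(x, t)
        # the property stated in the text, up to floating-point slack
        assert -1e-12 <= r - x <= t + 1e-9
        # r is an integer multiple of t
        assert abs(r / t - round(r / t)) < 1e-9

# the pointwise limit: ceil_t(x) approaches x as t shrinks
assert abs(ceil_t(2.71, 1e-6) - 2.71) < 1e-5
```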
As a direct consequence, we have the following two properties. \begin{enumerate} \item \label{property1} For any fixed $x \in \mathbb{R}$, $\lim_{t \to 0^+}\left \lceil x \right\rceil_t =x$. \item \label{property2} Let $f:\left[a,b\right] \to \mathbb{R}$ be a monotonically decreasing function. Then for a fixed $0< t<b-a$, for all $x \in \left[a,b-t\right]$, we have $f\left(\left\lceil x \right\rceil_t\right) \leq f\left(x\right)$. \end{enumerate} The following lemma makes use of the definition of the scaled ceiling function to convert a right Riemann sum into an integral. It is used to simplify calculations in the proof of our main theorem. \begin{lemma}\label{lem:convertintegral} For $u,v>0$, \[t^{d+1} \sum_{n=1}^\infty n^d \frac{ e^{-t uv n }}{\left( 1 -e^{-2t u n} \right){^d}}=\int_{0}^\infty \left\lceil x \right\rceil_t^d \frac{ e^{- u v\left\lceil x \right\rceil_t }}{\left( 1 -e^{-2 u \left\lceil x \right\rceil_t} \right){^d}}\, dx.\] \end{lemma} \begin{proof} We have that \begin{align*} t^{d+1} \sum_{n=1}^\infty n^d \frac{ e^{-t uv n }}{\left( 1 -e^{-2t u n} \right){^d}} &= t^{d + 1} \sum_{n=1}^\infty \int_{n-1}^n \left\lceil m \right\rceil^d \frac{ e^{-t u v\left\lceil m \right\rceil }}{\left( 1 -e^{-2t u \left\lceil m \right\rceil} \right){^d}}\, dm\\ &= \int_{0}^\infty t^{d+1} \left\lceil m \right\rceil^d \frac{ e^{-t u v\left\lceil m \right\rceil}}{\left( 1 -e^{-2t u \left\lceil m \right\rceil} \right){^d}}\, dm\\ &= \int_{0}^\infty \left(t \lceil x/t \rceil\right)^d \frac{ e^{-t u v\left\lceil x/t \right\rceil }}{\left( 1 -e^{-2t u \lceil x/t \rceil} \right){^d}}\, dx\\ &= \int_{0}^\infty \left\lceil x \right\rceil_t^d \frac{ e^{- u v\left\lceil x \right\rceil_t}}{\left( 1 -e^{-2 u \left\lceil x \right\rceil_t} \right){^d}}\, dx, \end{align*} thus completing the proof. \end{proof} The next lemma demonstrates how the scaled ceiling function can be removed from the integrand through a limit.
\begin{lemma} \label{lem:exchange_lim_int} For $u, v > 0 $, \begin{equation*}\label{eq:int1} \lim_{t \to 0^+} \int_{0}^{\infty} \left\lceil x \right\rceil_{t}^d \frac{e^{-uv\left\lceil x \right\rceil_{t}}}{\left(1-e^{-2u \left\lceil x \right\rceil_{t}}\right){^d}}\,dx =\int_{0}^{\infty} x^d \frac{e^{-u v x}}{\left(1-e^{-2u x}\right){^d}}\, dx. \end{equation*} \end{lemma} \begin{proof} Define \[f\left(x\right) = x^d \frac{ e^{- u v x}}{\left( 1 - e^{-2u x}\right){^d}}.\] Note that there exists an $x_0>0$ such that for all $x\geq x_0$, we have \[\frac{1}{2} < \left(1 - e^{-2u x}\right){^d} \text{ and } x^d \leq e^{cx} \text{, where } c = \frac{uv}{2}.\] It follows that for all $x\geq x_0$, \[f\left(x\right) \leq 2 e^{- cx}.\] By compactness, continuity, and property \ref{property2} of the scaled ceiling function, it follows that for all $n\in \mathbb{N}$, $f(\left \lceil x\right\rceil_{1/n})$ is dominated by $R\chi_{\left[0,x_0\right]} + 2 e^{- cx}$ for some $R > 0$. Thus, we can apply the dominated convergence theorem and use property \ref{property1} of the scaled ceiling function to obtain the claim. \end{proof} When we invoke the above two lemmas in the following section, we only consider $v = d\pm \alpha$, where $- d < \alpha < d$. \subsection{Proof of Theorem \ref{mainT}} \label{sect:proof_of_main} In this section, we prove an asymptotic result for $N_a(\lambda)$, from which Theorem \ref{mainT} follows. \begin{theorem}\label{thm:main} Fix $-d \leq \alpha \leq d$. We have that \[\lim_{\lambda\to\infty} \frac{N_a \left(\lambda\right)}{\lambda^{d + 1}} =C_{d,\alpha} \operatorname{vol} \left(M\right), \] where $C_{d,\alpha}$ is the explicit constant, depending only on $d$ and $\alpha$, appearing in Theorem \ref{mainT}. \end{theorem} \begin{proof} By symmetry of $\operatorname{sgn} n$, it suffices to consider the case where $0 \leq \alpha \leq d$. We separate the cases for $0 \leq \alpha < d$ and $\alpha = d$ and center our approach on Karamata's Tauberian theorem. Let $u = \frac{\pi}{2c}$. First assume $0 \leq \alpha < d$.
Setting $G(t)=\sum_{j \in \mathbb{N}} e^{-\lambda_j t}$, where $\lambda_j$ are the type $\left(a\right)$ eigenvalues of $\mathcal{L}_\alpha$ on $M$ included with multiplicity, we see that \begin{align*} G \left( t\right) &= \sum_{\substack{n \in \mathbb{Z}\setminus \left\{ 0\right\} \\ j \in \mathbb{Z}_{\geq 0}}} \left| n\right|^d L \binom{j + d - 1}{d - 1} e ^{- t u \left| n \right|\left( d + 2j - \alpha \operatorname{sgn} n \right)}\\ &=L \sum_{\substack{n = 1\\ j = 0}}^\infty n^d \binom{j + d - 1}{d - 1} e ^{- t u n \left( d + \alpha + 2j \right)} + L \sum_{\substack{n = 1\\ j = 0}}^\infty n^d \binom{j + d - 1}{d - 1} e ^{- t u n \left( d - \alpha + 2j \right)}\\ &= L \left(G_{-} \left( t\right) + G_+ \left( t\right)\right), \end{align*} where $G_-,G_+$ are the parts of $G$, excluding multiplication by $L$, indexed by negative and positive $n$ respectively. Recall that \[\frac{ 1}{\left( 1 - z \right)^d} = \sum_{j=0}^\infty \binom{j + d - 1}{d - 1} z^j.\] We see that, \begin{align*} G_- \left( t\right) &= \sum_{n=1}^\infty n^d e^{-t u n \left( d + \alpha \right) } \sum_{j=0}^\infty \binom{j + d - 1}{d - 1} e^{-2t u n j} = \sum_{n=1}^\infty n^d \frac{ e^{-t u n \left( d + \alpha \right) }}{\left( 1 - e^{-2t u n} \right){^d}} \\ \intertext{and} G_+ \left( t\right) &= \sum_{n=1}^\infty n^d e ^{-t u n \left( d - \alpha \right)} \sum_{j=0}^\infty \binom{j + d - 1}{d - 1} e ^{-2 t u n j} = \sum _{n = 1} ^{\infty} n ^{d} \frac{ e^{-t u n \left( d - \alpha \right) }}{\left( 1 - e^{-2t u n} \right){^d}}. \end{align*} To analyze $t^{d+1} G \left( t\right)$, we convert the above sums into integrals. 
We have that \begin{align*} \lim _{t\to 0^+}t ^{d + 1} G \left( t\right) &=\lim_{t \rightarrow 0^+}L\left(\sum _{n = 1} ^{\infty} n ^{d} t^{d+1}\frac{ e^{-t u n \left( d + \alpha \right) }}{\left( 1 - e^{-2t u n} \right){^d}}+\sum _{n = 1} ^{\infty} n ^{d} t^{d+1}\frac{ e^{-t u n \left( d - \alpha \right) }}{\left( 1 - e^{-2t u n} \right){^d}} \right)\\ &=L\lim_{t\rightarrow 0^+} \left(\int_0^\infty \lceil x \rceil_t^d \frac{ e ^{- u \left( d +\alpha \right)\lceil x \rceil_t}}{\left( 1 - e ^{-2u\lceil x \rceil_t} \right){^d}}\,dx + \int_0^\infty \lceil x \rceil_t^d \frac{ e ^{- u \left( d -\alpha \right)\lceil x \rceil_t}}{\left( 1 - e ^{-2u\lceil x \rceil_t} \right){^d}}\,dx\right) & \text{(Lemma \ref{lem:convertintegral})}\\ &= L \left(\int_0^\infty x^d \frac{ e ^{- u \left( d +\alpha \right)x}}{\left( 1 - e ^{-2ux} \right){^d}}\,dx + \int_0^\infty x^d \frac{ e ^{- u\left( d -\alpha \right)x}}{\left( 1 - e ^{-2ux} \right){^d}}\,dx\right) &\text{(Lemma \ref{lem:exchange_lim_int})} \\ &= L \int_{-\infty}^\infty x^d \frac{ e ^{- u \left( d + \alpha \right)x}}{\left( 1 - e ^{-2ux}\right){^d}}\,dx\\ &= L \int_{-\infty}^\infty x^d \frac{ e ^{- \frac{ \pi}{2c} \left( d + \alpha \right)x}}{\left( 1 - e ^{-\frac{ \pi}{c}x}\right){^d}}\,dx. \end{align*} Let $v = \frac{\pi x}{2c}$. Recalling that $\operatorname{vol}(M)=Lc^{d+1}$, we have that \[\lim _{t\to 0^+} t^{d+1}G \left( t\right) = \operatorname{vol} \left( M\right)\frac{2^{d + 1} }{\pi^{d+1}} \int_{-\infty}^\infty v^d\frac{ e ^{- \left( d + \alpha \right)v}}{\left( 1 - e ^{-2v} \right){^d}}\,dv =\operatorname{vol} \left( M\right) \frac{2}{\pi^{d + 1}} \int_{-\infty}^\infty \left(\frac{v}{\sinh v}\right)^d e^{- \alpha v} \,dv.\] Therefore, \[\lim_{\lambda\to\infty} \frac{N_a \left(\lambda\right)}{\lambda^{d + 1}} = \operatorname{vol}\left(M\right) \frac{2}{\pi^{d + 1} \Gamma \left( d + 2\right)}\int_{-\infty}^\infty \left(\frac{x}{\sinh x}\right)^d e^{- \alpha x} \,dx.\] Now fix $\alpha = d$. 
Note that when both $j=0$ and $n > 0$, we obtain eigenvalues equal to zero. This case is omitted as the kernel of $\mathcal{L}_d$ is infinite dimensional. Let $u = \pi/c$. We see that \begin{align*} G \left( t\right) &= L \sum _{\substack{n=1 \\ j = 0}}^\infty n^d \binom{j + d - 1}{d - 1} e ^{- t u n \left( d + j \right)} + L \sum _{\substack{n = 1 \\ j = 1}}^\infty n^d \binom{j + d - 1}{d - 1} e ^{- t u n j}\\ &= L\left( G_- \left( t\right) + G_+ \left( t\right)\right). \end{align*} Note that the $G_-$ and the $G_+$ that appear here are different from the previous case. By a similar analysis, \begin{align*} G_- \left( t\right) &= \sum _{n = 1} ^{\infty} n^d \frac{ e ^{-t u n d}}{\left( 1 - e ^{-t u n} \right){^d}}\\ G_+ \left( t\right) &= \sum _{n = 1} ^{\infty} n^d \left( \frac{ 1}{\left( 1 - e ^{ -t u n} \right){^d}} - 1\right). \end{align*} We now convert to integrals. We refer to the analysis of $G_+$ in \cite[Lemma 2.5 and Proposition 2.8]{REU2020Weyl} and to the analysis of $G_-$ in \cite[Lemma 2.11 and Proposition 2.13]{REU2020Weyl}. From their calculations, we obtain \[\lim _{t\to 0^+} t^{d + 1} G \left( t\right) = L \int_0^\infty x^d \frac{ 1}{\left(e ^{\frac{\pi x}{c}} - 1\right){^d}}\,dx + L \int_0^\infty x^d \left( \frac{ 1}{\left( 1 - e ^{\frac{-\pi x}{c}} \right){^d}} - 1\right)\,dx.\] Let $v = \frac{\pi x}{2c}$. We have that \[\lim _{t\to 0^+} t ^{d + 1} G \left( t\right) =\operatorname{vol} \left( M \right) d! \frac{ 2^{d + 1}}{\pi^{d + 1}} \frac{ 1}{d!} \int_0^\infty v^d \left( \frac{ 1}{\left( 1 - e^{-2 v} \right){^d}} - 1 + \frac{ 1}{\left( e^{2 v} - 1 \right){^d}}\right)\,dv.\] The above integral is manipulated in \cite{REU2020Weyl} to obtain a form that is compatible with the results of \cite{Stanton1984TheHE}. Following their computation, \begin{align*} \lim _{t\to 0^+} t^{d + 1} G \left( t\right) &= \operatorname{vol} \left( M\right) d! 
\frac{ 2^{d + 1}}{\pi^{d + 1}} \operatorname{vol} \left( S^{2d + 1}\right) \frac{ d}{\left( 2\pi \right)^{d + 1} \left( d + 1 \right)} \int_{-\infty}^\infty \left( \frac{ x}{\sinh x } \right)^{d + 1} e^{-\left(d - 1\right)x }\,dx\\ &= \operatorname{vol} \left( M\right)\frac{ 2}{\pi^{d + 1}} \frac{ d}{d + 1} \int_{-\infty}^\infty \left( \frac{ x}{\sinh x } \right)^{d + 1} e^{-\left(d - 1\right)x}\,dx . \end{align*} Therefore, \[\lim_{\lambda\to\infty} \frac{N_a \left(\lambda\right)}{\lambda^{d + 1}} = \operatorname{vol} \left(M\right) \frac{2d}{\left(d + 1\right)\pi^{d + 1}\Gamma \left(d + 2\right)}\int_{-\infty}^\infty \left( \frac{ x}{\sinh x } \right)^{d + 1} e^{-\left(d - 1\right)x}\,dx, \] completing our proof. \end{proof} After understanding the asymptotics of $N_a(\lambda)$, we turn to the distribution of type $(b)$ eigenvalues, which comes down to counting lattice points in $\mathbb{R}^{2d}$. Here, we use a more general theorem on the distribution of eigenvalues for the standard Laplacian on flat tori; we invoke the following statement from \cite[page 26]{ANPS09}. \begin{theorem*}[Weyl's Law for Flat Tori] Let $\Lambda$ be a full-rank lattice in $\mathbb{R}^n$, and let $N\left(\lambda\right)$ be the eigenvalue counting function for the standard Laplacian on the flat torus $T = \Lambda \setminus \mathbb{R}^n$. That is, \[N \left(\lambda\right) = \# \left\{\mu \in \Lambda' : \left| \mu\right| \leq \frac{\lambda^{1/2}}{2\pi}\right\}.\] Then, \[\lim_{\lambda\to\infty} \frac{N \left(\lambda\right)}{\lambda^{n/2}} =\frac{\operatorname{vol}\left(T\right)}{ \left(4\pi\right)^{n/2}\Gamma\left(\frac{n}{2} + 1\right)}. \] \end{theorem*} As the above theorem implies that $N_b(\lambda) \in O(\lambda^d)$, the type $(b)$ eigenvalues do not contribute to the leading-order asymptotics. Thus, Theorem \ref{mainT} follows from Theorem \ref{thm:main} and Weyl's law for flat tori.
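As an illustrative aside, Weyl's law for flat tori can be checked by direct lattice-point counting. The Python sketch below takes $n=2$ and $\Lambda=\mathbb{Z}^2$ (so $\Lambda'=\mathbb{Z}^2$ and $\operatorname{vol}\left(T\right)=1$), with an arbitrary cutoff $\lambda$; the limiting ratio is $1/\left(4\pi\right)$.

```python
# Illustrative check of Weyl's law for the flat torus Z^2 \ R^2:
# N(lambda) counts mu in Z^2 with |mu| <= sqrt(lambda)/(2*pi), and
# N(lambda)/lambda^(n/2) -> vol(T)/((4*pi)^(n/2)*Gamma(n/2+1)) = 1/(4*pi).
import math

lam = 4.0e4                                  # arbitrary cutoff for the check
r = math.sqrt(lam) / (2 * math.pi)           # count lattice points with |k| <= r
R = int(r) + 1
N = sum(1 for a in range(-R, R + 1) for b in range(-R, R + 1)
        if a * a + b * b <= r * r)
ratio = N / lam                              # here n/2 = 1
assert abs(ratio - 1 / (4 * math.pi)) < 5e-3
```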
\subsection{Proof of Corollary \ref{mainC}} The computations in Theorem \ref{thm:main} can also be used to obtain an analog of Weyl's law on $\left(p,q\right)$-forms since the action of $\square_b$ on $\left(0,q\right)$-forms is expressed diagonally by $\mathcal{L}_{d - 2q}$. The only technicality that remains is the multiplicity. Note that any computation for multiplicity for $\left(0,q\right)$-forms extends directly to $\left(p,q\right)$-forms by multiplication by $\binom{d}{p}$. If $\omega$ is a $q$-form, then it can be written as \[\sum_{|J|=q}\omega_J d\overline{z}^J,\] where $\omega_J$ are functions, $J=\left(j_1, \ldots, j_q\right)$ with $1 \leq j_1 < \cdots <j_q \leq d$, and $d\overline{z}^J=d\overline{z}_{j_1} \wedge \cdots \wedge d\overline{z}_{j_q}$. Noting that the $d\overline{z}^J$ are linearly independent, we have $\square_b \omega =\lambda \omega$ if and only if $\square_b \omega_J =\lambda \omega_J$ for each $J$. From the convention $1 \leq j_1 < \cdots <j_q \leq d$, there are $\binom{d}{q}$ possibilities for $J$. Since $\square_b f=\lambda f$ implies $\square_b f d\overline{z}^J=\lambda f d\overline{z}^J$, each eigenfunction of $\square_b$ induces $\binom{d}{q}$ many eigenforms. Therefore, to study $(0,q)$-forms, we set $\alpha = d - 2q$ in the $- d < \alpha < d$ case of Theorem \ref{thm:main} and multiply the result by $\binom{d}{q}$. Then for $N_a(\lambda)$, \[\lim_{\lambda \rightarrow \infty} \frac{N_a(\lambda)}{\lambda^{d+1}}=\operatorname{vol} \left(M\right)\binom{d}{q} \frac{2}{\pi^{d + 1}\Gamma(d+2)} \int_{-\infty}^\infty \left(\frac{x}{\sinh x}\right)^d e^{- \left(d-2q\right) x} \,dx.\] Corollary \ref{mainC} follows immediately by multiplication by $\binom{d}{p}$ and noting that $N_b(\lambda)$ for $\square_b$ on $(p,q)$-forms is still in $O \left(\lambda^{d}\right)$. This result is strikingly similar to the Weyl's law analog obtained by Stanton and Tartakoff in \cite{Stanton1984TheHE}.
Note, however, that their theorem requires the manifold to be an embedded hypersurface (that is, of codimension one). Though the Heisenberg group is such a manifold, we lose this property when passing to the quotient, which is not a codimension-one manifold. This difference is reflected in the factor of $2^{-d-2}$ by which the leading coefficients differ. Furthermore, the result in \cite{Stanton1984TheHE} does not apply to functions, whereas our result in this note covers functions and differential forms of all degrees. \section*{Acknowledgements} First, we thank the other members of Team Hermann: Zoe Plzak, Ian Shors, and Samuel Sottile for their support during this work. We also thank Kamryn Spinelli for his helpful comments on an earlier version of this paper. This research was completed at the REU Site: Mathematical Analysis and Applications at the University of Michigan-Dearborn. We would like to thank the National Science Foundation (DMS-1950102), the National Security Agency (H98230-21), the College of Arts, Sciences, and Letters, and the Department of Mathematics and Statistics for their support. \newcommand{\etalchar}[1]{$^{#1}$}
\section{Introduction} \label{intro} Dispersal of a passive solute, such as a dye in a pipe flow or a pollutant in a river, is a classical fluid mechanics transport phenomenon that falls within the subject of macrotransport processes \cite{be93}. G.\ I.\ Taylor \cite{t53}, followed by Aris \cite{a56}, showed that the dispersal of a passive solute in a pressure-driven laminar flow in a circular pipe of radius $R$ can be described, at long times and far downstream from its injection point, by a cross-sectionally averaged advection-diffusion process in which the mean solute concentration $\bar{c}$ is advected by the mean flow $\overline{v_x}$ but diffuses with an \emph{effective dispersivity} $\mathcal{D}$ that depends on its molecular diffusivity $D_\mathrm{m}$, the mean flow speed $\overline{v_x}$, and the typical length scale $R$ associated with the cross-section of the flow vessel. In particular, $\mathcal{D} = D_\mathrm{m} + \overline{v_x}^2 R^2/(48 D_\mathrm{m})$ \cite{be93,t53,a56}. (Note that $\mathcal{D}$ is undefined in the limit of $D_\mathrm{m}\to0$ because non-diffusive solutes are simply advected by the flow and remain on the streamlines they start on for all time.) Many variations of the classical Taylor dispersion problem have been considered in the fluid mechanics literature \cite{be93,yj91}. Although the phenomenon has been mentioned in studies of self-diffusion of granular materials in shear flow, in which the diffusivity is inferred from the mean squared displacement \cite{nht95,c97}, to the best of our knowledge the dispersion problem has not been posed ``in the spirit of Taylor'' for rapidly flowing dense granular materials, despite the fact that the latter can behave similarly to fluids and can be approximated as a continuum \cite{s84,jnb96,afp13,at09}.
At the same time, there are practical implications to understanding the spread and dispersal of one type of granular material, such as a pharmaceutical powder, glass beads in the laboratory, or rocks and vegetation in a landslide, in a second granular material. For example, understanding granular dispersion is relevant for industrial separation processes such as the drying of powders for the purposes of dehydrating food \cite{hd08}. Another aspect of this process is the vibration of the vessel with the goal of mixing a flowing powder with another powder injected into the flow via diffusion in the transverse direction \cite{setal08}. Modeling transport of particulate materials is also important in geophysical flows such as snow avalanches, mudslides and landslides \cite{i97,ph07}. For example, in a polydisperse ava\-lanche, segregation drives the large particles to the front \cite{gk10}, which can lead to fingering instabilities \cite{pds97}. The resulting distribution of debris upon the cessation of flow can dictate the ecological impact of the event \cite{nist93}. Hence, it is important to know how the various constituent materials are dispersed during the landslide. More quantitatively, we can estimate the relevance of shear dispersion in the geophysical context by noting that a typical landslide can reach speeds up to $\overline{v_x} \simeq 10$ m/s, has a runout distance $\ell \simeq 10-100$ km, a depth of $h \simeq 0.5-1$ m, and an effective diameter $d\simeq 1$ mm$-1$ m for the particulate material \cite{i97}. Let us estimate the debris as being relatively fine, $d \simeq 10$ cm, and thus more likely to be monodisperse.
Then, the diffusivity can be estimated by dimensional considerations as $D_0 \propto d^2 \overline{v_x}/h \simeq 10^{-1}$ m$^2$/s (see the discussion in Section~\ref{sec:flow_incline} below), from which we estimate, based on an analogy to Taylor's result \cite{t53}, the shear-augmented portion of the effective dispersivity as $\overline{v_x}^2h^2/D_0 \simeq 10^3$ m$^2$/s. For a laboratory-scale chute flow experiment, on the other hand, the typical values are $D_0 \simeq 10^{-6}$ m$^2$/s, $\overline{v_x} \simeq 1$ m/s and $h \simeq 10^{-2}$ m \cite{hh80}, which gives $\overline{v_x}^2h^2/D_0 \simeq 10^{2}$ m$^2$/s. Both of these estimates indicate that the shear-augmented portion of the effective dispersivity is not negligible; specifically, it is several orders of magnitude larger than $D_0$. Thus, the goal of the present work is to pose the shear dispersion problem for rapid flows of particulate materials and to present solutions for the effective dispersivity for some elementary dense granular flows. We restrict our discussion to dry, cohesionless monodisperse materials to avoid, in particular, the complicating effects of segregation of bidisperse and polydisperse mixtures due to flow \cite{sl88}. By ``solute'' we mean a set of tagged particles released at the upstream end of the flow ($x=0$ in Fig.~1 below). \section{Mathematical theory of shear dispersion} \label{sec:math_disp} Consider a steady two-dimensional (2D) flow $v_x(z)$ that is uniform in $x$ with $x\in[0,\infty)$ as the streamwise coordinate and $z\in[0,h]$ as the transverse coordinate. The evolution of the concentration $c$ (number of particles per unit area) of a diffusive passive tracer with (non-constant) diffusivity $D$ advected by such a flow obeys \begin{equation} \frac{\partial c}{\partial t} + v_x(z)\frac{\partial c}{\partial x} = \frac{\partial}{\partial x}\left(D\frac{\partial c}{\partial x}\right) + \frac{\partial}{\partial z}\left(D\frac{\partial c}{\partial z}\right).
\label{eq:adv_diff} \end{equation} Equation~\eqref{eq:adv_diff} is supplemented with no-flux boundary conditions $\partial c/\partial z = 0$ at $z = 0,h$, since material is not allowed to leave through the layer's boundaries, an initial condition $c(x,z,0)=c_i(x,z)$, and decay boundary conditions $c\to 0$ as $|x|\to\infty$. Formally, we can always let $c(x,z,t) \equiv \bar{c}(x,t) + c'(x,z,t)$ and $v_x(z) \equiv \overline{v_x} + v_x'(z)$, where an overline denotes the depth-averaging operator $\overline{(\cdot)} = \frac{1}{h}\int_0^h (\cdot) \,\mathrm{d}z$, and primes denote deviation from the average. By construction, the overlined quantities can only depend on the axial coordinate $x$ and time $t$ and $\overline{c'} = \overline{v_x'} = 0$. Then, following Taylor \cite{t53}, we analyze the flow in the limit that the transverse diffusion time $h^2/D_0$ is much shorter than the typical streamwise advection time $\ell/\overline{v_x}$, where $\ell$ is a characteristic axial length scale over which we study the flow, and $D_0$ is a characteristic diffusivity. Based on the estimates given in the introduction, $h^2/D_0\simeq 10$ s and $\ell/\overline{v_x}\simeq 10^3-10^4$ s for a geophysical debris flow.\footnote{For the laboratory-scale chute flow from \cite{hh80}, $\ell \simeq 1$ m, so $h^2/D_0\simeq 10^2$ s and $\ell/\overline{v_x}\simeq 1$ s. In this particular experimental setup, we would not expect to see dispersion because the granular layer is too thin, and the device is too short in the streamwise direction.} Therefore, $\ell/h\gg \overline{v_x}h/D_0$ and, for $|c'|/\bar{c} \ll 1$, the evolution of the mean $\bar{c}$ separates from the fluctuations $c'$, leading to a one-way coupled set of macrotransport equations. 
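These time-scale estimates are simple enough to script. The following Python sketch is our own illustration; only the order-of-magnitude values quoted in the text enter it:

```python
# Order-of-magnitude check of the time-scale separation h^2/D_0 << ell/v_bar
# required for the Taylor dispersion limit, using the representative values
# quoted in the text.

def time_scales(D0, v_bar, h, ell):
    """Return (transverse diffusion time, streamwise advection time) in s."""
    return h ** 2 / D0, ell / v_bar

# Geophysical debris flow: D0 ~ d^2 v_bar/h with d ~ 0.1 m, v_bar ~ 10 m/s,
# h ~ 1 m, and runout ell ~ 10 km.
t_diff, t_adv = time_scales(D0=0.1 ** 2 * 10.0 / 1.0, v_bar=10.0, h=1.0, ell=1e4)
print(t_diff, t_adv)   # ~10 s versus ~10^3 s: scale separation holds

# Laboratory chute flow: D0 ~ 1e-6 m^2/s, v_bar ~ 1 m/s, h ~ 1 cm, ell ~ 1 m.
t_diff, t_adv = time_scales(D0=1e-6, v_bar=1.0, h=1e-2, ell=1.0)
print(t_diff, t_adv)   # ~10^2 s versus ~1 s: scale separation fails
```

The first case satisfies $h^2/D_0 \ll \ell/\overline{v_x}$, while the laboratory case does not, consistent with the footnote above.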
In general, for $D = D(c,x,z,t)$, one obtains an advection-diffusion equation for the mean concentration $\bar{c}$ and an ordinary differential equation for the spatial structure of the fluctuations (see the appendix): \begin{align} \frac{\partial \bar{c}}{\partial t} + \overline{v_x}\frac{\partial \bar{c}}{\partial x} &\approx \frac{\partial}{\partial x}\left(\overline{D}\frac{\partial \bar{c}}{\partial x}\right) - \overline{v_x'\frac{\partial c'}{\partial x}},\label{eq:cbar}\\ \frac{\partial}{\partial z}\left({D}\frac{\partial c'}{\partial z}\right) &\approx v_x'\frac{\partial \bar{c}}{\partial x}, \label{eq:cprime} \end{align} where $\overline{D}$ is the depth-averaged diffusivity. Equation \eqref{eq:cprime} can be integrated, and then the fluctuation-induced diffusive flux, i.e., the last term on the right-hand side of Eq.~\eqref{eq:cbar}, can be evaluated using the fact that $v'_x$ is independent of $x$:% \begin{equation} \overline{v_x'\frac{\partial c'}{\partial x}} = \frac{\partial}{\partial x}\left[ \overline{ v_x'(z)\int_0^z \frac{1}{D} \int_0^{\tilde{z}} v_x'(\tilde{\tilde{z}}) \,\mathrm{d}\tilde{\tilde{z}} \,\mathrm{d}\tilde{z}} \frac{\partial \bar{c}}{\partial x}\right]. \label{eq:fluc_flux} \end{equation} Combining Eqs.~\eqref{eq:cbar} and \eqref{eq:fluc_flux}, we can define the effective dispersivity of $\bar{c}$ (see also \cite{be93,gs12}) as \begin{equation} \mathcal{D} = \frac{1}{h} \int_0^h D \,\mathrm{d}z - \frac{1}{h} \int_0^h v_x'(z)\int_0^z \frac{1}{D} \int_0^{\tilde{z}} v_x'(\tilde{\tilde{z}}) \,\mathrm{d}\tilde{\tilde{z}} \,\mathrm{d}\tilde{z} \,\mathrm{d}z. \label{eq:d_eff} \end{equation} The first term is the influence of the basic diffusion process alone, while the second term gives the contribution of the shear via the ``fluctuations'' $v_x'$ in the velocity.
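Equation~\eqref{eq:d_eff} can be checked against a case with a known answer: for plane Couette flow with constant diffusivity, the classical Taylor--Aris result is $\mathcal{D}/D_0 = 1 + Pe^2/30$ with $Pe = \overline{v_x}h/D_0$ \cite{be93}. The following Python sketch (our own illustration, non-dimensionalized so that $h = \overline{v_x} = 1$) evaluates the nested integrals of Eq.~\eqref{eq:d_eff} by midpoint quadrature:

```python
# Midpoint-rule check of the effective dispersivity formula for plane
# Couette flow with constant diffusivity D0. Non-dimensionalized with
# h = v_bar = 1, so v_x(z) = 2z and the fluctuation is v_x'(z) = 2z - 1.

N = 20000
dz = 1.0 / N
z_mid = [(i + 0.5) * dz for i in range(N)]

D0 = 0.1                                   # constant diffusivity => Pe = 10
vprime = [2.0 * zi - 1.0 for zi in z_mid]  # velocity fluctuation v_x - v_bar

# Cumulative inner integrals: G(z) = int_0^z v' dz~, F(z) = int_0^z (G/D) dz~
F = []
acc_G = 0.0
acc_F = 0.0
for vp in vprime:
    acc_G += vp * dz
    acc_F += (acc_G / D0) * dz
    F.append(acc_F)

shear_term = sum(vp * Fi for vp, Fi in zip(vprime, F)) * dz  # (1/h) int v' F
D_eff = D0 - shear_term        # first term of the dispersivity formula is D0

Pe = 1.0 / D0                  # Pe = v_bar h / D0
print(D_eff / D0, 1.0 + Pe ** 2 / 30.0)
```

For $Pe = 10$ both numbers come out near $4.33$, confirming the quadrature against the closed-form Taylor--Aris coefficient $1/30$.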
\section{Rapid granular flow down an inclined plane} \label{sec:flow_incline} Consider the flow of a granular material down an incline at an angle $\theta$ with respect to the horizontal, as shown in Fig.~\ref{fig:shear_flow}. We assume the flow is fully developed and steady, and the thickness of the layer is approximately $h$ everywhere. The local viscoplastic rheology model \cite{jfp06} can be used to show \cite{afp13,at09,k11} that the local shear rate varies as the square root of the local depth: \begin{equation} \dot\gamma \equiv \frac{\partial v_x}{\partial z} = A\sqrt{h-z}, \label{eq:const_rel} \end{equation} where $A$ is a constant. Typically, this type of model corresponds to an experiment performed at constant pressure at the free surface, so that the pressure distribution throughout the layer is hydrostatic \cite{afp13}. Under these conditions, the layer thickness $h$ can fluctuate.\footnote{Streamwise variations of the layer thickness of the form $h(x) = h_0[1+\beta f(x)]$ have been shown to lead to contributions on the order of $\beta^2$ to the effective dispersivity $\mathcal{D}$ \cite{bdlb09}. Hence, streamwise variations of the layer could be incorporated into the dispersion calculation, by replacing $h$ with $h(x)$ everywhere, without changing the result, as long as the variations are small, i.e., $\beta = \mathcal{O}(h_0/\ell) \ll 1$, which renders the $\mathcal{O}(h_0^2/\ell^2)$ contributions to $\mathcal{D}$ negligible within the chosen order of approximation (see the appendix). Furthermore, we expect that the Bagnold profile remains valid for such $h(x)$ with $\beta\ll1$.} However, here, we assume $h\approx const.$ and similarly the volume fraction $\phi\approx const.$ to a first approximation. This assumption is consistent with experiments \cite{afp13}. Thus, $h$ is representative of the thickness of the layer of \emph{fluidized} material, not of the static packing prior to flow.
Integrating Eq.~\eqref{eq:const_rel} and enforcing ``no slip'' at the bottom surface, $v_x(0) = 0$, yields the classical Bagnold profile \cite{b54,seghlp01}: \begin{multline} v_x(z) = \frac{2}{3}A\left[h^{3/2} - \left(h-z\right)^{3/2}\right],\\ A = \frac{I_0}{d}\left(\frac{\tan\theta - \tan\theta_0}{\tan\theta_2 - \tan\theta}\right)\sqrt{\phi g\cos\theta}, \label{eq:Bagnold} \end{multline} where $d$ is the particle diameter, $I_0$ is a dimensionless model parameter, $\theta_0$ is the marginal angle of repose at which flow begins, $\theta_2$ is the angle beyond which steady flow is impossible, $\phi$ is the volume fraction,\footnote{That is, the fraction of a unit area occupied by particles. Note that $c$ is the concentration of the injected or ``tagged'' particles while $\phi$ is the volume fraction of the granular material, i.e., \emph{all} particles present in a unit area, not just tagged ones.} and $g$ is the acceleration due to gravity. \begin{figure} \centerline{\includegraphics[width=0.825\columnwidth]{shear_flow}} \caption{Schematic of a rapid dense granular shear flow down an incline at an angle $\theta$. The granular material is assumed to be dry, cohesionless and monodisperse (i.e., the particles are of identical size, density, surface roughness, etc.) and the flow is steady and fully developed so that it can be approximated by the continuous profile $v_x(z)$ at any streamwise location $x$. The layer is typically dozens to hundreds of particles thick, hence $d/h\ll1$.} \label{fig:shear_flow} \end{figure} Unlike molecular solutes \cite{t53,a56} or colloidal suspensions \cite{gs12,ebs77,la87,vsb10}, granular materials are macroscopic and, thus, not subject to thermal fluctuations or ordinary Brownian motion. Nevertheless, inelastic collisions between particles can give rise to macroscopic diffusion \cite{hh80,sb76,s93}. The precise theory of diffusion of granular materials is unsettled \cite{cs12} and many models exist.
For example, as early as the 1980s, ``shear-induced diffusion'' models were proposed empirically to provide better fits to experimental data \cite{hh80}. In this case, the diffusivity is modeled as $D = D_0(1 + K \dot\gamma)$, for some constants $D_0$ and $K$. Although such an expression can be motivated for hydrodynamically interacting colloidal particles \cite{gs12}, it appears problematic for granular flows: if motion ceases ($v_x=0\Rightarrow\dot\gamma=0$), so do the inter-particle collisions, and, hence, we would expect no effective diffusion ($D=0$), whereas this model predicts $D=D_0>0$. On the other hand, kinetic theory for hard spheres can be successfully used for dilute granular flows (``granular gases'') \cite{g03}, and it has been suggested that such theories hold (with appropriate corrections) even for a moderately dense volume fraction of $\phi \approx 0.5$ and beyond \cite{at09}. In particular, it has been shown by Savage and Dai \cite{s93,sd93} that \begin{equation} D = \chi(\phi,e) d^2\left|\dot\gamma\right| \label{eq:d_gd} \end{equation} where $\chi(\phi,e)$ is a dimensionless function that depends solely on the volume fraction $\phi$ and the restitution coefficient $e$ for particle collisions. In this work, we assume that $\phi$ can be taken to be constant to a first approximation in the fully developed steady flow down an incline, hence $\chi=const.$ as well. This assumption is supported by particle-dynamics simulations \cite{sd93}. It should be noted that Eq.~\eqref{eq:d_gd} can also be deduced using only dimensional analysis. \section{Dispersion in granular shear flow on an incline} Now, we combine the mathematical results from Section~\ref{sec:math_disp} with the model from Section~\ref{sec:flow_incline}. The Bagnold profile from Eq.~\eqref{eq:Bagnold} can be re-written as \begin{equation} v_x(z) = \frac{5}{3}\overline{v_x}\left[1-\left(1-\frac{z}{h}\right)^{3/2}\right],\qquad \overline{v_x} \equiv \frac{2}{5} A h^{3/2}.
\label{eq:Bagnold2} \end{equation} Then, the Savage--Dai diffusivity from Eq.~\eqref{eq:d_gd} becomes \begin{equation} D(z) = D_0 \sqrt{1-\frac{z}{h}},\qquad D_0 \equiv \frac{5}{2}\chi d^2\frac{\overline{v_x}}{h}. \label{eq:DiffBagnold} \end{equation} Substituting Eqs.~\eqref{eq:Bagnold2} and \eqref{eq:DiffBagnold} into Eq.~\eqref{eq:d_eff}, we find that \begin{equation} \mathcal{D} = \frac{5}{3} \frac{\overline{v_x}}{h} \chi d^2 + \frac{4}{275} \frac{h^3 \overline{v_x}}{\chi d^2} = D_0\left[ \frac{2}{3} + \frac{8}{1375} \left(\frac{h^2}{\chi d^2}\right)^2 \right]. \label{eq:effective_d} \end{equation} The inclination angle $\theta$ enters into the effective dispersivity only through the constant $A$ in the mean flow speed $\overline{v_x}$, while the particle diameter enters both the base diffusivity $D_0$ directly and also $\overline{v_x}$ through $A$. Also, note that the effective dispersivity $\mathcal{D}$ depends on the ratio $h/d$ to the fourth power, which can be extremely large given that $d/h \simeq 10^{-5}-1$ in the context of landslides and debris flows, as discussed in the introduction. By analogy to the fluids context, we can introduce a P\'{e}clet number $Pe = \overline{v_x}h/D_0$ as the ratio of the transverse diffusion and advection time scales. Using the definition of $D_0$ from Eq.~\eqref{eq:DiffBagnold}, $Pe = 2h^2/(5\chi d^2)$, and the effective dispersivity from Eq.~\eqref{eq:effective_d} can then be written as $\mathcal{D} = D_0\frac{2}{3}\left( 1 + \frac{3}{55} Pe^2 \right)$. Furthermore, let us introduce the dimensionless variables:\footnote{By the linearity of Eq.~\eqref{eq:cbar}, $c_0$ is arbitrary.
For definiteness, it can be taken to be, e.g., $c_0 = \int_{-\infty}^{+\infty} \bar{c}(x,0) \,\mathrm{d} x$ for a finite mass initial condition.} $\bar{c} = c_0\bar{C}$, $t = ({\ell}/\overline{v_x})T$, $z = hZ$, $x = \ell X$, with $h/\ell \equiv \epsilon$, then Eq.~\eqref{eq:cbar} becomes \begin{equation} \frac{\partial \bar{C}}{\partial T} + \frac{\partial \bar{C}}{\partial X} = \frac{\epsilon}{Pe}\frac{2}{3} \left(1 + \frac{3}{55} Pe^2 \right)\frac{\partial^2 \bar{C}}{\partial X^2}. \end{equation} In dispersion problems, one is typically interested in the release of a finite mass of material, which can be approximated by a point-source initial condition $\bar{C}(X,0) = \delta(X)$, where $\delta(\cdot)$ is the Dirac delta function, subject to decay boundary conditions $\bar{C}(X,T) \to 0$ as $|X|\to \infty$; other initial conditions are possible as well \cite{t53}. Switching to the moving frame, where $\xi = (X-T)/\sqrt{2\epsilon/(3 Pe)}$ is the streamwise coordinate, we arrive at the final form of the macrotransport equation: \begin{equation} \frac{\partial \bar{C}}{\partial T} = \left(1 + \frac{3}{55} Pe^2 \right)\frac{\partial^2 \bar{C}}{\partial \xi^2}. \label{eq:mte_gran} \end{equation} For the point-source initial condition, the exact solution to the ``dispersion equation'' \eqref{eq:mte_gran} is \begin{equation} \bar{C}(\xi,T) = \frac{1}{\sqrt{4\pi \tilde{\mathcal{D}} T}}\exp\left(-\frac{\xi^2}{4 \tilde{\mathcal{D}} T}\right), \end{equation} where $\tilde{\mathcal{D}} = 3\mathcal{D}/(2D_0) = 1+(3/55)Pe^2$ using Eq.~\eqref{eq:effective_d}. In other words, the dispersing material spreads like a Gaussian with diffusivity $\tilde{\mathcal{D}}$ in the moving frame. Meanwhile, the {classical Taylor--Aris} version of Eq.~\eqref{eq:mte_gran} for plane Couette flow \cite{be93} is \begin{equation} \frac{\partial \bar{C}}{\partial T} = \left( 1 + \frac{1}{30} Pe^2 \right)\frac{\partial^2 \bar{C}}{\partial \zeta^2},\qquad \zeta = \frac{X-T}{\sqrt{\epsilon/Pe}}.
\label{eq:mte_couette} \end{equation} The effective dispersivities in Eqs.~\eqref{eq:mte_gran} and \eqref{eq:mte_couette} are the same order of magnitude ($3/55 \approx 0.055$, $1/30 \approx 0.033$) for a given $Pe$. Therefore, Taylor--Aris shear dispersion should be an observable phenomenon in rapid dense granular flow, just as it is for molecular solutes in fluids. \section{Dispersion in a generic 2D shear profile} More generally, we can consider the shear profiles given by the velocity field \begin{multline} v_x(z) = \left(\frac{1+\alpha}{\alpha}\right) \overline{v_x} \left[1 - \left(1 - \frac{z}{h}\right)^\alpha \right] \\ \Rightarrow\quad D = D_0 \left(1 - \frac{z}{h}\right)^{\alpha-1}, \quad D_0 \equiv (1+\alpha)\frac{\overline{v_x}}{h}\chi d^2. \label{eq:gen_shear} \end{multline} For monodisperse materials, we expect that $1\le \alpha \le 2$, where $\alpha = 1$ and $\alpha = 2$ correspond to Couette and Poiseuille flow, respectively, of a Newtonian fluid between two parallel plates, while $\alpha = 3/2$ is the Bagnold profile for granular flow on an incline. For $\alpha < 1$, the velocity profile is convex; such profiles have been measured experimentally \cite{waecmmga11,fsuol14} in bidisperse chute flows, in which significant size segregation occurs. Following the same procedure as above, we obtain the effective dispersivities for such flow profiles: \begin{equation} \frac{\mathcal{D}}{D_0} = \begin{cases} \displaystyle\frac{1}{\alpha} \left[1 + \displaystyle\frac{\alpha}{2(4-\alpha)(4+\alpha)} Pe^2\right], \;&D \propto \dot\gamma,\\[4mm] 1 + \displaystyle\frac{2}{3(9+9\alpha+2\alpha^2)} Pe^2, &D = const.,\end{cases} \label{eq:d_eff_gen_shear} \end{equation} with the P\'eclet number defined as before. Let us define the \emph{enhancement factor} as the coefficient of $Pe^2$ in the expressions in Eq.~\eqref{eq:d_eff_gen_shear}. Figure~\ref{fig:efac} shows the dependence of the enhancement factors on the shear profile exponent $\alpha$.
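As a consistency check (our own, in exact rational arithmetic; the helper names are ours), the general expressions in Eq.~\eqref{eq:d_eff_gen_shear} reduce to the special cases obtained earlier: $\alpha = 3/2$ with $D\propto\dot\gamma$ reproduces the $Pe^2$ coefficient $\frac{2}{3}\cdot\frac{3}{55} = \frac{2}{55}$ of Eq.~\eqref{eq:effective_d}, and $\alpha = 1$ with constant $D$ reproduces the plane-Couette Taylor--Aris coefficient $1/30$ of Eq.~\eqref{eq:mte_couette}:

```python
from fractions import Fraction as Fr

# Consistency check of the two branches of the general dispersivity result,
# using exact rational arithmetic.

def shear_rate_D(alpha):
    """Return (prefactor, Pe^2 coefficient) of D_eff/D0 for D ~ shear rate."""
    pref = Fr(1) / alpha
    return pref, pref * alpha / (2 * (4 - alpha) * (4 + alpha))

def constant_D(alpha):
    """Return the Pe^2 coefficient of D_eff/D0 for constant diffusivity."""
    return Fr(2, 3) / (9 + 9 * alpha + 2 * alpha ** 2)

# Bagnold case: prefactor 2/3 and coefficient 2/55, i.e., (2/3)(1 + (3/55) Pe^2)
print(shear_rate_D(Fr(3, 2)))
# Plane Couette with constant D: the Taylor--Aris coefficient 1/30
print(constant_D(Fr(1)))
```

Both branches therefore match the incline and Couette results quoted above.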
It is evident that for larger $\alpha$, the dispersivity of a material with shear-rate-dependent diffusivity increases significantly over the constant-diffusivity case. \begin{figure}[h] \centerline{\includegraphics[width=0.8\columnwidth]{efac.pdf}} \caption{Enhancement factors (i.e., coefficients of $Pe^2$ in Eq.~\eqref{eq:d_eff_gen_shear}) as functions of the shear profile exponent $\alpha$ in Eq.~\eqref{eq:gen_shear}. The solid curve represents the case of shear-rate-dependent diffusivity, while the dashed curve corresponds to the case of constant diffusivity. Vertical dotted lines are a guide-to-eye representing $\alpha = 1$ (plane Couette flow of a Newtonian fluid) and $\alpha = 3/2$ (Bagnold profile for a dense granular flow down an incline).} \label{fig:efac} \end{figure} \section{Conclusion} In this paper, we presented the calculation of the Taylor--Aris effective dispersivity for the rapid flow of a dry, cohesionless monodisperse granular material down an incline, assuming that volume fraction variations are negligible in the fully-developed Bagnold profile and that the diffusivity is proportional to the shear rate. In particular, for this prototypical granular flow, we found that the enhancement of the diffusivity due to the shear flow varies as the P\'eclet number squared, which is the same dependence found for molecular solutes with constant diffusivity in a shear flow of a Newtonian fluid. This result suggests that shear dispersion is a relevant transport mechanism in flows of granular materials. Moreover, we showed that with increasing concavity of the shear profile, the enhancement factor for a shear-rate-dependent diffusivity grows significantly, while the constant-diffusivity enhancement factor decays. This feature could suggest approaches for maximizing/minimizing dispersion in flows of particulate materials by controlling the shear profile. 
A limitation of the present work is that we have assumed, to a first approximation, a constant volume fraction and that the particle flux $\vec{q}$ relative to the flow profile is Fickian, namely $\vec{q} \propto -D\nabla c$, where $D$ is allowed to depend on any of the independent variables, explicitly or implicitly. Thus, an avenue of future work is to incorporate non-Fickian effects such as volume-fraction variation and segregation of bidisperse materials by generalizing Eq.~\eqref{eq:adv_diff} using mixture theory \cite{gt05}, which leads to the addition of, e.g., a term proportional to $S\dot\gamma\phi(1-\phi)$ in $\vec{q}$, where $S\dot\gamma$ is a percolation velocity (see, e.g., \cite{sl88,waecmmga11,fsuol14}). For the case of granular materials immersed in a viscous fluid (e.g., concentrated colloidal suspensions), shear-induced migration effects due to hydrodynamic interactions \cite{ebs77,la87,vsb10,pabga92} could also be included along these lines by augmenting $\vec{q}$ with a term proportional to $d^2 \phi \nabla(\phi\dot\gamma)$. These extensions of the problem lead to concentration-dependence effects and, consequently, to \emph{nonlinear} dispersion equations (see, e.g., \cite{gs12,yzpb11,gc12}) and/or dispersion processes with streamwise variations of the mean flow speed \cite{sb99}. Finally, in the related context of porous media, it has been suggested that even \emph{nonlocal} effects can arise in the macrotransport equation \cite{kb87} (see also the discussion in \cite{yj91}). In conclusion, we hope that this work will stimulate further research on the interaction between shear, diffusion and dispersion in flows of granular materials. In particular, it would be of interest to design experiments that lead to the verification of the theoretical results presented herein. 
\raggedbottom \begin{acknowledgements} I.C.C.\ was supported by the National Science Foundation (NSF) under Grant No.\ DMS-1104047 (at Princeton University) and by the LANL/LDRD Program through a Feynman Distinguished Fellowship (at Los Alamos National Laboratory). LANL is operated by Los Alamos National Security, L.L.C. for the National Nuclear Security Administration of the U.S. Department of Energy under Contract No. DE-AC52-06NA25396. H.A.S.\ thanks the NSF for support via Grant No.\ CBET-1234500. We acknowledge useful discussions with Ian Griffiths and Gregory Rubinstein on the derivation of the dispersion equations for the case of non-constant diffusivity, and we thank Ben Glasser for helpful conversations. \end{acknowledgements}
\section{Introduction} \label{282P:introduction} Volatiles are vital to life as we know it and are critically important to future space exploration, yet basic knowledge about where volatiles (e.g., H$_2$O, CO, CH$_4$) are located within our own solar system is still incomplete. Moreover, the origin of solar system volatiles, including terrestrial water, remains unresolved. Investigating sublimation-driven active solar system bodies can help answer these questions \citep{hsiehPopulationCometsMain2006,jewittAsteroidCometContinuum2022}. We define volatile reservoirs as a dynamical class of minor planet that harbors volatile species, such as water ice. Comets have long been known to contain volatiles, but other important reservoirs are coming to light, such as the active asteroids -- objects on orbits normally associated with asteroids, such as those found in the main-belt, that surprisingly display cometary features such as tails and/or comae \citep{jewittActiveAsteroids2015a}. Fewer than 30 active asteroids have been discovered \citep{chandlerSAFARISearchingAsteroids2018} since the first, (4015)~Wilson-Harrington, was discovered in 1949 \citep{cunninghamPeriodicCometWilsonHarrington1950} and, as a result, they remain poorly understood. One scientifically important subset of active asteroids consists of members that display recurrent activity attributed to sublimation: the \acp{MBC} \citep{hsiehMainbeltCometsPanSTARRS12015}. An important diagnostic indicator of sublimating volatiles, like water ice, is recurrent activity near perihelion \citep{hsiehOpticalDynamicalCharacterization2012,snodgrassMainBeltComets2017}, a feature common to the \acp{MBC} \citep{hsiehMainbeltCometsPanSTARRS12015,agarwalBinaryMainbeltComet2017,hsieh2016ReactivationsMainbelt2018}. Fewer than 10 recurrently active \acp{MBC} have been discovered (though others exhibit activity attributed to sublimation), and as a result we know very little about this population.
Another potential volatile reservoir, active Centaurs, came to light after comet 29P/Schwassmann-Wachmann 1 \citep{schwassmannNEWCOMET1927} was identified as a Centaur following the 1977 discovery of (2060)~Chiron \citep{kowalSlowMovingObjectKowal1977}. Centaurs, found between the orbits of Jupiter and Neptune, are cold objects thought to primarily originate in the Kuiper Belt prior to migrating to their current orbits (see review, \citealt{jewittActiveCentaurs2009}). The dynamical properties of these objects are discussed in Section \ref{282P:sec:dynamicalClassification}. Fewer than 20 active Centaurs have been discovered to date, thus they, like the active asteroids, are both rare and poorly understood. In order to enable the study of active objects in populations not typically associated with activity (e.g., \acp{NEO}, main-belt asteroids), we created a Citizen Science project designed to identify roughly 100 active objects via volunteer identification of activity in images of known minor planets. The Citizen Science paradigm involves crowdsourcing tasks that are as yet too complex for computers to perform, while concurrently carrying out an outreach program that engages the public in a scientific endeavor. Launched in Fall 2021, our \ac{NSF}-funded \acs{NASA} partner program \textit{Active Asteroids}\footnote{\url{http://activeasteroids.net}} immediately began yielding results.
\begin{figure*}[ht] \centering \begin{tabular}{cccc} \begin{overpic}[width=0.23\linewidth]{093_0323137H_20210314-08071_4179806_8.8y_qC_Ob_090s_i-3_p2_ovX.png}\put (5,7) {\huge\color{green} \textbf{\contour{black}{a}}}\put (45,8) {\large\color{green} \textbf{\contour{black}{2021-03-14}}}\end{overpic} & \begin{overpic}[width=0.23\linewidth]{282P20210331UT2208sec12x120stellar_Jaeger_crop2.png}\put (5,7) {\huge\color{green} \textbf{\contour{black}{b}}}\put (45,8) {\large\color{green} \textbf{\contour{black}{2021-03-31}}}\end{overpic} & \begin{overpic}[width=0.23\linewidth]{282P_20210404_2250_5B_5min_1600_R_Fichtl_crop2.png}\put (5,7) {\huge\color{green} \textbf{\contour{black}{c}}}\put (45,8) {\large\color{green} \textbf{\contour{black}{2021-04-04}}}\end{overpic} & \begin{overpic}[width=0.23\linewidth]{20220607utGMOSS_MADS_blue_arrows_psAngAmv.png}\put (5,7) {\huge\color{green} \textbf{\contour{black}{d}}}\put (45,8) {\large\color{green} \textbf{\contour{black}{2022-06-07}}}\end{overpic}\\ \begin{overpic}[width=0.23\linewidth]{2003_BM80_2012-03-28_09.15.27.160000_1534624p_chip23-ccd22_126arcsec_NuEl.png}\put (5,7) {\huge\color{green} \textbf{\contour{black}{e}}}\put (45,8) {\large\color{green} \textbf{\contour{black}{2012-03-28}}}\end{overpic} & \begin{overpic}[width=0.23\linewidth]{2003_BM80_2013-05-05_06.13.23.267946_c4d_130505_061323_opi_r_v1_chip42-N11_126arcsec_NuEl.png}\put (5,7) {\huge\color{green} \textbf{\contour{black}{f}}}\put (45,8) {\large\color{green} \textbf{\contour{black}{2013-05-05}}}\end{overpic} & \begin{overpic}[width=0.23\linewidth]{2003_BM80_2013-06-13_10.35.07.260000_1631654p_chip23-ccd22_126arcsec_NuEl.png}\put (5,7) {\huge\color{green} \textbf{\contour{black}{g}}}\put (45,8) {\large\color{green} \textbf{\contour{black}{2013-06-13}}}\end{overpic} & \begin{overpic}[width=0.23\linewidth]{2003_BM80_2021-03-17_06.13.35.614647_c4d_210317_061335_opi_i_v1_chip60-N29_126arcsec_NuEl.png}\put (5,7) {\huge\color{green} \textbf{\contour{black}{h}}}\put 
(45,8) {\large\color{green} \textbf{\contour{black}{2021-03-17}}}\end{overpic} \end{tabular} \caption{ Top row: four images, spanning 15 months, showing \objnameFull{} activity during the recent 2021--2022 activity epoch. \textbf{(a)} Epoch II thumbnail image of 282P{} was classified as ``active'' by 14 of 15 volunteers of our Citizen Science project \textit{Active Asteroids}, a NASA Partner program. This 90~s $i$ band image was taken with the Dark Energy Camera on UT 2021 March 14, Prop. ID 2019A-0305 (\acs{PI} Drlica-Wagner). \textbf{(b)} Epoch II, 12$\times$300~s co-added exposures imaged by Michael Jäger with a QHY600 camera on a 14'' Newtonian telescope in Weißenkirchen, Austria. Image reproduced with permission of Michael Jäger. \textbf{(c)} Epoch II 5$\times$300~s co-added images captured by Roland Fichtl using a CDS cooled Canon 5D Mark III camera on a 16'' Newtonian telescope in Engelhardsberg, Germany. Image reproduced with permission of Roland Fichtl. \textbf{(d)} For this most recent Epoch II image we co-added six 120~s $g'$ band images of 282P{} (green dashed arrow) we acquired on UT 7 June 2022 with the \ac{GMOS} imager on the 8.1~m Gemini South telescope (Prop. ID GS-2022A-DD-103, \acs{PI} Chandler); a tail is clearly visible (orange arrows). Bottom row: Archival images of 282P{} that show clear evidence of activity. For each 126\arcsec$\times$126\arcsec thumbnail image, north is up and east is left. With the center of each image as the origin, the antisolar (yellow -$\odot$) and antimotion (red -$v$) directions (often correlated with tail appearance) are indicated. 282P{} is indicated by the green dashed arrow, and visible activity is marked by the white arrows. \textbf{(e)} Epoch I image from UT 2012 March 28 MegaPrime 120~s $r$ band, Prop. ID 12AH16 (\acs{PI} Wainscoat). \textbf{(f)} Epoch I image from UT 2013 May 5 DECam 150~s $r$ band, Prop. ID 2013A-0327 (\acs{PI} Rest). 
\textbf{(g)} Epoch I image from UT 2013 June 13 MegaPrime 120~s $r$ band, Prop. ID 13AH09 (\acs{PI} Wainscoat). \textbf{(h)} Epoch II image from UT 2021 March 17 DECam 90~s $i$ band, Prop. ID 2019A-0305 (\acs{PI} Drlica-Wagner). } \label{282P:fig:282P} \end{figure*} \objnameFull{}, hereafter 282P{}, was originally discovered as 2003~BM$_{80}$ on UT 2003 Jan 31 by Brian Skiff of the \ac{LONEOS} survey, and independently as 2003~FV$_{112}$ by \ac{LINEAR} on UT 2003 Apr 18. 282P{} was identified to be active during its 2012--2013 epoch (centered on its perihelion passage) in 2013 \citep{bolinComet2003BM2013}. Here, we introduce an additional activity epoch, spanning 2021--2022. In this work we (1) present our \ac{NASA} Partner Citizen Science project \textit{Active Asteroids}, (2) describe how volunteers identified activity that led to our investigation into 282P{}, (3) present (a) archival images and (b) new observations of 282P{} that show it has undergone periods of activity during at least two epochs (2012--2013 and 2021--2022) spanning consecutive perihelion passages, (4) classify 282P{} as a \ac{QHO}, (5) explore the migratory nature of this object through dynamical modeling, including identification of a dynamical pathway between \acp{QHO} and active asteroids, and (6) determine volatile sublimation as the most probable activity mechanism. \section{Citizen Science} \label{282P:subsec:citsci} We prepared thumbnail images (e.g., Figure \ref{282P:fig:282P}a) for examination by volunteers of our NASA Partner Citizen Science project \textit{Active Asteroids}, hosted on the Zooniverse\footnote{\url{https://www.zooniverse.org}} online Citizen Science platform. 
First we extract thumbnail images from publicly available pre-calibrated \ac{DECam} archival images using a pipeline, \ac{HARVEST}, first described in \cite{chandlerSAFARISearchingAsteroids2018} and expanded upon in \cite{chandlerSixYearsSustained2019,chandlerCometaryActivityDiscovered2020a,chandlerRecurrentActivityActive2021}. Each 126\arcsec$\times$126\arcsec\ thumbnail image shows one known minor planet at the center of the frame. We optimize the Citizen Science process by automatically excluding thumbnail images based on specific criteria, for example when (a) the image depth is insufficient for detecting activity, (b) no source was detected in the thumbnail center, or (c) too many sources were in the thumbnail to allow for reliable target identification; see \cite{chandlerChasingTailsActive2022} for an in-depth description. Our workflow is simple: we show volunteers an image of a known minor planet and ask whether or not they see evidence of activity (like a tail or coma) coming from the object at the center of the image, as marked by a reticle (Figure \ref{282P:fig:282P}a). Each thumbnail is examined by at least 15 volunteers to minimize volunteer bias. To help train volunteers and validate that the project is working as intended, we created a training set of thumbnail images that we positively identified as showing activity, consisting of comets and other active objects, such as active asteroids. Training images are injected at random, though the interval of injection decays over time so that experienced volunteers only see a training image 5\% of the time. We take the ratio of ``positive for activity'' classifications to the total number of classifications the object received as a score to estimate the likelihood of the object being active. Members of the science team visually examine all images with a likelihood score of $\ge$80\% and flag candidates that warrant archival image investigation and telescope follow-up (Section \ref{282P:sec:observations}).
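This scoring rule can be sketched in a few lines of Python; the function names and the minimum-classification check are illustrative, not taken from the project codebase:

```python
def activity_score(n_positive, n_total):
    """Fraction of volunteers who marked a thumbnail as showing activity."""
    if n_total <= 0:
        raise ValueError("thumbnail has no classifications")
    return n_positive / n_total


def flag_for_review(n_positive, n_total, threshold=0.80, min_classifications=15):
    """Flag a thumbnail for science-team review once it has enough
    classifications and a score at or above the 80% threshold."""
    return (n_total >= min_classifications
            and activity_score(n_positive, n_total) >= threshold)


# The 2021 March 14 thumbnail of 282P: 14 of 15 volunteers voted "active"
print(activity_score(14, 15))   # ~0.93
print(flag_for_review(14, 15))  # True
```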
We also learn of activity candidates through Zooniverse forums where users interact with each other, moderators, and our science team. Volunteers can share images they find interesting, which has, in turn, led us directly to discoveries. As of this writing, over 6,600 volunteers have participated in \textit{Active Asteroids}. They have conducted over 2.8$\times10^6$ classifications, completing assessment of over 171,000 thumbnail images. One image of 282P{} from UT 2021 March 14 (Figure \ref{282P:fig:282P}a) received a score of 93\% after 14 of 15 volunteers classified the thumbnail as showing activity. A second image from UT 2021 March 17 (Figure \ref{282P:fig:282P}h) was classified as active by 15 of 15 volunteers, providing additional strong evidence of activity from 2021 March. \section{Observations} \label{282P:sec:observations} \subsection{Archival Data} \label{282P:susbec:archivalData} \begin{figure*}[ht] \centering \includegraphics[width=0.8\linewidth]{323137_2003_BM80_at807_G37.pdf} \caption{282P{} heliocentric distance, apparent brightness, observability, and temperature, from 2012 through 2025. \textbf{Heliocentric Distance:} Activity detections (triangles) are marked as positive (filled red) and negative (unfilled blue) detections and as either inbound ($\blacktriangledown$) or outbound ($\blacktriangle$). Observations are cataloged in Appendix \ref{282P:sec:observationsTable}. Also indicated are perihelion (orange $q$) and aphelion (blue $Q$) passages. \textbf{Apparent Magnitude:} The apparent $V$-band magnitude through time of 282P{}. \textbf{Observability:} Our observability metric for \acf*{CTIO}, site code 807 (blue solid line) and the \acf*{LDT}, site code G37 (orange dashed line), depicting the number of hours 282P{} was observable ($>15\degr$ above the horizon between sunset and sunrise) during a given \ac{UT} observing date.
Opposition events and conjunctions result in maxima and minima concurrent with apparent magnitude, respectively. \textbf{Temperature:} Modeled temperature by date for the thermophysical extremes: a ``flat slab'' ($\chi=1$, top line), and an isothermal body ($\chi=4$, bottom line). } \label{282P:fig:ActivityTimeline} \end{figure*} For each candidate active object stemming from \textit{Active Asteroids} we conduct an archival data investigation, following the procedure described in \cite{chandlerRecurrentActivityActive2021}. For this task, we query public astronomical image archives and identify images which may show 282P{} in the \ac{FOV}. We download the data, extract thumbnail images centered on 282P{}, and visually examine all images to search for evidence of activity. After visually inspecting $>400$ thumbnail images we found 57 images (listed in Appendix \ref{282P:sec:observationsTable}) in which we could confidently identify 282P{} in the frame. The remaining images either were not deep enough, did not actually capture 282P{} (e.g., 282P{} was not on a detector), or suffered from image artifacts that made them unsuitable for activity detection. The 57 images span 22 observing dates; nine dates had at least one image that we ascertained showed probable activity: five from the 2012--2013 epoch and four from the 2021--2022 apparition. Section \ref{282P:sec:observations} provides a complete listing of observations used in this work. Figure \ref{282P:fig:ActivityTimeline} shows four plots with shared $x$-axes (years). Apparent magnitude and observability (the number of hours an object is above the horizon while the Sun is below the horizon) together provide insight into potential observational biases. For example, observations for detecting activity are ideal when 282P{} is brightest, near perihelion, and observable for many hours in an observing night.
When contrasting hemispheres, this plot makes it clear that some periods (e.g., 2016--2020) are more favorable for observations in the northern hemisphere, whereas other observation windows (e.g., 2013--2015, 2022) are better suited to southern hemisphere facilities. \subsection{Follow-up Telescope Observations} \label{282P:subsec:telescopeobservations} \paragraph{Magellan} During twilight on UT 2022 March 7 we observed 282P{} with the \ac{IMACS} instrument \citep{dresslerIMACSInamoriMagellanAreal2011} on the Magellan 6.5~m Baade telescope located atop Las Campanas Observatory (Chile). We successfully identified 282P{} in the images; however, 282P{} was in front of a dense part of the Milky Way, preventing us from unambiguously identifying activity. We used these observations to inform our Gemini \ac{SNR} calculations. \paragraph{VATT} On UT 2022 April 6 we observed 282P{} with the 1.8~m \ac{VATT} at the \ac{MGIO} in Arizona (Proposal ID S165, \ac{PI} Chandler). 282P{} was in an especially dense part of the galaxy, so we conducted test observations to assess the viability of activity detection under these conditions. We concluded object detection would be challenging and activity detection essentially impossible in such a dense field. \paragraph{LDT} On UT 2022 May 21 we observed 282P{} with the \ac{LDT} in Arizona (\ac{PI} Chandler). Finding charts indicated 282P{} was in a less dense field compared to our \ac{VATT} observations; however, we were unable to reliably resolve 282P{} or identify any activity because the field was still too crowded. \paragraph{Gemini South} On UT 2022 June 7 we observed 282P{} with the \ac{GMOS} South instrument \citep{hookGeminiNorthMultiObjectSpectrograph2004,gimenoOnskyCommissioningHamamatsu2016} on the 8.1~m Gemini South telescope located atop Cerro Pachón in Chile (Proposal ID GS-2022A-DD-103, \acs{PI} Chandler).
We timed this observation to take place during a $\sim$10 day window when 282P{} was passing in front of a less dense region of the Milky Way. We acquired eighteen images, six each in $g'$, $r'$, and $i'$. Activity was clearly visible in the reduced data in all filters, with activity appearing strongest in $g'$ (Figure \ref{282P:fig:282P}d). Our observations confirmed 282P{} was still active, 15 months after the 2021 archival data, evidence supporting sublimation as the most likely cause for activity (Section \ref{282P:sec:mechanism}). \section{Dynamical Modeling} \label{282P:subsec:dynamicalmodeling} We analyzed 282P{} orbital characteristics in order to (1) determine its dynamical class (Section \ref{282P:sec:dynamicalClassification}), and (2) inform our activity mechanism assessment (Section \ref{282P:sec:mechanism}). We simulated a cloud of 500 282P{} orbital clones, randomly drawn from Gaussian distributions centered on the current fitted parameters of 282P{}, with widths corresponding to uncertainties of those fits (Appendix \ref{282P:sec:ObjectData} lists parameters and associated uncertainties), as reported by \acs{JPL} Horizons \citep{giorginiJPLOnLineSolar1996}. We modeled the gravitational influence of the Sun and the planets (except Mercury) on each orbital clone using the \texttt{\ac{IAS15}} N-body integrator \citep{reinIAS15FastAdaptive2015}, typically accurate to machine precision, with the \texttt{REBOUND} \texttt{Python} package\footnote{\url{https://github.com/hannorein/rebound}} \citep{reinREBOUNDOpensourceMultipurpose2012,reinHybridSymplecticIntegrators2019}. We ran simulations 1,000 years forward and backward through time. 
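The clone-generation step can be sketched as follows. The element values below come from the text, but the 1-sigma uncertainties are placeholders for illustration (the study draws uncertainties from the \acs{JPL} Horizons orbit fits), and the commented lines indicate how each clone would be handed to \texttt{REBOUND}:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Nominal orbital elements of 282P (a in au, inclination in radians) and
# illustrative 1-sigma uncertainties -- placeholders, not the actual
# JPL Horizons fit uncertainties.
NOMINAL = {"a": 4.240, "e": 0.188, "inc": np.radians(5.8)}
SIGMA = {"a": 1e-6, "e": 1e-7, "inc": np.radians(1e-5)}


def make_clones(n=500):
    """Draw n orbital clones from Gaussians centered on the nominal elements."""
    return {key: rng.normal(NOMINAL[key], SIGMA[key], size=n) for key in NOMINAL}


clones = make_clones(500)

# Each clone then becomes a massless test particle in a REBOUND simulation
# using the IAS15 integrator, e.g.:
#   sim = rebound.Simulation()
#   sim.integrator = "ias15"
#   sim.add(m=1.0)                        # the Sun
#   ...add Venus through Neptune...
#   sim.add(a=a_i, e=e_i, inc=inc_i)      # one orbital clone
#   sim.integrate(t)                      # integrate +/- 1000 yr
```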
Longer integrations were unnecessary because dynamical chaos ensues prior to $\sim$200 years ago and after $\sim$350 years into the future, thus no meaningful orbital elements can be derived outside of this window. \begin{figure*} \centering \begin{tabular}{cc} \includegraphics[width=0.45\linewidth]{dynamics_282P_orbit_t0.pdf} & \includegraphics[width=0.48\linewidth]{dynamics_a_323137_10e3years.png}\\ (a) & (b)\\ \\ \includegraphics[width=0.48\linewidth]{dynamics_e_323137_10e3years.png} & \includegraphics[width=0.48\linewidth]{dynamics_i_323137_10e3years.png}\\ (c) & (d)\\ \end{tabular} \caption{ Results from dynamical integration of 282P{} orbital clones. For all plots, time $t=0$ corresponds to UT 2022 January 21. Jovian and Saturnian close encounters prevent accurate orbital parameter determination outside $-180\lesssim t\lesssim300$ yrs, given known orbital uncertainties. \textbf{(a)} Orbital diagram for 282P{} and nearby planets; note that Uranus and Neptune were included in our simulations but they are not shown here. \textbf{(b)} Semi-major axis $a$ evolution. \textbf{(c)} Eccentricity $e$ evolution. \textbf{(d)} Inclination $i$ evolution. } \label{282P:fig:orbitevolution1} \end{figure*} \begin{figure*} \centering \begin{tabular}{cc} \includegraphics[width=0.48\linewidth]{dynamics_dJ_323137_10e3years.png} & \includegraphics[width=0.48\linewidth]{dynamics_dS_323137_10e3years.png}\\ (a) & (b)\\ \\ \includegraphics[width=0.48\linewidth]{dynamics_r_323137_10e3years.png} & \includegraphics[width=0.48\linewidth]{dynamics_TJ_323137_10e3years.png}\\ (c) & (d)\\ \end{tabular} \caption{ Additional results from dynamical integration of 282P{} orbital clones. For each plot, time $t=0$ is UT 2022 January 21. Close encounters with Jupiter and Saturn are so significant that orbital elements cannot be accurately determined outside $-180\lesssim t\lesssim300$ yrs, given orbital uncertainties. \textbf{(a)} Distance between Jupiter and 282P{} as a function of time.
Indicated Hill radii provide references for the degree of orbit alteration imparted by a close encounter. For reference, the semi-major axes of two Jovian moons are shown: Callisto, the outermost Galilean satellite, and Sinope \citep{nicholsonDiscoveryNinthSatellite1914}, a likely captured \citep{gravPhotometricSurveyIrregular2003a} distant irregular and retrograde Jovian moon. \textbf{(b)} Distance between Saturn and 282P{} as a function of time. The semi-major axis of the irregular Saturnian moon Phoebe, believed to be captured through close encounter \citep{johnsonSaturnMoonPhoebe2005,jewittIrregularSatellitesPlanets2007}, is given for reference. \textbf{(c)} Heliocentric distance $r$ evolution. \textbf{(d)} Tisserand parameter with respect to Jupiter $T_\mathrm{J}$ (Equation \ref{282P:eq:TJ}), where the horizontal orange line representing $T_\mathrm{J}=3$ indicates the widely-adopted boundary between comet-like and asteroid-like orbits. } \label{282P:fig:orbitevolution2} \end{figure*} Results from the dynamical evolution of the 282P{} orbital clones are shown in Figure \ref{282P:fig:orbitevolution1} and Figure \ref{282P:fig:orbitevolution2}. For all plots, time $t=0$ corresponds to \ac{JD} 2459600.5 (UT 2022 Jan 21) and time ranges from $t=-250$ to $t=+350$ (1772--2372 AD). Horizontal lines at distances of one, three, and five Hill radii (Equation \ref{282P:eq:rH}) from Jupiter and Saturn are shown in Figure \ref{282P:fig:orbitevolution2} panels a and b. The Hill Radius \citep{hillResearchesLunarTheory1878} $r_H$ is a metric of orbital stability and indicates the region where a secondary body (e.g., a planet) has dominant gravitational influence over a tertiary body (e.g., a moon), with both related to a primary body, such as the Sun. 
At pericenter, the Hill radius of the secondary body can be approximated as \begin{equation} r_\mathrm{H} \approx a(1-e)(m/3M)^{1/3}, \label{282P:eq:rH} \end{equation} \noindent where $a$, $e$, and $m$ are the semi-major axis, eccentricity and mass of the secondary (Jupiter or Saturn in our case), respectively, and $M$ is the mass of the primary (here, the Sun). Close passages of a small body within a few Hill radii of a planet are generally considered to be significant perturbations and may drastically alter the orbit of the small body (see \citealt{hamiltonOrbitalStabilityZones1992} Section 2.1.2 for discussion). From $\sim$180 years ago until $\sim$300 years in the future, the orbit of 282P{} is well-constrained in our simulations. Figure \ref{282P:fig:orbitevolution2}a illustrates that 282P{} has roughly 10 close encounters (within $\sim$2 au) with Jupiter, and one with Saturn, over the range $-250<t<350$ yr. These encounters have a strong effect on the semi-major axis $a$ of 282P{} (Figure \ref{282P:fig:orbitevolution1}b), and, as illustrated by Figure \ref{282P:fig:orbitevolution2}d, a noticeable influence on its Tisserand parameter with respect to Jupiter $T_\mathrm{J}$, \begin{equation} T_\mathrm{J} = \frac{a_\mathrm{J}}{a} + 2\cos(i)\sqrt{\frac{a}{a_\mathrm{J}}\left(1-e^2\right)}, \label{282P:eq:TJ} \end{equation} \noindent where $a_\mathrm{J}$ is the semi-major axis of Jupiter and $a$, $e$ and $i$ are the semi-major axis, eccentricity and inclination of the body, respectively. $T_\mathrm{J}$ essentially describes an object's close approach speed to Jupiter or, in effect, the degree of dynamical perturbation an object will experience as a consequence of Jovian influence. 
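Both quantities are straightforward to compute; a short Python sketch of the Hill radius and Tisserand parameter expressions above (the Jupiter mass ratio and orbital elements in the examples are approximate, standard values):

```python
import math


def hill_radius(a, e, m_secondary, m_primary=1.0):
    """Pericenter Hill radius, r_H ~ a (1 - e) (m / 3M)^(1/3)."""
    return a * (1.0 - e) * (m_secondary / (3.0 * m_primary)) ** (1.0 / 3.0)


def tisserand_jupiter(a, e, inc_deg, a_jupiter=5.20):
    """Tisserand parameter of a small body with respect to Jupiter."""
    return (a_jupiter / a
            + 2.0 * math.cos(math.radians(inc_deg))
            * math.sqrt((a / a_jupiter) * (1.0 - e * e)))


# Jupiter: a ~ 5.204 au, e ~ 0.049, m/M_sun ~ 9.546e-4 (approximate values)
print(hill_radius(5.204, 0.049, 9.546e-4))   # ~0.34 au at pericenter
# 282P: a = 4.240 au, e = 0.188, i = 5.8 deg
print(tisserand_jupiter(4.240, 0.188, 5.8))  # ~2.991
```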
$T_\mathrm{J}$ is often described as invariant \citep{kresakJacobianIntegralClassificational1972} or conserved, meaning that changes in orbital parameters still result in the same $T_\mathrm{J}$, although, in practice, its value does change slightly as a result of close encounters (see Figure \ref{282P:fig:orbitevolution2}d). Due to the small Jupiter-centric distances of 282P{} during these encounters, compounded by its orbital uncertainties, the past orbit of 282P{} (prior to $t\approx-180$ yrs) is chaotic. This dynamical chaos is plainly evident in all panels as orbital clones take a multitude of paths within the parameter space, resulting in a broad range of possible orbital outcomes due only to slight variations in initial 282P{} orbital parameters. A consequential encounter with Saturn occurred around 1838 ($t\approx-184$~yr; Figure \ref{282P:fig:orbitevolution2}b), followed by another interaction with Jupiter in 1846 ($t=-176$ yr; Figure \ref{282P:fig:orbitevolution2}a). After these encounters 282P{} was a \ac{JFC} (100\% of orbital clones) with a semi-major axis between Jupiter's and Saturn's semi-major axes (Figure \ref{282P:fig:orbitevolution1}b), and crossing the orbits of both planets (Figure \ref{282P:fig:orbitevolution2}c). These highly perturbative passages placed 282P{} on the path that would lead to its current Quasi-Hilda orbit. In 1940 ($t=-82$~yr), 282P{} had a very close encounter with Jupiter, at a distance of 0.3~au -- interior to one Hill radius. As seen in Figure \ref{282P:fig:orbitevolution1}a, this encounter dramatically altered 282P{}'s orbit, shifting 282P{} from an orbit primarily exterior to Jupiter to an orbit largely interior to Jupiter (Figure \ref{282P:fig:orbitevolution1}b). This same interaction also caused 282P{}'s orbit to migrate from Jupiter- and Saturn-crossing to only a Jupiter-crossing orbit (Figure \ref{282P:fig:orbitevolution2}c). 
This step in the orbital evolution of 282P{} also changed its $T_\mathrm{J}$ (Figure \ref{282P:fig:orbitevolution2}d) to be close to the traditional $T_\mathrm{J}=3$ comet--asteroid dynamical boundary. At this point in time, 282P{} remained a \ac{JFC} (100\% of orbital clones) despite its dramatic change in orbit. Around $t\approx200$ yr, 282P{} crosses the $T_\mathrm{J}=3$ boundary dividing the \acp{JFC} and the asteroids on the order of 10 times. Although no major changes in the orbit of 282P{} occur during this time, because this boundary is a strict cutoff, relatively minor perturbations result in oscillation between dynamical classes. After a major encounter with Jupiter around 2330 AD ($t\approx308$ yr), dynamical chaos again becomes dominant and remains so for the rest of the simulation. Following this encounter, the orbit of 282P{} does not converge around any single solution. Slight diffusion following the previous several Jupiter passages is also visible in Figures \ref{282P:fig:orbitevolution1}b-d and \ref{282P:fig:orbitevolution2}a-d, and this also adds uncertainty concerning encounters around 2301 to 2306 ($t\approx280$ to $285$ yr). Although we are unable to precisely determine past and future orbits of 282P{} outside of $-180\lesssim t\lesssim300$ yr because of dynamical chaos, we are able to examine the fraction of orbital clones that finish the simulation (forwards and backwards) on orbits associated with different orbital classes. \section{Dynamical Classifications: Past, Present and Future} \label{282P:sec:dynamicalClassification} Minor planets are often classified dynamically, based on orbital characteristics such as semi-major axis.
282P{} was labeled a \ac{JFC} by \cite{hsiehMainbeltCometsPanSTARRS12015}, in agreement with a widely adopted system that classifies objects dynamically based on their Tisserand parameter with respect to Jupiter, $T_\mathrm{J}$ (Equation \ref{282P:eq:TJ}). Via Equation \ref{282P:eq:TJ}, Jupiter's $T_\mathrm{J}$ is 2.998 given $a_\mathrm{J}=5.20$, $e_\mathrm{J}=0.049$, and $i_\mathrm{J}=0.013$. Notably, objects with $T_\mathrm{J}>3$ cannot cross the Jovian orbit, thus their orbits are entirely interior or exterior to Jupiter's orbit \citep{levisonCometTaxonomy1996}. Objects with $T_\mathrm{J}<3$ are considered cometary \citep{levisonCometTaxonomy1996}, while those with $T_\mathrm{J}>3$ are not \citep{vaghiOriginJupiterFamily1973,vaghiOrbitalEvolutionComets1973}, a classification approach first suggested by \cite{carusiHighOrderLibrationsHalleyType1987,carusiCometTaxonomy1996}. \acp{JFC} have $2<T_\mathrm{J}<3$ (see e.g., \citealt{jewittActiveCentaurs2009}), and Damocloids have $T_\mathrm{J}<2$ \citep{jewittFirstLookDamocloids2005}. We note, however, that the traditional $T_\mathrm{J}$ asteroid--\ac{JFC}--Damocloid continuum does not include (or exclude) \acp{QHO}. As discussed in Section \ref{282P:introduction}, we adopt the \cite{jewittActiveCentaurs2009} definition of Centaur, which stipulates that a Centaur has an orbit entirely exterior to Jupiter, with both $q$ and $a$ interior to Neptune, and the body is not in 1:1 resonance with a planet. 282P{} has a semi-major axis $a=4.240$~au, well interior to Jupiter's $a_\mathrm{J}=5.2$~au. This disqualifies 282P{} from presently being on a Centaurian orbit. Active objects other than comets orbiting interior to Jupiter are primarily the active asteroids, defined as (1) having $T_\mathrm{J}>3$, (2) displaying comet-like activity, and (3) orbiting outside of mean-motion resonance with any of the planets.
This last stipulation rules out the Jupiter Trojans (1:1 resonance) and the Hildas (3:2 resonance with Jupiter), even though both classes have members above and below the $T_\mathrm{J}=3.0$ asteroid--comet transition line. We compute $T_\mathrm{J}=2.99136891\pm(3.73\times10^{-8})$ for 282P{} (see Appendix \ref{282P:sec:ObjectData} for a list of orbital parameters). This value does not exceed the traditional $T_\mathrm{J}=3$ cutoff; thus 282P{} cannot be considered an active asteroid in its current orbit. \acp{MBC} are an active asteroid subset defined as orbiting entirely within the main asteroid belt \citep{hsiehMainbeltCometsPanSTARRS12015}. Figure \ref{282P:fig:orbitevolution2}c shows that 282P{}'s heliocentric distance does not stay within the boundaries of the main asteroid belt (i.e., between the orbits of Mars and Jupiter), and so 282P{} does not qualify as a \ac{MBC}. \begin{figure*} \centering \begin{tabular}{ccc} \includegraphics[width=0.32\linewidth]{corotating_with_jupiter_Elst-Pizarro.pdf} & \includegraphics[width=0.32\linewidth]{corotating_with_jupiter_67P.pdf} & \includegraphics[width=0.32\linewidth]{corotating_with_jupiter_Chiron.pdf}\\ (a) Active Asteroid & (b) \acf{JFC} & (c) Centaur\\ \\ \includegraphics[width=0.32\linewidth]{corotating_with_jupiter_Hilda.pdf} & \includegraphics[width=0.32\linewidth]{corotating_with_jupiter_246P.pdf} & \includegraphics[width=0.32\linewidth]{corotating_with_jupiter_282P.pdf} \\ (d) Hilda & (e) Quasi-Hilda & (f) Quasi-Hilda\\ \end{tabular} \caption{The orbital motion of minor planets (blue lines) as seen in the reference frame corotating with Jupiter (orange lines at right edge of plots). (a) \ac{MBC} (7968)~Elst-Pizarro (133P). (b) \ac{JFC} 67P/Churyumov-Gerasimenko (previously visited by the \ac{ESA} Rosetta Spacecraft). (c) Centaur (2060)~Chiron (95P). (d) (153)~Hilda, the namesake of the Hilda dynamical class, in the 3:2 interior mean-motion resonance with Jupiter.
(e) Quasi-Hilda 246P/\acs{NEAT}, also designated 2010~V$_2$ and 2004~F$_3$. (f) Our object of study, 282P{}, in its Quasi-Hilda orbit. } \label{282P:fig:corotatingFrame} \end{figure*} Blurring the lines between \ac{JFC} and Hilda is the Quasi-Hilda regime. A Quasi-Hilda, also referred to as a \ac{QHO}, \ac{QHA} \citep{jewittOutburstingQuasiHildaAsteroid2020}, or \ac{QHC}, is a minor planet on an orbit similar to that of a Hilda \citep{tothQuasiHildaSubgroupEcliptic2006,gil-huttonCometCandidatesQuasiHilda2016}. Hildas are defined by their 3:2 interior mean-motion resonance with Jupiter; Quasi-Hildas are not in this resonance, though they do orbit near it. Quasi-Hildas likely migrated from the \ac{JFC} region (see discussion, \citealt{jewittOutburstingQuasiHildaAsteroid2020}). We favor the term \ac{QHO} or \ac{QHA} over \ac{QHC}, given that fewer than 15 Quasi-Hildas have been found to be active, while the remainder of the $>270$ identified Quasi-Hildas \citep{gil-huttonCometCandidatesQuasiHilda2016} have not been confirmed to be active. A notable member of the Quasi-Hilda class is 39P/Oterma \citep{otermaNEWCOMETOTERMA1942}, an object that was a Quasi-Hilda prior to 1963, when a very close (0.095~au) encounter with Jupiter redirected it onto a Centaurian orbit. Another notable Quasi-Hilda was D/Shoemaker-Levy~9, which famously broke apart and impacted Jupiter in 1994 (e.g., \citealt{weaverHubbleSpaceTelescope1995}). Quasi-Hildas have orbital parameters similar to those of the Hildas, approximately $3.7 \lesssim a \lesssim 4.2$~au, $e\le0.3$, and $i\le20\degr$. In rough agreement, 282P{} has $a=4.24$~au, $e=0.188$, and $i=5.8\degr$ (Appendix \ref{282P:sec:ObjectData}). Hildas are also known for their trilobal orbits as viewed in the Jupiter corotating frame (caused by their residence in the 3:2 interior mean-motion resonance with Jupiter), especially the namesake asteroid (153)~Hilda (Figure \ref{282P:fig:corotatingFrame}d).
Because (153)~Hilda is in a stable 3:2 resonant orbit with Jupiter, its orbit remains roughly constant, with a small amount of libration over time. By contrast, Quasi-Hildas like 246P/\acs{NEAT} (Figure \ref{282P:fig:corotatingFrame}e) are near the same resonance and show signs of this characteristic trilobal pattern; however, their orbits drift considerably on timescales of hundreds of years. 282P{} (Figure \ref{282P:fig:corotatingFrame}f) also displays a typical Quasi-Hilda orbit as viewed in the Jupiter corotating reference frame. In the past, prior to 250~yr ago, 52\% (260) of the 500 orbital clones were \acp{JFC}, 48\% (239) were Centaurs, 5\% (26) were already \acp{QHO}, and one (0.2\%) was an \ac{OMBA}. The most probable scenario is thus that, prior to 250 years ago, 282P{} was either a \ac{JFC} or a Centaur, both classes that trace their origins to the Kuiper Belt (see reviews, \citealt{morbidelliKuiperBeltFormation2020} and \citealt{jewittActiveCentaurs2009}, respectively). In the future, after 350 years, 81\% (403) of clones become \acp{JFC}, 18\% (90) remain \acp{QHO}, 14\% (69) become \acp{OMBA}, and 5.6\% (28) return to Centaurian orbits. Clearly the most likely scenario is that 282P{} will become a \ac{JFC}; however, there are still significant possibilities that 282P{} remains a \ac{QHO} or becomes an active \ac{OMBA}. \section{Thermodynamical Modeling} \label{282P:sec:thermo} We modeled the approximate temperature ranges that 282P{} experiences over the course of its present orbit in order to (1) understand what role, if any, thermal fracture may play in the activity we observe, and (2) evaluate the likelihood of ices surviving on the surface, albeit over the limited window ($\sim$500 years) of dynamically well-determined orbital parameters (Section \ref{282P:subsec:dynamicalmodeling}).
Following the procedure of \cite{chandlerRecurrentActivityActive2021} (originally adapted from \citealt{hsiehMainbeltCometsPanSTARRS12015}), we compute the surface equilibrium temperature $T_\mathrm{eq}$ for 282P{} as a gray airless body. To accomplish this we begin with the water ice sublimation energy balance equation \begin{equation} {F_{\odot}\over r_h^2}(1-A) = \chi\left[{\varepsilon\sigma T_\mathrm{eq}^4 + L f_\mathrm{D}\dot m_{w}(T_\mathrm{eq})}\right] \label{equation:sublim1} \end{equation} \noindent with the solar constant $F_{\odot}=1360$~W~m$^{-2}$, heliocentric distance of the airless body $r_h$ (au), and the body's assumed Bond albedo $A=0.05$; note that the true albedo could differ significantly from this value, so it would be helpful to measure the albedo in the future when 282P{} is inactive. The heat distribution over the body is accounted for by $\chi$, which is bounded by the hottest temperatures via the sub-solar ``slab'' approximation ($\chi=1$), where one side of the object always faces the Sun, and the coldest temperatures via the fast-rotating isothermal approximation ($\chi=4$).
The assumed effective infrared emissivity is $\varepsilon=0.9$, $\sigma$ is the Stefan--Boltzmann constant, the latent heat of sublimation of water ice (which we approximate here as being independent of temperature) is $L=2.83$~MJ~kg$^{-1}$, the mantling-induced sublimation efficiency dampening is assumed to be $f_\mathrm{D}=1$ (absence of mantle), and the sublimation-driven water ice mass-loss rate in a vacuum $\dot m_\mathrm{w}$ is given by \begin{equation} \dot m_\mathrm{w} = P_\mathrm{v}(T) \sqrt{\mu\over2\pi k T} \label{equation:sublim2} \end{equation} \noindent where the mass of one water molecule is $\mu=2.991\times 10^{-26}$~kg, $k$ is the Boltzmann constant, and the vapor pressure (in Pa) as a function of temperature $P_\mathrm{v}(T)$ is derived from the Clausius--Clapeyron relation, \begin{equation} P_\mathrm{v}(T) = 611 \times \exp\left[{{\Delta H_\mathrm{subl}\over R_g}\left({{1\over 273.16} - {1\over T}}\right)}\right] \label{equation:sublim3} \end{equation} \noindent where the heat of sublimation for ice to vapor is $\Delta H_\mathrm{subl}=51.06$~MJ~kmol$^{-1}$, and the ideal gas constant is $R_g=8.314\times10^{-3}~\mathrm{MJ}~\mathrm{kmol}^{-1}$~K$^{-1}$.
Solving Equations \ref{equation:sublim1} -- \ref{equation:sublim3} for the body's heliocentric distance $r_\mathrm{h}$ (in au) as a function of equilibrium temperature $T_\mathrm{eq}$ and $\chi$ yields \begin{equation} r_\mathrm{h}(T_\mathrm{eq},\chi) = \left[\frac{F_\odot\left(1-A\right)}{\chi\left[\varepsilon\sigma T_\mathrm{eq}^4 + L f_\mathrm{D}\,\dot m_\mathrm{w}(T_\mathrm{eq})\right]}\right]^{1/2}, \label{282P:eq:teq} \end{equation} \noindent with $\dot m_\mathrm{w}(T_\mathrm{eq})$ given by Equations \ref{equation:sublim2} and \ref{equation:sublim3}. We invert Equation \ref{282P:eq:teq} to obtain the equilibrium temperature $T_\mathrm{eq}$ as a function of $r_\mathrm{h}$ by computing $r_\mathrm{h}$ for an array of temperatures (100~K to 300~K in this case), then fitting a model to these data with a \texttt{SciPy} \citep{virtanenSciPyFundamentalAlgorithms2020} (\texttt{Python} package) univariate spline. Using this model we compute temperatures for 282P{} at heliocentric distances from perihelion to aphelion. Figure \ref{282P:fig:ActivityTimeline} (bottom panel) shows the temperature evolution for the maximum and minimum solar heating distribution scenarios ($\chi=1$ and $\chi=4$, respectively) for 282P{} from 2012 through 2024. Temperatures range between roughly 175~K and 220~K for $\chi=1$, or 130~K and 160~K for $\chi=4$, with a $\sim45$~K maximum temperature variation in any one orbit. 282P{} spends some ($\chi=4$) or all ($\chi=1$) of its time with surface temperatures above 145~K. Water ice is not expected to survive above this temperature on Gyr timescales \citep{schorghoferLifetimeIceMain2008,snodgrassMainBeltComets2017}; however, we showed in Section \ref{282P:subsec:dynamicalmodeling} that, prior to $\sim80$ years ago, 282P{} had a semi-major axis of $a>6$~au, a region much colder than 145~K. Even if 282P{} had spent most of its life with temperatures at the high end of our computed temperatures ($>220$~K), water ice can survive on Gyr timescales at shallow (a few cm) depths \citep{schorghoferLifetimeIceMain2008,prialnikCanIceSurvive2009}.
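The inversion described in this section can be sketched in Python using the constants given in the text; the temperature grid bounds and resolution are illustrative choices:

```python
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

# Constants from the text (SI units)
F_SUN = 1360.0            # solar constant, W m^-2
A = 0.05                  # assumed Bond albedo
EPS = 0.9                 # effective infrared emissivity
SIGMA_SB = 5.670374e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUBL = 2.83e6           # latent heat of water-ice sublimation, J kg^-1
F_D = 1.0                 # mantling dampening factor (no mantle)
MU = 2.991e-26            # mass of a water molecule, kg
K_B = 1.380649e-23        # Boltzmann constant, J K^-1
DH_OVER_RG = 51.06 / 8.314e-3  # Delta H_subl / R_g, in K


def mdot_w(T):
    """Sublimation mass-loss rate in vacuum, kg m^-2 s^-1."""
    P_v = 611.0 * np.exp(DH_OVER_RG * (1.0 / 273.16 - 1.0 / T))
    return P_v * np.sqrt(MU / (2.0 * np.pi * K_B * T))


def r_h_au(T, chi):
    """Heliocentric distance (au) where T is the equilibrium temperature."""
    sink = chi * (EPS * SIGMA_SB * T**4 + L_SUBL * F_D * mdot_w(T))
    return np.sqrt(F_SUN * (1.0 - A) / sink)


def T_eq(r_au, chi):
    """Invert r_h(T) with a univariate spline, as described in the text."""
    T_grid = np.linspace(100.0, 300.0, 2001)
    r = r_h_au(T_grid, chi)           # monotonically decreasing in T
    spline = InterpolatedUnivariateSpline(r[::-1], T_grid[::-1], k=3)
    return float(spline(r_au))


# Equilibrium temperatures near 282P's perihelion (~3.44 au) and aphelion
# (~5.04 au) for the two thermophysical extremes
print(T_eq(3.44, chi=1), T_eq(5.04, chi=4))
```

At 282P{}'s heliocentric distances the radiative term dominates the isothermal ($\chi=4$) case, whereas for $\chi=1$ the sublimation sink is non-negligible and depresses the equilibrium temperature relative to a pure gray-body estimate.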
Some bodies, such as (24)~Themis, have been found to have surface ice \citep{campinsWaterIceOrganics2010,rivkinDetectionIceOrganics2010}, suggesting that an unknown mechanism may replenish surface ice with subsurface volatiles. In such cases the ice lifetimes could be greatly extended. \section{Activity Mechanism} \label{282P:sec:mechanism} Infrequent stochastic events, such as impacts (e.g., (596)~Scheila, \citealt{bodewitsCollisionalExcavationAsteroid2011,ishiguroObservationalEvidenceImpact2011,moreno596ScheilaOutburst2011}), are highly unlikely to be the activity mechanism given the multi-epoch nature of the activity we identified in this work. Moreover, it is unlikely that activity ceased during the 15 month interval between the UT 2021 March 14 archival activity and our UT 2022 June 7 Gemini South activity observations (Section \ref{282P:sec:observations}), when 282P{} was at heliocentric distances $r_\mathrm{H}=3.548$~au and $r_\mathrm{H}=3.556$~au, respectively, and 282P{} was only closer to the Sun in the interim. Similarly, our archival data show that activity lasted $\sim15$ months during the 2012 -- 2013 apparition. Recurrent activity is most commonly caused by volatile sublimation (e.g., 133P, \citealt{boehnhardtComet1996N21996,hsiehStrangeCase133P2004}) or rotational instability (e.g., (6478)~Gault, \citealt{kleynaSporadicActivity64782019,chandlerSixYearsSustained2019}). Rotational instability is impossible to rule out entirely for 282P{} because its rotation period is unknown. However, (1) no activity attributed to rotational instability for any object has been observed to be continuous for as long as the 15 month episodes we report, and (2) rotational instability is not correlated with perihelion passage. It is worth noting that there are not yet many known objects with activity attributed to rotational disruption, so it is still difficult to draw firm conclusions about the behavior of those objects.
In any case, it would be useful to measure a rotation period for 282P{} to help assess the potential influence of rotational instability on the observed activity of 282P{}. The taxonomic class of 282P{} is unknown, but should 282P{} be classified as a member of a desiccated spectral class (e.g., S-type), then sublimation would not likely be the underlying activity mechanism. Color measurements or spectroscopy when 282P{} is quiescent would help determine its spectral class. A caveat, however, is that many of our archival images were taken when 282P{} was significantly fainter than in the images showing activity (Figure \ref{282P:fig:ActivityTimeline}), thereby making activity detection more difficult than if 282P{} were brighter. Consequently, archival images showing 282P{} were predominantly taken near its perihelion passage. The farthest evidently quiescent image of 282P{} was taken when it was at $\sim$4~au (Figure \ref{282P:fig:ActivityTimeline}). Thus we cannot state with total certainty that 282P{} was inactive elsewhere in its orbit. Thermal fracture can cause repeated activity outbursts. For example, (3200)~Phaethon undergoes 600~K temperature swings, peaking at 800~K -- 1100~K, exceeding the serpentine-phyllosilicate decomposition threshold of 574~K \citep{ohtsukaSolarRadiationHeatingEffects2009}, and potentially causing thermal fracture \citep{licandroNatureCometasteroidTransition2007,kasugaObservations1999YC2008} including mass loss \citep{liRecurrentPerihelionActivity2013,huiResurrection3200Phaethon2017}. Temperatures on 282P{} reach at most $\sim220$~K (Figure \ref{282P:fig:ActivityTimeline}), with $\sim45$~K the maximum variation. Considering the relatively low temperatures and mild temperature changes, we (1) consider it unlikely that 282P{} activity is due to thermal fracture, and (2) reaffirm that thermal fracture is generally considered a nonviable mechanism for objects other than \acp{NEO}.
Overall, we find volatile sublimation on 282P{} the most likely activity mechanism, because (1) it is unlikely that an object originating from the Kuiper Belt such as 282P{} would be desiccated, (2) archival and new activity observations are from when 282P{} was near perihelion (Figure \ref{282P:fig:ActivityTimeline}), a characteristic diagnostic of sublimation-driven activity \citep[e.g.,][]{hsiehOpticalDynamicalCharacterization2012}, and (3) 15 months of continuous activity has not been reported for any other activity mechanism (e.g., rotational instability, impact events) to date, let alone two such epochs. \section{Summary and Future Work} \label{282P:sec:summary} This study was prompted by Citizen Scientists from the NASA Partner program \textit{Active Asteroids} classifying two images of 282P{} from 2021 March as showing activity. Two additional images by astronomers Roland Fichtl and Michael Jäger brought the total number of images (from UT 2021 March 31 and UT 2021 April 4) to four. We conducted follow-up observations with the Gemini South 8.1~m telescope on UT 2022 June 7 and found 282P{} still active, indicating it has been active for $>15$ months during the current 2021 -- 2022 activity epoch. Our archival investigation revealed the only other known apparition, from 2012--2013, also spanned $\sim15$ months. Together, our new and archival data demonstrate 282P{} has been active during two consecutive perihelion passages, consistent with sublimation-driven activity. We conducted extensive dynamical modeling and found 282P{} has experienced a series of $\sim5$ strong interactions with Jupiter and Saturn in the past, and that 282P{} will again have close encounters with Jupiter in the near future. 
These interactions are so strong that dynamical chaos dominates our simulations prior to 180 years ago and beyond 350 years in the future, but we are still able to statistically quantify a probable orbital class for 282P{} prior to $-180$ yr (52\% \acp{JFC}, 48\% Centaur) and after $+350$ yr (81\% \acp{JFC}, 18\% \ac{QHO}, 14\% \ac{OMBA}). We classify present-day 282P{} as a \acf{QHO}. Our thermodynamical modeling showed that 282P{} experiences temperatures ranging at most between 135~K and 220~K, too mild for thermal fracture but warm enough that surface water ice would not normally survive on timescales of the solar system lifetime. However, 282P{} arrived at its present orbit recently; prior to 1941, 282P{} was primarily exterior to Jupiter's orbit and, consequently, sufficiently cold for water ice to survive on its surface. Given that both activity apparitions (Epoch I: 2012 -- 2013 and Epoch II: 2021 -- 2022) each lasted over 15 months, and both spanned perihelion passages, we determine the activity mechanism to most likely be volatile sublimation. Coma likely accounts for the majority of the reflected light we observe emanating from 282P{}, so it is infeasible to determine the color of the nucleus and, consequently, 282P{}'s spectral class (e.g., C-type, S-type). Measuring its rotation period would also help assess what (if any) role rotational instability plays in the observed activity. Specifically, a rotation period faster than the spin-barrier limit of two hours would indicate breakup. Most images of 282P{} were taken when it was near perihelion passage (3.441~au), though there were observations from Epoch I that showed 282P{} clearly, without activity, when it was beyond $\sim$4~au.
282P{} is currently outbound and will again be beyond 4~au in mid-2023 and, thus, likely inactive; determining if/when 282P{} returns to a quiescent state would help bolster the case for sublimation-driven activity, because activity occurring preferentially near perihelion, and a lack of activity elsewhere, is characteristic of sublimation-driven activity. 282P{} is currently observable, especially from the southern hemisphere; however, the object is passing in front of dense regions of the Milky Way until the end of 2022 November (see Lowell \texttt{AstFinder}\footnote{\url{https://asteroid.lowell.edu/astfinder/}} finding charts). Beginning UT 2022 September 26, 282P{} will pass through a less dense region of the Milky Way for $\sim$12 days and will be observable in a similar fashion to our Gemini South observations (Section \ref{282P:sec:observations}), with observations carefully timed for sky regions with fewer stars. As Earth progresses along its orbit around the Sun, 282P{} becomes observable for less time each night through 2022 November, until UT 2022 December 26, when it becomes observable only during twilight. Observations during this window would help constrain the timeframe for periods of quiescence. \section{Acknowledgements} \label{282P:sec:acknowledgements} The authors express their gratitude to the anonymous referee, whose feedback improved the quality of this work a great deal. We thank Dr.\ Mark Jesus Mendoza Magbanua of \ac{UCSF} for his frequent and timely feedback on the project. Many thanks for the helpful input from Henry Hsieh of the \ac{PSI} and David Jewitt of \ac{UCLA}. We thank the \ac{NASA} Citizen Scientists involved in this work, with special thanks to moderator Elisabeth Baeten (Belgium) and our top classifier, Michele T. Mazzucato (Florence, Italy). Thanks also to super volunteers Milton K D Bosch MD (Napa, USA), C. J. A. Dukes (Oxford, UK), Virgilio Gonano (Udine, Italy), Marvin W.
Huddleston (Mesquite, USA), and Tiffany Shaw-Diaz (Dayton, USA), all of whom also classified images of 282P{}. Many thanks to additional classifiers of the three images of 282P{}: R. Banfield (Bad Tölz, Germany), @Boeuz (Penzberg, Germany), Dr. Elisabeth Chaghafi (Tübingen, Germany), Juli Fowler (Albuquerque, USA), M. M. Habram-Blanke (Heidelberg, Germany), @EEZuidema (Driezum, Netherlands), Brenna Hamilton (DePere, USA), Patricia MacMillan (Fredericksburg, USA), A. J. Raab (Seattle, USA), Angelina A. Reese (Sequim, USA), Arttu Sainio (Järvenpää, Finland), Timothy Scott (Baddeck, Canada), Ivan A. Terentev (Petrozavodsk, Russia), and Scott Virtes (Escondido, USA). Thanks also to \ac{NASA} Citizen Scientists Thorsten Eschweiler (Übach-Palenberg, Germany) and Carl Groat (Okeechobee, USA). The authors express their gratitude to Prof. Mike Gowanlock (\acs{NAU}), Jay Kueny of \ac{UA} and Lowell Observatory, and the Trilling Research Group (\acs{NAU}), all of whom provided invaluable insights which substantially enhanced this work. We thank William A. Burris (San Diego State University) for his insights into Citizen Science classifications. The unparalleled support provided by Monsoon cluster administrator Christopher Coffey (\acs{NAU}) and the High Performance Computing Support team facilitated the scientific process. We thank Gemini Observatory Director Jennifer Lotz for granting our \ac{DDT} request for observations, German Gimeno for providing science support, and Pablo Prado for observing. Proposal ID GS-2022A-DD-103, \acs{PI} Chandler. The VATT referenced herein refers to the Vatican Observatory’s Alice P. Lennon Telescope and Thomas J. Bannan Astrophysics Facility. We are grateful to the Vatican Observatory for the generous time allocations (Proposal ID S165, \acs{PI} Chandler). We especially thank Vatican Observatory Director Br. Guy Consolmagno, S.J. for his guidance, Vice Director for Tucson Vatican Observatory Research Group Rev.~Pavel Gabor, S.J.
for his frequent assistance, Astronomer and Telescope Scientist Rev. Richard P. Boyle, S.J. for patiently training us to use the \ac{VATT} and for including us in minor planet discovery observations, Chris Johnson (\ac{VATT} Facilities Management and Maintenance) for many consultations that enabled us to resume observations, Michael Franz (\acs{VATT} Instrumentation) and Summer Franks (\ac{VATT} Software Engineer) for on-site troubleshooting assistance, and Gary Gray (\ac{VATT} Facilities Management and Maintenance) for everything from telescope balance to building water support, without whom we would have been lost. This material is based upon work supported by the \acs{NSF} \ac{GRFP} under grant No.\ 2018258765. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the \acl{NSF}. The authors acknowledge support from the \acs{NASA} Solar System Observations program (grant 80NSSC19K0869, PI Hsieh) and grant 80NSSC18K1006 (PI: Trujillo). Computational analyses were run on Northern Arizona University's Monsoon computing cluster, funded by Arizona's \ac{TRIF}. This work was made possible in part through the State of Arizona Technology and Research Initiative Program. \acf{WCS} corrections facilitated by the \textit{Astrometry.net} software suite \citep{langAstrometryNetBlind2010}. This research has made use of data and/or services provided by the \ac{IAU}'s \ac{MPC}. This research has made use of \acs{NASA}'s Astrophysics Data System. This research has made use of The \acf{IMCCE} SkyBoT Virtual Observatory tool \citep{berthierSkyBoTNewVO2006}. This work made use of the \texttt{FTOOLS} software package hosted by the \acs{NASA} Goddard Flight Center High Energy Astrophysics Science Archive Research Center. \ac{SAO} \ac{DS9}: This research has made use of \texttt{\acs{SAO}Image\acs{DS9}}, developed by \acl{SAO} \citep{joyeNewFeaturesSAOImage2006}. 
\acf{WCS} validation was facilitated with Vizier catalog queries \citep{ochsenbeinVizieRDatabaseAstronomical2000} of the Gaia \ac{DR} 2 \citep{gaiacollaborationGaiaDataRelease2018} and the \acf{SDSS DR-9} \citep{ahnNinthDataRelease2012} catalogs. This work made use of AstOrb, the Lowell Observatory Asteroid Orbit Database \textit{astorbDB} \citep{bowellPublicDomainAsteroid1994,moskovitzAstorbDatabaseLowell2021}. This work made use of the \texttt{astropy} software package \citep{robitailleAstropyCommunityPython2013}. Based on observations at \ac{CTIO}, \acs{NSF}’s \acs{NOIRLab} (\acs{NOIRLab} Prop. ID 2019A-0305; \acs{PI}: A. Drlica-Wagner, \acs{NOIRLab} Prop. ID 2013A-0327; \acs{PI}: A. Rest), which is managed by the \acf{AURA} under a cooperative agreement with the \acl{NSF}. This project used data obtained with the \acf{DECam}, which was constructed by the \acf{DES} collaboration. Funding for the \acs{DES} Projects has been provided by the US Department of Energy, the US \acl{NSF}, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute for Cosmological Physics at the University of Chicago, Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A\&M University, Financiadora de Estudos e Projetos, Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Científico e Tecnológico and the Ministério da Ciência, Tecnologia e Inovação, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey. 
The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Enérgeticas, Medioambientales y Tecnológicas–Madrid, the University of Chicago, University College London, the \acs{DES}-Brazil Consortium, the University of Edinburgh, the Eidgenössische Technische Hochschule (ETH) Zürich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciències de l’Espai (IEEC/CSIC), the Institut de Física d’Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universität München and the associated Excellence Cluster Universe, the University of Michigan, \acs{NSF}’s \acs{NOIRLab}, the University of Nottingham, the Ohio State University, the OzDES Membership Consortium, the University of Pennsylvania, the University of Portsmouth, \ac{SLAC} National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A\&M University. These results made use of the \acf{LDT} at Lowell Observatory. Lowell is a private, non-profit institution dedicated to astrophysical research and public appreciation of astronomy and operates the \acs{LDT} in partnership with Boston University, the University of Maryland, the University of Toledo, \acf{NAU} and Yale University. The \acf{LMI} was built by Lowell Observatory using funds provided by the \acf{NSF} (AST-1005313). \ac{VST} OMEGACam \citep{arnaboldiVSTVLTSurvey1998,kuijkenOmegaCAM16k16k2002,kuijkenOmegaCAMESONewest2011} data were originally acquired as part of the \ac{KIDS} \citep{dejongFirstSecondData2015}. 
The \acs{Pan-STARRS}1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the \acs{Pan-STARRS} Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the \ac{LCOGT} Network Incorporated, the National Central University of Taiwan, the \acl{STScI}, the \acl{NASA} under Grant No. NNX08AR22G issued through the Planetary Science Division of the \acs{NASA} Science Mission Directorate, the \acf{NSF} Grant No. AST-1238877, the University of Maryland, \ac{ELTE}, the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. Based on observations obtained with MegaPrime/MegaCam, a joint project of \ac{CFHT} and \ac{CEA}/\ac{DAPNIA}, at the \ac{CFHT} which is operated by the \acf{NRC} of Canada, the Institut National des Science de l'Univers of the \acf{CNRS} of France, and the University of Hawaii. The observations at the \acf{CFHT} were performed with care and respect from the summit of Maunakea which is a significant cultural and historic site. Magellan observations made use of the \ac{IMACS} instrument \citep{dresslerIMACSInamoriMagellanAreal2011}. This research has made use of the \acs{NASA}/\ac{IPAC} \ac{IRSA}, which is funded by the \acl{NASA} and operated by the California Institute of Technology. 
\vspace{5mm} \facilities{ Astro Data Archive, Blanco (DECam), CFHT (MegaCam), Gaia, Gemini-South (GMOS-S), IRSA, LDT (LMI), Magellan: Baade (TSIP), PO:1.2m (PTF, ZTF), PS1, Sloan, VATT (VATT4K), VST (OmegaCAM) } \software{{\tt astropy} \citep{robitailleAstropyCommunityPython2013}, {\tt astrometry.net} \citep{langAstrometryNetBlind2010}, {\tt FTOOLS}\footnote{\url{https://heasarc.gsfc.nasa.gov/ftools/}}, {\tt IAS15} integrator \citep{reinIAS15FastAdaptive2015}, {\tt JPL Horizons} \citep{giorginiJPLOnLineSolar1996}, {\tt Matplotlib} \citep{hunterMatplotlib2DGraphics2007}, {\tt NumPy} \citep{harrisArrayProgrammingNumPy2020}, {\tt pandas} \citep{mckinneyDataStructuresStatistical2010,rebackPandasdevPandasPandas2022}, {\tt REBOUND} \citep{reinREBOUNDOpensourceMultipurpose2012,reinHybridSymplecticIntegrators2019}, {\tt SAOImageDS9} \citep{joyeNewFeaturesSAOImage2006}, {\tt SciPy} \citep{virtanenSciPyFundamentalAlgorithms2020}, {\tt Siril}\footnote{\url{https://siril.org}}, {\tt SkyBot} \citep{berthierSkyBoTNewVO2006}, {\tt termcolor}\footnote{\url{https://pypi.org/project/termcolor}}, {\tt tqdm} \citep{costa-luisTqdmFastExtensible2022}, {\tt Vizier} \citep{ochsenbeinVizieRDatabaseAstronomical2000} } \clearpage
\section{Introduction} Let $G$ be a finite graph. We often denote by $V(G)$ the vertex set of $G$. For $x\in V(G)$, the {\em neighborhood} $N_G(x)$ of $x$ is the set of vertices adjacent to $x$; the {\em closed neighborhood} $N_G[x]$ of $x$ is the union of $\{x\}$ and $N_G(x)$. For subsets $C$ and $S$ of $V(G)$, we say that $C$ {\em covers} $S$ if the set $N_G[x]\cap C$ is nonempty for each $x\in S$; we say that $C$ {\em separates} $S$ if the sets $N_G[x]\cap C$ are distinct for all $x\in S$. An {\em identifying code} of $G$ is a set of vertices which both covers and separates $V(G)$. If $G$ admits an identifying code, we say that $G$ is \emph{identifiable} and denote by $\gamma^{ID}(G)$ the minimum cardinality of an identifying code of $G$. Note that $G$ is identifiable if and only if the sets $N_G[x]$ are distinct for all $x\in V(G)$. The concept of identifying codes was introduced by Karpovsky et al. \cite{Ka1} to model a fault-detection problem in multiprocessor systems. It was noted in \cite{Ch,Coh} that determining a minimum identifying code of a graph is an NP-complete problem. Many researchers have focused on identifying codes of restricted graph classes, for example, paths \cite{Be}, cycles \cite{Be,Gr,Xu}, grids \cite{ben,coh2,ho1} and triangle-free graphs \cite{fo}. Identifying codes of graph products have also been studied; see \cite{Bl1,Gra,Ho,ja,Ka2,Mo} for Cartesian products, \cite{fe} for lexicographic products and \cite{ra} for direct products. The {\em corona product} $H\odot G$ of two graphs $H$ and $G$ is the graph obtained by taking one copy of $H$ and $|V(H)|$ copies of $G$ and joining by an edge each vertex of the $i$th copy of $G$ with the $i$th vertex of $H$. For each $v\in V(H)$, we denote by $G_v$ the copy of $G$ connected to $v$ in $H\odot G$. This paper investigates identifying codes of the corona product $H\odot G$ of graphs $H$ and $G$.
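To make the definitions concrete, the following brute-force \texttt{Python} sketch (ours, purely illustrative, and practical only for small graphs) checks the covering and separating conditions, computes $\gamma^{ID}$, and constructs the corona product $H\odot G$.

```python
from itertools import combinations

def closed_nbhd(adj, x):
    """N[x]: the vertex x together with its neighbors."""
    return adj[x] | {x}

def is_identifying_code(adj, C):
    """C covers and separates V(G): the sets N[x] & C are nonempty and pairwise distinct."""
    signatures = []
    for x in adj:
        s = closed_nbhd(adj, x) & C
        if not s:                 # covering fails
            return False
        signatures.append(frozenset(s))
    return len(set(signatures)) == len(signatures)  # separation

def gamma_id(adj):
    """Minimum cardinality of an identifying code, or None if G is not identifiable."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for C in combinations(vertices, k):
            if is_identifying_code(adj, set(C)):
                return k
    return None

def corona(adj_H, adj_G):
    """Corona product: one copy G_v of G per vertex v of H, each vertex of G_v joined to v."""
    adj = {('H', v): {('H', u) for u in adj_H[v]} for v in adj_H}
    for v in adj_H:
        for x in adj_G:
            adj[(v, x)] = {(v, y) for y in adj_G[x]} | {('H', v)}
            adj[('H', v)].add((v, x))
    return adj

# Example: the path P4 (1-2-3-4); exhaustive search gives gamma_ID(P4) = 3.
P4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
```

For instance, $\{2,3,4\}$ is a minimum identifying code of $P_4$, and `corona(K2, P4)` (with `K2 = {1: {2}, 2: {1}}`) produces the 10-vertex graph $K_2\odot P_4$.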
In Section 2, we first give a necessary and sufficient condition for the corona product $H\odot G$ to be identifiable, and then construct some identifying codes of $H\odot G$. In Section 3, some inequalities for $\gamma^{ID}(H\odot G)$ are established. In Section 4, we express $\gamma^{ID}(H\odot G)$ in terms of $\gamma^{ID}(G)$ and the (total) domination number of $H$. In Section 5, we compute $\gamma^{ID}(H\odot G)$ for some special graphs $G$. \section{Constructions} In this section, we first give a necessary and sufficient condition for the corona product $H\odot G$ to be identifiable, and then construct some identifying codes of $H\odot G$. \begin{thm}\label{identifiable} Let $G$ be a graph. {\rm(i)} Suppose $K_1$ is a trivial graph. Then $K_1\odot G$ is identifiable if and only if $G$ is an identifiable graph with maximum degree at most $|V(G)|-2$. {\rm(ii)} If $H$ is a nontrivial connected graph, then $H\odot G$ is identifiable if and only if $G$ is identifiable. \end{thm} \proof (i) Write $V(K_1)=\{v\}$. Note that $N_{K_1\odot G}[v]=V(K_1\odot G)$. For any vertices $x$ and $y$ of $G_v$, we have $N_{K_1\odot G}[x]=N_{K_1\odot G}[y]$ if and only if $N_{G_v}[x]=N_{G_v}[y]$. Hence, the desired result follows. (ii) If $H\odot G$ is identifiable, then $G_v$ is identifiable for each $v\in V(H)$, which implies that $G$ is identifiable. Conversely, suppose that $G$ is identifiable. Pick any two distinct vertices $x$ and $y$ of $H\odot G$. If $\{x,y\}\not\subseteq V(G_v)$ for any $v\in V(H)$, then $N_{H\odot G}[x]\neq N_{H\odot G}[y]$. If there exists a vertex $v\in V(H)$ such that $\{x,y\}\subseteq V(G_v)$, then since $N_{G_v}[x]\neq N_{G_v}[y]$ we have $N_{H\odot G}[x]\neq N_{H\odot G}[y]$. So $H\odot G$ is identifiable. $\qed$ In the remainder of this section, some identifying codes of the identifiable corona product $H\odot G$ are constructed. We begin with a useful lemma.
\begin{lemma}\label{corona-identifying} A set $C$ of vertices in the corona product $H\odot G$ is an identifying code if, for each $v\in V(H)$, the following three conditions hold. {\rm(i)} $C\cap V(G_v)$ is nonempty and separates $V(G_v)$ in $G_v$. {\rm(ii)} $N_H(v)\cap C\neq\emptyset$, or $C\cap V(G_v)\not\subseteq N_{G_v}[x]$ for any $x\in V(G_v)$. {\rm(iii)} $v\in C$, or $C\cap V(G_v)$ covers $V(G_v)$ in $G_v$. \end{lemma} \proof Since $C\cap V(G_v)\neq\emptyset$, the set $C\cap V(G_v)$ covers $\{v\}$. Since $\{v\}$ covers $V(G_v)$, by (iii) the set $C\cap(V(G_v)\cup\{v\})$ covers $V(G_v)$. It follows that $C$ covers $V(H\odot G)$. Hence, we only need to show that, for any two distinct vertices $x$ and $y$ in $V(H\odot G)$, \begin{equation}\label{c1} N_{H\odot G}[x]\cap C\neq N_{H\odot G}[y]\cap C. \end{equation} {\em Case 1.} $\{x,y\}\cap V(H)\neq\emptyset$. Without loss of generality, assume that $x\in V(H)$. If $y\in V(H\odot G)\setminus V(G_x)$, pick $z\in C\cap V(G_x)$; then $z\in (N_{H\odot G}[x]\cap C)\setminus N_{H\odot G}[y]$, which implies that (\ref{c1}) holds. Now suppose that $y\in V(G_x)$. If $C\cap V(G_x)\not\subseteq N_{G_x}[y]$, then $N_{H\odot G}[x]\cap C\not\subseteq N_{H\odot G}[y]$, and so (\ref{c1}) holds. If $C\cap V(G_x)\subseteq N_{G_x}[y]$, by (ii) we can pick $z'\in N_H(x)\cap C$. Then $z'\in (N_{H\odot G}[x]\cap C)\setminus N_{H\odot G}[y]$, and so (\ref{c1}) holds. {\em Case 2.} $\{x,y\}\cap V(H)=\emptyset$. Then there exist vertices $u$ and $v$ of $H$ such that $x\in V(G_u)$ and $y\in V(G_v)$. If $u=v$, since $C\cap V(G_u)$ separates $\{x,y\}$ in $G_u$, the set $C$ separates $\{x,y\}$ in $H\odot G$, and so (\ref{c1}) holds. If $u\neq v$, then $N_{H\odot G}[x]\cap N_{H\odot G}[y]=\emptyset$; since $C$ covers $\{x,y\}$, the inequality (\ref{c1}) holds. $\qed$ Next we shall construct identifying codes of $H\odot G$.
\begin{cor}\label{cons1} Let $H$ be an arbitrary graph and $G$ be an identifiable graph with maximum degree at most $|V(G)|-2$. Then $$ \bigcup_{v\in V(H)}S_v $$ is an identifying code of $H\odot G$, where $S_v$ is an identifying code of $G_v$ such that $S_v\not\subseteq N_{G_v}[x]$ for any vertex $x$ of $G_v$. \end{cor} \proof It is immediate from Lemma~\ref{corona-identifying}.$\qed$ \begin{prop}\label{lemma1} Let $S$ be a set of vertices in an identifiable graph $G$. If $S$ separates $V(G)$, then there exists a vertex $z\in V(G)$ such that $S\cup\{z\}$ is an identifying code of $G$, and so $|S|\geq\gamma^{ID}(G)-1$. \end{prop} \proof If $S$ covers $V(G)$, then $S\cup\{z\}$ is an identifying code of $G$ for any $z\in V(G)$. Now suppose that $S$ does not cover $V(G)$. Then there exists a unique vertex $z\in V(G)$ such that $N_G[z]\cap S=\emptyset$, which implies that $S\cup\{z\}$ is an identifying code of $G$, as desired. $\qed$ By the above proposition, a set of vertices that separates the vertex set either is an identifying code or is obtained from an identifying code by deleting one vertex. We now use such separating sets in $G$, together with the vertex set of $H$, to construct identifying codes of $H\odot G$. \begin{cor}\label{cons2} Let $H$ be a nontrivial connected graph and $G$ be a nontrivial identifiable graph. Write $$ C=\bigcup_{v\in V(H)}S_v\cup V(H), $$ where $S_v$ is a set of vertices separating $V(G_v)$ in $G_v$. Then $C$ is an identifying code of $H\odot G$. \end{cor} \proof For each $v\in V(H)$, we have $C\cap V(G_v)=S_v\neq\emptyset$, $N_H(v)\cap C\neq\emptyset$ and $v\in C$. It follows from Lemma~\ref{corona-identifying} that $C$ is an identifying code of $H\odot G$. $\qed$ Let $H$ be a graph. For a set $D$ of vertices, we say that $D$ is a {\em dominating set} of $H$ if $D$ covers $V(H)$; we say that $D$ is a {\em total dominating set} of $H$ if the set $N_H(x)\cap D$ is nonempty for each $x\in V(H)$.
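Both notions are easy to check by brute force on small graphs. The following \texttt{Python} sketch (ours, illustrative only) computes the minimum cardinality of a dominating set and of a total dominating set, i.e., the domination and total domination numbers of $H$.

```python
from itertools import combinations

def dominates(adj, D):
    """D is a dominating set: every closed neighborhood N[x] meets D."""
    return all((adj[x] | {x}) & D for x in adj)

def totally_dominates(adj, D):
    """D is a total dominating set: every open neighborhood N(x) meets D."""
    return all(adj[x] & D for x in adj)

def min_size(adj, pred):
    """Smallest k for which some k-subset of V(H) satisfies pred."""
    for k in range(1, len(adj) + 1):
        for D in combinations(adj, k):
            if pred(adj, set(D)):
                return k
    return None

def gamma(adj):
    return min_size(adj, dominates)

def gamma_t(adj):
    return min_size(adj, totally_dominates)
```

For example, the path $P_4$ and the cycle $C_4$ each have domination number and total domination number equal to $2$ (take the two middle vertices of $P_4$, or two adjacent vertices of $C_4$).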
The {\em domination number} of $H$, denoted by $\gamma(H)$, is the minimum cardinality of a dominating set of $H$; the {\em total domination number} of $H$, denoted by $\gamma_t(H)$, is the minimum cardinality of a total dominating set of $H$. Domination and its variations in graphs are now well studied; the literature on this subject has been surveyed and detailed in the book \cite{ha1}. The (total) dominating sets of $H$ can be used to construct identifying codes of $H\odot G$. The proofs of the following corollaries are immediate from Lemma~\ref{corona-identifying}. \begin{cor}\label{cons3} Let $H$ be an arbitrary graph and $G$ be an identifiable graph with maximum degree at most $|V(G)|-2$. Suppose that $D$ is a dominating set of $H$. Then $$ \bigcup_{v\in V(H)}S_v\cup D $$ is an identifying code of $H\odot G$, where $S_v$ is an identifying code of $G_v$ if $v\in V(H)\setminus D$, and $S_v$ is a set of vertices separating $V(G_v)$ in $G_v$ such that $S_v\not\subseteq N_{G_v}[x]$ for any vertex $x$ of $G_v$ if $v\in D$. \end{cor} \begin{cor}\label{cons4} Let $H$ be a nontrivial connected graph and $G$ be an identifiable graph. Suppose that $T$ is a total dominating set of $H$. Then $$ \bigcup_{v\in V(H)}S_v\cup T $$ is an identifying code of $H\odot G$, where $S_v$ is an identifying code of $G_v$. \end{cor} \section{Upper and lower bounds} In this section, we establish some inequalities for $\gamma^{ID}(H\odot G)$ by discussing the existence of some special identifying codes of $G$. To obtain upper bounds for $\gamma^{ID}(H\odot G)$, it suffices to construct identifying codes of $H\odot G$. By Corollaries~\ref{cons1}, \ref{cons2} and \ref{cons3}, we need to consider the identifying codes $S$ of $G$ satisfying one of the following conditions: \begin{itemize} \item [(a)] $|S|=\gamma^{ID}(G)$ and $S\not\subseteq N_G[x]$ for any $x\in V(G)$. \item [(b)] $|S|=\gamma^{ID}(G)$ and there is a vertex $z\in S$ such that $S\setminus\{z\}$ separates $V(G)$.
\item [(c)] $|S|=\gamma^{ID}(G)+1$ and there exists a vertex $z\in S$ such that $S\setminus\{z\}$ separates $V(G)$ and $S\setminus\{z\}\not\subseteq N_G[x]$ for any $x\in V(G)$. \end{itemize} The identifying codes satisfying (b) or (c) were studied in \cite{Bl1,fe}. \begin{lemma}\label{lemma4} Let $G$ and $H$ be two graphs. If there exists an identifying code $S$ of $G$ satisfying {\rm(a)}, then $ \gamma^{ID}(H\odot G)\leq|V(H)|\cdot\gamma^{ID}(G). $ \end{lemma} \proof For each $v\in V(H)$, let $S_v$ be the copy of $S$ in $G_v$. Corollary~\ref{cons1} implies that $\cup_{v\in V(H)}S_v$ is an identifying code of $H\odot G$ with size $|V(H)|\cdot\gamma^{ID}(G)$, as desired. $\qed$ \begin{lemma}\label{lemma6} Let $G$ and $H$ be two nontrivial graphs. Suppose that $H$ is connected. If there is an identifying code $S$ of $G$ satisfying {\rm(b)}, then $ \gamma^{ID}(H\odot G)\leq |V(H)|\cdot\gamma^{ID}(G). $ \end{lemma} \proof Note that there exists a vertex $z\in S$ such that $S\setminus\{z\}$ separates $V(G)$. For each $v\in V(H)$, let $S_v$ be the copy of $S\setminus\{z\}$ in $G_v$. It follows from Corollary~\ref{cons2} that $\cup_{v\in V(H)}S_v\cup V(H)$ is an identifying code of $H\odot G$ with size $|V(H)|\cdot\gamma^{ID}(G)$. Therefore, the desired inequality holds. $\qed$ \begin{lemma}\label{lemma10} Let $G$ and $H$ be two nontrivial graphs. If there exists an identifying code $S$ of $G$ satisfying {\rm(c)}, then $ \gamma^{ID}(H\odot G)\leq|V(H)|\cdot\gamma^{ID}(G)+\gamma(H). $ \end{lemma} \proof Observe that there exists a vertex $z\in S$ such that $S\setminus\{z\}$ separates $V(G)$ and $S\setminus\{z\}\not\subseteq N_G[x]$ for any vertex $x\in V(G)$. Suppose that $W$ is an identifying code of $G$ with size $\gamma^{ID}(G)$ and $D$ is a dominating set of $H$ with size $\gamma(H)$. For each $v\in D$, let $S_v$ be the copy of $S\setminus\{z\}$ in $G_v$. For each $v\in V(H)\setminus D$, let $S_v$ be the copy of $W$ in $G_v$. 
It follows from Corollary~\ref{cons3} that $\cup_{v\in V(H)}S_v\cup D$ is an identifying code of $H\odot G$ with size $|V(H)|\cdot\gamma^{ID}(G)+\gamma(H)$, as desired. $\qed$ With reference to Corollary~\ref{cons4}, let $T$ and $S_v$ have the sizes $\gamma_t(H)$ and $\gamma^{ID}(G)$, respectively. Then we get the following result immediately. \begin{lemma}\label{lemma12} Let $G$ be an identifiable graph and $H$ be a nontrivial connected graph. Then $ \gamma^{ID}(H\odot G)\leq|V(H)|\cdot\gamma^{ID}(G)+\gamma_t(H). $ \end{lemma} In the remainder of this section, we give lower bounds for $\gamma^{ID}(H\odot G)$. We begin by discussing the properties of an identifying code of $H\odot G$. \begin{lemma}\label{lemma2} Let $C$ be an identifying code of $H\odot G$ and let $v$ be a vertex of the first factor $H$. Then $C\cap V(G_v)$ separates $V(G_v)$ in $G_v$. Moreover, if $v\not\in C$, then $C\cap V(G_v)$ is an identifying code of $G_v$. \end{lemma} \proof Note that $v$ is adjacent to every vertex in $V(G_v)$, and there are no edges joining $V(H\odot G)\setminus(\{v\}\cup V(G_v))$ with $V(G_v)$. Since $C$ separates $V(G_v)$ in $H\odot G$, the set $C\cap V(G_v)$ separates $V(G_v)$ in $G_v$. If $v\not\in C$, since $C$ covers $V(G_v)$ in $H\odot G$, the set $C\cap V(G_v)$ covers $V(G_v)$ in $G_v$, which implies that $C\cap V(G_v)$ is an identifying code of $G_v$. $\qed$ \begin{lemma}\label{lemma3} If $H\odot G$ is identifiable, then $ \gamma^{ID}(H\odot G)\geq |V(H)|\cdot\gamma^{ID}(G). $ \end{lemma} \proof Let $C$ be an identifying code of $H\odot G$ with size $\gamma^{ID}(H\odot G)$. Combining Lemma~\ref{lemma2} and Proposition~\ref{lemma1}, we have \begin{equation*} |C\cap V(G_v)|\geq\left\{ \begin{array}{ll} \gamma^{ID}(G)-1,&\textup{if } v\in V(H)\cap C,\\ \gamma^{ID}(G),&\textup{if }v\in V(H)\setminus C. \end{array}\right.
\end{equation*} Then $$ \gamma^{ID}(H\odot G)=\sum_{v\in V(H)\cap C}(|C\cap V(G_v)|+1)+\sum_{v\in V(H)\setminus C}|C\cap V(G_v)|\geq |V(H)|\cdot\gamma^{ID}(G), $$ as desired. $\qed$ \begin{lemma}\label{lemma5} Let $G$ be an identifiable graph with maximum degree at most $|V(G)|-2$. If no identifying code of $G$ satisfies {\rm(a)}, then $ \gamma^{ID}(K_1\odot G)\geq\gamma^{ID}(G)+1. $ \end{lemma} \proof By Theorem~\ref{identifiable}, the corona product $K_1\odot G$ is identifiable. Hence, Lemma~\ref{lemma3} implies that $\gamma^{ID}(K_1\odot G)\geq\gamma^{ID}(G)$. Suppose for contradiction that there exists an identifying code $C$ of $K_1\odot G$ with size $\gamma^{ID}(G)$. Write $V(K_1)=\{v\}$. {\em Case 1.} $v\not\in C$. Then $C$ is an identifying code of $G_v$ with cardinality $\gamma^{ID}(G)$ by Lemma~\ref{lemma2}. Hence, there is a vertex $x\in V(G_v)$ such that $C\subseteq N_{G_v}[x]$, which implies that $N_{K_1\odot G}[x]\cap C=C=N_{K_1\odot G}[v]\cap C$, a contradiction. {\em Case 2.} $v\in C$. Then $C\cap V(G_v)=C\setminus\{v\}$. Combining Proposition~\ref{lemma1} and Lemma~\ref{lemma2}, there exists a vertex $z\in V(G_v)$ such that $(C\setminus\{v\})\cup\{z\}$ is an identifying code of $G_v$ with cardinality $\gamma^{ID}(G)$. Hence, we have $(C\setminus\{v\})\cup\{z\}\subseteq N_{G_v}[y]$ for some $y\in V(G_v)$, which implies that $(C\setminus\{v\})\subseteq N_{G_v}[y]$. Consequently, we get $N_{K_1\odot G}[y]\cap C=C=N_{K_1\odot G}[v]\cap C$, a contradiction. $\qed$ \begin{lemma}\label{lemma7} Suppose that $C$ is an identifying code of $H\odot G$. If no identifying code of $G$ satisfies {\rm(b)}, then $|C\cap V(G_v)|\geq\gamma^{ID}(G)$ for each $v\in V(H)$. \end{lemma} \proof Lemma~\ref{lemma2} implies that $C\cap V(G_v)$ separates $V(G_v)$ in $G_v$. Then $|C\cap V(G_v)|\geq\gamma^{ID}(G)-1$ by Proposition~\ref{lemma1}.
If $|C\cap V(G_v)|=\gamma^{ID}(G)-1$, there exists a vertex $z\in V(G_v)$ such that $(C\cap V(G_v))\cup\{z\}$ is an identifying code of $G_v$ satisfying (b), a contradiction. $\qed$ For a set $C$ of vertices in $H\odot G$, write $$ H(C)=V(H)\cap C,\quad H'(C)=\{v\in V(H)\mid |C\cap V(G_v)|\geq\gamma^{ID}(G)+1\}. $$ \begin{lemma}\label{lemma8} Suppose that $C$ is an identifying code of $H\odot G$. If no identifying code of $G$ satisfies {\rm(b)}, then $ |C|\geq|V(H)|\cdot\gamma^{ID}(G)+|H(C)|+|H'(C)|. $ \end{lemma} \proof Write $H_1=V(H)\setminus(H(C)\cup H'(C))$, $H_2=H'(C)\setminus H(C)$, $H_3=H(C)\setminus H'(C)$ and $H_4=H(C)\cap H'(C)$. Let $C_v=C\cap V(G_v)$. By Lemma~\ref{lemma7} and the definition of $H'(C)$, we get $|C_v|=\gamma^{ID}(G)$ for each $v\in H_1\cup H_3$. Then \begin{eqnarray*} |C|&=&\sum_{v\in H_1}|C_v|+\sum_{v\in H_2}|C_v|+\sum_{v\in H_3}(|C_v|+1) +\sum_{v\in H_4}(|C_v|+1)\\ &\geq& |H_1|\gamma^{ID}(G)+|H_2|(\gamma^{ID}(G)+1)+|H_3|(\gamma^{ID}(G)+1)+|H_4|(\gamma^{ID}(G)+2)\\ &=&|V(H)|\cdot\gamma^{ID}(G)+|H(C)|+|H'(C)|, \end{eqnarray*} as desired. $\qed$ \begin{lemma}\label{lemma9} Let $G$ be a nontrivial identifiable graph and $H$ be a nontrivial connected graph. If each identifying code of $G$ satisfies neither {\rm(a)} nor {\rm(b)}, then $ \gamma^{ID}(H\odot G)\geq|V(H)|\cdot\gamma^{ID}(G)+\gamma(H). $ \end{lemma} \proof Theorem~\ref{identifiable} implies that $H\odot G$ is identifiable. Let $C$ be an identifying code of $H\odot G$ with size $\gamma^{ID}(H\odot G)$. Write $$ D=H(C)\cup H'(C). $$ We shall show that $D$ is a dominating set of $H$. Pick any $v\in V(H)\setminus D$. Note that $v\not\in C$ and $|C\cap V(G_v)|\leq\gamma^{ID}(G)$. Then $C\cap V(G_v)$ is an identifying code of $G_v$ with size $\gamma^{ID}(G_v)$ by Lemma~\ref{lemma2}. Since no identifying code of $G_v$ satisfies (a), there exists a vertex $x\in V(G_v)$ such that $C\cap V(G_v)\subseteq N_{G_v}[x]$.
Since $N_{H\odot G}[v]\cap C\neq N_{H\odot G}[x]\cap C=C\cap V(G_v)$, we have $N_H(v)\cap H(C)\neq\emptyset$, which implies that $N_H(v)\cap D\neq\emptyset$. Then $D$ is a dominating set of $H$. Hence, we have $|D|\geq\gamma(H)$. By Lemma~\ref{lemma8}, we get $$ \gamma^{ID}(H\odot G)=|C|\geq |V(H)|\cdot\gamma^{ID}(G)+|H(C)\cup H'(C)|\geq |V(H)|\cdot\gamma^{ID}(G)+\gamma(H), $$ as desired. $\qed$ \begin{lemma}\label{lemma11} Let $G$ be a nontrivial identifiable graph and $H$ be a nontrivial connected graph. If each identifying code of $G$ satisfies none of the conditions {\rm(a)}, {\rm(b)} and {\rm(c)}, then $ \gamma^{ID}(H\odot G)\geq|V(H)|\cdot\gamma^{ID}(G)+\gamma_t(H). $ \end{lemma} \proof For each vertex $v\in V(H)$, pick a vertex $v'\in N_H(v)$. Theorem~\ref{identifiable} implies that $H\odot G$ is identifiable. Let $C$ be an identifying code of $H\odot G$ with size $\gamma^{ID}(H\odot G)$. Write $$ T=H''(C)\cup H(C), $$ where $H''(C)=\{v'\mid v\in H'(C)\}$. We claim that $T$ is a total dominating set of $H$. Pick any $v\in V(H)$. If $v\in H'(C)$, since $N_H(v)\cap H''(C)\neq\emptyset$ we have $N_H(v)\cap T\neq\emptyset$. Now suppose that $v\not\in H'(C)$. By Lemma~\ref{lemma7} and the definition of $H'(C)$, we get $|C\cap V(G_v)|=\gamma^{ID}(G_v)$. If $C\cap V(G_v)\not\subseteq N_{G_v}[x]$ for every vertex $x\in V(G_v)$, then $C\cap V(G_v)$ is not an identifying code of $G_v$, since otherwise it would be an identifying code of $G_v$ with size $\gamma^{ID}(G_v)$ satisfying {\rm(a)}. It follows from Lemma~\ref{lemma2} and Proposition~\ref{lemma1} that there exists a vertex $z\in V(G_v)$ such that $(C\cap V(G_v))\cup\{z\}$ is an identifying code of $G_v$ satisfying (c), a contradiction. Therefore, there exists a vertex $x\in V(G_v)$ such that $C\cap V(G_v)\subseteq N_{G_v}[x]$. Since $N_{H\odot G}[v]\cap C\neq N_{H\odot G}[x]\cap C$, we have $N_H(v)\cap H(C)\neq\emptyset$, which implies that $N_H(v)\cap T\neq\emptyset$. Hence, our claim is valid. Since $|T|\geq\gamma_t(H)$ and $|H'(C)|\geq |H''(C)|$, we get $|H'(C)|+|H(C)|\geq\gamma_t(H)$.
By Lemma~\ref{lemma8}, we have $$ \gamma^{ID}(H\odot G)=|C|\geq |V(H)|\cdot\gamma^{ID}(G)+\gamma_t(H), $$ as desired. $\qed$ \section{Minimum cardinality} In this section, we shall compute $\gamma^{ID}(H\odot G)$. \begin{thm}\label{main3} Let $G$ and $H$ be two nontrivial graphs. Suppose that $H$ is connected. If there exists an identifying code of $G$ satisfying {\rm(a)} or {\rm(b)}, then $$ \gamma^{ID}(H\odot G)=|V(H)|\cdot\gamma^{ID}(G). $$ \end{thm} \proof It is immediate from Theorem~\ref{identifiable}, Lemmas~\ref{lemma4}, \ref{lemma6} and \ref{lemma3}. $\qed$ \begin{thm}\label{main4} Let $G$ be a nontrivial identifiable graph and $H$ be a nontrivial connected graph. Suppose that each identifying code of $G$ satisfies neither {\rm(a)} nor {\rm(b)}. {\rm(i)} If there exists an identifying code of $G$ satisfying {\rm(c)}, then $$ \gamma^{ID}(H\odot G)=|V(H)|\cdot\gamma^{ID}(G)+\gamma(H). $$ {\rm(ii)} If no identifying code of $G$ satisfies {\rm(c)}, then $$ \gamma^{ID}(H\odot G)=|V(H)|\cdot\gamma^{ID}(G)+\gamma_t(H). $$ \end{thm} \proof (i) holds by Lemmas~\ref{lemma10} and \ref{lemma9}. By Lemmas~\ref{lemma12} and \ref{lemma11}, (ii) holds. $\qed$ Now, we compute $\gamma^{ID}(K_1\odot G)$ and $\gamma^{ID}(H\odot K_1)$. \begin{thm}\label{main1} Suppose that $G$ is an identifiable graph with maximum degree at most $|V(G)|-2$. {\rm (i)} If there exists an identifying code of $G$ satisfying {\rm(a)}, then $$ \gamma^{ID}(K_1\odot G)=\gamma^{ID}(G). $$ {\rm(ii)} If no identifying code of $G$ satisfies {\rm(a)}, then $$ \gamma^{ID}(K_1\odot G)=\gamma^{ID}(G)+1. $$ \end{thm} \proof Theorem~\ref{identifiable} implies that $K_1\odot G$ is identifiable. (i) It is immediate from Lemmas~\ref{lemma4} and \ref{lemma3}. (ii) By Lemma~\ref{lemma5} we only need to construct an identifying code of $K_1\odot G$ with size $\gamma^{ID}(G)+1$. Let $W$ be an identifying code of $G$ with size $\gamma^{ID}(G)$.
Note that there exists a unique vertex $x\in V(G)$ such that $W\subseteq N_G[x]$. Pick $y\in V(G)\setminus N_G[x]$. Write $V(K_1)=\{v\}$. Let $S_v$ be the copy of $W\cup\{y\}$ in $G_v$. Then $S_v$ is an identifying code of $G_v$ with $S_v\not\subseteq N_{G_v}[z]$ for any vertex $z\in V(G_v)$. It follows from Corollary~\ref{cons1} that $S_v$ is an identifying code of $K_1\odot G$ with size $\gamma^{ID}(G)+1$, as desired. $\qed$ \begin{cor}\label{cor} Let $G$ be an identifiable graph and $H$ be a connected graph. Suppose that $G$ satisfies one of the following conditions. {\rm(i)} The graph $G$ is not connected. {\rm(ii)} The diameter of $G$ is at least five. {\rm(iii)} The maximum degree of $G$ is less than $\gamma^{ID}(G)-1$. \\* Then $$ \gamma^{ID}(H\odot G)=|V(H)|\cdot\gamma^{ID}(G). $$ \end{cor} \proof Note that the identifying codes of $G$ with size $\gamma^{ID}(G)$ satisfy (a). Combining Theorems~\ref{main3} and \ref{main1}, we get the desired result. $\qed$ \begin{thm}\label{kn} Let $n\geq 2$. Then $\gamma^{ID}(K_n\odot K_1)=n+1$, where $K_n$ is the complete graph on $n$ vertices. \end{thm} \proof Since $K_1$ is identifiable, Theorem~\ref{identifiable} implies that $K_n\odot K_1$ is identifiable. Write $V=V(K_n)=\{v_1,\ldots,v_n\}$. For each $i\in\{1,\ldots,n\}$, denote by $\{u_i\}$ the vertex set of the copy of $K_1$ connected to $v_i$ in $K_n\odot K_1$. Write $V'=\{u_1,\ldots,u_n\}$. Note that $V(K_n\odot K_1)=V\cup V'$. Let $C$ be an identifying code of $K_n\odot K_1$ with size $\gamma^{ID}(K_n\odot K_1)$. We have the following two claims. {\em Claim 1.} $|V\cap C|\geq 2$. In fact, for any $i\in\{1,\ldots,n\}$, since $$ (V\cup\{u_i\})\cap C=N_{K_n\odot K_1}[v_i]\cap C\neq N_{K_n\odot K_1}[u_i]\cap C=\{u_i,v_i\}\cap C, $$ we have $|(V\setminus\{v_{i}\})\cap C|\geq 1$. So $|V\cap C|\geq 2$. {\em Claim 2.} $|V'\cap C|\geq n-1$. 
In fact, if there exist two distinct vertices $u_{i}$ and $u_{j}$ neither of which belongs to $C$, then $N_{K_n\odot K_1}[v_{i}]\cap C=N_{K_n\odot K_1}[v_{j}]\cap C$, a contradiction. Combining Claim 1 and Claim 2, we have $$ \gamma^{ID}(K_n\odot K_1)=|V\cap C|+|V'\cap C|\geq n+1. $$ It is routine to show that $ \{u_{i}\mid 2\leq i\leq n\}\cup\{v_{1},v_{2}\} $ is an identifying code of $K_n\odot K_1$ with size $n+1$. Hence, the desired result follows. $\qed$ \begin{thm}\label{main2} Let $H$ be a connected graph that is not complete. Then $$ \gamma^{ID}(H\odot K_1)=|V(H)|. $$ \end{thm} \proof Theorem~\ref{identifiable} implies that $H\odot K_1$ is identifiable. Since $\gamma^{ID}(K_1)=1$, by Lemma~\ref{lemma3} it suffices to construct an identifying code of $H\odot K_1$ with size $|V(H)|$. For any $u, v\in V(H)$, define $u\equiv v$ if $N_{H}(u)=N_{H}(v)$. Note that $``\equiv"$ is an equivalence relation. Let $O_u$ denote the equivalence class containing $u$. Pick a representative system $D$ with respect to this equivalence relation. For each $v\in V(H)$, denote by $\{u_v\}$ the vertex set of the copy of $K_1$ connected to $v$ in $H\odot K_1$. Let $$ C=\{u_{v}\mid v\in V(H)\setminus D\}\cup D. $$ Observe that $|C|=|V(H)|$. Since $C$ covers $V(H\odot K_1)$, it suffices to show that, for any two distinct vertices $x$ and $y$ of $H\odot K_1$, \begin{equation}\label{c2} N_{H\odot K_1}[x]\cap C\neq N_{H\odot K_1}[y]\cap C. \end{equation} {\em Case 1.} $|\{x,y\}\cap V(H)|=2$. If $N_H[x]\neq N_H[y]$, there exists a vertex $z\in V(H)$ such that $\{z\}$ separates $\{x,y\}$ in $H$. Note that there exists a vertex $z'\in D$ such that $O_{z'}=O_z$. Then $N_H[z']=N_H[z]$, and so $\{z'\}$ separates $\{x,y\}$ in $H$. It follows that $\{z'\}$ separates $\{x,y\}$ in $H\odot K_1$. Since $z'\in C$, the inequality (\ref{c2}) holds. If $N_H[x]=N_H[y]$, then $O_x=O_y$, which implies that $x\not\in D$ or $y\not\in D$. Without loss of generality, we may assume that $x\not\in D$. 
Then $u_{x}\in (N_{H\odot K_1}[x]\cap C)\setminus N_{H\odot K_1}[y]$, which implies that (\ref{c2}) holds. {\em Case 2.} $|\{x,y\}\cap V(H)|=1$. Without loss of generality, assume that $x\in V(H)$ and $y=u_v$ for some $v\in V(H)$. If $x\neq v$, since both $\{x\}$ and $\{u_{x}\}$ separate $\{x,y\}$ in $H\odot K_1$, we obtain (\ref{c2}) by $\{x,u_{x}\}\cap C\neq\emptyset$. Now suppose that $x=v$. Since $H$ is not complete, we have $|D|\geq 2$. Hence, there is a vertex $w\in D$ such that $w$ is adjacent to $x$ in $H$. It follows that $w\in (N_{H\odot K_1}[x]\cap C)\setminus N_{H\odot K_1}[y]$, and so (\ref{c2}) holds. {\em Case 3.} $|\{x,y\}\cap V(H)|=0$. Then $N_{H\odot K_1}[x]\cap N_{H\odot K_1}[y]=\emptyset$. Since $C$ covers $\{x,y\}$ in $H\odot K_1$, the inequality (\ref{c2}) holds. $\qed$ Let $T_1=K_1\odot K_1$ and $T_n=T_{n-1}\odot K_1$ for $n\geq 2$. We call $T_n$ a {\em binomial tree}, which is a useful data structure in the context of algorithm analysis and design \cite{co}. Note that $T_n$ is a spanning tree of the hypercube $Q_n$. The problem of computing $\gamma^{ID}(Q_n)$ is still open. By Theorem~\ref{main2}, we get the following corollary. \begin{cor} Let $n\geq 3$. Then $\gamma^{ID}(T_n)=2^{n-1}$. \end{cor} For a connected graph with pendant edges, we have the following result, which is more general than Theorem~\ref{main2}. \begin{cor} Let $H$ be a connected graph with $m$ vertices. Suppose that $H_1$ is a graph obtained from $H$ by adding $n_i$ $(\geq 1)$ pendant edges to the $i$th vertex of $H$. If $H_1$ is not isomorphic to $K_m\odot K_1$, then \begin{equation}\label{c4} \gamma^{ID}(H_1)=\sum_{i=1}^mn_i. \end{equation} \end{cor} \proof It is routine to show that (\ref{c4}) holds for $m=1$. Now suppose $m\geq 2$. Write $V(H)=\{v_1,\ldots,v_m\}$. For each $i\in\{1,\ldots,m\}$, let $S_i=\{u_{ij}\mid 1\leq j\leq n_i\}$ be the set of vertices adjacent to $v_i$ in $V(H_1)\setminus V(H)$. Then the subgraph of $H_1$ induced by $S_i$ is isomorphic to $\overline K_{n_i}$.
Similar to the proof of Lemma~\ref{lemma3}, we have \begin{equation*} \gamma^{ID}(H_1)\geq \sum_{i=1}^m\gamma^{ID}(\overline K_{n_i})=\sum_{i=1}^m n_i. \end{equation*} In order to prove (\ref{c4}), it suffices to construct an identifying code of $H_1$ with size $\sum_{i=1}^m n_i$. {\em Case 1.} $H$ is a complete graph. Then, since $H_1$ is not isomorphic to $K_m\odot K_1$, there exists an index $j\in\{1,\ldots,m\}$ such that $n_j\geq 2$. Pick $k\in\{1,\ldots,m\}\setminus\{j\}$. It is routine to show that $$ \{v_j,v_k\}\cup(S_j\setminus\{u_{j1}\})\cup(S_k\setminus\{u_{k1}\})\cup \bigcup_{i\in\{1,\ldots,m\} \setminus \{j,k\}} S_i $$ is an identifying code of $H_1$ with size $\sum_{i=1}^m n_i$. {\em Case 2.} $H$ is not a complete graph. Write $$ A=\bigcup_{i=1}^m\{v_i,u_{i1}\},\quad B=V(H_1)\setminus A. $$ Then the subgraph $H_1[A]$ of $H_1$ induced by $A$ is isomorphic to $H\odot K_1$. Pick a subset $A_0\subseteq A$ such that $A_0$ is an identifying code of $H_1[A]$ with the minimum cardinality. By Theorem~\ref{main2} we have $|A_0|=\gamma^{ID}(H\odot K_1)=m$. Let $ C=A_0\cup B. $ Note that $|C|=\sum_{i=1}^mn_i$. It suffices to show that $C$ is an identifying code of $H_1$. The fact that $A_0$ covers $A$ in $H_1[A]$ implies that $C$ covers $V(H_1)$ in $H_1$. Therefore, we only need to show that, for any two distinct vertices $x$ and $y$ of $H_1$, \begin{equation}\label{c3} N_{H_1}[x]\cap C\neq N_{H_1}[y]\cap C. \end{equation} {\em Case 2.1.} $\{x,y\}\subseteq A$. Then there is a vertex $z\in A_0$ such that $\{z\}$ separates $\{x,y\}$ in $H_1[A]$, which implies that $z\in C$ and $\{z\}$ separates $\{x,y\}$ in $H_1$. So (\ref{c3}) holds. {\em Case 2.2.} $\{x,y\}\not\subseteq A$. Without loss of generality, we may assume that $x\not\in A$. Then $x\in B$. Write $x=u_{ij}$, where $1\leq i\leq m$ and $2\leq j\leq n_i$. If $y\neq v_i$, then $x\in (N_{H_1}[x]\cap C)\setminus N_{H_1}[y]$, which implies that (\ref{c3}) holds. Now suppose that $y=v_i$.
Since $\{u_{i1},v_i\}\subseteq A$, there exists a vertex $z\in A_0$ such that $\{z\}$ separates $\{u_{i1},v_i\}$ in $H_1[A]$, which implies that $z\in C$ and $\{z\}$ separates $\{x,y\}$ in $H_1$. So (\ref{c3}) holds. $\qed$ \section{Examples} In this section, we shall find some graphs satisfying each condition in Theorems~\ref{main3}, \ref{main4} and \ref{main1}, respectively. As a result, we compute $\gamma^{ID}(H\odot G)$ for some special graphs $G$. The minimum cardinality of an identifying code of the path $P_n$ or the cycle $C_n$ was computed in \cite{Be, Gr}. \begin{prop}\label{pc} {\rm (\cite{Be, Gr})} {\rm(i)} For $n\geq3$, $\gamma^{ID}(P_{n})=\lfloor\frac{n}{2}\rfloor+1$. {\rm(ii)} For $n\geq 6$, $\gamma^{ID}(C_{n})=\left\{ \begin{array}{ll} \frac{n}{2}, &n ~\textup{is even},\\ \frac{n+3}{2}, &n ~\textup{is odd}. \end{array}\right.$ \end{prop} Note that $\gamma^{ID}(C_4)=\gamma^{ID}(C_5)=3$. Each identifying code of $P_3$, $P_4$, $C_4$ or $C_5$ satisfies none of the conditions (a), (b) and (c). There exists an identifying code of $P_n$ (resp. $C_n$) satisfying (a) for $n\geq 5$ (resp. $n\geq 6$). Combining Theorems~\ref{identifiable}, \ref{main3}, \ref{main4}, \ref{main1}, Corollary~\ref{cor} and Proposition~\ref{pc}, we get Examples \ref{ex1}, \ref{ex2} and Corollary~\ref{pc2}. \begin{example}\label{ex1} Let $F_n$ be a fan, that is, $F_n=K_1\odot P_n$. If $1\leq n\leq 3$, then $F_n$ is not identifiable; if $n\geq 4$, then $$ \gamma^{ID}(F_n)=\left\{ \begin{array}{ll} 4, &\textup{if } n=4,\\ \lfloor\frac{n}{2}\rfloor+1, &\textup{if }n \geq 5. \end{array}\right. $$ \end{example} \begin{example}\label{ex2} Let $W_n$ be a wheel, that is, $W_n=K_1\odot C_n$. Then $W_3$ is not identifiable. For $n\geq 4$, we have $$ \gamma^{ID}(W_n)=\left\{ \begin{array}{ll} 4, &\textup{if } n=4,\\ \frac{n}{2}, &\textup{if }n \textup{ is even and }n\geq 6,\\ \frac{n+3}{2}, &\textup{if }n \textup{ is odd and }n\geq 5. \end{array}\right.
$$ \end{example} \begin{cor}\label{pc2} Let $H$ be a nontrivial connected graph with $m$ vertices. {\rm(i)} $\gamma^{ID}(H\odot P_3)=2m+\gamma_t(H).$ {\rm(ii)} $\gamma^{ID}(H\odot P_4)=\gamma^{ID}(H\odot C_4)=\gamma^{ID}(H\odot C_5)=3m+\gamma_t(H).$ {\rm(iii)} For $n\geq 5$, we have $\gamma^{ID}(H\odot P_n)=m(\lfloor\frac{n}{2}\rfloor+1)$. {\rm(iv)} For $n\geq 6$, we have $ \gamma^{ID}(H\odot C_n)=\left\{ \begin{array}{ll} \frac{mn}{2}, &n ~\textup{is even},\\ \frac{m(n+3)}{2}, &n ~\textup{is odd}. \end{array}\right. $ \end{cor} Let $S_n$ be a star, that is, $S_n=K_1\odot\overline K_n$, where $\overline K_n$ is the empty graph on $n$ vertices. Suppose $n\geq 3$. By Corollary~\ref{cor}, we get $\gamma^{ID}(S_n)=n$. Each identifying code of $S_n$ with size $n$ satisfies (b). By Theorem~\ref{main3}, we have the following result. \begin{cor}\label{cor3} Let $H$ be a nontrivial connected graph with $m$ vertices. If $n\geq 3$, then $\gamma^{ID}(H\odot S_n)=mn.$ \end{cor} \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(23,26) \put(14,16){\line(0,1){8}} \put(14,16){\line(-2,-1){8}} \put(14,16){\line(2,-1){8}} \put(14,8){\line(-2,1){8}} \put(14,8){\line(2,1){8}} \put(6,12){\line(0,1){8}} \put(22,12){\line(0,1){8}} \put(14,24){\line(-2,-1){8}} \put(14,24){\line(2,-1){8}} \put(14,8){\circle*{1}}\put(13,4){1} \put(6,12){\circle*{1}}\put(3,9){6} \put(22,12){\circle*{1}}\put(23,9){2} \put(14,16){\circle*{1}}\put(13,12){0} \put(6,20){\circle*{1}}\put(3,20){5} \put(22,20){\circle*{1}}\put(23,20){3} \put(14,24){\circle*{1}}\put(13,25){4} \put(3,0){Figure 1: $G_3$} \end{picture} \end{center} Let $G_3$ be the graph in Figure 1. Note that $\gamma^{ID}(G_3)=3$ and each identifying code with size three is contained in $\{0, 2, 4, 6\}=N_{G_3}[0]$. Any subset of $V(G_3)$ with size two cannot separate $V(G_3)$. Therefore, each identifying code of $G_3$ satisfies neither (a) nor (b).
The fact that $\{1,3,5\}$ separates $V(G_3)$ implies that $\{0,1,3,5\}$ is an identifying code of $G_3$ satisfying (c). By Theorem~\ref{main4}, we get the following result. \begin{cor} Let $H$ be a nontrivial connected graph with $m$ vertices. Then $$ \gamma^{ID}(H\odot G_3)=3m+\gamma(H). $$ \end{cor} \section*{Acknowledgement} This research is supported by NSFC (11271047) and the Fundamental Research Funds for the Central Universities of China.
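The claims about $G_3$ can also be checked by exhaustive search. The following Python sketch is purely illustrative and is not part of the paper; the adjacency list is our transcription of Figure 1 (the outer hexagon $1$-$2$-$3$-$4$-$5$-$6$ together with the central vertex $0$ joined to $2$, $4$ and $6$).

```python
# Illustrative brute-force check (not from the paper) of the claims about
# the graph G_3 of Figure 1: gamma^ID(G_3) = 3 and every identifying code
# of size three is contained in {0, 2, 4, 6}.
from itertools import combinations

# Our transcription of Figure 1: outer hexagon 1-2-3-4-5-6 plus the
# central vertex 0 joined to the outer vertices 2, 4 and 6.
EDGES = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1), (0, 2), (0, 4), (0, 6)]
V = range(7)
N = {v: {v} for v in V}  # closed neighbourhoods N[v]
for a, b in EDGES:
    N[a].add(b)
    N[b].add(a)

def is_identifying(code):
    # A vertex set is an identifying code iff the traces N[v] & code are
    # nonempty and pairwise distinct.
    traces = [frozenset(N[v] & code) for v in V]
    return all(traces) and len(set(traces)) == len(traces)

# Find the minimum size k and collect all identifying codes of that size.
min_codes, k = [], 0
for k in range(1, 8):
    min_codes = [set(c) for c in combinations(V, k) if is_identifying(set(c))]
    if min_codes:
        break

print(k)                                           # 3
print(all(c <= {0, 2, 4, 6} for c in min_codes))   # True
```

The search returns exactly the four 3-subsets of $\{0,2,4,6\}$, confirming both that $\gamma^{ID}(G_3)=3$ and that every minimum identifying code lies in $N_{G_3}[0]$, so none of them satisfies (a).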
\section*{Introduction} The problem of filtering closed 3-manifolds in order to study them systematically has been approached by many mathematicians. The aim is to find a function from the class of closed 3-manifolds to the set of natural numbers. The number associated to a closed 3-manifold should be a measure of how complicated the manifold is. For closed surfaces, this can be achieved by means of the genus. For closed 3-manifolds, the problem has been studied extensively and many possible functions have been found. For example, the Heegaard genus, the Gromov norm and the Matveev complexity have been considered. All these functions fulfil many properties. For instance, they are additive under connected sum. However, some of them have drawbacks. The Heegaard genus and the Gromov norm are not finite-to-one, while the Matveev complexity is. Hence, in order to carry out a classification process, the latter is more suitable than the former ones. The Matveev complexity is also a natural measure of how complicated the manifold is, because if a closed 3-manifold is $\mathbb{P}^2$-irreducible\ and different from the sphere $S^3$, the projective space $\mathbb{RP}^3$ and the lens space $L_{3,1}$, then its Matveev complexity is the minimum number of tetrahedra in a triangulation of the manifold (the Matveev complexity of $S^3$, $\mathbb{RP}^3$ and $L_{3,1}$ is zero). Such functions could also be tools to give proofs by induction. For instance, the Heegaard genus was used by Rourke to prove by induction that every closed orientable 3-manifold is the boundary of a compact orientable 4-manifold~\cite{Rourke}.
The aim of this paper is to define another function (which we will call {\em surface-complexity}) from the class of closed 3-manifolds to the set of natural numbers, to prove that it fulfils some properties, to give bounds for it, and to start an enumeration process (we will give a complete list of closed 3-manifolds with surface-complexity one in a subsequent paper~\cite{Amendola:next}). In~\cite{Vigara:calculus} Vigara used triple points of particular transverse immersions of connected closed surfaces to define the triple point spectrum of a 3-manifold. The definition of the surface-complexity is similar to Vigara's, but it has the advantage of being more flexible. This flexibility will allow us to prove many properties fulfilled by the surface-complexity. We now sketch out the definition and the results of this paper. The surface-complexity of a closed 3-manifold will be defined by means of {\em quasi-filling Dehn surfaces} ({\em i.e.}~images of transverse immersions of closed surfaces that divide the manifold into balls). \begin{description} \item[Definition.] The surface-complexity $sc(M)$ of a closed 3-manifold $M$ is the minimal number of triple points of a quasi-filling Dehn surface of $M$. \end{description} Three properties we will prove are the following. \begin{description} \item[Finiteness.] For any integer $c$ there exists only a finite number of connected closed $\mathbb{P}^2$-irreducible\ 3-manifolds having surface-complexity $c$. \item[Naturalness.] The surface-complexity of a connected closed $\mathbb{P}^2$-irreducible\ 3-manifold $M$, different from $S^3$, $\mathbb{RP}^3$ and $L_{4,1}$, is the minimal number of cubes in a cubulation of $M$. The surface-complexity of $S^3$, $\mathbb{RP}^3$ and $L_{4,1}$ is zero. \item[Subadditivity.] The surface-complexity of the connected sum of closed 3-manifolds is less than or equal to the sum of their surface-complexities.
\end{description} The naturalness property will follow from the features of {\em minimal} quasi-filling Dehn surfaces of connected closed $\mathbb{P}^2$-irreducible\ 3-manifolds, where minimal means ``with a minimal number of triple points''. We will call a quasi-filling Dehn surface of a 3-manifold $M$ {\em filling} if its singularities induce a cell-decomposition of $M$. The cell-decomposition dual to a filling Dehn surface of $M$ is actually a cubulation of $M$. Hence, in order to prove the naturalness property, we will prove that every connected closed $\mathbb{P}^2$-irreducible\ 3-manifold, different from $S^3$, $\mathbb{RP}^3$ and $L_{4,1}$, has a minimal filling Dehn surface. We point out that not all the minimal quasi-filling Dehn surfaces of $\mathbb{P}^2$-irreducible\ 3-manifolds are indeed filling. However, they can all be constructed starting from filling ones (except for $S^3$, $\mathbb{RP}^3$ and $L_{4,1}$, for which non-filling ones must be used) and applying a simple move, which we will call the {\em bubble-move}. The surface-complexity is related to the Matveev complexity. Indeed, if $M$ is a connected closed $\mathbb{P}^2$-irreducible\ 3-manifold different from $L_{3,1}$ and $L_{4,1}$, the double inequality $\frac{1}{8}c(M) \leqslant sc(M) \leqslant 4c(M)$ holds, where $c(M)$ denotes the Matveev complexity of $M$. For the sake of completeness, we recall that we have $c(L_{3,1})=0$, $sc(L_{3,1})>0$, $c(L_{4,1})>0$ and $sc(L_{4,1})=0$. The two inequalities above also give estimates of the surface-complexity. In general, an exact calculation of the surface-complexity of a closed 3-manifold is very difficult; however, it is relatively easy to estimate it. More precisely, it is quite easy to give upper bounds for it, because constructing a quasi-filling Dehn surface of the manifold with the appropriate number of triple points suffices.
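Since $sc(M)$ is a non-negative integer, the double inequality above pins $sc(M)$ to a small integer range once $c(M)$ is known. The following Python sketch of this arithmetic is ours and purely illustrative; it assumes $M$ is one of the manifolds to which the inequality applies.

```python
import math

def sc_bounds(c):
    """Integer range allowed for sc(M) by (1/8) c(M) <= sc(M) <= 4 c(M).

    Only meaningful for connected closed P^2-irreducible 3-manifolds
    different from L_{3,1} and L_{4,1}, where the double inequality holds.
    """
    return (math.ceil(c / 8), 4 * c)

# For example, a manifold with Matveev complexity 9 has surface-complexity
# between 2 and 36.
print(sc_bounds(9))  # (2, 36)
```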
With this technique, we will give upper bounds for the surface-complexity of a closed 3-manifold starting from a triangulation, a Heegaard splitting and a surgery presentation on a framed link (in $S^3$) of it. In the Appendix, we will state some results on closed 3-manifolds with surface-complexity one and we will give some examples. However, we will postpone the theoretical proof of these results and the classification of the closed 3-manifolds with surface-complexity one to a subsequent paper~\cite{Amendola:next}. For the sake of completeness, in the Appendix we will also give a brief description of what happens in the 2-dimensional case. We plan to cope with the 4-dimensional case in a subsequent paper. \section{Definitions} Throughout this paper, all 3-manifolds are assumed to be connected and closed. By $M$, we will always denote such a (connected and closed) 3-manifold. Using the {\em Hauptvermutung}, we will freely intermingle the differentiable, piecewise linear and topological viewpoints. \paragraph{Dehn surfaces} A subset $\Sigma$ of $M$ is said to be a {\em Dehn surface of $M$}~\cite{Papa} if there exists an abstract (possibly non-connected) closed surface $S$ and a transverse immersion $f\colon\thinspace S\to M$ such that $\Sigma = f(S)$. Let us fix for a while $f\colon\thinspace S\to M$ a transverse immersion (hence, $\Sigma = f(S)$ is a Dehn surface of $M$). By transversality, the number of pre-images of a point of $\Sigma$ is 1, 2 or 3; so there are three types of points in $\Sigma$, depending on this number; they are called {\em simple}, {\em double} or {\em triple}, respectively. Note that the definition of the type of a point does not depend on the particular transverse immersion $f\colon\thinspace S\to M$ we have chosen. In fact, the type of a point can be also defined by looking at a regular neighbourhood (in $M$) of the point, as shown in Fig.~\ref{fig:neigh_Dehn_surf}. 
The set of triple points is denoted by $T(\Sigma)$; non-simple points are called {\em singular} and their set is denoted by $S(\Sigma)$. \begin{figure}[t] \centerline{ \begin{tabular}{ccc} \begin{minipage}[c]{3.5cm}{\small{\begin{center} \includegraphics{neigh_Dehn_surf_simple.EPS} \end{center}}}\end{minipage} & \begin{minipage}[c]{3.5cm}{\small{\begin{center} \includegraphics{neigh_Dehn_surf_double.EPS} \end{center}}}\end{minipage} & \begin{minipage}[c]{3.5cm}{\small{\begin{center} \includegraphics{neigh_Dehn_surf_triple.EPS} \end{center}}}\end{minipage} \\ \begin{minipage}[t]{3.5cm}{\small{\begin{center} Simple\\point \end{center}}}\end{minipage} & \begin{minipage}[t]{3.5cm}{\small{\begin{center} Double\\point \end{center}}}\end{minipage} & \begin{minipage}[t]{3.5cm}{\small{\begin{center} Triple\\point \end{center}}}\end{minipage} \end{tabular}} \caption{Neighbourhoods of points (marked by thick dots) of a Dehn surface.} \label{fig:neigh_Dehn_surf} \end{figure} From now on, in all figures, triple points are always marked by thick dots and the singular set is also drawn thick. \begin{rem} The topological type of the abstract surface $S$ is determined unambiguously by $\Sigma$. \end{rem} \paragraph{(Quasi-)filling Dehn surfaces} A Dehn surface $\Sigma$ of $M$ will be called {\em quasi-filling} if $M \setminus \Sigma$ is made up of balls. Moreover, $\Sigma$ is called {\em filling}~\cite{Montesinos} if its singularities induce a cell-decomposition of $M$; more precisely, \begin{itemize} \item $T(\Sigma) \neq \emptyset$, \item $S(\Sigma) \setminus T(\Sigma)$ is made up of intervals (called {\em edges}), \item $\Sigma \setminus S(\Sigma)$ is made up of discs (called {\em regions}), \item $M \setminus \Sigma$ is made up of balls ({\em i.e.}~$\Sigma$ is quasi-filling). \end{itemize} Since $M$ is connected and $M\setminus\Sigma$ is made up of (disjoint) balls, the quasi-filling Dehn surface $\Sigma$ is connected. 
Consider a small regular neighbourhood ${\cal R}(\Sigma)$ of $\Sigma$ in $M$; then $M\setminus{\cal R}(\Sigma)$ is made up of balls whose closures are disjoint and $M$ can be obtained from ${\cal R}(\Sigma)$ by filling up its boundary components with balls. Moreover, we have that $M$ minus some balls, {\em i.e.}~${\cal R}(\Sigma)$, collapses to $\Sigma$. \begin{rem}\label{rem:Sigma_surf} Suppose $\Sigma$ is a surface ({\em i.e.}~$S(\Sigma)=\emptyset$). Then the boundary of the regular neighbourhood ${\cal R}(\Sigma)$ of $\Sigma$ in $M$ is a two-fold covering of $\Sigma$. Since the boundary of ${\cal R}(\Sigma)$ is made up of spheres and $\Sigma$ is connected, $\Sigma$ is the sphere $S^2$ or the projective plane $\mathbb{RP}^2$. Therefore, $M$ is the sphere $S^3$ or the projective space $\mathbb{RP}^3$, respectively. \end{rem} Let us give some other examples. Two projective planes intersecting along a loop non-trivial in both of them form a quasi-filling Dehn surface of $\mathbb{RP}^3$, which will be called the {\em double projective plane} and denoted by $2\times\RP^2$. The sphere intersecting a torus (resp.~a Klein bottle) along a loop is a quasi-filling Dehn surface of $S^2\times S^1$ (resp.~$S^2 \timtil S^1$) without triple points. The {\em quadruple hat} ({\em i.e.}~a disc whose boundary is glued four times along a circle) is a quasi-filling Dehn surface of the lens space $L_{4,1}$ without triple points. If we identify the sphere $S^3$ with $\mathbb{R}^3\cup\{\infty\}$, the three coordinate planes in $\mathbb{R}^3$, with $\infty$ added, form a filling Dehn surface of $S^3$ with two triple points: $(0,0,0)$ and $\infty$. It is by now well-known that a filling Dehn surface determines $M$ up to homeomorphism and that every $M$ has standard filling Dehn surfaces (see, for instance, Montesinos-Amilibia~\cite{Montesinos} and Vigara~\cite{Vigara:present}; see also~\cite{Amendola:surf_inv}). It is not clear how any two standard filling Dehn spheres of the same $M$ are related to each other.
There are only partial results; for instance, we provided in~\cite{Amendola:surf_inv} a finite calculus for nullhomotopic filling Dehn spheres, deducing it from another one, described by Vigara~\cite{Vigara:calculus}, which has been derived from the more general Homma-Nagase calculus~\cite{Homma-NagaseI, Homma-NagaseII} (see also Hass and Hughes~\cite{Hass-Hughes} and Roseman~\cite{Roseman}). \paragraph{Abstract filling Dehn surfaces} A filling Dehn surface $\Sigma$ of $M$ is contained in $M$. However, we can think of it as an abstract cell complex. For the sake of completeness, we point out that the abstract cell complex $\Sigma$ determines $M$ (and the abstract surface $S$ such that $\Sigma=f(S)$ where $f \colon\thinspace S \rightarrow M$) up to homeomorphism. The proof of this fact is quite easy (and not strictly connected with the aim of this paper), so we leave it to the reader. \paragraph{Surface-complexity} The surface-complexity of $M$ can now be defined as the minimal number of triple points of a quasi-filling Dehn surface of $M$. More precisely, we give the following. \begin{defi} The {\em surface-complexity} $sc(M)$ of $M$ is equal to $c$ if $M$ possesses a quasi-filling Dehn surface with $c$ triple points and has no quasi-filling Dehn surface with fewer than $c$ triple points. In other words, $sc(M)$ is the minimum of $|T(\Sigma)|$ over all quasi-filling Dehn surfaces $\Sigma$ of $M$. \end{defi} We will classify the 3-manifolds having surface-complexity zero in the following section. At the moment, we can only say that $S^3$, $\mathbb{RP}^3$, $S^2\times S^1$, $S^2 \timtil S^1$ and $L_{4,1}$ have surface-complexity zero, because we have seen above that they have quasi-filling Dehn surfaces without triple points. \paragraph{Triple point spectrum} For the sake of completeness, we also give Vigara's definition of the triple point spectrum~\cite{Vigara:calculus}.
The {\em triple point spectrum} of $M$ is a sequence of integers $t_i(M)$, with $i\in\mathbb{N}$, such that $t_i(M)$ is the minimal number of triple points of a filling Dehn surface of $M$ of genus $i$. \section{Minimality and finiteness} A quasi-filling Dehn surface $\Sigma$ of $M$ is called {\em minimal} if it has the minimal number of triple points among all quasi-filling Dehn surfaces of $M$, {\em i.e.}~$|T(\Sigma)|=sc(M)$. \begin{teo}\label{teo:minimal_filling} Let $M$ be a (connected and closed) $\mathbb{P}^2$-irreducible\ 3-manifold. \begin{itemize} \item If $sc(M)=0$, then $M$ is the sphere $S^3$, the projective space $\mathbb{RP}^3$ or the lens space $L_{4,1}$. \item If $sc(M)>0$, then $M$ has a minimal filling Dehn surface. \end{itemize} \end{teo} \begin{proof} Let $\Sigma$ be a minimal quasi-filling Dehn surface of $M$. If we have $S(\Sigma)=\emptyset$ ({\em i.e.}~$\Sigma$ is a surface), by virtue of Remark~\ref{rem:Sigma_surf}, we have that $M$ is the sphere $S^3$ or the projective space $\mathbb{RP}^3$. Then, we suppose $S(\Sigma)\neq\emptyset$. We will first prove that $M$ has a quasi-filling Dehn surface $\Sigma'$ such that $\Sigma' \setminus S(\Sigma')$ is made up of discs. In fact, suppose there exists a component $C$ of $\Sigma \setminus S(\Sigma)$ that is not a disc. Then $C$ contains a simple closed curve $\gamma$ that is non-trivial and orientation-preserving in $C$. Consider a strip $A$ contained in a small regular neighbourhood ${\cal R}(\Sigma)$ of $\Sigma$ in $M$ such that $A\cap\Sigma=\gamma$ and $A\cap\partial{\cal R}(\Sigma)=\partial A$. (Note that $A$ is an annulus or a M\"obius strip depending on whether $\gamma$ is orientation-preserving in $M$ or not.) Since $M\setminus{\cal R}(\Sigma)$ is made up of balls, we can fill up $\partial A$ with one or two discs disjoint from $\Sigma$, getting a sphere or a projective plane (depending on whether $A$ is an annulus or a M\"obius strip).
Note that both the sphere and the projective plane constructed in this way are transversely orientable; hence, the second case cannot occur, because $M$ is $\mathbb{P}^2$-irreducible and thus contains no two-sided projective plane. In the first case, since $M$ is $\mathbb{P}^2$-irreducible, the sphere we have found bounds a ball, say $B$. Since $B$ is a ball and $\Sigma\cap\partial B=\gamma$ is a simple closed curve, we can replace the portion of $\Sigma$ contained in $B$ with a disc, getting a new quasi-filling Dehn surface of $M$. Note that the Euler characteristic of the component of $\Sigma \setminus S(\Sigma)$ containing $\gamma$ has increased, that no new non-disc component has been created and that the number of triple points has not changed. Hence, by repeatedly applying this procedure, we eventually get a quasi-filling Dehn surface, say $\Sigma'$, of $M$ such that $\Sigma' \setminus S(\Sigma')$ is made up of discs. Since $\Sigma'$ is connected and $\Sigma' \setminus S(\Sigma')$ is made up of discs, we have that $S(\Sigma')$ is also connected. If we have $sc(M)>0$ ({\em i.e.}~$T(\Sigma')$ is not empty), the connected set $S(\Sigma')$ contains triple points, so $S(\Sigma') \setminus T(\Sigma')$ cannot contain circles and hence $\Sigma'$ is filling ({\em i.e.}~$M$ has a minimal filling Dehn surface). Otherwise, if we have $sc(M)=0$ ({\em i.e.}~$T(\Sigma')$ is empty), $S(\Sigma')$ is made up of one circle. Since $\Sigma' \setminus S(\Sigma')$ is made up of discs, the Dehn surface $\Sigma'$ is completely determined by the regular neighbourhood of $S(\Sigma')$ in $\Sigma'$. This neighbourhood depends on how the germs of discs are interchanged along the curve $S(\Sigma')$. Among all possibilities we must rule out those not yielding a quasi-filling Dehn surface, hence only three must be taken into account (up to symmetry): \begin{itemize} \item two spheres intersecting along the circle $S(\Sigma')$, which form a Dehn surface of $S^3$; \item the double projective plane $2\times\RP^2$, which is a Dehn surface of $\mathbb{RP}^3$; \item the four-hat, which is a Dehn surface of $L_{4,1}$.
\end{itemize} The proof is complete. \end{proof} Since there are only finitely many filling Dehn surfaces with a fixed number of triple points, we have the following corollary of Theorem~\ref{teo:minimal_filling}. \begin{cor} For any integer $c$ there exists only a finite number of (connected and closed) $\mathbb{P}^2$-irreducible\ 3-manifolds having surface-complexity $c$. \end{cor} \subsection{Minimal quasi-filling Dehn surfaces} Not all minimal quasi-filling Dehn surfaces of $\mathbb{P}^2$-irreducible\ 3-manifolds are indeed filling. However, they can all be constructed starting from filling ones (except for $S^3$, $\mathbb{RP}^3$ and $L_{4,1}$, for which non-filling ones must be used) and applying a simple move. The move acts on quasi-filling Dehn surfaces near a simple point as shown in Fig.~\ref{fig:bubble_move} and is called {\em bubble-move}. \begin{figure}[t] \centerline{\includegraphics{bubble_move.EPS}} \caption{Bubble-move.} \label{fig:bubble_move} \end{figure} Note that the result of applying a bubble-move to a quasi-filling Dehn surface of $M$ is a quasi-filling Dehn surface of $M$, but the result of applying a bubble-move to a filling Dehn surface is not a filling Dehn surface. Note also that the bubble-move increases (by two) the number of connected components of $M\setminus\Sigma$. If a quasi-filling Dehn surface $\Sigma$ is obtained from a quasi-filling Dehn surface $\overline{\Sigma}$ by repeatedly applying bubble-moves, we will say that $\Sigma$ {\em is derived from} $\overline{\Sigma}$. Note that if $\Sigma$ is a quasi-filling Dehn surface of $M$ and is derived from $\overline{\Sigma}$, then $\overline{\Sigma}$ is a quasi-filling Dehn surface of $M$. Theorem~\ref{teo:minimal_filling} can be improved by means of a slightly subtler analysis. \begin{lem}\label{lem:all_minimal_filling_sphere} Let $\Sigma$ be a minimal quasi-filling Dehn surface of the sphere $S^3$ and let $D$ be a closed disc in $\Sigma$ disjoint from the singular set of $\Sigma$.
Then $\Sigma$ is derived from a sphere $S^2$ by means of bubble-moves not involving $D$. \end{lem} \begin{proof} Since the surface-complexity of $S^3$ is zero, the number of triple points of $\Sigma$ is zero and hence the connected components of $S(\Sigma)$, if there are any, are simple closed curves. If we have $S(\Sigma)=\emptyset$ ({\em i.e.}~$\Sigma$ is a surface), by virtue of Remark~\ref{rem:Sigma_surf}, we have that $\Sigma$ is the sphere $S^2$. Then, we will suppose $S(\Sigma)\neq\emptyset$ and we will prove the statement by induction on the number of connected components of $S(\Sigma)$. Suppose that $S(\Sigma)$ has one connected component. We will first prove that $\Sigma \setminus S(\Sigma)$ is made up of discs. In fact, suppose by contradiction that there exists a component $C$ of $\Sigma \setminus S(\Sigma)$ that is not a disc. Then $C$ contains a simple closed curve $\gamma$ that is non-trivial and orientation-preserving in $C$. Consider a strip $A$ contained in a small regular neighbourhood ${\cal R}(\Sigma)$ of $\Sigma$ in $S^3$ such that $A\cap\Sigma=\gamma$ and $A\cap\partial{\cal R}(\Sigma)=\partial A$. Note that $A$ is an annulus because $\gamma$ is orientation-preserving in $S^3$. Since $S^3\setminus{\cal R}(\Sigma)$ is made up of balls, we can fill up the two boundary circles of $A$ with two discs disjoint from $\Sigma$, getting a sphere. This sphere splits $S^3$ into two balls, say $B_1$ and $B_2$. Since $S(\Sigma)$ does not intersect the disconnecting sphere, we have that $S(\Sigma)$ is wholly contained either in $B_1$ or in $B_2$ (we can assume in $B_1$). Hence, $\Sigma \cap B_2$ is a surface cutting $B_2$ up into two balls, whose boundaries contain $\Sigma \cap B_2$. Since $\Sigma \cap B_2$ has only one boundary component ($\gamma$), it is a disc and hence $\gamma$ is trivial in $C$, a contradiction. We have proved that $\Sigma \setminus S(\Sigma)$ is made up of discs.
Since $S(\Sigma)$ is connected and does not contain triple points, it is a circle and the Dehn surface $\Sigma$ is completely determined by the regular neighbourhood of $S(\Sigma)$ in $\Sigma$. This neighbourhood depends on how the germs of discs are interchanged along the curve $S(\Sigma)$. Among all possibilities, only one yields a quasi-filling Dehn surface of $S^3$: more precisely, $\Sigma$ is composed of two spheres intersecting along the circle $S(\Sigma)$. We conclude by noting that $\Sigma$ is derived from the sphere $S^2$ by means of a bubble-move not involving $D$. Finally, suppose that $S(\Sigma)$ has $n$ components with $n>1$ and suppose that the statement is true for all minimal quasi-filling Dehn surfaces of $S^3$ whose singular set has fewer than $n$ components. Since $S(\Sigma)$ is not connected, there is a connected component, say $C$, of $\Sigma \setminus S(\Sigma)$ that is not a disc (otherwise, arguing as above, $S(\Sigma)$ would be connected). The component $C$ contains a simple closed curve $\gamma$, non-trivial and orientation-preserving in $C$, disjoint from the disc $D$. As done above, we can construct a sphere $S^2$ intersecting $\Sigma$ along $\gamma$. Cutting $S^3$ along this sphere, we obtain two balls (say $B_1$ and $B_2$) both of which contain some components of $S(\Sigma)$. Note that the disc $D$ is wholly contained either in $B_1$ or in $B_2$ (we can assume in $B_1$). Consider now $\Sigma_2 = \Sigma \cap B_2$. If we fill up $\Sigma_2$ with a disc (say $D_2$) by gluing it along $\gamma$, we obtain a minimal quasi-filling Dehn surface $\Sigma_2'$ of $S^3$ such that $S(\Sigma_2')$ has fewer than $n$ components. We can apply the inductive hypothesis and we have that $\Sigma_2'$ is derived from a sphere $S^2$ by means of bubble-moves not involving $D_2$. Since these moves do not involve the disc $D_2$, we can repeat them on $\Sigma$, obtaining a minimal quasi-filling Dehn surface $\Sigma_1'$ of $S^3$ such that $S(\Sigma_1')$ has fewer than $n$ components.
Note that none of these moves involves the disc $D$. By applying again the inductive hypothesis, we obtain that $\Sigma_1'$ is derived from a sphere $S^2$ by means of bubble-moves not involving $D$. Summing up, $\Sigma$ is derived from a sphere $S^2$ by means of bubble-moves not involving $D$. This concludes the proof. \end{proof} We are now able to prove the theorem that tells us how to construct all minimal quasi-filling Dehn surfaces starting from the filling ones (except for $S^3$, $\mathbb{RP}^3$ and $L_{4,1}$, for which non-filling ones must be used). \begin{teo} Let $\Sigma$ be a minimal quasi-filling Dehn surface of a (connected and closed) $\mathbb{P}^2$-irreducible\ 3-manifold $M$. \begin{itemize} \item If $sc(M)=0$, one of the following holds: \begin{itemize} \item $M$ is the sphere $S^3$ and $\Sigma$ is derived from the sphere $S^2$, \item $M$ is the projective space $\mathbb{RP}^3$ and $\Sigma$ is derived from the projective plane $\mathbb{RP}^2$ or from the double projective plane $2\times\RP^2$, \item $M$ is the lens space $L_{4,1}$ and $\Sigma$ is derived from the four-hat. \end{itemize} \item If $sc(M)>0$, then $\Sigma$ is derived from a minimal filling Dehn surface of $M$. \end{itemize} \end{teo} \begin{proof} The scheme of the proof is the same as that of Theorem~\ref{teo:minimal_filling}; hence, we will often refer to the proof of Theorem~\ref{teo:minimal_filling} also for notation. Let $\Sigma$ be a minimal quasi-filling Dehn surface of $M$. If we have $S(\Sigma)=\emptyset$ ({\em i.e.}~$\Sigma$ is a surface), by virtue of Remark~\ref{rem:Sigma_surf}, we have that $\Sigma$ is the sphere $S^2$ or the projective plane $\mathbb{RP}^2$, and that $M$ is the sphere $S^3$ or the projective space $\mathbb{RP}^3$, respectively. Then, we suppose $S(\Sigma)\neq\emptyset$.
We will first prove that $\Sigma$ is derived from a (minimal) quasi-filling Dehn surface $\Sigma'$ of $M$ such that either $\Sigma' \setminus S(\Sigma')$ is made up of discs or $\Sigma'$ is a surface. In fact, suppose there exists a component $C$ of $\Sigma \setminus S(\Sigma)$ that is not a disc. Then $C$ contains a simple closed curve $\gamma$ that is non-trivial and orientation-preserving in $C$. As done in the proof of Theorem~\ref{teo:minimal_filling}, we can construct a sphere $S^2$ contained in $M$ such that $S^2\cap\Sigma=\gamma$. Since $M$ is $\mathbb{P}^2$-irreducible, this sphere bounds a ball, say $B$. Consider now $\Sigma_1 = \Sigma \setminus B$ and $\Sigma_2 = \Sigma \cap B$. If we fill up $\Sigma_1$ with a disc by gluing it along $\gamma$, we obtain a minimal quasi-filling Dehn surface $\Sigma_1'$ of $M$. Analogously, if we fill up $\Sigma_2$ with a disc (say $D$) by gluing it along $\gamma$, we obtain a minimal quasi-filling Dehn surface $\Sigma_2'$ of $S^3$. By virtue of Lemma~\ref{lem:all_minimal_filling_sphere}, $\Sigma_2'$ is derived from a sphere $S^2$ by means of bubble-moves not involving $D$. These moves can be applied to $\Sigma_1'$ because they do not involve $D$. Note that the Euler characteristic of the component of $\Sigma \setminus S(\Sigma)$ containing $\gamma$ has increased, that no new non-disc component has been created and that the number of triple points has not changed. Hence, by repeatedly applying this procedure, we eventually get a (minimal) quasi-filling Dehn surface $\Sigma'$ of $M$ from which $\Sigma$ is derived and such that either $\Sigma' \setminus S(\Sigma')$ is made up of discs or $\Sigma'$ is a surface. If $\Sigma'$ is a surface, by virtue of Remark~\ref{rem:Sigma_surf}, $\Sigma'$ is either $S^2$ or $\mathbb{RP}^2$, and $M$ is $S^3$ or $\mathbb{RP}^3$, respectively; therefore, we are done. Then, we suppose that $\Sigma' \setminus S(\Sigma')$ is made up of discs.
Since $\Sigma'$ is connected and $\Sigma' \setminus S(\Sigma')$ is made up of discs, we have that $S(\Sigma')$ is also connected. If we have $sc(M)>0$ ({\em i.e.}~$T(\Sigma')$ is not empty), $S(\Sigma') \setminus T(\Sigma')$ cannot contain circles and hence $\Sigma'$ is filling ({\em i.e.}~$\Sigma$ is derived from a minimal filling Dehn surface of $M$). Otherwise, if we have $sc(M)=0$ ({\em i.e.}~$T(\Sigma')$ is empty), $S(\Sigma')$ is made up of one circle. Since $\Sigma' \setminus S(\Sigma')$ is made up of discs, the Dehn surface $\Sigma'$ is completely determined by the regular neighbourhood of $S(\Sigma')$ in $\Sigma'$. This neighbourhood depends on how the germs of discs are interchanged along the curve $S(\Sigma')$. Among all possibilities we must rule out those not yielding a quasi-filling Dehn surface, hence only three must be taken into account (up to symmetry): \begin{itemize} \item two spheres intersecting along the circle $S(\Sigma')$, which form a Dehn surface of $S^3$; \item the double projective plane $2\times\RP^2$, which is a Dehn surface of $\mathbb{RP}^3$; \item the four-hat, which is a Dehn surface of $L_{4,1}$. \end{itemize} Note that in the first case $\Sigma'$ is derived from the sphere $S^2$. Therefore, $\Sigma$ is derived from $S^2$, $2\times\RP^2$ or the four-hat. The proof is complete. \end{proof} \section{Cubulations} A {\em cubulation} of $M$ is a cell-decomposition of $M$ such that \begin{itemize} \item each 2-cell (called {\em face}) is glued along 4 edges, \item each 3-cell (called {\em cube}) is glued along 6 faces arranged like the boundary of a cube. \end{itemize} Note that self-adjacencies and multiple adjacencies are allowed. In Fig.~\ref{fig:cubul_example} we show a cubulation of the 3-dimensional torus $S^1\times S^1\times S^1$ with two cubes (the identification of each pair of faces is the obvious one, {\em i.e.}~the one without twists).
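Note an elementary count that will be useful below: since each of the cubes has $6$ faces and each face of a cubulation is glued to exactly two (possibly coincident) square sides of cubes, a cubulation with $n$ cubes has exactly
$$
\frac{6n}{2} \;=\; 3n
$$
faces. For instance, in the cubulation of Fig.~\ref{fig:cubul_example} the two cubes are glued along $6$ faces.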
\begin{figure}[t] \centerline{\includegraphics{cubul_example.EPS}} \caption{A cubulation of the 3-dimensional torus $S^1\times S^1\times S^1$ with two cubes (the identification of each pair of faces is the obvious one, {\em i.e.}~the one without twists).} \label{fig:cubul_example} \end{figure} The following construction is well-known (see~\cite{Aitchison-Matsumotoi-Rubinstein, Funar, Babson-Chan}, for instance). Let ${\cal C}$ be a cubulation of a closed 3-manifold; consider, for each cube of ${\cal C}$, the three squares shown in Fig.~\ref{fig:cube_to_surf}; the subset of $M$ obtained by gluing together all these squares is a filling Dehn surface $\Sigma$ of $M$ (up to isotopy, we can suppose that the squares fit together through the faces). \begin{figure}[t] \centerline{\includegraphics{cube_to_surf.EPS}} \caption{Local behaviour of duality.} \label{fig:cube_to_surf} \end{figure} Conversely, a cell-decomposition ${\cal C}$ can be constructed from a filling Dehn surface $\Sigma$ of $M$ by considering an abstract cube for each triple point of $\Sigma$ and by gluing the cubes together along the faces (the identification of each pair of faces is chosen by following the four germs of regions adjacent to the respective edge of $\Sigma$); the cell-decomposition ${\cal C}$ just constructed is indeed a cubulation of $M$. The cubulation and the filling Dehn surface constructed in such a way are said to be {\em dual} to each other. An obvious corollary of Theorem~\ref{teo:minimal_filling} is the following result. \begin{cor} The surface-complexity of a (connected and closed) $\mathbb{P}^2$-irreducible\ 3-manifold $M$, different from $S^3$, $\mathbb{RP}^3$ and $L_{4,1}$, is equal to the minimal number of cubes in a cubulation of $M$. \end{cor} \section{Subadditivity}\label{sec:subadditivity} An important feature of a complexity function is that it behaves well with respect to cut-and-paste operations.
In this section, we will prove that the surface-complexity is subadditive under connected sum. We do not know whether it is indeed additive. \begin{teo}\label{teo:sub_additivity} The surface-complexity of the connected sum of (connected and closed) 3-manifolds is less than or equal to the sum of their surface-complexities. \end{teo} \begin{proof} In order to prove the theorem, it is enough to prove the statement in the case where the connected sum involves two manifolds. Hence, if we call $M_1$ and $M_2$ the two manifolds, we need to prove that $sc(M_1\# M_2) \leqslant sc(M_1) + sc(M_2)$. Let $\Sigma_1$ (resp.~$\Sigma_2$) be a quasi-filling Dehn surface of $M_1$ (resp.~$M_2$) with $sc(M_1)$ (resp.~$sc(M_2)$) triple points. Since the balls we remove to obtain the connected sum can be chosen disjoint from the $\Sigma_i$'s, we can suppose that $\Sigma_1$ and $\Sigma_2$ are embedded also in the connected sum $M_1\# M_2$. All the components of $(M_1\# M_2) \setminus (\Sigma_1 \cup \Sigma_2)$ are balls, except one that is a product $S^2 \times [0,1]$; see Fig.~\ref{fig:connected_sum}-left. \begin{figure}[t] \centerline{\includegraphics{connected_sum.EPS}} \caption{The Dehn surface $\Sigma_1 \cup \Sigma_2$ in $M_1\# M_2$ (left) and its modification $\Sigma$ being quasi-filling (right).} \label{fig:connected_sum} \end{figure} We modify $\Sigma_1 \cup \Sigma_2$ as shown in Fig.~\ref{fig:connected_sum}-right, getting a Dehn surface, say $\Sigma$. The complement $(M_1\# M_2) \setminus \Sigma$ is made up of the same balls as before (up to isotopy), a new small ball and a product $D^2 \times [0,1]$ (which is indeed a ball). Therefore, $\Sigma$ is a quasi-filling Dehn surface of $M_1\# M_2$. Since $\Sigma$ has $sc(M_1)+sc(M_2)$ triple points, we have $sc(M_1\# M_2) \leqslant sc(M_1) + sc(M_2)$.
\end{proof} \section{Estimations} \subsection{Matveev complexity}\label{subsec:Matveev_complexity} The Matveev complexity~\cite{Matveev:compl} of a closed 3-manifold $M$ is defined using simple spines. A polyhedron $P$ is {\em simple} if the link of each point of $P$ can be embedded in the 1-skeleton of the tetrahedron. The points of $P$ whose link is the whole 1-skeleton of the tetrahedron are called {\em vertices}. A sub-polyhedron $P$ of $M$ is a {\em spine} of $M$ if $M \setminus P$ is a ball. The {\em Matveev complexity} $c(M)$ of $M$ is the minimal number of vertices of a simple spine of $M$. The Matveev complexity is a natural measure of how complicated the manifold is, because if $M$ is $\mathbb{P}^2$-irreducible\ and different from the sphere $S^3$, the projective space $\mathbb{RP}^3$ and the lens space $L_{3,1}$, then its Matveev complexity is the minimum number of tetrahedra in a triangulation of it (the Matveev complexity of $S^3$, $\mathbb{RP}^3$ and $L_{3,1}$ is zero). A simple spine of $M$ is {\em standard} if it is purely 2-dimensional and its singularities induce a cell-decomposition of it. The dual cellularization of a standard spine of $M$ is a one-vertex triangulation of $M$, see~\cite{Matveev:book}. The Matveev complexity is related to the surface-complexity. Before describing this relation more precisely, we describe two constructions allowing us to create standard spines (or one-vertex triangulations, by duality) from filling Dehn surfaces (or cubulations, by duality), and {\em vice versa}. Let ${\cal T}$ be a one-vertex triangulation of $M$. Consider, for each tetrahedron of ${\cal T}$, the four triangles shown in Fig.~\ref{fig:tria_to_surf}.
\begin{figure}[t] \centerline{\includegraphics{tria_to_surf.EPS}} \caption{Construction of a nullhomotopic filling Dehn sphere from a one-vertex triangulation.} \label{fig:tria_to_surf} \end{figure} The subset of $M$ obtained by gluing together all these triangles is a Dehn surface $\Sigma$ of $M$ with four triple points for each tetrahedron of ${\cal T}$, hence with $4c(M)$ triple points if ${\cal T}$ has the minimal number of tetrahedra (up to isotopy, we can suppose that the triangles fit together through the faces). It is very easy to prove that $\Sigma$ is filling, so we leave it to the reader. The construction just described is the dual counterpart of the well-known construction consisting in dividing a tetrahedron into four cubes~\cite{Shtanko-Shtogrin, Dolbilin-Shtanko-Shtogrin, Funar}. Conversely, let ${\cal C}$ be a cubulation of $M$. Consider, for each cube of ${\cal C}$, the five tetrahedra shown in Fig.~\ref{fig:cube_to_tetra}. \begin{figure}[t] \centerline{\includegraphics{cube_to_tetra.EPS}} \caption{Construction of a triangulation from a cubulation.} \label{fig:cube_to_tetra} \end{figure} The idea is to glue together these ``bricks'' (each of which is made up of five tetrahedra) by following the identifications of the faces of ${\cal C}$. Note that the faces of the cubes are divided by diagonals into two triangles and that it may occur that these pairs of triangles do not match each other. If they do not match each other, we insert a tetrahedron between them as shown in Fig.~\ref{fig:insert_tetra}. \begin{figure}[t] \centerline{\includegraphics{insert_tetra.EPS}} \caption{Inserting a tetrahedron between two pairs of triangles not matching each other.} \label{fig:insert_tetra} \end{figure} Eventually, we get a triangulation ${\cal T}$ of $M$ with $5$ tetrahedra for each cube of ${\cal C}$ and at most one tetrahedron for each face of ${\cal C}$. Since the number of faces of a cubulation is thrice the number of cubes, if ${\cal C}$ has $sc(M)$ cubes, the triangulation ${\cal T}$ we have constructed has at most $8sc(M)$ tetrahedra.
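Summing up, the count above can be displayed as
$$
\#\{\mbox{tetrahedra of } {\cal T}\} \;\leqslant\; 5\,sc(M) \,+\, 3\,sc(M) \;=\; 8\,sc(M),
$$
with five tetrahedra for each of the $sc(M)$ cubes and at most one inserted tetrahedron for each of the $3\,sc(M)$ faces of ${\cal C}$.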
We note that there are two different identifications of the abstract ``brick'' with each cube, so there are $2^{sc(M)}$ possibilities for the identifications with the cubes of ${\cal C}$. Some of them may need fewer insertions of tetrahedra (for matching the pairs of triangles in the faces of ${\cal C}$) than others. Hence, optimal choices may lead to a triangulation of $M$ with $5sc(M)$ tetrahedra or a few more. The two constructions above and the list of the (connected and closed) $\mathbb{P}^2$-irreducible\ 3-manifolds with $c=0$ or $sc=0$ obviously imply the following. \begin{teo}\label{teo:esimation_matveev-surface} Let $M$ be a (connected and closed) $\mathbb{P}^2$-irreducible\ 3-manifold different from $L_{3,1}$ and $L_{4,1}$; then we have $$ sc(M) \leqslant 4c(M) \quad \mbox{and} \quad c(M) \leqslant 8sc(M). $$ Moreover, we have $c(L_{3,1})=0$, $sc(L_{3,1})>0$, $c(L_{4,1})>0$ and $sc(L_{4,1})=0$. \end{teo} \subsection{Other estimations} In general, calculating the surface-complexity $sc(M)$ of $M$ is very difficult; however, it is relatively easy to estimate it. More precisely, it is quite easy to give upper bounds for it: if we construct a quasi-filling Dehn surface $\Sigma$ of $M$, the number of triple points of $\Sigma$ is an upper bound for the surface-complexity of $M$. Afterwards, the (usually difficult) problem of proving the sharpness of this bound arises. We can construct quasi-filling Dehn surfaces of $M$ from many presentations of $M$, and hence we obtain estimates from many such presentations. Here we use three of them: triangulations, Heegaard splittings and Dehn surgery. We have already constructed a filling Dehn surface of $M$ from a one-vertex triangulation of $M$ in Section~\ref{subsec:Matveev_complexity}. The same construction applies to any triangulation of $M$, yielding the following result. \begin{teo} Suppose a closed 3-manifold $M$ has a triangulation with $n$ tetrahedra. Then, we have $sc(M) \leqslant 4n$.
\end{teo} \begin{proof} Let ${\cal T}$ be the triangulation of $M$ with $n$ tetrahedra. Consider, for each tetrahedron of ${\cal T}$, the four triangles shown in Fig.~\ref{fig:tria_to_surf}. The subset of $M$ obtained by gluing together all these triangles is a Dehn surface $\Sigma$ of $M$ with $4n$ triple points. It is very easy to prove that $\Sigma$ is filling, so we leave it to the reader. \end{proof} \begin{teo}\label{teo:heegaard_estimation} Suppose $M = H_1 \cup H_2$ is a Heegaard splitting of a closed 3-manifold $M$ such that the meridians of the handlebody $H_1$ intersect those of $H_2$ transversely in $n$ points. Then, we have $sc(M) \leqslant 4n$. \end{teo} \begin{proof} Let $g$ be the number of meridians of each handlebody and let $H$ be the common boundary of $H_1$ and $H_2$. Let $\mu_1^i$ (resp.~$\mu_2^i$), with $i=1,\ldots,g$, be the meridians of $H_1$ (resp.~$H_2$). Let us suppose at first that each meridian of $H_1$ intersects at least one meridian of $H_2$, and {\em vice versa}. Let $D_j^i$ be the disc bounded by $\mu_j^i$ in $H_j$. The boundary of a small regular neighbourhood of a disc $D_j^i$ is a sphere (say $S_j^i$) intersecting $H$ transversely along two loops parallel to $\mu_j^i$, see Fig.~\ref{fig:meridian}. \begin{figure}[t] \centerline{\includegraphics{meridian.eps}} \caption{The sphere $S_j^i$ near $H$.} \label{fig:meridian} \end{figure} The union of $H$ and the spheres $S_*^*$ is a Dehn surface, say $\Sigma$; we show it near an intersection point between a meridian of $H_1$ and one of $H_2$ in Fig.~\ref{fig:meridian_intersection}. \begin{figure}[t] \centerline{\includegraphics{meridian_intersection.EPS}} \caption{The Dehn surface $\Sigma$ near an intersection point between a meridian of $H_1$ and one of $H_2$.} \label{fig:meridian_intersection} \end{figure} We prove now that $\Sigma$ is quasi-filling.
Since $H$ is contained in $\Sigma$ and since the role of the two handlebodies is symmetrical, it is enough to prove that $H_1\setminus\Sigma$ is made up of balls. The spheres $S_1^*$ divide $H_1$ into balls, because the discs $D_1^*$ do and each $S_1^i \cap H_1$ is made up of two discs parallel to $D_1^i$. Moreover, the spheres $S_2^*$ divide these balls into smaller balls, because each meridian of $H_2$ intersects at least one meridian of $H_1$. Each point of intersection of two meridians yields 4 triple points (see Fig.~\ref{fig:meridian_intersection}), hence the number of triple points of $\Sigma$ is $4n$. Therefore, we have proved that if each meridian of $H_1$ intersects at least one meridian of $H_2$, and {\em vice versa}, then we have $sc(M) \leqslant 4n$. Suppose now that some meridian of $H_1$ does not intersect any meridian of $H_2$ (the case of a meridian of $H_2$ is symmetrical). We can suppose, without loss of generality, that this meridian is $\mu_1^1$. This Heegaard splitting is reducible: a loop in $H$ parallel to $\mu_1^1$ in fact bounds a disc both in $H_1$ and in $H_2$. Let us call these discs $D_1$ and $D_2$, respectively. We can suppose that $D_1$ is parallel to $D_1^1$. The disc $D_1$ does not disconnect $H_1$; therefore, the sphere $D_1 \cup D_2$ does not disconnect $M$. Hence, $M$ is the connected sum of $S^2 \times S^1$ (or $S^2 \timtil S^1$) and another manifold, say $M'$. Moreover, we can explicitly construct a Heegaard splitting of $M'$. Namely, the two handlebodies, say $H'_j$, are obtained by cutting $H_j$ along $D_j$ (for $j=1,2$); the gluing map coincides with the old one out of the two pairs of discs created in the boundary of $H_j$ by the cut and identifies the four discs in pairs. Moreover, we consider the class of meridians of $H'_1$ made up of $\mu_1^i$, with $i=2,\ldots,g$.
In order to get a class of meridians of $H'_2$, we consider the meridians $\mu_2^i$, discarding one of them: to choose which one to discard, we look at the two spheres with holes obtained by cutting the boundary of $H'_2$ along the meridians $\mu_2^i$, and we discard a $\mu_2^i$ that is adjacent to both spheres. Note that a good choice of the $\mu_2^i$ to discard may decrease the number of intersections between the meridians; however, we only know that the number of intersections between the meridians does not increase. If now some meridian of $H'_1$ does not intersect any meridian of $H'_2$, or {\em vice versa}, then we repeat the procedure. Eventually, we have that $M$ is the connected sum of some copies of $S^2 \times S^1$ (and $S^2 \timtil S^1$) and another manifold $M^{(l)}$. Moreover, we obtain that $M^{(l)}$ has a Heegaard splitting such that each meridian of a handlebody intersects at least one meridian of the other handlebody. If there is no meridian, $M^{(l)}$ is the sphere $S^3$ and hence $sc(M^{(l)})=0 \leqslant 4n$; otherwise, we have constructed a quasi-filling Dehn surface of $M^{(l)}$ with 4 triple points for each intersection of the meridians of the Heegaard splitting of $M^{(l)}$. The number of intersections cannot increase along the procedure, therefore we have $sc(M^{(l)}) \leqslant 4n$. Since we have $sc(S^2 \times S^1)=0$ (and $sc(S^2 \timtil S^1)=0$), by virtue of Theorem~\ref{teo:sub_additivity}, we have $sc(M) \leqslant sc(M^{(l)}) \leqslant 4n$. The proof is complete. \end{proof} \begin{rem} In~\cite{Matveev:book} the following is proven:\\ {\em Suppose $M = H_1 \cup H_2$ is a Heegaard splitting of a closed 3-manifold $M$ such that the meridians of the handlebody $H_1$ intersect those of $H_2$ transversely in $n$ points. Suppose also that the closure of one of the components into which the meridians of $H_1$ and $H_2$ divide $\partial H_1 = \partial H_2$ contains $m$ such points.
Then, we have $c(M) \leqslant n-m$.} We can use this result to improve the estimate of $sc(M)$ given in Theorem~\ref{teo:heegaard_estimation} if the decomposition of $M$ into prime manifolds contains only $\mathbb{P}^2$-irreducible\ closed 3-manifolds different from $L_{3,1}$. Let $M$ be the connected sum of $\mathbb{P}^2$-irreducible\ manifolds $M_k$, such that no $M_k$ is $L_{3,1}$. By virtue of the statement above, we have $c(M) \leqslant n-m$. Since the Matveev complexity is additive under connected sum, we have $\sum_k c(M_k) = c(M) \leqslant n-m$. Note that Theorem~\ref{teo:esimation_matveev-surface} implies $sc(M_k) \leqslant 4c(M_k)$ for each $M_k$. Then, by virtue of Theorem~\ref{teo:sub_additivity}, we have $sc(M) \leqslant \sum_k sc(M_k) \leqslant \sum_k 4c(M_k) \leqslant 4n-4m$. \end{rem} \begin{teo}\label{teo:chir_to_surf} Suppose $M$ is obtained by Dehn surgery along a framed link $L$ in $S^3$ (hence $M$ is orientable). Moreover, suppose $L$ has a projection such that the framing is the blackboard one, there are $n$ crossing points and there are $m$ components containing no overpass. Then, we have $sc(M) \leqslant 8n + 4m$. \end{teo} \begin{proof} The projection plane can be regarded as a subset of a sphere $S^2$ contained in $S^3$. We add a cylinder for each arc of the projection, as shown in Fig.~\ref{fig:cylinder}. \begin{figure}[t] \centerline{\includegraphics{cylinder.eps}} \caption{Adding a cylinder for each arc of the projection.} \label{fig:cylinder} \end{figure} We connect these cylinders by a pair of intersecting cylinders for each crossing point, as shown in Fig.~\ref{fig:crossing}. \begin{figure}[t] \centerline{\includegraphics{crossing.eps}} \caption{Connecting the cylinders through the crossings.} \label{fig:crossing} \end{figure} The result is a Dehn surface, say $\Sigma$, contained in $S^3$ and made up of the sphere $S^2$ and some tori $T_i$ (namely, one torus for each component of $L$).
The complement of $\Sigma$ in $S^3$ is made up of balls and solid tori. More precisely, we have one torus for each component of $L$ and one more torus for each component containing no overpass: note indeed that if $T_i$ is the torus corresponding to one of these components, then $S^3 \setminus (S^2 \cup T_i)$ is made up of two balls and two tori, neither of which is divided into balls by overpasses. In order to divide them, we add one small sphere for each component containing no overpass, as shown in Fig.~\ref{fig:small_sphere}, getting another Dehn surface, say $\Sigma'$. \begin{figure}[t] \centerline{\includegraphics{small_sphere.EPS}} \caption{Adding small spheres to divide the useless tori into balls.} \label{fig:small_sphere} \end{figure} Now, the complement of $\Sigma'$ in $S^3$ is made up of some balls and one torus for each component of the link $L$. Moreover, up to isotopy, we can suppose that the union of the tori is a regular neighbourhood of the link $L$. Hence, the Dehn surface $\Sigma'$ can be regarded as a Dehn surface in $M$. Furthermore, the complement of $\Sigma'$ in $M$ is also made up of some balls and one torus, say $T'_i$, for each component of the link $L$. In order to get a quasi-filling Dehn surface of $M$, we will divide the tori into balls by adding spheres. For each torus $T_i$, let $\gamma_i$ be a curve giving the (blackboard) framing, disjoint from $S^2$ and lying above $S^2$ (with respect to the projection); see Fig.~\ref{fig:framing}. \begin{figure}[t] \centerline{\includegraphics{framing.eps}} \caption{The choice of the curves giving the (blackboard) framing.} \label{fig:framing} \end{figure} Moreover, consider a disc $D_i$ bounded in $T'_i$ by the curve $\gamma_i$ and consider the boundary of a small regular neighbourhood of the disc, say $S_i$. Each surface $S_i$ is a sphere intersecting $\Sigma'$ along two curves (see Fig.~\ref{fig:framing_sphere}).
\begin{figure}[t] \centerline{\includegraphics{framing_sphere.EPS}} \caption{The spheres dividing the tori into balls yield 4 triple points for each crossing point of the projection of $L$.} \label{fig:framing_sphere} \end{figure} We add the spheres $S_i$ to $\Sigma'$ and we call the result $\Sigma''$. The complement of the Dehn surface $\Sigma''$ in $M$ is made up of balls, so $\Sigma''$ is a quasi-filling Dehn surface of $M$. Moreover, $\Sigma''$ has 8 triple points for each crossing of the projection of $L$ (see Fig.~\ref{fig:framing_sphere}) and 4 triple points for each component of $L$ containing no overpass (see Fig.~\ref{fig:small_sphere}). The proof is complete. \end{proof} \begin{rem} If the framing is not the blackboard one, some curls must be added to the curves $\gamma_i$, as shown in Fig.~\ref{fig:curl}. \begin{figure}[t] \centerline{\includegraphics{curl.EPS}} \caption{A curl added if the framing is not the blackboard one.} \label{fig:curl} \end{figure} However, a simple generalisation of the procedure shown in the proof of Theorem~\ref{teo:chir_to_surf} yields an upper bound on the surface-complexity also for non-blackboard framings, in which the writhe ({\em i.e.}~the blackboard framing) appears. More precisely, with the notation of Theorem~\ref{teo:chir_to_surf}, if $fr_i$ and $w_i$ are, respectively, the framing and the writhe of the $i$-th component of the link, then we have $sc(M) \leqslant 8n + 4m + 4\sum_i |fr_i-w_i|$. \end{rem}
\section*{Introduction} \noindent Before the GWAS (genome-wide association study) era, many genetic determinants of disease were found via analysis of multiplex pedigrees, that is, by looking for genetic markers that run in families in the same way as the disease. The advent of GWAS has robbed pedigree analysis of its luster, although future swings of the methodological seesaw might bring it back into the spotlight. {\vspace{0.15cm} \noindent} After the recent discovery of hundreds of disease-associated variants, interest is focusing on the way these variants affect downstream molecular markers, such as transcripts and protein levels, and on the way the resulting changes in these markers in turn affect disease risk. Statistical methods such as Mendelian Randomization \cite{Katan1986}, hereafter denoted as MR, represent important tools in this effort. Most MR studies are based on data from unrelated individuals, a notable exception being \cite{Brumpton2019}. In the present paper we argue that by enriching these data with data from family-related individuals, a number of difficulties that are encountered in MR can be significantly attenuated. {\vspace{0.15cm} \noindent} Motivated by the above considerations, this paper discusses extensions of MR to deal with pedigree data. We adopt the Bayesian MR framework proposed by Berzuini and colleagues \cite{Berzuini2018a}, and extend it in various ways to deal with pedigree data. The proposed method exploits recent developments in Markov chain Monte Carlo (MCMC) inference, as offered by the {\tt Stan} probabilistic programming language \cite{carpenter2017}. {\vspace{0.15cm} \noindent} We illustrate the method with the aid of data generated by ImmunoChip genotyping and transcriptome/protein assays on members of Multiple Sclerosis (MS) multiplex pedigrees from an isolated population of Sardinia (an Italian island).
With this kind of data, environmental confounding and population stratification are expected to have less impact on causal effect estimates, and the effects of rare variants are expected to be easier to detect. Thanks to our Bayesian technology, we perform a "clever" analysis where an initial model is gradually elaborated to bring to bear biological theory and other relevant information. In this paper, we include in the MR model such information as a family indicator, parental protein levels and kinship. Not only do such enhancements provide extra protection against bias, but they also allow us to explore a number of secondary aspects of the biological mechanism. A further advantage of the Bayesian approach is the simple way it deals with incomplete information. In our study, missing values of the exposure (the level of a protein) are treated as additional parameters to be estimated from the data, without incurring biases, as is natural in Bayesian analysis. {\vspace{0.15cm} \noindent} The "outcome" variable of our analysis is the MS disease indicator. MS lends itself well to an MR study. The disease has tended, throughout history, to become manifest early in the reproductive lifespan, and is therefore likely to have a strong genetic component. Genetic variants are therefore expected to act as good instruments for the MR analysis. The main scientific question in this paper is whether the plasma level of IL12A protein (which in our analysis will be referred to as the "exposure") is causal with respect to development of MS (outcome). It is believed that dysregulation of circulating proteins is a causal determinant in many pathologies, more directly so than genetic variants. Our analysis is further motivated by the importance of proteins as natural drug targets.
We could have harnessed publicly available eQTL information to involve in the analysis protein concentrations in tissues other than blood, but we do not pursue this here, so as not to obscure the main, methodological, message of the paper. \vspace{1cm} \section*{Methods} \subsection*{\Large Sample Description} \noindent Our MS patients were ascertained through the case register established in 1995 in the province of Nuoro, Sardinia, Italy. Cases were diagnosed according to Poser’s criteria \cite{Poser1983}. Twenty extended MS multiplex pedigrees were selected for the analysis, for a total of $N=936$ individuals (98 cases and 838 unaffected relatives). A subset of the pedigree members had complete data, consisting of the observed levels of the IL12A protein (the exposure), the known disease indicator (the outcome variable), and the genotypes at all loci of Immunochip (see below). The remaining individuals had complete data except for the protein level, which was missing. \subsection*{\Large Genotyping Data} \noindent Genotyping data were obtained by using the Immunochip Illumina Infinium HD custom array (hereafter “Immunochip” for brevity), designed for fine mapping of 184 established autoimmune loci \cite{Beecham2013}. {\vspace{0.15cm} \noindent} The quality control-filtered dataset included 127134 Single Nucleotide Polymorphisms (SNPs) across Immunochip \cite{Fazia2017}. For a first stage of our analysis, we imposed a maximum correlation of $r^2 = 0.20$ between candidate instrumental SNPs within a 100 Kb window, by using the {\tt indep-pairwise} command of the PLINK package \cite{Purcell2007}. This yielded a total of 19121 candidate SNP instruments across Immunochip. \subsection*{\Large Protein Selection and Profiling} \noindent The protein we chose for our illustrative study was IL12A. The choice was made prior to considering the data, on the basis of Genome-Wide Significant (GWS) association between MS and genetic variants located within (e.g.
exonic, intronic, in the UTR) or in the proximity (e.g. downstream, intergenic) of the protein-coding gene \cite{Beecham2013} and on the basis of literature evidence on the biological role of this cytokine in the context of MS \cite{constantinescu1998antibodies} \cite{jahanbani2019serum} \cite{rentzos2008effect} \cite{sun2015interleukin}. Detailed information about the locations of the strongest MS association signals within or in the proximity of the protein-coding genes, and about the strengths of the MS associations, is reported for IL12A in the Supplementary Material. {\vspace{0.15cm} \noindent} Plasma profiles were analysed by using a bead-based antibody array format, consisting of polyclonal Human Protein Atlas \cite{Nilsson2005} antibodies immobilized onto microspheres in suspension \cite{Schwenk2007} \cite{Schwenk2008} (see Supplementary Material for details). \subsection*{\Large Selection of Instrumental Variants} \noindent Genetic variants with a significant marginal association ($p<5 \times 10^{-3}$) with the level of the protein of interest and mutual $r^2<0.20$ correlation were selected to act as instrumental variables (IVs) in the first stage of our analysis. The liberal $p<5 \times 10^{-3}$ threshold is justified by the fine genotyping of candidate gene regions and by recent arguments \cite{Wang2016} \cite{Yang2011} in favour of using sub-genome-wide-significance loci to strengthen biologically interesting signals. It is also justified by the relative ability of our Bayesian MR method (when compared with most frequentist approaches) to deal with the weak instrument bias, because the uncertainty of the estimated exposure coefficients is explicitly represented in the model. \subsection*{\Large Notation} \noindent In our analysis, the putative causal factor (with respect to disease) is the circulating level of protein IL12A. We call this variable the "exposure", and denote it as $X$.
We let the symbol $\Sigma_X$ denote a regime indicator \cite{Dawid2000} \cite{Dawid2002} which tells us whether we are considering the {\em actual} data generating regime for $X$, which is observational, or a {\em hypothetical} regime where variable $X$ in each individual is set to a value $x$ by intervention. The observational regime corresponds to $\Sigma_X=\emptyset$, whereas the latter, interventional, regime corresponds to $\Sigma_X=x$. In our analysis the outcome variable, $Y$, indicates whether the individual has the disease ($Y=1$) or not ($Y=0$). We are interested in the "causal effect" of $X$ on $Y$, that is, in the way the distribution of $Y$ changes when $X$ is first set by intervention to a reference value $x_0$ and then forced to take the new value $x_1$. Throughout this paper we take this causal effect to be defined as the causal odds ratio (COR): \begin{equation} COR = \frac{P(Y=1\mid \Sigma_X = x_1)} {P(Y=1\mid \Sigma_X=x_0)} \frac{1-P(Y=1\mid \Sigma_X = x_0)} {1-P(Y=1\mid \Sigma_X=x_1)} \end{equation} \noindent The reason why we cannot generally measure the causal effect by standard regression of $Y$ on $X$ is that the regression coefficient will have no causal interpretation in the presence of unobserved confounders of the exposure-outcome relationship, which we denote as $U$. This is, indeed, why we need to use MR. We shall model $U$ as an individual-level scalar variable, more precisely, a one-dimensional reduction of the unknown collection of confounders. MR requires availability of a set of instrumental variables, or instruments, denoted as $Z \equiv (Z_1, \ldots , Z_J)$, which in a standard analysis will often correspond to the individual's genotypes at a set of SNP loci. Each of these genotypes we code as "allele doses", with values $(0,1,2)$ respectively indicating presence of zero, one and two copies of the "alternative" allele at the locus.
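As a quick numerical illustration of the COR defined above, the ratio can be computed directly from the two interventional probabilities; the probabilities below are purely illustrative and are not estimates from our study:

```python
def causal_odds_ratio(p1: float, p0: float) -> float:
    """COR comparing P(Y=1 | Sigma_X = x1) = p1 against P(Y=1 | Sigma_X = x0) = p0."""
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

# Illustrative values only: an intervention that lowers disease risk
# from 0.05 to 0.04 gives a COR below 1 (a protective effect)
print(causal_odds_ratio(0.04, 0.05))
```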
For most individuals in the pedigree, we also have observed {\em (i)} maternal and paternal genotypes at each instrumental locus and {\em (ii)} the levels of protein IL12A in the father and in the mother. Let the collection of maternal (resp., paternal) genotypes for the generic individual be denoted as $Z_m$ ($Z_p$). Let the protein levels for the mother and the father of the generic individual be denoted as $P_M$ and $P_F$, respectively. We further introduce an individual-level categorical variable, denoted as $F$, which indicates the individual's pedigree of membership, or family. Further notation will be introduced in the next sections, as required. \subsection*{\Large Assumptions} \noindent This paper uses Dawid's conditional independence formalism \cite{Dawid1979}, with the $\mbox{$\,\perp\!\!\!\perp\,$}$ symbol representing conditional independence, so that $A \mbox{$\,\perp\!\!\!\perp\,$} B \mid C$ stands for ``$A$ is independent of $B$ given $C$''\xspace, and $A \mbox{$\,\not\!\perp\!\!\!\perp\,$} B$ means ``$A$ is not independent of $B$''\xspace. All but one of the conditions introduced in this section are required for the validity of the method. They are essentially identical to those required by standard MR methods. {\vspace{0.15cm} \noindent} Here are the assumptions. Each $j$th instrumental variable, $Z_j$, must satisfy the {\em confounder independence} condition $Z_j \mbox{$\,\perp\!\!\!\perp\,$} U$, stating that the instrument is unrelated to exposure-outcome confounders. A further condition called {\em exclusion-restriction} requires that $Y \mbox{$\,\perp\!\!\!\perp\,$} Z_j \mid (X,U)$, that is, each $j$th instrument can be associated with the response only via the exposure. Exclusion-restriction is a desirable condition; however, unlike the remaining conditions in this section, it is not required by our method.
Next comes the {\em instrument relevance} condition, $Z_j \mbox{$\,\not\!\perp\!\!\!\perp\,$} X$, stating that no instrument is independent of the exposure. We have also conditions involving the regime indicator, $\Sigma_X$. The {\em confounder invariance} condition, $U \mbox{$\,\perp\!\!\!\perp\,$} \Sigma_X$, requires that the distribution of the confounders $U$ be the same, whether or not we intervene on $X$, and regardless of the value imposed on or observed in $X$. Next comes the {\em interventional irrelevance} condition $\Sigma_X \mbox{$\,\perp\!\!\!\perp\,$} Z$, requiring that any intervention on $X$ has no consequence on $Z$, and the {\em interventional modularity} condition, $\Sigma_X \mbox{$\,\perp\!\!\!\perp\,$} Y \mid (X,U)$, asserting that once we are told the values of $X$ and $U$, the distribution of $Y$ no longer depends on the way the value of $X$ has arisen, whether observationally or through the intervention of interest. {\vspace{0.15cm} \noindent} Those independence relationships that involve the (non-stochastic) regime indicator should be interpreted in the light of the extended conditional independence calculus described by Constantinou et al \cite{constantinou2017}. The relationships between $\Sigma_X$ and the remaining variables, as depicted in Figure 1, characterize the influence of $X$ on $Y$, corresponding to the $X \rightarrow Y$ arrow, as causal. The remaining arrows in the graph, e.g.\ $Z \rightarrow X$, do not necessarily have to be interpreted as causal, which greatly expands the applicability of the method. {\vspace{0.15cm} \noindent} How realistic are the above assumptions? This is a crucial question, considering that all the above assumptions, except for instrumental relevance, are at best only indirectly testable, or corroborated on the basis of biological knowledge. Take, for example, the confounder independence assumption.
In our application, where the exposure is a low-level biological mark, it may be reasonable to assume that those genetic variants that operate in {\em cis} with respect to the studied protein exert no effect on common causal precursors of exposure and outcome other than effects mediated by the exposure. This assumption can be further corroborated by investigations based on eQTL data and on the known linkage disequilibrium (LD) pattern in the DNA region of interest. The assumption of confounder invariance requires more attention than it is usually given. In our application, for example, if the intervention represented by $\Sigma_X$ consisted of a particular diet, then confounder invariance would be violated, because a diet will hardly modify the level of the protein without altering a constellation of metabolites that act as potential confounders. Interventional irrelevance is defensible in our application by randomization arguments. As concerns interventional modularity, in our study this condition implies, in particular, that a unit increase in $X$ caused by one of the variants in the instrumental set should exert on $Y$ the same effect as a unit increase in $X$ caused by the intervention of interest. In our application, where the instrumental effects are regulatory and the intervention of interest consists of a pharmacological modification of $X$, interventional modularity appears to be a defensible assumption. {\vspace{0.15cm} \noindent} All the conditions defined above, except for exclusion-restriction, are required by our method. {\vspace{0.15cm} \noindent} Sometimes it is possible, and then helpful, to represent the qualitative structure of a statistical model by a directed acyclic graph \cite{Lauritzen1996}. A stripped-down representation of the class of MR models discussed in the present paper is shown in Figure 1.
All the conditions stated above (except for exclusion-restriction) can be read off the graph of Figure 1 by applying $d$-separation \cite{Geiger1990} or moralization \cite{Lauritzen1996}, with the following additional rules: {\em (i)} faithfulness \cite{Spirtes2001} of the $Z \rightarrow X$ edges (which means assuming that any distribution which follows the model only exhibits independence relations represented by the directed acyclic graph), {\em (ii)} assigning a value $x$ to $\Sigma_X$ implies the simultaneous assignment of the same value to $X$, and {\em (iii)} assigning a value $x$ to $\Sigma_X$ implies that all arrows into $X$ except for $\Sigma_X \rightarrow X$ are severed. Because most of the conditions introduced at the beginning of this section are not directly testable on the basis of the data, the Reader should be aware that graphs like the one shown in Figure 1 describe an {\em assumed}, ultimately uncertified, albeit plausible, state of affairs. We shall assume throughout the paper that the above described conditions, bar exclusion-restriction, are valid. {\vspace{0.15cm} \noindent} We conclude this section with a brief discussion of the exclusion-restriction assumption. This assumption (which is not required by our method) does not allow an instrument to exert an effect on $Y$ other than that exerted through the mediating effect of $X$. In our graph of Figure 1, this condition is violated by the $Z_J \rightarrow Y$ arrow. Because of this, the effect of instrument $Z_J$ on $Y$ is said to be ``pleiotropic''\xspace according to Figure 1. In the context of our application, pleiotropic effects may arise from two broad classes of mechanism. The first is due to the eQTL variants used as instruments being in linkage disequilibrium (LD) with eQTLs of nearby genes. The second is due to the instrumental variant exerting a causal effect on $Y$ through a pathway independent of $X$.
Although the former type of pleiotropy could, in principle, be neutralized by conditioning on the eQTLs in the region, except for the instrumental variants, the latter cannot be directly tested from the data. It would therefore be incautious to perform MR by using a method that does {\em not} allow for general types of pleiotropy. Our Bayesian approach deals with the problem by explicitly introducing the unknown pleiotropic effects in the model, and by treating them as unknown parameters to be estimated from the data. \begin{center} \begin{figure} \centering \caption{\footnotesize Graphical representation of a Mendelian randomization model for the analysis of unrelated individuals.} \scalebox{0.55}{ \begin{tikzpicture}[align=center,node distance=4.5cm] \node (Uchild) [fill=none,circle] {\LARGE unobserved}; \node (Ulab) [below of=Uchild,fill=none,node distance=1cm] {\LARGE $U$}; \node (Xchild) [below left of=Uchild,fill=none,rectangle,node distance=6cm] {\LARGE protein \\ \LARGE level}; \node (Xlabchild) [below of=Xchild,fill=none,node distance=1.2cm] {\LARGE $X$}; \node (Ychild) [right of=Xchild,fill=none, rectangle,node distance=9cm] {\LARGE affected by\\ \LARGE disease?}; \node (Ylabchild) [below of=Ychild,fill=none,node distance=1.2cm] {\LARGE $Y$}; \node (handle) [left of=Xchild,fill=none,node distance=3.7cm]{}; \node (ZchildJ) [below left of=handle,fill=none,rectangle] {\LARGE instrumental \\ \LARGE genotype}; \node (ZlabchildJ) [left of=ZchildJ,fill=none,node distance=2.6cm] {\LARGE $Z_J$}; \node (Zchild1) [above left of=handle,fill=none,rectangle] {\LARGE instrumental \\ \LARGE genotype}; \node (Zlabchild1) [left of=Zchild1,fill=none,node distance=2.6cm] {\LARGE $Z_1$}; \node (dot1) [below of=Zchild1,fill=none,node distance=1.4cm] {\huge $\ldots$}; \node (dot2) [below of=dot1,fill=none,node distance=1.2cm] {\huge $\ldots$}; \node (dot3) [below of=dot2,fill=none,node distance=1.2cm] {\huge $\ldots$}; \node (dot4) [below
of=dot3,fill=none,node distance=1.2cm] {\huge $\ldots$}; \node (F) [above of=Xchild,fill=none,node distance=4cm] {\LARGE $\Sigma_X$}; \draw[-triangle 45,bend left] (Uchild) to node {} (Ychild); \draw[-triangle 45, bend right] (Uchild) to node {} (Xchild); \draw[-triangle 45] (Zchild1) to node {} (Xchild); \draw[-triangle 45] (ZchildJ) to node {} (Xchild); \draw[-triangle 45] (Xchild) to node {} (Ychild); \draw[-triangle 45] (F) to node {} (Xchild); \draw[-triangle 45, bend right] (ZchildJ) to node [above] {} (Ychild); \end{tikzpicture} } \end{figure} \end{center} \subsection*{\Large Progressive Elaboration of the Model} \noindent A "naive" approach consists of analyzing the pedigree data by using the Bayesian MR model proposed by Berzuini and colleagues, as described in \cite{Berzuini2018a}, as if the individuals were independent. This will, of course, produce biased estimates. We shall use this "independence model" in a preliminary analysis of the data. We shall then step through a sequence of re-analyses of the data based on more elaborate, and more realistic, models, which we describe in the following. \vspace{2.5cm} \begin{center} {\large Independence Model} \end{center} \vspace{0.1cm} {\vspace{0.15cm} \noindent} The model of Berzuini and colleagues \cite{Berzuini2018a} assumes that individuals are independent, and that the $X$ variable has been standardized to have zero mean and unit standard deviation.
The data generating equations of the model conform to the conditional independence assumptions expressed in Figure 1, and take the form: \begin{eqnarray} \label{full1} P(U) &=& \mbox{N}(0,1),\\ \label{full2} P(X \mid Z_1, \ldots , Z_J, U) &=& \mbox{N}( \sum_{j=1}^J \alpha_j Z_j + \delta_X U, \sigma_X^2),\\ \label{full3} P(Y=1 \mid X, Z_1, \ldots , Z_J, U) &=& \mbox{logit}^{-1}(\omega_Y + \theta X + \sum_{j=1}^J \beta_j Z_j + U), \end{eqnarray} \noindent where $\mbox{N}(a,b)$ stands for a normal distribution with mean $a$ and variance $b$, the symbol $\alpha \equiv (\alpha_1, \ldots , \alpha_J)$ denotes the instrument (i)-exposure (e) associations and $\beta \equiv (\beta_1, \ldots , \beta_J)$ are the pleiotropic effects. The only difference from Berzuini et al.\ here is that the outcome variable $Y$ is no longer normal, but Bernoulli, as appropriate for a binary random variable. Recall that, in our study, some components of the $X$ vector (protein level measurements) are missing, which is not made explicit in the notation. The Bayesian inference engine identifies the missing components and treats them as unknown parameters, effectively integrating them out to obtain the posterior distribution for the parameters of inferential interest. Note that this way of dealing with missing data is more efficient than, say, imputing each missing component of $X$ on the basis of the individual's observed $Z$ values, thanks to the fact that, in our method, the missing values are estimated by using information about both $X$ and $Y$. {\vspace{0.15cm} \noindent} In the above equations, the causal effect of interest, denoted as $\theta$, represents the change in log-odds of probability of $Y=1$ caused by an interventional change of one standard deviation in $X$.
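The data generating equations above can be sketched as a small forward simulation. This is a hedged illustration only: the sample size, number of instruments and all parameter values below are invented for demonstration and are not estimates from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
N, J = 1000, 5                       # individuals and instruments (illustrative sizes)
alpha = rng.normal(0, 0.3, J)        # instrument-exposure associations
beta = np.zeros(J); beta[-1] = 0.2   # one pleiotropic instrument, as Z_J in Figure 1
theta, delta_x, sigma_x, omega_y = -0.2, 0.5, 1.0, -2.0  # illustrative values

Z = rng.binomial(2, 0.3, size=(N, J))   # allele doses in {0, 1, 2}
U = rng.normal(0, 1, N)                 # unobserved scalar confounder
X = Z @ alpha + delta_x * U + rng.normal(0, sigma_x, N)   # exposure model
p = 1 / (1 + np.exp(-(omega_y + theta * X + Z @ beta + U)))  # inverse logit
Y = rng.binomial(1, p)                  # binary disease indicator
```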
{\vspace{0.15cm} \noindent} As shown in \cite{Berzuini2018a} for the normal case, parameters $(\alpha, \sigma_X)$ are identified by the data, but the remaining parameters, including the causal effect of interest, $\theta$, are not. Berzuini and colleagues deal with the problem by a combination of two devices. The first consists of introducing the additional (untestable) assumption that each $j$th component of $\beta$ is a priori independent of the remaining parameters of the model, formally, $P(\beta_j \mid \alpha_j,\sigma_X) = P(\beta_j)$. This is called the Instrument Effects Orthogonality (IEO) condition. The second consists of introducing a proper, scientifically plausible, prior for $\beta$, which makes inferences possible by inducing on $\theta$ (and on further parameters of potential posterior interest) a proper posterior. \noindent As concerns the prior component of our Bayesian model, we invite the Reader to consult \cite{Berzuini2018a}. {\vspace{0.15cm} \noindent} Some variations have been introduced. While still imposing on the pleiotropic effects $\beta$ a horseshoe prior \cite{carvalho2010}, we are now using the enhanced version of this distribution proposed by Piironen and Vehtari \cite{Piironen2017}. Also, we take $\theta$ -- the causal effect of main inferential interest -- to have a Cauchy(0,2.5) prior, with the following justification. Because $X$ has been standardized to have mean 0 and unit standard deviation (SD), the above prior for $\theta$ deems it unlikely that a one-SD change in protein level causes a change in risk of disease exceeding 5 points on a logit scale, which corresponds to shifting a probability of disease occurrence from, say, 0.01, to 0.5, or from 0.5 to 0.99. This is also in agreement with current evidence on the effect of circulating proteins on disease \cite{Sun2018}.
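The logit-scale arithmetic behind this prior choice is easy to verify directly; 5 is a rounded figure for the exact difference, which is about 4.6:

```python
from math import log

def logit(p: float) -> float:
    """Log-odds of a probability p."""
    return log(p / (1 - p))

# A shift of roughly 5 logit points takes a disease probability
# from 0.01 to about 0.5, or, by symmetry, from 0.5 to about 0.99
print(logit(0.5) - logit(0.01))   # approximately 4.6
print(logit(0.99) - logit(0.5))   # approximately 4.6
```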
{\vspace{0.15cm} \noindent} Finally, we are now taking the i-e associations, $\alpha$, to be independently distributed according to a double-exponential distribution with mean 0 and unknown scale. One merit of this prior is to shrink the small effects to zero, which reduces the weak instrument bias, so that the model works with an adaptively selected subset of strong instruments. \vspace{0.3cm} \begin{center} {\large Introducing Kinship} \end{center} \noindent Treating members of a pedigree as independent individuals, which they are not, will produce overconfident and biased estimates. We remedy this by introducing in the model between-individual correlation in the form of the kinship matrix, which can be derived by a standard algorithm from the structure of the pedigree. We are currently working with a single, overarching, kinship matrix of size $N \times N$, where $N$ is the total number of individuals in the sample. This large matrix contains zeros corresponding to pairs of individuals in different families. The method could be made computationally more efficient by introducing family-specific matrices. Kinship information is introduced in the model by writing: \begin{eqnarray} \label{kinship1} P(Y \mid X, Z_1, \ldots , Z_J, U) &=& \mbox{Bernoulli}(\pi),\\ \mbox{logit}(\pi) &=& \mbox{MVN}(\mu, \Sigma),\\ \mu &=& \omega_Y + \theta X + \sum_{j=1}^J \beta_j Z_j + U, \end{eqnarray} \noindent where $\Sigma$ is the $N \times N$ kinship matrix and the notation $\mbox{MVN}(a,b)$ stands for a multivariate normal distribution with mean vector $a$ and variance-covariance matrix $b$.
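The kinship matrix mentioned above can be derived from the pedigree by the standard recursion; the self-contained sketch below assumes individuals are listed with parents before children and that founders are mutually unrelated:

```python
import numpy as np

def kinship(parents):
    """parents[i] = (father, mother) indices, or (None, None) for a founder;
    individuals must be ordered so that parents precede children."""
    n = len(parents)
    K = np.zeros((n, n))
    for i in range(n):
        f, m = parents[i]
        for j in range(i):
            # kinship of i with an earlier individual j, via the parents of i
            K[i, j] = K[j, i] = 0.0 if f is None else 0.5 * (K[f, j] + K[m, j])
        # self-kinship: 1/2 plus half the kinship between the two parents
        K[i, i] = 0.5 if f is None else 0.5 * (1 + K[f, m])
    return K

# Two unrelated founders and their two children (full siblings)
K = kinship([(None, None), (None, None), (0, 1), (0, 1)])
print(K[2, 0])  # parent-offspring kinship: 0.25
print(K[2, 3])  # full-sibling kinship: 0.25
```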
\vspace{0.3cm} \begin{center} {\large Introducing Family Effects} \end{center} {\vspace{0.15cm} \noindent} In our analysis, we incorporate family information simply by designating a categorical variable $F$ to indicate the individual's family, with $F \in \{1, \ldots, M\}$, where $M=12$, and by modifying the outcome and exposure models to take the following form: \begin{eqnarray*} \label{family1} P(X \mid Z_1, \ldots , Z_J, U, F) &=& \mbox{N}(\nu,\sigma_X^2),\\ \nu &=& \sum_{j=1}^J \alpha_j Z_j + \delta_X U + \sum_{f=1}^{M} I_{F=f} \; \gamma^X_f ,\\ \label{family2} P(Y \mid X, Z_1, \ldots , Z_J, U, F) &=& \mbox{Bernoulli}(\pi),\\ \mbox{logit}(\pi) &=& \mbox{MVN}(\mu, \Sigma),\\ \mu &=& \omega_Y + \theta X + \sum_{j=1}^J \beta_j Z_j + U + \sum_{f=1}^{M} I_{F=f}\; \gamma^Y_f, \end{eqnarray*} \noindent where $I_A$ stands for the indicator function, taking value 1 if the logical condition $A$ is true, and value 0 otherwise. The quantities $\gamma^X \equiv (\gamma^X_1, \ldots , \gamma^X_M)$ and $\gamma^Y \equiv (\gamma^Y_1, \ldots , \gamma^Y_M)$ are vectors of unknown "family effects", respectively on $X$ and on $Y$. In our analysis, we have imposed on these parameters independent and mildly informative priors, with greater spread than the prior for $\theta$. {\vspace{0.15cm} \noindent} The family indicator appears in the graph of Figure 2 with the symbol $F$. According to this graph, failure to condition on this indicator (that is, removing the $F$ variable from the model) "opens" (unblocks) the $Z \leftarrow F \rightarrow Y$ path, and the $Z \leftarrow F \rightarrow U$ path, in the terminology of $d$-separation. This means that failure to condition on family creates a spurious, exposure-unmediated, association between instrument and outcome and, worse still, violates the confounder independence assumption. Hence, inclusion of the family indicator in the model prevents the estimate of the causal effect from being unduly distorted.
In situations where the sample contains unrelated (in addition to related) individuals, the unrelateds may be lumped into a single, notional, family. \begin{center} \begin{figure} \centering \caption{\footnotesize Incorporating a family indicator variable ($F$).} \scalebox{0.55}{ \begin{tikzpicture}[align=center,node distance=4.5cm] \node (Uchild) [fill=none] {\LARGE unobserved}; \node (Ulab) [below of=Uchild,fill=none,node distance=1cm] {\LARGE $U$}; \node (family) [above of=Uchild, fill=none,node distance=4cm] {\LARGE Family\\ \LARGE indicator}; \node (Flab) [above of=family,fill=none,node distance=1cm] {\LARGE $F$}; \node (Xchild) [below left of=Uchild,fill=none,rectangle,node distance=6cm] {\LARGE protein \\ \LARGE level}; \node (Xlabchild) [below of=Xchild,fill=none,node distance=1.2cm] {\LARGE $X$}; \node (Ychild) [right of=Xchild,fill=none,node distance=9cm] {\LARGE affected by\\ \LARGE disease?}; \node (Ylabchild) [below of=Ychild,fill=none,node distance=1.2cm] {\LARGE $Y$}; \node (handle) [left of=Xchild,fill=none,node distance=3.7cm]{}; \node (Zchild) [left of=handle,fill=none,rectangle] {\LARGE instrumental \\ \LARGE genotypes}; \node (Zlabchild) [below of=Zchild,fill=none,node distance=1.2cm] {\LARGE $Z$}; \draw[-triangle 45, bend left] (Zchild) to node {} (family); \draw[-triangle 45,bend left] (Uchild) to node {} (Ychild); \draw[-triangle 45, bend right] (Uchild) to node {} (Xchild); \draw[-triangle 45] (Zchild) to node {} (Xchild); \draw[-triangle 45] (Xchild) to node {} (Ychild); \draw[-triangle 45] (family) to node {} (Uchild); \draw[-triangle 45,bend right] (family) to node {} (Xchild); \draw[-triangle 45, bend left] (family) to node {} (Ychild); \draw[-triangle 45, bend right] (Zchild) to node [above] {} (Ychild); \end{tikzpicture} } \end{figure} \end{center} \vspace{0.3cm} \begin{center} {\large Introducing Parental Protein Information} \end{center}
\noindent In this final elaboration step of the model we introduce information about the measured level of protein in the individual's parents. This is motivated by the assumption that there are unobserved loci in DNA, denoted by $Z'$, that (individually or collectively) have an effect on the protein of interest. The individual's protein level becomes associated with that of their parents through $Z'$, and, because of this, parental protein levels become additional candidate instruments in the analysis. We incorporate parental protein information simply by designating the continuous variables $P_M$ and $P_F$ to represent the measured level of circulating IL12A protein in the individual's mother and father, respectively, after standardizing them to have zero mean and unit variance. The two variables are incorporated in the exposure model by writing: \begin{eqnarray*} \label{protein} P(X \mid Z_1, \ldots , Z_J, U, F, P_M, P_F) &=& \mbox{N}(\nu,\sigma_X^2),\\ \nu &=& \sum_{j=1}^J \alpha_j Z_j + \delta_X U+ \sum_{f=1}^{M} I_{F=f} \; \gamma^X_f + \alpha^M P_M + \alpha^F P_F \end{eqnarray*} \noindent with $\alpha^M$ and $\alpha^F$ to be estimated from the data. It can be shown (but this is outside the scope of the present work) that the modification is valid provided we assume that $Z$ and $Z'$ are not correlated, and that $Z'$ does not influence $Y$ other than through changes in $X$. \section*{Results} \vspace{0.1cm} \begin{center} {\Large Results from Initial Model} \end{center} \noindent Estimates of the causal effect of the circulating level of IL12A on risk of MS were obtained by using {\tt R} package {\tt MendelianRandomization} \cite{yavorska2017}, as found on {\tt http://cran.r-project.org}. The frequentist causal effect estimates, expressed on a log-odds-ratio scale with their corresponding 95$\%$ confidence intervals, are summarised in Table 1. 
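The IVW method appearing in Table 1 admits a compact closed form; as a reference point, the sketch below implements the standard first-order IVW estimator on toy summary statistics (the numbers are invented for illustration and are not the IL12A data):

```python
import numpy as np

def ivw_estimate(beta_x, beta_y, se_y):
    """Inverse-variance-weighted (IVW) causal effect estimate.

    beta_x : instrument-exposure association estimates
    beta_y : instrument-outcome association estimates (log-odds scale)
    se_y   : standard errors of beta_y
    Returns (theta_hat, se_theta) under the standard first-order formulas.
    """
    beta_x, beta_y, se_y = map(np.asarray, (beta_x, beta_y, se_y))
    w = beta_x**2 / se_y**2          # inverse-variance weights of the ratio estimates
    ratio = beta_y / beta_x          # per-instrument Wald ratio estimates
    theta_hat = np.sum(w * ratio) / np.sum(w)
    se_theta = 1.0 / np.sqrt(np.sum(w))
    return theta_hat, se_theta

# Toy data: three instruments whose Wald ratios all equal -0.2.
theta_hat, se = ivw_estimate(
    beta_x=[0.30, 0.25, 0.40],
    beta_y=[-0.06, -0.05, -0.08],
    se_y=[0.02, 0.03, 0.02],
)
```

The {\tt mr\_ivw} routine of the {\tt MendelianRandomization} package computes this same quantity (optionally with penalization or robust regression, as in rows 4 through 7 of Table 1).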
Difficulties introduced by the missing IL12A values have been sidestepped in the simplest way: by discarding individuals who had a missing IL12A value when calculating the i-e associations. \begin{table}[ht] \centering \begin{tabular}{rlrrrrr} \hline & Method & Estimate & Std Error & \multicolumn{2}{c}{95\% confidence interval}& P-value \\ \hline 1 & Simple median & -0.30 & 0.15 & -0.59 & -0.02 & 0.04 \\ 2 & Weighted median & -0.07 & 0.14 & -0.34 & 0.20 & 0.61 \\ 3 & Penalized weighted median & -0.15 & 0.14 & -0.42 & 0.12 & 0.28 \\ 4 & IVW & -0.14 & 0.09 & -0.33 & 0.04 & 0.12 \\ 5 & Penalized IVW & -0.21 & 0.10 & -0.40 & -0.02 & 0.03 \\ 6 & Robust IVW & -0.21 & 0.12 & -0.44 & 0.02 & 0.08 \\ 7 & Penalized robust IVW & -0.23 & 0.10 & -0.42 & -0.04 & 0.02 \\ 8 & MR-Egger & 0.51 & 0.37 & -0.22 & 1.25 & 0.17 \\ 9 & Penalized MR-Egger & 0.51 & 0.37 & -0.22 & 1.25 & 0.17 \\ 10 & Robust MR-Egger & 0.51 & 1.01 & -1.48 & 2.50 & 0.61 \\ 11 & Penalized robust MR-Egger & 0.51 & 1.01 & -1.48 & 2.50 & 0.61 \\ \hline \end{tabular} \label{Table 1} \caption{\footnotesize Estimates for the causal effect of the circulating level of IL12A on risk of MS obtained by using {\tt R} package {\tt MendelianRandomization} ({\tt http://cran.r-project.org}). Estimated causal effects are expressed on a log-odds-ratio scale. } \end{table} {\vspace{0.15cm} \noindent} According to Table 1, estimates from the frequentist MR methods considered in this paper exhibit poor consistency. A significant estimate of the causal effect was obtained only with the Simple Median, the Penalized IVW and the Penalized Robust IVW methods, the latter two requiring the assumption of no pleiotropy. {\vspace{0.15cm} \noindent} The model by Berzuini and colleagues \cite{Berzuini2018a}, which also assumes sample individuals to be independent of each other (see Methods section), gave an estimated log-odds-ratio causal effect of -0.202, with a standard error of 0.078, and a 95$\%$ credible interval of -0.418 through -0.091. 
This result was obtained by treating the missing protein levels as additional unknown parameters to be estimated from the data. \vspace{2cm} \begin{center} {\Large Results after Introducing Kinship} \end{center} {\vspace{0.15cm} \noindent} Our frequentist analyses were repeated in a sounder fashion, by estimating the disease-instrument log-odds-ratio associations via a mixed-effects model (the {\tt lmekin} function in {\tt R}, as described in \cite{Pinheiro2000}) that allows family relationships between pedigree members, as expressed by the kinship matrix, to be taken into account \cite{Fazia2017}. Significant estimates were then obtained by using IVW ($\hat{\theta}=-0.18, p < 0.0001$) and the weighted median estimator (WME; $\hat{\theta}=-0.11, p = 0.012$), but not by using MR-Egger regression (MR-ER; $\hat{\theta}=-0.23, p = 0.7$). {\vspace{0.15cm} \noindent} By contrast, when we extended the model by Berzuini and colleagues to incorporate family relationships, as expressed by the kinship matrix (see Methods section), and used it to re-analyse the data, the estimated causal effect was no longer significant, as reported in Table 2. This was not unexpected, when one considers that between-individual correlation reduces the "effective" sample size and, as a consequence, statistical power. 
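The way the kinship matrix induces between-individual correlation can be sketched as follows: relatedness enters the model through a random effect whose covariance is proportional to (twice) the kinship matrix. The trio structure, variance values, and the specific parametrisation below are illustrative assumptions, not the exact specification used in our model:

```python
import numpy as np

# Kinship matrix for a hypothetical father-mother-child trio:
# kinship coefficient 0.5 on the diagonal, 0.25 parent-child,
# 0 between the (unrelated) parents.
K = np.array([
    [0.50, 0.00, 0.25],   # father
    [0.00, 0.50, 0.25],   # mother
    [0.25, 0.25, 0.50],   # child
])

sigma_g2 = 0.8   # variance of the kinship-structured (polygenic) effect
sigma_e2 = 0.2   # residual variance

# Covariance of individual-level random effects: Sigma = 2*sigma_g2*K + sigma_e2*I
# (2K is the expected genetic relationship matrix).
Sigma = 2.0 * sigma_g2 * K + sigma_e2 * np.eye(3)

# A draw of correlated effects, MVN(0, Sigma), enters the linear predictor;
# this is precisely what makes related individuals non-independent.
rng = np.random.default_rng(1)
u = rng.multivariate_normal(np.zeros(3), Sigma)
```

With all off-diagonal entries of $K$ set to zero, the model collapses back to the independent-individuals case, which is why ignoring kinship overstates the effective sample size.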
\newcommand{\ra}[1]{\renewcommand{\arraystretch}{#1}} \vspace{1cm} \begin{table*}[ht] \centering \ra{1.3} \begin{tabular}{lrrrrr} \rowcolor{lightgray} \vspace{-0.1cm} &\multicolumn{5}{c}{PERCENTILES OF POSTERIOR}\\ \rowcolor{lightgray} CAUSAL EFFECT OF 1SD CHANGE& \multicolumn{5}{c}{DISTRIBUTION}\\ \rowcolor{lightgray} IN PROTEIN LEVEL ON MS RISK& $5$ & $25$ & $50$ & $75$ & $95$\\ \midrule Causal Log Odds Ratio Effect& -0.91 & -0.59 & -0.39 & -0.17 & 0.10 \\ Causal Odds Ratio Effect& 0.4 & 0.55 & 0.67 & 0.84 & 1.1 \\ \end{tabular} \caption{\footnotesize Estimates for the causal effect of the circulating level of IL12A on risk of MS obtained by using an extension of the model by Berzuini and colleagues which incorporates family relationships, as expressed by the kinship matrix.} \end{table*} \vspace{2.9cm} \begin{center} {\Large Results after Introducing Family Effects} \end{center} \noindent In the Methods section we have seen that family (pedigree) membership may introduce bias in the estimated causal effect by acting as a confounder of the relationship between instrumental genotypes and outcome, in a way similar to what population stratification does. This is a consequence of the family variable being generally associated with both the individual's genetic set-up and with disease-linked unobserved factors (genetic variants, environment, education, and so on). See the Methods section for a more rigorous discussion of the issue. When we introduced both kinship information and the family variable (as a 12-level categorical factor) in the model, we got the causal effect estimate summarised in Table 3. 
\vspace{1cm} \begin{table*}[ht] \centering \ra{1.3} \begin{tabular}{lrrrrr} \rowcolor{lightgray} \vspace{-0.1cm} &\multicolumn{5}{c}{PERCENTILES OF POSTERIOR}\\ \rowcolor{lightgray} CAUSAL EFFECT OF 1SD CHANGE& \multicolumn{5}{c}{DISTRIBUTION}\\ \rowcolor{lightgray} IN PROTEIN LEVEL ON MS RISK& $5$ & $25$ & $50$ & $75$ & $95$\\ \midrule Causal Exposure Log Odds Ratio & -1.05 & -0.69 & -0.43 & -0.19 & 0.14\\ Causal Exposure Odds Ratio & 0.35 & 0.50 & 0.65 & 0.82 & 1.15 \\ \end{tabular} \caption{\footnotesize Estimated causal effect of IL12A protein level on MS, expressed on both a log-odds ratio and an odds ratio scale, as obtained by an analysis that incorporates both kinship information and the family indicator.} \end{table*} {\vspace{0.15cm} \noindent} A comparison with the preceding table shows that introduction of the family variable left the point estimate of the causal effect substantially unchanged, while widening the credible interval, with a consequent, further, reduction in statistical significance of the result. This is hardly surprising, when one considers that families 3 and 7 (out of our 12 families) impacted on both exposure and outcome with effects of the same sign, as described later in this section. This will inevitably inflate the association between exposure and outcome beyond the component of association due to a genuinely causal relationship. \vspace{1.6cm} \begin{center} {\Large Results from Final Model} \end{center} \noindent In addition to kinship and to the family indicator, our final model includes the measured parental levels of circulating IL12A protein, that is, the protein level in the mother and in the father. See the Methods section for technical details. This final elaboration increased the amount of instrumental information in the model, and produced the estimates summarized in Table 4. The point estimate for the causal effect of IL12A protein level on risk of MS was -0.49 on a log-odds ratio scale, and 0.61 on an odds-ratio scale. 
The corresponding 95$\%$ credible interval, also reported in Table 4, was entirely contained in the negative real axis, and included effect values of biological importance. \vspace{1cm} \begin{table*}[ht] \centering \ra{1.3} \begin{tabular}{lrrrrr} \rowcolor{lightgray} \vspace{-0.1cm} &\multicolumn{5}{c}{PERCENTILES OF POSTERIOR}\\ \rowcolor{lightgray} CAUSAL EFFECT OF 1SD CHANGE& \multicolumn{5}{c}{DISTRIBUTION}\\ \rowcolor{lightgray} IN PROTEIN LEVEL ON MS RISK& $5$ & $25$ & $50$ & $75$ & $95$\\ \toprule Causal Log Odds Ratio Effect of Exposure on Outcome& -1.12 & -0.71 & -0.49 & -0.29 & -0.1 \\ Causal Odds Ratio Effect of Exposure on Outcome & 0.33 & 0.49 & 0.61 & 0.75 & 0.90 \\ \end{tabular} \caption{\footnotesize Causal effect estimates from a model that incorporates kinship information, family indicator, and parental protein levels.} \end{table*} \vspace{1cm} \newcommand{\midheader}[2]{% \topmidheader{#1}{#2}} \newcommand\topmidheader[2]{\multicolumn{#1}{c}{\textsc{#2}}\\% \addlinespace[0.5ex]} \begin{table*}[ht] \centering \ra{1.3} \begin{tabular}{lrrrrr} \rowcolor{lightgray} \vspace{-0.1cm} & \multicolumn{5}{c}{PERCENTILES OF POSTERIOR}\\ \rowcolor{lightgray} FAMILY& \multicolumn{5}{c}{DISTRIBUTION OF EFFECT}\\ \rowcolor{lightgray} & $5$ & $25$ & $50$ & $75$ & $95$\\ \toprule \topmidheader{6}{\begin{minipage}{6.9cm} Family-specific {\textbf {Indirect}} Causal Effect on Risk of MS (Odds Ratio)\end{minipage}} family 2 & 0.77 & 0.92 & 0.99 & 1.04 & 1.18 \\ family 3 & 0.70 & 0.86 & 0.95 & 1.00 & 1.12 \\ family 4 & 0.83 & 0.97 & 1.03 & 1.15 & 1.44 \\ family 5 & 0.76 & 0.91 & 0.99 & 1.03 & 1.19 \\ family 6 & 0.82 & 0.97 & 1.02 & 1.11 & 1.39 \\ family 7 & 0.31 & 0.53 & 0.70 & 0.88 & 1.12 \\ family 8 & 0.85 & 0.96 & 1.01 & 1.08 & 1.30 \\ family 9 & 0.79 & 0.94 & 1.00 & 1.05 & 1.21 \\ family 10 & 0.82 & 0.96 & 1.02 & 1.11 & 1.35 \\ family 11 & 0.71 & 0.88 & 0.96 & 1.01 & 1.17 \\ family 12 & 0.76 & 0.91 & 0.98 & 1.03 & 1.21 \\ 
\midheader{6}{\vspace{0.3cm}\begin{minipage}{6.9cm} \vspace{0.4cm} Family-specific {\textbf{Direct}} Causal Effect on Risk of MS (Odds Ratio)\vspace{0.2cm}\end{minipage}\vspace{-0.3cm}} family 2 & 0.20 & 0.44 & 0.73 & 1.09 & 1.74 \\ family 3 & 0.47 & 0.86 & 1.28 & 1.90 & 3.33 \\ family 4 & 0.56 & 1.09 & 1.82 & 3.43 & 10.04 \\ family 5 & 0.33 & 0.66 & 1.00 & 1.49 & 2.70 \\ family 6 & 0.26 & 0.58 & 0.94 & 1.44 & 2.84 \\ family 7 & 0.53 & 0.97 & 1.58 & 2.65 & 5.67 \\ family 8 & 0.36 & 0.69 & 1.07 & 1.61 & 3.09 \\ family 9 & 0.27 & 0.54 & 0.81 & 1.21 & 2.23 \\ family 10 & 0.16 & 0.42 & 0.67 & 1.05 & 1.86 \\ family 11 & 0.58 & 1.04 & 1.59 & 2.39 & 4.28 \\ family 12 & 0.67 & 1.15 & 1.64 & 2.38 & 4.24 \\ \end{tabular} \caption{\footnotesize Additional results from our final model. For each of the 12 families represented in our data, this table reports the estimated effect that being a member of that family has on MS risk, distinguishing between the direct and the indirect ($=$mediated by changes in the level of circulating IL12A protein) components of the effect. See the rigorous definition of direct and indirect effects in the Methods section.} \end{table*} {\vspace{0.15cm} \noindent} Figures 3 through 5 summarize extra output of the analysis via our final model. These figures have been obtained by using the excellent {\tt bayesplot} package, written in the {\tt R} language by Jonah Gabry and colleagues \cite{Gabry2019}, as an aid to studying the output of {\tt Stan} analyses. {\vspace{0.15cm} \noindent} Figure 3 shows posterior intervals for the instrument-exposure associations, $\alpha$. It is apparent from the figure that a few instruments, e.g. instrument 49, stand out in terms of strength. The sparsity prior we have imposed on these effects is able to pick up the few "needles in the haystack" while downplaying the role of weaker instruments, thereby also working in the direction of a reduction of the weak-instrument bias. 
It might be interesting to investigate the strong instruments from a functional point of view. {\vspace{0.15cm} \noindent} Figure 4 shows posterior intervals for familial effects on outcome, which we call "direct" because they are not mediated by the exposure. One may wish to interpret these as familial effects mediated by IL12A-independent pathways, environment and lifestyle. The figure highlights some families (e.g., family 12) as characterized by a higher risk of MS, compared with the others. In a separate work we investigate in detail the factors responsible for such differences. Other families (e.g., family 2) appear to be "protected" from MS by factors other than IL12A. {\vspace{0.15cm} \noindent} Figure 5 shows posterior intervals for familial effects on outcome, which we call "indirect" because they are mediated by the exposure. They are calculated by including in the model a parameter defined to represent the product of the $F \rightarrow X$ effect and the $X \rightarrow Y$ effect. The posterior distribution for this parameter gets sampled by the MCMC inference engine. The samples are then used to calculate the posterior mean and credible interval. A comparison between Figures 4 and 5 suggests that in certain families, e.g. family 4 in our sample, both the direct and the indirect effects operate deleteriously, whereas in others, e.g. family 7, the two effects tend to cancel each other. 
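The draw-by-draw computation of an indirect family effect can be sketched as follows. The posterior draws here are synthetic stand-ins for actual MCMC output, and the parameter values are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for MCMC draws from the fitted model:
S = 4000
theta_draws   = rng.normal(-0.49, 0.25, size=S)  # X -> Y effect (log-odds scale)
gammaX_draws  = rng.normal(0.30, 0.10, size=S)   # F -> X effect for one family

# Indirect effect of family membership = (F -> X effect) * (X -> Y effect),
# computed draw by draw so posterior uncertainty propagates through the product.
indirect_log_or = gammaX_draws * theta_draws
indirect_or = np.exp(indirect_log_or)

# Posterior summaries on the odds-ratio scale, as in the family tables.
percentiles = np.percentile(indirect_or, [5, 25, 50, 75, 95])
```

In the actual analysis this product is declared as a generated quantity inside the model, so that {\tt Stan} returns its draws directly alongside those of the other parameters.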
\begin{figure}[!h] \label{Figure 3} \caption{\footnotesize Estimated posterior intervals for the instrument-exposure associations, $\alpha$, based on our analysis with the final, complete, Bayesian model, that includes kinship information, family variable and parental protein levels.} \vspace{-0.4cm} \begin{center} \scalebox{1.4}{ \includegraphics[width=0.45\textwidth]{Alphax.pdf} } \end{center} \end{figure} \begin{figure}[!h] \label{Figure 4} \vspace{-0.4cm} \begin{center} \scalebox{1.4}{ \includegraphics[width=0.45\textwidth]{DirectFamily.pdf} } \end{center} \caption{\footnotesize Estimated direct causal effects of family membership on risk of MS, expressed on an odds-ratio scale, based on our analysis with the final, complete, Bayesian model, that includes kinship information, family indicator and parental protein levels.} \end{figure} \begin{figure}[!h] \label{Figure 5} \vspace{-0.4cm} \begin{center} \scalebox{1.4}{ \includegraphics[width=0.45\textwidth]{IndirectFamily.pdf} } \end{center} \caption{\footnotesize Estimated indirect causal effects of family membership on risk of MS, expressed on an odds-ratio scale, based on our analysis with the final, complete, Bayesian model, that includes kinship information, family indicator and parental protein levels. We use the term "indirect" to signify the effect on MS risk exerted by membership to a particular family through the mediation of IL12A plasma level.} \end{figure} {\vspace{0.15cm} \noindent} We calculated posterior predictive check diagnostics based on discrepancies between (a continuous approximation of) the observed outcome variable distribution and the corresponding distribution generated from the posterior values of the unknown parameters of this final model. No signal of model misfit has been found (see Supplementary Material). 
\begin{figure}[!h] \label{Pathway} \vspace{0.4cm} \begin{center} \scalebox{1.8}{ \includegraphics[width=0.45\textwidth]{pathwayclean.PNG} } \end{center} \caption{\footnotesize IL12 family cytokines as a putative immunological link between IL12-A and MS.} \end{figure} {\vspace{0.15cm} \noindent} Gene IL12A (p35), together with gene IL12B (p40), encodes Interleukin 12 (abbreviated: IL12). IL12 is a pro-inflammatory cytokine, produced mainly by antigen presenting cells (abbreviated: APCs). It acts as an immunological playmaker by inducing Th1 cell differentiation from CD4$+$ naive T cells, as well as production of interferon $\gamma$ (abbreviated: IFN-$\gamma$) and tumor necrosis factor-alpha (abbreviated: TNF-$\alpha$) by T cells and natural killer (abbreviated: NK) cells \cite{Aslani2017}. A diagrammatic picture of the relevant pathway is shown in Figure 6. The hypothesised causal effect of IL12A on risk of MS might be mediated by the encoding of IL12 and the subsequent IL12-induced production of IFN-$\gamma$. In fact, IFN-$\gamma$ is a major cytokine found in MS lesions, and it has been found that its levels are greatly increased during MS activity \cite{Lees2007}. IL12-induced IFN-$\gamma$ production is key to the induction and proliferation of Th1 immune responses. {\vspace{0.15cm} \noindent} Furthermore, in murine models, IL12 has been shown to induce Substance P (SP) precursor mRNA in macrophages via the STAT4 pathway \cite{Arsenescu2005}, and NK1R expression in T cells upon stimulation by both IL12 and IL18, via NF$\kappa$B \cite{Weinstock2003}. SP has a demonstrated role in neuroimmune, autoimmune and inflammatory conditions, including MS \cite{Oconnor2004, Kostyk1989}. But while IL12 and IL23 are pro-inflammatory cytokines, IL27 and IL35 are inhibitory cytokines. Clearly, then, their immune balance is crucial for the modulation of immune function. 
\section*{\Large Discussion} \noindent We have extended the Bayesian MR framework of Berzuini and colleagues \cite{Berzuini2018a} for use in the analysis of pedigree data. MR has only rarely been applied to this class of data. Also, MR has been most frequently applied to the study of high-level exposures, such as obesity \cite{Conde2018,Mariosa2019}, whereas our illustrative application deals with a molecular exposure. Some researchers appear confident that standard MR methods work equally well with molecular exposures, such as transcripts and proteins. Our early experiences in this area do not entirely corroborate this optimism, one reason being the intrinsic paucity of instruments at a molecular level. Although public bioinformatic repositories are sprawling with data, the number of available instruments for the analysis of causality at a molecular level is generally, and inevitably, small, due to the intrinsic nature of the studied mechanism. This makes MR analyses extremely vulnerable to the presence of confounding, not least because of possible, untestable, violations of the confounder independence assumption. MR analysis of pedigree data (as opposed to samples of unrelateds) promises robustness to confounding, and, for this reason, it presents itself as a useful tool for dealing with the information weakness we encounter in the study of causality at a molecular level. Motivated by these considerations, we have extended MR to work with pedigree data. {\vspace{0.15cm} \noindent} Results of our illustrative study point to the circulating level of protein IL12A as a potential cause of MS. While unexciting from a statistical significance viewpoint, our results match existing biological evidence. Interleukin 12 (IL12) is a pro-inflammatory cytokine, produced mainly by Antigen Presenting Cells (APCs). IL12 is a heterodimeric cytokine encoded by two separate genes, IL-12A (p35) and IL-12B (p40). 
It acts as an immunological playmaker by inducing Th1 cell differentiation from CD4+ naive T cells, as well as production of interferon $\gamma$ (IFN-$\gamma$) and tumor necrosis factor-alpha (TNF-$\alpha$) by T cells and natural killer (NK) cells \cite{Aslani2017, Katan1986}. IFN-$\gamma$ is a major cytokine found in MS lesions, and its levels are greatly increased during MS activity \cite{lees2007little}. IFN-$\gamma$ production induced by IL12 is key to the induction and proliferation of Th1 immune responses. Furthermore, in murine models, IL-12 has been shown to induce precursor mRNA of Substance P (SP) in macrophages from the spleen and sites of inflammation, via the STAT4 pathway \cite{Arsenescu2005}. In addition, stimulation by both IL-12 and IL-18 induces NK1R expression on T cells via the NF$\kappa$B pathway \cite{Weinstock2003}. SP has a demonstrated role in neuroimmune, autoimmune and inflammatory conditions, including MS \cite{Oconnor2004, Kostyk1989}. As shown in Figure 6, IL-12 and IL-23 are pro-inflammatory cytokines, whereas IL-27 and IL-35 are inhibitory cytokines. Clearly, then, the immune balance of all the cytokines involved is crucial for the modulation of immune function, and compensatory mechanisms can play a strategic role; this may explain the negative sign of the causal effect we found, which is in contradiction with the expected increase of MS risk induced by IL12A. {\vspace{0.15cm} \noindent} From a statistical viewpoint, our IL12A data analysis illustrates a few important points. Firstly, because introduction of kinship information in the model accounts for the reduction in the number of "effective" individuals due to family correlation, it may result in an increased posterior uncertainty about the causal effect, with a reduction of evidence against the null causal hypothesis. 
Introduction of the family indicator may have a similar effect on the causal estimate, that of a greater posterior uncertainty, with a consequent further reduction of evidence of causality. Recall that family membership is a potential instrument-outcome confounder. The increase in posterior uncertainty consequent to introduction of the family indicator may thus be interpreted as an effect of the de-biasing. Our results suggest that our elaborations of the models tend to avoid over-optimistic results, which we believe to work in the direction of a healthier science. Parental protein information, introduced at the last model elaboration step, acted as instrumental information, which resulted in an increase of evidence of causality. {\vspace{0.15cm} \noindent} MR has been traditionally applied to data from unrelated individuals. This is a pity, because MR analysis of family data is inherently more robust to population stratification and heterogeneity than analysis of unrelateds. We believe this property helps disentangle inheritable from environmental effects. A potentially fruitful idea is to collect data from unrelated individuals and then to collect further data from the parents of those individuals, for a joint analysis of the two data sources. Such a joint analysis can be performed via our proposed approach by treating parent-child triads as "families". Or one could use information from previous analyses of unrelateds in order to shape informative priors for an analysis of pedigree data along our proposed lines. Pedigree analysis might prove an invaluable tool for studying disease mechanism peculiarities of small, possibly native and isolated, populations. We are, in particular, thinking of small populations characterized by maverick disease patterns, that suffer from inadequate attention from the medical research community, perhaps outside the western "white" world. 
{\vspace{0.15cm} \noindent} Finally, on a more methodological note, we would emphasize the flexibility of an MCMC-powered Bayesian approach to MR, notably the possibility of straightforwardly elaborating the basic MR model to accommodate extra relevant information, and of handling missing information in a principled way. {\vspace{0.15cm} \noindent} We are at present working on an extension of the models discussed here to incorporate haplotype information. \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} It was Calabi \cite{Calabi:almcpl6} who first recognised the rich geometry that can be found on a hypersurface of \( \mathbb R^7 \) when the latter is equipped with its natural cross product and \( \mathrm{G}_2 \)-structure. The realization, much later, of metrics with holonomy \emph{equal} to \( \mathrm{G}_2 \) allowed this theory to be extended, whilst retaining the key features of the ``Euclidean'' theory. The second fundamental form or Weingarten map \( W \) of a hypersurface \( Y \) in a manifold \( X \) with holonomy \( \mathrm{G}_2 \) can be identified with the intrinsic torsion of the associated \( \mathrm{SU}(3) \)-structure. The latter is defined by a 2-form \( \omega \) and a \( 3 \)-form \( \gamma \) induced on \( Y \), and \( W \) is determined by their exterior derivatives. The symmetry of \( W \) translates into a constraint on the intrinsic torsion (equivalently, on \( d\omega \) and \( d\gamma \)) that renders the \( \mathrm{SU}(3) \)-structure what is called \emph{half flat}. Conversely, a \( 6 \)-manifold \( Y \) with an \( \mathrm{SU}(3) \)-structure that is half flat can (at least if it is real analytic) be embedded in a manifold with holonomy \( \mathrm{G}_2 \) \cite{Bryant:emb}. The metric \( g \) on \( X \) is found by solving a system of evolution equations that Hitchin \cite{Hitchin:stable} interpreted as Hamilton's equations relative to a symplectic structure defined (roughly speaking) on the space parametrising the pairs \( (\omega,\gamma) \). The simplest instance of this construction occurs when \( Y \) is a so-called \emph{nearly-K\"ahler} space, in which case \( g \) is a conical metric over \( Y \), in accordance with a more general scheme described by B\"ar \cite{Baer:spinor}. The first explicit metrics known to have holonomy equal to \( \mathrm{G}_2 \) were realized in this way. 
In this paper, we are concerned with the classification of left-invariant half-flat \( \mathrm{SU}(3) \)-structures on \( S^3\times S^3 \), regarded as a Lie group \( G \), up to an obvious notion of equivalence. One of these structures is the nearly-K{\"a}hler one that can be found on \( G\times G \), for any compact simple Lie group \( G \), by realizing the product as the 3-symmetric space \( (G\times G\times G)\slash G \). Indeed, we verify that this nearly-K\"ahler structure is unique amongst invariant \( \mathrm{SU}(3) \)-structures on \( S^3\times S^3 \) (see Proposition~\ref{prop:nKunique}, which has a dynamic counterpart in Proposition~\ref{prop:NKunique}). Examples of the resulting evolution equations for \( \mathrm{G}_2 \)-metrics have been much studied in the literature \cite{Brandhuber-al:G2,Cvetic-al:M3-G2,Cvetic-al:conifold}, but one of our aims is to highlight those \( \mathrm{G}_2 \)-metrics that arise from half-flat metrics with specific intrinsic torsion, motivated in part by the approach in \cite{Butruille:W14}. Nearly-K\"ahler corresponds to Gray-Hervella class \( \mathcal W_1 \), and it turns out that a useful generalization in our half-flat context consists of those metrics of class \( \mathcal W_1+\mathcal W_3 \); see Section~\ref{sec:symgrp}. By careful choices of the coefficients in \( \omega \) and \( \gamma \), we obtain metrics on \( S^3\times S^3 \) of the same class with zero scalar curvature. Another aim is to develop rigorously the algebraic structure of the space of invariant half-flat structures on \( S^3\times S^3 \), and in Section~\ref{sec:para} we show that the moduli space they define is essentially a finite-dimensional symplectic quotient. This is a description expected from \cite{Hitchin:stable}, and in our treatment relies on elementary matrix theory. 
For example, the \( 2 \)-form \( \omega \) can be represented by a \( 3\times3 \) matrix \( P \), and mapping \( \omega \) to the 4-form \( \delta=\omega^2=\omega\wedge\omega \) corresponds to mapping \( P \) to the transpose of its adjugate. We shall however choose to use a pair of symmetric \( 4\times4 \) matrices \( (Q,P) \) to parametrise the pair \( (\gamma,\omega) \). The matrix algebra is put to use in Section~\ref{sec:flow} to simplify and interpret the flow equations for the associated Ricci-flat metrics with holonomy \( \mathrm{G}_2 \). The significance of the class \( \mathcal W_1+\mathcal W_3 \) becomes clearer in the evolutionary setting, as it generates known \( \mathrm{G}_2 \)-metrics. In our formulation, the equations (for example in Corollary~\ref{cor:flow}) have features in common with two quite different systems considered in \cite{FeraPontov-al:Painleve} and \cite{Dancer-W:painleve}, but both in connection with Painlev\'e equations. A more thorough analysis of classes of solutions giving rise to \( \mathrm{G}_2 \)-metrics is carried out in Section~\ref{sec:further}. Some of these exhibit the now familiar phenomenon of metrics that are asymptotically circle bundles over a cone (``ABC metrics''). All our \( \mathrm{G}_2 \)-metrics are of course of cohomogeneity one, and this allows us to briefly relate our approach to that of \cite{Dancer-W:superpot}. In the final part of the paper, we present the tip of the iceberg that represents a numerical study of Hitchin's evolution equations for \( S^3\times S^3 \). We recover metrics that behave asymptotically locally conically when \( Q \) belongs to a fixed \( 2 \)-dimensional subspace. More precisely, we show empirically that the planar solutions are divided into two classes, only one of which is of type ABC. 
This can be understood in terms of the normalization condition that asserts that \( \omega \) and \( \gamma \) generate the same volume form, and is a worthwhile topic for further theoretical study. For the generic case, the flow solutions do not have tractable asymptotic behaviour, but again the geometry of the solution curves (illustrated in Figure~\ref{fig:G2sol3D}) is constrained by the normalization condition that defines a cubic surface in space. This paper grew out of an attempt to reconcile various contributions appearing in the literature. Of particular importance concerning \( \mathrm{SU}(3) \)-structures are Schulte-Hengesbach's classifications of half-flat structures \cite[Theorem 1.4, Chapter 5]{Hengesbach:phd}, and Hitchin's notion of stable forms \cite{Hitchin:stable}. In addition, the explicit constructions of \( \mathrm{G}_2 \)-metrics appearing in this paper are based on the work of Brandhuber et al, Cveti{\v{c}} et al \cite{Brandhuber-al:G2,Cvetic-al:M3-G2,Cvetic-al:conifold}, as well as the contributions of Dancer and Wang \cite{Dancer-W:painleve}. \section{Invariant \boldmath{\(\mathrm{SU}(3)\)}-structures} \label{sec:symgrp} Throughout the paper \( M \) will denote the \( 6 \)-manifold \( S^3\times S^3 \). As this is a Lie group, we can trivialise the tangent bundle. We describe left-invariant tensors via the identification \[ TM\cong M\times \mathfrak{so}(4)\cong M\times\mathbb R^6, \] relative to left multiplication. 
We keep in mind that there are Lie algebra isomorphisms \[ \mathfrak{su}(2)\oplus\mathfrak{su}(2)\cong\mathfrak{so}(3)\oplus\mathfrak{so}(3)\cong\mathfrak{so}(4), \] which at the group level can be phrased in terms of the diagram \begin{equation} \label{eq:grpseq} \begin{diagram} \node{\mathrm{SU}(2)^2} \arrow{e,l}{2:1} \arrow{se,b} {4:1} \node {\mathrm{SO}(4)} \arrow{s,r} {2:1} \\ \node{} \node { \mathrm{SO}(3)^2} \end{diagram} \end{equation} The cotangent space of \( M \), at the identity, consists of two copies of \( \mathfrak{su}(2)^* \). We shall write \( T^*=T_1^*M=A\oplus B \) and choose bases \( e^1,e^3,e^5 \) of \( A \) and \( e^2,e^4,e^6 \) of \( B \) such that \begin{equation} \label{eq:stdbasis} de^1=e^{35},\, de^2=e^{46}, \textrm{ and so forth}; \end{equation} here \( d \) denotes the exterior differential on \( A \) and \( B \) induced by the Lie bracket. We wish to endow \( M \) with an \( \mathrm{SU}(3) \)-structure. To this end it suffices to specify a suitable pair of real forms: a \( 3 \)-form \( \gamma \), whose stabiliser (up to a \( \mathbb Z\slash2 \)-covering) is isomorphic to \( \mathrm{SL}(3,\mathbb C) \), and a non-degenerate real \( 4 \)-form \( \delta=\omega\wedge\omega=\omega^2 \). These two forms must be compatible in certain ways. Above all, \( \gamma \) must be a \emph{primitive} form relative to \( \omega \), meaning \( \gamma \wedge\omega=0 \). So as to obtain a genuine almost Hermitian structure we also ask for volume matching and positive definiteness: \begin{equation} \label{eq:comp-halfflat} 3\gamma\wedge\hat\gamma=2\omega^3,\quad \omega(\cdot,J\cdot)>0. \end{equation} These forms \( \gamma \) and \( \delta \) are \emph{stable} in the sense their orbits under \( \mathrm{GL}(6,\mathbb R) \) are open in \( \Lambda^kT^* \). The following well known properties (cf. 
\cite{Hitchin:stable}, and \cite{Reichel:3forms,Westwick:3forms} for the study of \( 3 \)-forms) of stable forms will be used in the sequel: \begin{enumerate} \item There are two types of stable \( 3 \)-forms on \( T \). These are distinguished by the sign of a suitable quartic invariant, \( \lambda \), which is negative precisely when the stabiliser is \( \mathrm{SL}(3,\mathbb C) \) (up to \( \mathbb Z\slash2 \)); each form of this latter type determines an almost complex structure \( J \). \item The stable forms \( \delta \) and \( \gamma \) determine ``dual'' stable forms: \( \delta \) determines the stable \( 2 \)-form \( \pm\omega \), and \( \gamma \) determines the \( 3 \)-form \( \hat\gamma=J(\gamma) \) characterised by the condition that \( \gamma+i\hat\gamma \) be of type \( (3,0) \). \end{enumerate} As \( \mathrm{SU}(3) \)-modules \( \Lambda^kT^* \) decomposes in the following manner: \begin{equation} \begin{gathered} T^*\cong[\![\Lambda^{1,0}]\!]\cong\Lambda^5T^*,\\ \Lambda^2T^*\cong[\![\Lambda^{2,0}]\!]\oplus[\Lambda^{1,1}_0]\oplus\mathbb R\cong\Lambda^4T^*,\\ \Lambda^3T^*\cong[\![\Lambda^{3,0}]\!]\oplus[\![\Lambda^{2,1}_0]\!]\oplus[\![\Lambda^{1,0}]\!], \end{gathered} \end{equation} using the bracket notation of \cite{Sal:Redbook}. In terms of this decomposition (see \cite{Bedulli-V:SU3}), the exterior derivatives of \( \gamma,\omega \) may now be expressed as \begin{equation*} \begin{cases} d\omega=-\frac32w_1\gamma+\frac32\hat{w}_1\hat\gamma+w_4\wedge \omega+w_3,\\ d\gamma=\hat w_1\omega^2+w_5\wedge\gamma+w_2\wedge\omega,\\ d\hat\gamma=w_1\omega^2+(Jw_5)\wedge\gamma+\hat{w}_2\wedge\omega, \end{cases} \end{equation*} where we have used a suggestive notation to indicate the relation between forms and the intrinsic torsion \( \tau \), i.e., the failure of \( \Hol(\NB^{\textup{LC}}) \) to reduce to \( \mathrm{SU}(3) \). Obviously, this expression depends on our specific choice of normalisation (cf.~\eqref{eq:comp-halfflat}). 
Generally, \( \tau \) takes values in the \( 42 \)-dimensional space \[ T^*\otimes\mathfrak{su}(3)^\perp\cong\mathcal W_1\oplus\mathcal W_2\oplus\mathcal W_3\oplus\mathcal W_4\oplus\mathcal W_5. \] Our main focus, however, is to study the subclass of \emph{half-flat \( \mathrm{SU}(3) \)-structures}: these are characterised by the vanishing of \( \hat w_1, w_2,w_4 \), and \( w_5 \), i.e., \begin{equation*} \begin{cases} d\omega=-\frac32w_1\gamma+w_3,\\ d\gamma=0,\\ d\hat\gamma=w_1\omega^2+\hat w_2\wedge\omega. \end{cases} \end{equation*} \begin{remark} To appreciate the terminology ``half flat'', it helps to count dimensions: \( \dim\mathcal W_1=2 \), \( \dim\mathcal W_2=16 \), \( \dim \mathcal W_3=12 \), \( \dim\mathcal W_4=6=\dim\mathcal W_5 \). In particular, observe that for half-flat structures \( \tau \) is restricted to take its values in \( 21 \) dimensions out of \( 42 \) possible. In this context, ``flat'' would mean \emph{\( \mathrm{SU}(3) \) holonomy}. \end{remark} For emphasis, we formulate: \begin{proposition} \label{prop:intr_space} For any invariant half-flat \( \mathrm{SU}(3) \)-structure \( (\omega,\gamma) \) on \( M \) the following holds: \begin{compactenum} \item if \( \mathcal W_3=0 \) then \( d\omega=-\frac32w_1\gamma \). \item if \( \mathcal W_2^-=0 \) then \( d\hat\gamma=w_1\omega^2 \). \end{compactenum} In particular, any structure with vanishing \( \mathcal W_3 \) component has \( [\gamma]=0\in H^3(M) \). \qed \end{proposition} In the case when \( \mathcal W_3=0 \) we shall say the half-flat structure is \emph{coupled}. The second case above, \( \mathcal W_2^-=0 \), is referred to as \emph{co-coupled}. When the half-flat structure is both coupled and co-coupled, so \( \mathcal W_2^-=0=\mathcal W_3 \), it is said to be \emph{nearly-K\"ahler}. 
\paragraph{Examples of type \boldmath{\( \mathcal W_1+\mathcal W_3 \)}.} As the next two examples illustrate, it is not difficult to construct half-flat structures of type \( \mathcal W_1+\mathcal W_3 \). \begin{example} \label{ex:W1W3} In this example we fix a non-zero real number \( a\in\mathbb R^* \) and consider the pair of forms \( (\omega,\gamma) \) given by: \begin{equation*} \begin{cases} \omega=-\frac34\alpha a\left(e^{12}+e^{34}+e^{56}\right),\\ \gamma=a(e^{135}-e^{246})+\frac12a\left(e^{352}-e^{146}+e^{514}-e^{362}+e^{136}-e^{524}\right), \end{cases} \end{equation*} where \( \alpha \) is defined via the relation \begin{equation*} \frac{a\alpha^3}{2\sqrt{3}}=\frac49. \end{equation*} Clearly, \( d(\omega^2)=0 \) and \( d\gamma=0 \). A calculation shows \( \lambda=-\frac{27}{16}a^4 \) so that \[ \sqrt{-\lambda}=\frac{3\sqrt{3}}4a^2. \] The \( 3 \)-form \( \hat{\gamma} \) is given by \begin{equation*} \hat{\gamma}=-\frac{\sqrt{3}}2a\left(e^{352}+e^{146}+e^{514}+e^{362}+e^{136}+e^{524}\right). \end{equation*} Note that the following normalisation condition is satisfied: \begin{equation*} \frac23\omega^3=-\frac{27\alpha^3a^3}{16}e^{123456}=-\frac{9\alpha^3}{4}\frac{3a^3}4e^{123456}=-\frac{3\sqrt{3}a^2}2e^{123456}=\gamma\wedge\hat\gamma. \end{equation*} In order to verify that the intrinsic torsion is of type \( \mathcal W_1+\mathcal W_3 \), we calculate the exterior derivatives of \( \omega \), \(\gamma \), and \( \hat\gamma \): \begin{equation*} \begin{cases} d\omega=-\frac32\alpha\gamma+\frac32\alpha a(e^{135}-e^{246}),\\ d\gamma=0,\\ d\hat\gamma=\alpha\omega^2. \end{cases} \end{equation*} Finally, note that the associated metric is given by \[ g=\frac{\sqrt{3}}2\alpha a\sum_{i=1}^3\left(e^{2i-1}\otimes e^{2i-1}+e^{2i}\otimes e^{2i}+\frac12(e^{2i-1}\otimes e^{2i}+e^{2i}\otimes e^{2i-1})\right),\] and one finds that the scalar curvature is positive: \( \mathpzc{s}=\frac4{\sqrt{3}\alpha a}=\frac32\alpha^2 \). 
\end{example} \begin{example}[Zero scalar curvature metric] \label{ex:W1W3s0} Consider the following pair of stable forms: \begin{equation*} \begin{cases} \omega=a\left(e^{12}+e^{34}+e^{56}\right),\\ \gamma=\sqrt{5}b(e^{135}-e^{246})+b\left(e^{352}-e^{146}+e^{514}-e^{362}+e^{136}-e^{524}\right), \end{cases} \end{equation*} We find that \( \lambda=-8(1+\sqrt{5})b^4 \), and the \( 3 \)-form \( \hat{\gamma} \) is given by \begin{equation*} \begin{split} - {\sqrt{-\lambda}} \hat{\gamma}&=2(\sqrt{5}-1)b^3(e^{135}+e^{246})\\ &\qquad+2(3+\sqrt{5})b^3\left(e^{352}+e^{146}+e^{514}+e^{362}+e^{136}+e^{524}\right). \end{split} \end{equation*} The normalisation condition then reads \[ a^3=-\sqrt{2(1+\sqrt{5})}b^2. \] The associated metric takes the form \[ g=-\frac{2ab^2}{\sqrt{-\lambda}}\sum_{i=1}^3\left((1+\sqrt{5})(e^{2i-1}\otimes e^{2i-1}+e^{2i}\otimes e^{2i})+2(e^{2i-1}\otimes e^{2i}+e^{2i}\otimes e^{2i-1})\right).\] In this case one finds that the scalar curvature is zero. \end{example} \begin{remark}[Group contractions] The author of \cite{Conti:SU3} uses Lie algebra degenerations to study invariant hypo \( \mathrm{SU}(2) \)-structures on \( 5 \)-dimensional nilmanifolds. In a similar way, one could study half-flat structures on the various group contractions of \( S^3\times S^3 \) like \( S^3\times N^3 \), where \( N^3 \) is a compact quotient of the Heisenberg group. (See \cite{Chong-al:G2contr} for partial studies of such contractions). \end{remark} \section{Parametrising invariant half-flat structures} \label{sec:para} The invariant half-flat structures on \( M \) can be described in terms of symmetric matrices. In order to do this, we recall the local identifications \eqref{eq:grpseq} and set \( U=\mathbb R^{3,3} \), the space of real \( 3\times3 \) matrices, and \( V=S^2_0(\mathbb R^4) \), the space of real symmetric trace-free \( 4\times4 \) matrices. 
There is a well known correspondence between \( U \) and \( V \); a fact which is for example used in the description of the trace-free Ricci-tensor \( \Ric_0\in\Lambda^2_+\otimes\Lambda^2_- \) on a Riemannian \( 4 \)-manifold. \begin{lemma} \label{lem:equiv-iso} There is an equivariant isomorphism \( U\to V \) which maps a \( 3 \times 3 \) matrix \( K=(k_{ij}) \) to the matrix \begin{gather*} \left(\begin{array}{cccc} -k_{11}-k_{22}-k_{33}&k_{23}-k_{32}&-k_{13}+k_{31}&k_{12}-k_{21}\\ k_{23}-k_{32}&-k_{11}+k_{22}+k_{33}&-k_{12}-k_{21}&-k_{13}-k_{31}\\ -k_{13}+k_{31}&-k_{12}-k_{21}&k_{11}-k_{22}+k_{33}&-k_{23}-k_{32}\\ k_{12}-k_{21}&-k_{13}-k_{31}&-k_{23}-k_{32}&k_{11}+k_{22}-k_{33} \end{array}\right). \end{gather*} \end{lemma} \begin{proof} By fixing an oriented orthonormal basis \( \{f_1,f_2,f_3,f_4\} \) of \( (\mathbb R^4)^* \), we make the identifications \( \Lambda^2_+=A \), \( \Lambda^2_-=B \) via \[ e^1=f^{12}+f^{34},\,e^2=f^{12}-f^{34}, \textrm{ and so forth.} \] The asserted isomorphism is then given by contraction on the middle two indices, as in the following example: \begin{equation*} \begin{split} U\cong A\otimes B\ni e^5\otimes e^2&=(f^{14}+f^{23})\otimes(f^{12}-f^{34})\\ &=(f^1f^4-f^4f^1+f^2f^3-f^3f^2)(f^1f^2-f^2f^1-f^3f^4+f^4f^3)\\ &\longmapsto f^1f^3-f^4f^2-f^2f^4+f^3f^1=f^1\odot f^3-f^2\odot f^4\in V. \end{split} \end{equation*} \end{proof} Table \ref{tab:comp-inv-cov} summarises how invariants and covariants are related under the above isomorphism \( U\cong V \). 
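The correspondence of Lemma \ref{lem:equiv-iso} and the first rows of the dictionary in Table \ref{tab:comp-inv-cov} can be probed numerically. The following pure-Python sketch (all helper names are ours) builds \( S \) from a random \( K \) via the matrix displayed in the lemma, and checks the dictionary rows \( 4\tr(KK^T)=\tr(S^2) \) and \( -24\det(K)=\tr(S^3) \).

```python
# Sanity check of Lemma "equiv-iso" and two rows of the invariant dictionary.
# All helper names below are ours; everything is plain Python, no dependencies.
from itertools import permutations
import random

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def trace(X):
    return sum(X[i][i] for i in range(len(X)))

def det(X):
    # Leibniz formula; fine for the 3x3 and 4x4 matrices used here.
    n = len(X)
    total = 0.0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        prod = 1.0
        for i in range(n):
            prod *= X[i][perm[i]]
        total += (-1) ** inv * prod
    return total

def iso(k):
    """The equivariant isomorphism U -> V of the lemma (3x3 -> symmetric 4x4)."""
    return [
        [-k[0][0]-k[1][1]-k[2][2], k[1][2]-k[2][1], -k[0][2]+k[2][0], k[0][1]-k[1][0]],
        [k[1][2]-k[2][1], -k[0][0]+k[1][1]+k[2][2], -k[0][1]-k[1][0], -k[0][2]-k[2][0]],
        [-k[0][2]+k[2][0], -k[0][1]-k[1][0], k[0][0]-k[1][1]+k[2][2], -k[1][2]-k[2][1]],
        [k[0][1]-k[1][0], -k[0][2]-k[2][0], -k[1][2]-k[2][1], k[0][0]+k[1][1]-k[2][2]],
    ]

random.seed(0)
K = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
S = iso(K)
KT = [[K[j][i] for j in range(3)] for i in range(3)]
S2 = matmul(S, S)
S3 = matmul(S2, S)
assert abs(4 * trace(matmul(K, KT)) - trace(S2)) < 1e-10   # dictionary, row 2
assert abs(-24 * det(K) - trace(S3)) < 1e-10               # dictionary, row 4
```

The remaining rows of the table can be probed in the same way; the two chosen here already pin down the quadratic and cubic invariants.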
\bigbreak \begin{table}[htp] \centering \begin{tabular}{CC} \toprule \vspace{0.1mm} K\in U & S\in V \vspace{0.2mm}\\ \hline \vspace{0.2mm} K & S \vspace{0.2mm}\\ \hline \vspace{0.2mm} 4\tr(KK^T) & \tr(S^2) \vspace{0.1mm}\\ -2\Adj(K^T) & (S^2)_0 \vspace{0.2mm}\\ \hline \vspace{0.2mm} -24\det(K) & \tr(S^3) \vspace{0.1mm}\\ 4\tr(KK^T)K & \tr(S^2)S \vspace{0.1mm}\\ 2KK^TK & \frac34\tr(S^2)S-(S^3)_0 \vspace{0.2mm}\\ \hline \vspace{0.2mm} 4\tr((KK^T)^2) & 3\det(S)+\frac14 \tr(S^4) \vspace{0.1mm}\\ 2\tr(KK^T)^2 &\det(S)+\frac14\tr(S^4) \vspace{0.1mm}\\ -24\det(K)K & \tr(S^3)S \vspace{0.1mm}\\ 4\tr(KK^T)\Adj(K) & \frac13\tr(S^3)S-(S^4)_0\\ \bottomrule \end{tabular} \caption{Dictionary between invariants and covariants; \( S \) denotes the image of \( K \) under the isomorphism \( U \to V \) of Lemma \ref{lem:equiv-iso}.} \label{tab:comp-inv-cov} \end{table} Now, let us fix a cohomology class \( c=(a,b)\in H^3(M,\mathbb R)\cong\mathbb R^2 \). We have: \begin{theorem} \label{thm:half-flat-param} The set \( \mathcal H_c \) of invariant half-flat structures on \( M \) with \( [\gamma]=c \) can be regarded as a subset of the \emph{commuting variety}: \begin{equation} \label{eq:comm-var} \left\{(Q,P)\in V\oplus V\colon\,[Q,P]=0\right\}. \end{equation} \end{theorem} \begin{proof} Recall \( T^*M=A\oplus B \), where \( A\cong \mathfrak{su}(2)^*\cong B \) so that we have \begin{gather*} \Lambda^2T^*\cong \Lambda^2A\oplus(A\otimes B)\oplus \Lambda^2B\cong\Lambda^4T^*M\\ \Lambda^3T^*\cong \Lambda^3A\oplus(\Lambda^2A\otimes B)\oplus(A\otimes\Lambda^2B)\oplus\Lambda^3B. \end{gather*} The equation \( d(\omega^2)=0 \) implies that \[ \omega\in A\otimes B\cong U\cong V, \] which defines \( P \). Also note \( \delta=\omega^2 \) lies in a space isomorphic to \( V \). 
We may assume that \[\gamma=ae^{135}+d\beta+be^{246}, \] for some invariant \( 2 \)-form \( \beta \); just as \( \omega \) defines \( P \), the \( A\otimes B \)-component of \( \beta \) defines the second matrix \( Q \). The condition \( \omega\wedge \gamma=0 \) implies \( Q\otimes P \) lies in the kernel of some \( \mathrm{SO}(4) \)-equivariant map \[ V\otimes V\longrightarrow \Lambda^5T^*M\cong A\oplus B\cong \Lambda^2\mathbb R^4, \] which must correspond to \( [Q,P]=QP-PQ \). \end{proof} \begin{remark} Consider the open subset \( \mathcal U_c \), \(c=(a,b) \), of the commuting variety given by pairs \( (Q,P) \) satisfying \begin{equation} \tr(P^3)\neq0,\quad \det(Q)+\frac{a-b}6\tr(Q^3)+\frac{ab}2\tr(Q^2)+(ab)^2<0. \end{equation} Then \( \mathcal H_c \) is the hypersurface in \( \mathcal U_c \) characterised by the normalisation condition \begin{equation} \label{eq:normMatr} \tr(P^3)=12\left(-\det(Q)-\frac{a-b}6\tr(Q^3)-\frac{ab}2\tr(Q^2)-(ab)^2\right)^{\frac12}. \end{equation} \end{remark} The space \( V\oplus V\cong V\times V^* =T^*V \) has a natural symplectic structure, and \( \mathrm{SO}(4) \) acts in a Hamiltonian fashion with moment map \( \mu\colon\, V\oplus V\rightarrow \mathfrak{so}(4)\cong\Lambda^2\mathbb R^4\) given by \[ (Q,P)\longmapsto[Q,P]. \] Via (singular) symplectic reduction \cite{Lerman:reduc}, we can then simplify the parameter space significantly: \begin{corollary} \label{cor:param} The set \( \mathcal H_c \) of half-flat structures modulo equivalence relations is a subset of the singular symplectic quotient \[ \frac{\mu^{-1}(0)}{\mathrm{SO}(4)}\cong\frac{\mathbb R^3\oplus\mathbb R^3}{S_3}. \] \qed \end{corollary} For later use, we observe that in terms of the matrix framework, the dual \( 3 \)-form \( \hat\gamma \) has exterior derivative given as follows: \begin{lemma} \label{lem:hatgamma} Fix a cohomology class \( c=(a,b)\in H^3(M) \). 
For any element \( (Q,P)\in\mathcal H_c \) corresponding to an invariant half-flat structure, the associated \( 4 \)-form \( d\hat\gamma \) corresponds to the matrix \( \hat R=\frac1{\sqrt{-r}} R \), where \begin{equation*} \begin{cases} R= -(Q^3)_0+\frac{a-b}2(Q^2)_0+(ab+\frac12\tr(Q^2))Q,\\ 4r=\det(Q)+\frac{a-b}6\tr(Q^3)+\frac{ab}2\tr(Q^2)+(ab)^2\,(=\lambda(c,Q)) \end{cases} \end{equation*} In particular, if \( a+b=0 \) and we set \( \hat Q=Q+a I \) then \begin{equation*} \begin{cases} R= (\Adj(\hat Q))_0,\\ 4r=\det(\hat Q) \end{cases} \end{equation*} \end{lemma} \begin{proposition} Let \( (Q,P)\in \mathcal H_c \): \begin{compactenum} \item if \( (Q,P) \) corresponds to a coupled structure then \( c=0 \) and \( P=-\frac32\alpha Q \) for a non-zero constant \( \alpha\in\mathbb R \). \item if \( (Q,P) \) corresponds to a co-coupled structure then \( \hat R=\alpha (P^2)_0 \) for a non-zero constant \( \alpha\in\mathbb R \). \end{compactenum} \qed \end{proposition} \begin{example} Obviously, the half-flat pair \( (Q,P) \) is of type \( \mathcal W_1+\mathcal W_3 \) if and only if the matrices \( (P^2)_0 \) and \( R \) are proportional, i.e., we have \( \hat R=\alpha (P^2)_0 \); the type does not reduce further provided \( c\neq0 \) and \( \alpha\neq0 \). Using these conditions it is easy to show that the structures of Example \ref{ex:W1W3} and Example \ref{ex:W1W3s0} have the type of intrinsic torsion claimed. Indeed, in the first example, using Lemma \ref{lem:hatgamma}, we find that \begin{equation*} (P^2)_0=\frac{9a^2\alpha^2}{8}\diag(3,-1,-1,-1),\quad R=\frac{9a^3}{8}\diag(3,-1,-1,-1), \end{equation*} whilst the matrices of the second example satisfy \begin{equation*} (P^2)_0=2a^2\diag(3,-1,-1,-1),\quad R=(\frac12\sqrt{5}a^2 b + 6b^3)\diag(3,-1,-1,-1). 
\end{equation*} \end{example} \begin{example}[Nearly-K\"ahler] \label{ex:nK} In this case, the following conditions should be satisfied: \begin{equation*} \begin{cases} P=-\frac32\alpha Q\equiv-\frac32\alpha\diag(-x-y-z,x,y,z),\\ 4\Adj(Q)_0=\sqrt{-\det(Q)}\alpha(P^2)_0=\frac94\alpha^3\sqrt{-\det(Q)}(Q^2)_0, \end{cases} \end{equation*} for some \( \alpha\in\mathbb R^* \). This is equivalent to solving the equations \[ (Q^2)_0=\tilde \alpha\left((Q^3)_0-\frac12\tr(Q^2)Q\right), \] where \( \tilde\alpha=-\frac{16}{9\alpha^3\sqrt{-\det Q}} \). We find that this system of equations can be formulated as \begin{equation*} \begin{cases} (y+z)(2x+y+z)=- \tilde\alpha yz(2x+y+z),\\ (x+z)(x+2y+z)=-\tilde\alpha xz(x+2y+z),\\ (x+y)(x+y+2z)=- \tilde\alpha xy(x+y+2z). \end{cases} \end{equation*} Keeping in mind that we must have \( (x+y+z)xy>0 \), we obtain only the following solutions \( (Q,P)\in \mathcal H_0 \): \begin{equation*} \begin{split} x&=y=z=\frac8{9\sqrt{3}\alpha^3}, \\ -\frac13x&=y=z=\frac8{9\sqrt{3}\alpha^3} \quad\textrm{ or with the roles of } x,y,z \textrm{ interchanged}. \end{split} \end{equation*} Note that these solutions coincide up to a permutation; the corresponding matrices \( Q \) are of the form \[ \diag(-3x,x,x,x) \quad \textrm{and} \quad \diag(x,-3x,x,x),\] respectively. \end{example} The above example captures a well known fact about uniqueness of the invariant nearly-K\"ahler structure on \( S^3\times S^3 \). In our framework, this can be summarised as follows (compare with \cite[Proposition 2.5]{Butruille:nK} and \cite[Proposition 1.11, Chapter 5]{Hengesbach:phd}). \begin{proposition} \label{prop:nKunique} Modulo equivalence and up to a choice of scaling \( q\slash p\in\mathbb R^* \), there is a unique invariant nearly-K\"ahler structure on \( M \). It is given by the class \( [(Q,P)] \) where \[ (Q,P)=(q\diag(-3,1,1,1),p\diag(-3,1,1,1))\in \mathcal H_0. 
\] \qed \end{proposition} As observed in \cite[Proposition~1.8]{Hengesbach:phd} there are no invariant (integrable) complex structures on \( M \) admitting a left-invariant holomorphic \( (3,0) \)-form. Indeed, in terms of \( 4\times4\) matrices this assertion is captured by \begin{lemma} In the notation of Lemma~\ref{lem:hatgamma}, if \( R=0 \) then \( r\geqslant0 \). \qed \end{lemma} Although we have chosen to focus on the vector space \( V \) and \( 4\times4 \) matrices, we conclude this section with a neat consequence of stability. Consider \( K\in\mathbb R^{3,3} \). The Cayley-Hamilton theorem states that \[ K^3-c_1K^2+c_2K-c_3I=0, \] where \( c_1=\tr K \), \( c_2 \) is determined by \( \tr(K^2)=c_1^2-2c_2 \), and \( c_3=\det K \). Consider now the adjugate \[ \Adj K =K^2-c_1K+c_2I, \] so that \( K(\Adj K)=(\det K)I \). Table~\ref{tab:comp-inv-cov} implies that the mapping \( \omega\mapsto\omega^2 \) corresponds to a multiple of \( K\mapsto \Adj(K^T) \). The following result describes an analogue, for the adjugate, of the square root of a \( 3\times 3 \) matrix; it can be proved directly using the singular value decomposition. \begin{corollary} Any \( 3\times3 \) matrix with positive determinant equals \( \Adj K \) for some unique \( \pm K \). \qed \end{corollary} \section{Evolution equations: from \boldmath{\(\mathrm{SU}(3)\)} to \boldmath{\(\mathrm{G}_2\)}} \label{sec:flow} Let \( I\subset\mathbb R \) be an interval. A \( \mathrm{G}_2 \)-structure and metric on the \( 7 \)-manifold \( M\times I \) can be constructed from a one-parameter family of half-flat structures on \( M \) by setting \begin{equation} \label{eq:G2str} \begin{cases} \varphi=\omega(t)\wedge dt+\gamma(t),\\ {*}\varphi=\hat\gamma(t)\wedge dt+\frac12\delta(t), \end{cases} \end{equation} where \( \delta(t)=\omega(t)^2 \) and \( t\in I \). It is well known \cite{Fernandez-G:G2} that the holonomy lies in \( \mathrm{G}_2 \) if and only if \( d\varphi=0=d{*}\varphi \). 
For structures defined via a one-parameter family of half-flat structures, this can be phrased equivalently as: \begin{proposition} The Riemannian metric associated with the \( \mathrm{G}_2 \)-structure \eqref{eq:G2str} has holonomy in \( \mathrm{G}_2 \) if and only if the family of forms satisfies the equations: \begin{equation} \label{eq:G2flow} \begin{cases} \gamma'=d\omega,\\ \delta'=-2d\hat\gamma. \end{cases} \end{equation} \end{proposition} \begin{proof} Differentiation of \( \varphi \) and \( {*}\varphi \) gives us: \begin{equation*} \begin{cases} d\varphi=d\omega\wedge dt+d\gamma-\gamma'\wedge dt,\\ d{*}\varphi=d\hat\gamma\wedge dt+\frac12d\delta+\frac12\delta'\wedge dt. \end{cases} \end{equation*} Since the one-parameter family consists of half-flat \( \mathrm{SU}(3) \)-structures, we have \( d\gamma=0=d\delta \) (for each fixed \( t \)), so the conditions \( d\varphi=0=d{*}\varphi \) reduce to the system \eqref{eq:G2flow}. \end{proof} \begin{remark} As explained in \cite[Theorem 8]{Hitchin:stable}, the evolution equations \eqref{eq:G2flow} can be viewed as the flow of a Hamiltonian vector field on \( \Omega^3_{ex}(M)\times\Omega^4_{ex}(M) \). It is a remarkable fact that this flow preserves not only the closure of \( \delta \) and \( \gamma \), but also the compatibility conditions \eqref{eq:comp-halfflat}. \end{remark} \begin{remark} In order to show that a given \( \mathrm{G}_2 \)-metric on \( M\times I \) has holonomy equal to \( \mathrm{G}_2 \), one must show that there are no non-zero parallel \( 1 \)-forms on the \( 7 \)-manifold (see the treatment by Bryant and the second author \cite[Theorem 2]{Bryant-S:exceptional}). For many of the metrics constructed in this paper, the argument is the same as, or a variation of, the one applied in \cite[Section 3]{Bryant-S:exceptional}. 
\end{remark} In terms of matrices \( (Q,P)\in \mathcal H_c \), we can rephrase the flow equations by \begin{proposition} \label{prop:G2flow-matr} As a flow, \( t\mapsto (Q(t),P(t)) \), in \( \mathcal H_c \), the evolution equations \eqref{eq:G2flow} take the form \begin{equation} \label{eq:G2flow-matr} \begin{cases} Q'=P,\\ (P^2)'_0=-2\hat R. \end{cases} \end{equation} \qed \end{proposition} These equations are particularly simple when the cohomology class \( c=(a,b) \) of \( \gamma \) satisfies the criterion \( a+b=0 \). In this case, by Lemma \ref{lem:hatgamma}, we have: \begin{corollary}\label{cor:flow} For a flow, \( t\mapsto (Q(t),P(t)) \), in \( \mathcal H_{(a,b)} \) with \( a+b=0 \), the equations \eqref{eq:G2flow-matr} take the form: \begin{equation*} \begin{cases} Q'=P,\\ (P^2)'_0=-\frac{4\Adj(\hat Q)_0}{\sqrt{-\det \hat Q}}. \end{cases} \end{equation*} \qed \end{corollary} \begin{remark} When phrased as above, the preservation of the normalisation \eqref{eq:normMatr} essentially amounts to Jacobi's formula for the derivative of a determinant. \end{remark} Proposition \ref{prop:G2flow-matr} tells us that the \( \mathrm{G}_2 \)-metrics on \( M\times I \) that arise from the flow of invariant half-flat structures, can be interpreted as the lift of suitable paths \( t\mapsto Q(t) \) to paths \[ t\mapsto(Q(t),P(t))\in S^2_0(\mathbb R^4)\times S^2_0(\mathbb R^4)\cong T^*(S^2_0(\mathbb R^4)),\] and moreover these paths lie on level sets of the (essentially Hamiltonian) functional \[ H_c(Q,P)=\sqrt{-\lambda(c,Q)}-\frac1{12}\tr(P^3). \] \begin{corollary} Let \( (Q,P) \) be a (normalised) solution of the flow equations \eqref{eq:G2flow-matr}. Then the trajectory \( (Q(t),P(t)) \) lies on the level set \( \{H_c=0\} \) inside the space \( (S^2_0(\mathbb R^4))^2\cong T^*(S^2_0(\mathbb R^4)) \). 
\qed \end{corollary} \paragraph{Dynamic examples of type \boldmath{\(\mathcal W_1+\mathcal W_3\)}.} Rephrasing results of \cite{Brandhuber-al:G2}, we now consider the one-parameter family of forms \( t\mapsto (\omega(t),\gamma(t)) \) given by \begin{equation*} \begin{cases} \omega(t)=-\frac32\alpha(t)x(t)(e^{12}+e^{34}+e^{56})\equiv-\frac32\alpha(t)x(t)\omega_0,\\ \gamma(t)=x(t)d\omega_0+a(e^{135}-e^{246}). \end{cases} \end{equation*} In this case, we find that \[ \lambda=(a-3x)(x+a)^3, \] and we shall assume \( 3x<a \) and \( x<-a \), so as to ensure \( \lambda<0 \). Also note that \begin{equation*} \begin{split} -\sqrt{-\lambda}\hat{\gamma}&=x(a+x)^2(e^{135}+e^{246})\\ &\quad+(a-2x)(a+x)^2\left(e^{352}+e^{146}+e^{514}+e^{362}+e^{136}+e^{524}\right). \end{split} \end{equation*} In particular, the normalisation condition reads: \begin{equation} \label{eq:norm-Hitchsol1} 27\alpha^3x^3=4\sqrt{(3x-a)(x+a)^3}. \end{equation} In order to solve the flow equations, we also need the \( 4 \)-form \[ d\hat{\gamma}=\frac1{\sqrt{-\lambda}}x(x+a)^2\omega_0^2. \] Based on the above expressions, the system \eqref{eq:G2flow} becomes: \begin{equation*} \begin{cases} x'(t)=-\frac32\alpha(t)x(t),\\ (\alpha^2x^2)'=-\frac{8}9x\sqrt{\frac{x+a}{3x-a}}. \end{cases} \end{equation*} These equations can be rewritten as a system of first order ODEs in \( x \) and \( \alpha \): \begin{equation*} \begin{cases} x'=-\frac32\alpha x\\ \alpha'=\frac32\alpha^2-\frac49\frac1{\alpha x}\sqrt{\frac{x+a}{3x-a}}. \end{cases} \end{equation*} As we require the normalisation \eqref{eq:norm-Hitchsol1} to hold, we cannot choose initial conditions \( (x_i,\alpha_i) \) freely. 
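The normalisation \eqref{eq:norm-Hitchsol1} should be preserved along the flow (cf.\ the remark above on Jacobi's formula). As a numerical sanity check, the following pure-Python sketch (all helper names are ours) integrates the \( (x,\alpha) \) system with a classical fourth-order Runge--Kutta scheme, starting from normalised initial data with \( a=1 \), \( x=-2 \), and monitors the normalisation defect.

```python
# Sketch (helper names are ours): integrate the (x, alpha) system above with
# RK4 and monitor the defect 27 (alpha x)^3 - 4 sqrt((3x - a)(x + a)^3),
# which vanishes on normalised data and should stay negligible.
import math

def rhs(x, al, a):
    """Right-hand sides of x' = -3/2 alpha x and the alpha' equation."""
    root = math.sqrt((x + a) / (3 * x - a))
    return -1.5 * al * x, 1.5 * al ** 2 - (4.0 / 9.0) * root / (al * x)

def norm_defect(x, al, a):
    return 27 * (al * x) ** 3 - 4 * math.sqrt((3 * x - a) * (x + a) ** 3)

a, x = 1.0, -2.0                 # satisfies 3x < a and x < -a
al3 = 4 * math.sqrt((3 * x - a) * (x + a) ** 3) / (27 * x ** 3)
al = math.copysign(abs(al3) ** (1 / 3), al3)   # normalised initial alpha

h = 1e-3
for _ in range(1000):            # RK4 steps on t in [0, 1]
    k1 = rhs(x, al, a)
    k2 = rhs(x + 0.5 * h * k1[0], al + 0.5 * h * k1[1], a)
    k3 = rhs(x + 0.5 * h * k2[0], al + 0.5 * h * k2[1], a)
    k4 = rhs(x + h * k3[0], al + h * k3[1], a)
    x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    al += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])

drift = norm_defect(x, al, a)
assert abs(drift) < 1e-6 * abs(27 * (al * x) ** 3)
```

The defect stays at the level of the integrator's round-off, consistent with the normalisation being a conserved quantity of the flow.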
After suitable reparametrization, we find the explicit solution: \begin{equation} \begin{cases} x(s)=\frac13(4s^3+a),\\ \alpha(s)=\frac{4s^2}{\sqrt{3}}\frac{\sqrt{1+as^{-3}}}{4s^3+a}, \end{cases} \end{equation} where \( -\infty<s<\min\{0,-a^{\frac13}\} \), and \[ t=-2\sqrt{3}\int\!\frac{ds}{\sqrt{1+as^{-3}}}.\] Note that whilst \( x' \) is always non-zero, \( \alpha' \) can be zero. Indeed, this happens if \( a \) is chosen such that the quadratic equation \[ x^2+2ax-a^2=0 \] has a solution \( x(s) \) for some \( s<\min\{0,-a^{\frac13}\} \). This is the case for any non-zero \( a \): if \( a>0 \) the solution is obtained for \[ s=-a^{\frac13}(1+\frac34\sqrt{2})^{\frac13}, \] and if \( a<0 \) the solution occurs when \[ s=a^{\frac13}(-1+\frac34\sqrt{2})^{\frac13}. \] Introducing \( A(t)=-\frac{(\alpha x)'}{\alpha x} \), we can express the exterior derivatives of the defining forms via \begin{equation} \begin{cases} \label{eq:flow13} d\omega=-\frac32A\gamma+\frac32\left(\alpha a(e^{135}-e^{246})+(A-\alpha)\gamma\right)\equiv-\frac32A\gamma+\beta,\\ d\gamma=0,\\ d\hat\gamma=A\omega^2. \end{cases} \end{equation} As \( \gamma\wedge\beta=0=\hat\gamma\wedge\beta \) and \( \omega\wedge\beta=0 \), this implies that the constructed one-parameter family of \( \mathrm{SU}(3) \)-structures consists of members of type \( \mathcal W_1+\mathcal W_3 \). The associated family of metrics takes the form \[ g=-\frac{3\alpha x}{\sqrt{(3x-a)(x+a)}}\left(x\sum_{i=1}^6e^i\otimes e^i+\frac12(a-x)\sum_{i=1}^3(e^{2i-1}\otimes e^{2i}+e^{2i}\otimes e^{2i-1})\right), \] and has scalar curvature given by \[ \mathpzc{s}=\frac{6(a^2-5x^2)}{\sqrt{(3x-a)^3(a+x)}}.\] Zero scalar curvature is obtained for the solution which has \( a=-(5+\sqrt{5}) \). Indeed, in this case the scalar curvature is zero when \( s^3=\frac{1-\sqrt{5}}2 \). 
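The zero scalar curvature claim admits a direct numerical cross-check (a sketch; the variable names are ours): for \( a=-(5+\sqrt5) \) and \( s^3=\frac{1-\sqrt5}2 \), the script below confirms both the normalisation \eqref{eq:norm-Hitchsol1} and the vanishing of \( a^2-5x^2 \), which is the numerator of \( \mathpzc{s} \).

```python
# Check of the explicit solution at the zero scalar curvature point:
# a = -(5 + sqrt(5)), s^3 = (1 - sqrt(5))/2.  Variable names are ours.
import math

a = -(5 + math.sqrt(5))
s = -((math.sqrt(5) - 1) / 2) ** (1 / 3)     # s < 0 with s^3 = (1 - sqrt(5))/2
x = (4 * s ** 3 + a) / 3
alpha = (4 * s ** 2 / math.sqrt(3)) * math.sqrt(1 + a / s ** 3) / (4 * s ** 3 + a)

# normalisation 27 alpha^3 x^3 = 4 sqrt((3x - a)(x + a)^3)
lhs = 27 * alpha ** 3 * x ** 3
rhs = 4 * math.sqrt((3 * x - a) * (x + a) ** 3)
assert abs(lhs - rhs) < 1e-9 * abs(rhs)

# the scalar curvature numerator a^2 - 5 x^2 vanishes here
assert abs(a ** 2 - 5 * x ** 2) < 1e-9
```

In fact \( x=-(1+\sqrt5) \) at this point, so \( 5x^2=30+10\sqrt5=a^2 \) holds exactly.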
Finally, let us remark that the associated \( \mathrm{G}_2 \)-metric is of the form \( dt\otimes dt+g \), or, phrased more explicitly, in terms of the parameter \( s \): \begin{equation*} \begin{split} &\frac{12}{1+as^{-3}}ds\otimes ds+\frac{4s^2+as^{-1}}{\sqrt{3}}\sum_{i=1}^6e^i\otimes e^i - \frac{2s^2-as^{-1}}{\sqrt{3}}\sum_{i=1}^3(e^{2i-1}\otimes e^{2i}+e^{2i}\otimes e^{2i-1})=\\[5pt] &\frac{12}{1+as^{-3}}ds\otimes ds\\ &\hskip20pt+\sum_{i=1}^3\left(\frac{s^2(1+as^{-3})}{\sqrt{3}}(e^{2i-1}+e^{2i})\otimes (e^{2i}+e^{2i-1}) + \sqrt{3}s^2(e^{2i-1}-e^{2i})\otimes (e^{2i}-e^{2i-1})\right). \end{split} \end{equation*} If \( a=0 \) this metric is conical whilst for \( a\neq0 \), the metric is asymptotically conical: when \( |s|\to \infty \) it tends to a cone metric \[ 12ds^2+s^2\sum_{i=1}^3\left(\frac{1}{\sqrt{3}}(e^{2i-1}+e^{2i})\otimes (e^{2i}+e^{2i-1}) + \sqrt{3}(e^{2i-1}-e^{2i})\otimes(e^{2i}-e^{2i-1})\right) \] over \( M \). In terms of the classification \cite{Dancer-W:painleve}, the metrics belong to the family (I). In terms of the matrix framework, the one-parameter families of pairs \( (Q,P) \) take the form: \begin{equation*} Q=-x\diag(3,-1,-1,-1),\quad P=-\frac32\alpha x\diag(3,-1,-1,-1). \end{equation*} In particular, we get another way of verifying the co-coupled condition: \begin{equation*} (P^2)_0=\frac{9\alpha^2x^2}2\diag(3,-1,-1,-1),\quad R=x(a + x)^2\diag(3,-1,-1,-1). \end{equation*} \section{Further examples} \label{sec:further} \paragraph{Metrics with \boldmath{\(\mathrm{SU}(2)^2\times \Delta \mathrm{U}(1)\ltimes \mathbb{Z}\slash2\)} symmetry.} Following mainly \cite{Chong-al:G2contr}, we study examples that relate our framework to certain constructions of \( \mathrm{G}_2 \)-metrics appearing in the physics literature. 
Our starting point is a one-parameter family of half-flat pairs \( (\omega,\gamma) \) of the form: \begin{equation*} \begin{cases} \omega=p_1e^{12}+p_2e^{34}+p_3e^{56},\\ \gamma=ae^{135}+be^{246}+q_1d(e^{12})+q_2d(e^{34})+q_3d(e^{56}). \end{cases} \end{equation*} Using the normalisation condition, we are able to express the associated one-parameter family of metrics on \( M \) as follows: \begin{equation} \begin{split} \label{eq:diagmetr} g&=\frac{q_2q_3+aq_1}{p_2p_3}e^1\otimes e^1+\frac{q_2q_3-bq_1}{p_2p_3}e^2\otimes e^2+\frac{q_1^2-q_2^2-q_3^2-ab}{2p_2p_3}(e^1\otimes e^2{+}e^2\otimes e^1)\\ &+\frac{q_1q_3+aq_2}{p_1p_3}e^3\otimes e^3+\frac{q_1q_3-bq_2}{p_1p_3}e^4\otimes e^4+\frac{q_2^2-q_1^2-q_3^2-ab}{2p_1p_3}(e^3\otimes e^4{+}e^4\otimes e^3)\\ &+\frac{q_1q_2+aq_3}{p_1p_2}e^5\otimes e^5+\frac{q_1q_2-bq_3}{p_1p_2}e^6\otimes e^6+\frac{q_3^2-q_1^2-q_2^2-ab}{2p_1p_2}(e^5\otimes e^6{+}e^6\otimes e^5), \end{split} \end{equation} and the flow equations \eqref{eq:G2flow} read: \begin{equation} \label{eq:flow-specialcase} \begin{cases} q_i'=p_i,\\ (p_2p_3)'=\frac1{p_1p_2p_3}\left(-abq_1+(a-b)q_2q_3+q_1(q_2^2+q_3^2-q_1^2)\right),\quad\textrm{etc.} \end{cases} \end{equation} \begin{remark} Notice that the \( \mathbb{Z}\slash2 \) action which interchanges the two copies of \( S^3 \) preserves the metric \eqref{eq:diagmetr} provided the cohomology class \( [\gamma] \) satisfies \( a+b=0 \), i.e., \( [\gamma]=(a,-a) \). The action interchanges metrics of half-flat structures with \( [\gamma]=(a,0) \) with those for which \( [\gamma]=(0,-a) \). The latter observation is related to the notion of a \emph{flop} \cite{Atiyah-al:flop}. \end{remark} \begin{remark} \label{rem:volgrowth} The quantity \( \sqrt{\det{g(t)}} \) can be viewed as the volume of \( g(t) \) relative to a fixed background metric on \( S^3\times S^3 \). 
As expected, we find that \[ \sqrt{\det(g)}=2\sqrt{-\lambda}, \] where we have used that \( \tr(P^3)=-6\sqrt{-\lambda} \), by the normalisation condition \eqref{eq:normMatr}. \end{remark} A metric ansatz that has led to the discovery of new complete \( \mathrm{G}_2 \)-metrics (see, for instance, \cite{Brandhuber-al:G2,Cvetic-al:orientifolds}) can be expressed in terms of the condition \( a+b=0 \). In this case, we find \begin{equation} \begin{split} \label{eq:triaxmetr} g&=\frac{q_2q_3+aq_1}{p_2p_3}(e^1\otimes e^1+e^2\otimes e^2)+\frac{q_1^2-q_2^2-q_3^2+a^2}{2p_2p_3}(e^1\otimes e^2{+}e^2\otimes e^1)\\ &+\frac{q_1q_3+aq_2}{p_1p_3}(e^3\otimes e^3+e^4\otimes e^4)+\frac{q_2^2-q_1^2-q_3^2+a^2}{2p_1p_3}(e^3\otimes e^4{+}e^4\otimes e^3)\\ &+\frac{q_1q_2+aq_3}{p_1p_2}(e^5\otimes e^5+e^6\otimes e^6)+\frac{q_3^2-q_1^2-q_2^2+a^2}{2p_1p_2}(e^5\otimes e^6{+}e^6\otimes e^5)\\ &=\sum_{i=1}^3a_i^2(e^{2i-1}-e^{2i})\otimes(e^{2i-1}-e^{2i})+b_i^2(e^{2i-1}+e^{2i})\otimes(e^{2i-1}+e^{2i}), \end{split} \end{equation} where \begin{equation*} \begin{cases} \label{eq:rel-abqp} a_1^2+b_1^2=\frac{q_2q_3+aq_1}{p_2p_3},\quad b_1^2-a_1^2=\frac{q_1^2-q_2^2-q_3^2+a^2}{2p_2p_3},\\ a_2^2+b_2^2=\frac{q_1q_3+aq_2}{p_1p_3},\quad b_2^2-a_2^2=\frac{q_2^2-q_1^2-q_3^2+a^2}{2p_1p_3},\\ a_3^2+b_3^2=\frac{q_1q_2+aq_3}{p_1p_2},\quad b_3^2-a_3^2=\frac{q_3^2-q_1^2-q_2^2+a^2}{2p_1p_2}, \end{cases} \end{equation*} or, alternatively, \begin{equation} \begin{cases} q_1=-a_1a_2a_3 - a_3b_1b_2 - a_2b_1b_3 + a_1b_2b_3,\\ q_2=-a_1a_2a_3 - a_3b_1b_2 + a_2b_1b_3 - a_1b_2b_3,\\ q_3=-a_1a_2a_3 + a_3b_1b_2 - a_2b_1b_3 - a_1b_2b_3,\\ p_2p_3=4a_2a_3b_2b_3,\quad p_1p_3=4a_1a_3b_1b_3,\quad p_1p_2=4a_1a_2b_1b_2,\\ a=-b=a_1a_2a_3 - a_3b_1b_2 - a_2b_1b_3 - a_1b_2b_3. \end{cases} \end{equation} Note that, up to a sign, we have \( p_i=-2a_ib_i \). 
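This change of variables can be verified numerically. The following sketch (with our own variable names) draws random positive \( a_i,b_i \), forms \( q_i \), \( a \) and the products \( p_ip_j \) via the formulas above, and checks two of the displayed relations.

```python
# Sample-based check of the change of variables (a_i, b_i) -> (q_i, p_i, a).
# Variable names are ours; the formulas are the ones displayed above.
import random

random.seed(1)
a1, a2, a3 = (random.uniform(0.5, 2.0) for _ in range(3))
b1, b2, b3 = (random.uniform(0.5, 2.0) for _ in range(3))

q1 = -a1*a2*a3 - a3*b1*b2 - a2*b1*b3 + a1*b2*b3
q2 = -a1*a2*a3 - a3*b1*b2 + a2*b1*b3 - a1*b2*b3
q3 = -a1*a2*a3 + a3*b1*b2 - a2*b1*b3 - a1*b2*b3
a  =  a1*a2*a3 - a3*b1*b2 - a2*b1*b3 - a1*b2*b3
p2p3 = 4 * a2 * a3 * b2 * b3

# a_1^2 + b_1^2 = (q_2 q_3 + a q_1)/(p_2 p_3)
assert abs(a1**2 + b1**2 - (q2*q3 + a*q1) / p2p3) < 1e-10

# b_1^2 - a_1^2 = (q_1^2 - q_2^2 - q_3^2 + a^2)/(2 p_2 p_3)
assert abs(b1**2 - a1**2 - (q1**2 - q2**2 - q3**2 + a**2) / (2 * p2p3)) < 1e-10
```

The remaining four relations follow by cyclically permuting the indices and can be checked the same way.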
Expressed in terms of the metric functions \( a_i,b_i \), the flow equations \eqref{eq:flow-specialcase} become: \begin{equation} \label{eq:flow-specialcase-3axial} \begin{cases} 4a_1'= \frac{a_1^2}{a_3b_2} + \frac{a_1^2}{a_2b_3} - \frac{a_2}{b_3} - \frac{a_3}{b_2} - \frac{b_2}{a_3} - \frac{b_3}{a_2},\\ 4b_1'= \frac{b_1^2}{a_2a_3} - \frac{b_1^2}{b_2b_3} - \frac{a_2}{a_3} - \frac{a_3}{a_2} + \frac{b_2}{b_3} + \frac{b_3}{b_2},\\ 4a_2'= \frac{a_2^2}{a_3b_1} + \frac{a_2^2}{a_1b_3} - \frac{a_1}{b_3} - \frac{a_3}{b_1} - \frac{b_1}{a_3} - \frac{b_3}{a_1}, \\ 4b_2'= \frac{b_2^2}{a_1a_3} - \frac{b_2^2}{b_1b_3} - \frac{a_1}{a_3} - \frac{a_3}{a_1} +\frac{b_1}{b_3} + \frac{b_3}{b_1},\\ 4a_3'= \frac{a_3^2}{a_2b_1} + \frac{a_3^2}{a_1b_2} - \frac{a_1}{b_2} - \frac{a_2}{b_1} -\frac{b_1}{a_2} - \frac{b_2}{a_1}, \\ 4b_3'= \frac{b_3^2}{a_1a_2} - \frac{b_3^2}{b_1b_2} - \frac{a_1}{a_2} - \frac{a_2}{a_1} + \frac{b_1}{b_2} + \frac{b_2}{b_1}. \end{cases} \end{equation} The complete metrics constructed by Brandhuber et al.\ \cite{Brandhuber-al:G2} arise as a further specialisation of this system. 
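As a quick consistency check of this system (a sketch with our own helper names): at any point with \( a_1=a_2 \) and \( b_1=b_2 \) the right-hand sides respect that symmetry, so the locus \( a_1=a_2 \), \( b_1=b_2 \) is preserved by the flow; we also confirm the effect of the substitution \( t=\int ds/b_3 \) on the first equation.

```python
# Consistency check of the six flow equations displayed above.
# Helper names are ours; the right-hand sides are transcribed verbatim.
import random

def full_rhs(a1, b1, a2, b2, a3, b3):
    da1 = (a1**2/(a3*b2) + a1**2/(a2*b3) - a2/b3 - a3/b2 - b2/a3 - b3/a2) / 4
    db1 = (b1**2/(a2*a3) - b1**2/(b2*b3) - a2/a3 - a3/a2 + b2/b3 + b3/b2) / 4
    da2 = (a2**2/(a3*b1) + a2**2/(a1*b3) - a1/b3 - a3/b1 - b1/a3 - b3/a1) / 4
    db2 = (b2**2/(a1*a3) - b2**2/(b1*b3) - a1/a3 - a3/a1 + b1/b3 + b3/b1) / 4
    da3 = (a3**2/(a2*b1) + a3**2/(a1*b2) - a1/b2 - a2/b1 - b1/a2 - b2/a1) / 4
    db3 = (b3**2/(a1*a2) - b3**2/(b1*b2) - a1/a2 - a2/a1 + b1/b2 + b2/b1) / 4
    return da1, db1, da2, db2, da3, db3

random.seed(2)
a, b, a3, b3 = (random.uniform(0.5, 2.0) for _ in range(4))
da1, db1, da2, db2, da3, db3 = full_rhs(a, b, a, b, a3, b3)

# the locus a1 = a2, b1 = b2 is preserved by the flow
assert abs(da1 - da2) < 1e-12 and abs(db1 - db2) < 1e-12

# with t = \int ds/b3, the first equation reduces to the two-function form
lhs = 4 * da1 / b3
rhs = (a**2 - a3**2 - b**2) / (b * a3 * b3) - 1 / a
assert abs(lhs - rhs) < 1e-10
```

This symmetry of the system is what the two-function specialisation below exploits.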
Indeed, if we take \( a_1=a_2\equiv a \) and \( b_1=b_2\equiv b \) and set \( t=\int\frac{ds}{b_3} \), then the system \eqref{eq:flow-specialcase-3axial} reads \begin{equation*} \begin{cases} 4\frac{\partial a}{\partial s}= \frac{a^2-a^2_3-b^2}{ba_3b_3} - \frac1{a},\\ 4\frac{\partial b}{\partial s}= \frac{b^2- a^2 -a^2_3}{aa_3b_3} + \frac1{b},\\ 2\frac{\partial a_3}{\partial s}= \frac{a_3^2 -a^2-b^2}{abb_3}, \\ 4\frac{\partial b_3}{\partial s}= \frac{b_3}{a^2} - \frac{b_3}{b^2}, \end{cases} \end{equation*} which is the same as in \cite[Equation (3.1)]{Brandhuber-al:G2}, where the authors find the following explicit holonomy \( \mathrm{G}_2 \)-metric: \begin{equation} \label{eq:ABC} \begin{split} \frac{ds^2}{b_3^2}&+\frac{(s-\frac32)(s+\frac92)}{12}\left((e^1-e^2)\otimes(e^1-e^2)+(e^3-e^4)\otimes(e^3-e^4)\right)\\ &+\frac{(s+\frac32)(s-\frac92)}{12}\left((e^1+e^2)\otimes(e^1+e^2)+(e^3+e^4)\otimes(e^3+e^4)\right)\\ &+\frac{s^2}9(e^5-e^6)\otimes(e^5-e^6)+\frac{(s-\frac92)(s+\frac92)}{(s-\frac32)(s+\frac32)}(e^5+e^6)\otimes(e^5+e^6). \end{split} \end{equation} Asymptotically this is the metric of a circle bundle over a cone, in short an \emph{ABC metric}. In terms of the classification \cite{Dancer-W:painleve}, it belongs to the family (II). \paragraph{Cohomogeneity one Ricci flat metrics.} Any solution of \eqref{eq:G2flow} gives us a cohomogeneity one Ricci flat metric on \( M\times I \). An important aspect of the cohomogeneity one terminology is to bridge a gap between our framework and the ``Lagrangian approach'' appearing in the physics literature (see, e.g., \cite[Section 4]{Brandhuber-al:G2}). For example, consider the metric \eqref{eq:triaxmetr} from the above example, assuming for simplicity that \( a_1=a_2\equiv a \) and \( b_1=b_2\equiv b \). By \cite{Eschenburg-W:cohom1}, we know that the shape operator \( L \) of the principal orbit \( S^3\times S^3\subset I\times M \) satisfies the equation \( g'=2g\circ L \). 
For the given metric, we find that \[ L=\frac12\left(\begin{array}{cccccc} \frac{a'b+ab'}{ab} & \frac{ab'-a'b}{ab} & 0 & 0 & 0 & 0 \\ \frac{ab'-a'b}{ab} & \frac{a'b+ab'}{ab} & 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{a'b+ab'}{ab} & \frac{ab'-a'b}{ab} & 0 & 0\\ 0 & 0 & \frac{ab'-a'b}{ab} & \frac{a'b+ab'}{ab} & 0 & 0\\ 0 & 0 & 0 & 0 & \frac{a'_3b_3+a_3b'_3}{a_3b_3} & \frac{a_3b'_3-a'_3b_3}{a_3b_3}\\ 0 & 0 & 0 & 0 & \frac{a_3b'_3-a'_3b_3}{a_3b_3} & \frac{a'_3b_3+a_3b'_3}{a_3b_3} \end{array}\right). \] We also observe that \begin{equation*} \begin{gathered} \tr(L)^2=\frac{(2a_3b_3ab'+2a_3b_3ba'+aba_3b'_3+abb_3a'_3)^2}{a^2b^2a_3^2b_3^2},\\ \tr(L^2)=\frac{2a_3^2b_3^2a^2{b'}^2+2a_3^2b_3^2b^2{a'}^2+a^2b^2a_3^2{b'}_3^2+a^2b^2b_3^2{a'}_3^2}{a^2b^2a_3^2b_3^2},\\ \det(g)=64a^4b^4a_3^2b_3^2,\\ \mathpzc{s}=-\frac18\frac{2a_3^4a^2b^2+a_3^2a^4b_3^2-8a^4b^2a_3^2+a_3^2b^4b_3^2-8b^4a^2a_3^2+2a^6b^2-4a^4b^4+2a^2b^6}{a^4b^4a_3^2}. \end{gathered} \end{equation*} In general, the Ricci flat condition can now be expressed as: \begin{equation} \label{eq:cohom1-Rflat} L'+(\tr(L))L-\Ric=0,\quad \tr(L')+\tr(L^2)=0, \end{equation} combined with another equation expressing the Einstein condition for mixed directions. If we take the trace of the first equation in \eqref{eq:cohom1-Rflat}, and combine with the second one, we obtain the following conservation law: \[ (\tr(L))^2-\tr(L^2)-\mathpzc{s}=0. \] As explained in \cite{Dancer-W:painleve}, the above system has a Hamiltonian interpretation. It is this interpretation, in its Lagrangian guise and phrased with the use of superpotentials, that one frequently encounters in the physics literature. In this setting, the kinetic and potential energies are given by \begin{equation*} T=\left((\tr(L))^2-\tr(L^2)\right)\sqrt{\det(g)},\quad V=-\mathpzc{s}\sqrt{\det(g)}; \end{equation*} these definitions agree with those in \cite{Brandhuber-al:G2} up to a multiple of \( \sqrt{\det(g)}=8a^2b^2a_3b_3 \).
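The expressions for \( \tr(L)^2 \) and \( \tr(L^2) \) can be checked by assembling the \( 6\times6 \) matrix \( L \) at random parameter values, treating the primed quantities as independent numbers; a minimal check in Python:

```python
import random

def trace_residuals(a, b, a3, b3, da, db, da3, db3):
    """Assemble the 6x6 shape operator L above (da stands for a', etc.) and
    return the differences between tr(L)^2, tr(L^2) and the closed forms."""
    p, q = (da*b + a*db) / (2*a*b), (a*db - da*b) / (2*a*b)
    p3, q3 = (da3*b3 + a3*db3) / (2*a3*b3), (a3*db3 - da3*b3) / (2*a3*b3)
    L = [[p, q, 0, 0, 0, 0],
         [q, p, 0, 0, 0, 0],
         [0, 0, p, q, 0, 0],
         [0, 0, q, p, 0, 0],
         [0, 0, 0, 0, p3, q3],
         [0, 0, 0, 0, q3, p3]]
    tr = sum(L[i][i] for i in range(6))
    tr2 = sum(L[i][j] * L[j][i] for i in range(6) for j in range(6))
    den = a**2 * b**2 * a3**2 * b3**2
    tr_sq = (2*a3*b3*a*db + 2*a3*b3*b*da + a*b*a3*db3 + a*b*b3*da3)**2 / den
    tr2_cf = (2*a3**2*b3**2*a**2*db**2 + 2*a3**2*b3**2*b**2*da**2
              + a**2*b**2*a3**2*db3**2 + a**2*b**2*b3**2*da3**2) / den
    return tr**2 - tr_sq, tr2 - tr2_cf

random.seed(1)
for _ in range(50):
    r1, r2 = trace_residuals(*(random.uniform(0.5, 2.0) for _ in range(8)))
    assert abs(r1) < 1e-9 and abs(r2) < 1e-9
```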
In \cite{Dancer-W:superpot}, the authors provide a relevant description of the superpotential; in classical terms this is a solution of a time-independent Hamilton-Jacobi equation. In the concrete example, the superpotential \( u \) can be viewed as a function of \( a_i,b_i \). Explicitly, we can take \[ u=2\left(2a^3bb_3+2ab^3b_3-a^2a_3b_3^2+b^2a_3b_3^2+2aba_3^2b_3\right). \] In terms of \( u \), the flow equations can then be expressed as follows: \[ \frac{\partial\vv{\alpha}}{\partial r}=G^{-1}\frac{\partial u}{\partial \vv{\alpha}}, \] where \( \vv{\alpha}=(\ln(a),\ln(b),\ln(b_3),\ln(a_3))^T \) (assuming \( a_i,b_i>0 \)), \( t=\int\!\!\sqrt{\det(g)}\,dr \) and \[ G=\left(\begin{array}{cccc}2 & 4 & 2 & 2 \\ 4 & 2 & 2 & 2\\ 2 & 2 & 0 & 1 \\ 2 & 2 & 1 & 0\end{array}\right).\] Finally, we remark that the kinetic and potential terms can be expressed in the form \[ \sqrt{\det(g)}T=\frac{\partial\vv{\alpha}}{\partial r}G\left(\frac{\partial\vv{\alpha}}{\partial r}\right)^T,\quad \sqrt{\det(g)}V=-\frac{\partial u}{\partial \vv{\alpha}}G^{-1}\left(\frac{\partial u}{\partial \vv{\alpha}}\right)^T.\] As a further specialisation, let us consider the nearly-K\"ahler case, in which \( a=a_3=\frac{t}{2\sqrt{3}} \) and \( b=b_3=\frac{t}6 \). Then the shape operator is proportional to the identity: \( L=t^{-1}I \), and the kinetic and potential terms are \[ T=\frac{5\sqrt{3}t^4}{324},\quad V=-\frac{5\sqrt{3}t^4}{324},\] respectively. So the total energy vanishes, \( T+V=0 \), for all \(t>0\). The superpotential is the fifth-order polynomial \[ u=\frac{13t^5}{216\sqrt{3}}.\] \paragraph{Uniqueness: flowing along a line.} In the case when \( (Q,P)\subset\mathcal H_0 \), the flow equations \eqref{eq:G2flow-matr} turn out to have a unique (admissible) solution for which \( Q \) belongs to a fixed one-dimensional subspace. \begin{proposition} \label{prop:NKunique} Assume \( t\mapsto(Q(t),P(t))\in \mathcal H_0 \) is a solution of \eqref{eq:G2flow-matr}.
Then \( Q \) belongs to a fixed \( 1 \)-dimensional subspace of \( S^2_0(\mathbb R^4) \) if and only if the associated \( \mathrm{G}_2 \)-metric is the cone metric over \( S^3\times S^3 \) endowed with its nearly-K\"ahler structure. \end{proposition} \begin{proof} It is easy to see that the solution of \eqref{eq:G2flow-matr} which corresponds to the cone metric over \( S^3\times S^3 \) (with its nearly-K\"ahler structure) is represented by \begin{equation} \begin{cases} \label{eq:NKcone-matr} (Q(t),P(t))=(q(t)\diag(-3,1,1,1),p(t)\diag(-3,1,1,1))\in\mathcal H_0,\\ (q(t),p(t))=-\frac{t^2}{6\sqrt{3}}(\frac{t}3,1). \end{cases} \end{equation} So, in this case, \( Q \) indeed belongs to a fixed \( 1 \)-dimensional subspace of \( S^2_0(\mathbb R^4) \). Conversely, let us assume we are given a solution such that \[ Q(t)=U(t)\diag\left(-1-b-c,b,c,1\right). \] Then the system \eqref{eq:G2flow-matr} reads: \begin{equation*} \begin{cases} \left(1 +b+ c - b^2 + c^2 + bc\right)uu'=\frac{b (-1 + c)^2 + b^2 (1 + c) - 3 c (1 + c)}{\sqrt{b c (1 + b + c)}}U,\\ \left(1 + b +c + b^2 - c^2 + b c\right)uu'=\frac{b^2 (-3 + c) + c (1 + c) + b (-3 - 2 c + c^2)}{\sqrt{b c (1 + b + c)}}U,\\ \left(-1 + b +c + b^2 + c^2 + b c\right)uu'=\frac{b + b^2 + c - 2 b c - 3 b^2 c + c^2 - 3 b c^2}{\sqrt{b c (1 + b + c)}}U. \end{cases} \end{equation*} These equations show that there is a purely algebraic constraint to having a solution: \begin{equation*} \begin{cases} 1 +b+c- b^2 + c^2 + bc=\frac{b (-1 + c)^2 + b^2 (1 + c) - 3 c (1 + c)}{\sqrt{b c (1 + b + c)}}\kappa,\\ 1 + b +c+ b^2 -c^2+ b c=\frac{b^2 (-3 + c) + c (1 + c) + b (-3 - 2 c + c^2)}{\sqrt{b c (1 + b + c)}}\kappa,\\ -1 + b+c + b^2 + c^2 + b c =\frac{b + b^2 + c - 2 b c - 3 b^2 c + c^2 - 3 b c^2}{\sqrt{b c (1 + b + c)}}\kappa, \end{cases} \end{equation*} where \( \kappa\in \mathbb R \).
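At this point one can already verify candidate solutions numerically; the following Python sketch checks the triples with \( \kappa\neq0 \) that appear next (note that for the triples with \( \kappa=0 \) the radicand \( bc(1+b+c) \) is negative):

```python
import math

def constraints(k, b, c):
    """Return the three residuals LHS - RHS of the algebraic constraint
    system; requires bc(1+b+c) > 0 for the square root to be real."""
    s = math.sqrt(b*c*(1 + b + c))
    r1 = (1 + b + c - b**2 + c**2 + b*c) \
        - k*(b*(-1 + c)**2 + b**2*(1 + c) - 3*c*(1 + c))/s
    r2 = (1 + b + c + b**2 - c**2 + b*c) \
        - k*(b**2*(-3 + c) + c*(1 + c) + b*(-3 - 2*c + c**2))/s
    r3 = (-1 + b + c + b**2 + c**2 + b*c) \
        - k*(b + b**2 + c - 2*b*c - 3*b**2*c + c**2 - 3*b*c**2)/s
    return r1, r2, r3

# The four solutions with kappa != 0 from the list in the proof:
for k, b, c in [(1/math.sqrt(3), -1/3, -1/3),
                (-math.sqrt(3), 1, -3),
                (-math.sqrt(3), -3, 1),
                (-math.sqrt(3), 1, 1)]:
    assert all(abs(r) < 1e-9 for r in constraints(k, b, c))
```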
Uniqueness of the ``nearly-K\"ahler cone'', as a flow solution, now follows by observing that these algebraic equations have the following set of solutions: \begin{equation*} \begin{gathered} (\kappa,b,c)=(0,-1,-1), (\kappa,b,c)=(0,1,-1), (\kappa,b,c)=(0,-1,1),\\ (\kappa,b,c)=(\frac1{\sqrt3},-\frac13,-\frac13), (\kappa,b,c)=(-\sqrt3,1,-3),(\kappa,b,c)=(-\sqrt3, -3,1),\\(\kappa,b,c)=(-\sqrt3, 1,1). \end{gathered} \end{equation*} The solutions with \( \kappa=0 \) are not ``admissible'' whilst the remaining solutions all result in one-parameter families of pairs equivalent to \eqref{eq:NKcone-matr}. \end{proof} \section{Numerical solutions} \label{sec:num} As indicated in the earlier parts of this paper, previous studies of \( \mathrm{G}_2 \)-metrics on \( M\times I \) have focused mainly on metrics with isometry group (at least) \(\mathrm{SU}(2)^2\times \Delta \mathrm{U}(1)\ltimes \mathbb{Z}\slash2 \). In addition, most of the attention has been centred around solutions in \( \mathcal H_c \) for \( c=(a,-a)\neq0 \). A technique that seems effective if one is specifically looking for complete metrics is to choose the initial values of the flow equations \eqref{eq:G2flow-matr} to obtain a singular orbit at that point (meaning, in our context, one whose stabilizer has positive dimension in \(\mathrm{SU}(2)^2\)). This approach was adopted in \cite{Reidegeld:Spin7,Cvetic-al:G2-Spin7} for \(\mathrm{Spin}(7)\) holonomy. However, this final section shifts the focus of our investigation in order to illustrate some more generic behaviour of the flow on the space of invariant half-flat structures on \( S^3\times S^3 \). \paragraph{Two-function ansatz.} We first look for solutions in \( \mathcal H_0 \) for which \( Q \) takes the form \[ Q(t)=\diag(-2U(t)-V(t),U(t),U(t),V(t)),\] where \( U,V \) are smooth functions on an interval \( I \subset\mathbb R \). A solution of \eqref{eq:G2flow-matr} is then uniquely specified by the quadruple \[ (U(0),V(0),U'(0),V'(0)). 
\] We have solved the system for a wide range of initial conditions. A selection of solutions is shown in Figure \ref{fig:G2sol2D}. Apart from the nearly-K\"ahler straight line, these solutions are new. Plotting the metric functions, we find that some of the new metrics have one stabilising direction when \( t \to\infty \) and no collapsing directions (they are therefore ABC metrics of the sort mentioned in connection with \eqref{eq:ABC}). The others have shrinking directions which cause the volume growth to slow down as shown in Figure \ref{fig:G2sol2Dvgrowth}. \bigbreak \begin{figure}[ht!] \begin{center} \subfigure[Solution curves with \( (U(0),V(0)) \) fixed.]{ \label{fig:G2sol2DqcurveA} \includegraphics[width=0.35\textwidth]{planar-solutionsA.pdf} } \qquad \subfigure[Solution curves with \( (U'(0),V'(0)) \) fixed.]{ \label{fig:G2sol2DqcurveB} \includegraphics[width=0.35\textwidth]{planar-solutionsB.pdf} }\\[10pt] \subfigure[Volume growth for selected solutions.]{ \label{fig:G2sol2Dvgrowth} \includegraphics[width=0.6\textwidth]{planar-solutionsA-volgrwth.pdf} } \end{center} \caption{A collection of ``planar solutions'' satisfying \( a=0=b \). The solution curves are given in terms of \( t\mapsto(U(t),V(t)) \) whilst the volume growth refers to \( t\mapsto\sqrt{-\lambda(t)} \).} \label{fig:G2sol2D} \end{figure} More precisely, in the case \( U(0)=V(0) \), the normalisation forces \( Q'(0) \), written as \( (x,y)=(U'(0),V'(0)) \), to lie on the curve \begin{equation} \label{eq:nromcurve} x(x+y)^2= -2\sqrt{3}, \end{equation} which has two branches separated by the line \( x+y=0 \). One branch corresponds to positive-definite metrics, including the nearly-K\"ahler solution \begin{equation} \label{eq:nkc} x=y=\nu,\quad\hbox{where}\quad\nu=-3^{1/6}/2^{1/3}=-0.953\ldots \end{equation} The ABC metrics are those for which \(\nu<x<0\), and appear on the top left of the nearly-K\"ahler line in Figure \ref{fig:G2sol2DqcurveA}, in green in the coloured version.
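The value of \( \nu \) is easy to confirm: with \( x=y=\nu \), \eqref{eq:nromcurve} reduces to \( 4\nu^3=-2\sqrt3 \), i.e.\ \( \nu^3=-\sqrt3/2 \). A two-line check in Python:

```python
import math

# Nearly-Kahler initial velocity nu = -3^(1/6)/2^(1/3)
nu = -3**(1/6) / 2**(1/3)

# x = y = nu lies on the normalisation curve x(x+y)^2 = -2*sqrt(3)
assert abs(nu * (nu + nu)**2 + 2*math.sqrt(3)) < 1e-12

# agrees with the numerical value quoted in the text
assert abs(nu - (-0.953)) < 1e-3
```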
When \( U(0)\neq V(0) \), the nearly-K\"ahler solution is excluded. Nevertheless, the overall picture remains valid, meaning one branch of the normalisation curve corresponds to positive-definite metrics, and this branch itself splits into two halves, one corresponding to ABC curves and the other to the remaining solutions. In the trace-free case, \( a=0=b \), all solutions degenerate at a point \( t_0 \). The ABC solutions are ``half complete'', meaning that away from the degeneration they are complete in one direction of time. (See \cite{Apostolov-S:G2,Chiossi-F:G2} for other examples of half-complete \( \mathrm{G}_2 \)-metrics). The other solutions reach another degeneracy point \( t_1 \) in finite time. The singularity at \( t_0 \) cannot be resolved. In particular, it is not possible to find complete \( \mathrm{G}_2 \)-metrics. One way to circumvent this issue is to consider flow solutions for which \( [\gamma]\neq0 \); solutions of this form include the metrics discovered by Brandhuber et al.\ \cite{Brandhuber-al:G2}. \paragraph{Three-function ansatz.} Now, turning to ``less symmetric'' \( \mathrm{G}_2 \)-metrics, we look for solutions in \( \mathcal H_0 \) with \( Q \) of the (generic) form: \[ Q(t)=\diag(-U(t)-V(t)-W(t),U(t),V(t),W(t)),\] where \( U,V,W \) are smooth functions on an interval \( I \subset\mathbb R \). A solution of \eqref{eq:G2flow-matr} is then uniquely specified by the sextuple \[ (U(0),V(0),W(0),U'(0),V'(0),W'(0)). \] As in the case of planar solutions, we have solved the flow equations for a large number of initial conditions. In contrast with the planar case, we have not been able to find metrics with a stabilising direction as \( t \to\pm\infty \). We shall confine our presentation to the class of solutions with the same initial point \[ (U(0),V(0),W(0))=(1,1,1) \] as the nearly-K\"ahler solution, but with varying velocity vector \begin{equation} \label{eq:initialvel3d} (x,y,z)=(U'(0),V'(0),W'(0)).
\end{equation} Similarly to the planar case, the flow lines are governed by the normalisation condition, and \eqref{eq:nromcurve} is replaced by the cubic surface \begin{equation} \label{eq:spaceconstraint} (x+y)(x+z)(y+z)=-4\sqrt3. \end{equation} The asymptotic planes corresponding to the vanishing of \( x+y,\,x+z,\,y+z \) separate the surface into four hyperboloid-shaped components, and only the one with all factors negative is relevant to our study of positive-definite metrics with holonomy \( \mathrm{G}_2 \). The nearly-K\"ahler solution \(x=y=z=\nu\) (cf.~\eqref{eq:nkc}) corresponds to its centre point. \bigbreak \begin{figure}[ht!] \begin{center} \subfigure[Side view with diagonal nearly-K\"ahler line.]{ \label{fig:G2sol3DqcurveA} \includegraphics[width=0.8\textwidth]{umbS.pdf} }\\[10pt] \subfigure[Looking down the line.]{ \label{fig:G2sol3DqcurveB} \includegraphics[width=0.54\textwidth]{umbF.pdf} }\kern-5pt \subfigure[Planar ABC solutions.]{ \label{fig:G2sol3DqcurveC} \includegraphics[width=0.45\textwidth]{umbP.pdf} } \end{center} \caption{Families of space curve solutions satisfying \( a=0=b \). The solution curves are given in terms of \( t\mapsto(U(t),V(t),W(t)) \).} \label{fig:G2sol3D} \end{figure} Families of solutions are shown in Figure~\ref{fig:G2sol3D} which, like those in Figure~\ref{fig:G2sol2D}, were plotted using \textsl{Mathematica} and the command \textsl{NDSolve}. To obtain the curves, it was convenient to further reduce attention to the case in which \( x,y,z \) are all negative. The corresponding subset of \eqref{eq:spaceconstraint} is now a curved triangle \( \mathscr{T} \) with truncated vertices. By issuing a plotting command for \( \mathscr T \), we obtained an abundant sample of mesh points to feed into \eqref{eq:initialvel3d} as initial values. One can then regard each curve as the continuing trajectory of a particle launched towards a point of \( \mathscr T \), which sits close to the apex of Figure~\ref{fig:G2sol3DqcurveA}.
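The centre point is again easy to confirm: with \( x=y=z=\nu \) each factor of \eqref{eq:spaceconstraint} equals \( 2\nu \), and \( 8\nu^3=-4\sqrt3 \). The Python sketch below also illustrates how mesh points on the surface can be generated by solving \eqref{eq:spaceconstraint} for \( z \) given \( x,y \); the root-selection rule (keeping all three factors negative) is our own choice:

```python
import math

C = -4 * math.sqrt(3)
nu = -3**(1/6) / 2**(1/3)

# Centre point: x = y = z = nu gives (2 nu)^3 = 8 nu^3 = -4 sqrt(3).
assert abs((nu + nu) * (nu + nu) * (nu + nu) - C) < 1e-12

def z_on_surface(x, y):
    """Solve (x+y)(x+z)(y+z) = -4*sqrt(3) for z, picking the root with
    x+z < 0 and y+z < 0 (the component of positive-definite metrics)."""
    s = x + y                      # assumed nonzero (and negative)
    # expanding: z^2 + (x+y) z + x y - C/(x+y) = 0
    disc = s*s - 4*(x*y - C/s)
    for z in ((-s + math.sqrt(disc))/2, (-s - math.sqrt(disc))/2):
        if x + z < 0 and y + z < 0:
            return z
    raise ValueError("no admissible root")

# Consistency: at x = y = nu the admissible root is z = nu.
assert abs(z_on_surface(nu, nu) - nu) < 1e-9
```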
\smallbreak All the solutions, apart from the central nearly-K\"ahler one, are new. They tend to have shrinking directions, causing the volume growth to slow down. The \( 5250 \) solution curves in Figure~\ref{fig:G2sol3DqcurveA} are plotted for the range \( -0.97\leqslant t \leqslant 0 \) since many develop singularities close to \( t=-1 \) (and close to \( t=0.2 \) though positive \( t \) is not shown). In the coloured ``cocktail umbrella'' picture, they are separated into groups distinguished by the value of \( x^2+y^2+z^2 \) at the initial condition, with the nearly-K\"ahler line \(x=y=z\) and its close neighbours in red. Solutions resulting from one of the coordinates being positive can be short-lived in comparison to the others, leading to less coherent plots, and this is why they are absent. The view looking down the nearly-K\"ahler line from a point \( (u,u,u)\) with \( u\gg1 \) is shown in Figure~\ref{fig:G2sol3DqcurveB}. The \( \mathbb{Z}\slash{3\mathbb{Z}} \) symmetry obtained by permuting the coordinates is evident. The splitting behaviour at the three ``ends'' is to some extent artificial, reflecting as it does the truncation that has resulted from our decision to restrict attention to the negative octant. The ABC two-function solutions of Figure~\ref{fig:G2sol2DqcurveA} in the previous subsection arise when two of \( x,y,z \) coincide and assume a common value greater than \(\nu\). The projection of these planar curves orthogonal to the nearly-K\"ahler line can be seen in Figure~\ref{fig:G2sol3DqcurveC}. Computations confirm that, unlike the generic curves of Figure~\ref{fig:G2sol3DqcurveB} emanating from \((1,1,1)\), these can be extended as \(t\to-\infty\). \smallbreak In addition to the solutions in \( \mathcal H_0=\mathcal H_{(0,0)} \), we have investigated solutions in \( \mathcal H_{(1,-1)} \).
Regarding the asymptotic behaviour of the associated \( \mathrm{G}_2 \)-metrics, the overall picture appears not dissimilar to the one we have described by deforming the nearly-K\"ahler velocity. Taking account also of the numerical analysis in \cite{Cvetic-al:G2-Spin7}, we conjecture that the only solutions that can be extended as \( t\to-\infty \) or \(t\to\infty\) lie in a plane. \paragraph*{Acknowledgements.} Both authors thank Mark Haskins for discussions that helped initiate this research, and in particular for bringing \cite{Hengesbach:phd} to their attention. The first author gratefully acknowledges financial support from the \textsc{Danish Council for Independent Research, Natural Sciences}.
\section{Introduction} \label{sec:intro} \IEEEPARstart{F}{ace} video inpainting aims at restoring corrupted or occluded regions of faces in videos. It is an important research topic in computer vision and has many practical applications such as video overlay removal~\cite{3dconv-vd} and partially occluded face recognition in surveillance videos~\cite{mathai2019does}. Note that faces in videos often exhibit diverse poses and expressions. This makes face video inpainting a challenging task. \import{}{fig-intro.tex} Correspondences between frames serve as crucial clues in video inpainting for retrieving missing information from neighboring frames and ensuring temporal consistency. Existing video inpainting methods mainly focus on restoring the backgrounds of natural scenes which are mostly stationary and consist of repetitive patterns. They typically fill missing regions by copying and propagating similar patterns or textures from other regions \cite{tempo-coherent-vd, newson2014video, 7112116, 4060949}. However, directly referring to other frames often results in improper content when elements in a video move around and change their appearances. Hence, these methods are only capable of tackling narrow masks and static backgrounds. Recently, a number of learning-based methods have been proposed \cite{3dconv-vd, 3dconv-2-vd, frame-recurrent-vd, deep-vd-inpaint, copy-paste-vd, short-long-vd, vd-temp-spatial, flow-vd}. These methods successfully learn domain knowledge from an enormous number of training samples and can generate proper content for large missing regions. Most of them are based on spatio-temporal attention or assisted by optical flow to learn the correspondences across frames, and are suitable for natural scenes with simple motions. For face videos, however, the appearance of a face can vary greatly under different poses and expressions.
These methods have difficulties in finding proper references in neighboring frames to restore reasonable content for faces. They often fail to generate visually plausible face structures when no reference can be found in neighboring frames due to their lack of face prior knowledge. Hence, they cannot guarantee recovering proper faces in the videos. Owing to prior knowledge of 3D face structure, humans can interpret, recognize, or even ``reconstruct'' a corrupted face image with relative ease. For instance, humans can recognize faces in low quality videos under diverse viewpoints and partial occlusions, as well as under different face poses and expressions. Inspired by this, we propose to exploit a 3D face prior for face video inpainting. In this paper, we employ an expressive 3D face model as our 3D face prior. By fitting this 3D face model to the video frames, we can transform the face from the image space to the UV (texture) space and vice versa. Note that faces in the UV space represent unwarped face textures which are well aligned. This helps to remove the influence of poses and expressions, and makes the learning of face structure much easier. Besides, the good alignment and symmetry of the face features in the UV space also make it easy to locate correspondences in neighboring frames which provide rich information for face video inpainting. Based on these observations, we propose to carry out face video inpainting in the UV space (see~Fig.~\ref{fig-intro}). We introduce a Multi-reference UV-map Completion Network\xspace (MUC-Net\xspace) with a novel Frame-wise Attention\xspace (FA\xspace) module to perform reference-based face completion in the UV space. Our proposed method is a two-stage approach. As a pre-processing step, we fit the 3D Morphable Model (3DMM) \cite{3dmm} to every frame of the face video. We use the estimated model parameters to transform the face between the image space and the UV space in the two core stages.
In Stage~I, namely the UV-map completion stage, we first transform the face to the UV space and carry out UV-map completion using our proposed MUC-Net\xspace. Our FA\xspace module is designed specifically to take full advantage of the well-aligned face features in the UV space to find proper correspondences in neighboring frames in an efficient and effective manner. In Stage~II, namely the face video refinement stage, we transform the inpainted UV-map back to the image space and perform face video refinement using our proposed Face Video Refinement Network\xspace (FVR-Net\xspace). FVR-Net\xspace inpaints any background regions not covered in Stage~I and at the same time refines the inpainted face regions. In contrast to other methods, our method ensures the plausibility of face structure through the use of a 3D face prior. Our method is more robust for faces under large pose and expression variations, and can better exploit correspondences in neighboring frames. Our key contributions include: \renewcommand{\labelitemi}{$\bullet$} \begin{itemize} \item To the best of our knowledge, we are the first to perform face video inpainting via the UV space. Thanks to the good alignment and symmetry of the face features in the UV space, our MUC-Net\xspace can robustly restore the missing face regions with plausible face structures and textures. \item We propose a novel Frame-wise Attention\xspace (FA\xspace) module that can take full advantage of the well-aligned face features in the UV space to find proper correspondences efficiently in neighboring frames to assist face inpainting. \item Our method achieves state-of-the-art performance in face video inpainting, especially for the challenging cases with large face pose and expression variations. Comprehensive experiments demonstrate the effectiveness and robustness of our method.
\end{itemize} \section{Related Work} \subsection{Face Image Inpainting} Traditional image inpainting methods fill the missing regions by progressively propagating pixels from the neighboring regions \cite{bertalmio2000image-img,ballester2001filling, structure-texture-2003-img} or by iteratively searching for matching patches \cite{criminisi2004region, patchmatch-2009-img, exemplar-2003, efros2001image, hays2007scene}. % These conventional methods are capable of handling cases with stationary textures and relatively small holes, but fail when textures and structures are non-repetitive. Some studies~\cite{hwang2003reconstruction,face-inpainting-2004-img} exploit specific face domain knowledge for face image inpainting. Constrained by the representation ability of their models, however, they can only restore specific face regions in frontal faces. Recently, learning-based methods \cite{global-local-inpaint-img, context-encoder-img, yang2017high} have been proposed to perform inpainting by learning from large image datasets. These methods are more robust and expressive than the traditional non-learning-based methods. Some of them focus on improving the structure and texture coherence by introducing additional guidance such as edges~\cite{edgeconnect-img}, segmentation~\cite{segmentation-img}, structure flow~\cite{structureflow-img}, and foreground contour~\cite{foreground-img}. Others \cite{pyramid-img, liu2019coherent, sagong2019pepsi} explore feature matching between masked regions and known regions in the semantic space by proposing new modules such as contextual attention~\cite{generative-inpaint-img}. There are also inpainting works that focus on progressively filling in the holes from the boundary regions and treating masked regions (usually input as 0 value) and non-masked regions separately, such as partial convolution~\cite{pconv-img} and gated convolution~\cite{gated-img}. 
Partial convolution~\cite{pconv-img} updates a binary-valued mask according to a fixed rule, while gated convolution~\cite{gated-img} adopts a learnable soft-valued mask. In response to the needs of face-related applications, a number of studies \cite{lahiri2018improved-img-face, hwang2003reconstruction, face-inpainting-2004-img} have been proposed specifically for the face inpainting task. To stabilize the generated face structures, some of them introduce additional guidance such as landmarks~\cite{lafin-img-face,zhang2020domain-img-face} and face parsing~\cite{li2017generative, song2019geometry-img-face} into the pipeline to serve as intermediate outputs or loss terms. Others propose to make use of additional inputs such as reference images~\cite{identity-bmvc-2018-img-face} from the same person to preserve identities, or colorized sketches~\cite{sc-fegan-img-face,faceshop} to perform interactive face editing (modifying the shape/color of the given face). Li \emph{et al}\onedot~\cite{li2020learning-img-face} propose to utilize face symmetry by performing illumination-aware feature warping from flipped images. However, a human face may not look strictly symmetric under large pose variations. Based on our observations, most face image inpainting solutions perform unsatisfactorily under large pose or expression variations. This is due to their lack of 3D face priors to help understand and restore face structures from 2D images. Furthermore, these image-based methods can only achieve sub-optimal results in face video inpainting as they do not exploit information provided by correspondences in neighboring frames. \import{}{fig-framework.tex} \subsection{Video Inpainting}\label{video-related} Different from image inpainting, video inpainting takes a sequence of frames as input and restores the missing regions based on both spatial and temporal information.
Compared with image-based methods, video inpainting methods explore correspondences between frames as crucial clues to retrieve missing information from neighboring frames to ensure temporal consistency. Traditional video inpainting methods \cite{tempo-coherent-vd, newson2014video, patwardhan2005video-vd-trad, wexler2007space-vd-trad} typically perform patch-based or optical-flow-based optimizations which require heavy computations. They are capable of generating plausible content for general videos consisting of stationary backgrounds with repetitive patterns and consistent textures. However, they may fail miserably when structures and textures are complicated and their appearances vary greatly across frames. Boosted by deep learning techniques, learning-based methods have been proposed to explore solutions for better utilizing spatial and temporal information by introducing flow-warping \cite{flow-vd, gao2020flow-vd, copy-paste-vd, deep-vd-inpaint, frame-recurrent-vd, zou2021progressive}, cross-frame attention~\cite{short-long-vd, onion-vd}, and 3D convolution \cite{vd-temp-spatial, 3dconv-vd, 3dconv-2-vd}. Optical flow is often adopted as an intermediate guidance \cite{flow-vd, gao2020flow-vd} or used to warp frames into alignment \cite{zou2021progressive}. This facilitates the calculation of warping loss \cite{deep-vd-inpaint}. The above methods mainly retrieve correspondences by searching for similar patches or making the patterns aligned based on flow-warping. However, a human face can appear very different under large pose and expression variations. This makes it more difficult to find proper references from neighboring frames due to the large appearance variations. There is also a video inpainting work \cite{img2vd-vd-face} focusing on face re-identification. They target the restoration of de-identified face videos, with the original landmarks given as input.
The mask is designed to cover all the key face components for all the input frames while the background is preserved. Under this setting, no reference is available from other frames to recover the missing regions. They instead focus on predicting a consistent identity for all the frames from the given landmarks. In this paper, we aim at efficiently retrieving proper correspondences from neighboring frames for face video inpainting. We exploit an effective way to transform face textures into a well-aligned space which greatly facilitates both correspondence learning and feature restoration. \subsection{3D Face Prior} Human domain knowledge has become a powerful tool in numerous tasks owing to the learnable human prior (e.g., body structure~\cite{wu2019deep} and face prior~\cite{de-occlu-3dmm-img-face}). In this paper, we focus on face prior assisted face video inpainting. Commonly used face priors include face parsing, face landmarks, and face models \cite{Zeng_2019_ICCV, chaudhuri2020personalized}. In particular, the 3D face morphable model (3DMM) \cite{3dmm, accu-face-recons-19, egger20203d, sariyanidi2020inequality, lin2020towards, egger20203d} has achieved stable and excellent performance in face reconstruction, and has been widely adopted in face-related works such as face recognition\cite{uv-gan}, face frontalization~\cite{gecer2021ostec}, face editing \cite{cao2020task}, makeup transfer\cite{nguyen2021lipstick}, face reenactment\cite{xu2020deep}, face super-resolution~\cite{hu2020face}, face deblurring~\cite{ren2019face}, and animation \cite{lee1997model}. The impressive results of these works clearly demonstrate the advantages of embedding 3D face priors into face-related tasks. Among works that utilize a 3D face prior, UV-GAN~\cite{uv-gan} is closely related to our work. UV-GAN also utilizes a face model and UV maps to recover face regions. However, their motivation and contributions are different from ours.
UV-GAN is proposed to reconstruct face models and synthesize novel views to enlarge the diversity of poses for pose-invariant face recognition. They exploit UV textures and leverage the symmetry of the face to recover self-occluded regions in the fitted model. They only deal with single images. In this paper, we target robust face video inpainting by making use of a 3D face prior to facilitate both face structure learning and correspondence finding from the well-aligned feature maps in the UV space. \section{Method} \subsection{Overview} As briefly introduced in Sec.~\ref{sec:intro}, our method is a two-stage approach. Fig.~\ref{fig-framework} shows an overview of our proposed pipeline. Given a face video, we first fit 3DMM to every frame to obtain per-frame shape, texture, and pose parameters. The shape and pose parameters are used for transforming the face between the image space and UV space, while the texture parameters are used to generate synthesized texture to provide auxiliary information for the inpainting task. In Stage~I, we first transform the face from the image space to the well-aligned UV space and use our proposed MUC-Net\xspace to perform UV-map completion. The FA\xspace module is proposed for MUC-Net\xspace to facilitate the correspondence retrieval across UV texture frames. In Stage~II, we transform the completed UV-map in Stage~I back to the image space, and use our proposed FVR-Net\xspace to inpaint any background (non-face) regions not covered in Stage~I as well as refine the inpainted face regions. In Sec.~\ref{sec:3DMM}, we will first give a brief review of 3DMM which serves as our 3D face prior and facilitates the transformation of faces between the image space and the UV space. We will then describe the details of our MUC-Net\xspace and FA\xspace module for UV-map completion in Sec.~\ref{sec:Stage-I}. Details of face video refinement using our FVR-Net\xspace will be covered in Sec.~\ref{sec:Stage-II}.
\subsection{3D Face Prior}\label{sec:3DMM} \subsubsection{Face Reconstruction} In this work, we employ 3DMM \cite{3dmm} as our 3D face prior. We adopt the method described by Deng \emph{et al}\onedot~\cite{accu-face-recons-19} to fit 3DMM to the video frames using a modified ResNet-50 network \cite{resnet}. We retrain the network with masked face images as input, and the output is a combined vector $(\boldsymbol{\alpha}, \boldsymbol{\beta}, \boldsymbol{\delta}, \boldsymbol{\gamma}, \boldsymbol{p}) \in \mathbb{R}^{257}$, where $\boldsymbol{\alpha} \in \mathbb{R}^{80}$, $\boldsymbol{\beta} \in \mathbb{R}^{64}$, $\boldsymbol{\delta} \in \mathbb{R}^{80}$, $\boldsymbol{\gamma} \in \mathbb{R}^{27}$, and $\boldsymbol{p} \in \mathbb{R}^{6}$ represent face identity, expression, texture, illumination, and pose respectively. Concretely, the pose vector $\boldsymbol{p}$ is composed of a rotation vector\footnote{Euler angles \textit{yaw}, \textit{pitch}, and \textit{roll} for constructing the rotation matrix $\mathbf{R} \in \mathrm{SO}(3)$} $\boldsymbol{r} \in \mathbb{R}^{3}$ and a translation vector $\mathbf{t} \in \mathbb{R}^{3}$. With the predicted parameters, the shape $\mathbf{S}$ and texture $\mathbf{T}$ of the 3D face can be modeled as: \begin{equation}\label{eq:3dmm-para} \begin{split} \mathbf{S}&=\bar{\mathbf{S}}+\mathbf{B}_{id} \boldsymbol{\alpha}+\mathbf{B}_{exp} \boldsymbol{\beta}, \\ \mathbf{T}&=\bar{\mathbf{T}}+\mathbf{B}_{tex} \boldsymbol{\delta}, \end{split} \end{equation} where $\bar{\mathbf{S}}$ and $\bar{\mathbf{T}}$ denote the mean shape and texture, $\mathbf{B}_{id}$, $\mathbf{B}_{exp}$, and $\mathbf{B}_{tex}$ are the PCA bases for face identity, expression, and texture respectively. Similar to Deng \emph{et al}\onedot~\cite{accu-face-recons-19}, we adopt $\bar{\mathbf{S}}$, $\bar{\mathbf{T}}$, $\mathbf{B}_{id}$, and $\mathbf{B}_{tex}$ from BFM~\cite{bfm}, and $\mathbf{B}_{exp}$ built from FaceWarehouse~\cite{faceware}. 
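The model \eqref{eq:3dmm-para} consists of two affine maps; the following numpy sketch uses random placeholders for the bases (the real \( \bar{\mathbf{S}} \), \( \mathbf{B}_{id} \), \( \mathbf{B}_{tex} \) come from BFM and \( \mathbf{B}_{exp} \) from FaceWarehouse; the vertex count here is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                      # illustrative vertex count (not the BFM value)

# Random stand-ins for the mean shape/texture and the PCA bases; the
# coefficient dimensions (80, 64, 80) follow the parameter vector above.
S_bar = rng.standard_normal(3 * N)
T_bar = rng.standard_normal(3 * N)
B_id  = rng.standard_normal((3 * N, 80))
B_exp = rng.standard_normal((3 * N, 64))
B_tex = rng.standard_normal((3 * N, 80))

def face_3dmm(alpha, beta, delta):
    """S = S_bar + B_id @ alpha + B_exp @ beta,  T = T_bar + B_tex @ delta."""
    S = S_bar + B_id @ alpha + B_exp @ beta
    T = T_bar + B_tex @ delta
    return S.reshape(N, 3), T.reshape(N, 3)

S, T = face_3dmm(rng.standard_normal(80), rng.standard_normal(64),
                 rng.standard_normal(80))
assert S.shape == (N, 3) and T.shape == (N, 3)
```

Setting all coefficients to zero recovers the mean face, which is a convenient sanity check when wiring up a fitted model.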
\subsubsection{Texture Sampling and UV Mapping} Given the predicted shape $(\boldsymbol{\alpha}, \boldsymbol{\beta})$ and pose $(\boldsymbol{r}, \boldsymbol{t})$ parameters, we can transform a face from the image space to the UV space through texture sampling and UV mapping. We first project the 3D face model onto the image using the pose parameters and perform bilinear sampling to compute per-vertex texture for the 3D face model. For self-occluded and back-facing vertices, as well as vertices projected onto the masked regions, we simply assign zero to their texture values. Finally, we carry out UV mapping to transform the 3D face model texture to the UV space. For the rest of this paper, we denote a corrupted input frame and its ground truth as $\mathbf{I}_{in}$ and $\mathbf{I}_{gt}$ respectively, and their UV-maps as $\mathbf{U}_{in}$ and $\mathbf{U}_{gt}$ respectively. We represent the missing regions in the image space using a 2D binary mask $\mathbf{I}_{m}$, and denote its UV-map as $\mathbf{U}_{m}$. Similarly, we represent the valid projection of the 3D face model in the image space using a 2D binary mask $\mathbf{I}_{v}$, and denote its UV-map as $\mathbf{U}_{v}$ (see Fig.~\ref{fig-framework}). We also map the synthesized texture $\mathbf{T}$ to the UV space and denote it as $\mathbf{U}_{t}$. \import{}{fig-uvattn.tex} \subsection{Stage~I: UV-map Completion}\label{sec:Stage-I} We first transform the face from the image space to the UV space and carry out UV-map completion. As mentioned previously, the UV maps of a face represent unwarped face textures which are well aligned and largely invariant to face poses and expressions. This greatly facilitates the learning of face structures and the finding of correspondences in neighboring frames. 
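The per-vertex texture sampling described above can be sketched as follows. This is a minimal illustration on toy data: bilinear sampling of colours at projected vertex positions, with invisible (self-occluded, back-facing, or masked) vertices zero-filled. The 3D projection and the UV rasterisation steps themselves are omitted, and all names are illustrative rather than taken from our implementation.

```python
import numpy as np

def bilinear_sample(image, xy):
    """Sample colours from `image` (H, W, C) at continuous pixel
    coordinates `xy` (N, 2), ordered (x, y), by bilinear interpolation."""
    H, W, _ = image.shape
    x, y = xy[:, 0], xy[:, 1]
    x0 = np.clip(np.floor(x).astype(int), 0, W - 1)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    y1 = np.clip(y0 + 1, 0, H - 1)
    wx = (x - x0)[:, None]
    wy = (y - y0)[:, None]
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy

# Toy frame whose red channel equals the x coordinate.
H = W = 4
img = np.zeros((H, W, 3))
img[..., 0] = np.arange(W)[None, :]

# Projected vertex positions and a per-vertex visibility flag: invisible
# vertices simply get zero texture, as in the paper.
verts_xy = np.array([[1.5, 0.0], [2.0, 2.0], [0.5, 3.0]])
visible = np.array([True, True, False])

tex = bilinear_sample(img, verts_xy)
tex[~visible] = 0.0
```

The resulting per-vertex colours would then be scattered into the UV atlas to form $\mathbf{U}_{in}$, with the visibility flags forming $\mathbf{U}_{v}$.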
\subsubsection{Multi-reference UV-map Completion Network\xspace (MUC-Net\xspace)} We adopt an encoder-decoder network equipped with gated convolutions \cite{gated-img} as the backbone of our MUC-Net\xspace (network details can be found in the supplementary material). We concatenate each frame $\mathbf{U}_{in}^{i}$ with its flipped UV-map $\hat{\mathbf{U}}_{in}^i$, synthesized texture map $\mathbf{U}_{t}^i$, valid face projection $\mathbf{U}_{v}^i$, and missing regions $\mathbf{U}_{m}^i$, and feed them to the encoder to generate the feature map $\mathbf{F}^i$: \begin{equation} \mathbf{F}^i = En(\mathbf{U}_{in}^i, \hat{\mathbf{U}}_{in}^i, \mathbf{U}_{t}^i, \mathbf{U}_{v}^i, \mathbf{U}_{m}^i). \end{equation} The flipped UV map $\hat{\mathbf{U}}_{in}$ exploits symmetry to provide auxiliary information when only parts of the symmetrical face features are being masked, whereas the synthesized texture map $\mathbf{U}_{t}$ helps to provide auxiliary information when symmetrical face features are being completely masked. To exploit information provided by correspondences in neighboring frames, we propose a Frame-wise Attention\xspace (FA\xspace) module to fuse features from neighboring frames. Specifically, for each {\em target frame}, we select $n$ other frames as its {\em reference frames} and fuse their features using the FA\xspace module: \begin{equation} \mathbf{Z}^i = Attn(\mathbf{F}^i, \{\mathbf{F}^{i+j}~|~j \in \Omega\}), \end{equation} where $\Omega$ is the set of offset indices for the reference frames. In our experiments, we take $\Omega=\{-2,-1,+1,+2\}$. Finally, the fused feature map $\mathbf{Z}^i$ is fed to the decoder to generate the completed UV map $\mathbf{U}_{out}^i$: \begin{equation} \mathbf{U}_{out}^i = De(\mathbf{Z}^i). 
\end{equation} \subsubsection{Frame-wise Attention\xspace} Inspired by the recently proposed attention mechanism \cite{non-local}, we design a frame-wise attention block to explore correspondences between a target frame and its reference frames. Thanks to the good alignment of the face features in the UV space, we can limit our search for correspondences to a small local window. Concretely, we pick {\it query} points from the masked regions in the feature map of the target frame. For each {\it query}, we define an $s\times s$ small window (we set $s = 3$ in our experiments) centered at the {\it query} for selecting its {\it reference} points from the feature maps of the reference frames (see Fig.~\ref{fig-uvattn}). This small window design is employed to account for any slight misalignment of the UV maps. Given the query $\mathbf{q} \in \mathbb{R}^C$ evaluated at the query point, and the keys $\mathbf{K} \in \mathbb{R}^{C \times (s^2 \times n)}$ %
and values $\mathbf{V} \in \mathbb{R}^{C \times (s^2 \times n)}$ evaluated at the reference points, frame-wise attention is accomplished by \begin{align} \begin{split} \bm{\alpha}&= \frac{\exp\left(\mathbf{K}^{\rm T} \mathbf{q}\right)}{\sum_{m=1}^{N}\exp\left(\mathbf{K}_{m}^{\rm T} \mathbf{q}\right)}, \\ \mathbf{z}&= \mathbf{f} + W_{z}(\mathbf{V}\bm{\alpha}), \end{split} \end{align} where $N\!=\!s^2 \!\times\!n$ gives the total number of reference points; $\bm{\alpha}$, $\mathbf{f}$, $\mathbf{z}$, and $W_{z}$ denote the attention vector, input feature vector, output feature vector, and output embedding layer respectively. Compared with previous works such as STTN \cite{sttn}, which uses spatial-temporal non-local attention to find correspondences across different frames, our design dramatically cuts down unnecessary computation and greatly reduces the time complexity. \subsubsection{Loss Functions} We use $\mathcal{L}_{1}$ loss and SSIM loss \cite{ssim} for both UV maps and back-projected faces to train MUC-Net\xspace.
$\mathcal{L}_{1}$ loss aims at minimizing the distance between the ground-truth and predicted UV maps, whereas SSIM loss is adopted for maximizing structural similarity. The loss for the UV map is computed as \begin{align} \begin{split} \mathcal{L}_{U}&= \mathcal{L}^{U}_{1} + \mathcal{L}^{U}_{\mathit{SSIM}},\\ \mathcal{L}^{U}_{1}&= \left\|\mathbf{U}_{v} \circ \left(\mathbf{U}_{out} - \mathbf{U}_{gt}\right) \right\|_{1} , \\ \mathcal{L}^{U}_{\mathit{SSIM}}&= - \frac{\left(2 \mu_{\mathbf{U}_{\mathit{out}}} \mu_{\mathbf{U}_{\mathit{gt}}}+c_{1}\right)\! \left(2 \sigma_{\mathbf{U}_{\mathit{out}}} \sigma_{\mathbf{U}_{\mathit{gt}}}+c_{2}\right)} {\left(\mu_{\mathbf{U}_{\mathit{out}}}^{2} \! +\!\mu_{\mathbf{U}_{\mathit{gt}}}^{2} \! +\!c_{1}\right)\! \left(\sigma_{\mathbf{U}_{\mathit{out}}}^{2} \! +\!\sigma_{\mathbf{U}_{\mathit{gt}}}^{2} \! +\!c_{2}\right)} , \end{split} \end{align} where $\circ$ denotes element-wise multiplication, $\mu$ and $\sigma$ denote the mean and variance, $c_1$ and $c_2$ are stabilization constants set to $0.01^2$ and $0.03^2$ according to \cite{ssim}. Similarly, we define $\mathcal{L}^{I}_{1}$, $\mathcal{L}^{I}_{\mathit{SSIM}}$, and $\mathcal{L}_{I}$ for the back-projected face $\mathbf{I}_{bp}$ in the image space. The overall loss for Stage~I is given by \begin{align}\label{eq:uv-recons} \begin{split} \mathcal{L}_{\rm (I)} &= \lambda_{\alpha} \cdot \mathcal{L}_{U} + \lambda_{\beta} \cdot \mathcal{L}_{I}, \end{split} \end{align} where the weights $\lambda_{\alpha}$ and $\lambda_{\beta}$ are empirically set to 1.0 and 2.0 respectively. \import{}{vd_compare.tex} \import{}{fig-vd-sota-all.tex} \subsection{Stage~II: Face Video Refinement}\label{sec:Stage-II} We transform the output from MUC-Net\xspace back to the image space by rendering the 3D face model with the predicted UV map, and denote this back-projected face as $\mathbf{I}_{bp}$.
We then perform face video refinement to inpaint any background (non-face) regions not covered in Stage~I as well as to refine and fuse the inpainted face regions with the input frame. \subsubsection{Face Video Refinement Network\xspace (FVR-Net\xspace)} Similar to MUC-Net\xspace, we adopt an encoder-decoder network as the backbone for our FVR-Net\xspace (network details can be found in the supplementary material). We concatenate each frame $\mathbf{I}_{in}^i$ with its masked back-projected face $\mathbf{I}_{mbp}^i = \mathbf{I}_m^i \circ \mathbf{I}_{bp}^i$ and missing regions $\mathbf{I}_m^i$, and feed them to the encoder to generate the feature map $\tilde{\mathbf{F}}^i$. A Mask-wise Attention (MA) module is proposed to fuse features from non-masked regions in neighboring frames. The MA block is similar to the FA\xspace block, but with the reference points taken from the non-masked regions of both the target and reference frames. The fused feature map is fed to the decoder to generate the predicted image $\mathbf{I}_{out}$. The final output $\mathbf{I}_{c}^i$ is then obtained by \begin{align} \begin{split} \mathbf{I}_{c}^i = \mathbf{I}_{m}^i \circ \mathbf{I}_{out}^i + (1 - \mathbf{I}_{m}^i) \circ \mathbf{I}_{in}^i. \end{split} \end{align} \subsubsection{Loss Functions} Similar to Stage~I, we adopt $\mathcal{L}^{I}_{\mathit{SSIM}}$ and a slightly modified version of $\mathcal{L}_{1}^{I}$ to train FVR-Net\xspace. In addition, we also use a perceptual loss $\mathcal{L}_{per}^{I}$ to minimize the distance in the semantic feature space. The overall loss for Stage~II is given by \begin{align} \begin{split} \mathcal{L}_{\rm (II)}&= \mathcal{L}_{1+}^{I} + \mathcal{L}_{\mathit{SSIM}}^{I} + 0.1 \!\cdot\! \mathcal{L}_{per}^{I}, \end{split} \end{align} where \begin{align} \begin{split} \mathcal{L}^{I}_{1+}&= \left\|\mathbf{I}_{\mathit{out}}\! - \mathbf{I}_{\mathit{gt}} \right\|_{1} + 2\cdot \left\|\mathbf{I}_{m} \circ \left(\mathbf{I}_{\mathit{out}} \!
- \mathbf{I}_{\mathit{gt}}\right) \right\|_{1}, \\ \mathcal{L}^{I}_{\mathit{SSIM}}&= - \frac{\left(2 \mu_{\mathbf{I}_{\mathit{out}}} \mu_{\mathbf{I}_{\mathit{gt}}}+c_{1}\right)\! \left(2 \sigma_{\mathbf{I}_{\mathit{out}}} \sigma_{\mathbf{I}_{\mathit{gt}}}+c_{2}\right)} {\left(\mu_{\mathbf{I}_{\mathit{out}}}^{2} \! +\!\mu_{\mathbf{I}_{\mathit{gt}}}^{2} \! +\!c_{1}\right)\! \left(\sigma_{\mathbf{I}_{\mathit{out}}}^{2} \! +\!\sigma_{\mathbf{I}_{\mathit{gt}}}^{2} \! +\!c_{2}\right)},\\ \mathcal{L}^{I}_{per}&= \frac{1}{C_k H_k W_k} \left\|\phi_{k}\left(\mathbf{I}_{\mathit{out}}\right) - \phi_{k}\left(\mathbf{I}_{\mathit{gt}}\right)\right\|_{2}^{2}, \end{split} \end{align} where $\phi_{k}$ is the $k$-th layer output of a pretrained VGG-16 network \cite{vgg}, $C_k$, $H_k$, and $W_k$ denote the channel number, height, and width of the $k$-th layer output respectively. \section{Experiments} \subsection{Implementation Details} \subsubsection{Dataset} We use the 300VW~\cite{300vw} dataset for our experiments. The 300VW dataset contains 114 face videos with diverse face poses and expressions. We excluded low-quality videos and selected 75 videos for training and 20 for evaluation. \subsubsection{Inpainting Settings} We followed the pre-processing described by Deng \emph{et al}\onedot~\cite{accu-face-recons-19} to crop and resize the face regions. The image size adopted for face videos is $224\times 224$ and the UV maps have a dimension of $256\times 256$. To verify our contribution in handling large pose variations, we extracted every $10$-th frame from the original face videos as our test sequences. \subsubsection{Mask Settings} We consider two kinds of masks for evaluation:%
\begin{itemize} \renewcommand\labelitemi{--} \item \textbf{Shifting masks} are generated with slightly altered shapes and quick motions across frames, which mimic non-stationary occlusions in face videos.
\item \textbf{Static masks} keep consistent shapes and locations for the whole video sequence, which also commonly occur in real scenes. \end{itemize} We also consider two kinds of mask shapes: %
\begin{itemize} \renewcommand\labelitemi{--} \item \textbf{Rectangular masks} are a representative case commonly used in inpainting tasks. \item \textbf{Irregular masks}~\cite{sttn} mimic arbitrarily shaped occlusion objects in face videos. \end{itemize} The generated masks occupy between $8\%$--$20\%$ of the whole image. Irregular masks are only evaluated in the baseline comparison (Sec.~\ref{sec:sota}). We test both mask shapes under the shifting and static cases. \subsubsection{Metric Settings} We consider four different metrics in our quantitative evaluations, namely (1) $\ell_{1}$ error; (2) PSNR (Peak Signal-to-Noise Ratio); (3) SSIM~\cite{ssim} (Structural Similarity); and (4) VFID~\cite{3dconv-vd,wang2018vid2vid} (Video-based Fr\'echet Inception Distance, a video perceptual measure). \import{}{fig-vd-ablation.tex} \import{}{fig-uv-ablation.tex} \import{}{uv_ablation.tex} \import{}{fig-uvattn-viz.tex} \subsection{Comparison with State-of-the-Art Methods}\label{sec:sota} In this section, we conducted a comparison between the proposed method and other inpainting methods to illustrate the strength of our framework. \subsubsection{Baselines} To the best of our knowledge, only limited works have been proposed for face videos that consider combining a face prior with a video inpainting pipeline. We therefore look for video inpainting works that have been tested on face videos with code publicly available, and select \cite{3dconv-vd} for comparison. Apart from~\cite{3dconv-vd}, we also select two representative video inpainting baselines~\cite{deep-vd-inpaint,sttn} and two image inpainting works~\cite{lafin-img-face,gated-img} for comparison.
To evaluate the effectiveness of our method on face videos, we also reimplement a face video re-identification method~\cite{img2vd-vd-face} for comparison. All the baselines are recent deep learning methods developed for general scenes or face images. A brief summary of them is given below: \begin{itemize} \renewcommand\labelitemi{--} \item \textbf{DeepfillV2}~\cite{gated-img}, an encoder-decoder structured method based on 2D gated convolutions and contextual attention. \item \textbf{LaFin}~\cite{lafin-img-face}, a landmark-guided two-stage method proposed for face image inpainting. \item \textbf{STN-GAN}~\cite{img2vd-vd-face}, a GAN-based model proposed for face re-identification using 3D Residual blocks to aggregate features. \item \textbf{VINet}~\cite{deep-vd-inpaint}, a context aggregation method based on recurrent structures and flow-warping. \item \textbf{3DGated}~\cite{3dconv-vd}, an encoder-decoder network based on 3D gated convolutions. \item \textbf{STTN}~\cite{sttn}, a transformer-based method using spatial and temporal patch-matching. \end{itemize} For a fair comparison, we retrained their models on the same dataset using their publicly-available code. Since code is not available for STN-GAN~\cite{img2vd-vd-face}, we reimplemented their method according to the details in their paper. However, due to the nature of their task, they require ground-truth landmarks as additional inputs, which are not available for face inpainting tasks. We therefore trained a landmark prediction network~\cite{bulat2017far} to predict landmarks from the corrupted faces for them. \subsubsection{Quantitative Comparison} Table~\ref{table:vd-sota} summarizes the quantitative comparison results, where our method consistently outperformed the other methods on all four metrics under the two different mask settings. Due to the lack of face priors, all three video baselines failed to reconstruct the faces under the static mask setting.
Due to the difficulty of correspondence retrieval in face videos with large pose / expression variations, they also performed poorly in the shifting mask setting. For image-based methods, even though a face prior was utilized, they still failed since no temporal information was considered. Further, it is observed that the performance of the landmark-guided method may be affected by the limited accuracy of the landmarks predicted from corrupted faces. \subsubsection{Qualitative Comparison} We further conducted visual comparison on four representative scenes: \begin{itemize} \item[(A)] Face expression appears differently in reference frames; \item[(B)] Face pose changes frequently; \item[(C)] No useful reference in other frames (\emph{e.g}\onedot, static masks); \item[(D)] No useful reference in other frames, however, it can be self-referenced (\emph{e.g}\onedot, one eye covered). \end{itemize} Results are shown in Fig.~\ref{fig-vd-sota-all} for both rectangular masks and irregular masks. For each kind of mask, cases A, B, C, and D are presented from top to bottom. Since we target face videos, where correspondence retrieval is much more difficult than general scenes due to large face pose and expression variations, all the video baselines failed in these challenging cases (A \& B). Specifically, in case A, other video-based methods either attended to or directly copied the opened eyes from the reference frames and produced incorrect results. In case B, when face pose varied largely between frames, even though reference could be retrieved from other frames, they failed to comprehend the 3D face structure and directly incorporated the nose under a different pose into the target frame, resulting in a distorted face. For case C and case D, due to the lack of a face prior, they all failed to predict proper face structures when no useful reference could be obtained (though it could be self-referenced in case D).
The flow-based context aggregation method VINet failed completely in the static mask setting. As expected, our method performed the best on these challenging cases and achieved the most visually pleasing results compared to the other baselines. Through the use of the 3D face prior, our method can take full advantage of the good alignment and symmetry properties of the UV maps and robustly restore the missing face regions even under large face pose and expression variations. Methods targeting a single face~\cite{3dconv-vd,lafin-img-face} do not retrieve useful information from other frames but merely synthesize the missing regions for the current frame. Hence, they treat all the testing cases (shifting \& static) the same way. It is obvious that they all failed to generate temporally consistent content for the missing regions. Note that LaFin~\cite{lafin-img-face} and STN-GAN~\cite{img2vd-vd-face} also utilize a face prior (i.e., landmarks) as their guidance. However, since the inpainting branch heavily depends on the landmark detection results, they will generate obvious artifacts when the predicted landmarks are incorrect (see Fig.~\ref{fig-vd-sota-all}). \subsection{Analysis of the Proposed Framework} In this section, we present experimental results to verify the design of our framework. \subsubsection{Effectiveness of UV-map Completion} We first carried out an analysis of the effectiveness of our UV-map completion stage. We considered three variants, namely (a) single frame without UV maps as guidance, (b) single frame with UV maps as guidance, and (c) multi-frame without UV maps as guidance. Results are shown in Fig.~\ref{fig-vd-ablation} and Table~\ref{table:vd-ablation}. Our full model achieved the most plausible results compared to these variant models. It is also observed that the performance improved considerably with UV maps as guidance, especially under the static mask setting.
\subsubsection{Effectiveness of FA Module} We also conducted an ablation study to evaluate our FA module. For comparison, we considered three different baselines, namely (a) simply taking a single frame as input, (b) concatenating the target frame with its reference frames as input, and (c) fusing (concatenating) the features of all the frames in the latent space before the decoder. The quantitative results evaluated on $\mathbf{U}_{out}$ are listed in Table~\ref{table:uv-ablation}. Our method achieved the best performance with the assistance of Frame-wise Attention\xspace. Referring to the qualitative results shown in Fig.~\ref{fig-uv-ablation}, it is observed that our full model outperformed all the others in both detail generation and texture consistency, which also demonstrates the effectiveness of the FA module in retrieving proper correspondences for corrupted regions. \subsubsection{Visualization of Frame-wise Attention\xspace} To further investigate how the FA\xspace module works, we present the visualization of the Frame-wise Attention\xspace in Fig.~\ref{fig-uvattn-viz}. We labeled each reference frame with a distinct color to visualize the attention map in a more intuitive way. For each {\it query} point from the embedded features in the target frame, we selected the most responsive {\it key} point (maximum attention value) from its pool of {\it key} candidates, and filled the attention map with the index color of the corresponding reference frame. In this example, we used the colors \{{\it \textcolor{Red}{red}}, {\it \textcolor{Green}{green}}, {\it \textcolor{Blue}{blue}}, {\it \textcolor{Dandelion}{yellow}}\} to denote the reference frames from left to right. The attention distribution is shown in the first column with the representative colors, while in the right four columns we display the response map of each reference frame.
From the attention distribution, it is observed that the model learns to retrieve the matching features from the regions with higher reliability, \emph{i.e}\onedot, intact regions. With the FA\xspace module, our proposed MUC-Net\xspace can better exploit the reference features and generate more visually plausible content for the corrupted face. \import{}{patch-size.tex} \import{}{fig-patch-mean.tex} \import{}{fig-patch-size.tex} \subsubsection{Ablation Study on the UV-map Completion Stage} As mentioned in Sec.~\ref{sec:Stage-I}, we take the flipped UV map $\Hat{\mathbf{U}}_{in}$, the synthesized texture map $\mathbf{U}_{t}$, and the valid projection $\mathbf{U}_{v}$ as input. To further evaluate their contributions, we conducted an ablation study on these components. Both quantitative results in Table~\ref{table:uv-ablation} and qualitative results in Fig.~\ref{fig-uv-ablation} demonstrate their effectiveness in reconstructing the face textures by utilizing the symmetry prior ($\Hat{\mathbf{U}}_{in}$) and 3D face model prior ($\mathbf{U}_{t}$). Meanwhile, $\mathbf{U}_{v}$, which indicates the valid face regions of the UV texture, helps stabilize the training process and improve the overall performance. \subsubsection{Analysis on Patch Size used in Frame-wise Attention\xspace Module} Our method utilizes the 3DMM face model as a bridge to transform the face textures from the image space to the UV space. Though the retrained face reconstruction network is capable of reconstructing proper face shapes for the corrupted input faces (refer to the supplementary material), it is possible that the predicted faces are slightly misaligned, which may result in small misalignment in the transformed UV maps. Fig.~\ref{fig-patch-mean} shows the mean value of a set of inputs (target frame and its reference frames). We can see that there exist some small inconsistencies, especially around the eye regions.
Therefore, for each \textit{query} pixel, we propose to extract reference features in a local $s\times s$ window across all the reference frames. We further analyzed the effects of different patch sizes on shifting masks to observe how they affect the correspondence retrieval efficiency. Qualitative and quantitative results are shown in Fig.~\ref{fig-patch-size} and Table~\ref{table:patch-size} respectively. It is observed that adopting local windows instead of a single point can benefit the correspondence retrieval (the attention is more concentrated instead of scattered across the frames) and improve the overall performance. In our experiments, we adopted patch size $s=3$ to achieve a balance between performance and efficiency. \subsubsection{Speed} We also estimated the processing speed of our method to assess its applicability. Our model achieved 19.3 fps with an NVIDIA RTX 2080 Ti GPU. Although our primary goal is to improve the inpainting quality for face videos, our method still achieves reasonable efficiency with a na\"ive implementation. Specifically, the ResNet-50 feature extractor accounts for $3.3\%$ of the processing time, and the two main networks MUC-Net\xspace and FVR-Net\xspace take $45.4\%$ in total, while the remaining $51.3\%$ is spent on the rendering process in UV mapping. \import{}{fig-user-study.tex} \subsection{User Study} We conducted a user study to further evaluate the visual quality of the inpainted videos. For comparison, we chose one image-based method, LaFin~\cite{lafin-img-face}, which uses landmarks as guidance, and two video-based methods with relatively higher performance -- 3DGated~\cite{3dconv-vd} and STTN~\cite{sttn}. We sampled 16 videos from the test dataset, and tested on both static and shifting masks to evaluate the performance on these two cases. For each case, we sampled clips lasting 10 seconds with either rectangular or irregular masks (8 for each).
The comparison was conducted in a one-to-one manner with a total of $3\times2\times16=96$ questions. For each question, the volunteers were given both the masked video and ground-truth video for reference, and were required to pick the better one from two inpainted videos (one baseline and ours). We collected responses from 20 volunteers and visualized the results as percentages (see Fig.~\ref{fig-user-study}). Our method gained the majority of the preferences, which further demonstrates its effectiveness. \import{}{fig-application.tex} \import{}{fig-failure.tex} \subsection{Application} Face video inpainting usually serves as a restoration tool in many applications, such as video editing or restoration. It can be used to remove unwanted watermarks / subtitles or objects that appear in face videos. An example demonstrating watermark removal is shown in Fig.~\ref{fig-application}. Since our method is capable of handling both shifting and static masks with arbitrary shapes, it can benefit diverse face video editing tasks, especially those with large pose / expression variations (e.g., talk shows). \subsection{Failure Case \& Future Work} Since our method utilizes a face model to explore the underlying 3D structure of the given corrupted faces, it is possible that the predicted 3DMM is not perfectly fitted to the ground-truth face, especially when the mask covers key clues for accurate alignment. As shown in Fig.~\ref{fig-failure}, the eyes and nose are masked in the profile face, thus making it ambiguous for face reconstruction. The misaligned 3DMM (especially for the nose region) results in noisy texture in the UV map and a distorted nose in the final output. Currently, our second stage can help deal with small misalignment to refine the results. In our future work, we will try to improve the robustness of masked face reconstruction. Moreover, we will also extend this work to high-quality face videos.
\section{Conclusion} In this paper, we propose a novel approach to facilitate face video inpainting by exploring face texture completion in the UV space. The symmetry and aligned distribution of face textures in the UV space help to restore the masked regions with detailed face textures and structures. We design a Multi-reference UV-map Completion Network\xspace\ with a Frame-wise Attention\xspace\ module to enable efficient frame-wise correspondence retrieval from reference UV texture maps. Compared with existing state-of-the-art methods, our approach is capable of synthesizing more visually plausible results especially under large face pose and expression variations.
\section{Introduction} \label{sec:intro} The application of multi-variate analysis (MVA) techniques and machine learning have a long-standing history in analyses in particle physics and beyond. In the context of particle physics, machine learning-based approaches are typically employed when the expected signal count is small compared to the expected background contribution, thereby challenging a more traditional cut-and-count analysis to reach sufficient discriminating power to separate signal from backgrounds. For instance, the recent observations of top quark-associated Higgs production by CMS~\cite{Sirunyan:2018hoz} and ATLAS~\cite{Aaboud:2018urx} heavily rely on multi-variate approaches. But machine learning has also been considered in different contexts. The power of MVAs in searches for new physics is that they adapt to correlations in particle final states in order to map out relations between theoretical input parameters (the Lagrangian) and the output, e.g. the physical final state given by a particular radiation profile observed in a detector~\cite{Komiske:2016rsd,Barnard:2016qma, Butter:2017cot,Cohen:2017exh,Chang:2017kvc,Pearkes:2017hku,Louppe:2017ipp,Kasieczka:2017nvn, deOliveira:2017pjk,Luo:2017ncs,Datta:2017lxt,Larkoski:2017jix,Shimmin:2017mfk,Metodiev:2017vrx,Roxlo:2018adx,Brehmer:2018kdj,Brehmer:2018eca,Collins:2018epr,Duarte:2018ite,Fraser:2018ieu,Komiske:2018oaa,Macaluso:2018tck,Andreassen:2018apy,deCastro:2018mgh,DAgnolo:2018cun,Brehmer:2018hga,Monk:2018zsb,Moore:2018lsr,DeSimone:2018efk}. \begin{figure*}[!t] \includegraphics[height=6cm]{hj-eta_j1-noratio.pdf}\hfill \includegraphics[height=6cm]{hj-pT-noratio.pdf} \caption{\label{fig:hj} Predictions for Higgs+jet production for approximate cancellations of Wilson coefficient choices that can be resolved for large momenta. 
The uncertainty (grey band) is evaluated by factorisation and renormalisation scale variations ($\mu_0/2\leq \mu \leq 2\mu_0$) around the central scale $\mu_0=\sqrt{(p_h+p_j)^2}$. Modified branching ratios $h\to \tau\tau$ are included throughout.} \end{figure*} Machine learning approaches come into their own when there is insufficient knowledge of the dynamics that connect input and output, or in cases where there is no concrete model at all. This forms the basis of applications of machine learning approaches to stock trading and face or pattern recognition, where comparatively effortless predictions need to be made on short timescales. This is qualitatively different for particle physics applications, where the underlying Standard Model of Particle Physics (SM) is well-established. Connecting theoretical (not necessarily physical) input parameters with actual measurements is not only possible, but underpins the observed success of the SM over orders of magnitude. Of course, these strategies, which are supported by factorisation principles~\cite{Collins:1981tt,Collins:1985ue} at the price of associated uncertainties in perturbation theory, generalise to interactions beyond the SM. Therefore, the best-adapted approach to classifying experimental observations (e.g. discriminating between signal and background) is using the theoretical model itself by employing its $S$-matrix as an observable. This is known as the matrix-element method~\cite{Kondo:1988yd}, and ATLAS and CMS have used these techniques in~Refs.~\cite{Khachatryan:2015ila,Aad:2015gra}. This approach can be extended to the full particle level as discussed in~Refs.~\cite{Soper:2011cr,Soper:2012pb,Soper:2014rya, Englert:2015dlp}. The downside of such methods is that they require extensive computational resources, and quick event-by-event selection is not possible without further simplifying assumptions.
These shortcomings motivate MVAs as interpolating tools whose sensitivity will be bounded by the sensitivity that could be achieved by a particle-level matrix element method. Theoretical uncertainties are inherent to both the matrix element method and the multivariate techniques, as the underlying Monte Carlo (MC) tool chain will be plagued by a range of largely unphysical parameter choices (e.g. renormalisation, factorisation and shower scales). MVAs need to be trained on MC output, at least for constraining models of new interactions or rare processes. Consequently, they inherit all MC-associated uncertainties. The MVA score will favour highly exclusive phase space regions which are poorly understood perturbatively, enhancing the sensitivity to the underlying theoretical uncertainty. Data-driven methods might not be available in these very exclusive regions, and the price of a comparably large sensitivity is a reduced safety margin. However, there are no well-defined models that can systematically estimate theoretical uncertainties. The impact of such effects is therefore estimated by the community's ad-hoc consensus on scale variations etc. This motivates MVAs as an ideal choice to decide on {\emph{how}} to propagate such unknowns to the final discriminant. This transcends the traditional envelope of kinematic observables or cross sections, as the MVA will be equipped to ``see'' and extrapolate correlations of uncertainties and can decide on an event-by-event basis whether a particular configuration is sensitive to the question we might ask and whether the information we would like to draw from it can be trusted. Such an approach provides unique opportunities for the extraction of unknown parameters. In particular, existing constraints from the LHC have left an impression that new physics could be heavy. This has motivated the use of effective field theory techniques in the hunt for new BSM interactions.
The relevance of differential distributions in this context has been highlighted in~Refs.~\cite{Ellis:2014dva,Englert:2015hrx,Corbett:2015ksa,Englert:2017aqb}, and the interplay with theoretical uncertainties is extremely important here. In this paper we extend existing machine learning techniques for treating systematic uncertainties using adversarial neural networks~\cite{Louppe:2016ylz} and propose a novel approach to include \textit{theoretical} uncertainties. In contrast to systematic uncertainties, which affect the kinematics on an event-by-event basis, theoretical uncertainties of the cross section are a property of the process at hand and affect the event sample as a whole. The ability to include all relevant uncertainties simultaneously not only allows for the evaluation of a neural network (NN) score in a much more controlled and meaningful way, but also paves the way to performing differential parameter fits on an event-by-event basis while fully including a measure of trust for the observed phase space region. We discuss this using the example of Higgs production in association with jets. However, our approach is applicable to a very wide range of scenarios where machine learning is used in the presence of previously known theoretical and systematic uncertainties, e.g. signal vs background classification, particle identification/tagging and fitting of model parameters. This paper is structured as follows: In Sec.~\ref{sec:eft}, we motivate Higgs+jets physics as a BSM case where uncertainties are limiting factors in disentangling top-Yukawa modifications from gluon-Higgs contact interactions. In Sec.~\ref{sec:ann}, we review the basics of the application of adversarial neural networks to controlling such uncertainties and highlight the power of this approach with a basic example, before we consider the full kinematics of Higgs production up to 2 jets in Sec.~\ref{sec:secapp}. We summarise and conclude in Sec.~\ref{sec:conc}.
\begin{figure*}[!t] \includegraphics[height=6cm]{hjj-delta_eta_jj-noratio.pdf}\hfill \includegraphics[height=6cm]{hjj-mjj-noratio.pdf}\\ \includegraphics[height=6cm]{hjj-pT_H-noratio.pdf}\hfill \includegraphics[height=6cm]{hjj-pTmax_j-noratio.pdf} \caption{\label{fig:hjj} Predictions for Higgs production in association with 2 jets for Wilson coefficient choices with approximate cancellations that can be resolved at large momenta. The uncertainty (grey band) is evaluated by factorisation and renormalisation scale variations ($\mu_0/2\leq \mu \leq 2\mu_0$) around the central scale $\mu_0= \sqrt{p_{T,j_1} p_{T,j_2}}$. Modified branching ratios $h\to \tau\tau$ are included throughout.} \end{figure*} \section{EFT measurements and differential distributions} \label{sec:eft} Extracting as much information as possible from energy-dependent observables is key to over-constraining the various parameters that need to be introduced if the low-energy effects of new high-scale physics are treated generically~\cite{Ellis:2014dva,Englert:2015hrx,Corbett:2015ksa,Englert:2017aqb}. In particular, the high-$p_T$ regions of Higgs production can serve to break degeneracies of modified top quark-Higgs and effective gluon-Higgs interactions, which can be parameterised by \begin{multline} \label{eq:lag} {\mathcal{L}}_{\text{d6}}= c_g {\mathcal{O}}_g + c_t {\mathcal{O}}_t = {c_g\, g_s^2 \over 16\pi^2 v} \, h\, G^{a\,\mu\nu} G^{a}_{\mu\nu} + c_t\, h\, \bar t t\,, \end{multline} where $G^{a}_{\mu\nu}$ denotes the gluon field strength tensor, and $h$ and $t$ the physical Higgs boson and top quark, respectively. The Wilson coefficient normalisations are chosen to make their numerical impact comparable (see below) and reflect the strongly-interacting light Higgs ansatz~\cite{Giudice:2007fh}; the additional factor of the strong coupling $g_s^2$ re-sums large logarithmic corrections from QCD at the dimension-6 level~\cite{Grojean:2013kd,Jenkins:2013zja,Englert:2014cva}.
The top-Yukawa coupling modification at fixed top quark mass that is described by Eq.~\eqref{eq:lag} leads to a degeneracy with $c_g$ for momentum transfers below the top pair threshold. Concretely, low-energy theorems~\cite{Ellis:1975ap,Shifman:1979eb,Vainshtein:1980ea,Voloshin:1985tc,Kniehl:1995tn} induce interactions \begin{equation} {\mathcal{L}}_{\text{eff},t}= - {\sqrt{2}\over 3}{c_t \over y_t} {\mathcal{O}}_g + \dots \,, \end{equation} where $y_t\simeq 1$ denotes the SM Yukawa coupling. This leads to an approximate blind direction \mbox{$\sim c_g-\sqrt{2}c_t/3$} of inclusive observables (such as cross sections), along which the inclusive gluon fusion cross section remains SM-like. This degeneracy can be lifted in a global fit through subsidiary measurements of top quark-associated Higgs production, which is insensitive to the $ggh$ modifications~\cite{Englert:2015hrx}. Another promising avenue is to distinguish ${\mathcal{O}}_g$ from ${\mathcal{O}}_t$ at large momentum transfers~\cite{Banfi:2013yoa,Grojean:2013nya,Buschmann:2014sia,Buschmann:2014twa,Schlaffer:2014osa}, see Figs.~\ref{fig:hj} and \ref{fig:hjj}. The expected uncertainties in these particular phase space regions are non-negligible and are the obvious limiting factors of a coupling extraction from the theoretical side. Multiple hard jet emission can enhance the $c_g, c_t$ discrimination (see also~\cite{Duff:1991ad,Dreiner:1991xi,Dixon:1993xd,Krauss:2016ely} for related discussions). On the one hand, this comes at the price of an increased phase space suppression and a typically larger theoretical uncertainty. On the other hand, the higher dimensionality of the phase space can give rise to new sensitive observables which are not necessarily directly aligned with standard kinematical distributions such as invariant mass and transverse momentum distributions.
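The degeneracy can be made explicit by inserting the low-energy theorem into Eq.~\eqref{eq:lag} (a rewriting of the two expressions above, not additional input): below the top pair threshold the Lagrangian acts as a single effective gluon-Higgs operator,
\begin{equation*}
{\mathcal{L}}_{\text{d6}} \to \left( c_g - {\sqrt{2}\over 3}{c_t \over y_t} \right) {\mathcal{O}}_g + \dots\,,
\end{equation*}
so that for $y_t\simeq 1$ all coefficient choices with $c_g - \sqrt{2}c_t/3$ held fixed lead to approximately identical inclusive gluon fusion rates.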
Zooming in on these particular phase space regions can be achieved through boosted decision trees and neural network techniques, which exploit multi-dimensional correlations to isolate particularly sensitive phase space regions. The downside of such an approach is that the associated uncertainties are hard to control, which can make such multivariate analyses highly sensitive to theoretical systematics. The case of Higgs production in association with multiple hard jets in the presence of ${\mathcal{O}}_g$, ${\mathcal{O}}_t$ modifications is particularly difficult and consequently provides a compelling physics case for the application of adversarial neural networks. \subsection{Numerical setup} In order to study the presence of ${\mathcal{O}}_g$ and ${\mathcal{O}}_t$ in Higgs production in association with hard jets, we employ a modified version of {\sc{Vbfnlo}}~\cite{Campanario:2010mi,Arnold:2008rz,Baglio:2014uba} to perform the parton-level calculations presented in this work. Specifically, we focus on QCD-mediated Higgs production (gluon fusion) with one and two additional jets in the final state \cite{DelDuca:2001fn,DelDuca:2001eu,DelDuca:2001ad,DelDuca:2003ba,DelDuca:2006hk,Campbell:2006xx,Andersen:2010zx}. We pre-select events at the parton level in the central part of the detector with large cuts on the jet-transverse momentum distribution of \begin{equation} \begin{split} h+1~{\text{jet}}: &\quad p_{T,j}\geq 130~\text{GeV},~|\eta_j|<2.5 \\ h+2~{\text{jets}}: &\quad p_{T,j}\geq 150~\text{GeV},~|\eta_j|<4.5 \end{split} \end{equation} to guarantee that these processes are well described by the associated hard matrix elements and that weak boson fusion and associated Higgs production can be controlled. For the chosen jet $p_T$ cut the weak contribution to $h$+2 jet production is around $1/5$. This contribution, which can be modified by other EFT operators, is not discussed here and should be included in a more realistic EFT fit.
Under these assumptions, the dominant Higgs coupling modifications to the Higgs production processes described above are parametrised by Eq.~\eqref{eq:lag}. We consider Higgs decays to tau leptons, taking into account the branching ratio modifications induced by $c_g$ and $c_t$. We include $\tau$ tagging efficiencies independent of $c_g$, $c_t$ and phase space, but note that these are not major limiting factors at the LHC. In particular, hadronic tau leptons are now under good control in Higgs final states, and di-tau efficiencies of around 50\% are possible at background rejection close to unity~\cite{Kreis:2015jjr,Cadamuro:2017slr,Dev:2017lde}. For computing significances for different choices of the Wilson coefficients $c_g$ and $c_t$ in Sec.~\ref{sec:secapp}, we include a production reconstruction efficiency of 22\%~\cite{Englert:2015hrx} as well as a combined effective tau reconstruction efficiency of 43\%, which includes both leptonic and hadronic tau decay channels. The theoretical uncertainties associated with the residual renormalisation ($\mu_R$) and factorisation ($\mu_F$) scale dependence of the observables are estimated by varying these scales around a central scale $\mu_0$ \begin{eqnarray} \begin{array}{c} \mu=\mu_R=\mu_F=\,\mu_0/2,\,\mu_0,\,2\mu_0\,,\\[0.3cm] \mu_0 = \left\{\begin{array}{ll} m_{hj}=\sqrt{(p_h+p_j)^2} & h+\text{jet}\\[0.1cm] \sqrt{p_{T,j_1}p_{T,j_2}} & h+2~\text{jets} \end{array}\right., \end{array} \label{eq:scales} \end{eqnarray} where $m_{hj}$ is the invariant mass of the $h$+jet system and $p_{T,j_1}$ ($p_{T,j_2}$) is the transverse momentum of the leading (second-leading) jet. For this study we do not include a parton shower or detector simulation in the generation of $h+$jet and $h+2$~jets events because these effects are inconsequential to the method of including theoretical uncertainties using an adversarial neural network described in this work. The reason is that this method is based on supervised learning with Monte Carlo events as input.
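As an illustration, the central scales and the three-point variation of Eq.~\eqref{eq:scales} are straightforward to evaluate numerically (a minimal numpy sketch; the function names and the $(E,p_x,p_y,p_z)$ four-momentum convention are our choices, not part of the analysis code):

```python
import numpy as np

def mu0_hj(p_h, p_j):
    # h+jet central scale: invariant mass m_hj = sqrt((p_h + p_j)^2),
    # four-momenta in the (E, px, py, pz) convention
    p = np.asarray(p_h, dtype=float) + np.asarray(p_j, dtype=float)
    return np.sqrt(p[0] ** 2 - p[1] ** 2 - p[2] ** 2 - p[3] ** 2)

def mu0_hjj(pt_j1, pt_j2):
    # h+2 jets central scale: geometric mean of the two leading jet pTs
    return np.sqrt(pt_j1 * pt_j2)

def scale_variations(mu0):
    # the three-point band mu = mu0/2, mu0, 2*mu0 used for the uncertainty
    return [0.5 * mu0, mu0, 2.0 * mu0]
```

For instance, two jets with $p_{T,j_1}=150$~GeV and $p_{T,j_2}=216$~GeV give $\mu_0=180$~GeV, and the band is then evaluated at 90, 180 and 360 GeV.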
Whether these events are evaluated at the parton, particle or detector level is not essential for the method to work. However, we expect parton shower and detector simulation to show some effect on the significances presented in Fig.~\ref{fig:ann} and defer the investigation of these effects to future studies. \begin{figure*}[!t] \includegraphics[width=0.48\textwidth]{example_gf2j_xshjj_-60_27.pdf} \hfill \includegraphics[width=0.48\textwidth]{example_gf2j_pTj1_-60_27.pdf} \caption{\label{fig:nnexample}Cross section observable and jet-transverse momentum distribution in $h+2$~jets production for an operator choice $(c_g,c_t)=(-0.6,0.27)$. For further details see text.} \end{figure*} \section{Adversarial Neural Networks and Uncertainties} \label{sec:ann} \subsection{Learning uncertainties} The concept of generative adversarial neural networks was first proposed in Ref.~\cite{Goodfellow:2014upx}. Its aim is to train a NN to generate data according to a given (experimental) multi-dimensional distribution through a zero-sum game. The setup consists of two NNs, a classifier and an adversary, which are trained simultaneously with opposing goals. The adversary learns to generate data samples according to the input distribution, while the classifier learns to distinguish generated from actual data. Once the NNs reach equilibrium at the end of the training, the classifier can distinguish generated and real data only by chance. We make use of this approach by starting with a classifier that can distinguish between different input data variations according to the systematic uncertainties. The adversary, on the other hand, penalises this kind of discrimination via the loss function. The result of this adversarial training is a classifier that cannot distinguish between different input data variations and is therefore insensitive to the systematic uncertainties~\cite{Louppe:2016ylz}.
More specifically, we can obtain a signal-background classifier that is independent of underlying nuisance parameters, such as the theoretical uncertainties associated with the renormalisation and factorisation scale dependence. This is achieved by using the adversary to penalise the classifier whenever it becomes sensitive to the scale variation. The classifier thus avoids phase space regions that have a large discriminating power but are plagued by theoretical uncertainties. These are precisely the regions relevant to disentangling different EFT contributions as discussed in Sec.~\ref{sec:eft}. In total, such an adversarial neural network (ANN) is a numerical implementation of an optimisation problem (with respect to signal-background separation) with constraints (independence of the scale), where the constraints are implemented via the loss function of the adversary and the associated Lagrange multiplier is a tunable hyper-parameter of the adversarial neural network. Applying this to our physics problem, Monte Carlo runs with different scale settings can be used as input for the adversarial setup to discard phase space regions where the discrimination also distinguishes the scale variations. \begin{figure*}[!t] \subfigure[]{\includegraphics[width=0.48\textwidth]{NNresponse_without_adv_example}} \hfill \subfigure[]{\includegraphics[width=0.48\textwidth]{ROC_example_without_adv}} \caption{\label{fig:nnexamplescorenoadv} Distribution of NN scores (a) and associated ROC curve (b) for background-only and signal + background event samples. The classification has been performed using only the classifier, without the adversary.
If the area under curve (AUC) is larger than 0.5, discrimination is possible.} \end{figure*} \begin{figure*}[!t] \subfigure[]{\includegraphics[width=0.48\textwidth]{NNresponse_with_adv_example}} \hfill \subfigure[]{\includegraphics[width=0.48\textwidth]{ROC_example_with_adv}} \caption{\label{fig:nnexamplescoreadv} Same as Fig.~\ref{fig:nnexamplescorenoadv}, but here the distributions were obtained by a classifier that had been trained using the adversarial setup. If the area under curve (AUC) is larger than 0.5, discrimination is possible.} \end{figure*} The ANN used here consists of two components. The first component is a classifier discriminating between a Standard Model Higgs sample and an alternative sample with fixed $c_t$ and $c_g$. The second component is the adversary. This setup is implemented using {\sc{Keras}}~\cite{keras} and {\sc{TensorFlow}}~\cite{Abadi:2016kic}. The classifier has one output node with a sigmoid activation function, i.e.\ the output is a scalar $\in[0,1]$ where ``0'' represents the SM class and ``1'' the signal class. The classifier output is fed directly into the adversary input. The adversary is trained to determine the scale choice from the classifier output alone. Hence, the adversary has one output node with a linear activation function representing the adversary's prediction of the chosen scale. To perform the adversarial training, we consider a combined loss function consisting of the classifier loss and the adversary loss. The loss function of the classifier is defined by the binary cross-entropy. The adversarial loss function is defined as a mean squared error regression of the scale. The total loss function is constructed such that the classifier loss contributes positively and the adversarial loss negatively. Hence, the adversarial interplay works as follows: with decreasing ability of the adversary to determine the scale from the classifier output, the adversary loss grows.
Since it contributes negatively, the total loss decreases. The training goal is to minimise the total loss function, and therefore the classifier is forced to modify its output so as to minimise the ability of the adversary to distinguish between the scales. This results in a classifier which is insensitive to the scale choice of the input data. Two architectures exist to perform adversarial training: one where classifier and adversary are trained simultaneously, and another with alternating training steps. For the alternating approach the training is also performed on the entire adversarial neural network consisting of classifier and adversary, but in one step the adversary weights are frozen and the total loss function is used, while in the other step the classifier weights are frozen and only the adversary loss function is used. Hence, one step trains the classifier taking the adversary penalty into account, and the other step trains the adversary only, thus adapting the adversary to the previously trained classifier. These two steps are performed alternately on each batch of training data. We tried both approaches (simultaneous and alternating training), but we found better convergence with the alternating adversary and consequently focused on this approach for this study. For the full NN architecture and training we use: \begin{itemize} \item for the \emph{``classification layer''} 2 hidden layers with 20 nodes each, \item for the \emph{``adversary layer''} 2 hidden layers with 20 nodes each, \item ReLU activation functions in all cases, and \item a batch size of 500 events trained over 500 epochs. \end{itemize} We have tried other configurations in terms of numbers of layers and nodes but did not observe a significant change in the training performance. However, hyperparameters such as the learning rate ($5\times10^{-4}$), the relative weight between classifier and adversary loss, as well as the number of epochs had to be tuned.
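The combined loss and the alternating update can be condensed into a toy numerical sketch (numpy only, with a one-parameter logistic classifier, a linear adversary and finite-difference gradients; all names, the weight $\lambda=10$ and the learning rate are illustrative choices of ours, not the {\sc{Keras}} implementation used in the analysis):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def clf_out(p, x):   # classifier: logistic regression on the input x
    return sigmoid(p["wc"] * x + p["bc"])

def adv_out(p, s):   # adversary: linear regression on the classifier score
    return p["wa"] * s + p["ba"]

def losses(p, x, y, mu, lam=10.0):
    s = clf_out(p, x)
    eps = 1e-12
    # binary cross-entropy for the classifier
    l_clf = -np.mean(y * np.log(s + eps) + (1 - y) * np.log(1 - s + eps))
    # mean squared error regression of the scale label mu
    l_adv = np.mean((mu - adv_out(p, s)) ** 2)
    # total loss: classifier loss positive, adversary loss negative
    return l_clf, l_adv, l_clf - lam * l_adv

def num_grad(f, p, keys, h=1e-6):
    # finite-difference gradient with respect to a subset of parameters,
    # mimicking the "freezing" of the complementary weights
    g = {}
    for k in keys:
        up, dn = dict(p), dict(p)
        up[k] += h
        dn[k] -= h
        g[k] = (f(up) - f(dn)) / (2.0 * h)
    return g

def alternating_step(p, x, y, mu, lr=0.01):
    # step 1: adversary frozen, classifier trained on the total loss
    for k, v in num_grad(lambda q: losses(q, x, y, mu)[2], p, ["wc", "bc"]).items():
        p[k] -= lr * v
    # step 2: classifier frozen, adversary trained on its own loss only
    for k, v in num_grad(lambda q: losses(q, x, y, mu)[1], p, ["wa", "ba"]).items():
        p[k] -= lr * v
    return p
```

Minimising the total loss in step 1 pushes the classifier towards outputs from which the adversary cannot regress the scale, which is exactly the alternating scheme described above.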
To ensure convergence of the adversary, the cross section, jet $p_T$ and any other tested variables are transformed to have mean zero. The transformation of the cross section is adjusted to have root mean square (RMS) 1, whereas the other variables are transformed to have an RMS of 100. This additional transformation is needed because the scale sensitivity of the adversary and the discrimination power are both dominated by variations in the cross section. To perform the adversarial training, the adversary loss is scaled by a factor of 100 relative to the loss of the EFT classifier. When this factor is reduced below 100 we observe, in all cases, a gradual transition towards the setup without an adversary, eventually converging to the bare discrimination case. We use $\sim 2.5$--$4\times10^5$ events for signal and background, depending on the choice of the parameters $c_g$ and $c_t$. Of these, 90\% are used for training and 10\% are reserved for validation and testing. \subsection{Example} To highlight the crucial features of our method, we first consider a simple example for which we use our numerical setup given in Sec.~\ref{sec:eft}, focusing on the $h$+2~jets channel. For illustration purposes we only consider two input variables in this example: the normalised differential $p_T$ distribution and the associated cross section (see Fig.~\ref{fig:nnexample}). The use of additional variables is studied in Sec.~\ref{sec:secapp}. The choice of $c_g=-0.6,\,c_t=0.27$ is motivated by the shape of the $p_T$ distribution, which needs to be contrasted with the overlapping uncertainty bands for the cross sections. We train the NN with background and signal distributions of events defined by the transverse momentum of the leading jet $p_{T,j_1}$, as shown on the right-hand side of Fig.~\ref{fig:nnexample}. The background distributions for all three scale choices in Eq.~\eqref{eq:scales} are combined into one distribution.
For the signal we use the central scale ($\mu_0$) distribution. We have checked that the events from $p_{T,j_1}$ distributions of different scale choices produce the same neural network output. The reason is that the NN is only sensitive to shapes, since it learns (normalised) probability distributions. Moreover, as can be seen from Fig.~\ref{fig:nnexample}, the scale choice has little impact on the shape of the differential cross section with respect to $p_{T,j_1}$. In addition to $p_{T,j_1}$, we consider the exclusive $h+2$ jets cross section. We randomly assign to each background (signal) event a cross section distributed according to the background (signal) distribution shown on the left-hand side of Fig.~\ref{fig:nnexample}. Since the theoretical uncertainty of the cross section is estimated by scale variations, its distribution is not governed by statistics. Instead we have to choose a prior. Here we choose an asymmetric Gaussian distribution with mean $\hat{\sigma}=\sigma(\mu_0)$ and left (right) standard deviation $\Delta\sigma_l=\sigma(\mu_0)-\sigma(2\mu_0)$ ($\Delta\sigma_r=\sigma(\mu_0/2)-\sigma(\mu_0)$) to account for the asymmetric character of the theoretical uncertainty associated with the scale choice. We have also checked a flat distribution as a prior and found no significant changes in the NN output of the pivotal classifier as long as the distributions for signal and background cross section overlap. This is the crucial step in our approach to include \textit{theoretical} uncertainties into the machine-learning-driven event classification. The key difference to existing approaches to include systematic effects is that in this case the uncertainties affect the event sample as a whole and not event by event, as, for example, event reconstruction uncertainties do. While NNs can be sensitive to theoretical uncertainties which change the shape of event distributions, they remain blind to flat uncertainties as in the case at hand.
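The asymmetric Gaussian prior can be sampled with a simple two-piece construction (a numpy sketch; the function name and the use of a scaled standard normal are our choices for illustration):

```python
import numpy as np

def sample_cross_section(sigma_hat, dsigma_l, dsigma_r, size, seed=0):
    """Two-piece Gaussian prior for the per-event cross-section label:
    the left (right) half of a standard normal is scaled by the downward
    (upward) scale-variation uncertainty around the central value."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(size)
    width = np.where(z < 0.0, dsigma_l, dsigma_r)
    return sigma_hat + z * width
```

By construction, half of the assigned values fall below $\hat{\sigma}$, with the spread on each side set by $\Delta\sigma_l$ and $\Delta\sigma_r$ respectively.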
Note that this becomes more important if adapted scale choices exist that capture the shape modifications of certain observables, i.e.\ the ideal scenario of RGE-improved fixed-order calculations. Therefore, we propose to promote these theoretical uncertainties to parametrised nuisance parameters to make them accessible on an event-by-event level. We first run this setup without the adversarial NN. The resulting NN score (and the associated receiver operating characteristic, ROC, curve) is shown in Fig.~\ref{fig:nnexamplescorenoadv}. As the uncertainties between the new physics and the SM hypotheses are not necessarily completely correlated, we show results for $\mu_0$ there. The classification is highly sensitive to the scale within the boundaries of our scan $1/2<\mu/\mu_0< 2$. There are a number of reasons for such a strong correlation between scale choice and classification. However, the main qualitative feature that drives this discrimination is captured in the running of the strong coupling $\alpha_s$, a feature that is particularly pronounced in the $pp \to h jj$ contribution and our main motivation for the use of this example. The larger the chosen dynamical scale, the smaller the cross section and the larger the damping of the high-$p_T$ tail relative to the central SM choice. In contrast, our choice of non-zero $c_g,c_t$ induces an enhancement of the tail. Together this means that it is easier for the classifier to distinguish the $c_g,c_t$ modification from a lower cross section that results from a comparably soft $p_T$ tail. Conversely, a lower scale choice results in the opposite situation: it is now more difficult for the classifier to distinguish the BSM contribution from a larger cross section that results from an enhanced tail $\sim \alpha_s^4 \log^4(p_T/\mu)$. Note that this is already mitigated in our example as we choose a central scale of $\sim p_T$.
Therefore, including the cross section of the whole sample as an observable is crucial to isolate scale dependencies of limits, as mentioned above. The strong dependence of the classifier on the scale is worrisome for measurements of BSM-like Higgs properties, since it leads to an unphysical response. Close to the blind direction, a ``wrong'' choice of $\mu$ could therefore be misinterpreted as a measurement of non-zero $c_t,c_g$ in a fit. This is the situation that we need to avoid. Fig.~\ref{fig:nnexamplescoreadv} demonstrates that the adversary eliminates the scale dependence completely. Including the adversary yields the same discrimination across the different scale choices, i.e.\ the particular scale choice does not impact the classification into BSM or SM contribution. More concretely, this means that the NN has learned to avoid regions of phase space, parametrised by the physical observables, where uncertainties are the key factors that drive the classification in the non-adversary scenario. Put simply, the ANN performs BSM vs SM discrimination only where the SM hypothesis can be trusted. The net effect is therefore not only a convergence of the ROC curves to a single line between $2\mu_0$ and $\mu_0/2$, but an overall reduction of the sensitivity, i.e.\ three ROC curves that indicate a much reduced, yet reliable, discrimination between signal and SM background. \subsection{Application to EFT-modified jet-associated Higgs production} \label{sec:secapp} Building on the example of the previous section, we can now turn to the multi-dimensional problem of Higgs production in association with up to 2 jets. We apply the numerical setup of Sec.~\ref{sec:eft} by generating Les Houches event files~\cite{Boos:2001cv} for a scan in $(c_g, c_t)$ under the constraint of reproducing the SM-like inclusive cross section within 25\%. Here we consider both Higgs production channels, $h$+jet and $h+2$~jets.
Furthermore, we treat the cross section for both processes analogously to the example above and additionally employ a range of kinematic information in the classification: for the $h$+jet channel we use the transverse momentum and rapidity of the jet, and for the $h+2$~jets channel we use the transverse momenta and rapidities of the $p_T$-leading and second-leading jet, the azimuthal angle between the jets, and the rapidity and invariant mass of the jet pair. As the uncertainties become limiting factors in particular in the vicinity of the blind direction $c_g-\sqrt{2}c_t/3$, we express the final score as a function of the deviation away from $c_g=\sqrt{2}c_t/3$. \begin{figure}[!t] \includegraphics[width=0.44\textwidth]{plot_cls-eps-converted-to.pdf} \caption{\label{fig:ann} Performance comparison of the (A)NN using Higgs+multijet final states. For details see text.} \end{figure} The (A)NN output (or ROC curve) can be used to compute significances for different parameter choices. To keep matters transparent, we do this by picking a particular working point on the ROC curve that maximises $S/\sqrt{B}$ (where $B$ stands for the SM expectation), requiring at least 2 (1) expected SM events in the $h+$jet ($h+2$~jets) selection region detailed above for a given luminosity. We treat the two regions as uncorrelated. No additional parton-level cuts are employed. The result is shown in Fig.~\ref{fig:ann} as a function of the distance from the $(c_g,c_t)$ blind direction for a luminosity of 100/fb. There, we also compare the ANN performance to a neural network analysis without the inclusion of the adversary. In the latter case, different scale choices will result in different NN scores. By tracing the influence of the $\mu$-dependence of the NN score through to the significance, the variation of the exclusion can be assigned an uncertainty, represented by the blue error bar.
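The working-point choice can be written as a short scan over ROC points (a toy sketch; the function and the example numbers are illustrative, while the minimum-event requirement mirrors the one quoted in the text):

```python
import numpy as np

def best_working_point(eff_s, eff_b, n_sig, n_bkg, min_bkg_events):
    """Pick the ROC point maximising S/sqrt(B), requiring a minimum
    number of expected SM background events after the cut."""
    S = n_sig * np.asarray(eff_s, dtype=float)
    B = n_bkg * np.asarray(eff_b, dtype=float)
    allowed = B >= min_bkg_events
    signif = np.where(allowed, S / np.sqrt(np.maximum(B, 1e-12)), -np.inf)
    i = int(np.argmax(signif))
    return i, float(signif[i])
```

For instance, with 50 expected signal and 100 expected background events before cuts, a ROC point with efficiencies $(\epsilon_S,\epsilon_B)=(0.5,0.1)$ yields $S/\sqrt{B}=25/\sqrt{10}\simeq 7.9$, while tighter points are discarded once they fall below the minimum-background requirement.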
As can be seen from Fig.~\ref{fig:ann}, there are different possible outcomes, but an exclusion at the 68\% confidence level should be possible for the region close to the SM. In some cases the ANN limit agrees well with the lower end of the expected significance, as one could naively expect. This situation corresponds to an ANN score that interpolates between maximum and minimum discrimination within the uncertainty bands of the fully differential cross sections. Given that the ANN pivots as a result of the uncertainties, it will always be less sensitive than the NN output. The lowest NN sensitivity across the $\mu$ range therefore provides an upper bound on the ANN's sensitivity. There are also more interesting situations, in particular when we approach the blind direction. While the NN score without adversary becomes sensitive to phase space regions that are not under perturbative control, the ANN will not show any sensitivity in this particular region of phase space. This leads the ANN to push its region of discrimination to a more exclusive region of phase space where the relative impact of the uncertainty is smaller compared to the new physics deviation. In turn, this then manifests itself as a smaller total discriminating power, well outside the naive uncertainty expectation of the NN score without adversary. This robustness is a clear benefit of the adversarial network and is the main result of this analysis. As expected, this effect becomes most relevant when we approach the blind direction. New physics events with $c_g\sim \sqrt{2}c_t /3$ will be distributed more closely to the SM expectation across the considered phase space. Scale uncertainties render the ANN ``blind'' to small kinematical deviations within the associated uncertainty bands, thereby decreasing the overall sensitivity significantly.
Including a proper treatment of kinematic uncertainties, as provided by the ANN, is therefore crucial to obtaining robust and reliable constraints that inform a new physics question, which in this example is represented by the relevance of the top threshold for new heavy BSM physics. \section{Summary and Conclusions} \label{sec:conc} Theoretical and experimental uncertainties are the key limiting factors in searches for new interactions at the LHC and future colliders. This is dramatically highlighted when we want to constrain non-resonant extensions of the Standard Model, where large momentum transfers and very exclusive regions of phase space are the most sensitive probes of new physics. Experimental sensitivities are usually good when we deal with hard final-state objects. Unfortunately, outside the inclusive realm of perturbative QCD, theoretical control in highly selective regions of phase space is often lost or at least significantly degraded. There is no first-principles way of correctly assessing the associated theoretical uncertainties apart from ad-hoc scale variations of unphysical remnant scales. Process-dependent QCD-educated guesses for such choices might exist, but these do not come with guarantees, in particular when we deal with the multi-parton and multi-scale problems imposed by hadron collider phenomenology. In this paper, we have addressed this conundrum by building on recent developments in machine learning, specifically in the area of adversarial neural networks. While ad-hoc scale choices have to remain as estimators of the theoretically unknown, the response of Monte Carlo data to such choices can be propagated to the kinematics of the full final state. In phase space regions where the a priori sensitivity to new physics is large but effectively obstructed by uncertainties, no sensitivity should be claimed.
These regions, which also depend on the particular type of uncertainty, are process-specific and are neither necessarily aligned nor connected with our standard understanding of collider kinematics. This large variation in conditions is most naturally addressed with neural networks. Using the particular case of jet-associated Higgs production at the LHC, where large momentum transfers can pinpoint different sources of new physics in the Higgs sector, we have demonstrated that uncertainties can be accounted for in the discrimination. Additionally, we have shown that ``standard'' approaches to select new physics can be sensitive to uncertainties, and typically the sensitivity is over-estimated, in some cases severely. An accurate, uncertainty-insensitive estimate can be achieved through a dedicated adversarial neural network implementation, which provides robust discrimination at an expectedly smaller sensitivity. Although we have focussed on theoretical uncertainties, this methodology directly generalises to other sources of uncertainty that limit the sensitivity of events with high momentum transfers at the current and future energy frontiers, including $b$-tagging efficiencies, jet-substructure calibration, missing energy observables etc.\ (see in particular Ref.~\cite{Shimmin:2017mfk}), and could be part of a new standard of phenomenological analyses. \acknowledgements C.E. is grateful to the Mainz Institute for Theoretical Physics (MITP) for its hospitality and its support during the completion of parts of this work. C.E. is supported by the IPPP Associateship scheme and by the UK Science and Technology Facilities Council (STFC) under grant ST/P000746/1. P.G. is funded by the STFC under grant ST/P000746/1. P.H. acknowledges the support of the MIT Physics department.
\section{Introduction} In the last three decades, after the proposal of the inflationary paradigm by Guth~\cite{Guth} and Sato~\cite{Sato}, several scenarios for describing the early-time accelerated expansion of our Universe have been proposed (see Refs.~\cite{Linde, revinflazione} for some reviews). Among them, the ``old inflationary scenario'' is based on canonical scalar field theories where a scalar field, dubbed the ``inflaton'', drives the primordial acceleration. Later, new classes of scalar theories were proposed, like the $k$-essence models~\cite{kess1, kess2, kess3}, where the Lagrangian of the field contains higher-order kinetic terms which lead to the suppression of the speed of sound: as a consequence, the tensor-to-scalar ratio associated with the tensorial cosmological perturbations is extremely small, as strongly favoured by the cosmological data~\cite{Planck}. An alternative description of inflation can be furnished by scalar-tensor theories of gravity, where a scalar field is coupled with some curvature invariants (the Einstein tensor, the Ricci scalar...) inside the gravitational action. In theories of this kind the field equations are in general higher-order differential equations, but in 1974 Horndeski derived the most general class of scalar-tensor models which lead to second-order differential equations, as in the theory of Einstein~\cite{Horn}. Recently, inflation from Horndeski gravity has been investigated in several works~\cite{Amendola, Def, DeTsu, DeFelice, Kob, Kob2, Qiu, EugeniaH, mioH, mioGB}. In Ref.~\cite{Staro1} an investigation of the homogeneous and isotropic cosmologies in some classes of models of Horndeski gravity with Galileon shift symmetry has been carried out. In this paper, we will consider a class of Horndeski Lagrangians where a scalar $k$-essence field supporting inflation is coupled with the Gauss-Bonnet term. 
We mention that modifications of gravity based on the four-dimensional Gauss-Bonnet topological invariant have often been considered in the context of high-curvature corrections to General Relativity as the result of quantum gravity effects (see for example Refs.~\cite{RGinfl, r2} or Refs.~\cite{GBO1}--\cite{GB03}). We will follow the lines of Refs.~\cite{muk1, muk2, miorec, miorec2} and propose a reconstruction method in order to infer viable models in agreement with the cosmological data. In this respect, we note that one of the most robust predictions of inflation is the possibility of reproducing the cosmological perturbations at the origin of the inhomogeneities of our Friedmann Universe. The latest Planck satellite data lead to a spectral index $n_s\simeq 1-2/N$ and to a tensor-to-scalar ratio $r< 8/N$ or $r\sim 1/N^2$, where the $e$-folds number $N$ must be $N\simeq 60$ in order to explain the thermalization of our observable Universe. By starting from some simple Ansatz for the $k$-essence field and for the coupling function of the field with the Gauss-Bonnet, it is possible to derive these indexes and finally obtain the full form of the viable models. The paper is organized as follows. In Section {\bf 2} we present the model of $k$-essence coupled with the Gauss-Bonnet in the Horndeski framework. In Section {\bf 3} we study the background equations for inflation and the equations for cosmological perturbations, deriving the spectral index and the tensor-to-scalar ratio. In Section {\bf 4} we introduce our Ansatz for the scalar field and the coupling function between the field and the Gauss-Bonnet term. Thus, in Section {\bf 5} we reconstruct several viable inflationary models with a canonical scalar field, while in Section {\bf 6} the case of $k$-essence with quadratic kinetic term is investigated. Conclusions and final remarks are given in Section {\bf 7}. 
We use units of $k_{\mathrm{B}} = c = \hbar = 1$ and $8\pi/M_{Pl}^2=1$, where $M_{Pl}$ is the Planck mass. \section{The model} In this paper we will work with the following gravitational model, \begin{equation} I=\int_\mathcal M dx^4\sqrt{-g}\left[\frac{R}{2}+\xi(\phi) \mathcal G+p(\phi, X)\right]\,, \label{action} \end{equation} where $\mathcal M$ is the space-time manifold and $g$ is the determinant of the metric tensor $g_{\mu\nu}$. The Hilbert-Einstein action of General Relativity (GR), given by the Ricci scalar $R$, has been modified by introducing a coupling $\xi(\phi)$ between a scalar field $\phi$ and the Gauss-Bonnet four-dimensional topological invariant $\mathcal G$, \begin{equation} \mathcal G=R^2-4R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\sigma\xi}R^{\mu\nu\sigma\xi}\,, \end{equation} with $R_{\mu\nu}$ and $R_{\mu\nu\sigma\xi}$ the Ricci tensor and the Riemann tensor, respectively. Finally, $p(\phi, X)$ is a function of the scalar field $\phi$ and its kinetic energy $X$, \begin{equation} X=-\frac{g^{\mu\nu}\partial_\mu \phi\partial_\nu\phi}{2}\,. \end{equation} The scalar field effective pressure corresponds to the field Lagrangian $p(\phi, X)$, while the effective energy density of the field $\rho(\phi, X)$ is derived as \begin{equation} \rho(\phi, X)=2X\frac{\partial p(\phi, X)}{\partial X}-p(\phi, X)\,,\label{4} \end{equation} such that the following relation holds true: \begin{equation} \rho(\phi, X)+p(\phi, X)=2X p_X(\phi, X)\,. \end{equation} The case of a canonical scalar field is given by $p(\phi, X)=X-V(\phi)$, with $V(\phi)$ a function of the field only, while in $k$-essence Lagrangians higher-order kinetic terms appear~\cite{kess1, kess2}. This kind of scalar-tensor model with a non-minimal coupling to the Gauss-Bonnet belongs to a subclass of Horndeski theories of gravity~\cite{Horn, Kob}, and the field equations are of second order, as in the theory of Einstein. 
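The relation between $\rho$ and $p$ can be checked mechanically. The following short symbolic sketch (our notation, not part of the paper) verifies Eq.~(\ref{4}) for a power-law $k$-essence Lagrangian of the form used later in the reconstruction:

```python
import sympy as sp

# Effective energy density rho = 2 X p_X - p for p = kappa X^lambda - V(phi).
X, V, kappa, lam = sp.symbols('X V kappa lambda', positive=True)

p = kappa * X**lam - V                    # Lagrangian = effective pressure
rho = 2 * X * sp.diff(p, X) - p           # Eq. (4)

# Canonical case (kappa = lam = 1): rho = X + V.
assert sp.simplify(rho.subs({kappa: 1, lam: 1}) - (X + V)) == 0
# General power-law case: rho = kappa (2 lambda - 1) X^lambda + V.
assert sp.simplify(rho - (kappa * (2 * lam - 1) * X**lam + V)) == 0
# And rho + p = 2 X p_X holds identically.
assert sp.simplify(rho + p - 2 * X * sp.diff(p, X)) == 0
```

The last assertion is just the stated identity $\rho+p=2Xp_X$, which holds for any $p(\phi,X)$ by construction.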
\section{Inflation} In this section we will study inflationary cosmology within our model. First, we will analyze the background equations, and then we will proceed with the study of cosmological perturbations. \subsection{Background equations} The flat Friedmann-Robertson-Walker (FRW) metric reads, \begin{equation} ds^2=-dt^2+a(t)^2 d{\bf x}^2\,,\label{metric} \end{equation} where the scale factor $a(t)$ depends on the cosmological time. A useful parametrization of the field equations follows from the introduction of the $e$-folds number $N$ as, \begin{equation} N=\log\left[\frac{a(t_0)}{a(t)}\right]\,,\label{N} \end{equation} where $a(t_0)$ is the scale factor at the time $t_0$ when inflation ends, such that $t<t_0$ and $0<N$. In terms of the $e$-folds number, the first Friedmann equation of the model (\ref{action}) leads to~\cite{mioGB}, \begin{equation} 3H^2=\rho(\phi, X)+ 24H^4\phi'\frac{d\xi(\phi)}{d\phi} \,,\label{EOM} \end{equation} while the conservation law reads \begin{equation} -\rho'(\phi, X)+3H^2\phi'^2(p_X(\phi, X)) =24\frac{d\xi(\phi)}{d\phi}\phi' H^3(H'-H) \,, \label{conslaw} \end{equation} where the prime denotes the derivative with respect to $N$ and $X=H^2\phi'^2/2$. The early-time inflation takes place at high curvature and is described by a (quasi) de Sitter solution where the Hubble parameter is almost a constant: this is the so-called slow-roll approximation regime. Therefore, the field slowly moves and drives the exit from the accelerated epoch. During the slow-roll regime the field evolves under the conditions \begin{equation} \phi'^2\ll 1\,,\quad |\phi''|\ll |\phi'|\,, \end{equation} and the $\epsilon$ slow-roll parameter \begin{equation} \epsilon=\frac{H'}{H}\,,\label{epsilon} \end{equation} is positive and very small. Acceleration ends when $\epsilon$ is of order unity. 
By taking into account the slow-roll approximation, equations (\ref{EOM})--(\ref{conslaw}) read \begin{equation} 3H^2\simeq \rho(\phi, X)\,,\label{uno} \end{equation} \begin{equation} \rho'(\phi, X)-3H^2 \phi'^2 p_X(\phi, X)\simeq 24\frac{d\xi(\phi)}{d\phi}\phi' H^4 \,.\label{due} \end{equation} These equations describe the Hubble parameter and the evolution of the field during inflation. \subsection{Cosmological perturbations} Scalar metric perturbations around the FRW metric (\ref{metric}) in their general formulation read~\cite{Def, DeTsu, DeFelice}, \begin{equation} ds^2=-[(1+\alpha(t, {\bf x}))^2-a(t)^{-2}\text{e}^{-2\zeta(t, {\bf x})}(\partial \psi(t,{\bf x}))^2]dt^2+2\partial_i\psi (t,{\bf x})dt dx^i+a(t)^2 \text{e}^{2\zeta(t, {\bf x})}d{\bf x}^2\,, \end{equation} with $\alpha\equiv\alpha(t, {\bf x})\,,\psi\equiv\psi(t, {\bf x})$ and $\zeta\equiv\zeta(t,{\bf x})$ functions of the space-time coordinates. Thus, by using the relations between $\alpha\,,\psi\,,\zeta$ that follow from the field equations, the second-order action for perturbations reduces to~\cite{DeTsu, DeFelice}, \begin{equation} I=\int_\mathcal{M}dx^4 a^3 Q\left[\dot\zeta^2-\frac{c_s^2}{a^2}(\nabla\zeta)^2\right]\,,\label{pertaction} \end{equation} where, in the slow-roll approximation and in terms of the $e$-folds number, \begin{eqnarray} Q = \frac{\phi'^2}{2H^2} \left( 96H^4\xi'^2(\phi)+p_X(\phi, X)+\phi'^2 p_{XX}(\phi, X) \right)\,,\label{Q} \end{eqnarray} while the square of the speed of sound reads \begin{equation} \hspace{-2cm} c_s^2= \frac{p_X(\phi, X)}{p_X(\phi, X)+96H^4\xi_\phi^2(\phi)+2p_{XX}(\phi, X)X} \,.\label{c2} \end{equation} This last quantity plays a fundamental role in the evolution of cosmological perturbations. We note that, even in the case of a canonical scalar field with $p_{XX}(\phi, X)=0$, one finds $c_s^2<1$, thanks to the contribution of the Gauss-Bonnet term. 
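For the canonical case this statement can be made explicit: with $p_X=1$ and $p_{XX}=0$, Eq.~(\ref{c2}) collapses to $c_s^2=1/(1+96H^4\xi_\phi^2)<1$. A minimal symbolic check (our notation; the numeric values are illustrative only):

```python
import sympy as sp

# Speed of sound squared for a canonical field, Eq. (c2) with p_X = 1,
# p_XX = 0; H is the Hubble rate, xi_phi = d xi / d phi.
H, xi_phi, X = sp.symbols('H xi_phi X', positive=True)
pX, pXX = 1, 0

cs2 = pX / (pX + 96 * H**4 * xi_phi**2 + 2 * pXX * X)
assert sp.simplify(cs2 - 1 / (1 + 96 * H**4 * xi_phi**2)) == 0

# Any nonzero Gauss-Bonnet coupling pushes c_s^2 strictly below one:
assert cs2.subs({H: 1, xi_phi: sp.Rational(1, 10)}) < 1
```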
This result seems to improve the predictivity of the model with respect to the standard framework of GR or other Horndeski theories (see for example Ref.~\cite{miorec2}), due to the fact that, when $c_s^2$ is small, the tensor-to-scalar ratio of tensorial perturbations tends to be suppressed, in agreement with cosmological data. The action for scalar perturbations can be rewritten as, \begin{equation} I=\int dx^4\left[\dot v^2-\frac{c_s^2}{a^2}(\nabla v)^2+\ddot z\frac{v^2}{z}\right]\,,\label{action2} \end{equation} where, \begin{equation} v\equiv v(t, {\bf x})=z(t) \zeta(t, {\bf x})\,,\quad z\equiv z(t)=\sqrt{a^3 Q}\,. \end{equation} As a consequence, the field equation for perturbations is derived as, \begin{equation} \ddot v-\frac{c_s^2}{a^2}\bigtriangleup v-\frac{\ddot z}{z}v=0\,. \end{equation} Therefore, if one decomposes $v(t, {\bf x})$ in Fourier modes $v_k\equiv v_k(t)\exp[i {\bf k}{\bf x}]$, we obtain \begin{equation} \ddot v_k+\left(k^2\frac{c_s^2}{a^2}-\frac{\ddot z}{z}\right)v_k=0\,.\label{eqpert} \end{equation} The solution of this equation, deep in the asymptotic past, leads to \begin{equation} \zeta_k\equiv \frac{v_k}{\sqrt{Q a^3}}\simeq i\frac{H}{2\sqrt{Q}(c_s k)^{3/2}} \text{e}^{\pm i k\int \frac{c_s}{a}dt} \left(1+i c_s k\int\frac{dt}{a}\right)\,. \end{equation} Now we can calculate the variance of the power spectrum of perturbations at sound horizon crossing $c_s k\simeq H a$, namely \begin{equation} \mathcal P_{\mathcal R}\equiv\frac{|\zeta_k|^2 k^3}{2\pi^2}|_{c_s k\simeq H a}=\frac{H^2}{8\pi^2 c_s^3 Q}|_{c_s k\simeq H a}\,. \end{equation} From the variance of the power spectrum one gets the spectral index $n_s$~\cite{mioGB}, \begin{eqnarray} (1-n_s)&=&-\frac{d\ln \mathcal P_{\mathcal R}}{d \ln k}|_{k=a H/c_s}\nonumber\\ &=& \left(\phi ' \left(3072 H^6 p_X \xi '^3+24 H^4 \xi ' \left(p_X \left(16 p_X \xi ' \phi '^2+8 \xi ''+p_{XX}\phi '^4\right)-12 \xi ' p_X'\right) \right.\right. 
\nonumber\\&& +H^2 \left(16 p_X^2 \xi ' \phi '^2+3 p_X^2 p_{XX} \phi '^6+\phi '^4 \left(p_X p_{XX}'-3 p_{XX}p_X'\right)\right) \nonumber\\&& \left.\left. +2 p_X^3 \phi '^4-2 p_X p_X' \phi '^2\right)-2 p_X \phi '' \left(288 H^4 \xi '^2+H^2 p_{XX} \phi '^4+2 p_X \phi '^2\right)\right) \nonumber\\&& \times\frac{1}{2 p_X \phi ' \left(96 H^4 \xi '^2+H^2 p_{XX} \phi '^4+p_X \phi '^2\right)}\,,\label{n} \end{eqnarray} where we used (\ref{Q})--(\ref{c2}). In a similar way it is possible to calculate the tensor-to-scalar ratio for the tensorial perturbations, \begin{equation} r\simeq \frac{8 p_X\phi '^2 \sqrt{\frac{p_X}{\frac{96 H^4 \xi '^2}{\phi '^2}+H^2 p_{XX} \phi '^2+p_X}}}{\frac{4 H^2 \left(2-\log \left( H^2 \phi '^2/2\right)\right) \left(\xi '' \phi '-\xi ' \phi ''\right)}{\phi '}+1}\,.\label{r} \end{equation} These indexes describe the cosmological perturbations left at the end of inflation and must be evaluated at the beginning of the early-time acceleration, when $N\simeq 60$. \section{Viable models for inflation} Inflation predicts the production of cosmological perturbations responsible for the anisotropies of our Universe at the galactic scale. The latest Planck satellite data~\cite{Planck} constrain the spectral index and the tensor-to-scalar ratio as $n_{\mathrm{s}} = 0.968 \pm 0.006\, (68\%\,\mathrm{CL})$ and $r < 0.11\, (95\%\,\mathrm{CL})$. Thus, by taking into account that $N\simeq 60$, observations strongly encourage the models with $(1-n_s)\simeq 2/N$ and $r< 8/N$ or $r\sim1/N^2$. 
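As a quick numerical illustration of these viability conditions (rough arithmetic only, using the central Planck values quoted above):

```python
# (1 - n_s) ~ 2/N with the central value n_s = 0.968 pins down the e-folds:
n_s = 0.968
N = 2 / (1 - n_s)
assert 55 < N < 70            # consistent with the requirement N ~ 60

# and the r ~ 1/N^2 class sits far below the Planck bound r < 0.11:
assert 8 / N**2 < 0.11
```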
In order to reconstruct some viable models we will start from the following simple assumptions on the background solutions for the scalar field and for the coupling function between the field and the Gauss-Bonnet term, \begin{equation} \phi'^2=\frac{\alpha^2}{N^a}\,,\quad \alpha<0\,,1\leq a\,,\label{Anphi} \end{equation} \begin{equation} \xi(\phi)=\frac{\beta}{N^b}\,,\quad 1\leq b\,,\label{Anxi} \end{equation} where $\alpha\,,\beta$ are dimensional constants and $a, b$ are positive parameters larger than or equal to one. For the Lagrangian of the field $p(\phi, X)$ we will assume \begin{equation} p(\phi, X)=\kappa X^\lambda-V(\phi)\,,\quad 0<\lambda\,,\label{pk} \end{equation} with $\lambda$ a positive parameter, $\kappa$ a (positive) dimensional constant and $V(\phi)$ a function of the field. Therefore, the effective field energy density (\ref{4}) leads to \begin{equation} \rho(\phi, X)=\kappa(2\lambda-1)X^\lambda+V(\phi)\,,\label{rhok} \end{equation} and Eqs.~(\ref{uno})--(\ref{due}) in the slow-roll approximation can be rewritten as \begin{equation} H^2\simeq\frac{V(N)}{3}\,,\label{unobis} \end{equation} \begin{equation} -6\kappa\lambda X^{\lambda} \simeq 24\xi' H^4-V'(N)\,.\label{duebis} \end{equation} By starting from these equations with the Ansatz in (\ref{Anphi})--(\ref{Anxi}), we can find the on-shell potential and the Hubble parameter for different kinds of models. As a consequence, we can derive the spectral index (\ref{n}) and the tensor-to-scalar ratio (\ref{r}) in order to study the viability conditions of the theory. Finally, an explicit reconstruction of the coupling function $\xi(\phi)$ and of the Lagrangian of the field is possible by inverting the relation (\ref{Anphi}). In the following sections we will analyze some examples. \section{Canonical scalar field} In this section, we will consider canonical scalar field models where $\kappa=\lambda=1$ in (\ref{pk})--(\ref{rhok}). Let us start with the case $a=1$ in (\ref{Anphi}). 
It follows, \begin{equation} \phi=2\alpha\sqrt{N}\,,\label{phi1} \end{equation} where we remember that $\alpha<0$. From (\ref{duebis}) we can obtain the following on-shell form of the potential, \begin{equation} V(N)=\frac{3N^b(\alpha^2-b)}{8b\beta}\,. \end{equation} Therefore, by plugging this expression into (\ref{unobis}) one has for the Hubble parameter: \begin{equation} H^2\simeq\frac{N^b(\alpha^2-b)}{8b\beta}\,. \end{equation} The $\epsilon$ slow-roll parameter (\ref{epsilon}) describing the evolution of the quasi-de Sitter universe is given by, \begin{equation} \epsilon\simeq\frac{b}{2N}\,. \end{equation} The square of the speed of sound (\ref{c2}) reads, \begin{equation} c_s^2\simeq\frac{1}{1+\frac{3(b-\alpha^2)^2}{2\alpha^2 N}}\,, \end{equation} and we see that, thanks to the non-minimal coupling with the Gauss-Bonnet, it is smaller than one, even if it is extremely close to one when $1\ll N$. Now we can derive the spectral index (\ref{n}) and the tensor-to-scalar ratio (\ref{r}), \begin{equation} (1-n_s)\simeq \frac{1+(b-\alpha^2)}{N}\,,\quad r\simeq \frac{8\alpha^2}{N}\,. \end{equation} Thus, in order to satisfy the latest Planck satellite data we must require \begin{equation} (b-\alpha^2)=1\,,\quad\alpha^2<1\,.\label{cond1} \end{equation} As a consequence, $\beta$ must be negative to get a real solution for the Hubble parameter. By using (\ref{phi1}) one has \begin{equation} N=\frac{\phi^2}{4\alpha^2}\,, \end{equation} and the model is fully reconstructed as \begin{equation} \xi(\phi)=\frac{(4\alpha^2)^b\beta}{\phi^{2b}} \,,\quad V(\phi)=-\frac{3}{2^{3+2b}b\beta}\left(\frac{\phi^2}{\alpha^2}\right)^b\,,\quad 1\leq b<2\,, \beta<0\,, \end{equation} where we used (\ref{cond1}). We should point out that canonical scalar fields with power-law potentials in the framework of General Relativity do not lead to viable scenarios for inflation. 
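The on-shell potential above can be checked directly against the slow-roll equation (\ref{duebis}). A short symbolic verification (variable names are ours, not part of the paper):

```python
import sympy as sp

# Check that V(N) = 3 N^b (alpha^2 - b)/(8 b beta) solves the slow-roll
# equation -6 X = 24 xi'(N) H^4 - V'(N) for the canonical field
# (kappa = lambda = 1) with phi'^2 = alpha^2/N and xi = beta/N^b.
N, alpha, b = sp.symbols('N alpha b', positive=True)
beta = sp.Symbol('beta', nonzero=True)       # beta < 0 in the text

V = 3 * N**b * (alpha**2 - b) / (8 * b * beta)
H2 = V / 3                                   # 3 H^2 ~ V, Eq. (unobis)
X = H2 * (alpha**2 / N) / 2                  # X = H^2 phi'^2 / 2
xip = sp.diff(beta / N**b, N)                # xi'(N)

residual = -6 * X - (24 * xip * H2**2 - sp.diff(V, N))
assert sp.simplify(sp.expand_power_exp(residual)) == 0   # equation satisfied
```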
In particular the quadratic potential, namely one of the first models of the ``old inflationary scenario'', leads to a viable spectral index, but the tensor-to-scalar ratio reads $r\simeq 8/N$ and turns out to be too large. Here, in the framework of Horndeski gravity, power-law potentials $V(\phi)\sim \phi^q$, $2\leq q<4$, lead to models compatible with observations.\\ \\ Let us now take the case $a=2$ in (\ref{Anphi}), namely \begin{equation} \phi=\phi_\text{i}+\alpha\log[N/N_\text{i}]\,,\label{phi2} \end{equation} where $\phi_\text{i}<0$ is the value of the field at the beginning of inflation when $N=N_\text{i}$. The field potential is derived from (\ref{duebis}) as \begin{equation} V(N)=\frac{3\alpha^{2b}\text{e}^{-\frac{\alpha^2}{N}}}{V_0+8b\beta\Gamma[b,\frac{\alpha^2}{N}]}\,,\label{42} \end{equation} where $V_0$ is a constant and $\Gamma[b, \frac{\alpha^2}{N}]$ corresponds to the upper incomplete gamma function, \begin{equation} \Gamma\left[b, \frac{\alpha^2}{N}\right]=\int^\infty_{\alpha^2/N} t^{b-1}\text{e}^{-t}dt\,. \label{Gamma} \end{equation} As a consequence, the Hubble parameter during inflation reads: \begin{equation} H^2\simeq \frac{\alpha^{2b}}{V_0+8b\beta\Gamma[b,0]}\,. \end{equation} In the limit $0\ll N$ the $\epsilon$ slow-roll parameter is given by (we remember $1\leq b$), \begin{eqnarray} \epsilon&\simeq&\frac{\alpha^2 V_0}{2(V_0+8\beta)N^2}\,,\quad b=1\,,\nonumber\\ \epsilon&\simeq& \frac{\alpha^2}{2N^2}\,,\quad 1<b\,. \end{eqnarray} The speed of sound turns out to be $c_s\simeq 1^-$. Moreover, for the spectral index and the tensor-to-scalar ratio one has: \begin{equation} (1-n_s)\simeq\frac{2}{N}\,,\quad r\simeq\frac{8\alpha^2}{N^2}\,. \end{equation} As a general result, we can say that this kind of model is viable and correctly reproduces the latest Planck satellite data when $0<\alpha^2$. 
The potential and the coupling function are reconstructed as, \begin{equation} V(\phi)=\frac{3\alpha^{2b}\text{e}^{-\frac{\alpha^2\exp[(\phi_\text{i}-\phi)/\alpha]}{N_\text{i}}}}{V_0+8b\beta\Gamma[b,\frac{\alpha^2\exp[(\phi_\text{i}-\phi)/\alpha]}{N_\text{i}}]}\,, \quad \xi(\phi)=\beta\frac{\text{e}^{-b(\phi-\phi_\text{i})/\alpha}}{\left(N_\text{i}\right)^b}\,. \end{equation} For example, when $b=1$ we get $V(\phi)=3\alpha^2/\left[8\beta+V_0\exp\left[(\alpha^2/N_\text{i})\text{e}^{(\phi_\text{i}-\phi)/ \alpha}\right]\right]$.\\ \\ It is also possible to investigate the model for generic values of $a$ and $b$ in (\ref{Anphi})--(\ref{Anxi}). From (\ref{duebis}) we can get \begin{equation} V(N)=\frac{3(a-1)\text{e}^{\frac{\alpha^2 N^{1-a}}{1-a}}}{8b\beta\Gamma[\frac{b}{a-1},\frac{\alpha^2 N^{1-a}}{a-1}]}\left(\frac{\alpha^2}{a-1}\right)^{b/(a-1)}\,,\quad a\neq 1\,, \end{equation} where we must require \begin{equation} a-1\leq b\,, \end{equation} in order to obtain a finite value for the incomplete gamma function (\ref{Gamma}) when $0\ll N$. By making use of the relation $H^2=V(\phi)/3$, it is possible to calculate the spectral index $n_s$ (\ref{n}), namely \begin{equation} (1-n_s)\simeq \frac{a}{N}\,, \end{equation} and we see that only the models with $a=2$ are viable, namely we recover the example in (\ref{42}). \section{$k$-essence with quadratic kinetic term} In this section, we will generalize our analysis to $k$-essence models with quadratic kinetic term, namely $\lambda=2$ in (\ref{pk})--(\ref{rhok}). We start with $a=1$ in (\ref{Anphi}), namely we assume (\ref{phi1}). Thus, from (\ref{duebis}) one may get, \begin{equation} V(N)=\frac{3N^{1+b}}{\alpha^4\kappa N^b-8\beta N+V_0 N^{1+b}}\,,\label{51} \end{equation} with $V_0$ constant, such that \begin{equation} H^2\simeq\frac{N^{1+b}}{\alpha^4\kappa N^b-8\beta N+V_0 N^{1+b}}\,. \end{equation} Note that if $V_0\neq 0$ the Hubble parameter behaves as $H^2\simeq 1/V_0$, but for $V_0=0$ other forms of the Hubble parameter are allowed. 
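The on-shell potential (\ref{51}) can likewise be verified symbolically against the slow-roll equation (\ref{duebis}) with $\lambda=2$; a brief sketch (our notation, not the paper's code):

```python
import sympy as sp

# Check that the potential (51) solves -12 kappa X^2 = 24 xi'(N) H^4 - V'(N)
# (Eq. (duebis) with lambda = 2) for phi'^2 = alpha^2/N and xi = beta/N^b.
N, alpha, kappa, b = sp.symbols('N alpha kappa b', positive=True)
beta, V0 = sp.symbols('beta V_0', real=True)

V = 3 * N**(1 + b) / (alpha**4 * kappa * N**b - 8 * beta * N + V0 * N**(1 + b))
H2 = V / 3                                   # 3 H^2 ~ V
X = H2 * (alpha**2 / N) / 2                  # X = H^2 phi'^2 / 2
xip = sp.diff(beta / N**b, N)                # xi'(N)

residual = -12 * kappa * X**2 - (24 * xip * H2**2 - sp.diff(V, N))
assert sp.simplify(sp.expand_power_exp(residual)) == 0
```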
The $\epsilon$ slow-roll parameter reads, \begin{equation} \epsilon\simeq\frac{\alpha^4\kappa N^b-8b\beta N}{2N(V_0 N^{1+b}-8\beta N+\alpha^4\kappa N^b)}\,. \end{equation} Thus, \begin{eqnarray} \epsilon&\simeq&\frac{1}{2N}\,,\quad V_0=0\,, \nonumber\\ \epsilon&\simeq& \frac{\alpha^4\kappa-8b\beta N^{1-b}}{2V_0 N^2}\,,\quad V_0\neq 0\,, \end{eqnarray} leading to different dynamics for the early-time expansion. In both cases, \begin{equation} c_s^2\simeq\frac{1}{3}\,, \end{equation} and the results for the spectral index (\ref{n}) are, \begin{eqnarray} (1-n_s)&\simeq&\frac{48\beta-5\kappa\alpha^4}{(24\beta-3\alpha^4\kappa)N}\,,\quad V_0=0\,, b=1\,, \nonumber\\ (1-n_s)&\simeq&\frac{5}{3N}\,,\quad V_0=0\,,1<b\,, \nonumber\\ (1-n_s)&\simeq& \frac{2}{N}\,,\quad V_0\neq 0\,, 1\leq b\,. \end{eqnarray} The last two cases satisfy the latest Planck satellite data, while in the first one we must require \begin{equation} \frac{48\beta-5\kappa\alpha^4}{(24\beta-3\alpha^4\kappa)}=2\,,\quad V_0=0\,, b=1\,. \end{equation} The tensor-to-scalar ratio (\ref{r}) reads \begin{eqnarray} r&\simeq&\frac{8\kappa\alpha^4}{\sqrt{3}(\alpha^4\kappa-8\beta)N}\,,\quad V_0=0\,,b=1\,,\nonumber\\ r&\simeq&\frac{8}{\sqrt{3}N}\,,\quad V_0=0\,, 1<b\,,\nonumber\\ r&\simeq&\frac{8\alpha^4\kappa}{\sqrt{3}V_0 N^2}\,,\quad V_0\neq 0\,, 1\leq b\,. \end{eqnarray} Thus, the model also correctly reproduces the tensorial cosmological perturbations (in the first case, we must require $\kappa\alpha^4/(\alpha^4\kappa-8\beta)\leq 1$). The full reconstruction of the potential leads to \begin{equation} V(\phi)=\frac{3\phi^2(\phi^2/\alpha^2)^b}{(\phi^2/\alpha^2)^b(4\alpha^6\kappa+V_0\phi^2)-2^{3+2b}\beta\phi^2}\,,\quad \xi(\phi)=4^b\beta\left(\frac{\alpha^2}{\phi^2}\right)^b\,. \end{equation} For example, for $b=1$ and $V_0=0$, we recover a quadratic potential.\\ \\ Let us check for a more general result with generic $a\,,b$ in (\ref{Anphi})--(\ref{Anxi}). 
The potential reads, \begin{equation} V(N)=\frac{3(2a-1)N^{2a+b}}{\alpha^4\kappa N^{1+b}-8\beta(2a-1)N^{2a}-V_0 N^{2a+b}}\,, \end{equation} with $V_0$ constant. The Hubble parameter during inflation is derived as (here, we remember $1\leq a,b$), \begin{eqnarray} H^2&\simeq&\frac{(2a-1)N^{2a+b}}{\alpha^4\kappa N^{1+b}-8\beta(2a-1)N^{2a}}\,,\quad V_0= 0\,,\nonumber\\ H^2&\simeq&\frac{3-6a}{3V_0}\,,\quad V_0\neq 0\,. \end{eqnarray} The $\epsilon$ slow-roll parameter is given by \begin{equation} \epsilon\simeq\frac{(2a-1)(8b\beta N^{2a}-\alpha^4\kappa N^{1+b})N^b}{ 2(V_0 N^{2a+b}+8\beta(2a-1)N^{2a}-\alpha^4\kappa N^{1+b})}\,, \end{equation} and for $1\ll N$ we can verify that $\epsilon\ll 1$. The spectral index $n_s$ reads \begin{equation} (1-n_s)\simeq \frac{2a}{N}\,,\quad V_0\neq 0\,,1<a\leq b\,. \end{equation} We conclude that only the case $a=1$ is viable and we recover the model in (\ref{51}). \section{Conclusions} In this paper we analyzed a Horndeski model for inflation where the scalar field is non-minimally coupled with the Gauss-Bonnet four-dimensional topological invariant. Horndeski models are quite interesting since, despite the complexity of the Lagrangian, they lead to second-order differential equations, as in General Relativity. Moreover, the four-dimensional Gauss-Bonnet topological invariant is often analyzed in the context of higher-curvature corrections to Einstein's theory, as a leading term from quantum corrections or string-inspired theories. Finally, the scalar Horndeski field has been identified with a generic $k$-essence field, namely higher-order kinetic terms can appear in its Lagrangian. Since a viable model for inflation must correctly reproduce the spectral index and the tensor-to-scalar ratio observed in our Universe, we reconstructed our models by starting from these indexes. To obtain them, we posed some Ansatz for the scalar field and for the coupling function between the field and the Gauss-Bonnet. 
In our analysis, we considered a canonical scalar field and a $k$-essence with quadratic kinetic term. As a general observation, we can say that only when the derivative of the field with respect to the $e$-folds number behaves as $\phi'^2\sim 1/N$ or $\phi'^2\sim 1/N^2$ do we get a viable scenario, confirming the results of Refs.~\cite{muk1, miorec} for the framework of General Relativity and of Ref.~\cite{miorec2} for Horndeski gravity with a coupling to the Einstein tensor. For inflation from quantum corrections to General Relativity see Ref.~\cite{buch}. Other works on modified gravity and inflation can be found in Refs.~\cite{Vagno, I2} or in Refs.~\cite{Odinfrev, FRreview}.
\section{Introduction}\label{sec:intro} While learning-to-learn, or {\em meta-learning}, has long been an object of study \cite{thrun1998ltl}, in recent years it has gained significant attention as a multi-task paradigm for developing algorithms for learning in dynamic environments, from multiple sources of data, and in federated settings. Such methods focus on using data gathered from multiple tasks to improve performance when faced with data from a new, potentially related task. Among the more popular approaches to meta-learning is {\em initialization-based} meta-learning, in which the meta-learner uses multi-task data to output an initialization for an iterative algorithm such as stochastic gradient descent (SGD) \cite{finn2017maml}. The flexibility of this approach has led to its widespread adoption in areas such as robotics \cite{duan2017imitation} and federated learning \cite{chen2018fedmeta}, and to a growing number of attempts to understand it, both empirically and theoretically \cite{denevi2019ltlsgd,khodak2019adaptive,fallah2020meta,raghu2020anil,saunshi2020meta}. However, outside some stylized setups our learning-theoretic understanding of how to meta-learn an initialization is largely restricted to the convex Lipschitz setting. We relax both assumptions to study the meta-learning of online algorithms over piecewise-Lipschitz functions, which can be nonconvex and highly discontinuous. As no-regret online learning over such functions is impossible in general, we study the case of piecewise-Lipschitz functions whose discontinuities are {\em dispersed}, i.e. which do not concentrate in any small compact subset of the input domain \cite{balcan2018dispersion}. 
Such functions arise frequently in {\em data-driven algorithm design}, in which the goal is to learn the optimal parameter settings of algorithms for difficult (often NP-Hard) problems over a distribution or sequence of instances \cite{balcan2020data}; for example, a small change to the metric used to determine cluster linkage can lead to a discontinuous change in the classification error \cite{balcan2019learning}. In this paper, we also demonstrate that such losses are relevant in the setting of adversarial robustness, where we introduce a novel online formulation. For both cases, the associated problems are often solved across many time periods or for many different problem domains, resulting in natural multi-task structure that we might hope to use to improve performance. To the best of our knowledge, ours is the first theoretical study of meta-learning in both of these application settings. In the single-task setting, the problem of learning dispersed functions can be solved using simple methods such as the exponentially-weighted forecaster. To design an algorithm for learning to initialize online learners in this setting, we propose a method that optimizes a sequence of data-dependent upper bounds on the within-task regret \cite{khodak2019adaptive}. The result is an averaged bound that improves upon the regret of the single-task exponential forecaster so long as there exists an initial distribution that can compactly contain many of the within-task optima of the different tasks. Designing the meta-procedure is especially challenging in our setting because it involves online learning over a set of distributions on the domain. To handle this we study a ``prescient'' form of the classic follow-the-regularized-leader (FTRL) scheme that is run over an unknown discretization; we then show the existence of another algorithm that plays the same actions but uses only known information, thus attaining the same regret while being practical to implement. 
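To make the single-task baseline concrete, the following toy sketch (ours, not the paper's implementation) runs an exponentially weighted forecaster over a discretization of $C=[0,1]$ with piecewise-constant losses; the grid size, step-size, and loss thresholds are illustrative assumptions:

```python
import numpy as np

# Toy exponentially weighted forecaster on a discretized domain C = [0, 1].
# `w` plays the role of the (unnormalized) initialization measure; each round
# we sample a parameter from w and exponentially down-weight high-loss points.
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 1001)           # discretization of C
w = np.ones_like(grid)                       # uniform initial measure
lam = 0.5                                    # step-size (learning rate)

def play_round(w, loss_on_grid):
    rho = rng.choice(grid, p=w / w.sum())    # sampled parameter this round
    w_new = w * np.exp(-lam * loss_on_grid)  # multiplicative update
    return rho, w_new

# Piecewise-constant losses with a per-round discontinuity at theta:
for theta in [0.52, 0.48, 0.55]:
    loss = (grid < theta).astype(float)      # loss 1 below theta, 0 above
    _, w = play_round(w, loss)

# The surviving mass concentrates where cumulative loss is smallest:
assert grid[np.argmax(w)] >= 0.549
```

The exponential update preserves no-regret guarantees for dispersed discontinuities because only a vanishing fraction of the grid sits near any single threshold.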
To demonstrate the usefulness of our method, we study this algorithm in two settings. {\bf Multi-task data-driven algorithm design.} We consider data-driven tuning of the parameters of combinatorial optimization algorithms for hard problems such as knapsack and clustering. The likely intractability of these problems has led to several approaches to study them in more realistic settings, such as smoothed analysis \cite{spielman2004smoothed} and data-driven algorithm configuration \cite{balcan2020data}. We view our meta-learning approach as a refinement of the latter, in which we allow not only a distribution of instances but multiple distributions of related instances that can help learn a good algorithm. Our setting is more realistic than those considered in prior work. It is more challenging than learning from i.i.d. instances \cite{gupta2017pac,balcan2017learning}, but at the same time less pessimistic than online learning over adversarial problem instances~\cite{balcan2018dispersion}, as it allows us to leverage the similarity of problem instances coming from different but related distributions. We instantiate our bounds theoretically on several problems where the cost functions are piecewise-constant in the tuned parameters, allowing our meta-procedure to learn the right initial distribution for exponential forecasters. This includes well-known combinatorial optimization problems like finding the maximum weighted independent set (MWIS) of vertices on a graph, solving quadratic programs with integer constraints using algorithms based on the celebrated Goemans-Williamson algorithm, and mechanism design for combinatorial auctions. Then we consider experimentally the problem of tuning the right $\alpha$ for the $\alpha$-Lloyd's family of clustering algorithms~\cite{balcan2018data}. 
In experimental evaluations on two datasets---a synthetic Gaussian mixture model and the well-known Omniglot dataset from meta-learning \cite{lake2015human}---our meta-procedure leads to improved clustering accuracy compared to single-task learning to cluster. The results hold for both one-shot and five-shot clustering tasks. We also study our results for a family of greedy algorithms for the knapsack problem introduced by \cite{gupta2017pac} and obtain similar results for a synthetic dataset. {\bf Online robust meta-learning.} The second instantiation of our meta-learning procedure is to a new notion of adversarial robustness for the setting of online learning, where our results imply robust meta-learning in the presence of outliers. In this setting, the adversary can make (typically small) modifications to some example $x\in\mathcal X$, which can result in potentially large changes to the corresponding loss value $l_h(x)$, where $h\in\mathcal{H}$ is our hypothesis. For instance, consider the well-studied setting of adversarial examples for classification of images using deep neural networks \cite{nguyen2015deep,brendel2020adversarial}. Given a neural network $f$, the adversary can perturb a datapoint $x$ to a point $x'$, say within a small $L_p$-ball around $x$, such that $f(x)=f(x')$ but the true label of $x'$ does not match $x$, and therefore $l_f(x)\ne l_f(x')$. In general, under the adversarial influence, we observe a {\it perturbed loss} function $\Tilde{l}_h(x)=l_h(x)+a_h(x)$. Typically we are interested in optimizing both the perturbed loss $\Tilde{l}_h(x)$, i.e. measuring performance relative to the optimum for adversarially perturbed losses, and the {\it true loss} $l_h(x)$ (performance on the unobserved, unperturbed loss). 
For example, in the online learning setting, \cite{agarwal2019online}~consider perturbed loss minimization for linear dynamical systems, while \cite{resler2019adversarial} look at true $\{0,1\}$ loss minimization in the presence of adversarial noise. Our approach ensures that the regret for both the perturbed and the true loss is small, for piecewise-Lipschitz but dispersed adversaries. \section{An algorithm for meta-learning the initialization and step-size} Having established a single-task algorithm and shown how its regret depends on the initialization and step-size, we move on to meta-learning these hyperparameters. Recall that our goal is to make the task-averaged regret \eqref{eq:tar} small, in particular to improve upon the baseline of repeatedly running Algorithm~\ref{alg:ef} from the uniform distribution, up to $o_T(1)$ terms that vanish as we see more tasks. This accomplishes the meta-learning goal of using multiple tasks to improve upon single-task learning. In this paper, we use the strategy of running online learning algorithms on the data-dependent regret guarantees from above \cite{khodak2019adaptive}. If we can do so with sublinear regret in $T$, then we will improve upon the single-task guarantees up to $o_T(1)$ terms, as desired. Specifically, we are faced with a sequence of regret upper bounds $U_t(w,v)=(v+f_t(w)/v)\sqrt m+g(m)$ depending on nonnegative functions $w$ over $C$ and positive scalars $v>0$. Note that $g(m)$ cannot be improved via meta-learning, so we will focus on learning $w$ and $v$. To do so, we run two online algorithms, one over the functions $f_t$ and the other over $h_t(v)=v+f_t(w_t)/v$, where $w_t$ is set by the first procedure. As shown in the following result, if both procedures have sublinear regret then our task-averaged regret will have the desired properties: \begin{Thm}\label{lem:aruba} Assume each task $t\in[T]$ consists of a sequence of $m$ $\beta$-dispersed piecewise $L$-Lipschitz functions $\ell_{t,i}:C\mapsto[0,1]$.
Let $f_t$ and $g$ be functions such that the regret of Algorithm~\ref{alg:ef} run with step-size $\lambda=v/\sqrt m$ for $v>0$ and initialization $w:C\mapsto\mathbb R_{\ge0}$ is bounded by $U_t(w,v)=(v+f_t(w)/v)\sqrt m+g(m)$. Suppose we have a procedure that achieves $F_T(w)$ regret w.r.t. any $w:C\mapsto\mathbb R_{\ge0}$ by playing actions $w_t:C\mapsto\mathbb R_{\ge0}$ on $f_t$ and another procedure that achieves $H_T(v)$ regret w.r.t. any $v>0$ by playing actions $v_t>0$ on $h_t(v)=v+f_t(w_t)/v$, where $H_T$ is non-increasing on the positive reals. Then by setting $\rho_{t,i}$ using Algorithm~\ref{alg:ef} with step-size $v_t/\sqrt m$ and initialization $w_t$ at each task $t$ we get task-averaged regret bounded by \begin{equation} \left(\frac{H_T(V)}T+\min\left\{\frac{F_T(w^\ast)}{VT},2\sqrt{F_T(w^\ast)/T}\right\}+2V\right)\sqrt m+g(m) \end{equation} for $w^\ast=\argmin_{w:C\mapsto\mathbb R_{\ge0}}\sum_{t=1}^Tf_t(w)$ the optimal initialization and $V$ the task-similarity~\eqref{eq:tasksim}. \end{Thm} This result is an analog of \cite[Theorem~3.1]{khodak2019adaptive} and follows by manipulating the definition of regret. It reduces the problem of obtaining a small task-averaged regret to solving two online learning problems, one to set the initialization and one to set the step-size. So long as both have sublinear regret, we will improve over single-task learning. In the next two sections we derive suitable procedures. \subsection{Meta-learning the initialization}\label{sec:meta} We now come to the most technically challenging component of our meta-learning procedure: learning the initialization. As discussed above, we can accomplish this by obtaining a no-regret procedure for the function sequence $$f_t(w)=-\log\frac{\int_{\mathcal B(\rho_t^\ast,m^{-\beta})}w(\rho)d\rho}{\int_Cw(\rho)d\rho}.$$ This is nontrivial as the optimization domain is a set of nonnegative functions, effectively measures on the domain $C$.
To handle this, we first introduce some convenient notation and abstractions. At each task $t$ we are faced with some function $f_t$ associated with an unknown closed subset $C_t\subset C$ --- in particular $C_t=\mathcal B(\rho_t^\ast,m^{-\beta})$ --- with positive volume $\operatorname{vol}(C_t)>0$ that is revealed after choosing $w_t:C\mapsto\mathbb R_{\ge0}$. For each time $t$ define the discretization $$\mathcal D_t=\{D=\bigcap_{s\le t}C_s^{(\*c_{[s]})}:\*c\in\{0,1\}^t,\operatorname{vol}(D)>0\}$$ of $C$, where $C_t^{(0)}=C_t$ and $C_t^{(1)}=C\backslash C_t$. We will use elements of these discretizations to index nonnegative vectors in $\mathbb R_{\ge0}^{|\mathcal D_t|}$; specifically, for any measure $w:C\mapsto\mathbb R_{\ge0}$ let $\*w(t)\in\mathbb R_{\ge0}^{|\mathcal D_t|}$ denote the vector with entries $\*w(t)_{[D]}=\int_Dw(\rho)d\rho$ for $D\in\mathcal D_t$. Note that we will exclusively use $p,q,v,w$ for measures, with $v$ specifically referring to the uniform measure, i.e. $\*v(t)_{[D]}=\operatorname{vol}(D)$. For convenience, for all real vectors $\*x$ we will use $\*{\hat x}$ to denote $\*x/\|\*x\|_1$. Finally, we abuse notation and remove the parentheses to refer to those vectors associated with the final discretization, i.e. $\*v=\*v(T)$ and $\*w=\*w(T)$. Now that we have this notation we can turn back to the functions we are interested in: $f_t(w)=-\log\frac{\int_{C_t}w(\rho)d\rho}{\int_Cw(\rho)d\rho}$, where $C_t=\mathcal B(\rho_t^\ast,m^{-\beta})$. Observe that we can equivalently write this as $f_t(\*w)=-\log\langle\*w_t^\ast,\*{\hat w}\rangle$, where $\*w_{t[D]}^\ast=1_{D\subset C_t}$; this translates our online learning problem from the domain of measures on $C$ to the simplex on $|\mathcal D_T|$ elements. However, we cannot play in this domain explicitly as we do not have access to the final discretization $\mathcal D_T$, nor do we get access to $\*w_t^\ast$ after task $t$, except implicitly via $C_t$.
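For concreteness, the discretizations $\mathcal D_t$ are easy to maintain in one dimension: each new ball $C_t$ simply splits the existing cells at its endpoints. The following minimal sketch is our own illustration (assuming $d=1$, $C=[0,1]$, and balls already clipped to $C$; all names are ours):

```python
def refine(cells, ball):
    """Refine the current partition D_{t-1} of C = [0, 1] by a new set C_t.

    cells: list of (lo, hi) intervals partitioning C
    ball:  (lo, hi) interval representing C_t (already intersected with C)
    Returns D_t: each cell is split at the endpoints of C_t that fall
    strictly inside it; zero-volume cells are dropped.
    """
    lo_b, hi_b = ball
    refined = []
    for lo, hi in cells:
        cuts = sorted({lo, hi} | {x for x in (lo_b, hi_b) if lo < x < hi})
        refined += [(a, b) for a, b in zip(cuts, cuts[1:]) if b > a]
    return refined

# Two tasks whose balls B(rho_t, m^{-beta}) are (0.2, 0.4) and (0.4, 0.6):
cells = [(0.0, 1.0)]
for ball in [(0.2, 0.4), (0.4, 0.6)]:
    cells = refine(cells, ball)
print(cells)  # -> [(0.0, 0.2), (0.2, 0.4), (0.4, 0.6), (0.6, 1.0)]
volumes = [b - a for a, b in cells]  # entries of the volume vector v(t)
```

Each resulting interval plays the role of an index $D\in\mathcal D_t$, and the corresponding entry of $\*w(t)$ is the measure's mass on that interval.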
In this section we design a method that implicitly runs an online convex optimization procedure over $\mathbb R_{\ge0}^{|\mathcal D_T|}$ while explicitly playing probability measures $w:C\mapsto\mathbb R_{\ge0}$. \begin{algorithm}[!t] \caption{Follow-the-Regularized-Leader (prescient form)} \label{alg:ftrl} \begin{algorithmic}[1] \STATE {\bfseries Input:} discretization $\mathcal D_T$ of $C$, mixture parameter $\gamma\in[0,1]$, step-size $\eta>0$ \STATE Initialize $\*w_1=\*{\hat v}$ \FOR{$t=1,2,\dots,T$} \STATE Play $\*w_t$. \STATE Suffer $f_t(\*w_t)=-\log\langle\*w_t^\ast,\*w_t\rangle$. \STATE Observe $f_t$. \STATE Update $\*w_{t+1}=\argmin_{\|\*w\|_1=1,\*w\ge\gamma\*{\hat v}}D_{KL}(\*w||\*{\hat v})+\eta\sum_{s\le t}f_s(\*w)$ \ENDFOR \end{algorithmic} \end{algorithm} As the functions $f_t$ are exp-concave, one might first consider applying a method attaining logarithmic regret on such losses \cite{hazan2007logarithmic,orabona2012beyond}; however, such algorithms have regret that depends linearly on the dimension, which in our case is poly$(T)$. We thus turn to the follow-the-regularized-leader (FTRL) family of algorithms, which in the case of entropic regularization is well-known to have regret logarithmic in the dimension \cite{shalev-shwartz2011oco}. In Algorithm~\ref{alg:ftrl} we display the pseudo-code of a modification with regularizer $D_{KL}(\cdot||\*{\hat v})$, where recall $\*v$ is the vector of volumes of the discretization $\mathcal D_T$ of $C$, and we constrain the played distribution to have measure at least $\gamma\*{\hat v}_{[D]}$ over every set $D\in\mathcal D_T$. While Algorithm~\ref{alg:ftrl} explicitly requires knowing the discretization $\mathcal D_T$ of $C$ in advance, the following key lemma shows that we can run the procedure knowing only the discretization $\mathcal D_t$ after task $t$ by simply minimizing the same objective over probability distributions discretized on $\mathcal D_t$.
This crucially depends on the re-scaling of the entropic regularizer by $\*{\hat v}$ (which notably corresponds to the uniform distribution over $C$) and the fact that $\*w_t^\ast\in\{0,1\}^{|\mathcal D_T|}$. \begin{Lem}\label{lem:equivalent} Let $w:C\mapsto\mathbb R_{\ge0}$ be the probability measure corresponding to the minimizer \begin{equation} \*w=\argmin_{\|\*q\|_1=1,\*q\ge\gamma\*{\hat v}}D_{KL}(\*q||\*{\hat v})-\eta\sum_{s\le t}\log\langle\*w_s^\ast,\*q\rangle \end{equation} and let $\tilde w:C\mapsto\mathbb R_{\ge0}$ be the probability measure corresponding to the minimizer \begin{equation} \tilde{\*w}(t)=\argmin_{\|\*q\|_1=1,\*q\ge\gamma\*{\hat v}(t)}D_{KL}(\*q||\*{\hat v}(t))-\eta\sum_{s\le t}\log\langle\*w_s^\ast(t),\*q\rangle \end{equation} Then $\*w=\tilde{\*w}$. \end{Lem} We can thus move on to proving a regret guarantee for Algorithm~\ref{alg:ftrl}. This follows from Jensen's inequality together with standard results for FTRL once we show that the loss functions are $\frac1{\gamma\operatorname{vol}(C_t)}$-Lipschitz over the constrained domain, yielding the following guarantee for Algorithm~\ref{alg:ftrl}: \newpage \begin{Thm}\label{thm:frl} Algorithm~\ref{alg:ftrl} has regret bounded by \begin{equation} \frac{1-\gamma}\eta D_{KL}(\*w^\ast||\*{\hat v})+\frac\eta{\gamma^2}\sum_{t=1}^T\frac1{(\operatorname{vol}(C_t))^2}+\gamma\sum_{t=1}^T\log\frac1{\operatorname{vol}(C_t)} \end{equation} w.r.t. the optimum in hindsight $\*w^\ast\in\argmin_{\|\*w\|_1=1,\*w\ge\*0}\sum_{t=1}^Tf_t(\*w)$ of the functions $f_t$. Setting $\gamma^2=GB/\sqrt T$ and $\eta^2=\frac{B^2\gamma^2}{TG^2}$, where $B^2=D_{KL}(\*w^\ast||\*{\hat v})$ and $G^2=\frac1T\sum_{t=1}^T\frac1{(\operatorname{vol}(C_t))^2}$, yields sublinear regret $\tilde O(\sqrt{BG}T^\frac34)$.
\end{Thm} \begin{proof} Algorithm~\ref{alg:ftrl} is standard FTRL with regularizer $\frac1\eta D_{KL}(\cdot||\*{\hat v})$, which has the same Hessian as the standard entropic regularizer over the simplex and is thus $\frac1\eta$-strongly-convex w.r.t. $\|\cdot\|_1$ \cite[Example~2.5]{shalev-shwartz2011oco}. Applying Jensen's inequality, the standard regret bound for FTRL \cite[Theorem~2.11]{shalev-shwartz2011oco} together with the Lipschitz guarantee of Claim~\ref{clm:overlip}, and Jensen's inequality again yields the result: \begin{align*} \sum_{t=1}^Tf_t(\*w_t)-f_t(\*w^\ast) &=\sum_{t=1}^Tf_t(\*w_t)-(1-\gamma)f_t(\*w^\ast)-\gamma f_t(\*{\hat v})+\gamma(f_t(\*{\hat v})-f_t(\*w^\ast))\\ &\le\sum_{t=1}^Tf_t(\*w_t)-f_t(\gamma\*{\hat v}+(1-\gamma)\*w^\ast)+\gamma\log\frac{\langle\*w_t^\ast,\*w^\ast\rangle}{\langle\*w_t^\ast,\*{\hat v}\rangle}\\ &\le\frac1\eta D_{KL}(\gamma\*{\hat v}+(1-\gamma)\*w^\ast||\*{\hat v})+\frac\eta{\gamma^2}\sum_{t=1}^T\frac1{(\operatorname{vol}(C_t))^2}+\gamma\sum_{t=1}^T\log\frac1{\operatorname{vol}(C_t)}\\ &\le\frac{1-\gamma}\eta D_{KL}(\*w^\ast||\*{\hat v})+\frac\eta{\gamma^2}\sum_{t=1}^T\frac1{(\operatorname{vol}(C_t))^2}+\gamma\sum_{t=1}^T\log\frac1{\operatorname{vol}(C_t)} \end{align*} \end{proof} Since the regret is sublinear in $T$, this result satisfies our requirement for attaining asymptotic improvement over single-task learning via Theorem~\ref{lem:aruba}. However, there are several aspects of this bound that warrant some discussion. The first is the rate of $T^\frac34$, which is worse than the standard $\sqrt T$ rate and certainly than the $\log T$ regret attainable for exp-concave functions. However, the functions we face are (a) non-Lipschitz and (b) over a domain that has dimensionality $\Omega(T)$; both violate conditions for good rates in online convex optimization~\cite{hazan2007logarithmic,shalev-shwartz2011oco}, making our problem much more difficult.
A more salient aspect is the dependence on $B^2=D_{KL}(\*w^\ast||\*{\hat v})$, effectively the negative entropy of the optimal initialization. This quantity is in principle unbounded but is analogous to standard online convex optimization bounds that depend on the norm of the optimum, which in e.g. the Euclidean case are also unbounded. In our case, if the optimal distribution is highly concentrated on a very small subset of the space, it will be difficult to compete with. Note that our setting of $\eta$ depends on knowing or guessing $B$; this is also standard but is certainly a target for future work to address. For example, past work on parameter-free algorithms has solutions for optimization over the simplex~\cite{orabona2016parameter}; however, it is unclear whether this is straightforward to do while preserving the property given by Lemma~\ref{lem:equivalent} allowing us to implicitly work with an unknown discretization. A more reasonable approach may be to compete only with smooth measures that assign probability at most $\kappa\operatorname{vol}(D)$ to any subset $D\subset C$ for some constant $\kappa\ge1$; in this case we will simply have $B$ bounded by $\log\kappa$. A final issue is the dependence on $\sqrt G$, which is bounded by the reciprocal of the smallest volume $\operatorname{vol}(C_t)$, which in the dispersed case is roughly $O(m^{\beta d})$; this means that the task-averaged regret will have a term that, while decreasing as we see additional tasks, is {\em increasing} in the number of within-task iterations and the dispersion parameter, which is counter-intuitive. This dependence is also exponential in the dimension. Note that in the common algorithm configuration setting of $\beta=1/2$ and $d=1$ this will simply mean that for each task we suffer an extra $o_T(1)$ loss at each within-task round, a quantity which vanishes asymptotically.
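For intuition, the constrained update of Algorithm~\ref{alg:ftrl} can be simulated numerically on a small discretization. The sketch below is our own illustration: it replaces the exact minimization with an approximate exponentiated-gradient inner loop plus a heuristic floor projection, and all constants are arbitrary. It shows mass concentrating on a cell that contained earlier optima while the $\gamma\*{\hat v}$ constraint keeps every cell's mass bounded below:

```python
import numpy as np

def ftrl_update(vhat, ws_star, eta, gamma, iters=2000, lr=0.05):
    """Approximately solve the update of Algorithm 2,
        argmin_{||w||_1 = 1, w >= gamma*vhat}
            KL(w || vhat) - eta * sum_s log<w*_s, w>,
    by exponentiated gradient with a floor projection (illustrative only;
    a real implementation would call an exact convex solver).
    vhat:    normalized cell volumes of the discretization
    ws_star: list of 0/1 indicator vectors of the balls C_s over the cells
    """
    w = vhat.copy()
    for _ in range(iters):
        grad = np.log(w / vhat) + 1.0          # gradient of KL(w || vhat)
        for ws in ws_star:
            grad -= eta * ws / ws.dot(w)       # gradient of -eta*log<w*_s, w>
        w = w * np.exp(-lr * grad)             # multiplicative (mirror) step
        w /= w.sum()
        w = np.maximum(w, gamma * vhat)        # heuristic gamma*vhat floor
        w /= w.sum()
    return w

# Four cells of equal volume; both tasks so far had their optimum in cell 1:
vhat = np.full(4, 0.25)
ws_star = [np.array([0.0, 1.0, 0.0, 0.0])] * 2
w = ftrl_update(vhat, ws_star, eta=0.5, gamma=0.1)
# mass shifts toward cell 1 while every cell keeps at least gamma*vhat mass
```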
\subsection{Meta-learning the step-size} In addition to learning the initialization, Theorem~\ref{lem:aruba} requires learning the task-similarity to set the within-task step-size $\lambda>0$. This involves optimizing functions of the form $h_t(v)=v+f_t(w_t)/v$. Since we know that the measures $w_t$ are lower-bounded in terms of $\gamma$, we can apply a previous result \cite{khodak2019adaptive} that solves this by running the EWOO algorithm \cite{hazan2007logarithmic} on the modified sequence $v+\frac{f_t(w_t)+\varepsilon^2}v$: \begin{Cor}\label{cor:ewoo} For any $\varepsilon>0$, running the EWOO algorithm on the modified sequence $v+\frac{f_t(w_t)+\varepsilon^2}v$ over the domain $[\varepsilon,\sqrt{D^2-\log\gamma+\varepsilon^2}]$, where $D^2\ge\frac1T\sum_{t=1}^T\log\frac1{\operatorname{vol}(C_t)}$, attains regret \begin{equation} \min\left\{\frac{\varepsilon^2}{v^\ast},\varepsilon\right\}T+\frac{\sqrt{D^2-\log\gamma}}2\max\left\{\frac{D^2-\log\gamma}{\varepsilon^2},1\right\}(1+\log(T+1)) \end{equation} on the original sequence $h_t(v)=v+f_t(w_t)/v$ for all $v^\ast>0$. \end{Cor} Setting $\varepsilon=1/\sqrt[4]T$ gives a guarantee of the form $\tilde O((\min\{1/v^\ast,\sqrt[4]T\})\sqrt T)$. Note that this rate might be improvable by using the fact that $v$ is lower-bounded due to the $\gamma$-constraint; however, we do not focus on this since this component is not the dominant term in the regret.
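Since the EWOO play on the modified sequence is just a one-dimensional weighted average, it admits a direct numerical implementation. The sketch below is our own illustration with arbitrary toy constants (grid integration stands in for the exact integrals; a real implementation would use the $\alpha$ and domain prescribed by the analysis):

```python
import numpy as np

def ewoo_step(f_vals, eps, alpha, lo, hi, grid=10_000):
    """Play the EWOO action for the modified losses h_s(v) = v + (f_s + eps^2)/v
    over [lo, hi]: the average of v weighted by exp(-alpha * cumulative loss).
    f_vals holds the nonnegative values f_s(w_s) observed on tasks so far."""
    x = np.linspace(lo, hi, grid)
    t = len(f_vals)
    cum = t * x + (sum(f_vals) + t * eps**2) / x  # sum_{s<=t} h_s(x)
    mu = np.exp(-alpha * (cum - cum.min()))       # shift for numerical stability
    weights = mu / mu.sum()                       # grid approximation of the integral
    return float((x * weights).sum())

eps = 0.1
v_next = ewoo_step([0.04, 0.09, 0.01], eps=eps, alpha=1.0, lo=eps, hi=2.0)
# v_next is pulled toward argmin_x sum_s h_s(x) = sqrt(mean(f_vals) + eps^2)
```

The within-task step-size is then this value divided by $\sqrt m$, as in the meta-algorithm; larger $\alpha$ concentrates the weights more sharply around the cumulative minimizer.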
In fact, because of this we can adapt a related method that simply runs follow-the-leader (FTL) on the same modified sequence~\cite{khodak2019adaptive} without affecting the dominant terms in the regret: \begin{Cor}\label{cor:ftl} For any $\varepsilon>0$, running the FTL algorithm on the modified sequence $v+\frac{f_t(w_t)+\varepsilon^2}v$ over the domain $[\varepsilon,\sqrt{D^2-\log\gamma+\varepsilon^2}]$, where $D^2\ge\frac1T\sum_{t=1}^T\log\frac1{\operatorname{vol}(C_t)}$, attains regret \begin{equation} \min\left\{\frac{\varepsilon^2}{v^\ast},\varepsilon\right\}T+2\sqrt{D^2-\log\gamma}\max\left\{\frac{(D^2-\log\gamma)^\frac32}{\varepsilon^3},1\right\}(1+\log(T+1)) \end{equation} on the original sequence $h_t(v)=v+f_t(w_t)/v$ for all $v^\ast>0$. \end{Cor} Setting $\varepsilon=1/\sqrt[5]T$ gives a guarantee of the form $\tilde O((\min\{1/v^\ast,\sqrt[5]T\})T^\frac35)$. The alternatives are described in pseudocode at the bottom of Algorithm~\ref{alg:meta}; while the guarantee of the FTL-based approach is worse, it is almost as simple to compute as the task-similarity and does not require integration, making it easier to implement. \subsection{Putting the two together} \begin{algorithm}[!t] \caption{ Meta-learning the parameters of the exponential forecaster (Algorithm~\ref{alg:ef}). Recall that $\*p(t)$ refers to the time-$t$ discretization of the measure $p:C\mapsto\mathbb R_{\ge0}$ (cf. Section~\ref{sec:meta}). } \label{alg:meta} \begin{algorithmic}[1] \STATE {\bfseries Input:} domain $C\subset\mathbb R^d$, dispersion $\beta>0$, step-size $\eta>0$, constraint parameter $\gamma\in[0,1]$, offset parameter $\varepsilon>0$, domain parameter $D>0$. \STATE Initialize $w_1$ to the uniform measure on $C$ and set $\lambda_1=\frac{\varepsilon+\sqrt{D^2+\varepsilon^2-\log\gamma}}{2\sqrt m}$. \FOR{task $t=1,2,\dots,T$} \STATE Run Algorithm~\ref{alg:ef} with initialization $w_t$ and step-size $\lambda_t$ and obtain task-$t$ optimum $\rho_t^\ast\in C$.
\STATE Set $w_t^\ast=1_{\mathcal B(\rho_t^\ast,m^{-\beta})}$ to be the function that is 1 in the $m^{-\beta}$-ball around $\rho_t^\ast$ and 0 elsewhere. \STATE Set $w_{t+1}$ to $\*w_{t+1}(t)=\argmin_{\|\*w\|_1=1,\*w\ge\gamma\*{\hat v}(t)}D_{KL}(\*w||\*{\hat v}(t))-\eta\sum_{s\le t}\log\langle\*w_s^\ast(t),\*w\rangle$. \IF{using EWOO} \STATE Define $\mu_t(x)=\exp\left(-\alpha\left(tx+\frac{t\varepsilon^2-\sum_{s\le t}\log\langle\*w_s^\ast(s),\*w_s(s)\rangle}x\right)\right)$ for $\alpha=\frac2D\min\left\{\frac{\varepsilon^2}{D^2},1\right\}$. \STATE Set $\lambda_{t+1}=\frac{\int_\varepsilon^{\sqrt{D^2+\varepsilon^2-\log\gamma}}x\mu_t(x)dx}{\sqrt m\int_\varepsilon^{\sqrt{D^2+\varepsilon^2-\log\gamma}}\mu_t(x)dx}$. \ELSE \STATE Set $\lambda_{t+1}=\sqrt{\frac{\sum_{s\le t}\varepsilon^2-\log\langle\*w_s^\ast(s),\*w_s(s)\rangle}{tm}}$. \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} Now that we have an algorithm for both the initialization and the step-size, we can combine the two in Algorithm~\ref{alg:meta} to meta-learn the parameters of the exponential forecaster. Then we can obtain a bound on the task-averaged regret from Theorem~\ref{lem:aruba} to attain our final result. \begin{Thm}\label{thm:tar} Define $B^2=D_{KL}(\*w^\ast||\*{\hat v})$, $G^2=\frac1T\sum_{t=1}^T\frac1{(\operatorname{vol}(C_t))^2}$, and $D^2\ge\frac1T\sum_{t=1}^T\log\frac1{\operatorname{vol}(C_t)}=O(\beta d\log m)$. Then Algorithm~\ref{alg:meta} with $\eta,\gamma$ set as in Theorem~\ref{thm:frl} and $\varepsilon=1/\sqrt[4]T$ (if using EWOO) or $1/\sqrt[5]T$ (otherwise) yields task-averaged regret \begin{equation} \tilde O\left(\min\left\{\frac{\sqrt{BG}}{V\sqrt[4]T},\frac{\sqrt[4]{BG}}{\sqrt[8]T}\right\}+2V\right)\sqrt m+g(m) \end{equation} Here $V$ is the task-similarity \eqref{eq:tasksim}.
\end{Thm} So as in past work in meta-learning, this achieves the goal of adapting to the task-similarity by attaining asymptotic regret of $2V\sqrt m+O(m^{-\beta})$ on average, where here we substitute the dispersion term for $g$ and $V^2$ is the task-similarity encoding the average probability mass assigned to the different task balls by the optimal initialization distribution. We include the minimum of two rates in the bound, with the rate being $1/\sqrt[4]T$ if the task-similarity is a constant $\Theta_T(1)$ and $1/\sqrt[8]T$ if it is extremely small. As discussed above, this rate reflects the difficulty of our meta-problem, in which we are optimizing non-smooth functions over a space of distributions; in contrast, past meta-update procedures have taken advantage of nice properties of Bregman divergences to obtain faster rates \cite{khodak2019adaptive}. \section{Conclusion} In this paper we studied the initialization-based meta-learning of piecewise-Lipschitz functions, demonstrating how online convex optimization over an adaptive discretization can find an initialization that improves the performance of the exponential forecaster across tasks, assuming the tasks have related optima. We then applied this result in two settings: online configuration of clustering algorithms and adversarial robustness in online learning. For the latter we introduced a dispersion-based understanding of robustness that we believe to be of independent interest. In addition, there are further interesting applications of our work to other algorithm configuration problems.
\section*{Acknowledgments} This material is based on work supported in part by the National Science Foundation under grants CCF-1535967, CCF-1910321, IIS-1618714, IIS-1705121, IIS-1838017, IIS-1901403, IIS-2046613, and SES-1919453; the Defense Advanced Research Projects Agency under cooperative agreements HR00112020003 and FA875017C0141; an AWS Machine Learning Research Award; an Amazon Research Award; a Bloomberg Research Grant; a Microsoft Research Faculty Fellowship; an Amazon Web Services Award; a Facebook Faculty Research Award; funding from Booz Allen Hamilton Inc.; and a Block Center Grant. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of any of these funding agencies. \section*{Checklist} \begin{enumerate} \item For all authors...
\begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \answerYes{} \item Did you describe the limitations of your work? \answerYes{Alongside contributions in context, e.g. the end of Section~\ref{sec:meta}.} \item Did you discuss any potential negative societal impacts of your work? \answerNo{Our concern w.r.t. the negative societal impact of this theoretical work is limited to standard risks associated with ML, e.g. for privacy or fair treatment.} \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \answerYes{} \end{enumerate} \item If you are including theoretical results... \begin{enumerate} \item Did you state the full set of assumptions of all theoretical results? \answerYes{} \item Did you include complete proofs of all theoretical results? \answerYes{} \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? \answerYes{Supplemental material.} \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? \answerYes{} \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? \answerYes{} \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerNA{Experiments run on personal computer (16GB, 2.3 GHz Dual-Core).} \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate} \item If your work uses existing assets, did you cite the creators? \answerYes{} \item Did you mention the license of the assets? \answerNA{} \item Did you include any new assets either in the supplemental material or as a URL? 
\answerNA{} \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerNA{} \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? \answerNA{} \end{enumerate} \item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate} \item Did you include the full text of instructions given to participants and screenshots, if applicable? \answerNA{} \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? \answerNA{} \item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? \answerNA{} \end{enumerate} \end{enumerate} \fi \subsection{Related work}\label{sec:related} The success of meta-learning has led to significant theoretical effort to understand it. Most efforts studying initialization-based meta-learning focus on the convex Lipschitz setting \cite{denevi2019ltlsgd,khodak2019provable}; work studying inherently nonconvex modeling approaches instead usually studies multi-task representation learning~\cite{balcan2015lifelong,maurer2016mtl,du2021fewshot,tripuraneni2021provable} or targets optimization properties, e.g. stationary-point convergence \cite{fallah2020meta}. An exception is a study of linear models over Gaussian data showing that nonconvexity is critical to meta-learning an initialization that exploits low-rank task structure \cite{saunshi2020meta}. There is also work extending results from the neural tangent kernel literature to meta-learning \cite{zhou2021meta}, but in this case the objective becomes convex. On the other hand, we study initializations for learning a class of functions that can be highly non-convex and have numerous discontinuities.
Theoretically, our work uses the Average Regret-Upper-Bound Analysis (ARUBA) strategy~\cite{khodak2019adaptive} for obtaining a meta-update procedure for initializing within-task algorithms, which has been applied elsewhere for privacy \cite{li2020dp} and federated learning \cite{khodak2021fedex}; the main technical advance in our work is in providing the guarantees for it in our setting, which is challenging due to the need to learn over a space of probability measures. Data-driven configuration is the selection of an algorithm from a parameterized family by learning over multiple problem instances \cite{gupta2017pac,balcan2017learning}. In other words, it is `hyperparameter tuning' with formal guarantees, and has applications to integer programming, clustering, and learning with limited labeled data \cite{balcan2018learning,balcan2019learning,balcan2021data}. In this work, we show how this general approach can be made even more effective by enabling it to adapt to task similarity. We also show applications of our results to robust meta-learning in the presence of outliers in the dataset \cite{pillutla2019robust,kong2020robust}. While previous work on robust online learning has considered adversaries with bounded perturbation in the online learning setting \cite{agarwal2019online,resler2019adversarial}, our results allow potentially unbounded perturbations, provided the adversary uses a smooth distribution. That is, the adversarial attack can be thought of as a distribution of perturbations, similar to the smoothed analysis approach of \cite{spielman2004smoothed}. In the offline setting, a similar attack is studied in the context of deep network feature-space attacks by \cite{balcan2020power}. We also remark that our formulation has a poisoning aspect, since we do not observe the clean loss $l_h(x)$, which is of particular interest in federated learning \cite{bagdasaryan2020backdoor,tolpegin2020data}. 
Also, note that unlike the typical applications of data-driven design where optimization is over the dual loss function, i.e. loss as a function of the algorithm parameter for a fixed sample $x\in\mathcal X$, here we consider learning loss or confidence functions over the input space $\mathcal X$. \section{Meta-learning for data-driven algorithm design} We demonstrate the utility of our bounds in a series of applications across two general areas: data-driven algorithm design \cite{balcan2020data} and robust learning. This section focuses on the former and demonstrates how our results imply guarantees for meta-learning the tuning of solvers for several difficult combinatorial problems arising from the theory of computing. We also demonstrate the practical utility of our approach for tuning clustering algorithms on real and synthetic datasets. \subsection{Instantiations for tuning combinatorial optimization algorithms} Algorithm configuration for combinatorial optimization algorithms involves learning algorithm parameters from multiple instances of combinatorial problems \cite{gupta2017pac,balcan2017learning,balcan2020data}. For well-known problems like MWIS (maximum weighted independent set), IQP (integer quadratic programming), and mechanism design for auctions, the algorithmic performance on a fixed instance is typically a piecewise Lipschitz function of the algorithm parameters. Prior work has looked at learning these parameters in the distributional setting (i.e. assuming i.i.d. draws of problem instances) \cite{balcan2017learning} or the online setting where the problem instances may be adversarially drawn \cite{balcan2018dispersion,balcan2020learning}. On the other hand, instantiating our results for these problems provides upper bounds for much more realistic settings where different tasks may be related, and our bounds improve with this relatedness. We demonstrate how to apply our results to several combinatorial problems under mild smoothness assumptions.
The key idea is to show that if the inputs come from a smooth distribution, the algorithmic performance is dispersed (as a sequence of functions in the algorithm parameters). We leverage known results about the MWIS problem to show $\frac{1}{2}$-dispersion, which together with Theorem \ref{thm:tar} implies that our bound on the task-averaged regret improves with task similarity $V$. {\bf The MWIS problem.} In MWIS, there is a graph $G=(V,E)$ and a weight $w_v\in\mathbb R^+$ for each vertex $v\in V$. The goal is to find a set of non-adjacent vertices with maximum total weight. The problem is $NP$-hard and in fact does not have any constant-factor polynomial-time approximation algorithm. \cite{gupta2017pac} propose a greedy heuristic family, which selects vertices greedily based on the largest value of $w_v / (1 + \text{deg}(v))^\rho$, where $\text{deg}(v)$ is the degree of vertex $v$, and removes neighbors of the selected vertex before selecting the next vertex. For this algorithm family, we can learn the best parameter $\rho$ provided pairs of vertex weights have a joint $\kappa$-bounded distribution, and Theorem \ref{thm:tar} implies regret bounds that improve with task similarity. We use the recipe from \cite{balcan2020semi} to establish dispersion. \begin{Thm}\label{thm:mwis-tar} Consider instances of MWIS with all vertex weights in $(0, 1]$ and for each instance, every pair of vertex weights has a $\kappa$-bounded joint distribution. Then the asymptotic task-averaged regret for learning the algorithm parameter $\rho$ is $o_T(1)+2V\sqrt{m}+O(\sqrt{m})$. \end{Thm} \begin{proof}[Proof sketch] The loss function is piecewise constant with discontinuities corresponding to $\rho$ such that $w_v / (1 + \text{deg}(v))^\rho=w_u / (1 + \text{deg}(u))^\rho$ for a pair of vertices $u,v$. \cite{balcan2018dispersion} show that the discontinuities have $(\kappa \ln n)$-bounded distributions where $n$ is the number of vertices.
This implies that in any interval of length $\epsilon$, we have in expectation at most $\epsilon\kappa \ln n$ discontinuities. Using this in the dispersion recipe from \cite{balcan2020semi} implies $\frac{1}{2}$-dispersion, which in turn implies the desired regret bound by applying Theorem \ref{thm:tar}. \end{proof} Similar results may be obtained for other combinatorial problems including knapsack, $k$-center clustering, IQP and auction design (see Appendix \ref{app: combinatorial} for full details). We further show instantiations of our results for knapsack and $k$-center clustering, for which we will empirically validate our proposed methods in the next section. {\bf Greedy Knapsack.} Knapsack is a well-known NP-complete problem. We are given a knapsack with capacity $\texttt{cap}$ and items $i\in[m]$ with sizes $w_i$ and values $v_i$. The goal is to select a subset $S$ of items to add to the knapsack such that $\sum_{i\in S}w_i\le \texttt{cap}$ while maximizing the total value $\sum_{i\in S}v_i$ of selected items. The classic greedy heuristic to add items in decreasing order of $v_i/w_i$ gives a 2-approximation. We consider a generalization to use $v_i/w_i^{\rho}$ proposed by \cite{gupta2017pac} for $\rho\in[0,10]$. For example, for the value-weight pairs $\{(0.99,1),(0.99,1),(1.01,1.01)\}$ and capacity $\texttt{cap}=2$ the classic heuristic $\rho=1$ gives value $1.01$ but using $\rho=3$ gives the optimal value $1.98$. We can learn this optimal value of $\rho$ from similar tasks, and obtain formal guarantees similar to Theorem \ref{thm:mwis-tar} (proof in Appendix \ref{app: combinatorial}). \begin{Thm} Consider instances of the knapsack problem given by bounded weights $w_{i,j}\in[1,C]$ and $\kappa$-bounded independent values $v_{i,j}\in[0,1]$ for $i\in[m],j\in[T]$. Then the asymptotic task-averaged regret for learning the algorithm parameter $\rho$ for the greedy heuristic family described above is $o_T(1)+2V\sqrt{m}+O(\sqrt{m})$.
\end{Thm} {\bf $k$-center clustering.} We consider the parameterized $\alpha$-Lloyd's algorithm family introduced in \cite{balcan2018data}. In the seeding phase, each point $x$ is sampled with probability proportional to $\min_{c\in C}d(x, c)^{\alpha}$, where $d(\cdot,\cdot)$ is the distance metric and $C$ is the set of centers chosen so far. The family contains an algorithm for each $\alpha\in[0,\infty)\cup \{\infty\}$, and includes popular clustering heuristics like vanilla $k$-means (random initial centers, for $\alpha=0$), $k$-means++ (corresponding to $\alpha=2$) and farthest-first traversal ($\alpha=\infty$). The performance of the algorithm is measured using the Hamming distance to the optimal clustering, and is a piecewise constant function of $\alpha$. Our meta-learning result can be instantiated for this problem even without smoothness assumptions (simply leveraging the smoothness induced by the internal randomness of the clustering algorithm; proof in Appendix \ref{app: combinatorial}). \begin{Thm} Consider instances of the $k$-center clustering problem on $n$ points, with Hamming loss $l_{i,j}$ for $i\in[m],j\in[T]$ against some (unknown) ground truth clustering. Then the asymptotic task-averaged regret for learning the algorithm parameter $\alpha$ for the $\alpha$-Lloyd's clustering algorithm family of \cite{balcan2018data} is $o_T(1)+2V\sqrt{m}+O(\sqrt{m})$. \end{Thm} In the following section we look at applications of our results through experiments for the knapsack and $k$-center clustering problems. \subsection{Experiments for greedy knapsack and $k$-center clustering}\label{sec:experiments} We design experiments to evaluate our new meta-initialization algorithm for data-driven design for knapsack and clustering problems on real and simulated data. Our experiments show the usefulness of our techniques in learning a sequence of piecewise-Lipschitz functions. For our experiments, we generate a synthetic dataset of knapsack instances described as follows.
For each problem instance of each task, we have $\texttt{cap}=100$ and $m=50$. We have $10$ `heavy' items with $w_i\sim \mathcal{N}(27,0.5)$ and $v_i\sim \mathcal{N}(27,0.5)$, and $40$ items with $w_i\sim \mathcal{N}(19+w_t,0.5)$ and $v_i\sim \mathcal{N}(18,0.5)$, where $w_t\in[0,2]$ is task-dependent. We also consider the parameterized $\alpha$-Lloyd's algorithm family introduced in \cite{balcan2018data}; as noted above, its performance is measured using the Hamming loss relative to the optimal clustering and is a piecewise constant function of $\alpha$. We can compute the pieces of this function for $\alpha\in[0,10]$ by iteratively computing the subset of parameter values where a candidate point can be the next center. We use the small split of the {\it Omniglot} dataset \cite{lake2015human}, and create clustering tasks by drawing random samples consisting of five characters each, where four characters are constant throughout. We also create a Gaussian mixture binary classification dataset where each class is a 2D Gaussian distribution consisting of 100 points each, with covariance matrix $\begin{pmatrix} \sigma & 0\\ 0 & 2\sigma \end{pmatrix}$ and centers $(0,0)$ and $(d\sigma,0)$. We pick $d\in[2,3]$ to create different tasks. For each dataset we learn using 30 instances each of 10 training tasks and evaluate average loss over 5 test tasks. We perform 100 iterations to average over the randomization of the clustering algorithm and the exponential forecaster algorithm. We perform meta-initialization with parameters $\gamma=\eta=0.01$ (no hyperparameter search performed). The step-size is set to minimize the regret term in Theorem \ref{thm:exp-forc-meta}, and is not meta-learned. The relative improvement in task-averaged regret due to meta-learning in our formal guarantees depends on the task-similarity $V$ and how it compares to the dispersion-related $O(m^{1-\beta})$ term, and can be significant when the latter is small.
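For concreteness, the parameterized greedy heuristic and our synthetic instance generation can be sketched in Python as follows (a minimal sketch of ours; the function names, tie-breaking order, and random-seed handling are illustrative assumptions rather than the exact experimental code):

```python
import random

def greedy_knapsack_value(values, weights, cap, rho):
    """Greedy heuristic family: add items in decreasing order of the
    score v_i / w_i**rho, skipping items that no longer fit in the
    remaining capacity, and return the total packed value."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i] ** rho,
                   reverse=True)
    total_value, used = 0.0, 0.0
    for i in order:
        if used + weights[i] <= cap:
            used += weights[i]
            total_value += values[i]
    return total_value

def sample_instance(w_t, rng):
    """One synthetic instance as described in the text: 10 'heavy'
    items with w_i, v_i ~ N(27, 0.5), and 40 items with
    w_i ~ N(19 + w_t, 0.5), v_i ~ N(18, 0.5) for task parameter w_t."""
    items = [(rng.gauss(27, 0.5), rng.gauss(27, 0.5)) for _ in range(10)]
    items += [(rng.gauss(18, 0.5), rng.gauss(19 + w_t, 0.5)) for _ in range(40)]
    values, weights = zip(*items)
    return list(values), list(weights)

# Toy instance from the text: rho = 1 greedily takes the single item
# of value 1.01, while rho = 3 packs both 0.99-value items instead.
vals, wts = [0.99, 0.99, 1.01], [1.0, 1.0, 1.01]
assert abs(greedy_knapsack_value(vals, wts, 2.0, rho=1.0) - 1.01) < 1e-9
assert abs(greedy_knapsack_value(vals, wts, 2.0, rho=3.0) - 1.98) < 1e-9
```

Learning the parameter then amounts to maintaining a distribution over $\rho\in[0,10]$ and updating it with the exponential forecaster based on the observed values across instances.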
Our results in Table~\ref{table: meta initialization} show that meta-learning an initialization, i.e. a distribution over the algorithm parameter, for the exponential forecaster in this setting yields improved performance on each dataset. We observe this for both the one-shot and five-shot settings, i.e. when the number of within-task iterations on the test task is one or five, respectively. The benefit of meta-learning is most pronounced for the Gaussian mixture case (well-dispersed and similar tasks), and gains for Omniglot may increase with more tasks (dispersed but less similar tasks). For our knapsack dataset, the relative gains are smaller (similar tasks, but less dispersed). See Appendix \ref{app: experiment} for further experiments that lead us to these insights. \begin{table*}[t] \centering \caption{Effect of meta-initialization on few-shot learning of algorithmic parameters. Performance is computed as a fraction of the average value (Hamming accuracy, or knapsack value) of the offline optimum parameter.} \label{table: meta initialization} \resizebox{0.98\textwidth}{!}{% \begin{tabular}{c||cc|cc|cc} \toprule Dataset & \multicolumn{2}{c}{Omniglot} & \multicolumn{2}{c}{Gaussian Mixture} & \multicolumn{2}{c}{Knapsack} \\ & One-shot & Five-shot & One-shot & Five-shot& One-shot & Five-shot \\ \midrule \midrule Single task & $88.67\pm0.47\%$ & $95.02\pm0.19\%$ & $90.10\pm1.10\%$ & $91.43\pm0.44\%$ &$84.74\pm0.29\%$&$98.89\pm0.17\%$\\ Meta-initialized & $89.65\pm0.49\%$ & $96.05\pm0.15\%$ & $95.76\pm0.60\%$ & $96.39\pm0.27\%$&$85.66\pm0.57\%$&$99.12\pm0.15\%$ \\ \bottomrule \end{tabular} } \end{table*} \section{Preliminaries and initialization-dependent learning of dispersed functions}\label{sec:dispersion} In this section we introduce our setup and notation for online learning of piecewise-Lipschitz functions in a multi-task environment.
We then generalize existing results for the single-task setting in order to obtain within-task regret bounds that depend on both the initialization and the task data. This is critical for both defining a notion of task similarity and devising a meta-learning procedure. \subsection{Meta-learning setup} Following past setups \cite{alquier2017lifelong,denevi2019meta,khodak2019adaptive}, for some $T,m>0$ and all $t\in[T]$ and $i\in[m]$ we consider a meta-learner faced with a sequence of $Tm$ loss functions $\ell_{t,i}:C\mapsto[0,1]$ over a compact subset $C\subset\mathbb R^d$ that lies within a ball $\mathcal B(\rho,R)$ of radius $R$ around some point $\rho\in\mathbb R^d$. Here we use the notation $[n]=\{1,\dots,n\}$. Before observing each loss function $\ell_{t,i}$, the meta-learner must pick an element $\rho_{t,i}\in C$, after which it suffers the loss or cost $\ell_{t,i}(\rho_{t,i})$. For a fixed $t$, the subsequence $\ell_{t,1},\dots,\ell_{t,m}$ defines a {\bf task} for which we expect a single element $\rho_t^\ast\in C$ to do well, and thus we will use the {\bf within-task regret} on task $t$ to describe the quantity \begin{equation}\label{eq:regret} \*R_{t,m}=\sum_{i=1}^m\ell_{t,i}(\rho_{t,i})-\ell_{t,i}(\rho_t^\ast)\quad\textrm{where}\quad\rho_t^\ast\in\argmin_{\rho\in C}\sum_{i=1}^m\ell_{t,i}(\rho) \end{equation} In the single-task setting the goal is usually to show that $\*R_{t,m}$ is sublinear in $m$, i.e. that the average loss decreases with more rounds. A key point here is that the functions we consider can have numerous global optima. In this work we will assume, after going through the $m$ rounds of task $t$, that we have oracle access to a single fixed optimum for $t$, which we will refer to using $\rho_t^\ast$ and use both in our algorithm and to define the task-similarity. Note that in the types of applications we are interested in---piecewise-Lipschitz functions---the complexity of computing optima scales with the number of discontinuities.
In the important special case of piecewise-constant functions, this dependency becomes logarithmic \cite{cohen2017online}. Thus this assumption does not affect the usefulness of the result. Our goal will be to improve the guarantees for regret in the single-task case by using information obtained from solving multiple tasks. In particular, we expect average performance across tasks to improve as we see more tasks; to phrase this mathematically we define the {\bf task-averaged regret} \begin{equation}\label{eq:tar} \*{\bar R}_{T,m}=\frac1T\sum_{t=1}^T\*R_{t,m}=\frac1T\sum_{t=1}^T\sum_{i=1}^m\ell_{t,i}(\rho_{t,i})-\ell_{t,i}(\rho_t^\ast) \end{equation} and claim improvement over single-task learning if in the limit of $T\to\infty$ it is smaller than $\*R_{t,m}$. Note that for simplicity in this work we assume all tasks have the same number of rounds within-task, but as with past work our results are straightforward to extend to the more general setting. \subsection{Learning piecewise-Lipschitz functions} We now turn to our target functions and within-task algorithms for learning them: piecewise-Lipschitz losses, i.e. functions that are $L$-Lipschitz w.r.t. the Euclidean norm everywhere except on measure-zero subsets of the space; there they may have arbitrary jump discontinuities, so long as they remain bounded in $[0,1]$. Apart from being a natural setting of interest due to its generality compared to past work on meta-learning, this class of functions has also been shown to have important applications in data-driven algorithm configuration \cite{balcan2018dispersion}; there these functions represent the cost, e.g. an objective value or time-complexity, of algorithms for difficult problems such as integer programming, auction design, and clustering. This literature has also shown lower bounds demonstrating that no-regret learning of piecewise-Lipschitz functions is impossible in general, necessitating assumptions about the sequence.
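As a toy illustration of such losses (a hypothetical example of ours, not drawn from the cited literature), consider the 0--1 loss of a one-dimensional threshold rule, viewed as a function of the threshold: it is piecewise constant, hence piecewise $0$-Lipschitz, with a jump discontinuity at the data point:

```python
def threshold_loss(rho, x, label):
    """0-1 loss of the rule 'predict 1 iff x >= rho', as a function of
    the threshold rho: piecewise constant in rho, with a single jump
    discontinuity at rho = x."""
    prediction = 1 if x >= rho else 0
    return float(prediction != label)

# On either side of the jump at rho = 0.3 the loss is constant
# (0-Lipschitz), but it is not Lipschitz across the discontinuity.
assert threshold_loss(0.2, x=0.3, label=1) == 0.0  # 0.3 >= 0.2 -> predict 1
assert threshold_loss(0.4, x=0.3, label=1) == 1.0  # 0.3 <  0.4 -> predict 0
```

Over a sequence of such losses, each round contributes one discontinuity (at its data point); the assumption introduced next controls how concentrated these discontinuities can be.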
One such condition is {\em dispersion}, which requires that the discontinuities are not too concentrated. \newpage \begin{Def}[\cite{balcan2018dispersion}]\label{def:dis} The sequence of random loss functions $\ell_1, \dots,\ell_m$ is said to be $\beta$-{\bf dispersed} with Lipschitz constant $L$ if, for all $m$ and for all $\epsilon\ge m^{-\beta}$, we have that, in expectation over the randomness of the functions, at most $\tilde{O}(\epsilon m)$ functions (the soft-O notation suppresses dependence on quantities beside $\epsilon,m$ and $\beta$, as well as logarithmic terms) are not $L$-Lipschitz between some pair of points at distance at most $\epsilon$ in the domain $C$. That is, for all $m$ and for all $\epsilon\ge m^{-\beta}$, \begin{equation} \mathbb E\left[ \max_{\begin{smallmatrix}\rho,\rho'\in C\\\|\rho-\rho'\|_2\le\epsilon\end{smallmatrix}}\big\lvert \{ i\in[m] \mid\ell_i(\rho)-\ell_i(\rho')>L\|\rho-\rho'\|_2\} \big\rvert \right] \le \tilde{O}(\epsilon m) \end{equation} \end{Def} Given a sequence of $m$ $\beta$-dispersed loss functions, with the initial distribution $w_1$ set to the uniform distribution over $C$ and the step-size parameter optimized, the exponential forecaster presented in Algorithm~\ref{alg:ef} achieves sublinear regret $\tilde{O}(\sqrt{dm\log(Rm)}+(L+1)m^{1-\beta})$. While this result achieves a no-regret procedure, its lack of dependence on both the task-data and the chosen initialization makes it difficult to meta-learn. In the following theorem, we generalize the regret bound for the exponential forecaster to make it data-dependent and hyperparameter-dependent: \begin{Thm}\label{thm:exp-forc-meta} Let $\ell_1,\dots,\ell_m: C \mapsto [0, 1]$ be any sequence of piecewise $L$-Lipschitz functions that are $\beta$-dispersed. Suppose $C \subset \mathbb R^d$ is contained in a ball of radius $R$.
The exponentially weighted forecaster (Algorithm \ref{alg:ef}) has expected regret $\*R_m\le m\lambda +\frac{\log (1/Z)}{\lambda}+\tilde{O}((L+1)m^{1-\beta})$, where $Z=\frac{\int_{\mathcal B(\rho^*,m^{-\beta})}w(\rho)d\rho}{\int_{C}w(\rho)d\rho}$ for $\rho^*$ the optimal action in hindsight. \end{Thm} The proof of this result adapts past analyses of Algorithm \ref{alg:ef}; setting the step-size $\lambda$ appropriately recovers the previously mentioned bound. The new bound is useful due to its explicit dependence on both the initialization $w$ and the optimum in hindsight via the $\log(1/Z)$ term. Assuming $w$ is a (normalized) distribution, this effectively measures the overlap between the chosen initialization and a small ball around the optimum; we thus call $$-\log Z=-\log\frac{\int_{\mathcal B(\rho^\ast,m^{-\beta})}w(\rho)d\rho}{\int_Cw(\rho)d\rho}$$ the {\bf negative log-overlap} of the initialization $w(\cdot)$ with the optimum $\rho^*$. We also obtain an asymptotic lower bound on the expected regret of any algorithm by extending the argument of \cite{balcan2020learning} to the multi-task setting. We show that for finite $D^*$ we must suffer $\Tilde{\Omega}(m^{1-\beta})$ regret, which limits the improvement we can hope to achieve from task-similarity. \begin{Thm}\label{thm:dispersion-lb} There is a sequence of piecewise $L$-Lipschitz, $\beta$-dispersed functions $\ell_{t,i}: [0,1] \mapsto [0, 1]$, whose optimal actions in hindsight $\argmin_{\rho}\sum_{i=1}^m\ell_{t,i}(\rho)$ are contained in some fixed ball of diameter $D^*$, for which any algorithm has expected regret $\*R_m\ge \tilde{\Omega}(m^{1-\beta})$. \end{Thm} \begin{algorithm}[!t] \caption{Exponential Forecaster} \label{alg:ef} \begin{algorithmic}[1] \STATE {\bfseries Input:} step size parameter $\lambda \in (0, 1]$, initialization $w:C\rightarrow \mathbb R_{\ge 0}$.
\STATE{Initialize $w_1=w$} \FOR{$i=1,2,\dots,m$} \STATE{$W_i:=\int_{C}w_i(\rho)d\rho$} \STATE{Sample $\rho_i$ with probability proportional to $w_i(\rho_i)$, i.e. with density $p_{i}(\rho_i)=\frac{w_i(\rho_i)}{W_i}$} \STATE{Suffer $\ell_i(\rho_i)$ and observe $\ell_i(\cdot)$} \STATE{For each $\rho\in C, \text{ set }w_{i+1}(\rho)=e^{-\lambda\ell_i(\rho)}w_{i}(\rho)$} \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Task-similarity} Before proceeding to our discussion of meta-learning, we first discuss what we might hope to achieve with it; specifically, we consider what a reasonable notion of task-similarity is in this setting. Note that the regret bound of Theorem~\ref{thm:exp-forc-meta} has three terms, of which two depend on the hyperparameters while the last is due to dispersion and cannot be improved via better settings. Our focus will thus be on improving the first two terms, which are the dominant ones due to the dependence on the dimensionality and the distance from the initialization encoded in the negative log-overlap. In particular, when the initialization is the uniform distribution, this quantity depends inversely on the size of a small ball around the optimum, which may be quite small. Via meta-learning we hope to assign more of the probability mass of the initializer to areas close to the optimum, which will decrease these terms. On average, rather than a dependence on the volume of a small ball, we aim to achieve a dependence on the {\bf average negative log-overlap} \begin{equation}\label{eq:tasksim} V^2=-\max_{w:C\mapsto\mathbb R_{\ge0},\int_Cw(\rho)d\rho=1}\frac1T\sum_{t=1}^T\log\int_{\mathcal B(\rho_t^\ast,m^{-\beta})}w(\rho)d\rho \end{equation} which can be much smaller if the task optima $\rho_t^\ast$ are close together; for example, if they are the same then $V=0$, corresponding to assigning all the initial weight within the common ball $\mathcal B(\rho^\ast,m^{-\beta})$ around the shared optima.
This is also true if $\operatorname{vol}(\cap_{t\in[T]}\mathcal B(\rho_t^\ast,m^{-\beta}))>0$, as one can potentially initialize with all the weight in the intersection of the balls. On the other hand, if $\operatorname{vol}(\cap_{t\in[T]}\mathcal B(\rho_t^\ast,m^{-\beta}))=0$ then $V>0$. For example, if a $p$-fraction of tasks have optima at $\rho_0$ and the remaining tasks at $\rho_1$ with $\|\rho_0-\rho_1\|>2m^{-\beta}$, the task similarity is given by the binary entropy function $V=H_b(p)=-p\log p-(1-p)\log(1-p)$. The settings of Algorithm~\ref{alg:ef} that achieve the optimum in the definition of $V$ are directly related to $V$ itself: the optimal initializer is the distribution achieving $V$ and the optimal step-size is $V/\sqrt m$. Note that while the explicit definition requires computing an optimum over a set of functions, the task-similarity can be computed using the discretization constructed in Section~\ref{sec:meta}. \section{Robust online meta-learning} In online learning, we seek to minimize a sequence of loss functions, and are required to perform well relative to the optimal choice in hindsight. It is possible for the observed loss functions to be noisy on some inputs, either naturally or due to adversarial intent. We will now explore the conditions under which learning that is robust to such adversarial influence (i.e. outlier injection, which is particularly common in meta-learning with diverse sources) is possible. {\it Setup}: At round $i$, we play $x_i$ and observe a perturbed loss $\Tilde{l}_i : \mathcal X\rightarrow[0,1]$, which the adversary sets by modifying the true loss $l_i:\mathcal X\rightarrow[0,1]$ using an {\it attack function} $a_i:\mathcal X\rightarrow[0,1]$ (which may be non-Lipschitz) such that $\Tilde{l}_i=l_i+a_i$; we then suffer both the perturbed loss $\Tilde{l}_i(x_i)$ and the true loss $l_i(x_i)$. We seek to minimize regret relative to the best fixed action in hindsight, i.e.
$$\Tilde{R}_m=\sum_{i=1}^m \Tilde{l}_i(x_i) - \min_{x\in\mathcal X}\sum_{i=1}^m \Tilde{l}_i(x)$$ for the perturbed loss, and regret $$R_m=\sum_{i=1}^m l_i(x_i) - \min_{x\in\mathcal X}\sum_{i=1}^m l_i(x)$$ for the true loss. No regret can be achieved provided the perturbed losses are sufficiently smooth, i.e. satisfy $\beta$-dispersion for some $\beta>0$, as this corresponds to online optimization of the perturbed loss function. We can show this for both the perturbed and the true loss. The perturbed-loss guarantee is immediate from standard results on online learning of piecewise-Lipschitz functions \cite{balcan2018dispersion,balcan2020learning}. For the true loss, we can achieve no regret if the adversary perturbation $a_i$ is limited to small balls and the centers of the balls are dispersed, which we capture using the following definition. \begin{Def}[{$\delta$-bounded, $\beta_a$-dispersed attack}] An attack function $a_i$ is $\delta$-bounded if there exists a ball $\mathcal B(x_a,\delta)$ of radius $\delta$ such that $a_i(x)=0$ for each $x\in\mathcal X\setminus \mathcal B(x_a,\delta)$. We call $x_a$ a {\it center} $c_{a_i}$ of the attack $a_i$. A sequence of attack functions $a_1,\dots,a_m$ is said to be $\beta_a$-dispersed if the positions of the attack centers $x_a$ are dispersed, i.e. for all $m$ and for all $\epsilon\ge m^{-\beta_a}$, $$\mathbb E\left[ \max_{x,x'\in\mathcal X,x\in\mathcal B(x',\epsilon)}\big\lvert \{ i\in[m] \mid x=c_{a_i}\} \big\rvert \right] \le \Tilde{O}(\epsilon m).$$ \end{Def} \begin{Thm}\label{thm:robustness single task} Given a sequence of $\beta$-dispersed adversarially perturbed losses $\Tilde{l}_i=l_i+a_i$, where $\Tilde{l}_i,l_i,a_i$ are piecewise $L$-Lipschitz functions $ \mathcal X\rightarrow[0,1]$ for $i=1,\dots,m$ and $\mathcal X\subset\mathbb R^d$, the exponential forecaster algorithm has $$\mathbb E[\Tilde{R}_m]=\Tilde{O}\left(m\lambda +\frac{\log (1/Z)}{\lambda}+(L+1)m^{1-\beta}\right)$$ (with $Z$ as in Theorem \ref{thm:exp-forc-meta}).
If in addition we have that $a_i$ is an $m^{-\beta_a}$-bounded, $\beta_a$-dispersed attack, then $$\mathbb E[R_m]=\Tilde{O}\left(m\lambda +\frac{\log (1/Z)}{\lambda}+(L+1)m^{1-\min\{\beta,\beta_a\}}\right).$$ \end{Thm} Together with Theorem \ref{thm:tar}, this implies no-regret meta-learning in the presence of dispersed adversaries, in particular in the presence of unreliable data occurring in small, dispersed parts of the domain. We also show a lower bound below which establishes that our upper bounds are essentially optimal in the attack dispersion. \begin{Thm}\label{thm:robustness lower bound} There exist sequences of piecewise $L$-Lipschitz functions $\Tilde{l}_i,l_i,a_i$ $[0,1]\rightarrow[0,1]$ for $i=1,\dots,m$ such that for any online algorithm \begin{enumerate}\itemsep0em \item $\Tilde{l}_i$ is $\beta$-dispersed and $\mathbb E[\Tilde{R}_m]=\Omega(m^{1-\beta})$, \item $\Tilde{l}_i$ is $\beta$-dispersed, $a_i$ is $m^{-\beta}$-bounded, $\beta_a$-dispersed and $\mathbb E[R_m]=\Omega(m^{1-\min\{\beta,\beta_a\}})$. \end{enumerate} \end{Thm} \section{Additional Related Work} Data-driven algorithm selection is an algorithm design paradigm for setting algorithm parameters when multiple instances of a problem are available or need to be solved \cite{blum2020technical,balcan2020data}. It is familiar as {\it hyperparameter tuning} to machine learning practitioners; it often involves a ``grid search'', ``random search'', or gradient-based search, with no formal guarantees of convergence to a global optimum. By modeling the problem of identifying a good algorithm from data as a statistical learning problem, general learning algorithms have been developed which exploit smoothness of the underlying algorithmic distribution \cite{balcan2018dispersion}.
This provides a new algorithmic perspective, along with tools and insights for good performance under this smoothed analysis for fundamental problems including clustering, mechanism design, and mixed integer programs, as well as for providing guarantees like differential privacy, adaptive online learning and adversarial robustness \cite{balcan2019learning,balcan2018general,balcan2020learning,balcan2020power}. \section{Proofs} \subsection{Proof of Theorem~\ref{thm:exp-forc-meta}} \begin{proof} The proof adapts the analysis of the exponential forecaster in \cite{balcan2018dispersion}, which is stated in terms of utilities; accordingly, within this proof we work with utilities $u_t$ (e.g. $u_t=1-\ell_t$, for which the regret coincides with that for the losses) and write $T$ for the number of within-task rounds $m$. Let $W_t = \int_Cw_t(\rho) d\rho$ be the normalizing constant and $P_t = \mathbb E_{\rho\sim p_t} [u_t(\rho)]$ be the expected payoff at round $t$. Also let $U_t(\rho)=\sum_{j=1}^{t}u_j(\rho)$. We seek to bound $R_T=OPT-P(T)$, where $OPT=U_{T}(\rho^*)$ for the optimal parameter $\rho^*$ and $P(T)=\sum_{t=1}^{T}P_t$ is the expected utility of Algorithm \ref{alg:ef} in $T$ rounds. We will do this by lower bounding $P(T)$ and upper bounding $OPT$ by analyzing the normalizing constant $W_t$. {\it Lower bound for $P(T)$}: This follows from standard arguments, included for completeness.
Using the definitions in Algorithm \ref{alg:ef}, it follows that \begin{align*}\frac{W_{t+1}}{W_{t}} &= \frac{\int_{C}e^{\lambda u_t(\rho)}w_{t}(\rho)d\rho}{W_{t}} = \int_{C}e^{\lambda u_t(\rho)}\frac{w_{t}(\rho)}{W_{t}}d\rho = \int_{C}e^{\lambda u_t(\rho)}p_{t}(\rho)d\rho.\end{align*} Using the inequalities $e^{\lambda x}\le1+(e^{\lambda}-1)x$ for $x\in[0,1]$ and $1+x\le e^x$, we conclude \begin{align*}\frac{W_{t+1}}{W_{t}} \le\int_{C}p_{t}(\rho)\left(1+(e^{\lambda}-1)u_t(\rho)\right)d\rho = 1+(e^{\lambda}-1){P_t} \le \exp\left((e^{\lambda}-1){P_t}\right).\end{align*} Finally, we can write $W_{T+1}/W_1$ as a telescoping product to obtain \[\frac{W_{T+1}}{W_{1}}=\prod_{t=1}^{T}\frac{W_{t+1}}{W_{t}}\le \exp\left((e^{\lambda}-1){\sum_tP_t}\right) = \exp\left({P(T)(e^{\lambda}-1)}\right),\] or, $W_{T+1}\le \exp\left({P(T)(e^{\lambda}-1)}\right)\int_Cw_1(\rho)d\rho$. {\it Upper bound for $OPT$}: Let $\mathcal B^* (r)$ be the ball of radius $r$ around $\rho^*$. If there are at most $k$ discontinuities in any ball of radius $r$, we can conclude that for all $\rho\in\mathcal B^* (r)$, $U_{T}(\rho) \ge OPT - k-LTr$. Now, since $W_{T+1}=\int_Cw_1(\rho)\exp(\lambda U_{T}(\rho))d\rho$, we have \begin{align*} W_{T+1} &\ge \int_{\mathcal B^* (r)}w_1(\rho)e^{\lambda U_{T}(\rho)}d\rho\\&\ge \int_{\mathcal B^* (r)}w_1(\rho)e^{\lambda(OPT - k-LTr)}d\rho \\&=e^{\lambda(OPT - k-LTr)}\int_{\mathcal B^* (r)}w_1(\rho)d\rho. \end{align*} Putting this together with the lower bound and rearranging gives \begin{align*}OPT-P(T)&\le \frac{P(T)(e^{\lambda}-1-\lambda)}{\lambda}+\frac{\log (1/Z)}{\lambda}+k+LTr\\ &\le T\lambda +\frac{\log (1/Z)}{\lambda}+k+LTr,\end{align*} where we use that $P(T)\le T$ and, for all $x\in[0,1]$, $e^x \le 1 + x + (e-2)x^2$. Taking expectation over the sequence of utility functions and applying dispersion concludes the result.
\end{proof} \subsection{Proof of Theorem~\ref{thm:dispersion-lb}} We extend the construction in \cite{balcan2020learning} to the multi-task setting. The main difference is that we generalize the construction for any task similarity, and show that we get the same lower bound asymptotically. \begin{proof} Define $u^{(b,x)}(\rho)=I[b=0]\cdot I[\rho>x]+I[b=1]\cdot I[\rho\le x]$, where $b\in\{0,1\}$, $x,\rho\in[0,1]$ and $I[\cdot]$ is the indicator function. For each iteration the adversary picks $u^{(0,x)}$ or $u^{(1,x)}$ with equal probability for some $x\in [a,a+D^*]$, the ball of diameter $D^*$ containing all the optima. For each task $t$, $m-\frac{3}{D^*}m^{1-\beta}$ functions are presented with the discontinuity $x\in [a+D^*/3,a+2D^*/3]$ while ensuring $\beta$-dispersion. The remaining $\frac{3}{D^*}m^{1-\beta}$ are presented with discontinuities located in successively halved intervals (the `halving adversary') containing the optima in hindsight; any algorithm gets half of these wrong in expectation. It is readily verified that the functions are $\beta$-dispersed. The construction works provided $m$ is sufficiently large ($m>\left(\frac{3}{D^*}\right)^{1/\beta}$). The task-averaged regret is therefore also $\Tilde{\Omega}(m^{1-\beta})$. \end{proof} \subsection{Proof of Theorem~\ref{lem:aruba}} \begin{proof} \begin{align*} \sum_{t=1}^T\sum_{i=1}^m \ell_{t,i}(\rho_{t,i})-\min_{\rho_t^\ast\in C}&\sum_{i=1}^m\ell_{t,i}(\rho_t^\ast)\\ &\le\sum_{t=1}^TU_t(w_t,v_t)\\ &\le\min_{v>0}H_T(v)\sqrt m+\sum_{t=1}^T\left(v+\frac{f_t(w_t)}v\right)\sqrt m+g(m)\\ &\le\min_{w:C\mapsto\mathbb R_{\ge0},v>0}H_T(v)\sqrt m+\frac{F_T(w)\sqrt m}v+\sum_{t=1}^T\left(v+\frac{f_t(w)}v\right)\sqrt m+g(m)\\ &\le\left(H_T(V)+\min\left\{\frac{F_T(w^\ast)}V,2\sqrt{F_T(w^\ast)T}\right\}+2TV\right)\sqrt m+Tg(m) \end{align*} where the last step is achieved by substituting $w=w^\ast$ and $v=\max\left\{V,\sqrt{F_T(w^\ast)/T}\right\}$.
\end{proof} \subsection{Proof of Lemma~\ref{lem:equivalent}} \begin{proof} Define a probability measure $p:C\mapsto\mathbb R_{\ge0}$ that is constant on all elements $\tilde D\in\mathcal D_t$ of the discretization at time $t$, taking the value $p(\rho)=\frac1{\operatorname{vol}(\tilde D)}\sum_{D\in\mathcal D_T,D\subset\tilde D}\*w_{[D]}~\forall~\rho\in\tilde D$. Note that for any $D\in\mathcal D_T$ that is a subset of $\tilde D$ we have that $$\*p_{[D]}=\int_D\tilde w(\rho)d\rho=\frac{\*v_{[D]}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*v_{[D']}}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']}$$ Then \begin{align*} D_{KL}&(\*p||\*{\hat v})-\eta\sum_{s\le t}\log\langle\*w_s^\ast,\*p\rangle\\ &=\sum_{\tilde D\in\mathcal D_t}\sum_{D\in\mathcal D_T,D\subset\tilde D}\*p_{[D]}\log\frac{\*p_{[D]}}{\*{\hat v}_{[D]}}-\eta\sum_{s\le t}\log\sum_{\tilde D\in\mathcal D_t}\sum_{D\in\mathcal D_T,D\subset\tilde D}{\*w_s^\ast}_{[D]}\*p_{[D]}\\ &=\sum_{\tilde D\in\mathcal D_t}\sum_{D\in\mathcal D_T,D\subset\tilde D}\frac{\*v_{[D]}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*v_{[D']}}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']}\log\frac{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*{\hat v}_{[D']}}\\ &\quad-\eta\sum_{s\le t}\log\sum_{\tilde D\in\mathcal D_t}\sum_{D\in\mathcal D_T,D\subset\tilde D}\frac{{\*w_s^\ast}_{[D]}\*v_{[D]}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*v_{[D']}}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']}\\ &\le\sum_{\tilde D\in\mathcal D_t}\sum_{D\in\mathcal D_T,D\subset\tilde D}\frac{\*v_{[D]}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*v_{[D']}}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']}\log\frac{\*w_{[D']}}{\*{\hat v}_{[D']}}\\ &\quad-\eta\sum_{s\le t}\log\sum_{\tilde D\in\mathcal D_t,\tilde D\subset C_s}\sum_{D\in\mathcal D_T,D\subset\tilde D}\frac{\*v_{[D]}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*v_{[D']}}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']}\\ &=\sum_{\tilde
D\in\mathcal D_t}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']}\log\frac{\*w_{[D']}}{\*{\hat v}_{[D']}}-\eta\sum_{s\le t}\log\sum_{\tilde D\in\mathcal D_t,\tilde D\subset C_s}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']}\\ &=D_{KL}(\*w||\*{\hat v})-\eta\sum_{s\le t}\log\langle\*w_s^\ast,\*w\rangle \end{align*} where the inequality follows from applying the log-sum inequality to the first term and the fact that ${\*w_s^\ast}_{[D]}=\mathbf 1_{D\subset C_s}$ in the second term. Note that we also have $$\|\*p\|_1 =\sum_{\tilde D\in\mathcal D_t}\sum_{D\in\mathcal D_T,D\subset\tilde D}\frac{\*v_{[D]}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*v_{[D']}}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']} =\sum_{\tilde D\in\mathcal D_t}\sum_{D'\in\mathcal D_T,D'\subset \tilde D}\*w_{[D']} =1$$ and $$\*p_{[D]} =\frac{\*v_{[D]}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*v_{[D']}}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']} \ge\frac{\gamma\*v_{[D]}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*v_{[D']}}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*{\hat v}_{[D']} =\gamma\*{\hat v}_{[D]}$$ so $\*p$ satisfies the optimization constraints. Therefore, since $\*w$ was defined to be the minimizer of the sum of the KL-divergence (a strongly-convex function \cite[Example~2.5]{shalev-shwartz2011oco}) and a convex function, it is unique and so coincides with $\*p$. On the other hand \begin{align*} D_{KL}(\*p(t)||\*{\hat v(t)})-\eta\sum_{s\le t}\log\langle\*w_s^\ast(t),\*p(t)\rangle &\le D_{KL}(\*p||\*{\hat v})-\eta\sum_{s\le t}\log\langle\*w_s^\ast,\*p\rangle\\ &=D_{KL}(\*w||\*{\hat v})-\eta\sum_{s\le t}\log\langle\*w_s^\ast,\*w\rangle\\ &\le D_{KL}(\*{\tilde w}||\*{\hat v})-\eta\sum_{s\le t}\log\langle\*w_s^\ast,\*{\tilde w}\rangle\\ &=D_{KL}(\*{\tilde w}(t)||\*{\hat v}(t))-\eta\sum_{s\le t}\log\langle\*w_s^\ast(t),\*{\tilde w}(t)\rangle \end{align*} where the first inequality follows from above and the second from the optimality of $\*w$.
Note that by nonnegativity the discretization of $p$ does not affect its measure over $C$, so $\|\*p\|_1=1\implies\|\*p(t)\|_1=1$. Finally, also from above we have $$\*p(t)_{[D]}=\sum_{D'\in\mathcal D_T,D'\subset D}\*p_{[D']}\ge\gamma\sum_{D'\in\mathcal D_T,D'\subset D}\*{\hat v}_{[D']}=\gamma\*{\hat v}(t)_{[D]}$$ Thus as before $\*p(t)$ satisfies the optimization constraints, which with the previous inequality and the uniqueness of the optimum $\*{\tilde w}(t)$ implies that $\*p(t)=\*{\tilde w}(t)$. Finally, since $\tilde w$ is constant on all elements of the discretization $\mathcal D_t$ of $C$ this last fact implies that $\*p=\*{\tilde w}$, which together with $\*p=\*w$ implies the result. \end{proof} \subsection{Lipschitzness for Algorithm~\ref{alg:ftrl}} \begin{Clm}\label{clm:overlip} The loss $f_t$ is $\frac1{\gamma\operatorname{vol}(C_t)}$-Lipschitz w.r.t. $\|\cdot\|_1$ over the set $\{\*w\in\mathbb R^{|\mathcal D_T|}:\|\*w\|_1=1,\*w\ge\gamma\*{\hat v}\}$. \end{Clm} \begin{proof} $$\max_{\|\*w\|_1=1,\*w\ge\gamma\*{\hat v}}\|\nabla\log\langle\*w_t^\ast,\*w\rangle\|_\infty =\max_{D,\|\*w\|_1=1,\*w\ge\gamma\*{\hat v}}\frac{{\*w_t^\ast}_{[D]}}{\langle\*w_t^\ast,\*w\rangle} \le\frac1{\langle\*w_t^\ast,\gamma\*{\hat v}\rangle} =\frac1{\gamma\operatorname{vol}(C_t)}$$ \end{proof} \subsection{Proof of Corollary~\ref{cor:ewoo}} \begin{proof} Using first-order conditions, we have that the optimum in hindsight of the functions $h_t$ satisfies $$v^2 =\frac1T\sum_{t=1}^Tf_t(w_t) =-\frac1T\sum_{t=1}^T\log\langle\*w_t^\ast,\*w_t\rangle \le\frac1T\sum_{t=1}^T\log\frac1{\gamma\operatorname{vol}(C_t)}$$ Applying \cite[Corollary~C.2]{khodak2019adaptive} with $\alpha_t=1$, $B_t^2=f_t(w_t)$, and $D^2-\log\gamma$ instead of $D^2$ yields the result.
\end{proof} \subsection{Proof of Corollary~\ref{cor:ftl}} \begin{proof} Using first-order conditions, we have that the optimum in hindsight of the functions $h_t$ satisfies $$v^2 =\frac1T\sum_{t=1}^Tf_t(w_t) =-\frac1T\sum_{t=1}^T\log\langle\*w_t^\ast,\*w_t\rangle \le\frac1T\sum_{t=1}^T\log\frac1{\gamma\operatorname{vol}(C_t)}$$ Applying \cite[Proposition~B.2]{khodak2019adaptive} with $\alpha_t=1$, $B_t^2=f_t(w_t)$, and $D^2-\log\gamma$ instead of $D^2$ yields the result. \end{proof} \subsection{Proof of Theorem~\ref{thm:tar}} \begin{proof} We have $F_T(w^\ast)=\tilde O(\sqrt {BG}T^\frac34)$ and $H_T(V)=\tilde O(\min\{1/V,\sqrt[5]T\}T^\frac35)$ from Corollaries~\ref{cor:ewoo} and~\ref{cor:ftl}. Substituting into Theorem~\ref{lem:aruba} and simplifying yields $$\tilde O\left(\frac{\min\left\{\frac1V,\sqrt[4]T\right\}}{\sqrt T}+\min\left\{\frac{\sqrt{BG}}{V\sqrt[4]T},\frac{\sqrt[4]{BG}}{\sqrt[8]T}\right\}+2V\right)\sqrt m+g(m)$$ Simplifying further yields the result. \end{proof} \subsection{Proof of Theorem \ref{thm:robustness single task}} \begin{proof} The bound on $\mathbb E[\Tilde{R}_T]$ (we write $T$ for the number of rounds $m$ throughout this proof) is immediate from Theorem \ref{thm:exp-forc-meta}. For $\mathbb E[R_T]$, we can upper bound the true (natural) regret by the sum of the robust regret, the total adversarial perturbation at the optimum, and a term corresponding to the difference between the losses of the natural and robust optima.
\begin{align*} R_T &=\sum_{t=1}^T l_t(x_t) - \min_{x\in\mathcal X}\sum_{t=1}^T l_t(x)\\ &=\Tilde{R}_T+\sum_{t=1}^T l_t(x_t) - \sum_{t=1}^T \Tilde{l}_t(x_t) + \min_{x\in\mathcal X}\sum_{t=1}^T \Tilde{l}_t(x) - \min_{x\in\mathcal X}\sum_{t=1}^T l_t(x) \\ &=\Tilde{R}_T-\sum_{t=1}^T a_t(x_t) + \sum_{t=1}^Ta_t(\Tilde{x}^*)+ \sum_{t=1}^Tl_t(\Tilde{x}^*) - \sum_{t=1}^Tl_t(x^*)\\ &\le \Tilde{R}_T + \sum_{t=1}^Ta_t(\Tilde{x}^*)+\Big\lvert \sum_{t=1}^Tl_t(\Tilde{x}^*) - \sum_{t=1}^Tl_t(x^*)\Big\rvert \end{align*} where $\Tilde{x}^* = \argmin_{x\in\mathcal X}\sum_{t=1}^T \Tilde{l}_t(x)$ and $x^* = \argmin_{x\in\mathcal X}\sum_{t=1}^T l_t(x)$. We now use the $\beta_a$-dispersedness of the attack to show an excess expected regret of $\Tilde{O}(T^{1-\beta_a})$. Using attack dispersion on a ball of radius $T^{-\beta_a}$ around $\Tilde{x}^*$, the number of attacks that have non-zero $a_t(\Tilde{x}^*)$ is at most $\Tilde{O}(T^{1-\beta_a})$, and therefore $\sum_{t=1}^Ta_t(\Tilde{x}^*)\le \Tilde{O}(T^{1-\beta_a})$. Further, observe that the robust and natural optima coincide unless some attack occurs at the natural optimum $x^*$. We can use attack dispersion at $x^*$, and a union bound across rounds, to conclude $\mathbb E\lvert \sum_{t=1}^Tl_t(\Tilde{x}^*) - \sum_{t=1}^Tl_t(x^*)\rvert\le\Tilde{O}(T^{1-\beta_a})$, which concludes the proof. \end{proof} \subsection{Proof of Theorem \ref{thm:robustness lower bound}} \begin{proof} Part 1 follows from the lower bound in Theorem \ref{thm:dispersion-lb}, by setting $\Tilde{l}_i=l_i$ as the loss sequence used in the proof. To establish Part 2, we extend the construction as follows. We set $\Tilde{l}_i=l_i$, both given by the `halving adversary' from the proof of Theorem \ref{thm:dispersion-lb}, for the first $\Theta(m^{1-\beta})$ rounds. If $\beta\le\beta_a$ we are done, so assume otherwise. Let $I$ denote the interval containing the optima over the rounds so far.
Notice that the length of $I$ is at most $|I|\le (\frac{1}{2})^{\Theta(m^{1-\beta})}\le (\frac{1}{2})^{\beta \log m}=m^{-\beta}$ for $\beta>0$. For the next $\Theta(m^{1-\beta_a})$ rounds, $l_i$ continues to be the halving adversary, which implies that any algorithm suffers $\Omega(m^{1-\beta_a})$ regret. We set the attack $a_i$ on interval $I$ such that $\Tilde{l}_i=0$ on $I$ in these rounds. This ensures that $a_i$ is $\beta_a$-dispersed and $\Tilde{l}_i$ is $\beta$-dispersed. Putting this together with the case $\beta\le\beta_a$, we obtain an $\Omega(m^{1-\min\{\beta,\beta_a\}})$ bound on the regret of any algorithm. \end{proof} \section{Learning algorithmic parameters for combinatorial problems}\label{app: combinatorial} We discuss implications of our results for several combinatorial problems of widespread interest, including integer quadratic programming and auction mechanism design. We will need the following theorem from \cite{balcan2021data}, which generalizes the recipe for establishing dispersion given by \cite{balcan2020semi} for $d=1,2$ dimensions to arbitrary constant $d$ dimensions. It is straightforward to apply the recipe to establish dispersion for these problems, which in turn implies that our meta-learning results are applicable. We demonstrate this for a few important problems below for completeness. \begin{Thm}[\cite{balcan2021data}]\label{thm:dispersion-recipe} Let $l_1, \dots, l_m : \mathbb R^d \rightarrow \mathbb R$ be independent piecewise $L$-Lipschitz functions, each having discontinuities specified by a collection of at most $K$ algebraic hypersurfaces of bounded degree. Let $\mathcal{L}$ denote the set of axis-aligned paths between pairs of points in $\mathbb R^d$, and for each $s\in \mathcal{L}$ define $D(m, s) = |\{1 \le t \le m \mid l_t\text{ has a discontinuity along }s\}|$. Then we have $\mathbb E[\sup_{s\in \mathcal{L}} D(m, s)] \le \sup_{s\in \mathcal{L}} \mathbb E[D(m, s)] + O(\sqrt{m \log(mK)})$.
\end{Thm} \subsection{Greedy knapsack} We are given a knapsack with capacity $\texttt{cap}$ and items $i\in[m]$ with sizes $w_i$ and values $v_i$. The goal is to select a subset $S$ of items to add to the knapsack such that $\sum_{i\in S}w_i\le \texttt{cap}$ while maximizing the total value $\sum_{i\in S}v_i$ of the selected items. We consider a general greedy heuristic which inserts items with largest $v_i/w_i^{\rho}$ first (due to \cite{gupta2017pac}) for $\rho\in[0,10]$. The classic greedy heuristic sets $\rho=1$ and can be used to provide a 2-approximation for the problem. However, other values of $\rho$ can improve the knapsack objective on certain problem instances. For example, for the value-weight pairs $\{(0.99,1),(0.99,1),(1.01,1.01)\}$ and capacity $\texttt{cap}=2$, the classic heuristic $\rho=1$ gives value $1.01$, since the greedy score is maximized by the third item. However, using $\rho=3$ (or any $\rho>1+\log(1/0.99)/\log(1.01)>2.01$) allows us to pack the two smaller items, giving the optimal value $1.98$. Our result (Theorem \ref{thm:tar}), when applied to this problem, shows that it is possible to learn the optimal parameter values for the greedy heuristic algorithm family for knapsack from similar tasks. \begin{Thm} Consider instances of the knapsack problem given by bounded weights $w_{i,j}\in[1,C]$ and $\kappa$-bounded independent values $v_{i,j}\in[0,1]$ for $i\in[m],j\in[T]$. Then the asymptotic task-averaged regret for learning the algorithm parameter $\rho$ for the greedy heuristic family described above is $o_T(1)+2V\sqrt{m}+O(\sqrt{m})$. \end{Thm} \begin{proof} Lemma 11 of \cite{balcan2020semi} shows that the loss functions form a $\frac{1}{2}$-dispersed sequence. The result follows by applying Theorem \ref{thm:tar} with $\beta=\frac{1}{2}$.
\end{proof} \subsection{$k$-center clustering} We consider the $\alpha$-Lloyd's clustering algorithm family from \cite{balcan2018data}, where the initial $k$ centers in the procedure are set by sampling points with probability proportional to $d^\alpha$, where $d$ is the distance from the centers selected so far, for some $\alpha\in[0,D],D\in\mathbb R_{\ge0}$. For example, $\alpha=0$ corresponds to vanilla $k$-means with random initial centers, and $\alpha=2$ corresponds to the $k$-means++ procedure. For this algorithm family, we are able to show the following guarantee. Interestingly, for this family it is sufficient to rely on the internal randomness of the algorithmic procedure and we do not need assumptions on data smoothness. \begin{Thm} Consider instances of the $k$-center clustering problem on $n$ points, with Hamming loss $l_{i,j}$ for $i\in[m],j\in[T]$ against some (unknown) ground truth clustering. Then the asymptotic task-averaged regret for learning the algorithm parameter $\alpha$ for the $\alpha$-Lloyd's clustering algorithm family of \cite{balcan2018data} is $o_T(1)+2V\sqrt{m}+O(\sqrt{m})$. \end{Thm} \begin{proof} We start by applying Theorem 4 from \cite{balcan2018data} to an arbitrary $\alpha$-interval $[\alpha_0,\alpha_0+\epsilon]\subseteq[0,D]$ of length $\epsilon$. The expected number of discontinuities (the expectation is under the internal randomness of the algorithm when sampling successive centers) is at most $$D(m,\epsilon)=O(nk \log(n) \log(\max\{(\alpha_0+\epsilon)/\alpha_0,(\alpha_0+\epsilon)\log R\})),$$ where $R$ is an upper bound on the ratio between any pair of non-zero distances. Considering the cases $\alpha_0\lessgtr\frac{1}{\log R}$ and using the inequality $\log(1+x)\le x$ for $x\ge 0$, we get that there are, in expectation, at most $O(\epsilon nk \log n \log R)$ discontinuities in any interval of length $\epsilon$. Theorem \ref{thm:dispersion-recipe} now implies $\frac{1}{2}$-dispersion using the recipe from \cite{balcan2020semi}.
The task-averaged regret bound follows from Theorem \ref{thm:tar}. \end{proof} \subsection{Integer quadratic programming (IQP)} The objective is to maximize a quadratic function $z^TAz$ for $A$ with non-negative diagonal entries, subject to $z\in\{0,1\}^n$. In the classic Goemans-Williamson algorithm \cite{goemans1995improved} one solves an SDP relaxation $U^TAU$ where the columns $u_i$ of $U$ are unit vectors. The $u_i$ are then rounded to $\{\pm 1\}$ by projecting on a vector $Z$ drawn according to the standard Gaussian, and using $\texttt{sgn}(\langle u_i,Z\rangle)$. A simple parametric family is $s$-linear rounding, which rounds as before if $|\langle u_i,Z\rangle|>s$ but uses probabilistic rounding to round $u_i$ to 1 with probability $\frac{1+(\langle u_i,Z\rangle)/s}{2}$ otherwise. The dispersion analysis of the problem from \cite{balcan2018dispersion} and the general recipe from \cite{balcan2020semi} imply that our results yield low task-averaged regret for learning the parameter of the $s$-linear rounding algorithms. \begin{Thm} Consider instances of IQP given by matrices $A_{i,j}$ and rounding vectors $Z_{i,j}\sim \mathcal{N}_n$ for $i\in[m],j\in[T]$. Then the asymptotic task-averaged regret for learning the algorithm parameter $s$ for $s$-linear rounding is $o_T(1)+2V\sqrt{m}+O(\sqrt{m})$. \end{Thm} \begin{proof} As noted in \cite{balcan2018dispersion}, since the $Z_{i,j}$ are normal, the locations of discontinuities $s=|\langle u_i,Z\rangle|$ are distributed with a $\sqrt{\frac{2}{\pi}}$-bounded density. Thus in any interval of length $\epsilon$, we have in expectation at most $\epsilon\sqrt{\frac{2}{\pi}}$ discontinuities. Theorem \ref{thm:dispersion-recipe} together with the general recipe from \cite{balcan2020semi} implies $\frac{1}{2}$-dispersion. The task-averaged regret bound is now a simple application of Theorem \ref{thm:tar}. \end{proof} Our results are an improvement over prior work, which has only considered i.i.d. and (single-task) online learning settings.
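As a concrete companion to the $s$-linear rounding rule above, here is a minimal Python sketch (ours, for illustration only; the function name and interface are not from the cited works). It rounds each projection $\langle u_i,Z\rangle$ deterministically when its magnitude exceeds $s$ and probabilistically otherwise:

```python
import random

def s_linear_round(projections, s, rng=random.Random(0)):
    """s-linear rounding of SDP projections <u_i, Z> to {+1, -1}:
    deterministic sign when |<u_i, Z>| > s, otherwise round to +1
    with probability (1 + <u_i, Z>/s) / 2.  The seed is fixed so the
    probabilistic branch is reproducible."""
    rounded = []
    for proj in projections:
        if abs(proj) > s:
            rounded.append(1 if proj > 0 else -1)
        else:
            p_plus = (1 + proj / s) / 2
            rounded.append(1 if rng.random() < p_plus else -1)
    return rounded
```

As $s\to0$ this recovers the deterministic $\texttt{sgn}$ rounding of Goemans-Williamson, while larger $s$ randomizes more of the coordinates.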
Similar improvements can be obtained for auction design, as described below. We illustrate this using a relatively simple auction, but the same idea applies to extensive classes of auctions as studied in \cite{balcan2018general}. \subsection{Posted price mechanisms with additive valuations} There are $m$ items and $n$ bidders with valuations $v_j(b_i),j\in[n],i\in[2^m]$ for all $2^m$ bundles of items. We consider additive valuations which satisfy $v_j(b)=\sum_{i\in b}v_j(\{i\})$. The objective is to maximize the social welfare (the sum of buyer valuations). If the item values for each buyer have $\kappa$-bounded distributions, then the corresponding social welfare is dispersed and our results apply. \begin{Thm} Consider instances of posted price mechanism design problems with additive buyers and $\kappa$-bounded marginals of item valuations. Then the asymptotic task-averaged regret for learning the price which maximizes the social welfare is $o_T(1)+2V\sqrt{m}+O(\sqrt{m})$. \end{Thm} \begin{proof} As noted in \cite{balcan2018dispersion}, the locations of discontinuities are along axis-parallel hyperplanes (buyer $j$ will be willing to buy item $i$ at a price $p_i$ if and only if $v_j(\{i\}) \ge p_i$; each buyer-item pair in each instance corresponds to a hyperplane). Thus for any pair of points $p,p'$ (corresponding to prices) at distance $\epsilon$, we have in expectation at most $\epsilon\kappa mn$ discontinuities along any axis-aligned path joining $p,p'$, since discontinuities for an item can only occur along the axis-aligned segment for the axis corresponding to that item. Theorem \ref{thm:dispersion-recipe} now implies $\frac{1}{2}$-dispersion. The task-averaged regret bound is now a simple application of Theorem \ref{thm:tar}.
\end{proof} \section{Additional experiments}\label{app: experiment} \subsection{Number of training tasks needed for meta-learning} We also examine the number of training tasks that our meta-learning procedure needs to obtain improvements over the single-task baseline. We use a single test task, and a variable number of training tasks (0 through 10) to meta-learn the initialization. We use the same settings as in Section \ref{sec:experiments}, except that the meta-learning results are averaged over 20 iterations (to average over randomization in the algorithms). In Figure \ref{fig: regret vs meta-updates}, we plot the average regret against the number of meta-updates performed before starting the test task, and compare against the single-task baselines. We observe gains from meta-learning with just $T=10$ training tasks for the Omniglot dataset, and with even a single task for the Gaussian mixture dataset. The latter is likely due to a very high degree of task similarity across all the tasks (examined below), so learning on any task transfers very well to another task. \begin{figure}[!h] \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{omni-meta.pdf} \caption{Omniglot} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{gauss-meta.pdf} \caption{Gaussian mixture} \end{subfigure} \caption{Average regret vs. number of training tasks for meta-learning.} \label{fig: regret vs meta-updates} \end{figure} \subsection{Task similarity and dispersion} We also examine the similarity of the different tasks by plotting the optimal values $\alpha^*_t$ of the clustering parameter $\alpha$ and the corresponding balls $\mathcal B(\alpha_t^\ast,m^{-\beta})$ used in our definition of task similarity (Figure \ref{fig: task similarity}).
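These balls induce a partition of the parameter range into intervals. As a small illustrative Python sketch (ours, not the experiment code; the helper name is hypothetical), one can compute the ball-induced discretization from the per-task optima as follows:

```python
def ball_discretization(optima, radius, lo, hi):
    """Partition [lo, hi] at the endpoints of the balls
    B(opt, radius) around each task's optimal parameter value,
    clamping ball endpoints to the parameter range."""
    points = {lo, hi}
    for opt in optima:
        points.add(max(lo, opt - radius))
        points.add(min(hi, opt + radius))
    cuts = sorted(points)
    # consecutive cut points delimit the intervals of the partition
    return list(zip(cuts[:-1], cuts[1:]))
```

For instance, optima $\{0.25,0.75\}$ with radius $0.125$ on $[0,1]$ yield five intervals, two of which are the balls themselves.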
\begin{figure}[!h] \centering \begin{subfigure}[b]{0.35\textwidth} \centering \includegraphics[width=\textwidth]{omni-ts.pdf} \caption{Omniglot} \end{subfigure} \begin{subfigure}[b]{0.35\textwidth} \centering \includegraphics[width=\textwidth]{gauss-ts.pdf} \caption{Gaussian mixture} \end{subfigure} \begin{subfigure}[b]{0.35\textwidth} \centering \includegraphics[width=\textwidth]{knapsack-ts.pdf} \caption{Knapsack} \end{subfigure} \caption{Location of optimal parameter values for the training tasks.} \label{fig: task similarity} \end{figure} The intervals of the parameter induced by these balls correspond to the discretization used by Algorithm \ref{alg:ftrl}. We notice stronger task similarity for the Gaussian mixture clustering tasks, which implies that meta-learning is more effective there (both in terms of learning test tasks faster, and with lower regret). For knapsack the task similarity is also high, but it turns out that for our dataset there are very `sharp peaks' at the optima of the total knapsack value as a function of the parameter $\rho$. So even though meta-learning helps us get within a small ball of the optima, a few steps are still needed to converge, and we do not see the single-shot benefits of meta-learning that we do for the Gaussian clustering experiment. \begin{figure}[!h] \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{gauss-cluster.png} \caption{Clustering (Gaussian mixture dataset)} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{knapsack.png} \caption{Greedy Knapsack} \end{subfigure} \caption{Average performance (over algorithm randomization) for a few tasks as a function of the configuration parameter.
This explains why, despite high task similarity in either case, few-shot meta-learning works better for the Gaussian mixture clustering.} \label{fig: average performance} \end{figure} \section{Additional Related Work} Data-driven algorithm selection is an algorithm design paradigm for setting algorithm parameters when multiple instances of a problem are available or need to be solved \cite{blum2020technical,balcan2020data}. It is familiar to machine learning practitioners as {\it hyperparameter tuning}, which often involves a ``grid search'', a ``random search'', or a gradient-based search, with no formal guarantees of convergence to a global optimum. By modeling the problem of identifying a good algorithm from data as a statistical learning problem, general learning algorithms have been developed which exploit smoothness of the underlying algorithmic distribution \cite{balcan2018dispersion}. This provides a new algorithmic perspective, along with tools and insights for good performance under this smoothed analysis for fundamental problems including clustering, mechanism design, and mixed integer programs, and provides guarantees such as differential privacy, adaptive online learning, and adversarial robustness \cite{balcan2019learning,balcan2018general,balcan2020learning,balcan2020power}. \section{Proofs} \subsection{Proof of Theorem~\ref{thm:exp-forc-meta}} \begin{proof} The proof adapts the analysis of the exponential forecaster in \cite{balcan2018dispersion}. Let $W_t = \int_Cw_t(\rho) d\rho$ be the normalizing constant and $P_t = \mathbb E_{\rho\sim p_t} [u_t(\rho)]$ be the expected payoff at round $t$. Also let $U_t(\rho)=\sum_{j=1}^{t}u_j(\rho)$. We seek to bound $R_T=OPT-P(T)$, where $OPT=U_{T}(\rho^*)$ for the optimal parameter $\rho^*$ and $P(T)=\sum_{t=1}^{T}P_t$ is the expected utility of Algorithm \ref{alg:ef} in $T$ rounds. We will do this by lower bounding $P(T)$ and upper bounding $OPT$ by analyzing the normalizing constant $W_t$.
{\it Lower bound for $P(T)$}: This follows from standard arguments, included for completeness. Using the definitions in Algorithm \ref{alg:ef}, it follows that \begin{align*}\frac{W_{t+1}}{W_{t}} &= \frac{\int_{C}e^{\lambda u_t(\rho)}w_{t}(\rho)d\rho}{W_{t}} = \int_{C}e^{\lambda u_t(\rho)}\frac{w_{t}(\rho)}{W_{t}}d\rho = \int_{C}e^{\lambda u_t(\rho)}p_{t}(\rho)d\rho.\end{align*} Using the inequalities $e^{\lambda x}\le1+(e^{\lambda}-1)x$ for $x\in[0,1]$ and $1+x\le e^x$, we conclude \begin{align*}\frac{W_{t+1}}{W_{t}} \le\int_{C}p_{t}(\rho)\left(1+(e^{\lambda}-1)u_t(\rho)\right)d\rho = 1+(e^{\lambda}-1){P_t} \le \exp\left((e^{\lambda}-1){P_t}\right).\end{align*} Finally, we can write $W_{T+1}/W_1$ as a telescoping product to obtain \[\frac{W_{T+1}}{W_{1}}=\prod_{t=1}^{T}\frac{W_{t+1}}{W_{t}}\le \exp\left((e^{\lambda}-1){\sum_tP_t}\right) = \exp\left({P(T)(e^{\lambda}-1)}\right),\] or, $W_{T+1}\le \exp\left({P(T)(e^{\lambda}-1)}\right)\int_Cw_1(\rho)d\rho$. {\it Upper bound for $OPT$}: Let $\mathcal B^* (r)$ be the ball of radius $r$ around $\rho^*$. If there are at most $k$ discontinuities in any ball of radius $r$, we can conclude that for all $\rho\in\mathcal B^* (r)$, $U_{T}(\rho) \ge OPT - k-LTr$. Now, since $W_{T+1}=\int_Cw_1(\rho)\exp(\lambda U_{T}(\rho))d\rho$, we have \begin{align*} W_{T+1} &\ge \int_{\mathcal B^* (r)}w_1(\rho)e^{\lambda U_{T}(\rho)}d\rho\\&\ge \int_{\mathcal B^* (r)}w_1(\rho)e^{\lambda(OPT - k-LTr)}d\rho \\&=e^{\lambda(OPT - k-LTr)}\int_{\mathcal B^* (r)}w_1(\rho)d\rho. \end{align*} Putting together with the lower bound, and rearranging, gives \begin{align*}OPT-P(T)&\le \frac{P(T)(e^{\lambda}-1-\lambda)}{\lambda}+\frac{\log (1/Z)}{\lambda}+k+LTr\\ &\le T\lambda +\frac{\log (1/Z)}{\lambda}+k+LTr,\end{align*} where $Z=\int_{\mathcal B^* (r)}w_1(\rho)d\rho\big/\int_Cw_1(\rho)d\rho$, and we use that $P(T)\le T$ and for all $x\in[0,1], e^x \le 1 + x + (e-2)x^2$. Take expectation over the sequence of utility functions and apply dispersion to conclude the result.
\end{proof} \subsection{Proof of Theorem~\ref{thm:dispersion-lb}} We extend the construction in \cite{balcan2020learning} to the multi-task setting. The main difference is that we generalize the construction to any task similarity, and show that we get the same lower bound asymptotically. \begin{proof} Define $u^{(b,x)}(\rho)=I[b=0]*I[\rho>x]+I[b=1]*I[\rho\le x]$, where $b\in\{0,1\}$, $x,\rho\in[0,1]$ and $I[\cdot]$ is the indicator function. For each iteration the adversary picks $u^{(0,x)}$ or $u^{(1,x)}$ with equal probability for some $x\in [a,a+D^*]$, the interval of length $D^*$ containing all the optima. For each task $t$, $m-\frac{3}{D^*}m^{1-\beta}$ functions are presented with the discontinuity $x\in [a+D^*/3,a+2D^*/3]$ while ensuring $\beta$-dispersion. The remaining $\frac{3}{D^*}m^{1-\beta}$ are presented with discontinuities located in successively halved intervals (the `halving adversary') containing the optima in hindsight; any algorithm gets half of these wrong in expectation. It is readily verified that the functions are $\beta$-dispersed. The construction works provided $m$ is sufficiently large ($m>\left(\frac{3}{D^*}\right)^{1/\beta}$). The task-averaged regret is therefore also $\Tilde{\Omega}(m^{1-\beta})$. \end{proof} \subsection{Proof of Theorem~\ref{lem:aruba}} \begin{proof} \begin{align*} \sum_{t=1}^T\left(\sum_{i=1}^m \ell_{t,i}(\rho_{t,i})-\min_{\rho_t^\ast\in C}\sum_{i=1}^m\ell_{t,i}(\rho_t^\ast)\right) &\le\sum_{t=1}^TU_t(w_t,v_t)\\ &\le\min_{v>0}H_T(v)\sqrt m+\sum_{t=1}^T\left(v+\frac{f_t(w_t)}v\right)\sqrt m+Tg(m)\\ &\le\min_{w:C\mapsto\mathbb R_{\ge0},v>0}H_T(v)\sqrt m+\frac{F_T(w)\sqrt m}v+\sum_{t=1}^T\left(v+\frac{f_t(w)}v\right)\sqrt m+Tg(m)\\ &\le\left(H_T(V)+\min\left\{\frac{F_T(w^\ast)}V,2\sqrt{F_T(w^\ast)T}\right\}+2TV\right)\sqrt m+Tg(m) \end{align*} where the last step is achieved by substituting $w=w^\ast$ and $v=\max\left\{V,\sqrt{F_T(w^\ast)/T}\right\}$.
\end{proof} \subsection{Proof of Lemma~\ref{lem:equivalent}} \begin{proof} Define a probability measure $p:C\mapsto\mathbb R_{\ge0}$ that is constant on all elements $\tilde D\in\mathcal D_t$ of the discretization at time $t$, taking the value $p(\rho)=\frac1{\operatorname{vol}(\tilde D)}\sum_{D\in\mathcal D_T,D\subset\tilde D}\*w_{[D]}~\forall~\rho\in\tilde D$. Note that for any $D\in\mathcal D_T$ that is a subset of $\tilde D$ we have that $$\*p_{[D]}=\int_Dp(\rho)d\rho=\frac{\*v_{[D]}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*v_{[D']}}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']}$$ Then \begin{align*} D_{KL}&(\*p||\*{\hat v})-\eta\sum_{s\le t}\log\langle\*w_s^\ast,\*p\rangle\\ &=\sum_{\tilde D\in\mathcal D_t}\sum_{D\in\mathcal D_T,D\subset\tilde D}\*p_{[D]}\log\frac{\*p_{[D]}}{\*{\hat v}_{[D]}}-\eta\sum_{s\le t}\log\sum_{\tilde D\in\mathcal D_t}\sum_{D\in\mathcal D_T,D\subset\tilde D}{\*w_s^\ast}_{[D]}\*p_{[D]}\\ &=\sum_{\tilde D\in\mathcal D_t}\sum_{D\in\mathcal D_T,D\subset\tilde D}\frac{\*v_{[D]}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*v_{[D']}}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']}\log\frac{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*{\hat v}_{[D']}}\\ &\quad-\eta\sum_{s\le t}\log\sum_{\tilde D\in\mathcal D_t}\sum_{D\in\mathcal D_T,D\subset\tilde D}\frac{{\*w_s^\ast}_{[D]}\*v_{[D]}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*v_{[D']}}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']}\\ &\le\sum_{\tilde D\in\mathcal D_t}\sum_{D\in\mathcal D_T,D\subset\tilde D}\frac{\*v_{[D]}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*v_{[D']}}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']}\log\frac{\*w_{[D']}}{\*{\hat v}_{[D']}}\\ &\quad-\eta\sum_{s\le t}\log\sum_{\tilde D\in\mathcal D_t,\tilde D\subset C_s}\sum_{D\in\mathcal D_T,D\subset\tilde D}\frac{\*v_{[D]}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*v_{[D']}}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']}\\ &=\sum_{\tilde D\in\mathcal D_t}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']}\log\frac{\*w_{[D']}}{\*{\hat v}_{[D']}}-\eta\sum_{s\le t}\log\sum_{\tilde D\in\mathcal D_t,\tilde D\subset C_s}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']}\\ &=D_{KL}(\*w||\*{\hat v})-\eta\sum_{s\le t}\log\langle\*w_s^\ast,\*w\rangle \end{align*} where the inequality follows from applying the log-sum inequality to the first term and the fact that ${\*w_s^\ast}_{[D]}=\mathbf 1_{D\subset C_s}$ in the second term. Note that we also have $$\|\*p\|_1 =\sum_{\tilde D\in\mathcal D_t}\sum_{D\in\mathcal D_T,D\subset\tilde D}\frac{\*v_{[D]}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*v_{[D']}}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']} =\sum_{\tilde D\in\mathcal D_t}\sum_{D'\in\mathcal D_T,D'\subset \tilde D}\*w_{[D']} =1$$ and $$\*p_{[D]} =\frac{\*v_{[D]}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*v_{[D']}}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*w_{[D']} \ge\frac{\gamma\*v_{[D]}}{\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*v_{[D']}}\sum_{D'\in\mathcal D_T,D'\subset\tilde D}\*{\hat v}_{[D']} =\gamma\*{\hat v}_{[D]}$$ so $\*p$ satisfies the optimization constraints. Therefore, since $\*w$ was defined to be the minimizer of the sum of the KL-divergence (a strongly-convex function \cite[Example~2.5]{shalev-shwartz2011oco}) and a convex function, it is unique and so coincides with $\*p$. On the other hand \begin{align*} D_{KL}(\*p(t)||\*{\hat v}(t))-\eta\sum_{s\le t}\log\langle\*w_s^\ast(t),\*p(t)\rangle &\le D_{KL}(\*p||\*{\hat v})-\eta\sum_{s\le t}\log\langle\*w_s^\ast,\*p\rangle\\ &=D_{KL}(\*w||\*{\hat v})-\eta\sum_{s\le t}\log\langle\*w_s^\ast,\*w\rangle\\ &\le D_{KL}(\*{\tilde w}||\*{\hat v})-\eta\sum_{s\le t}\log\langle\*w_s^\ast,\*{\tilde w}\rangle\\ &=D_{KL}(\*{\tilde w}(t)||\*{\hat v}(t))-\eta\sum_{s\le t}\log\langle\*w_s^\ast(t),\*{\tilde w}(t)\rangle \end{align*} where the first inequality follows from above and the second from the optimality of $\*w$.
Note that by nonnegativity the discretization of $p$ does not affect its measure over $C$, so $\|\*p\|_1=1\implies\|\*p(t)\|_1=1$. Finally, also from above we have $$\*p(t)_{[D]}=\sum_{D'\in\mathcal D_T,D'\subset D}\*p_{[D']}\ge\gamma\sum_{D'\in\mathcal D_T,D'\subset D}\*p_{[D']}\*{\hat v}_{[D']}=\gamma\*{\hat v}(t)_{[D]}$$ Thus as before $\*p(t)$ satisfies the optimization constraints, which with the previous inequality and the uniqueness of the optimum $\*{\tilde w}(t)$ implies that $\*p(t)=\*{\tilde w}(t)$. Finally, since $\tilde w$ is constant on all elements of the discretization $\mathcal D_t$ of $C$ this last fact implies that $\*p=\*{\tilde w}$, which together with $\*p=\*w$ implies the result. \end{proof} \subsection{Lipschitzness for Algorithm~\ref{alg:ftrl}} \begin{Clm}\label{clm:overlip} The loss $f_t$ is $\frac1{\gamma\operatorname{vol}(C_t)}$-Lipschitz w.r.t. $\|\cdot\|_1$ over the set $\{\*w\in\mathbb R^{|\mathcal D_T|}:\|\*w\|_1=1,\*w\ge\gamma\*{\hat v}\}$. \end{Clm} \begin{proof} $$\max_{\|\*w\|_1=1,\*w\ge\gamma\*{\hat v}}\|\nabla\log\langle\*w_t^\ast,\*w\rangle\|_\infty =\max_{D,\|\*w\|_1=1,\*w\ge\gamma\*{\hat v}}\frac{{\*w_t^\ast}_{[D]}}{\langle\*w_t^\ast,\*w\rangle} \le\frac1{\langle\*w_t^\ast,\gamma\*{\hat v}\rangle} =\frac1{\gamma\operatorname{vol}(C_t)}$$ \end{proof} \subsection{Proof of Corollary~\ref{cor:ewoo}} \begin{proof} Using first-order conditions we have that the optimum in hindsight of the functions $h_t$ satisfies $$v^2 =\frac1T\sum_{t=1}^Tf_t(w_t) =-\frac1T\sum_{t=1}^T\log\langle\*w_t^\ast,\*w_t\rangle \le\frac1T\sum_{t=1}^T\log\frac1{\gamma\operatorname{vol}(C_t)}$$ Applying \cite[Corollary~C.2]{khodak2019adaptive} with $\alpha_t=1$, $B_t^2=f_t(w_t)$, and $D^2-\log\gamma$ instead of $D^2$ yields the result. 
\end{proof} \subsection{Proof of Corollary~\ref{cor:ftl}} \begin{proof} Using first-order conditions we have that the optimum in hindsight of the functions $h_t$ satisfies $$v^2 =\frac1T\sum_{t=1}^Tf_t(w_t) =-\frac1T\sum_{t=1}^T\log\langle\*w_t^\ast,\*w_t\rangle \le\frac1T\sum_{t=1}^T\log\frac1{\gamma\operatorname{vol}(C_t)}$$ Applying \cite[Proposition~B.2]{khodak2019adaptive} with $\alpha_t=1$, $B_t^2=f_t(w_t)$, and $D^2-\log\gamma$ instead of $D^2$ yields the result. \end{proof} \subsection{Proof of Theorem~\ref{thm:tar}} \begin{proof} We have $F_T(w^\ast)=\tilde O(\sqrt {BG}T^\frac34)$ and $H_T(V)=\tilde O(\min\{1/V,\sqrt[5]T\}T^\frac35)$ from Corollaries~\ref{cor:ewoo} and~\ref{cor:ftl}. Substituting into Lemma~\ref{lem:equivalent} and simplifying yields $$\tilde O\left(\frac{\min\left\{\frac1V,\sqrt[4]T\right\}}{\sqrt T}+\min\left\{\frac{\sqrt{BG}}{V\sqrt[4]T},\frac{\sqrt[4]{BG}}{\sqrt[8]T}\right\}+2V\right)\sqrt m+g(m)$$ Simplifying further yields the result. \end{proof} \subsection{Proof of Theorem \ref{thm:robustness single task}} \begin{proof} The bound on $\mathbb E[\Tilde{R}_T]$ is immediate from Theorem \ref{thm:exp-forc-meta}. For $\mathbb E[R_T]$, we can upper bound the natural regret with the sum of robust regret, total adversarial perturbation at the optimum and a term corresponding to the difference between the loss of natural and robust optima. 
\begin{align*} R_T &=\sum_{t=1}^T l_t(x_t) - \min_{x\in\mathcal X}\sum_{t=1}^T l_t(x)\\ &=\Tilde{R}_T+\sum_{t=1}^T l_t(x_t) - \sum_{t=1}^T \Tilde{l}_t(x_t) + \min_{x\in\mathcal X}\sum_{t=1}^T \Tilde{l}_t(x) - \min_{x\in\mathcal X}\sum_{t=1}^T l_t(x) \\ &=\Tilde{R}_T-\sum_{t=1}^T a_t(x_t) + \sum_{t=1}^Ta_t(\Tilde{x}^*)+ \sum_{t=1}^Tl_t(\Tilde{x}^*) - \sum_{t=1}^Tl_t(x^*)\\ &\le \Tilde{R}_T + \sum_{t=1}^Ta_t(\Tilde{x}^*)+\Big\lvert \sum_{t=1}^Tl_t(\Tilde{x}^*) - \sum_{t=1}^Tl_t(x^*)\Big\rvert \end{align*} where $\Tilde{x}^* = \argmin_{x\in\mathcal X}\sum_{t=1}^T \Tilde{l}_t(x)$ and $x^* = \argmin_{x\in\mathcal X}\sum_{t=1}^T l_t(x)$. We now use the $\beta_a$-dispersedness of the attack to show an excess expected regret of $\Tilde{O}(T^{1-\beta_a})$. Using attack dispersion on a ball of radius $T^{-\beta_a}$ around $\Tilde{x}^*$, the number of attacks that have non-zero $a_t(\Tilde{x}^*)$ is at most $\Tilde{O}(T^{1-\beta_a})$, and therefore $\sum_{t=1}^Ta_t(\Tilde{x}^*)\le \Tilde{O}(T^{1-\beta_a})$. Further, observe that the robust and natural optima coincide unless some attack occurs at the natural optimum $x^*$. We can use attack dispersion at $x^*$, and a union bound across rounds, to conclude $\mathbb E\lvert \sum_{t=1}^Tl_t(\Tilde{x}^*) - \sum_{t=1}^Tl_t(x^*)\rvert\le\Tilde{O}(T^{1-\beta_a})$ which concludes the proof. \end{proof} \subsection{Proof of Theorem \ref{thm:robustness lower bound}} \begin{proof} Part 1 follows from the lower bound in Theorem \ref{thm:dispersion-lb}, by setting $\Tilde{l}_i=l_i$ as the loss sequence used in the proof. To establish Part 2, we extend the construction as follows. $\Tilde{l}_i=l_i$ are both equal and correspond to the `halving adversary' from the proof of Theorem \ref{thm:dispersion-lb} for the first $\Theta(m^{1-\beta})$ rounds. If $\beta\le\beta_a$ we are done, so assume otherwise. Let $I$ denote the interval containing the optima over the rounds so far. 
Notice that the length of $I$ is at most $|I|\le (\frac{1}{2})^{\Theta(m^{1-\beta})}\le (\frac{1}{2})^{\beta \log m}=m^{-\beta}$ for $\beta>0$. For further rounds $l_i$ continues to be the halving adversary for $\Theta(m^{1-\beta_a})$ rounds, which implies any algorithm suffers $\Omega(m^{1-\beta_a})$ regret. We set attack $a_i$ on interval $I$ such that $\Tilde{l}_i=0$ on $I$ on these rounds. This ensures that $a_i$ is $\beta_a$-dispersed and $\Tilde{l}_i$ is $\beta$-dispersed. Putting together with the case $\beta\le\beta_a$, we obtain $\Omega(m^{1-\min\{\beta,\beta_a\}})$ bound on the regret of any algorithm. \end{proof} \section{Learning algorithmic parameters for combinatorial problems}\label{app: combinatorial} We discuss implications of our results for several combinatorial problems of widespread interest including integer quadratic programming and auction mechanism design. We will need the following theorem from \cite{balcan2021data}, which generalizes the recipe for establishing dispersion given by \cite{balcan2020semi} for $d=1,2$ dimensions to arbitrary constant $d$ dimendions. It is straightforward to apply the recipe to establish dispersion for these problems, which in turn implies that our meta-learning results are applicable. We demonstrate this for a few important problems below for completeness. \begin{Thm}[\cite{balcan2021data}]\label{thm:dispersion-recipe} Let $l_1, \dots, l_m : \mathbb R^d \rightarrow \mathbb R$ be independent piecewise $L$-Lipschitz functions, each having discontinuities specified by a collection of at most $K$ algebraic hypersurfaces of bounded degree. Let $\mathcal{L}$ denote the set of axis-aligned paths between pairs of points in $\mathbb R^d$, and for each $s\in \mathcal{L}$ define $D(m, s) = |\{1 \le t \le m \mid l_t\text{ has a discontinuity along }s\}|$. Then we have $\mathbb E[\sup_{s\in \mathcal{L}} D(m, s)] \le \sup_{s\in \mathcal{L}} \mathbb E[D(m, s)] + O(\sqrt{m \log(mK)})$. 
\end{Thm} \subsection{Greedy knapsack} We are given a knapsack with capacity $\texttt{cap}$ and items $i\in[m]$ with sizes $w_i$ and values $v_i$. The goal is to select a subset $S$ of items to add to the knapsack such that $\sum_{i\in S}w_i\le \texttt{cap}$ while maximizing the total value $\sum_{i\in S}v_i$ of selected items. We consider a general greedy heuristic to insert items with largest $v_i/w_i^{\rho}$ first (due to \cite{gupta2017pac}) for $\rho\in[0,10]$. The classic greedy heuristic sets $\rho=1$ and can be used to provide a 2-approximation for the problem. However other values of $\rho$ can improve the knapsack objective on certain problem instances. For example, for the value-weight pairs $\{(0.99,1),(0.99,1),(1.01,1.01)\}$ and capacity $\texttt{cap}=2$ the classic heuristic $\rho=1$ gives value $1.01$ as the greedy heuristic is maximized for the third item. However, using $\rho=3$ (or any $\rho>1+\log(1/0.99)/\log(1.01)>2.01$) allows us to pack the two smaller items giving the optimal value $1.98$. Our result (Theorem \ref{thm:tar}) when applied to this problem shows that it is possible to learn the optimal parameter values for the greedy heuristic algorithm family for knapsack from similar tasks. \begin{Thm} Consider instances of the knapsack problem given by bounded weights $w_{i,j}\in[1,C]$ and $\kappa$-bounded independent values $v_{i,j}\in[0,1]$ for $i\in[m],j\in[T]$. Then the asymptotic task-averaged regret for learning the algorithm parameter $\rho$ for the greedy heuristic family described above is $o_T(1)+2V\sqrt{m}+O(\sqrt{m})$. \end{Thm} \begin{proof} Lemma 11 of \cite{balcan2020semi} shows that the loss functions form a $\frac{1}{2}$-dispersed sequence. The result follows by applying Theorem \ref{thm:tar} with $\beta=\frac{1}{2}$. 
\end{proof} \subsection{$k$-center clustering} We consider the $\alpha$-Lloyd's clustering algorithm family from \cite{balcan2018data}, where the initial $k$ centers in the procedure are set by sampling points with probability proportional to $d^\alpha$ where $d$ is the distance from the centers selected so far for some $\alpha\in[0,D],D\in\mathbb R_{\ge0}$. For example, $\alpha=0$ corresponds to the vanilla $k$-means with random initial centers, and $\alpha=2$ corresponds to the $k$-means++ procedure. For this algorithm family, we are able to show the following guarantee. Interestingly, for this family it is sufficient to rely on the internal randomness of the algorithmic procedure and we do not need assumptions on data smoothness. \begin{Thm} Consider instances of the $k$-center clustering problem on $n$ points, with Hamming loss $l_{i,j}$ for $i\in[m],j\in[T]$ against some (unknown) ground truth clustering. Then the asymptotic task-averaged regret for learning the algorithm parameter $\alpha$ for the $\alpha$-Lloyd's clustering algorithm family of \cite{balcan2018data} is $o_T(1)+2V\sqrt{m}+O(\sqrt{m})$. \end{Thm} \begin{proof} We start by applying Theorem 4 from \cite{balcan2018data} to an arbitrary $\alpha$-interval $[\alpha_0,\alpha_0+\epsilon]\subseteq[0,D]$ of length $\epsilon$. The expected number of discontinuities (expectation under the internal randomness of the algorithm when sampling successive centers) is at most $$D(m,\epsilon)=O(nk \log(n) \log(\max\{(\alpha_0+\epsilon)/\alpha_0,(\alpha_0+\epsilon)\log R\})),$$ where $R$ is an upper bound on the ratio between any pair of non-zero distances. Considering cases $\alpha_0\lessgtr\frac{1}{\log R}$ and using the inequality $\log(1+x)\le x$ for $x\ge 0$, we get that there are, in expectation, at most $O(\epsilon nk \log n \log R)$ discontinuities in any interval of length $\epsilon$. Theorem \ref{thm:dispersion-recipe} now implies $\frac{1}{2}$-dispersion using the recipe from \cite{balcan2020semi}.
The task-averaged regret bound follows from Theorem \ref{thm:tar}. \end{proof} \subsection{Integer quadratic programming (IQP)} The objective is to maximize a quadratic function $z^TAz$ for $A$ with non-negative diagonal entries, subject to $z\in\{\pm1\}^n$. In the classic Goemans-Williamson algorithm \cite{goemans1995improved} one solves an SDP relaxation $U^TAU$ where columns $u_i$ of $U$ are unit vectors. $u_i$ are then rounded to $\{\pm 1\}$ by projecting on a vector $Z$ drawn according to the standard Gaussian, and using $\texttt{sgn}(\langle u_i,Z\rangle)$. A simple parametric family is $s$-linear rounding, where the rounding is as before if $|\langle u_i,Z\rangle|>s$, but otherwise uses probabilistic rounding to round $u_i$ to 1 with probability $\frac{1+(\langle u_i,Z\rangle)/s}{2}$. The dispersion analysis of the problem from \cite{balcan2018dispersion} and the general recipe from \cite{balcan2020semi} imply that our results yield low task-averaged regret for learning the parameter of the $s$-linear rounding algorithms. \begin{Thm} Consider instances of IQP given by matrices $A_{i,j}$ and rounding vectors $Z_{i,j}\sim \mathcal{N}_n$ for $i\in[m],j\in[T]$. Then the asymptotic task-averaged regret for learning the algorithm parameter $s$ for $s$-linear rounding is $o_T(1)+2V\sqrt{m}+O(\sqrt{m})$. \end{Thm} \begin{proof} As noted in \cite{balcan2018dispersion}, since $Z_{i,j}$ are normal, the locations of discontinuities $s=|\langle u_i,Z\rangle|$ are distributed with a $\sqrt{\frac{2}{\pi}}$-bounded density. Thus in any interval of length $\epsilon$, we have in expectation at most $\epsilon\sqrt{\frac{2}{\pi}}$ discontinuities. Theorem \ref{thm:dispersion-recipe} together with the general recipe from \cite{balcan2020semi} implies $\frac{1}{2}$-dispersion. The task-averaged regret bound is now a simple application of Theorem \ref{thm:tar}. \end{proof} Our results are an improvement over prior work, which has only considered iid and (single-task) online learning settings.
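For intuition, one round of $s$-linear rounding for a single coordinate can be sketched as follows (our own sketch under the description above; the function name is ours):

```python
import random

def s_linear_round(u_dot_z, s, rng=random):
    """Round one SDP coordinate given its projection <u_i, Z>: keep the sign
    deterministically outside [-s, s], otherwise round to +1 with
    probability (1 + <u_i, Z>/s) / 2."""
    if abs(u_dot_z) > s:
        return 1 if u_dot_z > 0 else -1
    return 1 if rng.random() < (1 + u_dot_z / s) / 2 else -1
```

As a function of $s$, the behavior for a fixed coordinate changes discontinuously exactly at $s=|\langle u_i,Z\rangle|$, which is the discontinuity location used in the dispersion argument.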
Similar improvements can be obtained for auction design, as described below. We illustrate this using a relatively simple auction, but the same idea applies to extensive classes of auctions as studied in \cite{balcan2018general}. \subsection{Posted price mechanisms with additive valuations} There are $m$ items and $n$ bidders with valuations $v_j(b_i),j\in[n],i\in[2^m]$ for all $2^m$ bundles of items. We consider additive valuations, which satisfy $v_j(b)=\sum_{i\in b}v_j(\{i\})$. The objective is to maximize the social welfare (the sum of buyer valuations). If the item values for each buyer have $\kappa$-bounded distributions, then the corresponding social welfare is dispersed and our results apply. \begin{Thm} Consider instances of posted price mechanism design problems with additive buyers and $\kappa$-bounded marginals of item valuations. Then the asymptotic task-averaged regret for learning the price which maximizes the social welfare is $o_T(1)+2V\sqrt{m}+O(\sqrt{m})$. \end{Thm} \begin{proof} As noted in \cite{balcan2018dispersion}, the locations of discontinuities are along axis-parallel hyperplanes (buyer $j$ will be willing to buy item $i$ at a price $p_i$ if and only if $v_j(\{i\}) \ge p_i$; each buyer-item pair in each instance corresponds to a hyperplane). Thus for any pair of points $p,p'$ (corresponding to prices) at distance $\epsilon$, we have in expectation at most $\epsilon\kappa mn$ discontinuities along any axis-aligned path joining $p$ and $p'$, since discontinuities for an item can only occur along the segment for the axis corresponding to that item. Theorem \ref{thm:dispersion-recipe} now implies $\frac{1}{2}$-dispersion. The task-averaged regret bound is now a simple application of Theorem \ref{thm:tar}.
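A minimal simulation of this mechanism helps visualize the discontinuity structure (our own sketch; it assumes buyers arrive in a fixed order and ties break toward buying, which the text does not specify):

```python
def social_welfare(prices, valuations):
    """Additive buyers arrive in order; each buys every still-available item
    whose posted price is at most the buyer's value for it."""
    sold, welfare = set(), 0.0
    for vals in valuations:              # vals[i]: this buyer's value for item i
        for i, (v, p) in enumerate(zip(vals, prices)):
            if i not in sold and v >= p:
                sold.add(i)
                welfare += v
    return welfare

# welfare jumps when a price p_i crosses some value v_j({i}):
# these crossings are the axis-parallel hyperplanes in the proof above
print(social_welfare([0.5, 0.5], [[0.6, 0.4], [0.7, 0.9]]))  # 0.6 + 0.9 = 1.5
```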
\end{proof} \section{Additional experiments}\label{app: experiment} \subsection{Number of training tasks needed for meta-learning} We also examine the number of training tasks that our meta-learning procedure needs to obtain improvements over the single-task baseline. We use a single test task, and a variable number of training tasks (0 through 10) to meta-learn the initialization. We use the same settings as in Section \ref{sec:experiments}, except the meta-learning experiments have been averaged over 20 iterations (to average over randomization in the algorithms). In Figure \ref{fig: regret vs meta-updates}, we plot the average regret against number of meta-updates performed before starting the test task, and compare against the single-task baselines. We observe gains with meta-learning with just $T=10$ tasks for the Omniglot dataset, and with even a single task in the Gaussian mixture dataset. The latter is likely due to a very high degree of task similarity across all the tasks (examined below), so learning on any task transfers very well to another task. \begin{figure}[!h] \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{omni-meta.pdf} \caption{Omniglot} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{gauss-meta.pdf} \caption{Gaussian mixture} \end{subfigure} \caption{Average regret vs. number of training tasks for meta-learning.} \label{fig: regret vs meta-updates} \end{figure} \subsection{Task similarity and dispersion} We also examine the task similarity of the different tasks by plotting the optimal values $\alpha^*_t$ of the clustering parameter $\alpha$ and the corresponding balls $\mathcal B(\alpha_t^\ast,m^{-\beta})$ used in our definition of task similarity (Figure \ref{fig: task similarity}). 
\begin{figure}[!h] \centering \begin{subfigure}[b]{0.35\textwidth} \centering \includegraphics[width=\textwidth]{omni-ts.pdf} \caption{Omniglot} \end{subfigure} \begin{subfigure}[b]{0.35\textwidth} \centering \includegraphics[width=\textwidth]{gauss-ts.pdf} \caption{Gaussian mixture} \end{subfigure} \begin{subfigure}[b]{0.35\textwidth} \centering \includegraphics[width=\textwidth]{knapsack-ts.pdf} \caption{Knapsack} \end{subfigure} \caption{Location of optimal parameter values for the training tasks.} \label{fig: task similarity} \end{figure} The intervals of the parameter induced by these balls correspond to the discretization used by Algorithm \ref{alg:ftrl}. We notice a stronger correlation in task similarity for the Gaussian mixture clustering tasks, which implies that meta-learning is more effective here (both in terms of learning test tasks faster, and with lower regret). For knapsack the task similarity is also high, but it turns out that for our dataset there are very `sharp peaks' at the optima of the total knapsack values as a function of the parameter $\rho$. So even though meta-learning helps us get within a small ball of the optima, a few steps are still needed to converge and we do not see the single-shot benefits of meta-learning as we do for the Gaussian clustering experiment. \begin{figure}[!h] \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{gauss-cluster.png} \caption{Clustering (Gaussian mixture dataset)} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{knapsack.png} \caption{Greedy Knapsack} \end{subfigure} \caption{Average performance (over algorithm randomization) for a few tasks as a function of the configuration parameter. 
This explains why, despite high task similarity in either case, few-shot meta-learning works better for the Gaussian mixture clustering.} \label{fig: average performance} \end{figure} \section{Conclusion} In this paper we studied the initialization-based meta-learning of piecewise-Lipschitz functions, demonstrating how online convex optimization over an adaptive discretization can find an initialization that improves the performance of the exponential forecaster across tasks, assuming the tasks have related optima. We then applied this result in two settings: online configuration of clustering algorithms and adversarial robustness in online learning. For the latter we introduced a dispersion-based understanding of robustness that we believe to be of independent interest. In addition, there are further interesting applications of our work to other algorithm configuration problems. \section*{Acknowledgments} This material is based on work supported in part by the National Science Foundation under grants CCF-1535967, CCF-1910321, IIS-1618714, IIS-1705121, IIS-1838017, IIS-1901403, IIS-2046613, and SES-1919453; the Defense Advanced Research Projects Agency under cooperative agreements HR00112020003 and FA875017C0141; an AWS Machine Learning Research Award; an Amazon Research Award; a Bloomberg Research Grant; a Microsoft Research Faculty Fellowship; an Amazon Web Services Award; a Facebook Faculty Research Award; funding from Booz Allen Hamilton Inc.; and a Block Center Grant. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of any of these funding agencies. \section{Preliminaries and initialization-dependent learning of dispersed functions}\label{sec:dispersion} In this section we introduce our setup and notation for online learning of piecewise-Lipschitz functions in a multi-task environment. 
We then generalize existing results for the single-task setting in order to obtain within-task regret bounds that depend on both the initialization and the task data. This is critical for both defining a notion of task similarity and devising a meta-learning procedure. \subsection{Meta-learning setup} Following past setups \cite{alquier2017lifelong,denevi2019meta,khodak2019adaptive}, for some $T,m>0$ and all $t\in[T]$ and $i\in[m]$ we consider a meta-learner faced with a sequence of $Tm$ loss functions $\ell_{t,i}:C\mapsto[0,1]$ over a compact subset $C\subset\mathbb R^d$ that lies within a ball $\mathcal B(\rho,R)$ of radius $R$ around some point $\rho\in\mathbb R^d$. Here we use the notation $[n]=\{1,\dots,n\}$. Before observing each loss function $\ell_{t,i}$ the meta-learner must pick an element $\rho_{t,i}\in C$, after which it suffers a loss or cost $\ell_{t,i}(\rho_{t,i})$. For a fixed $t$, the subsequence $\ell_{t,1},\dots,\ell_{t,m}$ defines a {\bf task} for which we expect a single element $\rho_t^\ast\in C$ to do well, and thus we will use the {\bf within-task regret} on task $t$ to describe the quantity \begin{equation}\label{eq:regret} \*R_{t,m}=\sum_{i=1}^m\ell_{t,i}(\rho_{t,i})-\ell_{t,i}(\rho_t^\ast)\quad\textrm{where}\quad\rho_t^\ast\in\argmin_{\rho\in C}\sum_{i=1}^m\ell_{t,i}(\rho) \end{equation} In the single-task setting the goal is usually to show that $\*R_{t,m}$ is sublinear in $m$, i.e. that the average loss decreases with more rounds. A key point here is that the functions we consider can have numerous global optima. In this work we will assume, after going through the $m$ rounds of task $t$, that we have oracle access to a single fixed optimum for $t$, which we will refer to using $\rho_t^\ast$ and use in both our algorithm and to define the task-similarity. Note that in the types of applications we are interested in---piecewise-Lipschitz functions---the complexity of computing optima scales with the number of discontinuities.
In the important special case of piecewise-constant functions, this dependency becomes logarithmic \cite{cohen2017online}. Thus this assumption does not affect the usefulness of the result. Our goal will be to improve the guarantees for regret in the single-task case by using information obtained from solving multiple tasks. In particular, we expect average performance across tasks to improve as we see more tasks; to phrase this mathematically we define the {\bf task-averaged regret} \begin{equation}\label{eq:tar} \*{\bar R}_{T,m}=\frac1T\sum_{t=1}^T\*R_{t,m}=\frac1T\sum_{t=1}^T\sum_{i=1}^m\ell_{t,i}(\rho_{t,i})-\ell_{t,i}(\rho_t^\ast) \end{equation} and claim improvement over single-task learning if in the limit of $T\to\infty$ it is smaller than $\*R_{t,m}$. Note that for simplicity in this work we assume all tasks have the same number of rounds within-task, but as with past work our results are straightforward to extend to the more general setting. \subsection{Learning piecewise-Lipschitz functions} We now turn to our target functions and within-task algorithms for learning them: piecewise-Lipschitz losses, i.e. functions that are $L$-Lipschitz w.r.t. the Euclidean norm everywhere except on measure zero subsets of the space; here they may have arbitrary jump discontinuities so long as they remain bounded in $[0,1]$. Apart from being a natural setting of interest due to its generality compared to past work on meta-learning, this class of functions has also been shown to have important applications in data-driven algorithm configuration \cite{balcan2018dispersion}; there these functions represent the cost, e.g. an objective value or time-complexity, of algorithms for difficult problems such as integer programming, auction design, and clustering. This literature has also shown lower bounds demonstrating that no-regret learning of piecewise-Lipschitz functions is impossible in general, necessitating assumptions about the sequence.
One such condition is {\em dispersion}, which requires that the discontinuities are not too concentrated. \newpage \begin{Def}[\cite{balcan2018dispersion}]\label{def:dis} The sequence of random loss functions $\ell_1, \dots,\ell_m$ is said to be $\beta$-{\bf dispersed} with Lipschitz constant $L$ if, for all $m$ and for all $\epsilon\ge m^{-\beta}$, we have that, in expectation over the randomness of the functions, at most $\tilde{O}(\epsilon m)$ functions (the soft-O notation suppresses dependence on quantities beside $\epsilon,m$ and $\beta$, as well as logarithmic terms) are not $L$-Lipschitz for any pair of points at distance $\epsilon$ in the domain $C$. That is, for all $m$ and for all $\epsilon\ge m^{-\beta}$, \begin{equation} \mathbb E\left[ \max_{\begin{smallmatrix}\rho,\rho'\in C\\\|\rho-\rho'\|_2\le\epsilon\end{smallmatrix}} \big\lvert \{ i\in[m] \mid\ell_i(\rho)-\ell_i(\rho')>L\|\rho-\rho'\|_2\} \big\rvert \right] \le \tilde{O}(\epsilon m) \end{equation} \end{Def} For a sequence of $m$ $\beta$-dispersed loss functions, with the initial distribution $w_1$ set to the uniform distribution over $C$ and the step size parameter optimized, the exponential forecaster presented in Algorithm~\ref{alg:ef} achieves sublinear regret $\tilde{O}(\sqrt{dm\log(Rm)}+(L+1)m^{1-\beta})$. While this result achieves a no-regret procedure, its lack of dependence on both the task-data and on the chosen initialization makes it difficult to meta-learn. In the following theorem, we generalize the regret bound for the exponential forecaster to make it data-dependent and hyperparameter-dependent: \begin{Thm}\label{thm:exp-forc-meta} Let $\ell_1,\dots,\ell_m: C \mapsto [0, 1]$ be any sequence of piecewise $L$-Lipschitz functions that are $\beta$-dispersed. Suppose $C \subset \mathbb R^d$ is contained in a ball of radius $R$.
The exponentially weighted forecaster (Algorithm \ref{alg:ef}) has expected regret $\*R_m\le m\lambda +\frac{\log (1/Z)}{\lambda}+\tilde{O}((L+1)m^{1-\beta})$, where $Z=\frac{\int_{\mathcal B(\rho^*,m^{-\beta})}w(\rho)d\rho}{\int_{C}w(\rho)d\rho}$ for $\rho^*$ the optimal action in hindsight. \end{Thm} The proof of this result adapts past analyses of Algorithm \ref{alg:ef}; setting the step-size $\lambda$ appropriately recovers the previously mentioned bound. The new bound is useful due to its explicit dependence on both the initialization $w$ and the optimum in hindsight via the $\log(1/Z)$ term. Assuming $w$ is a (normalized) distribution, this effectively measures the overlap between the chosen initialization and a small ball around the optimum; we thus call $$-\log Z=-\log\frac{\int_{\mathcal B(\rho^\ast,m^{-\beta})}w(\rho)d\rho}{\int_Cw(\rho)d\rho}$$ the {\bf negative log-overlap} of the initialization $w(\cdot)$ with the optimum $\rho^*$. We also obtain an asymptotic lower bound on the expected regret of any algorithm by extending the argument of \cite{balcan2020learning} to the multi-task setting. We show that for finite $D^*$ we must suffer $\Tilde{\Omega}(m^{1-\beta})$ regret, which limits the improvement we can hope to achieve from task-similarity. \begin{Thm}\label{thm:dispersion-lb} There is a sequence of piecewise $L$-Lipschitz $\beta$-dispersed functions $\ell_{t,i}: [0,1] \mapsto [0, 1]$, whose optimal actions in hindsight $\argmin_{\rho}\sum_{i=1}^m\ell_{t,i}(\rho)$ are contained in some fixed ball of diameter $D^*$, for which any algorithm has expected regret $\*R_m\ge \tilde{\Omega}(m^{1-\beta})$. \end{Thm} \begin{algorithm}[!t] \caption{Exponential Forecaster} \label{alg:ef} \begin{algorithmic}[1] \STATE {\bfseries Input:} step size parameter $\lambda \in (0, 1]$, initialization $w:C\rightarrow \mathbb R_{\ge 0}$.
\STATE{Initialize $w_1=w$} \FOR{$i=1,2,\dots,m$} \STATE{$W_i:=\int_{C}w_i(\rho)d\rho$} \STATE{Sample $\rho_i$ with probability proportional to $w_i(\rho_i)$, i.e. with probability $p_{i}(\rho_i)=\frac{w_i(\rho_i)}{W_i}$} \STATE{Suffer $\ell_i(\rho_i)$ and observe $\ell_i(\cdot)$} \STATE{For each $\rho\in C, \text{ set }w_{i+1}(\rho)=e^{-\lambda\ell_i(\rho)}w_{i}(\rho)$} \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Task-similarity} Before proceeding to our discussion of meta-learning, we first discuss what we might hope to achieve with it; specifically, we consider what a reasonable notion of task-similarity is in this setting. Note that the Theorem~\ref{thm:exp-forc-meta} regret bound has three terms, of which two depend on the hyperparameters and the last is due to dispersion and cannot be improved via better settings. Our focus will thus be on improving the first two terms, which are the dominant ones due to the dependence on the dimensionality and the distance from the initialization encoded in the negative log overlap. In particular, when the initialization is the uniform distribution then this quantity depends inversely on the size of a small ball around the optimum, which may be quite small. Via meta-learning we hope to assign more of the probability mass of the initializer to areas close to the optimum, which will decrease these terms. On average, rather than a dependence on the volume of a small ball we aim to achieve a dependence on the {\bf average negative log-overlap} \begin{equation}\label{eq:tasksim} V^2=-\min_{w:C\mapsto\mathbb R_{\ge0},\int_Cw(\rho)d\rho=1}\frac1T\sum_{t=1}^T\log\int_{\mathcal B(\rho_t^\ast,m^{-\beta})}w(\rho)d\rho \end{equation} which can be much smaller if the task optima $\rho_t^\ast$ are close together; for example, if they are the same then $V=0$, corresponding to assigning all the initial weight within the common ball $\mathcal B(\rho^\ast,m^{-\beta})$ around the shared optima. 
This is also true if $\operatorname{vol}(\cap_{t\in T}\mathcal B(\rho_t^\ast,m^{-\beta}))>0$, as one can potentially initialize with all the weight in the intersection of the balls. On the other hand, if $\operatorname{vol}(\cap_{t\in T}\mathcal B(\rho_t^\ast,m^{-\beta}))=0$, then $V>0$. For example, if a $p$-fraction of tasks have optima at $\rho_0$ and the rest at $\rho_1$ with $\|\rho_0-\rho_1\|>2m^{-\beta}$, the task similarity is given by $V^2=H_b(p)=-p\log p-(1-p)\log(1-p)$, the binary entropy function, since the optimal initializer places mass $p$ and $1-p$ on the two respective balls. The settings of Algorithm~\ref{alg:ef} that achieve the minimum in the definition of $V$ are directly related to $V$ itself: the optimal initializer is the distribution achieving $V$ and the optimal step-size is $V/\sqrt m$. Note that while the explicit definition requires computing a minimum over a set of functions, the task-similarity can be computed using the discretization constructed in Section~\ref{sec:meta}. \section{Meta-learning for data-driven algorithm design} We demonstrate the utility of our bounds in a series of applications across two general areas: data-driven algorithm design \cite{balcan2020data} and robust learning. This section focuses on the former and demonstrates how our results imply guarantees for meta-learning the tuning of solvers for several difficult combinatorial problems arising from the theory of computing. We also demonstrate the practical utility of our approach for tuning clustering algorithms on real and synthetic datasets. \subsection{Instantiations for tuning combinatorial optimization algorithms} Algorithm configuration for combinatorial optimization algorithms involves learning algorithm parameters from multiple instances of combinatorial problems \cite{gupta2017pac,balcan2017learning,balcan2020data}.
For well-known problems like MWIS (maximum weighted independent set), IQP (integer quadratic programming), and mechanism design for auctions, the algorithmic performance on a fixed instance is typically a piecewise Lipschitz function of the algorithm parameters. Prior work has looked at learning these parameters in the distributional setting (i.e. assuming iid draws of problem instances) \cite{balcan2017learning} or the online setting where the problem instances may be adversarially drawn \cite{balcan2018dispersion,balcan2020learning}. On the other hand, instantiating our results for these problems provides upper bounds for much more realistic settings where different tasks may be related, and our bounds improve with this relatedness. We demonstrate how to apply our results to several combinatorial problems under mild smoothness assumptions. The key idea is to show that if the inputs come from a smooth distribution, the algorithmic performance is dispersed (as a sequence of functions of the algorithm parameters). We leverage known results about the MWIS problem to show $\frac{1}{2}$-dispersion, which together with Theorem \ref{thm:tar} implies that our bound on the task-averaged regret improves with task similarity $V$. {\bf The MWIS problem.} In MWIS, there is a graph $G=(V,E)$ and a weight $w_v\in\mathbb R^+$ for each vertex $v\in V$. The goal is to find a set of non-adjacent vertices with maximum total weight. The problem is $NP$-hard and in fact does not admit any constant-factor polynomial-time approximation algorithm (unless $P=NP$). \cite{gupta2017pac} propose a greedy heuristic family, which selects vertices greedily based on the largest value of $w_v / (1 + \text{deg}(v))^\rho$, where $\text{deg}(v)$ is the degree of vertex $v$, and removes neighbors of the selected vertex before selecting the next vertex.
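A sketch of this heuristic follows (our own code; we recompute degrees on the surviving subgraph, one common variant of the rule, which the description leaves implicit):

```python
def greedy_mwis(weights, neighbors, rho):
    """Repeatedly select the surviving vertex maximizing w_v/(1+deg(v))**rho,
    then delete it and its neighbors; returns the chosen independent set."""
    alive = set(range(len(weights)))
    chosen = []
    while alive:
        v = max(alive, key=lambda u: weights[u] / (1 + len(neighbors[u] & alive)) ** rho)
        chosen.append(v)
        alive -= neighbors[v] | {v}
    return chosen

# path graph 0-1-2: rho = 1 prefers the two endpoints over the heavier middle vertex
w = [2.2, 2.5, 2.0]
nbrs = [{1}, {0, 2}, {1}]
print(sorted(greedy_mwis(w, nbrs, rho=1)))  # [0, 2], total weight 4.2
print(sorted(greedy_mwis(w, nbrs, rho=0)))  # [1], plain max-weight greedy
```

Already on this three-vertex example the returned set changes discontinuously in $\rho$, which is exactly the piecewise-constant structure exploited in the analysis.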
For this algorithm family, we can learn the best parameter $\rho$ provided pairs of vertex weights have a joint $\kappa$-bounded distribution, and Theorem \ref{thm:tar} implies regret bounds that improve with task similarity. We use the recipe from \cite{balcan2020semi} to establish dispersion. \begin{Thm}\label{thm:mwis-tar} Consider instances of MWIS with all vertex weights in $(0, 1]$ and for each instance, every pair of vertex weights has a $\kappa$-bounded joint distribution. Then the asymptotic task-averaged regret for learning the algorithm parameter $\rho$ is $o_T(1)+2V\sqrt{m}+O(\sqrt{m})$. \end{Thm} \begin{proof}[Proof sketch] The loss function is piecewise constant with discontinuities corresponding to $\rho$ such that $w_v / (1 + \text{deg}(v))^\rho=w_u / (1 + \text{deg}(u))^\rho$ for a pair of vertices $u,v$. \cite{balcan2018dispersion} show that the discontinuities have $(\kappa \ln n)$-bounded distributions, where $n$ is the number of vertices. This implies that in any interval of length $\epsilon$, we have in expectation at most $\epsilon\kappa \ln n$ discontinuities. Using this in the dispersion recipe from \cite{balcan2020semi} implies $\frac{1}{2}$-dispersion, which in turn implies the desired regret bound by applying Theorem \ref{thm:tar}. \end{proof} Similar results may be obtained for other combinatorial problems including knapsack, $k$-center clustering, IQP and auction design (see Appendix \ref{app: combinatorial} for full details). We further show instantiations of our results for knapsack and $k$-center clustering, for which we will empirically validate our proposed methods in the next sections. {\bf Greedy Knapsack.} Knapsack is a well-known NP-complete problem. We are given a knapsack with capacity $\texttt{cap}$ and items $i\in[m]$ with sizes $w_i$ and values $v_i$.
The goal is to select a subset $S$ of items to add to the knapsack such that $\sum_{i\in S}w_i\le \texttt{cap}$ while maximizing the total value $\sum_{i\in S}v_i$ of selected items. The classic greedy heuristic to add items in decreasing order of $v_i/w_i$ gives a 2-approximation. We consider a generalization to use $v_i/w_i^{\rho}$ proposed by \cite{gupta2017pac} for $\rho\in[0,10]$. For example, for the value-weight pairs $\{(0.99,1),(0.99,1),(1.01,1.01)\}$ and capacity $\texttt{cap}=2$ the classic heuristic $\rho=1$ gives value $1.01$ but using $\rho=3$ gives the optimal value $1.98$. We can learn this optimal value of $\rho$ from similar tasks, and obtain formal guarantees similar to Theorem \ref{thm:mwis-tar} (proof in Appendix \ref{app: combinatorial}). \begin{Thm} Consider instances of the knapsack problem given by bounded weights $w_{i,j}\in[1,C]$ and $\kappa$-bounded independent values $v_{i,j}\in[0,1]$ for $i\in[m],j\in[T]$. Then the asymptotic task-averaged regret for learning the algorithm parameter $\rho$ for the greedy heuristic family described above is $o_T(1)+2V\sqrt{m}+O(\sqrt{m})$. \end{Thm} {\bf $k$-center clustering.} We consider the parameterized $\alpha$-Lloyd's algorithm family introduced in \cite{balcan2018data}. In the seeding phase, each point $x$ is sampled with probability proportional to $\min_{c\in C}d(x, c)^{\alpha}$, where $d(\cdot,\cdot)$ is the distance metric and $C$ is the set of centers chosen so far. The family contains an algorithm for each $\alpha\in[0,\infty)\cup \{\infty\}$, and includes popular clustering heuristics like vanilla $k$-means (random initial centers, for $\alpha=0$), $k$-means++ (corresponding to $\alpha=2$) and farthest-first traversal ($\alpha=\infty$). The performance of the algorithm is measured using the Hamming distance to the optimal clustering, and is a piecewise constant function of $\alpha$.
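The seeding phase just described can be sketched as follows (our own illustrative code for finite $\alpha$, ignoring edge cases such as duplicate points; for $\alpha=\infty$ one would instead pick a farthest point deterministically):

```python
import random

def alpha_lloyds_seeding(points, k, alpha, dist, rng=random):
    """Pick k initial centers, each new one drawn with probability
    proportional to min_{c in C} d(x, c)**alpha
    (alpha=0: uniform random seeding, alpha=2: k-means++)."""
    centers = [rng.choice(points)]
    while len(centers) < k:
        weights = [min(dist(x, c) for c in centers) ** alpha for x in points]
        r, acc = rng.random() * sum(weights), 0.0
        for x, wx in zip(points, weights):
            acc += wx
            if acc >= r:
                centers.append(x)
                break
    return centers
```

Because the sampling weights vary continuously in $\alpha$ while the realized seeding (and hence the final clustering) is discrete, the Hamming loss is piecewise constant in $\alpha$, matching the description above.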
Our meta-learning result can be instantiated for this problem even without smoothness assumptions (simply leveraging the smoothness induced by the internal randomness of the clustering algorithm, proof in Appendix \ref{app: combinatorial}). \begin{Thm} Consider instances of the $k$-center clustering problem on $n$ points, with Hamming loss $l_{i,j}$ for $i\in[m],j\in[T]$ against some (unknown) ground truth clustering. Then the asymptotic task-averaged regret for learning the algorithm parameter $\alpha$ for the $\alpha$-Lloyd's clustering algorithm family of \cite{balcan2018data} is $o_T(1)+2V\sqrt{m}+O(\sqrt{m})$. \end{Thm} In the following section we look at applications of our results through experiments for the knapsack and $k$-center clustering problems. \subsection{Experiments for greedy knapsack and $k$-center clustering}\label{sec:experiments} We design experiments to evaluate our new meta-initialization algorithm for data-driven design for knapsack and clustering problems on real and simulated data. Our experiments show the usefulness of our techniques in learning a sequence of piecewise-Lipschitz functions. For our experiments, we generate a synthetic dataset of knapsack instances described as follows. For each problem instance of each task, we have $\texttt{cap}=100$ and $m=50$. We have $10$ `heavy' items with $w_i\sim \mathcal{N}(27,0.5)$ and $v_i\sim \mathcal{N}(27,0.5)$, and $40$ items with $w_i\sim \mathcal{N}(19+w_t,0.5)$ and $v_i\sim \mathcal{N}(18,0.5)$, where $w_t\in[0,2]$ is task-dependent. We also consider the parameterized $\alpha$-Lloyd's algorithm family introduced in \cite{balcan2018data}. The performance of the algorithm is measured using the Hamming loss relative to the optimal clustering, and is a piecewise constant function of $\alpha$. We can compute the pieces of this function for $\alpha\in[0,10]$ by iteratively computing the subset of parameter values where a candidate point can be the next center. 
We use the small split of the {\it Omniglot} dataset \cite{lake2015human}, and create clustering tasks by drawing random samples consisting of five characters each, where four characters are constant throughout. We also create a Gaussian mixture binary classification dataset where each class is a 2D Gaussian distribution consisting of 100 points each, with covariance matrix $\begin{pmatrix} \sigma & 0\\ 0 & 2\sigma \end{pmatrix}$ and centers $(0,0)$ and $(d\sigma,0)$. We pick $d\in[2,3]$ to create different tasks. For each dataset we learn using 30 instances each of 10 training tasks and evaluate average loss over 5 test tasks. We perform 100 iterations to average over the randomization of the clustering algorithm and the exponential forecaster algorithm. We perform meta-initialization with parameters $\gamma=\eta=0.01$ (no hyperparameter search performed). The step-size is set to minimize the regret term in Theorem \ref{thm:exp-forc-meta}, and is not meta-learned. The relative improvement in task-averaged regret due to meta-learning in our formal guarantees depends on the task-similarity $V$ and how it compares to the dispersion-related $O(m^{1-\beta})$ term, and can be significant when the latter is small. Our results in Table~\ref{table: meta initialization} show that meta-learning an initialization, i.e. a distribution over the algorithm parameter, for the exponential forecaster in this setting yields improved performance on each dataset. We observe this for both the one-shot and five-shot settings, i.e. where the number of within-task iterations of the test task is one and five, respectively. The benefit of meta-learning is most pronounced for the Gaussian mixture case (well-dispersed and similar tasks), and gains for Omniglot may increase with more tasks (dispersed but less similar tasks). For our knapsack dataset, the relative gains are smaller (similar tasks, but less dispersed). See Appendix \ref{app: experiment} for further experiments that lead us to these insights.
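The Gaussian mixture tasks described above can be generated, for instance, as follows (our own sketch; since the covariance is diagonal, each coordinate is sampled independently with standard deviations $\sqrt{\sigma}$ and $\sqrt{2\sigma}$):

```python
import random

def gaussian_mixture_task(n=100, sigma=1.0, d=2.5, rng=random):
    """Two classes of n points each: class 0 centered at (0, 0), class 1 at
    (d*sigma, 0), both with diagonal covariance diag(sigma, 2*sigma)."""
    def cluster(cx):
        return [(rng.gauss(cx, sigma ** 0.5), rng.gauss(0.0, (2 * sigma) ** 0.5))
                for _ in range(n)]
    points = cluster(0.0) + cluster(d * sigma)
    labels = [0] * n + [1] * n
    return points, labels
```

Varying $d\in[2,3]$ across tasks, as in our setup, shifts the second center and thereby controls how related the clustering tasks are.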
\begin{table*}[t] \centering \caption{Effect of meta-initialization on few-shot learning of algorithmic parameters. Performance is computed as a fraction of the average value (Hamming accuracy, or knapsack value) of the offline optimum parameter.} \label{table: meta initialization} \resizebox{0.98\textwidth}{!}{% \begin{tabular}{c||cc|cc|cc} \toprule Dataset & \multicolumn{2}{c}{Omniglot} & \multicolumn{2}{c}{Gaussian Mixture} & \multicolumn{2}{c}{Knapsack} \\ & One-shot & Five-shot & One-shot & Five-shot& One-shot & Five-shot \\ \midrule \midrule Single task & $88.67\pm0.47\%$ & $95.02\pm0.19\%$ & $90.10\pm1.10\%$ & $91.43\pm0.44\%$ &$84.74\pm0.29\%$&$98.89\pm0.17\%$\\ Meta-initialized & $89.65\pm0.49\%$ & $96.05\pm0.15\%$ & $95.76\pm0.60\%$ & $96.39\pm0.27\%$&$85.66\pm0.57\%$&$99.12\pm0.15\%$ \\ \bottomrule \end{tabular} } \end{table*} \section{Introduction}\label{sec:intro} While learning-to-learn, or {\em meta-learning}, has long been an object of study \cite{thrun1998ltl}, in recent years it has gained significant attention as a multi-task paradigm for developing algorithms for learning in dynamic environments, from multiple sources of data, and in federated settings. Such methods focus on using data gathered from multiple tasks to improve performance when faced with data from a new, potentially related task. Among the more popular approaches to meta-learning is {\em initialization-based} meta-learning, in which the meta-learner uses multi-task data to output an initialization for an iterative algorithm such as stochastic gradient descent (SGD) \cite{finn2017maml}. The flexibility of this approach has led to its widespread adoption in areas such as robotics \cite{duan2017imitation} and federated learning \cite{chen2018fedmeta}, and to a growing number of attempts to understand it, both empirically and theoretically \cite{denevi2019ltlsgd,khodak2019adaptive,fallah2020meta,raghu2020anil,saunshi2020meta}. 
However, outside some stylized setups our learning-theoretic understanding of how to meta-learn an initialization is largely restricted to the convex Lipschitz setting. We relax both assumptions to study the meta-learning of online algorithms over piecewise-Lipschitz functions, which can be nonconvex and highly discontinuous. As no-regret online learning over such functions is impossible in general, we study the case of piecewise-Lipschitz functions whose discontinuities are {\em dispersed}, i.e. which do not concentrate in any small compact subset of the input domain \cite{balcan2018dispersion}. Such functions arise frequently in {\em data-driven algorithm design}, in which the goal is to learn the optimal parameter settings of algorithms for difficult (often NP-Hard) problems over a distribution or sequence of instances \cite{balcan2020data}; for example, a small change to the metric used to determine cluster linkage can lead to a discontinuous change in the classification error \cite{balcan2019learning}. In this paper, we also demonstrate that such losses are relevant in the setting of adversarial robustness, where we introduce a novel online formulation. For both cases, the associated problems are often solved across many time periods or for many different problem domains, resulting in natural multi-task structure that we might hope to use to improve performance. To the best of our knowledge, ours is the first theoretical study of meta-learning in both of these application settings. In the single-task setting the problem of learning dispersed functions can be solved using simple methods such as the exponentially-weighted forecaster. To design an algorithm for learning to initialize online learners in this setting, we propose a method that optimizes a sequence of data-dependent upper-bounds on the within-task regret \cite{khodak2019adaptive}. 
The result is an averaged bound that improves upon the regret of the single-task exponential forecaster so long as there exists an initial distribution that can compactly contain many of the within-task optima of the different tasks. Designing the meta-procedure is especially challenging in our setting because it involves online learning over a set of distributions on the domain. To handle this we study a ``prescient'' form of the classic follow-the-regularized leader (FTRL) scheme that is run over an unknown discretization; we then show the existence of another algorithm that plays the same actions but uses only known information, thus attaining the same regret while being practical to implement. To demonstrate the usefulness of our method, we study this algorithm in two settings. {\bf Multi-task data-driven algorithm design.} We consider data-driven tuning of the parameters of combinatorial optimization algorithms for hard problems such as knapsack and clustering. The likely intractability of these problems has led to several approaches to study them in more realistic settings, such as smoothed analysis \cite{spielman2004smoothed} and data-driven algorithm configuration \cite{balcan2020data}. We view our meta-learning approach as a refinement on the latter in which we allow not only a distribution of instances but multiple distributions of related instances that can help learn a good algorithm. Our setting is more realistic than those considered in prior work. It is more challenging than learning from i.i.d. instances \cite{gupta2017pac,balcan2017learning}, but at the same time less pessimistic than online learning over adversarial problem instances~\cite{balcan2018dispersion}, as it allows us to leverage similarity of problem instances coming from different but related distributions. 
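As a concrete example of such a parameterized family, the seeding step of the $\alpha$-Lloyd's algorithms \cite{balcan2018data} draws each new center with probability proportional to $d(v)^\alpha$, recovering random seeding at $\alpha=0$ and $k$-means++ at $\alpha=2$. A minimal sketch (Euclidean distance in 2-D and the tie-breaking are our assumptions):

```python
import random

def alpha_seed(points, k, alpha, seed=0):
    # d^alpha sampling: each new center is drawn with probability proportional
    # to (distance to the nearest already-chosen center) ** alpha.
    rng = random.Random(seed)
    centers = [points[rng.randrange(len(points))]]
    while len(centers) < k:
        dist = [min(((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ** 0.5
                    for c in centers) for p in points]
        weights = [d ** alpha for d in dist]
        r = rng.random() * sum(weights)
        i = 0
        while r > weights[i]:
            r -= weights[i]
            i += 1
        centers.append(points[i])
    return centers

pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 0.0), (10.0, 1.0), (20.0, 0.0), (20.0, 1.0)]
centers = alpha_seed(pts, k=3, alpha=2.0)
```

Varying $\alpha$ changes which candidate point can be the next center, which is what makes the downstream clustering loss piecewise constant in the parameter.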
We instantiate our bounds theoretically on several problems where the cost functions are piecewise-constant in the tuned parameters, allowing our meta-procedure to learn the right initial distribution for exponential forecasters. This includes well-known combinatorial optimization problems like finding the maximum weighted independent set (MWIS) of vertices on a graph, solving quadratic programs with integer constraints using algorithms based on the celebrated Goemans-Williamson algorithm, and mechanism design for combinatorial auctions. Then we consider experimentally the problem of tuning the right $\alpha$ for the $\alpha$-Lloyd's family of clustering algorithms~\cite{balcan2018data}. In experimental evaluations on two datasets---a synthetic Gaussian mixture model and the well-known Omniglot dataset from meta-learning \cite{lake2015human}---our meta-procedure leads to improved clustering accuracy compared to single-task learning to cluster. The results hold for both one-shot and five-shot clustering tasks. We also study our results for a family of greedy algorithms for the knapsack problem introduced by \cite{gupta2017pac} and obtain similar results for a synthetic dataset. {\bf Online robust meta-learning.} The second instantiation of our meta-learning procedure is to a new notion of adversarial robustness for the setting of online learning, where our results imply robust meta-learning in the presence of outliers. In this setting, the adversary can make (typically small) modifications to some example $x\in\mathcal X$, which can result in potentially large changes to the corresponding loss value $l_h(x)$, where $h\in\mathcal{H}$ is our hypothesis. For instance, consider the well-studied setting of adversarial examples for classification of images using deep neural networks \cite{nguyen2015deep,brendel2020adversarial}. 
Given a neural network $f$, the adversary can perturb a datapoint $x$ to a point $x'$, say within a small $L_p$-ball around $x$, such that $f(x)=f(x')$ but the true label of $x'$ does not match $x$, and therefore $l_f(x)\ne l_f(x')$. In general, under the adversarial influence, we observe a {\it perturbed loss} function $\Tilde{l}_h(x)=l_h(x)+a_h(x)$. Typically we are interested in optimizing both the perturbed loss $\Tilde{l}_h(x)$, i.e. measuring performance relative to optimum for adversarially perturbed losses, and the {\it true loss} $l_h(x)$ (performance on the unobserved, unperturbed loss). For example, in the online learning setting, \cite{agarwal2019online}~consider perturbed loss minimization for linear dynamical systems, while \cite{resler2019adversarial} look at true $\{0,1\}$ loss minimization in the presence of adversarial noise. Our approach ensures that regret for both the perturbed and true loss are small, for piecewise-Lipschitz but dispersed adversaries. \section*{Checklist} 
\begin{enumerate} \item For all authors... \begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \answerYes{} \item Did you describe the limitations of your work? \answerYes{Alongside contributions in context, e.g. the end of Section~\ref{sec:meta}.} \item Did you discuss any potential negative societal impacts of your work? \answerNo{Our concern w.r.t. the negative societal impact of this theoretical work is limited to standard risks associated with ML, e.g. for privacy or fair treatment.} \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \answerYes{} \end{enumerate} \item If you are including theoretical results... \begin{enumerate} \item Did you state the full set of assumptions of all theoretical results? \answerYes{} \item Did you include complete proofs of all theoretical results? \answerYes{} \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? \answerYes{Supplemental material.} \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? \answerYes{} \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? \answerYes{} \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerNA{Experiments run on personal computer (16GB, 2.3 GHz Dual-Core).} \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... 
\begin{enumerate} \item If your work uses existing assets, did you cite the creators? \answerYes{} \item Did you mention the license of the assets? \answerNA{} \item Did you include any new assets either in the supplemental material or as a URL? \answerNA{} \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerNA{} \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? \answerNA{} \end{enumerate} \item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate} \item Did you include the full text of instructions given to participants and screenshots, if applicable? \answerNA{} \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? \answerNA{} \item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? \answerNA{} \end{enumerate} \end{enumerate} \fi \section{An algorithm for meta-learning the initialization and step-size} Having established a single-task algorithm and shown how its regret depends on the initialization and step-size, we move on to meta-learning these hyperparameters. Recall that our goal is to make the task-averaged regret \eqref{eq:tar} small, in particular to improve upon the baseline of repeatedly running Algorithm~\ref{alg:ef} from the uniform distribution, up to $o_T(1)$ terms that vanish as we see more tasks. This accomplishes the meta-learning goal of using multiple tasks to improve upon single-task learning. In this paper, we use the strategy of running online learning algorithms on the data-dependent regret guarantees from above \cite{khodak2019adaptive}. If we can do so with sublinear regret in $T$, then we will improve upon the single-task guarantees up to $o_T(1)$ terms, as desired. 
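As a point of reference for the within-task baseline, the exponentially-weighted forecaster can be sketched on a discretized domain as follows (the grid, toy losses, and step-size are illustrative assumptions, not Algorithm~\ref{alg:ef} verbatim):

```python
import math, random

def exp_forecaster_regret(losses, grid, lam, seed=0):
    # Exponential forecaster on a grid: at each round sample a parameter with
    # probability proportional to exp(-lam * cumulative loss), then observe the
    # entire loss function (full-information setting) and update the weights.
    rng = random.Random(seed)
    cum = [0.0] * len(grid)
    total = 0.0
    for loss in losses:                      # each loss maps rho -> [0, 1]
        weights = [math.exp(-lam * c) for c in cum]
        i = rng.choices(range(len(grid)), weights=weights)[0]
        total += loss(grid[i])
        cum = [c + loss(rho) for c, rho in zip(cum, grid)]
    return total - min(cum)                  # regret vs. best fixed parameter

# toy piecewise-constant losses whose discontinuities are spread out (dispersed)
grid = [i / 100 for i in range(101)]
losses = [(lambda rho, t=t: 1.0 if rho <= 0.3 + 0.002 * t else 0.0)
          for t in range(50)]
regret = exp_forecaster_regret(losses, grid, lam=0.5)
```

The meta-procedure below replaces the uniform initial weights with a learned initialization and tunes lam across tasks.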
Specifically, we are faced with a sequence of regret-upper-bounds $U_t(w,v)=(v+f_t(w)/v)\sqrt m+g(m)$ on nonnegative functions $w$ over $C$ and positive scalars $v>0$. Note that $g(m)$ cannot be improved via meta-learning, so we will focus on learning $w$ and $v$. To do so, we run two online algorithms, one over the functions $f_t$ and the other over $h_t(v)=v+f_t(w_t)/v$, where $w_t$ is set by the first procedure. As shown in the following result, if both procedures have sublinear regret then our task-averaged regret will have the desired properties: \begin{Thm}\label{lem:aruba} Assume each task $t\in[T]$ consists of a sequence of $m$ $\beta$-dispersed piecewise $L$-Lipschitz functions $\ell_{t,i}:C\mapsto[0,1]$. Let $f_t$ and $g$ be functions such that the regret of Algorithm~\ref{alg:ef} run with step-size $\lambda=v\sqrt m$ for $v>0$ and initialization $w:C\mapsto\mathbb R_{\ge0}$ is bounded by $U_t(w,v)=(v+f_t(w)/v)\sqrt m+g(m)$. Suppose we have a procedure that achieves $F_T(w)$ regret w.r.t. any $w:C\mapsto\mathbb R_{\ge0}$ by playing actions $w_t:C\mapsto\mathbb R_{\ge0}$ on $f_t$ and another procedure that achieves $H_T(v)$ regret w.r.t. any $v>0$ by playing actions $v_t>0$ on $h_t(v)=v+f_t(w_t)/v$, where $H_T$ is non-increasing on the positive reals. Then by setting $\rho_{t,i}$ using Algorithm~\ref{alg:ef} with step-size $v_t/\sqrt m$ and initialization $w_t$ at each task $t$ we get task-averaged regret bounded by \begin{equation} \left(\frac{H_T(V)}T+\min\left\{\frac{F_T(w^\ast)}{VT},2\sqrt{F_T(w^\ast)/T}\right\}+2V\right)\sqrt m+g(m) \end{equation} for $w^\ast=\argmin_{w:C\mapsto\mathbb R_{\ge0}}\sum_{t=1}^Tf_t(w)$ the optimal initialization and $V$ the task-similarity~\eqref{eq:tasksim}. \end{Thm} This result is an analog of \cite[Theorem~3.1]{khodak2019adaptive} and follows by manipulating the definition of regret. 
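The $2V$ term above can be sanity-checked numerically: for a fixed sequence $f_1,\dots,f_T$, the best fixed $v$ in $h_t(v)=v+f_t/v$ is $\sqrt{\frac1T\sum_tf_t}$, at which the average value is exactly twice that square root (a toy check with made-up values, not part of the method itself):

```python
# Average of h_t(v) = v + f_t / v over t is minimized at v* = sqrt(mean(f)),
# where it equals 2 * sqrt(mean(f)); this matches the 2V term in the bound.
f = [0.5, 1.0, 2.0, 0.25]
V = (sum(f) / len(f)) ** 0.5

def avg_h(v):
    return sum(v + ft / v for ft in f) / len(f)

best = avg_h(V)   # equals 2 * V up to floating point
```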
It reduces the problem of obtaining a small task-averaged regret to solving two online learning problems, one to set the initialization and one to set the step-size. So long as both have sublinear regret, we will improve over single-task learning. In the next two sections we derive suitable procedures. \subsection{Meta-learning the initialization}\label{sec:meta} We now come to the most technically challenging component of our meta-learning procedure: learning the initialization. As discussed above, we can accomplish this by obtaining a no-regret procedure for the function sequence $$f_t(w)=-\log\frac{\int_{\mathcal B(\rho_t^\ast,m^{-\beta})}w(\rho)d\rho}{\int_Cw(\rho)d\rho}.$$ This is nontrivial as the optimization domain is a set of nonnegative functions, effectively measures on the domain $C$. To handle this, we first introduce some convenient notation and abstractions. At each task $t$ we are faced with some function $f_t$ associated with an unknown closed subset $C_t\subset C$ --- in particular $C_t=\mathcal B(\rho_t^\ast,m^{-\beta})$ --- with positive volume $\operatorname{vol}(C_t)>0$ that is revealed after choosing $w_t:C\mapsto\mathbb R_{\ge0}$. For each time $t$ define the discretization $$\mathcal D_t=\{D=\bigcap_{s\le t}C_s^{(\*c_{[s]})}:\*c\in\{0,1\}^t,\operatorname{vol}(D)>0\}$$ of $C$, where $C_t^{(0)}=C_t$ and $C_t^{(1)}=C\backslash C_t$. We will use elements of these discretizations to index nonnegative vectors in $\mathbb R_{\ge0}^{|\mathcal D_t|}$; specifically, for any measure $w:C\mapsto\mathbb R_{\ge0}$ let $\*w(t)\in\mathbb R_{\ge0}^{|\mathcal D_t|}$ denote the vector with entries $\*w(t)_{[D]}=\int_Dw(\rho)d\rho$ for $D\in\mathcal D_t$. Note that we will exclusively use $p,q,v,w$ for measures, with $v$ specifically referring to the uniform measure, i.e. $\*v(t)_{[D]}=\operatorname{vol}(D)$. For convenience, for all real vectors $\*x$ we will use $\*{\hat x}$ to denote $\*x/\|\*x\|_1$. 
Finally, we abuse notation and remove the parentheses to refer to those vectors associated with the final discretization, i.e. $\*v=\*v(T)$ and $\*w=\*w(T)$. Now that we have this notation we can turn back to the functions we are interested in: $f_t(w)=-\log\frac{\int_{C_t}w(\rho)d\rho}{\int_Cw(\rho)d\rho}$, where $C_t=\mathcal B(\rho_t^\ast,m^{-\beta})$. Observe that we can equivalently write this as $f_t(\*w)=-\log\langle\*w_t^\ast,\*{\hat w}\rangle$, where $\*w_{t[D]}^\ast=1_{D\subset C_t}$; this translates our online learning problem from the domain of measures on $C$ to the simplex on $|\mathcal D_T|$ elements. However, we cannot play in this domain explicitly as we do not have access to the final discretization $\mathcal D_T$, nor do we get access to $\*w_t^\ast$ after task $t$, except implicitly via $C_t$. In this section we design a method that implicitly runs an online convex optimization procedure over $\mathbb R_{\ge0}^{|\mathcal D_T|}$ while explicitly playing probability measures $w:C\mapsto\mathbb R_{\ge0}$. \begin{algorithm}[!t] \caption{Follow-the-Regularized-Leader (prescient form)} \label{alg:ftrl} \begin{algorithmic}[1] \STATE {\bfseries Input:} discretization $\mathcal D_T$ of $C$, mixture parameter $\gamma\in[0,1]$, step-size $\eta>0$ \STATE Initialize $\*w_1=\*{\hat v}$ \FOR{$t=1,2,\dots,T$} \STATE Play $\*w_t$. \STATE Suffer $f_t(\*w_t)=-\log\langle\*w_t^\ast,\*w_t\rangle$. \STATE Observe $f_t$. \STATE Update $\*w_{t+1}=\argmin_{\|\*w\|_1=1,\*w\ge\gamma\*{\hat v}}D_{KL}(\*w||\*{\hat v})+\eta\sum_{s\le t}f_s(\*w)$ \ENDFOR \end{algorithmic} \end{algorithm} As the functions $f_t$ are exp-concave, one might first consider applying a method attaining logarithmic regret on such losses \cite{hazan2007logarithmic,orabona2012beyond}; however, such algorithms have regret that depends linearly on the dimension, which in our case is poly$(T)$. 
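To make the discretization concrete: in one dimension, where each $C_t$ is an interval, the cells of $\mathcal D_t$ are simply the maximal pieces cut out by the interval endpoints, so membership in every $C_s$ is constant on each cell (a sketch; representing the $C_t$ as intervals is our simplifying assumption):

```python
def discretize(intervals, lo=0.0, hi=1.0):
    # Cells of the discretization D_t induced by intervals C_1..C_t on [lo, hi]:
    # every interval endpoint cuts the domain, leaving maximal cells on which
    # membership in each C_s is constant (positive-length cells only).
    cuts = sorted({lo, hi} | {a for a, b in intervals} | {b for a, b in intervals})
    cuts = [c for c in cuts if lo <= c <= hi]
    return [(a, b) for a, b in zip(cuts, cuts[1:]) if b > a]

cells = discretize([(0.2, 0.4), (0.3, 0.7)])
```

In one dimension the number of cells grows only linearly in $t$; in general it can be much larger, which is why the dimension of the implicit simplex is poly$(T)$.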
We thus turn to the follow-the-regularized-leader (FTRL) family of algorithms, which in the case of entropic regularization are well-known to have regret logarithmic in the dimension \cite{shalev-shwartz2011oco}. In Algorithm~\ref{alg:ftrl} we display the pseudo-code of a modification with regularizer $D_{KL}(\cdot||\*{\hat v})$, where recall $\*v$ is the vector of volumes of the discretization $\mathcal D_T$ of $C$, and we constrain the played distribution to have measure at least $\gamma\*{\hat v}_{[D]}$ over every set $D\in\mathcal D_T$. While Algorithm~\ref{alg:ftrl} explicitly requires knowing the discretization $\mathcal D_T$ of $C$ in advance, the following key lemma shows that we can run the procedure knowing only the discretization $\mathcal D_t$ after task $t$ by simply minimizing the same objective over probability distributions discretized on $\mathcal D_t$. This crucially depends on the re-scaling of the entropic regularizer by $\*{\hat v}$ (which notably corresponds to the uniform distribution over $C$) and the fact that $\*w_t^\ast\in\{0,1\}^{|\mathcal D_T|}$. \begin{Lem}\label{lem:equivalent} Let $w:C\mapsto\mathbb R_{\ge0}$ be the probability measure corresponding to the minimizer \begin{equation} \*w=\argmin_{\|\*q\|_1=1,\*q\ge\gamma\*{\hat v}}D_{KL}(\*q||\*{\hat v})-\eta\sum_{s\le t}\log\langle\*w_s^\ast,\*q\rangle \end{equation} and let $\tilde w:C\mapsto\mathbb R_{\ge0}$ be the probability measure corresponding to the minimizer \begin{equation} \tilde{\*w}(t)=\argmin_{\|\*q\|_1=1,\*q\ge\gamma\*{\hat v}(t)}D_{KL}(\*q||\*{\hat v}(t))-\eta\sum_{s\le t}\log\langle\*w_s^\ast(t),\*q\rangle \end{equation} Then $\*w=\tilde{\*w}$. \end{Lem} We can thus move on to proving a regret guarantee for Algorithm~\ref{alg:ftrl}. 
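Per Lemma~\ref{lem:equivalent}, the update can be computed on the current, known discretization. A minimal numerical sketch of one constrained update, reparametrizing $\*w=\gamma\*{\hat v}+(1-\gamma)\*u$ with $\*u$ on the simplex and running exponentiated gradient (the solver choice and toy discretization are ours, not part of the analysis):

```python
import numpy as np

def ftrl_update(ws_stars, vhat, gamma, eta, steps=2000, lr=0.1):
    # Minimize F(w) = KL(w || vhat) - eta * sum_s log<w*_s, w> over the set
    # {sum(w) = 1, w >= gamma * vhat}. Writing w = gamma*vhat + (1-gamma)*u
    # with u on the simplex makes the floor constraint automatic; we then run
    # exponentiated gradient on u (the objective is convex in w).
    u = vhat.copy()
    for _ in range(steps):
        w = gamma * vhat + (1 - gamma) * u
        grad_w = np.log(w / vhat) + 1 - eta * sum(ws / (ws @ w) for ws in ws_stars)
        u = u * np.exp(-lr * (1 - gamma) * grad_w)
        u /= u.sum()
    return gamma * vhat + (1 - gamma) * u

# four cells of equal volume; the task balls covered cells {0} and {0, 1}
vhat = np.array([0.25, 0.25, 0.25, 0.25])
ws_stars = [np.array([1.0, 0.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0, 0.0])]
w = ftrl_update(ws_stars, vhat, gamma=0.1, eta=0.5)
```

As expected, the update shifts mass toward cells hit by more task balls while keeping the $\gamma$-floor everywhere.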
This follows from Jensen's inequality together with standard results for FTRL once we show that the loss functions are $\frac1{\gamma\operatorname{vol}(C_t)}$-Lipschitz over the constrained domain, yielding the following guarantee for Algorithm~\ref{alg:ftrl}: \newpage \begin{Thm}\label{thm:frl} Algorithm~\ref{alg:ftrl} has regret bounded by \begin{equation} \frac{1-\gamma}\eta D_{KL}(\*w^\ast||\*{\hat v})+\frac\eta{\gamma^2}\sum_{t=1}^T\frac1{(\operatorname{vol}(C_t))^2}+\gamma\sum_{t=1}^T\log\frac1{\operatorname{vol}(C_t)} \end{equation} w.r.t. the optimum in hindsight $\*w^\ast\in\argmin_{\|\*w\|_1=1,\*w\ge\*0}\sum_{t=1}^Tf_t(\*w)$ of the functions $f_t$. Setting $\gamma^2=GB/\sqrt T$ and $\eta^2=\frac{B^2\gamma^2}{TG^2}$, where $B^2=D_{KL}(\*w^\ast||\*{\hat v})$ and $G^2=\frac1T\sum_{t=1}^T\frac1{(\operatorname{vol}(C_t))^2}$, yields sublinear regret $\tilde O(\sqrt {BG}T^\frac34)$. \end{Thm} \begin{proof} Algorithm~\ref{alg:ftrl} is standard FTRL with regularizer $\frac1\eta D_{KL}(\cdot||\*{\hat v})$, which has the same Hessian as the standard entropic regularizer over the simplex and is thus $\frac1\eta$-strongly-convex w.r.t. $\|\cdot\|_1$ \cite[Example~2.5]{shalev-shwartz2011oco}. 
Applying Jensen's inequality, the standard regret bound for FTRL \cite[Theorem~2.11]{shalev-shwartz2011oco} together with the Lipschitz guarantee of Claim~\ref{clm:overlip}, and Jensen's inequality again yields the result: \begin{align*} \sum_{t=1}^Tf_t(\*w_t)-f_t(\*w^\ast) &=\sum_{t=1}^Tf_t(\*w_t)-(1-\gamma)f_t(\*w^\ast)-\gamma f_t(\*{\hat v})+\gamma(f_t(\*{\hat v})-f_t(\*w^\ast))\\ &\le\sum_{t=1}^Tf_t(\*w_t)-f_t(\gamma\*{\hat v}+(1-\gamma)\*w^\ast)+\gamma\log\frac{\langle\*w_t^\ast,\*w^\ast\rangle}{\langle\*w_t^\ast,\*{\hat v}\rangle}\\ &\le\frac1\eta D_{KL}(\gamma\*{\hat v}+(1-\gamma)\*w^\ast||\*{\hat v})+\frac\eta{\gamma^2}\sum_{t=1}^T\frac1{(\operatorname{vol}(C_t))^2}+\gamma\sum_{t=1}^T\log\frac1{\operatorname{vol}(C_t)}\\ &\le\frac{1-\gamma}\eta D_{KL}(\*w^\ast||\*{\hat v})+\frac\eta{\gamma^2}\sum_{t=1}^T\frac1{(\operatorname{vol}(C_t))^2}+\gamma\sum_{t=1}^T\log\frac1{\operatorname{vol}(C_t)} \end{align*} \end{proof} Since the regret is sublinear in $T$, this result satisfies our requirement for attaining asymptotic improvement over single-task learning via Theorem~\ref{lem:aruba}. However, there are several aspects of this bound that warrant some discussion. The first is the rate of $T^\frac34$, which is less sublinear than the standard $\sqrt T$ and certainly the $\log T $ regret of exp-concave functions. However, the functions we face are (a) non-Lipschitz and (b) over a domain that has dimensionality $\Omega(T)$; both violate conditions for good rates in online convex optimization~\cite{hazan2007logarithmic,shalev-shwartz2011oco}, making our problem much more difficult. A more salient aspect is the dependence on $B^2=D_{KL}(\*w^\ast||\*{\hat v})$, effectively the negative entropy of the optimal initialization. This quantity is in-principle unbounded but is analogous to standard online convex optimization bounds that depend on the norm of the optimum, which in e.g. the Euclidean case are also unbounded. 
In our case, if the optimal distribution is highly concentrated on a very small subset of the space it will be difficult to compete with. Note that our setting of $\eta$ depends on knowing or guessing $B$; this is also standard but is certainly a target for future work to address. For example, past work on parameter-free algorithms has solutions for optimization over the simplex~\cite{orabona2016parameter}; however, it is unclear whether this is straightforward to do while preserving the property given by Lemma~\ref{lem:equivalent} allowing us to implicitly work with an unknown discretization. A more reasonable approach may be to compete only with smooth measures that only assign probability at most $\kappa\operatorname{vol}(D)$ to any subset $D\subset C$ for some constant $\kappa\ge1$; in this case we will simply have $B$ bounded by $\log\kappa$. A final issue is the dependence on $\sqrt G$, which is bounded by the reciprocal of the smallest volume $\operatorname{vol}(C_t)$, which in the dispersed case is roughly $O(m^{\beta d})$; this means that the task-averaged regret will have a term that, while decreasing as we see additional tasks, is {\em increasing} in the number of within-task iterations and the dispersion parameter, which is counter-intuitive. It also grows exponentially with the dimension. Note that in the common algorithm configuration setting of $\beta=1/2$ and $d=1$ this will simply mean that for each task we suffer an extra $o_T(1)$ loss at each within-task round, a quantity which vanishes asymptotically. \subsection{Meta-learning the step-size} In addition to learning the initialization, Theorem~\ref{lem:aruba} requires learning the task-similarity to set the within-task step-size $\lambda>0$. This involves optimizing functions of form $h_t(v)=v+f_t(w_t)/v$. 
Since we know that the measures $w_t$ are lower-bounded in terms of $\gamma$, we can apply a previous result \cite{khodak2019adaptive} that solves this by running the EWOO algorithm \cite{hazan2007logarithmic} on the modified sequence $v+\frac{f_t(w_t)+\varepsilon^2}v$: \begin{Cor}\label{cor:ewoo} For any $\varepsilon>0$, running the EWOO algorithm on the modified sequence $v+\frac{f_t(w)+\varepsilon^2}v$ over the domain $[\varepsilon,\sqrt{D^2-\log\gamma+\varepsilon^2}]$, where $D^2\ge\frac1T\sum_{t=1}^T\log\frac1{\operatorname{vol}(C_t)}$, attains regret \begin{equation} \min\left\{\frac{\varepsilon^2}{v^\ast},\varepsilon\right\}T+\frac {\sqrt{D^2-\log\gamma}}2\max\left\{\frac{D^2-\log\gamma}{\varepsilon^2},1\right\}(1+\log(T+1)) \end{equation} on the original sequence $h_t(v)=v+f_t(w)/v$ for all $v^\ast>0$. \end{Cor} Setting $\varepsilon=1/\sqrt[4]T$ gives a guarantee of form $\tilde O((\min\{1/v^\ast,\sqrt[4]T\})\sqrt T)$. Note this rate might be improvable by using the fact that $v$ is lower-bounded due to the $\gamma$-constraint; however, we do not focus on this since this component is not the dominant term in the regret. In fact, because of this we can adapt a related method that simply runs follow-the-leader (FTL) on the same modified sequence~\cite{khodak2019adaptive} without affecting the dominant terms in the regret: \begin{Cor}\label{cor:ftl} For any $\varepsilon>0$, running the FTL algorithm on the modified sequence $v+\frac{f_t(w)+\varepsilon^2}v$ over the domain $[\varepsilon,\sqrt{D^2-\log\gamma+\varepsilon^2}]$, where $D^2\ge\frac1T\sum_{t=1}^T\log\frac1{\operatorname{vol}(C_t)}$, attains regret \begin{equation} \min\left\{\frac{\varepsilon^2}{v^\ast},\varepsilon\right\}T+2\sqrt{D^2-\log\gamma}\max\left\{\frac{(D^2-\log\gamma)^\frac32}{\varepsilon^3},1\right\}(1+\log(T+1)) \end{equation} on the original sequence $h_t(v)=v+f_t(w)/v$ for all $v^\ast>0$. 
\end{Cor} Setting $\varepsilon=1/\sqrt[5]T$ gives a guarantee of form $\tilde O((\min\{1/v^\ast,\sqrt[5]T\})T^\frac35)$. The alternatives are described in pseudocode at the bottom of Algorithm~\ref{alg:meta}; while the guarantee of the FTL-based approach is worse, it is almost as simple to compute as the task-similarity and does not require integration, making it easier to implement. \subsection{Putting the two together} \begin{algorithm}[!t] \caption{ Meta-learning the parameters of the exponential forecaster (Algorithm~\ref{alg:ef}). Recall that $\*p(t)$ refers to the time-$t$ discretization of the measure $p:C\mapsto\mathbb R_{\ge0}$ (c.f. Section~\ref{sec:meta}). } \label{alg:meta} \begin{algorithmic}[1] \STATE {\bfseries Input:} domain $C\subset\mathbb R^d$, dispersion $\beta>0$, step-size $\eta>0$, constraint parameter $\gamma\in[0,1]$, offset parameter $\varepsilon>0$, domain parameter $D>0$. \STATE Initialize $w_1$ to the uniform measure on $C$ and set $\lambda_1=\frac{\varepsilon+\sqrt{D^2+\varepsilon^2-\log\gamma}}{2\sqrt m}$. \FOR{task $t=1,2,\dots,T$} \STATE Run Algorithm~\ref{alg:ef} with initialization $w_t$ and step-size $\lambda_t$ and obtain task-$t$ optimum $\rho_t^\ast\in C$. \STATE Set $w_t^\ast=1_{\mathcal B(\rho_t^\ast,m^{-\beta})}$ to be the function that is 1 in the $m^{-\beta}$-ball around $\rho_t^\ast$ and 0 elsewhere. \STATE Set $w_{t+1}$ to $\*w_{t+1}(t)=\argmin_{\|\*w\|_1=1,\*w\ge\gamma\*{\hat v}(t)}D_{KL}(\*w||\*{\hat v}(t))-\eta\sum_{s\le t}\log\langle\*w_s^\ast(t),\*w\rangle$. \IF{using EWOO} \STATE Define $\mu_t(x)=\exp\left(-\alpha\left(tx+\frac{t\varepsilon^2-\sum_{s\le t}\log\langle\*w_s^\ast(s),\*w_s(s)\rangle}x\right)\right)$ for $\alpha=\frac2D\min\left\{\frac{\varepsilon^2}{D^2},1\right\}$. \STATE Set $\lambda_{t+1}=\frac{\int_\varepsilon^{\sqrt{D^2+\varepsilon^2-\log\gamma}}x\mu_t(x)dx}{\sqrt m\int_\varepsilon^{\sqrt{D^2+\varepsilon^2-\log\gamma}}\mu_t(x)dx}$. 
\ELSE \STATE Set $\lambda_{t+1}=\sqrt{\frac{\sum_{s\le t}\varepsilon^2-\log\langle\*w_s^\ast(s),\*w_s(s)\rangle}{tm}}$. \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} Now that we have an algorithm for both the initialization and the step-size, we can combine the two in Algorithm~\ref{alg:meta} to meta-learn the parameter of the exponential forecaster. Then we can obtain a bound on the task-averaged regret from Theorem~\ref{lem:aruba} to attain our final result. \begin{Thm}\label{thm:tar} Define $B^2=D_{KL}(\*w^\ast||\*{\hat v})$, $G^2=\frac1T\sum_{t=1}^T\frac1{(\operatorname{vol}(C_t))^2}$, and $D^2\ge\frac1T\sum_{t=1}^T\log\frac1{\operatorname{vol}(C_t)}=O(\beta d\log m)$. Then Algorithm~\ref{alg:meta} with $\eta,\gamma$ set as in Theorem~\ref{thm:frl} and $\varepsilon=1/\sqrt[4]T$ (if using EWOO) or $1/\sqrt[5]T$ (otherwise) yields task-averaged regret \begin{equation} \tilde O\left(\min\left\{\frac{\sqrt{BG}}{V\sqrt[4]T},\frac{\sqrt[4]{BG}}{\sqrt[8]T}\right\}+2V\right)\sqrt m+g(m) \end{equation} Here $V$ is the task-similarity \eqref{eq:tasksim}. \end{Thm} So as in past work in meta-learning, this achieves the goal of adapting to the task-similarity by attaining asymptotic regret of $2V\sqrt m+O(m^{-\beta})$ on average, where here we substitute the dispersion term for $g$ and $V^2$ is the task-similarity encoding the average probability mass assigned to the different task balls by the optimal initialization distribution. We include the minimum of two rates in the bound, with the rate being $1/\sqrt[4]T$ if the task-similarity is a constant $\Theta_T(1)$ and $1/\sqrt[8]T$ if it is extremely small. As discussed above, this rate reflects the difficulty of our meta-problem, in which we are optimizing non-smooth functions over a space of distributions; in contrast, past meta-update procedures have taken advantage of nice properties of Bregman divergences to obtain faster rates \cite{khodak2019adaptive}. 
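The EWOO-based update in Algorithm~\ref{alg:meta} needs only one-dimensional numerical integration; a sketch (the Riemann-sum resolution and example inputs are our choices, with \texttt{neg\_log\_sum} standing for $-\sum_{s\le t}\log\langle\*w_s^\ast(s),\*w_s(s)\rangle$):

```python
import math

def ewoo_step_size(neg_log_sum, t, eps, D, gamma, m, alpha):
    # lambda_{t+1} = (int x mu(x) dx) / (sqrt(m) int mu(x) dx) on the interval
    # [eps, sqrt(D^2 + eps^2 - log(gamma))], where
    # mu(x) = exp(-alpha * (t*x + (t*eps^2 + neg_log_sum) / x)).
    hi = math.sqrt(D * D + eps * eps - math.log(gamma))
    N = 10000
    xs = [eps + (hi - eps) * i / N for i in range(N + 1)]
    mu = [math.exp(-alpha * (t * x + (t * eps ** 2 + neg_log_sum) / x)) for x in xs]
    return sum(x * u for x, u in zip(xs, mu)) / (math.sqrt(m) * sum(mu))

lam = ewoo_step_size(neg_log_sum=2.0, t=5, eps=0.2, D=1.0, gamma=0.1,
                     m=100, alpha=1.0)
```

Since the returned value is a weighted mean of the grid, the step-size always lands in $[\varepsilon/\sqrt m,\sqrt{D^2+\varepsilon^2-\log\gamma}/\sqrt m]$.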
\subsection{Related work}\label{sec:related} The success of meta-learning has led to significant theoretical effort to understand it. Most efforts studying initialization-based meta-learning focus on the convex Lipschitz setting \cite{denevi2019ltlsgd,khodak2019provable}; work studying inherently nonconvex modeling approaches instead usually studies multi-task representation learning~\cite{balcan2015lifelong,maurer2016mtl,du2021fewshot,tripuraneni2021provable} or target optimization, e.g. stationary point convergence \cite{fallah2020meta}. An exception is a study of linear models over Gaussian data showing that nonconvexity is critical to meta-learning an initialization that exploits low-rank task structure \cite{saunshi2020meta}. There is also work extending results from the neural tangent kernel literature to meta-learning \cite{zhou2021meta}, but in this case the objective becomes convex. On the other hand, we study initializations for learning a class of functions that can be highly non-convex and have numerous discontinuities. Theoretically, our work uses the Average Regret-Upper-Bound Analysis (ARUBA) strategy~\cite{khodak2019adaptive} for obtaining a meta-update procedure for initializing within-task algorithms, which has been applied elsewhere for privacy \cite{li2020dp} and federated learning \cite{khodak2021fedex}; the main technical advance in our work is in providing the guarantees for it in our setting, which is challenging due to the need to learn over a space of probability measures. Data-driven configuration is the selection of an algorithm from a parameterized family by learning over multiple problem instances \cite{gupta2017pac,balcan2017learning}. In other words, it is `hyperparameter tuning' with formal guarantees, and has applications to integer programming, clustering, and learning with limited labeled data \cite{balcan2018learning,balcan2019learning,balcan2021data}. 
In this work, we show how this general approach can be made even more effective by enabling it to adapt to task similarity. We also show applications of our results to robust meta-learning in the presence of outliers in the dataset \cite{pillutla2019robust,kong2020robust}. While previous work on robust online learning has considered adversaries with bounded perturbations \cite{agarwal2019online,resler2019adversarial}, our results allow potentially unbounded perturbations, provided the adversary uses a smooth distribution. That is, the adversarial attack can be thought of as a distribution of perturbations, similar to the smoothed analysis approach of \cite{spielman2004smoothed}. In the offline setting, a similar attack is studied in the context of deep network feature-space attacks by \cite{balcan2020power}. We also remark that our formulation has a poisoning aspect, since we do not observe the clean loss $l_h(x)$, which is of particular interest in federated learning \cite{bagdasaryan2020backdoor,tolpegin2020data}. Also, note that unlike the typical applications of data-driven design where optimization is over the dual loss function, i.e. loss as a function of the algorithm parameter for a fixed sample $x\in\mathcal X$, here we consider learning loss or confidence functions over the input space $\mathcal X$. \section{Robust online meta-learning} In online learning, we seek to minimize a sequence of loss functions, and are required to perform well relative to the optimal choice in hindsight. It is possible for the observed loss functions to be noisy on some inputs, either naturally or due to adversarial intent. We will now explore the conditions under which learning robust to such an adversarial influence (i.e. outlier injection) is possible; such outlier injection is particularly common in meta-learning with diverse sources. 
{\it Setup}: At round $i$, we play $x_i$, observe perturbed loss $\Tilde{l}_i : \mathcal X\rightarrow[0,1]$, which is set by the adversary by modifying the true loss $l_i:\mathcal X\rightarrow[0,1]$ using an {\it attack function} $a_i:\mathcal X\rightarrow[0,1]$ such that $\Tilde{l}_i=l_i+a_i$ and may be non-Lipschitz, and suffer perturbed loss $\Tilde{l}_i(x_i)$ and true loss $l_i(x_i)$. We seek to minimize regret relative to the best fixed action in hindsight, i.e. $$\Tilde{R}_m=\sum_{i=1}^m \Tilde{l}_i(x_i) - \min_{x\in\mathcal X}\sum_{i=1}^m \Tilde{l}_i(x)$$ for the perturbed loss and regret $$R_m=\sum_{i=1}^m l_i(x_i) - \min_{x\in\mathcal X}\sum_{i=1}^m l_i(x)$$ for the true loss. No regret can be achieved provided the adversary distribution is sufficiently smooth, i.e. satisfies $\beta$-dispersion for some $\beta>0$, as this corresponds to online optimization of the perturbed loss function. We can show this for both the perturbed and the true loss. The perturbed loss guarantee is immediate from standard results on online learning of piecewise Lipschitz functions \cite{balcan2018dispersion,balcan2020learning}. For the true loss, we can achieve no regret if the adversary perturbation $a_i$ is limited to small balls and the centers of the balls are dispersed, which we capture using the following definition. \begin{Def}[{$\delta$-bounded, $\beta_a$-dispersed attack}] An attack function $a_i$ is $\delta$-bounded if there exists a ball $\mathcal B(x_a,\delta)$ of radius $\delta$ such that $a_i(x)=0$ for each $x\in\mathcal X\setminus \mathcal B(x_a,\delta)$. $x_a$ is called a {\it center} $c_{a_i}$ for attack $a_i$. A sequence of attack functions $a_1,\dots,a_m$ is said to be $\beta_a$-dispersed if the positions of attack centers $x_a$ are dispersed, i.e. for all $m$ and for all $\epsilon\ge m^{-\beta_a}$, $$\mathbb E\left[ \max_{x,x'\in\mathcal X,x\in\mathcal B(x',\epsilon)}\big\lvert \{ i\in[m] \mid x=c_{a_i}\} \big\rvert \right] \le \Tilde{O}(\epsilon m).$$ 
\end{Def} \begin{Thm}\label{thm:robustness single task} Given a sequence of $\beta$-dispersed adversarially perturbed losses $\Tilde{l}_i=l_i+a_i$, where $\Tilde{l}_i,l_i,a_i$ are piecewise $L$-Lipschitz functions $ \mathcal X\rightarrow[0,1]$ for $i=1,\dots,m$ and $\mathcal X\subset\mathbb R^d$, the exponential forecaster algorithm has $$\mathbb E[\Tilde{R}_m]=\Tilde{O}(m\lambda +\frac{\log (1/Z)}{\lambda}+(L+1)m^{1-\beta})$$ (with $Z$ as in Theorem \ref{thm:exp-forc-meta}). If in addition we have that $a_i$ is an $m^{-\beta_a}$-bounded, $\beta_a$-dispersed attack, then $$\mathbb E[R_m]=\Tilde{O}(m\lambda +\frac{\log (1/Z)}{\lambda}+(L+1)m^{1-\min\{\beta,\beta_a\}}).$$ \end{Thm} Together with Theorem \ref{thm:tar}, this implies no-regret meta-learning in the presence of dispersed adversaries, in particular the occurrence of unreliable data in small dispersed parts of the domain. We also show a lower bound below which establishes that our upper bounds are essentially optimal in the attack dispersion. \begin{Thm}\label{thm:robustness lower bound} There exist sequences of piecewise $L$-Lipschitz functions $\Tilde{l}_i,l_i,a_i:[0,1]\rightarrow[0,1]$ for $i=1,\dots,m$ such that for any online algorithm \begin{enumerate}\itemsep0em \item $\Tilde{l}_i$ is $\beta$-dispersed and $\mathbb E[\Tilde{R}_m]=\Omega(m^{1-\beta})$, \item $\Tilde{l}_i$ is $\beta$-dispersed, $a_i$ is $m^{-\beta}$-bounded, $\beta_a$-dispersed and $\mathbb E[R_m]=\Omega(m^{1-\min\{\beta,\beta_a\}})$. \end{enumerate} \end{Thm}
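The regret definitions in the setup above can be made concrete in a few lines. This sketch is our illustration (names are not from the paper), and the grid minimisation is an illustrative stand-in for the exact minimum over $\mathcal X$.

```python
def regret(losses, plays, grid):
    """R_m = sum_i l_i(x_i) - min_x sum_i l_i(x); the minimum is taken
    over a finite grid as a stand-in for the exact argmin over X."""
    incurred = sum(l(x) for l, x in zip(losses, plays))
    best = min(sum(l(x) for l in losses) for x in grid)
    return incurred - best

def perturb(l, a):
    """Perturbed loss l~ = l + a; the attack a is nonzero only on a small ball."""
    return lambda x: l(x) + a(x)
```

Applying `regret` to the perturbed losses `perturb(l_i, a_i)` gives $\Tilde R_m$, and to the clean `l_i` gives $R_m$.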
\section{Introduction} \label{sec:intro} \setcounter{footnote}{0} Understanding bar formation and the secular effects that bars have on the stellar component is becoming central to our understanding of galaxy formation and evolution. Once a bar forms it can change the scale length of the stellar component via scattering/mixing of stars in the radial direction and create an extended stellar distribution \citep{Roskar2008}. Bars can move a galaxy between morphological classes through the secular formation of pseudo-bulges \citep{Athanassoula2002, Debattista2004} and they can drive gas to the central black hole, fuelling AGN (Shlosman et al. 1989). Numerical simulations have shown that bars naturally arise from the secular evolution of discs \citep{Toomre1964, Ostriker1973, Fall1980}. Bars can also be triggered by dynamical interactions in the field \citep{Gerin1990, Barnes1992, RomanoDiaz2008, Dubinski2009}. It has been shown that close galaxy companions are associated with bar formation, but primarily for early Hubble types \citep{Elmegreen1990}. In galaxy clusters, gravitational encounters (harassment) can drive a morphological transformation from late-type disks to dwarf spheroidals (dSphs). In this scenario, encounters create a ``naked'' stellar bar which is subsequently heated, causing the remnants to become more spherical with time \citep{Moore1996}. Is environment the key factor in determining why two similar galaxies may or may not have a bar, or is the existence of a bar related to the initial conditions of galaxy formation? The stability (or instability) of disks to bar formation may also depend on the baryon fraction, and in particular the mass of stars and gas in the disk. This varies across the Hubble sequence and depends strongly on halo mass \citep{vandenBosch2000, Courteau2003, McGaugh2005}. This all suggests that in addition to local density, morphology and halo mass are also important principal parameters to investigate. 
Another key observational result is the fact that the bar fraction does not change significantly with redshift (see \citealt{Elmegreen2004, Marinova2007}, but also \citealt{Sheth2008} for a different result); however, most disk galaxies are not within dense environments, so it would be difficult to disentangle the effects of environment, especially at higher redshifts. Several studies have shown that there is no evidence for a dependence of bar frequency on galaxy environment \citep{VanDenBergh2002}; the same is true even if galaxies of different morphological type are considered independently \citep{Aguerri2009}. \cite{Li2009} came to the same conclusion by analyzing the clustering properties of barred and unbarred galaxies of similar stellar mass and finding them indistinguishable over all the scales probed (from $\sim$20 kpc to 30 Mpc). More recently, the Coma cluster was studied by \cite{MendezAbreu2010}: they find that the bar fraction does not vary significantly even when going from the center to the cluster outskirts. However, the Coma cluster is such an extreme environment that most of its apparent spiral galaxy population may be field galaxies in projection. In the light of these observational results and motivation from numerical simulation studies, we aim at measuring the bar fraction (the number of barred discs over the total number of discs) as a function of environment and disc morphology, at $z\sim0$ in two carefully selected samples representative of a low-density environment (the isolated galaxies from the AMIGA sample) and of a moderately dense environment (galaxies in the Virgo cluster). 
To achieve this goal it is important to use homogeneous classifications since, as we have shown in Giordano et al. (2010; Paper~I hereafter), the bar fraction is very stable against sample selection, but some (possibly spurious) differences can arise if the comparison is based on samples classified using different methods (for example visual classification versus automated profile fitting). In particular, the way the disc population is identified and isolated plays a crucial role, since, if no detailed morphological information is available, discs can easily be miscounted (for example, when applying only color and/or magnitude cuts). In order to address this, we use data from the UKIDSS Large Area Survey \citep{Lawrence2007} and from SDSS DR7 \citep{Abazajian2009}, with the great advantage of combining optical \texttt{rgb} images with near-infrared (H-band) imaging with excellent resolution for local universe studies, which allows us to visually inspect the images to provide detailed morphological classifications. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{FIGURE_01.pdf} \caption{H-band luminosity distribution of the {\sc Field}\ and {\sc Virgo}\ moderately inclined (axis ratio $> 0.4$), morphologically selected discs (with Hubble type ranging from S0 to Sm). Shaded histograms represent the barred discs.} \label{FIG:1} \end{figure} The outline of the paper is the following: in \S \ref{sec.data} we present the data that we are using, their classification and the selection of the samples based on local density estimation. The results on the bar fraction in the different cases are presented in \S \ref{sec.results} and discussed in \S \ref{sec.discussion}. \section{Data} \label{sec.data} \subsection{{\sc Virgo}\ sample} In Paper~I we presented a thorough study of the barred galaxies in the Virgo Cluster, from which we adopt all the classified galaxies with a measured H-band magnitude from 2MASS. 
The {\sc Virgo}\ disk sample is composed of moderately inclined (axis ratio larger than 0.4) members with UKIDSS near-IR imaging of Hubble type between S0 and Sm, spanning an H-band magnitude (stellar mass) range of -17 to -25 mag ($10^8$ to $10^{12}$ M$_{\odot}$). In the following analysis, we use the H-band magnitudes from Paper~I to compute stellar masses assuming a flat $(B-H)$ color with $\Upsilon_{H,*}=1$. The local galaxy density for members is determined via the $\rho_5$ proxy \citep{Baldry2006}, using the positions and magnitudes from the Virgo Cluster Catalog \citep{Binggeli1984_P1}. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{FIGURE_02.pdf} \caption{Bar fraction as a function of local density measured via the $\rho_5$ proxy of the {\sc NA10}\ (grey diamonds), {\sc Field}\ (blue square) and {\sc Virgo}\ (red circle) samples, with the {\sc Field}\ and {\sc Virgo}\ samples bracketing the extremes in $\rho_5$ of the {\sc NA10}\ density distribution. Averaged over all morphological types, the bar fraction is constant against local density.} \label{FIG:2} \end{figure} \subsection{{\sc Field}\ sample} To provide a robust comparison to the {\sc Virgo}\ sample, we select a true {\sc Field}\ sample using the AMIGA (Analysis of the interstellar Medium of Isolated GAlaxies) project \citep{Verdes-Montenegro2005_A01}. The AMIGA catalogue is based on the KIG catalog \citep{Karachentseva1973} of isolated galaxies ($z \lesssim 0.1$). The KIG catalog is composed of 1050 galaxies with apparent blue magnitudes brighter than 15.7 mag; these isolated galaxies are selected to have no neighbor of comparable size within twenty galactic diameters. The KIG catalog has been used by multiple studies to investigate the effects of under-dense environment on galaxy properties \citep{Adams1980, Haynes1980}, and the AMIGA project quantified the isolation of KIG galaxies, identifying their sample of 791 genuinely isolated galaxies \citep{Verley2007_A05}. 
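The $\rho_5$ proxy used above is, in the form of \citealt{Baldry2006}, a projected surface density to the 5th nearest neighbour, $\Sigma_5=5/(\pi d_5^2)$. A minimal sketch (our illustration; flat-sky positions in Mpc as a simplification of the full angular-separation calculation):

```python
import math

def sigma5(target, neighbors):
    """Projected density to the 5th nearest neighbour: Sigma_5 = 5/(pi d_5^2).

    target and neighbors are (x, y) positions on a flat sky in Mpc --
    an illustrative simplification, not the survey pipeline."""
    dists = sorted(math.dist(target, p) for p in neighbors)
    d5 = dists[4]  # distance to the 5th nearest neighbour
    return 5.0 / (math.pi * d5 ** 2)
```

The proxy is insensitive to the four closest companions and so traces the local environment on the scale of the fifth neighbour.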
The AMIGA project has also compiled multiwavelength coverage of this statistically significant sample of the most isolated galaxies in the local universe; the dataset includes optical photometry and morphologies for a redshift-complete subsample of 956 galaxies \citep{Sulentic2006_A02}, and these data are publicly released under a VO interface at \texttt{http://amiga.iaa.es/}. By cross-matching the AMIGA catalogue with the 2MASS and SDSS databases, we identify 563 galaxies with both H-band and \texttt{rgb} images. As in our {\sc Virgo}\ sample, we select only moderately inclined (axis ratio $>0.4$) disk galaxies with Hubble type of $S0$ to $Sm$. The stellar masses of the resulting 390 {\sc Field}\ disk galaxies are determined using the total H-band magnitudes from 2MASS and assuming the same flat $(B-H)$ color and $\Upsilon_{H,*}=1$ as in the {\sc Virgo}\ sample. The local density for each galaxy is determined from its 5th nearest neighbors ($\rho_5$, see \citealt{Baldry2006}) as defined by the AMIGA catalogue of neighbors, constructed down to a magnitude $m\sim$17.5, lying within 0.5 Mpc around the KIG galaxies \citep{Verley2007_A04}. \subsection{Identifying Barred Disks} \label{subsec:bars} The UKIDSS Large Area Survey is an ongoing survey to image 4000 deg$^2$ at high Galactic latitudes in the YJHK filters to a depth in H of 18.8 mag; it has a pixel scale of 0.4$^{\prime \prime}$/pixel (like SDSS data) and an average seeing of 0.8$^{\prime \prime}$. Our galaxies span redshifts out to $z \sim$0.03, where the UKIDSS imaging has a physical resolution of $\sim$400 pc/pixel. As outlined in Paper~I, we use the H-band imaging to visually classify the disk galaxies into one of three categories: ``barred'', ``non-barred'', or ``uncertain''. 
All the galaxies in the {\sc Virgo}\ sample have H-band imaging from UKIDSS, which is also available for 172 galaxies in the {\sc Field}\ sample; for the rest we must rely on SDSS $z^{\prime}$ and \texttt{rgb} imaging. However, for all the galaxies with both H-band and $z^{\prime}$ imaging, we find that our bar classifications are essentially identical. Both the {\sc Virgo}\ and {\sc Field}\ catalogues, including \texttt{rgb} and H-band thumbnails, are available online\footnote{http://www.itp.uzh.ch/~giordano/mwpbg-v2.html}. \begin{figure*} \centering \includegraphics[width=0.84\textwidth]{FIGURE_03.pdf} \caption{Bar fraction vs. stellar mass distribution of the {\sc Field}\ and {\sc Virgo}\ morphologically selected disks. The red circles represent the {\sc Virgo}\ sample, the blue squares the {\sc Field}\ sample. In each panel, the average value over the morphological type is represented by the dashed/dotted line, for the {\sc Virgo}\ (red) and the {\sc Field}\ (blue) respectively. The $Sa-Sb$ disks have the highest bar fraction, regardless of environment, suggesting that baryon fraction is the main driver for bar instabilities.} \label{FIG:3} \end{figure*} \vskip 1.2cm \section{Results and Discussion} \label{sec.results} \subsection{Barred Fraction vs. Environment} In Figure \ref{FIG:1} we show the H-band luminosity distribution for moderately inclined disk galaxies in our {\sc Field}\ and {\sc Virgo}\ samples. The barred fraction averaged over the disk population in the {\sc Field}\ is $\sim$34\% (133/390) and in {\sc Virgo}\ is $\sim$28\% (90/332). We find that even with bar classifications based on high-resolution near-IR imaging, the barred fraction does not vary with environment when considering the disk population as a whole. 
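The quoted fractions follow directly from the counts above; the simple binomial error estimate in the sketch below is our addition for illustration and is not from the paper.

```python
import math

def bar_fraction(n_barred, n_total):
    """Bar fraction f = N_barred / N_total with a simple binomial error
    sqrt(f (1 - f) / N); the error formula is added for illustration."""
    f = n_barred / n_total
    return f, math.sqrt(f * (1.0 - f) / n_total)

# e.g. bar_fraction(133, 390) for the Field counts quoted above
```

With these sample sizes the statistical uncertainty on each fraction is a few per cent, so the Field and Virgo fractions are consistent within the errors.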
Although our results are consistent with earlier work based on optical imaging (see for example \citealt{VanDenBergh2002,Li2009}), the constancy of the barred fraction conflicts with expectations from current galaxy formation models, since strong interactions can trigger bar instabilities \citep{Berentzen2004}. To further test the robustness of our result, we incorporate the \cite{Nair2010} optically-selected sample of about 14,000 galaxies from the SDSS DR4; this sample includes nearly all spectroscopically-targeted galaxies in the redshift range $0.01 < z < 0.1$ to an apparent extinction-corrected limit of $g<16$ mag. In addition to visually classified Hubble types, the {\sc NA10}\ catalogue contains the existence of bars, stellar masses computed according to \cite{Kauffmann2003a}, and an estimate of the local density computed using the 5th nearest neighbors ($\rho_5$) according to \cite{Baldry2006}. To ensure that we can compare the (optical) {\sc NA10}\ sample directly to our (near-IR) results, we apply the same disk selection criteria as in our {\sc Field}\ and {\sc Virgo}\ samples and consider only galaxies at $z \leq 0.03$. Figure \ref{FIG:2} shows the barred fraction in our {\sc Field}\ (blue) and {\sc Virgo}\ (red) populations as well as the {\sc NA10}\ sample (grey) as a function of local galaxy density ($\rho_5$); note how effectively our {\sc Field}\ and {\sc Virgo}\ samples bracket the extremes in $\rho_5$. We find the barred disk fraction is surprisingly resilient ($\sim30-35$\%): the barred fraction does not vary as the environment changes from isolated field galaxies to cluster cores. \subsection{Barred Fraction vs. 
Disk Morphology} To investigate how the barred fraction varies with disk morphology, we divide both our {\sc Field}\ and {\sc Virgo}\ samples into three classes: 1) lenticulars (featureless discs, corresponding to Hubble-types $S0-S0a$); 2) early-type spirals (bulge-dominated discs, corresponding to Hubble-types $Sa-Sb$); and 3) late-type spirals (disc-dominated or bulge-less discs, corresponding to Hubble-types $Sbc-Sm$). In Figure \ref{FIG:3} we show the barred fraction as a function of stellar mass for the three disk classes, where the blue squares represent the {\sc Field}\ and the red circles the {\sc Virgo}\ members. Each panel also includes the average barred fraction for the three disk classes in the {\sc Field}\ (dotted line) and in {\sc Virgo}\ (dashed line). The differences in the relative number of galaxies in each disk class are due to the morphology-density relation, i.e. the fraction of lenticulars in the {\sc Field}\ ($<10$\%) is lower than in {\sc Virgo}\ ($\sim30$\%; Paper~I). We find that the barred fraction for early-type spirals is systematically higher than in late-type spirals {\it regardless of environment}: $45-50$\% for $Sa-Sb$ vs. $<25$\% for $Sbc-Sm$. The barred fraction in lenticulars ($S0-S0a$) is also lower in both environments. \section{Summary} \label{sec.discussion} We present the first comprehensive study of barred disks as a function of environment that uses NIR and \texttt{rgb} imaging to resolve bars; the advantage of using near-infrared imaging from UKIDSS is that bar classifications are less affected by dust and bright star-forming regions. We expand on our study of bars in {\sc Virgo}\ by building a {\sc Field}\ sample using the KIG catalog of isolated galaxies. Our {\sc Field}\ and {\sc Virgo}\ disk populations are at $z<0.03$, span a range in stellar mass from $\sim$10$^8$ to $\sim$10$^{12}$ M$_{\odot}$ and Hubble type ($S0-Sm$), encompass a wide range in local densities and are analyzed in exactly the same manner. 
We find that the barred disk fraction is surprisingly constant at $30-35$\% in both the {\sc Field}\ and {\sc Virgo}\ samples, i.e. the barred fraction for the disk population as a whole does not depend on environment. We test the robustness of our result by analyzing the NA10 optically-selected sample of nearby galaxies in the same manner, and we again find a constant barred fraction across the full range of local galaxy density. This implies that disks become barred prior to the late-time assembly of galaxy clusters, which is consistent with observational evidence that the bar fraction does not evolve strongly with redshift. The barred fraction is highest for early-type spirals ($Sa-Sb$) regardless of environment: these galaxies are nearly twice as likely to be barred as late-type spirals ($Sbc-Sm$). If a late-type spiral forms a bar, then it may also form a pseudo-bulge via a buckling instability and its morphological class will change. Indeed, a consensus is forming that our own Galaxy has evolved across the Hubble sequence in this fashion \citep{Oski2010}. If this is a common phenomenon, as numerical simulations indicate \citep{Debattista2006}, then we naturally expect the bar fraction to be higher in early-type spirals, which have a higher baryon fraction. This implies that a significant fraction of the bulges of early-type galaxies are pseudo-bulges. The morphology-density relation \citep{Dressler1980} can be explained by the notion that the cluster environment is creating S0's from the infalling disc population. Indeed, \cite{Graham2008} find that the bulge-to-disc ratios of S0's are similar to those of early-type galaxies. One might therefore expect the bar fraction to be the same in S0s and early-type spirals; however, averaged over the entire population it is significantly lower (25\% versus 50\%, respectively). 
We note that the bar fraction in S0's and early-type discs with stellar masses above $10^{10}M_\odot$ is similar ($\sim$ 50\%), but this drops to less than 10\% in the least massive S0's. This supports a harassment scenario for the formation of the S0 population. Gravitational encounters between galaxies and with the global cluster potential thicken the disks of massive early-type spirals by an amount that is sufficient to suppress spiral patterns (Moore et al. 1999). For lower-mass disks, the heating is more effective and will eventually erase the signatures of a preexisting bar. Numerical simulations also indicate that infalling late-type disks will undergo an environmentally driven bar instability; however, this phase is short-lived, with the bar experiencing subsequent heating until the remnant becomes a dE/dSph.
\section{Introduction}\label{sec:intro} Over the last decade, due to advances\cite{Bloch-08,ReichelVuletic11,Langen-15} in the control of ultra-cold atomic systems the experimental realisation of sudden quantum quenches\cite{CalabreseCardy06} has become possible. Hereby a quantum system is prepared in an initial state, say the ground state of a Hamiltonian $H_0$, and then its time evolution after switching to another Hamiltonian $H$, eg, obtained by suddenly changing one of the system parameters of $H_0$, is studied. Generically the initial state will have a highly non-trivial representation in terms of the eigenstates of $H$, thus resulting in a complicated relaxation of observables after the quench. These experimental advances have triggered tremendous theoretical efforts\cite{Polkovnikov-11,Eisert-15,GogolinEisert16} to understand the quench dynamics of a vast class of systems covering all spatial dimensions. Of particular interest have been one-dimensional systems, due to the availability of powerful numerical and analytical tools as well as their relation to topics like integrability. Arguably the most generic one-dimensional system is the Luttinger liquid\cite{Giamarchi04,Cazalilla-11,Schoenhammer13}, which is known to describe the low-energy properties of gapless systems like quantum wires, spin chains or bosonic atoms in one-dimensional optical lattices. The importance of Luttinger liquids motivated the detailed investigation\cite{Cazalilla06,Perfetto06,DeChiara-06,Barmettler-09,Uhrig09,IucciCazalilla09,Barmettler-10,MitraGiamarchi11,Mitra12,KRSM12,MitraGiamarchi12,RSM12,DallaTorre-13,Mitra13,NessiIucci13,KennesMeden13,NgoDinh-13,HamerlaUhrig13,Coira-13,Tavora-14,Kennes-14,Protopopov-15,Collura-15} of their relaxation dynamics after sudden quantum quenches. 
While the vast majority of previous works focused on sudden quantum quenches, we will here consider more general quenches of finite length $\tau$ over which the system parameters vary (see Fig.~\ref{fig:nonsudden} for a sketch). Physically the finite quench time $\tau$ (depending on the precise form of the quench protocol one may even introduce several time scales $\tau_n$) introduces an additional energy scale $\Delta_\text{quench}\sim 1/\tau$, which is obviously trivial in the sudden and adiabatic limit. This newly generated energy scale is directly related to the quench protocol, ie, the switching process, and thus can be tuned. In particular, it can be made comparable to the other energy scales in the system like the band width, excitations gaps or relaxation rates. The interplay of these different energy scales originating in the properties of the post-quench Hamiltonian, the initial state and the quench protocol opens the possibility to study emergent quantum states beyond the ones accessible via sudden quench protocols. An example of such an emergent state is generated by the periodic quench discussed below. Several aspects of finite-time quenches and the interpolation between the sudden and adiabatic limit have been studied.\cite{PolkovnikovGritsev08,EcksteinKollar10,MoeckelKehrein10,Bernier-11,TomarasKehrein11,Bernier-12,Sandri-12,HaqueZimmer13,Das-15} For the Luttinger liquid Dora et al.\cite{Dora-11} first considered linear quench protocols. They obtained perturbative results for the total energy and fermionic chiral Green function, which were later extended to spin-spin correlation functions and compared with numerical simulations of the time evolution during the quench in the XXZ Heisenberg chain.\cite{Pollmann-13} Bernier et al.\cite{Bernier-14} went beyond the perturbative treatment by deriving exact results for the time evolution during linear quenches in terms of Bessel functions (see Sec.~\ref{sec:linear}). 
This enabled them to analyse the properties of the bosonic Green function in great detail, including the derivation of power laws governing the propagation of the light cone during the quench. The obtained results were further confirmed by numerical simulations for the Bose--Hubbard model. Further aspects that were investigated include the excitation energy, the work statistics, finite-temperature initial states, the Loschmidt echo and the diagonal ensemble reached at late times.\cite{DziarmagaTylutki11,PerfettoStefanucci11,Dora-12,Dora-13,BacsiDora13,Sachdeva-14,Porta-16} In this article we aim at obtaining a complete understanding of finite-time quenches in Luttinger liquids. To this end we consider the time evolution during and after the quench and derive exact, analytical results to go beyond the perturbative regime. This allows us to study the interplay between the quench time $\tau$ and other energy scales in the system, resulting for example in a non-trivial dependence of the total energy on the quench time (see Fig.~\ref{fig:Etot}). Both the fermionic and bosonic Green function exhibit a clear light-cone effect after the quench, which is due to the propagation of entangled pairs of quasiparticles.\cite{CalabreseCardy06} However, as compared to the sudden quench the light cone lags behind (see Fig.~\ref{fig:GFcontour1}), which can be associated with a combination of two effects: First, during the quench the quasiparticles propagate at the instantaneous velocity, which is generically smaller than the velocity after the quench. Second, the creation of the quasiparticles is spread over the whole quench duration, while in the sudden case all quasiparticles are created at $t=0$. For short to moderate lengths of the quench we obtain an analytic result for the observed lag, linking it to the integrated change in the coupling during the quench [see Eq.~\eqref{eq:lagapprox}]. The outline of this paper is as follows. 
In Sec.~\ref{sec:model} we define the Tomonaga--Luttinger model (TLM) and discuss its relation to microscopic fermionic and bosonic systems. In Sec.~\ref{sec:quenches} we present the general approach to the problem of time-dependent interaction quenches in the TLM, and derive some universal properties of the solutions. In Sec.~\ref{sec:exact} we present exact analytic results for several quench protocols including the linear ramp, the smooth cosine quench and periodic quenches with arbitrary number of oscillations. In Sec.~\ref{sec:results} we analyse the behaviour of the total and kinetic energies as well as the fermionic and bosonic Green functions, both during and after the quench. We conclude with a brief discussion of our results in Sec.~\ref{sec:conclusion}. Some technical aspects are presented in the appendices. \section{Tomonaga--Luttinger model}\label{sec:model} \subsection{The model}\label{sec:b-model} In this article we consider the time-dependent Tomonaga--Luttinger model (TLM)\cite{Giamarchi04,Cazalilla-11,Schoenhammer13} defined by the Hamiltonian \begin{equation} \begin{split} H(t) = \sum_{n >0} q_n &\left[\left( v_\text{F} + \frac{g_4(q_n,t)}{2 \pi} \right) \left( b_n^\dag b_n^{} + b_{-n}^\dag b_{-n}^{} \right) \right. \\ &\qquad\left. + \frac{g_2(q_n,t)}{2 \pi} \left( b_n^\dag b_{-n}^\dag + b_{-n}^{} b_{n}^{} \right) \right] , \label{eq:TLM} \end{split} \end{equation} where $q_n=2 \pi n / L$, $n \in {\mathbb Z}$, and $L$ denote the momenta and system length, respectively, and $v_\text{F}$ is the Fermi velocity. The operators $b_n^\dagger$ and $b_n$ create and annihilate bosonic modes at momentum $q_n$ and satisfy the standard commutation relations $\comm{b_m}{b_n^\dagger}=\delta_{mn}$. Here and in the following we denote quantities taken at momenta $q_n$ by the subindex $n$. 
Before we proceed with the analysis of the dynamics in the TLM, we briefly recall the properties of the time-independent system given by \eqref{eq:TLM} with coupling functions $g_2(q_n)$ and $g_4(q_n)$ constant in time. Then the Hamiltonian can be diagonalised to $H = \sum_{n \neq 0} \epsilon(q_n) \,\alpha_n^\dagger \alpha_n + E_{\rm gs}$ by introducing new modes $\alpha_n = c(q_n) b_n + s(q_n) b^\dagger_{-n}$ with \begin{eqnarray} s(q)^2 &=& \frac{1}{2} \left[ \frac{1+ \hat{g}_4(q)}{W(q)} -1 \right] = c(q)^2 -1,\label{eq:defsc}\\ \epsilon(q)&=& v_\text{F} |q| \, W(q) = v_\text{F} |q|\sqrt{[1+ \hat{g}_4(q)]^2 - \hat{g}_2(q)^2},\label{eq:epsilon}\quad \end{eqnarray} where $\hat{g}_{2/4}(q)= g_{2/4}(q)/(2 \pi v_{\rm F})$ denote dimensionless coupling functions and $E_{\rm gs}=v_\text{F}\sum_{n>0}q_n[W(q_n)-\hat{g}_4(q_n)-1]$ is the ground-state energy. In this article we assume that $g_2(q)$ and $g_4(q)$ depend on the momentum in the dimensionless combination $q/q_\text{c}$ and fall off to zero at $q/q_\text{c}\sim 1$. Thus the scale $q_\text{c}$ provides an ultra-violet cutoff for the theory. Furthermore, we require the limits $\lim_{q\to 0} g_{2/4}(q)$ to be smooth, corresponding to systems with interactions of finite range in real space. Apart from this the momentum dependence is kept arbitrary. In fact, it can be shown\cite{Solyom79,Meden99} that in equilibrium the momentum dependence of $g_2(q)$ and $g_4(q)$ is irrelevant in a renormalisation-group sense, ie, the behaviour at low energies and long wave lengths is governed solely by their values at $q=0$ through the Luttinger-liquid parameter and the renormalised velocity given by \begin{equation} K= \sqrt{\frac{1+\hat g_4(0) -\hat g_2(0)}{1+\hat g_4(0) +\hat g_2(0)}}, \quad v=\frac{\text{d}\epsilon}{\text{d} q}\bigg|_{q=0}=v_{\rm F} W(0). \label{eq:LLparameter} \end{equation} For non-interacting systems, $g_2(q)=g_4(q)=0$, this simplifies to $K=1$ and $v=v_\text{F}$. 
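As a quick numerical check of Eq.~\eqref{eq:LLparameter}, the sketch below (our illustration; function names are not from the paper) evaluates $K$ and $v$ from the dimensionless couplings at $q=0$.

```python
import math

def luttinger_params(g2_hat, g4_hat, v_F=1.0):
    """K and v from the dimensionless couplings at q = 0:
    W(0) = sqrt((1 + g4)^2 - g2^2),
    K    = sqrt((1 + g4 - g2) / (1 + g4 + g2)),
    v    = v_F * W(0)."""
    W0 = math.sqrt((1.0 + g4_hat) ** 2 - g2_hat ** 2)
    K = math.sqrt((1.0 + g4_hat - g2_hat) / (1.0 + g4_hat + g2_hat))
    return K, v_F * W0
```

For $\hat g_2(0)=\hat g_4(0)$ the two factors combine to give $vK=v_\text{F}$, the combination fixed by Galilean invariance; for vanishing couplings one recovers $K=1$, $v=v_\text{F}$.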
In general, if the system possesses Galilean invariance the product $vK$ is independent of the interaction parameters,\cite{Haldane81prl,Schoenhammer13} implying in turn $g_2(0)=g_4(0)$. Throughout this article we focus on universal quantities in the sense that they only depend on $g_2(0)$ and $g_4(0)$ and thus the Luttinger parameter $K$ and the renormalised velocity $v$ given in \eqref{eq:LLparameter}. Unless stated otherwise, the numerical results shown in the plots, eg, Fig.~\ref{fig:GF}, were obtained for the specific choice $g_2(q,\tau)=g_4(q,\tau)=g_0\,\exp[-(q/q_\text{c})^2/2]$. \subsection{Relation to fermionic systems}\label{sec:fermionicmodel} As is well known, the TLM describes the low-energy physics of one-dimensional fermionic systems in the absence of an energy gap.\cite{Giamarchi04,Schoenhammer13} For example, one can start from the lattice model of spinless fermions, \begin{equation} \begin{split} H_\text{F}=& -t\sum_i \bigl(c^{\dag}_{i} c_{i+1} +c^{\dag}_{i+1} c_{i}\bigr)\\ &+U\sum_i \left(c^{\dag}_{i} c_i-\frac{1}{2}\right)\left(c^{\dag}_{i+1} c_{i+1}-\frac{1}{2}\right), \end{split} \label{eq:spinless-ferm-def} \end{equation} where $c^{\dag}_{i}$ and $c_i$ are the fermionic creation and annihilation operators at lattice site $i$. The low-energy description in the gapless regime $|U|\le 2t$ is obtained by first linearising the dispersion relation around the Fermi points $\pm q_\text{F}$ and taking the continuum limit, \begin{equation} \frac{c_i}{\sqrt{a_0}}\to \Psi_\text{F}(x)=e^{\text{i} q_\text{F}x}\,\Psi_+(x)+e^{-\text{i} q_\text{F} x}\,\Psi_-(x), \end{equation} where $a_0$ denotes the lattice spacing. The right- and left-moving fermionic fields $\Psi_\pm(x)$ are slowly varying on the scale $1/q_\text{F}$. 
The low-energy properties of \eqref{eq:spinless-ferm-def} are then captured by the Hamiltonian \begin{equation} \begin{split} H_\text{F}=&-\text{i} v_\text{F}\int\text{d} x\,\Bigl[\Psi_+\partial_x\Psi_+-\Psi_-\partial_x\Psi_-\Bigr]\\ &+\int\text{d} x\,\text{d} x'\,\Bigl[g_4(x-x') \rho_{\pm}(x)\rho_{\pm}(x')\\ &\qquad\qquad\qquad+ g_2(x-x') \rho_{\pm}(x)\rho_{\mp}(x')\Bigr], \end{split} \end{equation} where $\rho_{\pm}(x)=\Psi_{\pm}^\dag(x) \Psi_{\pm}(x)$ are the right- and left-moving densities, and $g_2(x-x')$ and $g_4(x-x')$, the Fourier transforms of $g_2(q)$ and $g_4(q)$, respectively, have thus a straightforward interpretation as density-density couplings. In the continuum limit one can introduce phase fields corresponding to collective excitations in the electronic liquid via\cite{Schoenhammer13} \begin{equation} \Psi_+(x)=O_+\,e^{\text{i}\phi^\dagger(x)}\,e^{\text{i}\phi(x)},\quad \text{i}\phi(x)=\sum_{n>0}\frac{e^{\text{i} q_nx}}{\sqrt{n}}b_n, \label{eq:RMtomodes} \end{equation} where the Klein factor $O_+$ lowers the fermion number by one and commutes with the bosonic modes. The relation between the microscopic parameters $t$ and $U$ of the lattice model and the effective parameters $K$ and $v$ in the TLM is exactly known\cite{Giamarchi04} from the Bethe-ansatz solution of \eqref{eq:spinless-ferm-def} \begin{equation} K=\frac{\pi}{2(\pi-\arccos\eta)},\quad v=\frac{\pi t\sqrt{1-\eta^2}}{\arccos\eta},\quad \eta=\frac{U}{2t}. \end{equation} We note that the relation between the microscopic and effective parameters is non-linear, implying, for example, that linear time dependences of the former will lead to non-linear quench protocols for the latter. 
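These Bethe-ansatz relations are easy to evaluate; at the non-interacting point $U=0$ they must reduce to $K=1$ and $v=2t$, the Fermi velocity of the half-filled tight-binding band, while at $U=t$ one finds, eg, $K=3/4$. A short numerical check (the values of $U$ and $t$ are illustrative):

```python
import numpy as np

def luttinger_params(U, t=1.0):
    """K and v of the spinless-fermion chain from the Bethe-ansatz
    relations quoted above (gapless regime |U| <= 2t)."""
    eta = U / (2 * t)
    K = np.pi / (2 * (np.pi - np.arccos(eta)))
    v = np.pi * t * np.sqrt(1 - eta**2) / np.arccos(eta)
    return K, v

K0, v0 = luttinger_params(0.0)   # free fermions: K = 1, v = 2t
K1, v1 = luttinger_params(1.0)   # repulsive interactions: K = 3/4 < 1
```

The non-linearity of $K(U)$ and $v(U)$ is what turns a linear ramp of the microscopic coupling $U$ into a non-linear protocol for the effective parameters.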
The time evolution of the kinetic energy and fermionic quasiparticle weight (see Secs.~\ref{sec:kineticenergy} and~\ref{sec:GF} below) as well as local densities after a sudden interaction quench in the lattice model \eqref{eq:spinless-ferm-def} has been analysed\cite{KRSM12,KennesMeden13,HamerlaUhrig13} in the framework of the TLM. Since one-dimensional spin models like the XXZ Heisenberg chain can be mapped to fermionic chains of the form \eqref{eq:spinless-ferm-def}, the results presented in our paper can be applied to the analysis of the time evolution during and after finite-time quenches in spin chains. A similar analysis has been performed for the dynamics of several observables in the XXZ chain after sudden quenches\cite{DeChiara-06,Barmettler-09,Barmettler-10,Coira-13,Collura-15} as well as during linear ramps in the anisotropy.\cite{Pollmann-13} \subsection{Relation to bosonic systems}\label{sec:bosonicmodel} The TLM also appears in the description\cite{Giamarchi04,Cazalilla-11} of one-dimensional bosonic systems. For example, one may start from the continuum model \begin{equation} \begin{split} H_\text{B}=&\frac{1}{2m_\text{B}}\int\text{d} x \big|\partial_x\Psi_\text{B}\big|^2-\mu\int\text{d} x\,\rho_\text{B}(x)\\ &+\frac{1}{2}\int\text{d} x\,\text{d} x'\,V(x-x')\rho_\text{B}(x)\rho_\text{B}(x'), \end{split} \label{eq:Hboson} \end{equation} where $\Psi_\text{B}(x)$ is a complex scalar field describing bosons of mass $m_\text{B}$, $\rho_\text{B}(x)=\Psi^\dagger_\text{B}(x)\Psi_\text{B}(x)$ denotes the boson density, $V(x)$ is a density-density interaction, and $\mu$ the chemical potential. 
In the special case $V(x)=V_0\delta(x)$ the model becomes the integrable Lieb--Liniger model,\cite{LiebLiniger63} whose Bethe-ansatz solution can be used to study its time evolution after a sudden quench.\cite{Gritsev-10,IyerAndrei12,Iyer-13,DeNardis-14,DeNardisCaux14,Zill-15,DeNardis-15,vandenBerg-16} In the presence of a lattice a natural starting point would be the Bose--Hubbard model \begin{equation} \begin{split} H_\text{BHM}=&-t\sum_i \bigl(a^{\dag}_{i} a_{i+1} +a^{\dag}_{i+1} a_{i}\bigr)\\ &+U\sum_i n_i(n_i-1)-\mu\sum_i n_i, \end{split} \label{eq:BHM} \end{equation} where $a^{\dag}_{i}$ and $a_i$ are the bosonic creation and annihilation operators at lattice site $i$ and $n_i=a_i^\dagger a_i$ denotes the respective density. We note in passing that models like \eqref{eq:Hboson} and \eqref{eq:BHM} can be realised in systems of trapped, ultra-cold atoms.\cite{Buechler-03} In the superfluid phase the low-energy properties of bosonic systems like \eqref{eq:Hboson} and \eqref{eq:BHM} can be described by the TLM \eqref{eq:TLM}. For that one writes the bosonic fields in terms of the density $\rho_\text{B}$ and phase field $\theta$ as\cite{Cazalilla-11} \begin{equation} \Psi_\text{B}(x)=\sqrt{\rho_\text{B}(x)}\,e^{\text{i}\theta(x)}, \end{equation} where the phase field can in turn be represented by the creation and annihilation operators of the bosonic modes as \begin{equation}\label{eq:phi-to-b} \theta(x)=\frac{\text{i}}{2}\sum_{n\neq 0} \frac{\exp(-\text{i} q_n x)}{\sqrt{n}}\bigl(b_n^{\dag}-b_{-n}\bigr). \end{equation} The effective parameters $K$ and $v$ in the TLM can be obtained for example from the Bethe-ansatz solution in the Lieb--Liniger case or via the density-matrix renormalisation group method from the Bose--Hubbard model.\cite{Kollath-04} Again the relation between the microscopic and effective parameters is non-linear. 
The time evolution of the bosonic systems \eqref{eq:Hboson} and \eqref{eq:BHM} during finite-time interaction quenches was investigated in Refs.~\onlinecite{Bernier-12,HaqueZimmer13,Bernier-14}. \section{Finite-time quench protocols}\label{sec:quenches} After discussing the basic properties of the TLM and its relation to other one-dimensional systems, we now turn to the time evolution due to a change in the coupling functions in \eqref{eq:TLM}. Starting with the seminal work by Cazalilla\cite{Cazalilla06} the dynamics of the TLM after a sudden quench, ie, a sudden change in the coupling functions, has been exhaustively studied in the past.\cite{Perfetto06,DeChiara-06,Barmettler-09,Uhrig09,IucciCazalilla09,Barmettler-10,MitraGiamarchi11,Mitra12,KRSM12,MitraGiamarchi12,RSM12,DallaTorre-13,Mitra13,NessiIucci13,KennesMeden13,NgoDinh-13,HamerlaUhrig13,Coira-13,Tavora-14,Kennes-14,Protopopov-15,Collura-15} \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{nonsudden.pdf} \caption{(Colour online) Sketch of different time-dependent protocols for a general, finite-time quench. Here $\tau$ denotes the quench time over which the parameter $g$ varies. The solid and dashed black lines represent the special cases of sudden $(\tau\to 0)$ and adiabatic $(\tau\to\infty)$ quenches respectively. The solid blue line corresponds to a linear ramp \eqref{eq:linearquench}, which constitutes the simplest finite-time quench. Further examples considered here are the cosine quench \eqref{eq:cosinequench} and the periodic quench \eqref{eq:cosine3quench} illustrated by the solid and dashed red lines respectively.} \label{fig:nonsudden} \end{figure} In contrast, here we will consider the dynamics during and after continuous changes in the coupling functions; see Fig.~\ref{fig:nonsudden} for an illustration. 
Most previous works investigating such a setup have focused on linear quenches in which the interactions are ramped up at a constant speed.\cite{Dora-11,Pollmann-13,Bernier-14,Sachdeva-14} We consider quench protocols starting from the non-interacting model and ranging over a finite quench time $\tau$, ie, the coupling functions satisfy \begin{equation} g_{2/4}(q,t<0)=0, \quad g_{2/4}(q,t>\tau)=g_{2/4}(q), \end{equation} with $g_{2/4}(q)$ being independent of time. At zero temperature, to which we restrict ourselves, the initial state $\ket{\Psi_0}$ at $t=0$ is thus the vacuum state of the bosons, ie, $b_n\ket{\Psi_0}=0$ for all $n$. \subsection{Evolution during the quench, $\boldsymbol{t<\tau}$}\label{sec:during} The time evolution of the bosonic operators is governed by the Heisenberg equations of motion\cite{Dora-11} \begin{equation} \text{i}\frac{\text{d}}{\text{d} t}O(t)=\bigl[O(t),H(t)\bigr],\quad O(t)=b_n(t), b_n^\dagger(t), \end{equation} where we denote operators in the Heisenberg picture by stating the time argument explicitly. For the TLM we obtain \begin{eqnarray} \text{i}\frac{\text{d}}{\text{d} t}b_n(t)&=&\omega_n(t)b_n(t)+\lambda_n(t)b_{-n}^\dagger(t),\label{eq:Heisenbergb1}\\ \text{i}\frac{\text{d}}{\text{d} t}b_n^\dagger(t)&=&-\omega_n(t)b_n^\dagger(t)-\lambda_n(t)b_{-n}(t),\label{eq:Heisenbergb2} \end{eqnarray} with the abbreviations \begin{eqnarray} \omega(q,t)&=&v_\text{F}|q|\bigl[1+\hat{g}_4(q,t)\bigr]=v_\text{F}|q|\left(1+\frac{g_4(q,t)}{2\pi v_\text{F}}\right)\!,\quad\\ \lambda(q,t)&=&v_\text{F}|q|\hat{g}_2(q,t)=\frac{|q|}{2\pi}g_2(q,t), \end{eqnarray} and $\omega_n(t)=\omega(q_n,t)$ as well as $\lambda_n(t)=\lambda(q_n,t)$. 
Now making the ansatz \begin{eqnarray} b_n(t)&=&u_n(t)b_n+v_n(t)^*\,b_{-n}^\dagger,\\ b_n^\dagger(t)&=&u_n(t)^*\,b_n^\dagger+v_n(t)b_{-n}, \end{eqnarray} where the operators on the right-hand side are the time-independent Schr\"odinger operators at $t=0$, the equations \eqref{eq:Heisenbergb1} and \eqref{eq:Heisenbergb2} turn into differential equations for the coefficients\cite{Dora-11} $u_n(t)$ and $v_n(t)$, \begin{equation} \text{i}\frac{\text{d}}{\text{d} t}\left(\begin{array}{c}u_n(t)\\ v_n(t)\end{array}\right)= \left(\begin{array}{cc}\omega_n(t)& \lambda_n(t)\\ -\lambda_n(t) & -\omega_n(t)\end{array}\right) \left(\begin{array}{c}u_n(t)\\ v_n(t)\end{array}\right). \label{eq:DGLuv} \end{equation} The initial conditions read \begin{equation} u_n(0)=1,\quad v_n(0)=0, \label{eq:uvinit} \end{equation} and we recall $u_n(t)=u(q_n,t)$ and $v_n(t)=v(q_n,t)$. The coefficients satisfy $u(q,t)=u(-q,t)$, $v(q,t)=v(-q,t)$ and $|u(q,t)|^2-|v(q,t)|^2=1$ as well as $\lim_{\tau\to 0}u_n(t=\tau)=1$ and $\lim_{\tau\to 0}v_n(t=\tau)=0$. Furthermore, since the universal properties of the system are governed by the behaviour at small momenta, we determine the expansion of the solutions $u(q,t)$ and $v(q,t)$ at $q\ll q_\text{c}$. More specifically we make the ansatz \begin{equation} u(q,t)=\sum_{m=0}^\infty u^{(m)}(t)\,\left(\frac{q}{q_\text{c}}\right)^m, \end{equation} and similarly for $v(q,t)$, $\omega(q,t)$ and $\lambda(q,t)$. [We recall that the coupling functions $g_{2/4}(q,t)$ are assumed to be analytic at $q=0$, thereby excluding long-range interactions in real space.] 
Then we obtain for the initial conditions \eqref{eq:uvinit} and up to linear order in $q$ the results \begin{eqnarray} u(q,t)&=&1-\text{i} v_\text{F}q\int_0^t\text{d} t'\,\bigl[1+\hat{g}_4(0,t')\bigr],\label{eq:smallku}\\ v(q,t)&=&\text{i} v_\text{F}q\int_0^t\text{d} t'\,\hat{g}_2(0,t').\label{eq:smallkv} \end{eqnarray} These expansions are valid for sufficiently small momenta $q\ll q_\text{c},1/(v_\text{F}\tau)/[1+\hat{g}_4(0)]\sim 1/(v_\text{F}\tau)$, where the second condition originates from the requirement that the next-to-leading order term in \eqref{eq:smallku} stays smaller than the leading one (see App.~\ref{app:smallk} for a more detailed discussion). Apart from the restriction on $q$ the expansions \eqref{eq:smallku} and \eqref{eq:smallkv} are valid for quench protocols with arbitrary time dependencies and final interaction strengths $g_{2/4}(0,\tau)$. In particular, they are applicable in the non-perturbative regime $1\lesssim\hat{g}_{2/4}(0,\tau)$. It is straightforward to verify \eqref{eq:smallku} and \eqref{eq:smallkv} explicitly for the exactly solvable protocols discussed in Sec.~\ref{sec:exact}, as well as for the perturbative solution of \eqref{eq:DGLuv} presented for completeness in App.~\ref{app:PT}. \subsection{Evolution after the quench, $\boldsymbol{t>\tau}$}\label{sec:after} For times $t>\tau$ the coupling functions are constant and thus the differential equations \eqref{eq:DGLuv} can be solved explicitly by \begin{equation} v_n(t)=A_n\cos(\epsilon_nt)+B_n\sin(\epsilon_nt), \label{eq:vafterquench} \end{equation} where [we assume $\omega_n(\tau)>\lambda_n(\tau)$] \begin{equation} \epsilon_n=\epsilon(q_n)=\sqrt{\omega_n(\tau)^2-\lambda_n(\tau)^2}>0 \end{equation} is the single-mode energy \eqref{eq:epsilon} after the quench, and \begin{equation} u_n(t)=-\frac{\text{i}}{\lambda_n(\tau)}\frac{\text{d}}{\text{d} t}v_n(t)-\frac{\omega_n(\tau)}{\lambda_n(\tau)}v_n(t). 
\end{equation} The constants $A_n=A(q_n)$ and $B_n=B(q_n)$ are obtained from the initial conditions for the post-quench dynamics at $t=\tau$, \begin{eqnarray} A_n&=&-\text{i}\frac{\lambda_n(\tau)}{\epsilon_n}\sin(\epsilon_n\tau)u_n(\tau)\\* & &-\frac{\text{i}}{\epsilon_n}\bigl[\text{i}\epsilon_n\cos(\epsilon_n\tau)+\omega_n(\tau)\sin(\epsilon_n\tau)\bigr]v_n(\tau),\nonumber\\ B_n&=&\text{i}\frac{\lambda_n(\tau)}{\epsilon_n}\cos(\epsilon_n\tau)u_n(\tau)\\* & &-\frac{\text{i}}{\epsilon_n}\bigl[\text{i}\epsilon_n\sin(\epsilon_n\tau)-\omega_n(\tau)\cos(\epsilon_n\tau)\bigr]v_n(\tau).\nonumber \end{eqnarray} The expansion of the post-quench coefficients for small momenta is obtained using \eqref{eq:smallku} and \eqref{eq:smallkv}; to leading order it reads \begin{eqnarray} A(q)&=& -\text{i} v_\text{F}q\tau\left[\hat{g}_2(0,\tau)-\frac{1}{\tau}\int_0^\tau\text{d} t\,\hat{g}_2(0,t)\right],\label{eq:smallkA}\\ B(q)&=&\text{i}\frac{1-K^2}{2K},\label{eq:smallkB} \end{eqnarray} with the Luttinger parameter $K$ defined in \eqref{eq:LLparameter}. The expansion is valid for arbitrary quench protocols provided $q\ll q_\text{c},1/(v_\text{F}\tau)$. For linear or periodic quenches (see next section for the precise definition) the expansion \eqref{eq:smallkA} simplifies to $A(q)=-\text{i} v_\text{F}\hat{g}_2(q,\tau) q\tau/2$. \subsection{Generalised Gibbs ensemble}\label{sec:GGE} \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{modes.pdf} \caption{(Colour online) Mode occupations $\bra{\Psi(\tau)}\alpha_n^\dagger\alpha_n\ket{\Psi(\tau)}$ after the quench with the quench time given by $\ln(v_\text{F}q_\text{c}\tau)=1.5$ (except for the sudden quench), Gaussian momentum dependence of the coupling functions and final interaction strength $g_0=v_\text{F}/2$. For the periodic quench we observe a preferred occupation of high-energy modes which strongly affects the behaviour of the total energy shown in Fig.~\ref{fig:Etot}. 
In the adiabatic limit $\tau\to\infty$ one finds $\bra{\Psi(\tau)}\alpha_n^\dagger\alpha_n\ket{\Psi(\tau)}\to 0$ for all $n\neq 0$ and all quench protocols.} \label{fig:modes} \end{figure} Since the time evolution for $t>\tau$ is governed by the time-independent Hamiltonian \eqref{eq:TLM} conserving the mode occupations after the quench, the system relaxes for $t\to\infty$ to a generalised Gibbs ensemble.\cite{Rigol-07} One obtains\cite{Dora-12} (in complete analogy to the situation after a sudden quench\cite{Cazalilla06,IucciCazalilla09}) \begin{equation} \rho_\text{GGE}=\frac{e^{-\sum_{n\neq 0}\eta_n\alpha_n^\dagger \alpha_n}}{\text{tr}\left(e^{-\sum_{n\neq 0}\eta_n\alpha_n^\dagger \alpha_n}\right)}, \label{eq:GGE} \end{equation} where $\alpha_n^\dagger\alpha_n$ are the mode-occupation operators after the quench and the Lagrange multipliers are obtained from the initial conditions for the post-quench dynamics. Explicitly we find with the coefficients defined in \eqref{eq:defsc}, \begin{equation} \begin{split} &\bra{\Psi(\tau)}\alpha_n^\dagger\alpha_n\ket{\Psi(\tau)}=\big|c(q_n)v_n(\tau)+s(q_n)u_n(\tau)\big|^2\\ &\qquad\stackrel{!}{=}\text{tr}\bigl(\rho_\text{GGE}\alpha_n^\dagger\alpha_n\bigr)=\frac{e^{-\eta_n}}{1-e^{-\eta_n}}. \end{split} \label{eq:modesGGE} \end{equation} We stress that the Lagrange multipliers implicitly depend on the quench time $\tau$ and the precise form of the time dependence for $0<t<\tau$ via the coefficients $u_n(\tau)$ and $v_n(\tau)$. In the sudden limit we recover the well-known\cite{Cazalilla06,IucciCazalilla09} result $\bra{\Psi(\tau)}\alpha_n^\dagger\alpha_n\ket{\Psi(\tau)}=s(q_n)^2$. The mode occupations \eqref{eq:modesGGE} for specific quench protocols are illustrated in Fig.~\ref{fig:modes}. 
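As a cross-check on expressions like \eqref{eq:modesGGE}, the coefficients $u_n(\tau)$ and $v_n(\tau)$ can also be obtained by direct numerical integration of \eqref{eq:DGLuv}; in the sudden limit $\tau\to0$ the occupation must reduce to $s(q_n)^2$, and at small momenta $v_n(\tau)$ must follow \eqref{eq:smallkv}. A minimal sketch for a cosine-shaped protocol $\hat{g}_2(q,t)=\hat{g}_2(q)[1-\cos(\pi t/\tau)]/2$ with Gaussian couplings (all parameter values illustrative, units $v_\text{F}=q_\text{c}=1$):

```python
import numpy as np
from scipy.integrate import solve_ivp

v_F, ghat0 = 1.0, 0.5                # illustrative, units v_F = q_c = 1

def ghat2(q, t, tau):                # cosine-shaped protocol, g2 = g4
    return ghat0 * np.exp(-q**2 / 2) * 0.5 * (1 - np.cos(np.pi * t / tau))

def uv(q, tau):
    """Integrate eq. (DGLuv) for one mode; real and imaginary parts are
    split so that solve_ivp propagates a purely real system."""
    def rhs(t, y):
        uR, uI, vR, vI = y
        w = v_F * abs(q) * (1 + ghat2(q, t, tau))    # omega(q,t)
        l = v_F * abs(q) * ghat2(q, t, tau)          # lambda(q,t)
        return [w * uI + l * vI, -(w * uR + l * vR),
                -(l * uI + w * vI), l * uR + w * vR]
    sol = solve_ivp(rhs, (0.0, tau), [1.0, 0.0, 0.0, 0.0],
                    rtol=1e-10, atol=1e-12)
    uR, uI, vR, vI = sol.y[:, -1]
    return uR + 1j * uI, vR + 1j * vI

def occupation(q, tau):
    """Mode occupation |c v(tau) + s u(tau)|^2 entering eq. (modesGGE)."""
    g = ghat0 * np.exp(-q**2 / 2)
    W = np.sqrt(1 + 2 * g)
    s = np.sqrt(0.5 * ((1 + g) / W - 1))
    u, v = uv(q, tau)
    return abs(np.sqrt(1 + s**2) * v + s * u)**2
```

The constraint $|u_n|^2-|v_n|^2=1$ is conserved by the integration and serves as an accuracy check.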
In particular, we observe that the periodic quench leads to a preferred occupation of high-energy modes around the band edge $q\sim q_\text{c}$ for quench times $\tau\sim 5/(v_\text{F}q_\text{c})$, ie, when the energy scale $1/\tau$ related to the quench protocol is comparable to the band width. In contrast, for the sudden or monotonic quench protocols such a preferred occupation of high-energy modes is not observed. We further note that the generalised Gibbs ensemble \eqref{eq:GGE} is diagonal in the modes and thus does not capture correlations between the $q$ and $-q$ modes.\cite{IucciCazalilla09} This limitation can be overcome by additionally including $\alpha_n^\dagger\alpha_n\alpha_{-n}^\dagger\alpha_{-n}$ into the set of conserved quantities used to define the generalised Gibbs ensemble. \section{Analytically solvable quench protocols}\label{sec:exact} In the case of Galilean invariance, $g_2(q,t)=g_4(q,t)$, exact solutions are possible for specific time dependences. (We briefly comment on the general case in Sec.~\ref{sec:non-Galilean-def}.) In order to derive them we introduce an auxiliary function $a_n(t)$ via \begin{eqnarray} u_n(t)&=&\frac{1}{2}a_n(t)+\frac{\text{i}}{2v_\text{F}|q_n|}\frac{\text{d}}{\text{d} t}a_n(t),\label{eq:relationua}\\ v_n(t)&=&\frac{1}{2}a_n(t)-\frac{\text{i}}{2v_\text{F}|q_n|}\frac{\text{d}}{\text{d} t}a_n(t).\label{eq:relationva} \end{eqnarray} In terms of this auxiliary function the differential equation \eqref{eq:DGLuv} becomes \begin{equation} \frac{\text{d}^2}{\text{d} t^2}a_n(t)+v_\text{F}^2q_n^2\bigl[1+2\hat{g}_2(q_n,t)\bigr]a_n(t)=0, \label{eq:DGLa} \end{equation} with the initial conditions \begin{equation} a_n(0)=1,\quad \frac{\text{d}}{\text{d} t}a_n(t)\Big|_{t=0}=-\text{i} v_\text{F}|q_n|. \label{eq:DGLainit} \end{equation} For specific choices of the time dependence of $g_2(q_n,t)$ the differential equation \eqref{eq:DGLa} admits a closed solution which we discuss in the following subsections. 
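Before turning to the closed-form solutions, we note that \eqref{eq:DGLa} is also well suited to direct numerics: its coefficient is real, so the real and imaginary parts of $a_n$ decouple. The sketch below (illustrative parameters, units $v_\text{F}=q_\text{c}=1$) integrates \eqref{eq:DGLa} for a linear ramp $\hat{g}_2(q,t)=\hat{g}_2(q)\,t/\tau$, compares with the Bessel-function solution \eqref{eq:solutionlinearquench} given in the next subsection, and reconstructs $u_n$, $v_n$ via \eqref{eq:relationua} and \eqref{eq:relationva}:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import jv

v_F, ghat0, tau = 1.0, 0.5, 4.0      # illustrative, units v_F = q_c = 1

def a_numeric(q):
    """Integrate eq. (DGLa) for ghat2(q,t) = ghat2(q)*t/tau; the
    coefficient of a_n is real, so Re(a) and Im(a) decouple."""
    g2 = ghat0 * np.exp(-q**2 / 2)
    def rhs(t, y):
        aR, dR, aI, dI = y
        f = -(v_F * q)**2 * (1 + 2 * g2 * t / tau)
        return [dR, f * aR, dI, f * aI]
    sol = solve_ivp(rhs, (0.0, tau), [1.0, 0.0, 0.0, -v_F * abs(q)],
                    rtol=1e-10, atol=1e-12)
    aR, dR, aI, dI = sol.y[:, -1]
    return aR + 1j * aI, dR + 1j * dI

def a_bessel(q, t):
    """Closed-form solution (solutionlinearquench) in terms of J_{+-1/3}."""
    g2 = ghat0 * np.exp(-q**2 / 2)
    h = v_F * abs(q) * tau / (3 * g2)
    tt = 1 + 2 * g2 * t / tau
    pref = np.pi * h * np.sqrt(tt) / np.sqrt(3)
    return pref * ((jv(2/3, h) - 1j * jv(-1/3, h)) * jv(1/3, h * tt**1.5)
                   + (jv(-2/3, h) + 1j * jv(1/3, h)) * jv(-1/3, h * tt**1.5))

# reconstruct u_n, v_n via (relationua)/(relationva) at t = tau
q = 0.8
a, adot = a_numeric(q)
u = 0.5 * a + 0.5j * adot / (v_F * q)
v = 0.5 * a - 0.5j * adot / (v_F * q)
```

The two solutions agree to the accuracy of the integrator, and the initial condition $a_n(0)=1$ follows from the Bessel-function Wronskian identities.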
\subsection{Linear quench}\label{sec:linear} The simplest finite-time quench protocol is a linear ramp \begin{equation} g_2(q,t)=g_4(q,t)=g_2(q)\,\frac{t}{\tau}, \label{eq:linearquench} \end{equation} which is sketched by the solid blue line in Fig.~\ref{fig:nonsudden}. In our setup, contrary to most previous works \cite{Dora-11,PerfettoStefanucci11,DziarmagaTylutki11,Dora-12,Pollmann-13,Sachdeva-14}, we do not neglect the $g_4$-term. We note that the coupling functions in the linear quench are not differentiable at $t=0$ and $t=\tau$. Inserting \eqref{eq:linearquench} into \eqref{eq:DGLa} leads to the Airy differential equation\cite{AbramowitzStegun65} whose solution for the initial conditions \eqref{eq:DGLainit} can be rewritten in terms of Bessel functions as\cite{Bernier-14,footnote1} \begin{equation} \begin{split} a_n(t)=&\frac{\pi h_n\sqrt{\tilde{t}_n}}{\sqrt{3}}\\ &\times\Bigl\{\bigl[J_{2/3}(h_n)-\text{i} J_{-1/3}(h_n)\bigr]J_{1/3}(h_n\tilde{t}_n^{3/2})\\ &\quad+\bigl[J_{-2/3}(h_n)+\text{i} J_{1/3}(h_n)\bigr]J_{-1/3}(h_n\tilde{t}_n^{3/2})\Bigr\}, \end{split} \label{eq:solutionlinearquench} \end{equation} where we have introduced $h_n=v_\text{F}|q_n|\tau/[3\hat{g}_2(q_n)]$ and $\tilde{t}_n=1+2\hat{g}_2(q_n)t/\tau$. The solution \eqref{eq:solutionlinearquench} is valid during the quench $0\le t\le \tau$. The time evolution after the quench is given by the results stated in Sec.~\ref{sec:after} with the constants $A_n$ and $B_n$ encoding the evolution for $t<\tau$ via the values of $u_n(\tau)$ and $v_n(\tau)$ obtained from \eqref{eq:solutionlinearquench}. \subsection{Cosine quench}\label{sec:cosine} Next we consider a quench protocol whose first derivative is also continuous at $t=0$ and $t=\tau$. This is for example the case for the cosine quench \begin{equation} g_2(q,t)=g_4(q,t)=\frac{g_2(q)}{2}\left[1-\cos\frac{\pi t}{\tau}\right], \label{eq:cosinequench} \end{equation} sketched as the solid red line in Fig.~\ref{fig:nonsudden}. 
The exact solution of the differential equation \eqref{eq:DGLa} with the initial condition \eqref{eq:DGLainit} is given by \begin{equation} \begin{split} a_n(t)=&c_1\,\text{Ce}\biggl(2\tilde{h}_n\bigl[1+\hat{g}_2(q_n)\bigr],\tilde{h}_n\hat{g}_2(q_n),\frac{\pi t}{2\tau}\biggr)\\ &+c_2\,\text{Se}\biggl(2\tilde{h}_n\bigl[1+\hat{g}_2(q_n)\bigr],\tilde{h}_n\hat{g}_2(q_n),\frac{\pi t}{2\tau}\biggr), \end{split} \label{eq:solutioncosinequench} \end{equation} where $\text{Ce}(a,q,z)$ and $\text{Se}(a,q,z)$ denote the even and odd Mathieu functions\cite{AbramowitzStegun65,footnote2} satisfying the differential equation $y''+[a-2q\cos(2z)]y=0$, $\tilde{h}_n=2(v_\text{F}q_n\tau)^2/\pi^2$, and the integration constants are \begin{equation} \begin{split} \frac{1}{c_1}&=\text{Ce}\bigl(2\tilde{h}_n\bigl[1+\hat{g}_2(q_n)\bigr],\tilde{h}_n\hat{g}_2(q_n),0\bigr),\\ \frac{1}{c_2}&=\frac{\text{i}\pi}{2v_\text{F}|q_n|\tau}\frac{\partial}{\partial z}\text{Se}\bigl(2\tilde{h}_n\bigl[1+\hat{g}_2(q_n)\bigr],\tilde{h}_n\hat{g}_2(q_n),z\bigr)\Big|_{z=0}. \end{split} \end{equation} \subsection{Periodic quench}\label{sec:periodic} We note that the solution \eqref{eq:solutioncosinequench} also allows one to consider periodic driving\cite{KaganManakova09,Graf-10,Pielawa11,BukovHeyl12} of the system. In fact, for periodic quenches with any period, ie, quenches of the form \begin{equation} g_2(q,t)=g_4(q,t)=\frac{g_2(q)}{2}\left[1-\cos\frac{\nu\pi t}{\tau}\right],\quad\nu\in\mathbb{N}, \label{eq:cosine3quench} \end{equation} the solution for $a_n(t)$ during the quench is directly obtained from \eqref{eq:solutioncosinequench} via the replacement $\tau\to\tau/\nu$. For odd $\nu$ these quenches correspond to the switching on of interactions after periodic driving (see the dashed red line in Fig.~\ref{fig:nonsudden} for a sketch), while for even $\nu$ the system is driven for $\nu/2$ periods and then returns to the non-interacting Hamiltonian. 
In the context of periodically driven Luttinger liquids the solution \eqref{eq:solutioncosinequench} has already been employed\cite{Pielawa11,BukovHeyl12} to investigate features like parametric resonances and metastable states.\cite{KaganManakova09,Graf-10,Pielawa11,BukovHeyl12} \subsection{Exponential and quadratic quenches}\label{sec:exponential} It is also possible to treat exponential quenches of the form \begin{equation} g_2(q,t)=g_4(q,t)=\frac{g_2(q)}{\xi}\left(e^{t\ln(1+\xi)/\tau}-1\right) \label{eq:exponentialquench} \end{equation} with $\xi>0$, which lead to exact solutions in terms of Bessel functions\cite{AbramowitzStegun65} \begin{equation} a_n(t)=c_1\,J_{-\nu_n}\bigl(\hat{t}_n\bigr)+c_2\,J_{\nu_n}\bigl(\hat{t}_n\bigr), \end{equation} where \begin{eqnarray} \nu_n&=&\frac{2v_\text{F}|q_n|\tau\sqrt{2\hat{g}_2(q_n)-\xi}}{\sqrt{\xi}\,\ln(1+\xi)},\\ \hat{t}_n&=&\frac{\sqrt{8}v_\text{F}|q_n|\tau\sqrt{\hat{g}_2(q_n)\,(1+\xi)^{t/\tau}}}{\sqrt{\xi}\,\ln(1+\xi)}. \end{eqnarray} The integration constants $c_1$ and $c_2$ have to be determined from the initial condition \eqref{eq:DGLainit}. Similarly, quadratic quenches \begin{equation} g_2(q,t)=g_4(q,t)=g_2(q)\,\frac{t^2}{\tau^2} \label{eq:quadraticquench} \end{equation} result in solutions in terms of parabolic cylinder functions\cite{AbramowitzStegun65} \begin{equation} \begin{split} a_n(t)=&c_1\,D_{-\frac{1}{2}+\mu_n}\left(-e^{-\text{i}\frac{\pi}{4}}2^{3/4}\hat{g}_2(q_n)^{1/4}\sqrt{\frac{v_\text{F}q_n}{\tau}}t\right)\\ &+c_2\,D_{-\frac{1}{2}-\mu_n}\left(e^{\text{i}\frac{\pi}{4}}2^{3/4}\hat{g}_2(q_n)^{1/4}\sqrt{\frac{v_\text{F}q_n}{\tau}}t\right) \end{split} \end{equation} with $\mu_n=\text{i} v_\text{F}q_n\tau/\sqrt{8\hat{g}_2(q_n)}$. \subsection{Beyond Galilean invariance}\label{sec:non-Galilean-def} If we drop the requirement of Galilean invariance, ie, if we allow coupling functions with $g_2(q,t)\neq g_4(q,t)$, a differential equation similar to \eqref{eq:DGLa} can still be derived. 
Defining $a_n(t)=u_n(t)+v_n(t)$ [we stress that \eqref{eq:relationua} and \eqref{eq:relationva} are no longer valid] and taking the second derivative we obtain \begin{equation} \begin{split} &\ddot{a}_n(t)-\frac{\dot{a}_n(t)}{1+\hat{g}_4(q_n,t)-\hat{g}_2(q_n,t)}\frac{\text{d}}{\text{d} t}\bigl[\hat{g}_4(q_n,t)-\hat{g}_2(q_n,t)\bigr]\\ &\quad+v_\text{F}^2q_n^2\Bigl[\bigl(1+\hat{g}_4(q_n,t)\bigr)^2-\hat{g}_2(q_n,t)^2\Bigr]a_n(t)=0, \end{split} \label{eq:DGLa-beyon} \end{equation} with the initial conditions for non-interacting initial states still given by $a_n(0)=1$ and $\dot{a}_n(0)=-\text{i} v_\text{F}|q_n|$. However, we are not aware of quench protocols that allow for an exact, analytical solution of \eqref{eq:DGLa-beyon}. The differential equation \eqref{eq:DGLa-beyon} and some properties of its solution will be further investigated in a separate work.\cite{Chudzinski16} \section{Results for specific observables}\label{sec:results} In this section we consider the total and kinetic energies and the fermionic and bosonic Green functions. We focus on universal properties that depend only on the values of $g_{2/4}(q)$ at $q=0$, which translates into a dependence on the Luttinger parameter $K$ and renormalised velocity $v$ of the post-quench system. Unless stated otherwise, $g_2(q)$ and $g_4(q)$ are considered to be independent functions. The numerical results shown in the plots of this section were obtained for the specific choice $g_2(q,\tau)=g_4(q,\tau)=g_0\,\exp[-(q/q_\text{c})^2/2]$. \subsection{Total energy}\label{sec:totalenergy} \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{Etot.pdf} \caption{(Colour online) Energy density $E(\tau)/L$ after linear, cosine and periodic quenches with final interaction strength $g_0=v_\text{F}/2$. The time dependence during the quench clearly affects the total energy, in particular for the case with periodic driving for one period. 
In the adiabatic limit $\tau\to\infty$ the total energy density reaches the ground-state energy density $E_\text{gs}/L$ indicated by the dotted line. Inset: Energy density during linear and cosine quenches of length $v_\text{F}q_\text{c}\tau=10$, the former showing a kink at $t=\tau$.} \label{fig:Etot} \end{figure} The simplest observable is the total energy $E(t)=\langle H\rangle(t)$ during the quench, which was first analysed for linear quenches in Ref.~\onlinecite{Dora-11}. Following this work, a comparison to numerical data for linear quenches in the gapless phase of the XXZ chain showed very good agreement,\cite{Pollmann-13} thus indicating that the TLM can describe finite-time quenches in lattice models. The total energy in the TLM reads \begin{equation} E(t)=\sum_{n\neq 0}\text{Im}\left[v_n(t)^*\,\frac{\text{d}}{\text{d} t}v_n(t)\right]. \label{eq:energy} \end{equation} After the quench, $t>\tau$, the total energy is constant and given by $E(\tau)=\text{tr}(\rho_\text{GGE}H)$ with the generalised Gibbs ensemble $\rho_\text{GGE}$ defined in Sec.~\ref{sec:GGE}. The energy \eqref{eq:energy} depends on the details of the quench protocol and the quench time (as well as the precise form of the coupling functions), for example \begin{equation} \begin{split} \frac{\text{d}}{\text{d} t}E(t)=&\frac{1}{2\pi}\sum_{n\neq 0}|q_n|\,\Bigl[\dot{g}_2(q_n,t)\,\text{Re}\bigl(v_n(t)^*\,u_n(t)\bigr)\\ &\qquad\qquad\qquad+\dot{g}_4(q_n,t)\,\big|v_n(t)\big|^2\Bigr], \end{split} \end{equation} where the dot denotes the derivative with respect to time. In particular, kinks in $g_{2/4}(q,t)$ result in kinks in the total energy as exemplified in the inset of Fig.~\ref{fig:Etot}. As further shown in Fig.~\ref{fig:Etot} we observe that in both the sudden and the adiabatic limit the result does not depend on the quench protocol, as is of course well expected. 
In contrast, for quench times of the order of the inverse band width, $\tau\sim 1/(v_\text{F}q_\text{c})$, the results for the linear and cosine quenches clearly differ. The most drastic effect of the finite quench time shows up for the periodic quench, where at quench times $\tau\sim 5/(v_\text{F}q_\text{c})$ an increase of the energy can be observed. The physical origin of this behaviour lies in the preferred occupation of high-energy modes by these quench protocols, as can be seen from the mode occupation after the quench shown in Fig.~\ref{fig:modes}. For the linear and cosine quenches we observe that the adiabatic limit is reached as $E(\tau)-E_\text{gs}\propto (v_\text{F}q_\text{c}\tau)^{-2}\,\ln(v_\text{F}q_\text{c}\tau)$ in accord with the so-called analytic regime discussed in Ref.~\onlinecite{PolkovnikovGritsev08}. The behaviour $E(\tau)-E_\text{gs}\sim\tau^{-2}$ was also observed\cite{HaqueZimmer13} after power-law quenches in various many-body systems in harmonic traps, including the one-dimensional Bose--Hubbard model. Our results for the cosine quench suggest that this general behaviour originates from the existence of a finite quench time $\tau$ rather than from the existence of an endpoint kink at $t=\tau$ in the interaction function. \subsection{Kinetic energy}\label{sec:kineticenergy} As a first observable showing non-trivial behaviour after the quench, we consider the kinetic energy, defined as the expectation value of the non-interacting Hamiltonian $H_\text{kin}=H(t<0)=v_\text{F}\sum_{n\neq 0}|q_n|b_n^\dagger b_n$. A straightforward calculation gives \begin{equation} E_\text{kin}(t)=2v_\text{F}\sum_{n>0}q_n\,\big|v_n(t)\big|^2. 
\end{equation} For times $t>\tau$ and in the thermodynamic limit we obtain \begin{equation} \begin{split} E_\text{kin}(t)=&\frac{v_\text{F}L}{\pi}\int_0^\infty \text{d} q\,q\biggl[\frac{|A(q)|^2+|B(q)|^2}{2}\\* &\quad+\frac{|A(q)|^2-|B(q)|^2}{2}\cos[2\epsilon(q)t]\\* &\quad+\text{Re}\bigl[A(q)^*B(q)\bigr]\,\sin[2\epsilon(q)t]\biggr]. \end{split} \end{equation} The behaviour at late times can be obtained using asymptotic analysis;\cite{BenderOrszag99} assuming $\text{d}\epsilon(q)/\text{d} q\neq 0$ we find \begin{equation} E_\text{kin}(t)=E_\text{kin}^\infty+\frac{L\gamma_\text{kin}}{t^2}+\mathcal{O}(t^{-3}). \label{eq:Ekinlatetimes} \end{equation} Here \begin{equation} \begin{split} E_\text{kin}^\infty&=\frac{v_\text{F}L}{2\pi}\int_0^\infty \text{d} q\,q\,\bigl(|A(q)|^2+|B(q)|^2\bigr)\\ &=\text{tr}\bigl(\rho_\text{GGE}H_\text{kin}\bigr) \end{split} \end{equation} denotes the asymptotic limit identical to the expectation value of $H_\text{kin}$ in the generalised Gibbs ensemble \eqref{eq:GGE}. As such it inherits all properties of the mode occupation after the quench \eqref{eq:modesGGE}, for example the non-monotonic dependence on the quench time after a periodic quench. Furthermore, the decay parameter $\gamma_\text{kin}$ is given by \begin{equation} \gamma_\text{kin}=\frac{v_\text{F}}{32\pi v^2}\left(K-\frac{1}{K}\right)^2, \end{equation} where $v$ and $K$ are defined in \eqref{eq:LLparameter} with the coupling functions taken at the quench time $t=\tau$. We stress that the decay parameter is universal in the sense that it depends on the values of the coupling functions $g_2(q,\tau)$ and $g_4(q,\tau)$ at $q=0$ only. Moreover, $\gamma_\text{kin}$ is identical to the result for the sudden quench,\cite{KRSM12} ie, it does not depend on the quench time or the precise form of the quench protocol but only on the final values of the interaction. 
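The power law \eqref{eq:Ekinlatetimes} is easy to test numerically. In the sudden limit the post-quench coefficients reduce to $A(q)=0$ and $B(q)=\text{i}\lambda(q)/\epsilon(q)$ [as follows from the expressions for $A_n$ and $B_n$ in Sec.~\ref{sec:after} with $u_n\to1$, $v_n\to0$], so the time-dependent part of $E_\text{kin}(t)/L$ is a single oscillatory integral. A sketch (illustrative Gaussian couplings, units $v_\text{F}=q_\text{c}=1$) comparing $t^2\,[E_\text{kin}(t)-E_\text{kin}^\infty]/L$ at a late time with $\gamma_\text{kin}$:

```python
import numpy as np
from scipy.integrate import quad

v_F, ghat0 = 1.0, 0.5                 # illustrative, units v_F = q_c = 1

def ghat(q):
    return ghat0 * np.exp(-q**2 / 2)

def W(q):                             # Galilean-invariant case g2 = g4
    return np.sqrt(1 + 2 * ghat(q))

# Sudden quench: A = 0 and |B|^2 = (lambda/epsilon)^2 = (ghat/W)^2, so
# [E_kin(t) - E_kin^inf]/L = -(v_F/2/pi) * int dq q |B(q)|^2 cos(2 eps(q) t)
def delta_Ekin(t):
    f = lambda q: q * (ghat(q) / W(q))**2 * np.cos(2 * v_F * q * W(q) * t)
    val, _ = quad(f, 0.0, 8.0, limit=2000)
    return -v_F / (2 * np.pi) * val

K = 1 / np.sqrt(1 + 2 * ghat0)
v = v_F * W(0)
gamma_kin = v_F / (32 * np.pi * v**2) * (K - 1 / K)**2

t_late = 40.0
ratio = t_late**2 * delta_Ekin(t_late) / gamma_kin   # approaches 1
```

At $v_\text{F}q_\text{c}t=40$ the ratio is already close to one, consistent with the $\mathcal{O}(t^{-3})$ corrections in \eqref{eq:Ekinlatetimes}.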
Finally, we note that the derivation of \eqref{eq:Ekinlatetimes} relies on the assumption that $g_2(q,\tau)$ and $g_4(q,\tau)$ are smooth functions of $q$. For example, if we consider a sharp momentum cutoff $g_{2/4}(q,\tau)\propto\Theta(q_\text{c}-q)$ the leading late-time behaviour will change to $E_\text{kin}(t)-E_\text{kin}^\infty\sim\sin[2\epsilon(q_\text{c})t]/t$ with a non-universal prefactor.\cite{RSM12} \subsection{Fermionic Green function and quasiparticle weight}\label{sec:GF} \subsubsection{Definition} In the context of spinless fermions discussed in Sec.~\ref{sec:fermionicmodel} it is natural to consider the time evolution of the chiral Green function of the right movers \begin{equation} G_\text{F}(x,t)=\big\langle\Psi_+^\dagger(x,t)\,\Psi_+(0,t)\big\rangle, \end{equation} which has been studied in detail after sudden quenches in Refs.~\onlinecite{Cazalilla06,IucciCazalilla09,RSM12}. Using the representation of the right movers in terms of the bosonic modes \eqref{eq:RMtomodes} a straightforward calculation yields \begin{equation} G_\text{F}(x,t)=\frac{\text{i}}{2\pi}\frac{1}{x+\text{i} 0}\exp\left(-\frac{1}{2}F_\text{F}(x,t)\right), \label{eq:GFresult} \end{equation} where \begin{equation} F_\text{F}(x,t)=4\int_0^\infty\frac{\text{d} q}{q}\bigl[1-\cos(qx)\bigr]\,\big|v(q,t)\big|^2 \label{eq:Fresult} \end{equation} encodes the deviation of the Green function from the non-interacting result. We note that the fermionic Green function \eqref{eq:GFresult} after a linear quench was already studied by Dora et al.~\cite{Dora-11} in a perturbative treatment in $g_2$ for $g_4=0$ (see also App.~\ref{app:PT}). \subsubsection{Stationary limit} First let us consider the stationary limit $F_\text{F}^\text{st}(x)=\lim_{t\to\infty}F_\text{F}(x,t)$, which reads \begin{equation} F_\text{F}^\text{st}(x)=2 \int_0^\infty\frac{\text{d} q}{q}\bigl[1-\cos(qx)\bigr]\,\bigl(|A(q)|^2+|B(q)|^2\bigr). 
\label{eq:Fst} \end{equation} We note in passing that the momentum dependence of the coupling functions and thus the coefficients $A(q)$ and $B(q)$ is essential for the convergence of the integral. The limiting behaviour at large distances $1/q_\text{c}\ll x$ is given by $F_\text{F}^\text{st}(x)=2\gamma_\text{F}\ln\big|q_\text{c}x\big|$ with the prefactor $\gamma_\text{F}$ taking the values \begin{equation} \gamma_\text{F}=\left\{\begin{array}{ll} \displaystyle\gamma_\text{F}^\text{ad}=\frac{1}{2}\left(K+\frac{1}{K}-2\right),& x\ll 2v\tau,\\[3mm] \displaystyle\gamma_\text{F}^\text{sq}=\frac{1}{4}\left(K^2+\frac{1}{K^2}-2\right),& 2v\tau\ll x. \end{array}\right. \label{eq:gammasq} \end{equation} To order $\mathcal{O}(\hat{g}_2^2)$ we observe $\gamma_\text{F}^\text{sq}=2\gamma_\text{F}^\text{ad}$ in agreement with the perturbative result.\cite{Dora-11} \subsubsection{Light-cone effect}\label{sec:GFlightcone} Calabrese and Cardy~\cite{CalabreseCardy06,CalabreseCardy07} first identified the light-cone or horizon effect after sudden quenches in conformal field theories. They also put forward a rather natural picture which is as follows: The quench creates quasiparticles in the system, which, if they originate from closely separated points, are quantum entangled. They then propagate semi-classically through the system with unique speed $v$. If the two quasiparticles in such an entangled pair arrive at the points $x_{1,2}$ at time $t$ they induce correlations, which in turn imply a sharp light cone in space time at $|x_1-x_2|=2vt$. 
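The stationary exponents \eqref{eq:gammasq} and their perturbative relation $\gamma_\text{F}^\text{sq}=2\gamma_\text{F}^\text{ad}+\mathcal{O}(\hat{g}_2^3)$ can be checked in a few lines of code; the zero-momentum form of $K$ used below is the standard TLM expression and thus an assumption of this sketch.

```python
from math import sqrt

def K_of(g2hat, g4hat=0.0):
    # Luttinger parameter at q = 0 (standard TLM form; an assumption here)
    return sqrt((1.0 + g4hat - g2hat) / (1.0 + g4hat + g2hat))

def gamma_F_ad(K):
    # adiabatic exponent, valid for x << 2 v tau
    return 0.5 * (K + 1.0 / K - 2.0)

def gamma_F_sq(K):
    # sudden-quench exponent, valid for 2 v tau << x
    return 0.25 * (K ** 2 + 1.0 / K ** 2 - 2.0)
```

For weak coupling the ratio $\gamma_\text{F}^\text{sq}/\gamma_\text{F}^\text{ad}$ approaches $2$, while for any interaction strength the sudden-quench exponent dominates, $\gamma_\text{F}^\text{sq}>\gamma_\text{F}^\text{ad}>0$.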
Such light cones in correlation functions have been subsequently observed in numerical simulations of the Bose--Hubbard model\cite{LauchliKollath08,Barmettler-12,Carleo-14} as well as short- and long-ranged spin systems,\cite{Manmana-09,HaukeTagliacozzo13,Eisert-13,Bonnes-14} and experimentally in ultra-cold atomic gases\cite{Cheneau-12,Langen-13} and ions.\cite{Richerme-14,Jurcevic-14} \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{F-vF1-q01-g02pi-tau10-linear-Gaussian.pdf} \caption{(Colour online) Cuts through the function \eqref{eq:Fresult} for fixed separations $x$ as a function of time $t$. We consider a linear quench with quench time $\tau=10/(v_\text{F}q_\text{c})$ and final interaction strength $g_0=2\pi v_\text{F}$. We observe a propagating maximum (indicated by arrows) which for sufficiently late times follows the linear relation $x=2\tilde{v}t$ with $\tilde{v}<v$.} \label{fig:GF} \end{figure} In order to analyse the light-cone effect in the fermionic Green function we first consider the time dependence at fixed separation $x$, as is often done to analyse numerical data;\cite{Manmana-09} exemplary results are shown in Fig.~\ref{fig:GF}. As can be seen there is a clear maximum (indicated by arrows) propagating through the system, which, for sufficiently late times, follows a linear relation between time and position from which we can extract the velocity $\tilde{v}$ of the horizon via $\tilde{v}=x/(2t)$. The first observation is that this velocity is smaller than the renormalised velocity $v$ defined in \eqref{eq:LLparameter}, ie, $\tilde{v}<v$. The origin of this finding is the simple fact that for any non-trivial momentum dependence of the coupling functions the quasiparticles created by the quench will possess different group velocities $v(q)=\text{d}\epsilon(q)/\text{d} q$ depending on their individual momenta $q$.
Since for monotonically falling coupling functions one has $v(q)<v$, the maximum originating from a propagating packet of quasiparticles will be delayed compared to the front of the fastest particles moving with $v$. This behaviour is already present for sudden quenches as we discuss in App.~\ref{app:1}. Since this delay of propagating quasiparticles is a general feature of any non-trivial momentum dependence of the coupling functions, it is also expected to show up in simulations for lattice models in both sudden and finite-time quenches (see also Fig.~\ref{fig:Ftilde}). \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{F-contour-vF1-q01-g02pi-tau10-linear-Gaussian.pdf} \caption{(Colour online) Contour plot of the function \eqref{eq:Fresult} after a linear quench of length $\tau=10/(v_\text{F}q_\text{c})$ and final interaction strength $g_0=2\pi v_\text{F}$. The white line indicates the light cone as identified by the propagating maximum shown in Fig.~\ref{fig:GF}, while the black line is the corresponding maximum after a sudden quench. We observe that after the quench the light cones propagate with identical velocities but that the maximum for the linear quench lags behind by a distance $\Delta x$. Here and in all following contour plots we used a linear interpolation between numerically evaluated data points.} \label{fig:GFcontour1} \end{figure} The space-time dependence of the function \eqref{eq:Fresult} after a linear quench is shown in Fig.~\ref{fig:GFcontour1}. The white line indicates the propagating maximum discussed above, while the black line is the corresponding maximum after a sudden quench with the same final parameters (see also App.~\ref{app:1}). At sufficiently late times after the quench both maxima propagate with identical velocities $\tilde{v}$. However, at any fixed time the maximum after the linear quench lags behind by a distance $\Delta x$.
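The size of this lag can be estimated from the analytic expression \eqref{eq:lagapprox} derived below, which involves only the time average of $\hat{g}_2(0,t)$ over the ramp. The following minimal sketch evaluates it for a linear and a cosine ramp; the precise ramp shapes used here are assumptions of the illustration.

```python
from math import pi, sqrt, cos

def lag(g2_of_t, g2_final, K, tau, vF=1.0, steps=20000):
    # Delta x from Eq. (lagapprox); the time average of g2hat(0, t)
    # over the ramp is computed by the composite trapezoidal rule.
    dt = tau / steps
    avg = sum(0.5 * (g2_of_t(i * dt) + g2_of_t((i + 1) * dt))
              for i in range(steps)) * dt / tau
    return 4.0 * K * vF * tau / (1.0 - K ** 2) * (g2_final - avg)

g2f, tau = 0.5, 1.0
K = sqrt((1.0 - g2f) / (1.0 + g2f))  # standard TLM form at q = 0, g4 = 0 (assumed)

linear = lambda t: g2f * t / tau                           # linear ramp (assumed shape)
cosine = lambda t: 0.5 * g2f * (1.0 - cos(pi * t / tau))   # cosine ramp (assumed shape)

closed_form = 2.0 * K * g2f * tau / (1.0 - K ** 2)
```

Both ramps have time average $\hat{g}_2(0,\tau)/2$ and hence give the same closed form $\Delta x=2Kv_\text{F}\hat{g}_2(0,\tau)\tau/(1-K^2)$, illustrating that the lag is set by the protocol only through this average.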
As we derive in App.~\ref{app:derivation}, the universal behaviour of \eqref{eq:Fresult} at late times and large separations, $\tau\ll t$ and $1/q_\text{c}, 2v\tau\ll x$, is given by \begin{equation} F_\text{F}(x,t)=F_\text{F}^\text{st}(x)-\gamma_\text{F}^\text{sq}\,\ln\left|1-\frac{x^2}{(2vt-\Delta x)^2}\right|, \label{eq:Fapprox} \end{equation} where $F_\text{F}^\text{st}(x)$ is the stationary contribution defined in \eqref{eq:Fst}, the sudden-quench exponent $\gamma_\text{F}^\text{sq}$ was obtained in \eqref{eq:gammasq}, and the lag is given by \begin{equation} \Delta x=\frac{4K v_\text{F}\tau}{1-K^2}\left[\hat{g}_2(0,\tau)-\frac{1}{\tau}\int_0^\tau\text{d} t\,\hat{g}_2(0,t)\right]. \label{eq:lagapprox} \end{equation} These results are valid for short to moderate quench times, $v_\text{F}q_\text{c}\tau\lesssim 1$, but arbitrary quench protocols. For example, for linear or cosine quenches \eqref{eq:lagapprox} simplifies to $\Delta x=2Kv_\text{F}\hat{g}_2(0,\tau)\tau/(1-K^2)$. Furthermore, the approximate result \eqref{eq:Fapprox} only depends on the values $g_2(0,\tau)$ and $g_4(0,\tau)$; thus together with the stationary contribution $F_\text{F}^\text{st}(x)=2\gamma_\text{F}^\text{sq}\,\ln|q_\text{c}x|$ it describes the universal behaviour of the fermionic Green function at late times and large distances. For $\tau\to 0$ we recover the well-known result for sudden quenches.\cite{Cazalilla06} We note that since \eqref{eq:Fapprox} is obtained using the small-momentum expansion of $\epsilon(q)$ it neglects the reduction of the propagation velocity from $v$ to $\tilde{v}$ (see App.~\ref{app:derivation} for a more detailed discussion). As an alternative, heuristic ansatz to analyse the light-cone effect we use \begin{equation} \Delta x=2\tilde{v}\tau-2v_\text{F}\int_{s_\text{av}}^\tau\text{d} t'\,\sqrt{[1+\hat{g}_4(0,t')]^2-\hat{g}_2(0,t')^2}. 
\label{eq:lag} \end{equation} Here the first term is due to the reduced post-quench evolution time during which the maximum propagates with velocity $\tilde{v}$. The second term describes the evolution during the quench, where we assume the quasiparticles to propagate with the instantaneous velocity~\cite{Bernier-14} $v_\text{F}\sqrt{[1+\hat{g}_4(0,t')]^2-\hat{g}_2(0,t')^2}$ (we have neglected the momentum dependence of the coupling functions for simplicity). Furthermore, the quasiparticles are created over the full quench time $0\le t\le\tau$, with the phenomenological parameter $s_\text{av}$ corresponding to the ``average'' creation time of the relevant quasiparticles. Heuristically we find that $s_\text{av}$ grows with $\tau$ [$s_\text{av}\propto\tau$ for $v_\text{F}q_\text{c}\tau\lesssim 1$ in agreement with \eqref{eq:lagapprox}], decreases with increasing post-quench interaction strengths, and is larger for the cosine protocol than for the linear ramp. In principle, $s_\text{av}$ should be related to the mode occupations of the instantaneous eigenmodes $\alpha_n^t$ [where the parameter $t$ indicates that these modes diagonalise the Hamiltonian \eqref{eq:TLM} at time $t$ via $\alpha_n^t=c^t(q_n)b_n+s^t(q_n)b_{-n}^\dagger$], which is given by \begin{equation} \bra{\Psi(t)}(\alpha_n^t)^\dagger\alpha_n^t\ket{\Psi(t)}=\big|c^t(q_n)v_n(t)+s^t(q_n)u_n(t)\big|^2 \label{eq:modes2} \end{equation} with the coefficients $s^t(q)$ and $c^t(q)$ given by \eqref{eq:defsc} with the coupling functions $g_{2/4}(q)$ taken at time $t$. However, the precise relation between \eqref{eq:modes2} and the parameter $s_\text{av}$ remains unclear. \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{F-contour-vF1-q01-g02pi-tau10-3cosine-Gaussian.pdf} \caption{(Colour online) Contour plot of the function \eqref{eq:Fresult} after a periodic quench with $\nu=3$, length $\tau=10/(v_\text{F}q_\text{c})$ and $g_0=2\pi v_\text{F}$.
The white lines indicate two light cones as identified by the propagating maxima, while the black line is the corresponding maximum after a sudden quench.} \label{fig:GFcontour2} \end{figure} Finally, in Fig.~\ref{fig:GFcontour2} we show the space-time dependence of the fermionic Green function after a periodic quench with long quench time $\tau=10/(v_\text{F}q_\text{c})$ [for which \eqref{eq:Fapprox} is not applicable]. We observe two propagating maxima caused by the non-trivial time dependence of the creation of quasiparticles during the quench. Also the propagating maxima are narrower than for the linear quench with the same quench time (shown in Fig.~\ref{fig:GFcontour1}) since the creation of quasiparticles happens during shorter time intervals. \subsubsection{Oscillations inside the light cone}\label{sec:oscillations} \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{F-contour-vF1-q01-g04pi-tau1-cosine-Gaussian.pdf} \caption{(Colour online) Contour plot of the function \eqref{eq:Fresult} after a cosine quench of length $\tau=1/(v_\text{F}q_\text{c})$ and $g_0=4\pi v_\text{F}$. In addition to the horizon we observe oscillations inside the light cone, which originate from the non-trivial momentum dependence of the coupling functions.} \label{fig:GFcontour3} \end{figure} The space-time dependence of the function \eqref{eq:Fresult} after a rather short cosine quench is shown in Fig.~\ref{fig:GFcontour3}. In addition to the horizon we observe oscillations inside the light cone, which become more pronounced when the quench rate $\hat{g}/\tau$ is increased. The origin of these oscillations is the non-trivial momentum dependence of the coupling functions and thus the single-mode energy $\epsilon(q)$, and hence eventually a result of the finite cutoff $q_\text{c}$ (see App.~\ref{app:oscillations} for more details). 
Since decreasing the quench time results in the creation of more quasiparticles at higher momenta (eg, compare the black and blue lines in Fig.~\ref{fig:modes}) where the momentum dependence of $\epsilon(q)$ is stronger, the oscillations are more pronounced for shorter quenches. We stress, however, that these oscillations are non-universal and indeed do not appear in the universal result \eqref{eq:Fapprox}. Nevertheless, the existence of a finite ultra-violet cutoff, eg, in lattice simulations, is generically expected to result in oscillating features following the horizon. For a discussion of similar, non-universal oscillations in the time evolution after sudden quenches we refer to Ref.~\onlinecite{RSM12} (see also Fig.~\ref{fig:GF-sudden}). \subsubsection{Quasiparticle weight}\label{sec:Zfactor} Finally we consider the fermionic quasiparticle weight $Z(t)$, whose late-time behaviour is characterised by universal power-law decay.\cite{Cazalilla06,IucciCazalilla09,Uhrig09,KRSM12,HamerlaUhrig13} In order to determine $Z(t)$ we consider the momentum distribution of right movers \begin{equation} n(q,t)=\int\text{d} x\,e^{\text{i} qx}\,G_\text{F}(x,t), \end{equation} which possesses a jump at the Fermi momentum $k_\text{F}$ with value $Z(t)=\lim_{q\to k_\text{F}-}n(q,t)-\lim_{q\to k_\text{F}+}n(q,t)$. At late times after the quench we find \begin{equation} Z(t)=c\,(v_\text{F}q_\text{c}\,t)^{-\gamma_\text{F}^\text{sq}}, \label{eq:Zfactor} \end{equation} in particular, the power-law decay is governed by the sudden-quench exponent $\gamma_\text{F}^\text{sq}$. However, the prefactor $c$ shows a dependence on the quench time $\tau$ as well as the quench protocol, as shown in Fig.~\ref{fig:logZfactor}.
For short and long quenches we further obtain the limiting behaviours $c\sim 1$ ($v_\text{F}q_\text{c}\tau\ll 1$) and $c\sim(v_\text{F}q_\text{c}\tau)^{\gamma_\text{F}^\text{ad}}$ ($v_\text{F}q_\text{c}\tau\gg 1$), in agreement with the perturbative results of Ref.~\onlinecite{Dora-11} for linear quenches. \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{logZfactor.pdf} \caption{(Colour online) Quench-dependent prefactor $c$ in the quasiparticle weight \eqref{eq:Zfactor} after linear, cosine and periodic (only for $g_0=v_\text{F}/2$) quenches. For slow quenches, $v_\text{F}q_\text{c}\tau\gg 1$, we observe power-law enhancement $c\sim(v_\text{F}q_\text{c}\tau)^{\gamma_\text{F}^\text{ad}}$ indicated by the dotted lines. The occupation of high-energy modes after the periodic quench manifests itself in the non-monotonic behaviour around $v_\text{F}q_\text{c}\tau\sim 5$.} \label{fig:logZfactor} \end{figure} \subsection{Bosonic Green function}\label{sec:SpSm} \subsubsection{Definition} Similarly to the fermionic Green function discussed above we consider its bosonic counterpart (see Sec.~\ref{sec:bosonicmodel}) \begin{equation} G_\text{B}(x,t)=\big\langle\Psi_\text{B}(x,t)\,\Psi_\text{B}^\dagger(0,t)\big\rangle\propto\exp\left(-\frac{1}{2}F_\text{B}(x,t)\right) \label{eq:GFB} \end{equation} where\cite{Pollmann-13,Bernier-14} \begin{equation} F_\text{B}(x,t)=\int_0^\infty\frac{\text{d} q}{q}\bigl[1-\cos(qx)\bigr]\,\big|u(q,t)-v(q,t)\big|^2. \label{eq:Itt} \end{equation} In the limit of hard-core bosons the Green function \eqref{eq:GFB} corresponds to the spin-flip correlation function in the XXZ chain, which was studied by Pollmann et al.\cite{Pollmann-13} during linear quenches. 
More recently, Bernier et al.\cite{Bernier-14} analysed the bosonic Green function during a linear ramp of the interaction strength and identified the front at which correlations form [similar to the second term in Eq.~\eqref{eq:lag} above] as well as several regimes showing different power-law and stretched exponential decays. Here we will focus instead on the post-quench regime $t>\tau$. \subsubsection{Stationary limit} First we consider the stationary limit $F_\text{B}^\text{st}(x)=\lim_{t\to\infty}F_\text{B}(x,t)$, which shows the asymptotic behaviour \begin{equation} F_\text{B}^\text{st}(x)=2\gamma_\text{B}\,\ln(q_\text{c}x) \label{eq:FBst} \end{equation} with the adiabatic and sudden-quench exponents \begin{equation} \gamma_\text{B}=\left\{\begin{array}{ll} \displaystyle\gamma_\text{B}^\text{ad}=\frac{1}{2K},& x\ll 2v\tau,\\[3mm] \displaystyle\gamma_\text{B}^\text{sq}=\frac{1}{4}\left(1+\frac{1}{K^2}\right),& 2v\tau\ll x. \end{array}\right. \label{eq:gammaBsq} \end{equation} \subsubsection{Light-cone effect and oscillations} \begin{figure}[t] \includegraphics[width=0.49\textwidth]{FB-vF1-q01-g02pi-tau10-linear-Gaussian.pdf} \caption{(Colour online) Constant time cuts for $F_\text{B}(x,t)$ after a linear quench of length $\tau=10/(v_\text{F}q_\text{c})$ and final interaction strength $g_0=2\pi v_\text{F}$. We observe a propagating maximum (indicated by arrows) defining the light cone. 
For large separations inside the light cone, $2v\tau\ll x\ll 2vt$, the slope is given by $2\gamma_\text{B}^\text{sq}$, while outside the light cone we find $F_\text{B}(x,t)\sim\ln(q_\text{c}x)$ implying the power-law decay of the bosonic Green function $G_\text{B}(x,t)\propto 1/\sqrt{x}$ in the non-interacting initial state.} \label{fig:GBcuts1} \end{figure} \begin{figure}[t] \includegraphics[width=0.49\textwidth]{FB-contour-vF1-q01-g02pi-tau10-linear-Gaussian.pdf}\vspace{5mm} \includegraphics[width=0.49\textwidth]{FB-contour-vF1-q01-g04pi-tau1-linear-Gaussian.pdf} \caption{(Colour online) Contour plots of $F_\text{B}(x,t)$ after linear quenches with (a) $g_0=2\pi v_\text{F}$, $\tau=10/(v_\text{F}q_\text{c})$ and (b) $g_0=4\pi v_\text{F}$, $\tau=1/(v_\text{F}q_\text{c})$. The white lines indicate the light cones as identified by the propagating maximum shown in Fig.~\ref{fig:GBcuts1}, while the black lines are the corresponding maxima after a sudden quench. The feature for the linear quench lags behind by a distance $\Delta x$, which is identical to the one extracted from the fermionic Green function.} \label{fig:GBcontour1} \end{figure} The time evolution of $F_\text{B}(x,t)$ for linear quenches is shown in Figs.~\ref{fig:GBcuts1} and~\ref{fig:GBcontour1}. The propagating light cone is clearly visible in the cuts; the extracted position is identical to the one obtained from the fermionic Green function. In particular, we observe the same propagation velocity $\tilde{v}$ and the same lag $\Delta x$. The universal behaviour of $F_\text{B}(x,t)$ is obtained analogously to App.~\ref{app:derivation} with the result \begin{equation} F_\text{B}(x,t)=F_\text{B}^\text{st}(x)-\frac{1-K^2}{4K^2}\,\ln\left|1-\frac{x^2}{(2vt-\Delta x)^2}\right|, \label{eq:FBapprox} \end{equation} with the lag for short to moderate quench times but arbitrary quench protocols given by \eqref{eq:lagapprox}.
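The exponents \eqref{eq:gammaBsq} governing the two stationary regimes can be evaluated in a few lines of code; as before, the standard zero-momentum form of $K$ is an assumption of the sketch.

```python
from math import sqrt

def K_of(g2hat, g4hat=0.0):
    # Luttinger parameter at q = 0 (standard TLM form; an assumption here)
    return sqrt((1.0 + g4hat - g2hat) / (1.0 + g4hat + g2hat))

def gamma_B_ad(K):
    # adiabatic exponent, valid for x << 2 v tau
    return 1.0 / (2.0 * K)

def gamma_B_sq(K):
    # sudden-quench exponent, valid for 2 v tau << x
    return 0.25 * (1.0 + 1.0 / K ** 2)
```

For $K=1$ both exponents reduce to $1/2$, reproducing the $x^{-1/2}$ decay of the non-interacting initial state outside the light cone; for any $K\neq 1$ one has $\gamma_\text{B}^\text{sq}-\gamma_\text{B}^\text{ad}=(1-K)^2/(4K^2)>0$.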
Together with \eqref{eq:FBst} we can obtain the space dependence of the bosonic Green function at fixed times (see also Fig.~\ref{fig:GBcuts1}): $G_\text{B}(x,t)\propto x^{-\gamma_\text{B}^\text{ad}}$ for $x\ll 2v\tau$, $G_\text{B}(x,t)\propto x^{-\gamma_\text{B}^\text{sq}}$ for $2v\tau\ll x\ll 2vt$, and $G_\text{B}(x,t)\propto x^{-1/2}$ for $2vt\ll x$, ie, outside the light cone. In addition, behind the propagating front we observe oscillations, see Fig.~\ref{fig:GBcontour1}(b), which, as for the fermionic Green function, originate from the non-trivial momentum dependence of the single-mode energy $\epsilon(q)$. \subsubsection{Stretched exponential behaviour}\label{sec:strechedexponential} Scrutinising a Galilean-invariant system during linear ramps, Bernier et al.~\cite{Bernier-14} identified an intermediate regime over which the bosonic Green function shows an unconventional stretched exponential space dependence at fixed times. In App.~\ref{app:stretchedexponential} we perform a similar analysis for the stationary Green function after a linear quench with $g_2(q,t)=g_4(q,t)=g_2(q)\,t/\tau$. We find that between the power-law dependencies in the adiabatic and sudden-quench regimes defined by \eqref{eq:gammaBsq}, there exists an intermediate regime showing the stretched exponential behaviour \begin{equation} G_\text{B}^\text{st}(x)\sim \exp\!\left[-\frac{2^{1/3}\pi^2\sqrt{1+2\hat{g}_2(0)}}{\Gamma(1/3)^3}\left(\frac{3\hat{g}_2(0)x}{v_\text{F}\tau}\right)^{1/3}\right] \label{eq:GBstretched} \end{equation} provided \begin{equation} \frac{v_\text{F}\tau}{3\hat{g}_2(0)}\ll x\ll \frac{v_\text{F}\tau}{3\hat{g}_2(0)}\bigl[1+2\hat{g}_2(0)\bigr]^{3/2}. \label{eq:stretchedregime} \end{equation} A few remarks are in order: (i) The existence of the regime \eqref{eq:stretchedregime} requires very strong post-quench interactions, which may not be realisable in microscopic models.\cite{Bernier-14} (ii) The result \eqref{eq:GBstretched} is only valid for linear quenches.
(iii) In the derivation of \eqref{eq:GBstretched} we have used the replacement $g_2(q)\to g_2(0)$. However, given that the regime \eqref{eq:stretchedregime} corresponds to a regime of finite momenta and thus finite energies, the stretched exponential behaviour may be masked by the effects of marginal or irrelevant perturbations to the TLM like the momentum dependence of $g_2(q)$. (iv) Interestingly, the result \eqref{eq:GBstretched} is identical to the one\cite{Bernier-14} at $t=\tau$. Thus the emerging picture is as follows: During the quench the Green function develops the adiabatic and stretched exponential regimes (provided the post-quench interactions are strong enough) inside the light cone, while outside the light cone the behaviour is governed by the non-interacting initial state. After the quench the adiabatic and stretched exponential regimes remain unchanged, while behind the horizon the additional sudden-quench behaviour develops, eventually governing the whole regime $v_\text{F}\tau[1+2\hat{g}_2(0)]^{3/2}/[3\hat{g}_2(0)]\ll x$ in the stationary limit. \subsection{Other correlation functions} Using the same methods one can analyse the behaviour of other correlation functions. For example, the staggered part of the density-density correlation function is given by $\chi(x,t)\propto\exp\left(-\frac{1}{2}F_\chi(x,t)\right)$ with\cite{Pollmann-13} \begin{equation} F_\chi(x,t)=\int_0^\infty\frac{\text{d} q}{q}\bigl[1-\cos(qx)\bigr]\,\big|u(q,t)+v(q,t)\big|^2. \end{equation} This shows the same qualitative features as the Green functions discussed in the previous two sections, ie, a clear light-cone effect with a delay due to the finite quench time as well as oscillations inside the light cone originating from the finite cutoff $q_\text{c}$. These general features are expected for other correlation functions as well. 
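To make the stretched-exponential regime \eqref{eq:GBstretched}--\eqref{eq:stretchedregime} concrete, its window boundaries and the decay law can be evaluated numerically. The sketch below uses $\hat{g}_2(0)=5$ as an (assumed) example of the very strong coupling required by remark (i) above.

```python
from math import exp, gamma, pi, sqrt

def stretched_window(g2hat, tau, vF=1.0):
    # Boundaries of the intermediate regime, Eq. (stretchedregime);
    # the window widens with coupling as (1 + 2 g2hat)^{3/2}.
    x_lo = vF * tau / (3.0 * g2hat)
    x_hi = x_lo * (1.0 + 2.0 * g2hat) ** 1.5
    return x_lo, x_hi

def G_B_stretched(x, g2hat, tau, vF=1.0):
    # Stretched-exponential law of Eq. (GBstretched), up to an overall prefactor
    c = 2.0 ** (1.0 / 3.0) * pi ** 2 * sqrt(1.0 + 2.0 * g2hat) / gamma(1.0 / 3.0) ** 3
    return exp(-c * (3.0 * g2hat * x / (vF * tau)) ** (1.0 / 3.0))

x_lo, x_hi = stretched_window(5.0, 1.0)
```

For this coupling the window spans more than an order of magnitude in $x$, while the $x^{1/3}$ stretched exponential decays monotonically across it.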
\section{Conclusion and discussion}\label{sec:conclusion} In this work we have investigated the time evolution in the TLM during and after finite-time interaction quenches. These were implemented by time-dependent protocols to change the interaction parameters $g_{2/4}$ over the time interval $\tau$. After discussing the general framework of the time-dependent TLM, we derived exact analytical expressions for the small-momentum behaviour of the solution, as well as discussed the full solutions for specific quench protocols like the linear quench,\cite{Bernier-14} a cosine ramp and periodic driving.\cite{Pielawa11,BukovHeyl12} We then used these results to analyse the time evolution of the total and kinetic energy as well as fermionic and bosonic Green functions during and after the quench. We focused on universal quantities in the sense that they only depend on the coupling functions at zero momentum and thus the Luttinger parameter and renormalised velocity given in Eq.~\eqref{eq:LLparameter}. For example, we showed that the kinetic energy decays as $L\gamma_\text{kin}/t^{2}$ to its stationary value, where the decay parameter $\gamma_\text{kin}$ is identical to the one found for sudden quenches\cite{KRSM12} and thus independent of the quench protocol. Analysing the stationary limit of the fermionic Green function we found a crossover from the adiabatic to the sudden regime at $x\sim 2v\tau$, where the two regimes are governed by different power-law decays, in agreement with earlier findings in the perturbative regime.\cite{Dora-11} Perhaps most interestingly, the light-cone effect\cite{CalabreseCardy06} well known from sudden quenches is also clearly visible after finite-time quenches. However, as compared to the sudden case there is a lag of the horizon, which is related to two physical effects: First, during the quench the quasiparticles propagate at the instantaneous velocity,\cite{Bernier-14} which is generically smaller than the post-quench velocity.
Second, the creation of quasiparticle pairs happens during the full time of the quench, while for sudden quenches they are all created at the same time $t=0$. Using the analytical expressions for the small-momentum behaviour of the solution, we obtained the universal behaviour \eqref{eq:Fapprox} of the fermionic Green function. This includes an analytic expression for the lag \eqref{eq:lagapprox}, which is valid for short to moderate quench times, and relates the lag to the change of the interaction strength during the quench. In particular, the lag thus depends on the details of the quench protocol. Furthermore, we identified a reduction of the post-quench velocity with respect to the renormalised velocity as well as oscillations inside the light cone, and traced both effects back to the momentum dependence of the coupling functions $g_{2/4}(q)$. Finally we analysed the bosonic Green function. Its behaviour is very similar to that of its fermionic counterpart. In particular, we extracted the universal behaviour of the post-quench dynamics, see \eqref{eq:FBst} and \eqref{eq:FBapprox}, and showed that the lag of the horizon is still given by \eqref{eq:lagapprox}. In addition, for linear quenches to very strong interactions we analysed the regime of intermediate separations. We found that the stretched exponential behaviour previously observed\cite{Bernier-14} during the quench is unaffected by the post-quench dynamics and thus also present in the stationary Green function.\pagebreak As discussed in Sec.~\ref{sec:model} the TLM describes the universal low-energy physics of various one-dimensional fermionic and bosonic lattice models as well as spin chains in equilibrium; more specifically, it corresponds to the low-energy fixed point in a renormalisation-group treatment.
One may wonder to what extent results obtained for the quench dynamics in the TLM can also be used to analyse the quench dynamics in these lattice models, since the quench will inject a finite energy density into the system and thus drive it away from its low-energy fixed point. This may even be more severe in the case of finite-time quenches since the quench time $\tau$ will introduce an additional energy scale in the problem, which may increase the importance of marginal and irrelevant perturbations to the TLM. Nevertheless, various numerical studies\cite{DeChiara-06,Barmettler-09,Barmettler-10,KRSM12,KennesMeden13,HamerlaUhrig13,Coira-13,Collura-15} of observables in one-dimensional lattice models after sudden quenches showed a surprisingly good agreement with the results obtained in the TLM; a finding also obtained for the time evolution during finite-time interaction quenches\cite{Bernier-12,HaqueZimmer13,Bernier-14} in the Bose--Hubbard model. Still, from a practical point of view the study of the time evolution after finite-time quenches is complicated by the restriction of the achievable times in numerical simulations due to the finite quench time and the unknown effects of perturbations to the TLM as well as the energy (and thus time) scales involved. Further research in this direction is clearly desirable. In light of this it would be very interesting to investigate the effect of perturbations around the TLM, for example, the finite band curvature included in the non-linear Luttinger liquid theory\cite{Imambekov-12} or relevant perturbations leading to the opening of an excitation gap.\cite{Gritsev-07,IucciCazalilla10,BSE14} Furthermore, our results for periodic quenches could be used to connect the field of quantum quenches to periodically driven systems since the latter can be treated by increasing the number of periods $\nu$ in \eqref{eq:cosine3quench} as already employed in Refs.~\onlinecite{Pielawa11,BukovHeyl12}. 
\acknowledgements We thank Jean-S\'{e}bastien Bernier, Bal\'{a}zs D\'{o}ra, Masud Haque, Markus Heyl, Salvatore Manmana, Volker Meden and Tatjana Pu\v{s}karov for useful discussions, and Nicholas Ohs for collaboration in the very early stages of this project. This work is part of the D-ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). This work was supported by the German Research Foundation (DFG) through the Emmy-Noether Program under SCHU 2333/2-1, and the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO), under 14PR3168.
\section{Introduction}\label{Sec1} In the field of Position Based Cryptography (PBC) one aims to develop cryptographic tasks using the geographical position of a third party as its only credential. Once the party proves to the verifier that it is in fact located at the claimed position, they interact considering the identity of the third party as granted. Basing cryptographic security on the position of the communicating parties could be very appealing in practical contexts such as the use of autonomous cars, or secure communication between public services or banks. Besides that, at a more fundamental level, secure PBC could also serve as a way to circumvent insecurity under man-in-the-middle attacks, a security leak suffered by standard cryptographic primitives. This vulnerability prevails even in the presence of information-theoretic security, as, for example, in the celebrated case of Quantum Key Distribution. In these settings, the security guarantees always come after the assumption that the identity of the trusted agents is granted. In PBC this assumption can be, at least, relaxed. Moreover, PBC has proved to be a rich field of research, with deep questions and connections emanating from its study. To mention a few, attacks on PBC have been related to quantum teleportation \cite{KonigBeigi}, circuit complexity \cite{Speelman2015}, classical complexity theory \cite{Buhrman_2013} and, very recently, to properties of the boundary description of some processes in the context of the holographic duality AdS/CFT \cite{May_2019,May_2020}. In this work, we add to this list a connection with deep questions on the geometry of Banach spaces. The main task in PBC is that of \emph{Position Verification} (PV). In PV a prover has to convince a verifier (usually composed of several agents spatially distributed) that it is located at a claimed position. This setting has been studied since the 1990s in the context of classical cryptography.
Nonetheless, in purely classical scenarios, PV is easily proven to be insecure against a team of colluding adversaries surrounding the honest location \cite{Chandran_09}. This motivates the study of \emph{quantum} PV schemes, in which the communication between prover and verifier is in general quantum. This idea was initially developed by A. Kent \cite{Kent_2011} and made rigorous only later on in \cite{Buhrman2011}. In this last paper, the authors construct a generic attack on any quantum PV scheme. To construct the general attack of \cite{Buhrman2011}, the authors built on works by L. Vaidman \cite{Vaidman03}, realizing that the cheating action in the setting of PV consists in performing what they called \emph{instantaneous non-local computation}. In this last task, two (or more) distant agents have to implement a quantum operation on a distributed input when subjected to non-signalling constraints -- see \cite{Buhrman2011} or Section \ref{Sec2.2} below for more details. At first sight, the existence of general attacks on quantum PV renders the development of secure PBC a hopeless program. However, this attack did not come for free for the adversaries, as in the case of classical PV. On the contrary, in order to cheat, the dishonest agents have to use a huge amount of entanglement -- a delicate and expensive resource in quantum information processing. Even though another generic attack on PV, proposed in \cite{KonigBeigi}, exponentially reduced the entanglement consumption, the amount of entanglement required is still far from what is realizable in any practical situation.
This leads naturally to the following question, which is the one motivating this work: \vspace{1em} \begin{question}\label{question1} How much entanglement is necessary to break \emph{any} PV scheme?\end{question} \vspace{1em} Answering this question with a large enough lower bound would lead to the existence of PBC schemes which are \emph{secure for all practical purposes}, a term coined in \cite{Chakraborty2015}. The search for such an answer has been an active field of research in the last decades, especially in the years right after the publication of \cite{Buhrman2011}. Therefore, some progress is already available. Indeed, \cite{Buhrman2011} provides the first PV protocol secure against cheaters with \emph{no} entanglement. This was improved in \cite{KonigBeigi} and later in \cite{Tomamichel2013}, providing PV protocols requiring a linear amount of entanglement (linear in the size of the quantum system used in the honest protocol). In terms of this figure of merit, the entanglement consumption in the generic attack of \cite{KonigBeigi} is exponentially large, hence leaving an exponential gap between lower and upper bounds for the amount of entanglement necessary to break PV schemes. Almost ten years after \cite{Buhrman2011}, this is still essentially all that is known about Question \ref{question1} in its original formulation. Other works have studied attacks with some specific structure \cite{Buhrman_2013}, have designed attacks that are efficient at emulating the computation of unitaries with low complexity \cite{Speelman2015}, or have studied security under additional cryptographic assumptions \cite{Unruh2014}. \subsection{Summary of results}\label{Sec4_2} Here we aim to go back to Question \ref{question1} in its simplest form: the one-dimensional case without any further assumptions. Unfortunately, we were not able to find a definite answer to the question, but we report here some progress that opens an avenue for a deeper understanding of the problem.
From now on, we focus on the study of \emph{quantum} resources required to attack PV, considering classical communication as a free resource and unlimited computational power for all the agents involved. In this work, \begin{itemize} \item we rephrase the setting of PV in the framework of quantum games, \item connecting in this way Question \ref{question1} with powerful techniques coming from Banach space theory, \item and providing new lower bounds on the amount of entanglement necessary to break a specific PV protocol presented in Section \ref{Sec3}. However, these bounds are not completely general but depend on some properties of the strategies considered. Intuitively, \emph{smooth} strategies, i.e., strategies with a smooth dependence on the unitary to be implemented, lead to exponential lower bounds. This provides evidence for the existence of PV schemes that are \emph{secure for all practical purposes}; \item finally, we consider the possibility of making the previous bounds unconditional. We relate the validity of this to a collection of open problems in local Banach space theory. In particular, we relate the bounds on resources needed to break our PV protocol to estimates for type constants of tensor norms of $\ell_2$ spaces. In this direction, we put forward a conjecture that would imply the desired unconditional exponential lower bounds and then provide some evidence supporting it. \end{itemize} \paragraph{The protocol \boldmath{$G_{Rad}$}.} To formalize this discussion, we propose a PV protocol that we denote $G_{Rad}$. This makes reference to a family $\lbrace G_{Rad}^{(n)} \rbrace_{n\in \mathbb{N}}$ rather than to a single task. The index $n$ represents the security parameter and determines the size of the quantum systems manipulated in the honest implementation of the protocol.
The general structure of a PV protocol in the studied setting -- one-dimensional PV -- proceeds in four basic steps: \begin{enumerate} \item The verifier prepares a bipartite system and distributes it to two verifying agents that surround the location to be verified, $x$. For the sake of concreteness, we locate these agents at points $x \pm \delta$ for some positive $\delta$. \item The agents at $x \pm \delta$, when synchronized, communicate the registers they hold to $x$. \item An honest prover located at $x$, upon receiving both registers, immediately applies a required computation resulting in another bipartite system. The latter has to be returned to locations $ x \pm \delta$. One register should be sent to the agent at the left of $x$ ($x-\delta$), and the other, to its right ($x + \delta$). \item Finally, the verifiers check whether the prover's answer arrives on time and whether the computation was performed correctly. Based on this information they declare the verification successful or not. \end{enumerate} In the dishonest scenario, two cheaters surrounding the location $x$ intercept the communication with the honest prover and try to emulate the ideal action in the honest protocol. In order to succeed, they have to prevent any delay in their response. This restricts the cheaters' action to consist of two rounds of local operations mediated by a step of \emph{simultaneous two-way communication} -- see Section \ref{Sec2.2} for a detailed discussion of this model. Once we have fixed this basic setting, let us describe the protocol $G_{Rad}$ involved in our main results.
The honest implementation is as follows: \begin{enumerate} \item Given a natural number $n$, in $G_{Rad}^{(n)}$ the verifier starts by uniformly sampling a vector of $n^2$ signs $\varepsilon = (\varepsilon_{ij})_{i,j=1}^n$, where each $\varepsilon_{ij} \in \lbrace \pm 1 \rbrace$, and preparing the state $|\psi\rangle : = \frac{1}{n} \sum_{i,j} |i\rangle_A \otimes |j\rangle_B \otimes |ij\rangle_C $ in a tripartite Hilbert space $\mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_C$. The agent at $x - \delta$ receives registers $\mathcal{H}_A \otimes \mathcal{H}_B$ while the one at $x + \delta$ is informed (classically) of the choice of $\varepsilon$. Register $\mathcal{H}_C$ is kept private during the execution of the protocol. \item Then, registers $\mathcal{H}_A \otimes \mathcal{H}_B$ are forwarded to the verifying location $x$ from its left. From the right, the classical information about the choice of $\varepsilon$ is communicated. \item An honest prover located at $x$, upon receiving both pieces of information, has to apply the diagonal unitary on $\mathcal{H}_A \otimes \mathcal{H}_B$ determined by $\varepsilon$. Immediately, registers $\mathcal{H}_A \otimes \mathcal{H}_B$ must be returned, but this time only $\mathcal{H}_A$ should travel to the verifier at $x - \delta$. Register $\mathcal{H}_B$ should be sent to the verifier at $x + \delta$. \item After receiving those registers, the verifiers check the answer's timing and, at some later time, they perform the measurement $\lbrace |\psi_\varepsilon \rangle\langle \psi_\varepsilon |, \mathrm{Id} - |\psi_\varepsilon \rangle\langle \psi_\varepsilon | \rbrace$ on system $\mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_C$, where $|\psi_\varepsilon\rangle : = \frac{1}{n} \sum_{i,j} \varepsilon_{ij} |i\rangle_A \otimes |j\rangle_B \otimes |ij\rangle_C $.
They accept the verification only if the arriving time was correct and the outcome of the measurement was the one associated to $|\psi_\varepsilon \rangle\langle \psi_\varepsilon |$. \end{enumerate} Next, let us specify the implementation of $G_{Rad}^{(n)}$ in an adversarial scenario. In this situation, we consider that two cheaters located between the honest location $x$ and the verifying agents at $x\pm \delta$ intercept the communication in the honest protocol. In this work, we refer to these cheaters as Alice, at position $x - \delta'$, and Bob, at position $x+ \delta'$, for some $0<\delta'<\delta$. Their general action proceeds as follows\footnote{ For simplicity, we state here the case in which Alice and Bob use what we call \emph{pure} strategies. The most general case can be reduced to this one by purification. See Section \ref{Sec3} for a detailed discussion.}: in advance, the cheaters share a state $|\varphi\rangle$ on which Bob, after receiving the information about $\varepsilon$, applies an isometry $W_\varepsilon$ and sends part of the resulting system to Alice together with the classical information determining $\varepsilon$. For her part, when Alice receives registers $\mathcal{H}_A\otimes \mathcal{H}_B$ of $|\psi\rangle$, she applies another isometry $V$ (independent of $\varepsilon$) on these registers and her part of the shared state $|\varphi\rangle$. Part of her resulting system is communicated to Bob. After this step of simultaneous two-way communication, Alice and Bob are allowed to apply another pair of local isometries $\tilde V_\varepsilon \otimes \tilde W_\varepsilon$ on the systems they hold. Then, they have to forward an answer to the agents at $x \pm \delta$. \paragraph{Main results.} The structure of $G_{Rad}$ allows us to understand cheating strategies as vector-valued assignments on the $n^2$-dimensional Boolean hypercube, $ \mathcal{Q}_{n^2} = \lbrace \pm 1 \rbrace^{n^2}$.
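As an aside for intuition, the honest implementation described above can be sanity-checked numerically. The following sketch (our own illustration, assuming only \texttt{numpy}; it is not part of the protocol specification) simulates the honest run of $G_{Rad}^{(n)}$ for $n=3$ and verifies that an honest prover, applying the diagonal unitary $D_\varepsilon$, is accepted with probability one, since $D_\varepsilon|\psi\rangle = |\psi_\varepsilon\rangle$:

```python
import numpy as np

# Minimal sketch of the honest run of G_Rad^(n); hypothetical, numpy only.
rng = np.random.default_rng(0)
n = 3
dimC = n * n                      # H_C carries the pair (i, j)
eps = rng.choice([-1, 1], size=(n, n))

# |psi> = (1/n) sum_ij |i>_A |j>_B |ij>_C; flat index (i*n+j)*dimC + (i*n+j)
psi = np.zeros(n * n * dimC)
for i in range(n):
    for j in range(n):
        psi[(i * n + j) * dimC + (i * n + j)] = 1.0 / n

# Honest prover: diagonal unitary with signs eps on H_A ⊗ H_B, identity on H_C
D = np.kron(np.diag(eps.flatten()), np.eye(dimC))

# Target state |psi_eps> used in the verifier's final measurement
psi_eps = np.zeros_like(psi)
for i in range(n):
    for j in range(n):
        psi_eps[(i * n + j) * dimC + (i * n + j)] = eps[i, j] / n

p_accept = abs(psi_eps @ (D @ psi)) ** 2
print(round(p_accept, 6))  # -> 1.0
```

The honest prover succeeds deterministically; all the difficulty of $G_{Rad}$ therefore lies on the cheaters' side, who must reproduce this action under the simultaneous two-way communication constraints.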
In our main result, we find lower bounds for the resources consumed in such an attack depending on the \emph{regularity} of the former assignment. Very informally, we can state:\vspace{0.2cm} {\it Cheating strategies depending on the value of $\varepsilon\in \lbrace \pm 1 \rbrace^{n^2}$ in a sufficiently regular way require an amount of entanglement exponential in $n$ in order to pass $G_{Rad}^{(n)}$.}\\\vspace{-0.2cm} To quantify the regularity of a strategy we introduce a parameter $\sigma$ that can be regarded as a measure of the \emph{total influence} of the associated function on the Boolean hypercube. We give a precise definition of this parameter in Section \ref{Sec4}. Here, we restrict ourselves to giving an intuitive idea behind this definition, presenting some approximate expressions below. Based on two complementary ideas, given a strategy we construct two different assignments leading to two parameters $\sigma^i_\mathcal{S}$ and $\sigma^{ii}_\mathcal{S}$. The subscript $\mathcal{S}$ makes reference to the strategy we started with. According to the previous discussion, any such strategy can be characterized by a sequence of elements $\mathcal{S} = \lbrace \tilde V_\varepsilon, \tilde W_\varepsilon, V,W_\varepsilon, |\varphi\rangle \rbrace_{ \varepsilon \in \mathcal{Q}_{n^2} } $.
With that, we can bound, up to logarithmic factors: $$ \sigma^i_{\mathcal{S}} \lesssim_{\log} \ \mathbb{E}_\varepsilon \ \left( \sum_{i,j} \frac{1}{2} \big\| \tilde V_\varepsilon \otimes \tilde W_\varepsilon - \tilde V_{\overline{\varepsilon}^{ij}} \otimes \tilde W_{\overline{\varepsilon}^{ij}} \big\| ^2 \right)^{1/2} + O\left(\frac{1}{n} \right), $$ $$ \sigma^{ii}_{\mathcal{S}} \lesssim_{\log} \ \mathbb{E}_\varepsilon \ \left( \sum_{i,j} \frac{1}{2} \big \| \left( V \otimes (W_\varepsilon - W_{\overline{\varepsilon}^{ij}})\right) |\varphi\rangle \big\|_{\ell_2}^2 \right)^{1/2} + O\left(\frac{1}{n} \right) .$$ \noindent Here, $\overline{\varepsilon}^{ij}$ denotes the sign vector $(\varepsilon_{11},\ldots,$$- \varepsilon_{ij},$$\ldots, $ $\varepsilon_{nn})$. The first of these parameters is therefore related to how strongly the \emph{second round of local operations} in the strategy depends on $\varepsilon$. On the other hand, $\sigma^{ii}_\mathcal{S}$ is similarly concerned with the dependence on $\varepsilon$ of the \emph{first round of local operations}. With this at hand, we can state -- yet informally -- our main result. Denoting the success probability attained by a strategy $\mathcal{S}$ in $G_{Rad}^{(n)}$ as $\omega(G_{Rad}^{(n)};\mathcal{S})$, we can say that: \begin{theorem}[Informal] \label{mainThm} Given a cheating strategy for $G_{Rad}^{(n)}$, $\mathcal{S}$, using quantum resources of local dimension $k$, \begin{enumerate}[I.] \item $$ \omega(G_{Rad}^{(n)};\mathcal{S}) \le C_1 + C_2 \ {\sigma^i_\mathcal{S}} \, \log^{1/2}(k) + O \left(\frac{1}{n^{1/2}}\right) ; $$ \item \begin{align*} &\omega(G_{Rad}^{(n)}; \mathcal{S}) \\ &\quad\le \tilde C_1 + C_3 \ \sigma^{ii}_\mathcal{S} \, n^{3/4} \log^{3/2}(nk) + O \left(\frac{1}{n^{1/2}}+\frac{ \log^{3/2}( n k)}{n}\right) ; \end{align*} \end{enumerate} where $C_1,\, \tilde C_1 <1, \, C_2,\, C_3 $ are positive constants.
\end{theorem} What this theorem tells us is that cheating strategies for $G_{Rad}$ for which $\sigma^i_\mathcal{S} $ or $ \sigma^{ii}_\mathcal{S}$ are small enough necessarily need to make use of quantum resources of size exponential in a power of $n$, (loosely) matching the exponential entanglement consumption of known attacks\footnote{ The attack from \cite{KonigBeigi} requires an entangled system of dimension $O(\exp(n^4))$, which is still much larger than our bounds for smooth strategies. Nonetheless, we consider any strategy using quantum systems of dimension exponential in a power of $n$ to be infeasible for \emph{all practical purposes}. This is our main motivation in this work. }. We give a more concrete statement in the form of a corollary: \begin{corollary}[Informal]\label{mainCor} Consider a cheating strategy for $G_{Rad}^{(n)}$, $\mathcal{S}$, attaining value $\omega(G_{Rad}^{(n)};\mathcal{S}) \ge 1 -\epsilon$ for some $0\le \epsilon \le \frac{1}{8}$. Denote by $k$ the local dimension of the quantum resources used in $\mathcal{S}$. If $ \sigma^i_\mathcal{S} = O( \mathrm{polylog}(n) / n^{\alpha}) $ or $ \sigma^{ii}_\mathcal{S} = O( \mathrm{polylog}(n) / n^{3/4 + \alpha}) $ for some $\alpha >0$, then: $$ k = \Omega \big( \exp\big( n^{\alpha'} \big) \big) \quad \text{for some }\alpha'>0. $$ \end{corollary} As we see, the regularity parameters $\sigma^{i(ii)}_\mathcal{S}$ play a key role in these results. We notice that the known attacks in \cite{Buhrman2011,KonigBeigi} in fact fulfil the hypothesis of the previous corollary: the second round of local operations in these attacks is $\varepsilon$-independent, hence $\sigma^i_\mathcal{S} \sim \log(n)/n$. However, we do not know how generic this behaviour is.
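To build intuition for these regularity parameters, the following toy sketch (our own illustration; it mimics, but is not, the precise definition of Section \ref{Sec4}) evaluates the discrete-derivative average $\mathbb{E}_\varepsilon \big( \sum_{k} \frac{1}{2} \| f(\varepsilon)-f(\overline{\varepsilon}^{k})\|^2 \big)^{1/2}$ for vector-valued assignments on a small hypercube. An $\varepsilon$-independent assignment, as in the second round of the known attacks, has regularity $0$, while a parity-like assignment is maximally irregular:

```python
import itertools
import numpy as np

def regularity(f, m):
    """E_eps of ( sum_k 0.5 * ||f(eps) - f(eps with k-th sign flipped)||^2 )^(1/2)."""
    cube = list(itertools.product([-1, 1], repeat=m))
    total = 0.0
    for eps in cube:
        s = 0.0
        for k in range(m):
            flipped = list(eps)
            flipped[k] = -flipped[k]
            s += 0.5 * np.sum((f(eps) - f(tuple(flipped))) ** 2)
        total += np.sqrt(s)
    return total / len(cube)

m = 4
const = lambda eps: np.ones(3)                        # eps-independent assignment
parity = lambda eps: np.array([float(np.prod(eps))])  # maximally eps-dependent

print(regularity(const, m))   # -> 0.0
print(regularity(parity, m))  # -> sqrt(2*m), approx 2.8284
```

For the parity function every coordinate flip changes the sign of the output, so each of the $m$ discrete derivatives contributes $\frac{1}{2}(2)^2 = 2$, giving $\sqrt{2m}$ for every $\varepsilon$.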
More generally, it turns out that from any Programmable Quantum Processor \cite{NielsenChuang_97} -- such as the already considered protocol of Port Based Teleportation, for example -- with the capability of implementing the diagonal unitaries required in $G_{Rad}^{(n)}$, we can construct an assignment $\Phi$ fulfilling Theorem \ref{mainThm} with regularity parameter again of order $\sigma^{i}_\Phi \sim \log(n)/n$. Therefore, Corollary \ref{mainCor} also applies to this broader case, allowing us to recover some of the results obtained in \cite{Kubicki_19}. This is not a coincidence: our approach here builds on ideas introduced in this previous work. Turning our attention towards $\sigma^{ii}_\mathcal{S}$, a trivial example of a family of \emph{smooth} attacks for which $\sigma^{ii}_\mathcal{S} \sim \log(n)/n$ is given by cheaters sharing no entanglement in advance -- even though entanglement can be created in the first round of local operations and distributed for the second round. On the contrary, we can also easily compute $\sigma^{ii}_\mathcal{S}$ for the attack in \cite{KonigBeigi}, obtaining $\sigma^{ii}_\mathcal{S} = \Omega(1)$. Therefore, the second item in Theorem \ref{mainThm} is not able to predict good lower bounds for this case. Still, we think that this second item might be useful for restricting the structure of possible attacks on PV, especially in conjunction with the first part of the theorem. More importantly, the second part of Theorem \ref{mainThm} leads us to put forward the possibility of an unconditional lower bound for $k$, i.e., a bound in the spirit of Corollary \ref{mainCor} but dropping the assumptions regarding $\sigma^{i(ii)}_\mathcal{S}$. Even though we were not able to prove such a bound, we relate its validity to a conjecture about the geometry of some Banach spaces. The positive resolution of this conjecture would prove our scheme $G_{Rad}$ \emph{secure for all practical purposes}.
More concretely, our conjecture has to do with estimates of type constants of tensor norms on finite dimensional Hilbert spaces. Even though these constructions are relatively simple and well known in the theory of Banach spaces, there are long-standing open questions about the type of these kinds of spaces. E.g., the type-2 constant of the simple space $\ell_2^n \otimes_\varepsilon \ell_2^n \otimes_\varepsilon \ell_2^n $ is still poorly understood. In fact, the related cotype-2 constant of its dual is a famous open question asked by Pisier decades ago -- see, for instance, \cite{Pisier90}. Once we formally state the conjecture, we provide some computations supporting it. We analyze the most direct approaches to disprove the conjectured statement, computing the type constants of some subspaces and giving some volume ratio estimates. This might have interesting ramifications for the still not completely understood relation between volume ratio and cotype of Banach spaces. \subsection{Proof sketch} Here we sketch the main ideas behind the proof of Theorem \ref{mainThm}. These ideas are also at the bottom of the constructions that allow us to establish the more general connection between Question \ref{question1} and type constants leading to the conjecture indicated above. As we have already anticipated, the starting point of our study is the identification of each cheating strategy for $G_{Rad}^{(n)}$, $\mathcal{S}$, with a vector-valued function $\Phi_\mathcal{S}: \mathcal{Q}_{n^2} \rightarrow X$, where $X$ is a suitable Banach space. With a clever definition of $\Phi_\mathcal{S}$ we can obtain a bound on the success probability $\omega(G_{Rad}, \mathcal{S}) $ in terms of the average norm of the image of that function. We obtain bounds of the following kind: $$ \omega ( G_{Rad}; \mathcal{S}) \le \, \mathbb{E}_\varepsilon \, \big \| \Phi_{\mathcal{S}} (\varepsilon) \big \|_{X}, $$ where $\varepsilon$ is taken uniformly distributed in $\mathcal{Q}_{n^2}$.
Therefore, the key quantity we study is precisely $\mathbb{E}_\varepsilon \, \big \| \Phi_{\mathcal{S}} (\varepsilon) \big \|_X$. For that, we bring together two main ingredients. On one hand, a Sobolev-type inequality of Pisier for vector-valued functions on the Boolean cube, and, on the other, the type-2 constant of the Banach space $X$, $\mathrm{T}_2(X)$. The combination of these two tools provides us with the inequality: \beq\label{PisierIneq_Intro} \mathbb{E}_\varepsilon\big \| \Phi_\mathcal{S}(\varepsilon) \big\|_X \le \big \|\mathbb{E}_\varepsilon \Phi_\mathcal{S} (\varepsilon) \big \|_X + C \ \sigma_{\Phi_\mathcal{S}} \ \mathrm{T}_2 ( X ) , \eeq where $C$ is an independent constant and $\sigma_{\Phi_\mathcal{S}}$ is a regularity measure for $\Phi_\mathcal{S}$\footnote{ See Section \ref{Sec2.4.4} for a detailed discussion. There, the more refined type-2 constant with $m$ vectors is considered, see Corollary \ref{Cor1}. For the sake of simplicity, we consider the plain type-2 constant in this introductory section.}. Specific choices for $\Phi_\mathcal{S}$ and $X$ lead to the parameters $\sigma^{i(ii)}_\mathcal{S}$ appearing in Theorem \ref{mainThm}. Now, depending on how $\Phi_\mathcal{S}$ is constructed, we are able to upper bound $\big\|\mathbb{E}_\varepsilon \Phi_\mathcal{S} (\varepsilon) \big \|_X $ -- see, for instance, Proposition \ref{Prop_Main2} -- so we can focus on the second term in the RHS of \eqref{PisierIneq_Intro}. To obtain Theorem \ref{mainThm} we propose in Section \ref{Sec4} two possible choices for $\Phi_\mathcal{S}$ and study the type constants of their image spaces. Furthermore, in order to remove the dependence on $\sigma$ in the bounds obtained in that way, we propose in Section \ref{Sec6} yet another choice for $\Phi_\mathcal{S}$. This third function is regular enough by construction, allowing us to obtain bounds depending only on the dimension of the system used by the cheaters.
The downside of this latter approach is that the space $X$ in this last case becomes more involved and its type properties cannot be estimated with the techniques at our disposal. To finish this introduction we sum up the structure of the paper: we start by introducing in Section \ref{Sec2} the preliminary material needed to develop this work. Then, in Section \ref{Sec3} we introduce in complete detail the setting studied here and the PV scheme $G_{Rad}$. The analysis of strategies for this scheme leading to Theorem \ref{mainThm} is presented in Section \ref{Sec4}. In Section \ref{Sec6} we discuss the possibility of pushing forward the techniques presented in this work to obtain unconditional lower bounds on the resources required by the cheaters, dependent only on the dimension of the quantum system they manipulate. We connect this question with the problem in local Banach space theory of obtaining precise estimates for the \emph{type constants} of particular Banach spaces. After establishing that connection in a precise and rigorous way, we provide some calculations supporting a positive resolution of a conjecture that would lead to the security \emph{for all practical purposes} of $G_{Rad}$. The paper ends with a discussion of the results presented and possible directions for future work. This corresponds to Section \ref{Sec7}. \section{Preliminaries}\label{Sec2} \subsection{Notation} In order to simplify the presentation, we use the symbols $\approx$ and $\lesssim$ to denote equality and inequality up to multiplicative dimension-independent constants, and $ \approx_{\log}$ and $\lesssim_{\log}$ equality and inequality up to multiplicative logarithmic factors in the dimensions involved. The quantum mechanical description of a system is based on an underlying complex Hilbert space, which we denote $\mathcal{H},\, \mathcal{H}',\,\mathcal{H}_A,\, \mathcal{H}_B,\, \ldots$. When the dimension is known to be a specific natural number, say $n$, we use the notation $ \ell_2^n$.
Given that, a density matrix is a trace-one positive operator $\rho:\mathcal{H} \rightarrow \mathcal{H} $. We denote the set of density matrices as $\mathcal{D}(\mathcal{H})$. Quantum operations are completely positive trace preserving linear maps $\mathcal{D}(\mathcal{H}) \rightarrow\mathcal{D}(\mathcal{H}')$. The set of these maps is denoted here as $\mathrm{CPTP}(\mathcal{H}, \mathcal{H}')$ or simply $\mathrm{CPTP}(\mathcal{H})$ when the input and output spaces are the same. The operation of discarding a subsystem is implemented by the partial trace. We specify the subsystem discarded by its underlying Hilbert space, e.g., in a composed system with underlying Hilbert space $\mathcal{H} \otimes \mathcal{H}'$ the operation of discarding $\mathcal{H}'$ is denoted $\mathrm{Tr}_{\mathcal{H}'} \in \mathrm{CPTP}(\mathcal{H}\otimes \mathcal{H}', \mathcal{H})$. To describe the evolution of a quantum system after a measurement, we make use of \emph{instruments}, which are collections of completely positive trace non-increasing maps summing up to a trace preserving map. To denote a completely positive (possibly non-trace-preserving) map we use the symbol $\mathrm{CP}$ instead of the previous $\mathrm{CPTP}$. The set of instruments composed of finite collections of maps in $\mathrm{CP}(\mathcal{H},\mathcal{H}')$ is denoted $\mathrm{Ins}(\mathcal{H},\mathcal{H}')$. To denote Banach spaces we usually use the letters $X,\, Y,\,\ldots$ and $X^*,\, Y^*,\,\ldots$ for the corresponding Banach duals. $B_X$ denotes the unit ball of a Banach space $X$. $\mathcal{B}(X,Y)$ is the space of linear operators between arbitrary Banach spaces $X$ and $Y$, while $\ell_p(X)$ and $L_p(X)$, with $p\in (0, \infty]$, are the classical (vector valued) spaces of $p$-summable sequences and $p$-integrable functions on the unit interval. More specifically, we also fix now the notation for two Banach spaces that will appear repeatedly.
Given two Hilbert spaces $\mathcal{H}$, $\mathcal{H}'$, we denote as $\mathcal{B}(\mathcal{H},\mathcal{H}')$ and $\mathcal{S}_1(\mathcal{H}, \mathcal{H}')$ the space of bounded and trace class operators from $\mathcal{H}$ into $\mathcal{H}' $, respectively. In the finite dimensional case, $\mathcal{H} = \ell_2^m$, $\mathcal{H}' = \ell_2^n$, we simplify this notation to $M_{n,m}$ and $\mathcal{S}_1^{n,m}$ ($M_n$, $\mathcal{S}_1^n$ when $n=m$). To denote elements of the computational basis we use the quantum information oriented convention of using the symbols $|i\rangle,\, \langle i|,\, |j\rangle,\, \ldots$. When working with elements in the complex vector space composed of $n\times m$ matrices -- as is the case of elements in $M_{n,m}$ or $\mathcal{S}_1^{n,m}$, a case we consider repeatedly below -- the usual basis of matrices with only one non-zero entry is denoted here as $\lbrace | i \rangle\langle j | \rbrace_{\begin{subarray}{l} i= 1 ,\ldots, n\\ j= 1, \ldots, m \end{subarray}}$. Observing the range of each subindex, the convention chosen here matches the standard convention of regarding \emph{kets} $|i\rangle$ as column vectors and \emph{bras} $\langle i | $ as rows. \subsection{Position Based Cryptography in 1-D}\label{Sec2.2} The major aim of this work is to make progress towards Question \ref{question1}. For that, we restrict ourselves to the simplest scenario: position verification in 1-D. In this situation, we restrict the world to a line on which we consider a preferred location, $x$ -- the position to be verified. The verifier, composed of two agents, $V_A$ and $V_B$, is located around the honest position $x$. Let us consider $V_A $ at position $x- \delta$ and $V_B $ at position $x + \delta $. Then, $V_A$ and $V_B$ perform an interactive protocol, sending (possibly quantum) messages in the direction of $x$.
These messages arrive at $x$ at the same time, so that an honest prover located at $x$ can receive them and, accordingly, generate answers for $V_A$ and $V_B$. The verifier accepts the verification if and only if \begin{itemize} \item (correctness) the answers are correct with respect to the verifier's messages (according to some public rule); \item (timeliness) the answers arrive on time at the locations of $V_A$ and $V_B$. Assuming that the signals between verifier and prover travel at some known velocity $c$, the answers should arrive at $V_A$ and $V_B$ at time $2 \delta / c$ after the start of the protocol. \end{itemize} Before continuing, let us set a generic structure for such a protocol. To prepare the messages $V_A$ and $V_B$ must forward, the verifier prepares a (publicly known) state in a composite system with some underlying Hilbert space $\mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_C$. That is, he prepares a density matrix $ \rho_0$ on that state space and sends the register $\mathcal{H}_A$ to $V_A$ and $\mathcal{H}_B$ to $V_B$. $\mathcal{H}_C$ is considered to take into account the possibility that the verifier keeps some part of the initial system private during the protocol. Then, $V_A$ and $V_B$ send their systems in the direction of $x$. Now, the agent(s) interacting in the middle with $V_A$ and $V_B$ apply some quantum operation on the communicated system $\mathcal{H}_A \otimes \mathcal{H}_B$, obtaining as output another state $\rho_{ans}\in \mathcal{D}(\mathcal{H}_A' \otimes \mathcal{H}_B')$. The subsystems $\mathcal{H}_A' $, $\mathcal{H}_B'$ are forwarded to $V_A$, $V_B$, respectively. To decide whether the verification is correct or not, the verifier first checks that the \emph{timeliness} condition is fulfilled and then performs a (publicly known) dichotomic measurement on the system $\mathcal{H}_A' \otimes \mathcal{H}_B' \otimes \mathcal{H}_C$.
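To fix ideas, the acceptance statistics of this generic round can be sketched numerically. In the following illustration (our own, hypothetical example, not tied to any concrete scheme) $\rho_0$ is pure, the action in the middle is a unitary $U$ on $\mathcal{H}_A \otimes \mathcal{H}_B$ (identity on the private register $\mathcal{H}_C$), and the dichotomic test projects onto a pure state $|\gamma\rangle$, so the acceptance probability is $|\langle\gamma|(U\otimes \mathrm{Id}_C)|\psi\rangle|^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
dA, dB, dC = 2, 3, 2

def rand_unit(dim):
    """Random unit vector (hypothetical example data)."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

psi = rand_unit(dA * dB * dC)      # rho_0 = |psi><psi| on H_A ⊗ H_B ⊗ H_C
gamma = rand_unit(dA * dB * dC)    # accepting effect |gamma><gamma|

# middle action: unitary on H_A ⊗ H_B, identity on the verifier's private H_C
M = rng.normal(size=(dA * dB, dA * dB)) + 1j * rng.normal(size=(dA * dB, dA * dB))
U, _ = np.linalg.qr(M)             # unitary via QR decomposition
Ufull = np.kron(U, np.eye(dC))

p_accept = abs(np.vdot(gamma, Ufull @ psi)) ** 2
assert 0.0 <= p_accept <= 1.0      # a bona fide probability

# a test matched to the action accepts the honest answer with certainty
gamma_matched = Ufull @ psi
p_matched = abs(np.vdot(gamma_matched, Ufull @ psi)) ** 2
assert abs(p_matched - 1.0) < 1e-9
```

The correctness check of the verifier is exactly this measurement; the timeliness check is a purely classical comparison of arrival times.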
\begin{remark} Above, $\rho_0$ and $\rho_{ans}$ are in general quantum states, but they may equally well describe classical messages as well as quantum-classical messages. This will indeed be the case in the concrete scheme analyzed in this work. \end{remark} \begin{remark} Note that an honest prover, that is, an agent at position $x$, should have no problem passing the test: at time $\delta / c$ he would receive the whole system $\mathcal{H}_A \otimes \mathcal{H}_B$ from the verifiers, having the capability to perform any global operation on it to prepare his answer. This answer can still arrive on time to $V_A$ and $V_B$. The prover's action depicted here is the most general operation that can be performed on the verifier's messages, which are the only information transmitted in the protocol. Therefore, if the challenge is well designed (it can be passed), the honest prover must be able to succeed at it\footnote{ We do not take into account here the computational limitations to which the agents might be subject.}. \end{remark} Next, let us focus on how the general protocol described above can be cheated. In order to impersonate the identity of an honest prover at position $x$, a couple of adversaries, Alice and Bob, at positions $x \pm \delta'$, $0 < \delta'< \delta $, can intercept the message systems $\mathcal{H}_A$, $\mathcal{H}_B$, interact between themselves to generate answers for the verifier and forward those answers with correct timing.
In order to respect the timeliness of the protocol, the most general action of the cheaters proceeds as follows: \begin{figure} \centering {\setlength{\fboxsep}{10pt} \framebox{% \parbox{0.9\textwidth}{ \begin{enumerate} \item Before the start of the protocol, Alice and Bob prepare some shared entangled state in a private register $ \mathcal{H}_{A_E} \otimes \mathcal{H}_{B_E}$; \item Alice receives question register $\mathcal{H}_A$ and applies a quantum channel $\mathcal{A}\in \mathrm{CPTP}(\mathcal{H}_A \otimes \mathcal{H}_{A_E},\, \mathcal{H}_{A \shortrightarrow B} \otimes \mathcal{H}_{A \shortrightarrow A }) $. Similarly, Bob receives $\mathcal{H}_B$ and applies $\mathcal{B} \in \mathrm{CPTP}(\mathcal{H}_B \otimes \mathcal{H}_{B_E},\, \mathcal{H}_{B \shortrightarrow A} \otimes \mathcal{H}_{B \shortrightarrow B} )$; \item the cheaters interchange registers $\mathcal{H}_{A \shortrightarrow B}$ and $\mathcal{H}_{B \shortrightarrow A} $, keeping $\mathcal{H}_{A \shortrightarrow A } $, $\mathcal{H}_{B \shortrightarrow B } $\protect\footnotemark; \item after this last step, Alice holds system $\mathcal{H}_{A \shortrightarrow A } \otimes \mathcal{H}_{B \shortrightarrow A } $, on which she applies another channel $\tilde{\mathcal{A}} \in \mathrm{CPTP} (\mathcal{H}_{A \shortrightarrow A } \otimes \mathcal{H}_{B \shortrightarrow A }, \,\mathcal{H}_A' ) $. Similarly, Bob applies $\tilde{\mathcal{B}} \in \mathrm{CPTP}( \mathcal{H}_{B \shortrightarrow B } \otimes \mathcal{H}_{A \shortrightarrow B } , \, \mathcal{H}_B' ) $; \item finally, Alice sends $\mathcal{H}_A'$ to $V_A$ and Bob $\mathcal{H}_B'$ to $V_B$. \end{enumerate} }% }} \caption{ Structure of the adversarial action attacking 1-D PV schemes.}\label{s2wStrategies} \end{figure} \footnotetext{ In general, we model in this way any kind of communication between Alice and Bob, classical or quantum.
However, in the particular setting studied later on in Section \ref{Sec3}, we will see that the dimension of $\mathcal{H}_{A \shortrightarrow B}$ and $\mathcal{H}_{B \shortrightarrow A} $ is essentially determined by the quantum resources the cheaters share, allowing us to disregard the classical communication that they might additionally use. See Section \ref{Sec3}, Lemma \ref{Lemma_ClassCom}, for a precise statement.} In this work we call the set of actions with this structure -- \emph{strategies} from now on -- the \emph{simultaneous two-way communication scenario}, $s2w$. This scenario is central for us and will appear repeatedly in the rest of this manuscript. \subsection{One-round quantum games}\label{Sec2.3} Our approach starts by rephrasing the scheme presented in the previous section in the framework of cooperative \emph{quantum games}, a generalization of nonlocal games. For the purpose of this manuscript it is enough to restrict ourselves to the case of two-player quantum games. In this setting, two players, Alice and Bob, interact with a referee, receiving from him a bipartite quantum state. Acting on the received system, Alice and Bob obtain another bipartite state which is communicated back to the referee, who checks the validity of the players' answer by performing a measurement. The parallelism with the situation of a team of cheaters attacking a 1-D PV scheme is now obvious: we just have to identify the players with the cheaters, the referee with the verifiers and the definition of the game (which question state the referee prepares and which measurement he applies at the end of the game) with the definition of the test to be passed in a given PV protocol. Quantum games have appeared naturally in quantum information theory in several places, and some specific classes of quantum games were rigorously defined and studied in \cite{ORQGames2015,Regev2015,Buscemi_2012,kempe2007}. Indeed, our starting point is one of these latter notions.
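The round of a quantum game just described can be sketched numerically. The following toy example (all dimensions, the choice of a question state equal to the answer state, and the unitary strategy are illustrative assumptions, not taken from the text) computes the winning probability of a strategy that applies a unitary on the players' registers:

```python
import numpy as np

# Toy one-round quantum game (dimensions and states are illustrative choices):
# the referee prepares |psi> on H_A (x) H_B (x) H_C, the players act with a
# unitary U on H_A (x) H_B, and the referee projects onto |gamma>.
dA, dB, dC = 2, 2, 2
rng = np.random.default_rng(0)

def random_state(dim):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

psi = random_state(dA * dB * dC)
gamma = psi.copy()  # pick |gamma> = |psi>, so the game can be won with certainty

def winning_probability(U):
    """|<gamma| (U (x) Id_C) |psi>|^2, with registers ordered A, B, C."""
    return abs(np.vdot(gamma, np.kron(U, np.eye(dC)) @ psi)) ** 2

# The identity strategy wins this particular game with probability 1,
# while any unitary strategy is bounded by 1.
p_id = winning_probability(np.eye(dA * dB))
M = rng.normal(size=(dA * dB, dA * dB)) + 1j * rng.normal(size=(dA * dB, dA * dB))
U_rand, _ = np.linalg.qr(M)  # a Haar-ish random unitary via QR
p_rand = winning_probability(U_rand)
print(p_id, p_rand)
```

The same skeleton, with the players restricted to local operations plus one simultaneous exchange, models the cheating scenario of the previous section.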
In the rest of this section we introduce rank-one quantum games, originally defined in \cite{ORQGames2015}, and present a slight generalization that we call \emph{mixed} rank-one quantum games. This is precisely the setting in which we develop this work. A (two-player one-round) rank-one quantum game, ROQG, is specified by: \begin{enumerate} \item a tripartite Hilbert space $\mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_C$; \item unit vectors $|\psi \rangle$, $|\gamma\rangle$ $\in \mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_C$. \end{enumerate} The game then proceeds as follows: \begin{itemize} \item the referee starts by preparing the state $| \psi \rangle \in \mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_C$ and sends registers $\mathcal{H}_A$, $\mathcal{H}_B$ to Alice and Bob, respectively; \item the players apply an allowed quantum operation\footnote{ The action of the players might be constrained for physical or other reasons, forbidding Alice and Bob to apply a completely general quantum channel. Below we state these limitations in more detail. } on the received registers, $\mathcal{H}_A$ and $\mathcal{H}_B$, sending them back to the referee; \item finally, to decide whether the players win or lose the game, the referee performs the projective measurement given by elements $\lbrace |\gamma\rangle\langle \gamma |, \mathrm{Id} - |\gamma \rangle\langle \gamma| \rbrace $. When the outcome of this measurement is the one associated to $|\gamma \rangle\langle \gamma|$, the referee declares that Alice and Bob win. Otherwise they lose. \end{itemize} We now give a formal definition for this type of games (see \cite{ORQGames2015} for further details): \begin{definition} We identify a \textbf{rank-one quantum game}, $G$, with a tensor $\hat G = \mathrm{Tr}_{\mathcal{H}_C} |\psi\rangle\langle\gamma|\in B_{ \mathcal{S}_1(\mathcal{H}_{AB}) }$.
A {strategy} for $G$ is a quantum channel $\mathcal{S}$ acting on the system $\mathcal{H}_A \otimes \mathcal{H}_B$ shared with the players by the referee. That is, $\mathcal{S} \in \mathrm{CPTP}(\mathcal{H}_A \otimes \mathcal{H}_B)$. The {value} achieved by this strategy is defined by: \begin{align} \label{DefvalueStrat} \omega (G;\mathcal{S}) &:= \mathrm{Tr}\left[\: |\gamma \rangle\langle\gamma | \: \, \: (\mathrm{Id}_{C}\otimes \mathcal{S})\: \big(|\psi\rangle\langle\psi| \big) \: \right], \end{align} and corresponds to the winning probability achieved in the game described above when the players use $\mathcal{S}$ to play. \end{definition} In general, not every quantum operation will be considered an allowed strategy, since the actions of Alice and Bob might be restricted in a given situation. For example, one can consider situations in which Alice and Bob are spatially isolated, so they cannot communicate with each other. In this work we consider situations in which Alice and Bob are restricted to apply operations of the form depicted in Figure \ref{s2wStrategies}. We study this setting in detail in Section \ref{Sec3}. Until then, we refer to sets of allowed strategies, usually motivated by physical constraints on the players, as \emph{scenarios}. \begin{definition} In a particular scenario $\mathfrak{S} \subseteq \mathrm{CPTP}(\mathcal{H}_A \otimes \mathcal{H}_B)$, the {value of the game} is: \beq \omega_{\mathfrak{S}} (G):= \sup_{\begin{subarray}{c} \mathcal{S} \in \mathfrak{S} \end{subarray}} \omega\left(G; \mathcal{S}\right) . \label{Defvalue} \eeq \end{definition} \color{black} \begin{example}[The honest scenario] In this preliminary section we only introduce the simplest scenario, the one in which we allow any quantum channel on $\mathcal{H}_A \otimes \mathcal{H}_B$ to be a valid strategy. That is, in this scenario, the set of allowed strategies is $\mathfrak{S} = \mathrm{CPTP}(\mathcal{H}_A \otimes \mathcal{H}_B) $.
This corresponds to the case in which Alice and Bob are allowed to apply any global operation, so they perform as if they were a single agent with access to the full question in the game. We refer to this situation as the \emph{honest scenario}. \begin{definition} The {Honest value} of $G$ is given by: \beq\label{HonestROQG} \omega_H(G) = \sup_{ \mathcal{S} \in \mathrm{CPTP}(\mathcal{H}_A \otimes \mathcal{H}_B) }\omega(G; \mathcal{S}). \eeq \end{definition} As suggested by the nomenclature we use, this scenario is related to the action of an honest prover in a PV setting. Furthermore, it also serves as a normalization for the game, since it is the largest value achievable under the sole assumption that quantum mechanics is the underlying model explaining the players' behaviour. \end{example} \begin{remark}\label{remark_HonestROQG} It turns out that the supremum in \eqref{HonestROQG} can be restricted to unitary channels without altering its value, see \cite[Theorem 3.2, 3.]{ORQGames2015}. In this case, we can work out \eqref{HonestROQG} to obtain the following equivalent expression: \begin{align} \omega_H(G) &= \sup_{ U \in \mathcal{U}(\mathcal{H}_A \otimes \mathcal{H}_B) }\omega(G; U(\, \cdot \,) U^\dagger) = \sup_{ U \in \mathcal{U}(\mathcal{H}_A \otimes \mathcal{H}_B) } \left| \mathrm{Tr} (\hat G \, U) \right|^2 \nonumber \\ &\equiv \sup_{ U \in \mathcal{U}(\mathcal{H}_A \otimes \mathcal{H}_B) } |\langle U,\hat G \rangle |^2 = \| \hat G \|^2_{\mathcal{S}_1(\mathcal{H}_A \otimes \mathcal{H}_B)}, \label{HonestValue1} \end{align} where $\mathcal{U}(\mathcal{H}_A \otimes \mathcal{H}_B)$ denotes the set of unitaries on $\mathcal{H}_A \otimes \mathcal{H}_B$. Indeed, by compactness, this supremum is achieved by some unitary $U_G$, which can be interpreted as the ideal action the players have to perform to maximize their chances of winning the game.
This gives us a useful interpretation of rank-one quantum games: Alice and Bob have to simulate the application of a given unitary, $U_G$, on registers $\mathcal{H}_A \otimes \mathcal{H}_B$ of the system $\mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_C$ prepared in the initial state $|\psi\rangle $. Restricting the action of the players to the s2w model of Figure \ref{s2wStrategies}, this is just the \emph{instantaneous non-local computation} of $U_G$. \end{remark} To complete this section, we introduce the notion of \emph{mixed rank-one quantum games}. These games are constructed from a family of rank-one quantum games, say $\lbrace G_{t_a, t_b } \rbrace_{t_a, t_b}$, indexed by $t_a \in \mathcal{T}_A$, $t_b \in\mathcal{T}_B$, together with a probability distribution $\lbrace p_{t_a, t_b } \rbrace_{t_a, t_b}$\footnote{ For each $t_a$, $t_b$, $p_{t_a, t_b} \ge 0$ and $\sum_{t_a, t_b} p_{t_a, t_b} = 1$. We restrict ourselves to finite index sets $\mathcal{T}_A$, $\mathcal{T}_B$.}. For the sake of readability, we refer to pairs $(t_a,t_b)$ by $\mathbf{t}$, so that $\mathbf{t} \in \mathcal{T}_A \times \mathcal{T}_B $. The game proceeds as follows: \begin{itemize} \item The referee chooses randomly one of the ROQGs, $G_{\mathbf{t}}$, according to the probability distribution $p_{\mathbf{t}}$. \item The referee prepares the state corresponding to the game $G_{\mathbf{t}}$. He sends to Alice and Bob a quantum system as specified by the ROQG $G_{\mathbf{t}}$ together with the classical information $\mathbf{t}$. Alice receives $t_a$ and Bob $t_b$ -- recall that $\mathbf{t} = (t_a , t_b)$; \item Alice and Bob, with the information they received, prepare a state to answer the referee; \item finally, with the state communicated by Alice and Bob, the referee performs the final measurement defined in $G_{\mathbf{t}}$. This decides whether the players win or lose.
\end{itemize} Having introduced this family of games, we now give a formal definition: \begin{definition}\label{Def:MROQG} We identify a {mixed rank-one quantum game} (MROQG), $G$, with a sequence of tensors $ \big \lbrace \hat G_{\mathbf{t}} = \mathrm{Tr}_C |\psi_{\mathbf{t}} \rangle\langle\gamma_{\mathbf{t}}| \big \rbrace_{\mathbf{t}}$, where each $ \hat G_\mathbf{t} \in B_{ \mathcal{S}_1(\mathcal{H}_{AB}) }$, together with a probability distribution $\lbrace p_{\mathbf{t}} \rbrace_{\mathbf{t}}$. A {strategy} for $G$ is a sequence of quantum channels $\lbrace \mathcal{S}_{\mathbf{t}} \rbrace_{\mathbf{t}}$ acting on the system $\mathcal{H}_A \otimes \mathcal{H}_B$ shared with the players by the referee. The {value} achieved by this strategy is defined by: \begin{align}\label{DefvalueStrat_MROQG} \omega (G;\lbrace \mathcal{S}_{\mathbf{t}} \rbrace_{\mathbf{t}}) &:= \mathbb{E}_{\mathbf{t}} \ \mathrm{Tr}\left[\: |\gamma_{\mathbf{t}} \rangle\langle\gamma_{\mathbf{t}} | \: \, \: (\mathrm{Id}_{C}\otimes \mathcal{S}_{\mathbf{t}})\: \big(|\psi_{\mathbf{t}}\rangle\langle\psi_{\mathbf{t}}| \big) \: \right], \end{align} where $\mathbb{E}_\mathbf{t}$ denotes the expectation over the random variable $\mathbf{t}$ distributed according to $\lbrace p_{\mathbf{t}} \rbrace_{\mathbf{t}}$. \end{definition} As in the previous discussion, we also consider restricted families of allowed strategies for MROQGs, that is, subsets $\mathfrak{S} \subseteq \mathrm{CPTP}(\mathcal{H}_A \otimes \mathcal{H}_B)^{\times |\mathcal{T}_A\times \mathcal{T}_B|}$. This leads to the value of a MROQG in a given scenario: \begin{definition} In a particular scenario $\mathfrak{S} \subseteq \mathrm{CPTP}(\mathcal{H}_A \otimes \mathcal{H}_B)^{\times |\mathcal{T}_A\times \mathcal{T}_B|}$, the {value of the game} is: \beq \omega_{\mathfrak{S}} (G):= \sup_{\begin{subarray}{c} \mathcal{S} \in \mathfrak{S} \end{subarray}} \omega\left(G; \mathcal{S}\right) .
\label{Defvalue2} \eeq \end{definition} As commented before, we are interested in the scenario resulting from the restrictions in Figure \ref{s2wStrategies}, but we leave the study of this situation for forthcoming sections. Matching our earlier discussion on ROQGs, we also comment that the \emph{honest} scenario is recovered by considering any strategy as valid, that is, setting $\mathfrak{S} = \mathrm{CPTP}(\mathcal{H}_A \otimes \mathcal{H}_B)^{\times |\mathcal{T}_A\times \mathcal{T}_B|}$. This leads to the definition of $\omega_H (G)$ for a MROQG in analogy with \eqref{HonestROQG}. With similar computations as in Remark \ref{remark_HonestROQG} we can obtain: $$ \omega_H (G) = \, \mathbb{E}_\mathbf{t} \, \| \hat G_\mathbf{t} \|^2_{\mathcal{S}_1(\mathcal{H}_{AB})}. $$ Formally, this equation perfectly matches the idea of MROQGs as distributions of ROQGs. \begin{remark}\label{Rmk3} From now on we further restrict the games considered here. In Section \ref{Sec3} we study a concrete MROQG in which: \begin{enumerate} \item $\mathcal{T}_A $ is trivial, so the information about the ROQG played in each instance is completely given to Bob -- the game is completely determined by $t_b \in \mathcal{T}_B$. \item The quantum part of the input is entirely given to Alice. Nevertheless, we will still denote the question register as $\mathcal{H}_A \otimes \mathcal{H}_B$, since in the last phase of the game both players have to answer the referee with a quantum register: ideally, Alice answers with $\mathcal{H}_A$ and Bob with $\mathcal{H}_B$. \end{enumerate} To sum up, in the kind of MROQGs we are going to work with, after the first communication between the referee and the players, Bob is aware of the ROQG chosen by the referee, $G_{t_b}$, but it is Alice who holds the input to this game. In order to win, Alice and Bob have to interact in such a way that, at the end of their action, each of the players holds the register that he or she has to give back to the referee.
The challenge is that Alice, while holding the whole quantum part of the input, does not know which ROQG she has to play. We come back to this point in Section \ref{Sec3}, where we introduce in detail the specific game analysed in this work. \end{remark} \subsection{Banach spaces, operator ideals and type constants} At a technical level, the results of this work follow from the study of Banach spaces formed by tensor products of Hilbert spaces. The space $M_{n,m}$ and its dual, $\mathcal{S}_1^{n,m}$, play a prominent role in the rest of the manuscript. Properties of these spaces, in conjunction with a classical Sobolev-type inequality of Pisier, allow us to obtain our main result, Theorem \ref{mainThm}. The key property of these spaces that we study is their type constants, which we introduce in Section \ref{Sec2.4.3}. Before that, we need to introduce some objects we work with in the following sections. \subsubsection{Operator ideals} \label{Sec.2.4.1} \vspace{0.5cm} A deeper understanding of the constructions appearing in this work is provided by the perspective of the theory of operator ideals. For the reader's convenience, we first sum up the contents of this section: given two finite-dimensional\footnote{ Even though in most cases the following material also applies in the infinite-dimensional case, for simplicity we restrict to finite dimensions, which is all we will use here. This allows us to use the equivalence between operators and tensor products in a comfortable way, ignoring the subtleties that appear at this point in infinite dimensions.} Banach spaces $X$ and $Y$, we consider the space of linear operators from $X$ into $Y$, $\mathcal{B}(X,Y)$. An operator ideal is essentially an assignment, to any pair of Banach spaces $X$ and $Y$, of a subset of $\mathcal{B}(X,Y)$ that has the \emph{ideal} property of being closed under composition with bounded linear maps. We provide \cite{Pietsch86,DefantFloret} as standard references on this matter for the interested reader.
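In the hilbertian case, the ideal property just mentioned can be sanity-checked numerically: taking the trace class as the prototypical operator ideal, composing on either side with bounded maps only scales the trace norm by the corresponding operator norms. A minimal numerical sketch (matrix sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

def trace_norm(a):
    # S_1-norm: sum of singular values
    return np.linalg.svd(a, compute_uv=False).sum()

f = rng.normal(size=(n, n))  # element of the ideal, measured in S_1
g = rng.normal(size=(n, n))  # bounded maps, measured in operator norm
h = rng.normal(size=(n, n))

lhs = trace_norm(g @ f @ h)
rhs = np.linalg.norm(g, 2) * trace_norm(f) * np.linalg.norm(h, 2)
# Ideal property: ||g f h||_{S_1} <= ||g|| ||f||_{S_1} ||h||
print(lhs <= rhs + 1e-9)
```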
In this section: \begin{enumerate} \item The first examples of operator ideals we introduce are \emph{tensor norms} on pairs of Banach spaces. This includes the space of bounded operators, $\mathcal{B}(X^*,Y)$, or $X \otimes_\varepsilon Y$ in tensor norm notation, 2-summing operators, $\pi_2(X^*,Y)$ or $X\otimes_{\pi_2} Y$, and the ideal of nuclear operators, denoted as $\mathcal{N}(X^*,Y)$, or $X \otimes_\pi Y$\footnote{ Recall that here we restrict $X$ and $Y$ to be finite dimensional.}. \item When $X$ and $Y$ are Hilbert spaces, another prominent family of operator ideals are the well-known Schatten classes $\mathcal{S}_p$, for $p\in [1,\infty] $. It turns out that these classes can be generalized to operators between arbitrary Banach spaces, leading to the definition of weak Schatten-von Neumann operators of type $p\in [1,\infty] $, denoted here as $\mathfrak{S}_p^w(X,Y)$ or $X^* \otimes_{\mathfrak{S}_p^w} Y$. \item Finally, here we also define a variant of the space $\mathfrak{S}_p^w(X,Y)$ that appears naturally in our study and that seems to be new in the literature. We denote this space $\mathfrak{S}_p^{w-cb}(X,Y)$, or $X^* \otimes_{\mathfrak{S}_p^{w-cb}} Y$, and call it the space of \emph{weak-cb} Schatten-von Neumann operators of type $p\in [1,\infty] $. The appellative cb is reminiscent of the fact that this new structure makes use of constructions coming from operator space theory. Indeed, $ \mathfrak{S}_p^{w-cb} $ is an operator ideal, but in the operator space sense, therefore belonging more naturally to that category than to the one of Banach spaces. In any case, we state this as a matter of curiosity and completeness, and these fine-grained details are irrelevant for the scope of the present work. Nonetheless, it is possible that a further exploration of these structures could lead to the clarification of some of the problems we leave open. \end{enumerate} After this brief summary, we provide now the details of the contents cited above.
We follow part of the exposition of \cite[Chapter 2]{Pietsch86} with suitable simplifications adapted to the scope of this work. For finite dimensional Banach spaces $X$ and $Y$, the space of linear maps $X \rightarrow Y$ can be identified in a simple way with the tensor product $X^* \otimes Y$, as was implicitly assumed above. The identification consists in associating to any element in $X^* \otimes Y$, $\, \hat f = \sum_{i} x_i^* \otimes y_i$, the linear map $f: X \ni x \mapsto f(x) := \sum_{i} x_i^*(x) \, y_i \, \in Y$. Conversely, to any linear map $f: X \rightarrow Y$ we associate the tensor $\hat f = \sum_i x^*_i \otimes f(x_i)$, where $\lbrace x_i \rbrace_i,\, \lbrace x_i^* \rbrace_i $ are dual bases of $X$ and $X^*$, respectively. Based on that, we will tend to present our results making explicit the tensor product structure, but sometimes, especially in this introductory part of the paper, it will be more natural to talk about mappings, so we will use both conventions interchangeably. The first operator ideal we encounter is the one of bounded operators from $X$ into $Y$, which we denote $\mathcal{B} (X,Y)$ and which is the Banach space of linear operators $f:X\rightarrow Y$ endowed with the operator norm, $\|f \| := \sup_{x\in B_X} \| f(x) \|_Y < \infty$. Using the equivalence stated before, understanding this space as a tensor product is precisely how the injective tensor product is defined: $X^* \otimes_\varepsilon Y \simeq \mathcal{B} (X,Y)$. If $X$ and $Y$ are finite dimensional spaces, the dual of $X^* \otimes_\varepsilon Y $ coincides with the projective tensor product, $X \otimes_\pi Y^* \simeq (\mathcal{B} (X,Y))^*$. It is enough for the scope of this manuscript to take this equivalence as the definition of $X \otimes_\pi Y^*$.
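When $X$ and $Y$ are finite dimensional Hilbert spaces both extremes are concrete: the injective norm is the operator norm and the projective norm is the trace norm (the identification of $\ell_2 \otimes_\varepsilon \ell_2$ and $\ell_2 \otimes_\pi \ell_2$ with $\mathcal{S}_\infty$ and $\mathcal{S}_1$ is used again at the end of this section). The duality between the two can then be checked numerically; in the sketch below the dual element attaining the trace norm is read off from the SVD (sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.normal(size=(n, n))  # a tensor in l_2^n (x) l_2^n

U, s, Vh = np.linalg.svd(A)  # A = U diag(s) Vh
eps_norm = s[0]              # injective norm = operator norm
pi_norm = s.sum()            # projective norm = trace norm

# Every tensor norm sits between epsilon and pi:
print(eps_norm <= pi_norm)

# Duality: the pi-norm is attained by pairing A with the unit-epsilon-norm
# element B = U Vh coming from the SVD of A.
B = U @ Vh
pairing = np.trace(B.T @ A)
print(np.isclose(pairing, pi_norm))
```

The partial isometry `B` here plays the same role as the optimal unitary realizing the trace norm in the honest value \eqref{HonestValue1}.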
These norms satisfy the desirable \emph{metric mapping property}: for any Banach spaces $X_0$, $X_1$, $Y_0$, $Y_1$, and any operators $ f \in \mathcal{B}(X_0,X_1)$, $ g \in \mathcal{B}(Y_0,\, Y_1)$, \beq\label{MetricMapProp} \big \| f \otimes g: X_0 \otimes Y_0 \rightarrow X_1 \otimes Y_1 \big \| \le \big \| f \big \| \, \big\| g \big\|. \eeq Furthermore, we call a \emph{tensor norm} any assignment $\alpha$ that associates to any pair of Banach spaces $X$, $Y$ a norm $\| \, \cdot \, \|_{X\otimes_\alpha Y}$ such that: \begin{itemize} \item $\alpha$ is \emph{in between} the tensor norms $\varepsilon$ and $\pi$. That is, $$ \text{for any }x \in X\otimes_\alpha Y, \ \, \|x\|_{X\otimes_\varepsilon Y} \le \| x \|_{X\otimes_\alpha Y} \le \| x \|_{X\otimes_\pi Y};$$ \item $\alpha$ satisfies the metric mapping property. \end{itemize} Later on, in Section \ref{Sec6}, we will more generally refer to norms defined by \emph{tensorizing} consecutively different tensor norms as tensor norms as well. For example, if $\alpha,\, \alpha'$ are tensor norms, the assignment to any three Banach spaces $X,\, Y,\, Z$ of the norm $(X \otimes_\alpha Y) \otimes_{\alpha'} Z$ will also be called a tensor norm. The last tensor norm that we need is the 2-summing norm: for an operator $f \in \mathcal{B}(X,Y)$, \begin{equation}\label{Def_2summing} \| f \|_{\pi_2(X,Y) } := \big \| \mathrm{Id} \otimes f: \ell_2 \otimes_\varepsilon X \rightarrow \ell_2(Y) \big \|. \end{equation} Next we introduce the Schatten classes of compact operators between Hilbert spaces, which are the model for the generalizations in the theory of operator ideals that we use later on. To define the $p$-th Schatten class $\mathcal{S}_p(\mathcal{H})$, for $ 1 \le p\le \infty $, we associate to any compact operator on a Hilbert space, $f: \mathcal{H} \rightarrow \mathcal{H}$, its sequence of singular values $( s_i(f) )_{i\in \mathbb{N}}$, arranged in non-increasing order, $s_1(f)\ge s_2(f)\ge \ldots$.
With this sequence, we define the norm $\| f\|_{\mathcal{S}_p} := \big\| ( s_i(f) )_i \big\|_{\ell_p}$, which provides the normed structure on $\mathcal{S}_p(\mathcal{H})$. We use the simpler notation $\mathcal{S}_p$ to denote the $p$-th Schatten class of operators on the separable Hilbert space $\ell_2$. In the finite dimensional case we use the notation $\mathcal{S}_p^{n,m}$ to refer to the $p$-th Schatten class of operators from $\ell_2^m$ into $\ell_2^n$. Notice that the case $p= \infty$ coincides with the space we denoted before as $M_{n,m}$, while for $p=1$ we obtain $\mathcal{S}_1^{n,m}$. Now, moving on to operators between arbitrary Banach spaces, we define: \begin{definition}\label{Def_WeakSchatten} Given an operator $f: X \rightarrow Y$ and $1 \le p \le \infty$, we say that $f$ is of weak Schatten-von Neumann type $\ell_p$ if $$ \| f \|_{\mathfrak{S}_{p}^w (X,Y) } := \sup \left \lbrace \Big\| \Big ( s_i (g\circ f\circ h) \Big )_i \Big \|_{\ell_p} \ : \ \begin{array}{l} \| g:Y \rightarrow \ell_2 \|\le 1 \\[0.5em] \| h: \ell_2 \rightarrow X\|\le 1 \end{array} \right \rbrace <\infty, $$ where $( s_i ( g\circ f\circ h) \big )_i $ is the sequence of singular values of the operator $g\circ f\circ h: \ell_2 \longrightarrow \ell_2 $. We denote by $\mathfrak{S}_{p}^w (X,Y)$ the space of operators $f: X \longrightarrow Y$ of weak Schatten-von Neumann type $\ell_p$. Alternatively, in the tensor product notation, we refer to this space by $ X^* \otimes_{\mathfrak{S}_{p}^w} Y$. \end{definition} To finish this section we introduce the space $\mathfrak{S}_p^{w-cb}(X,Y)$ announced at the beginning of this section. Its definition is based on Definition \ref{Def_WeakSchatten} and incorporates elements of the theory of operator spaces.
This forces us to endow $X$ and $Y$ with operator space structures (o.s.s.), that is, norms on the \emph{matrix levels} of these spaces, $M_n (X)\equiv M_n \otimes X$, $M_n(Y)\equiv M_n \otimes Y$, for any $n\in \mathbb{N}$ -- see \cite{RuanBook} or \cite{pisier89_book} for a detailed exposition on operator spaces. With that, the natural notion for maps between operator spaces is the notion of \emph{completely bounded} operators, that is, linear operators $f:X\rightarrow Y$ such that $$ \| f: X\rightarrow Y \|_{cb} : = \sup_{n\in \mathbb{N}} \| \mathrm{Id} \otimes f : M_n(X) \rightarrow M_n(Y) \| <\infty. $$ The space of completely bounded operators between $X$ and $Y$ is denoted by $\mathcal{C}\mathcal{B}(X,Y) $. A Banach space can in general be endowed with several o.s.s. In the case of the space $\mathcal{B}(\mathcal{H},\mathcal{K})$, with $\mathcal{H}$ and $\mathcal{K}$ Hilbert spaces, there is a natural o.s.s. determined by promoting the isomorphism $M_n\left(\mathcal{B}(\mathcal{H},\mathcal{K}) \right) $ $ \simeq $ $ \mathcal{B}( \ell_2^n(\mathcal{H}), \ell_2^n(\mathcal{K}))$ to an isometry (fixing in that way the norm in the \emph{matrix levels} of the space)\footnote{ Here $\ell_2^n(\mathcal{H}) \simeq \mathcal{H} \otimes \ell_2^n$ and $\ell_2^n(\mathcal{K}) \simeq \mathcal{K} \otimes \ell_2^n$ are again Hilbert spaces.}. For a Hilbert space $\mathcal{H}$, we introduce here the so-called \emph{row} and \emph{column} o.s.s., denoting the corresponding operator spaces $R$ and $C$, respectively.
$R$ is defined via the \emph{row} embedding: $$ \mathcal{H} \simeq \mathcal{B}(\ell_2, \mathbb{C}), $$ from which we define a norm on $M_n(\mathcal{H}) $ considering the following isomorphism to be an isometry: $M_n(\mathcal{H}) \simeq M_n\left( \mathcal{B}(\mathcal{H}, \mathbb{C}) \right)\simeq \mathcal{B} (\ell_2^n(\mathcal{H}), \ell_2^n).$ Similarly, $C$ is defined substituting the previous \emph{row} embedding by its \emph{column} version $$ \mathcal{H} \simeq \mathcal{B}( \mathbb{C}, \ell_2). $$ These last two operator spaces turn out to be non-isomorphic, in contrast to what happens at the Banach space level, where they are simply Hilbert spaces. They are still dual to each other, that is, $C^* \simeq R$ and $C \simeq R^*$ completely isometrically\footnote{ Meaning that not only do $C^* \simeq R$ and $C \simeq R^*$ hold isometrically, but also $M_n(C^*) \simeq M_n(R)$ and $M_n(C) \simeq M_n(R^*)$ for any $n \in \mathbb{N}$.}. However, to properly state those identifications we need a notion of duality for operator spaces. This notion is induced by that of completely bounded maps introduced before. We say that, for an operator space $X$, $X^*$ is its dual if $$ M_n(X^*) = \mathcal{C}\mathcal{B}(X, M_n) \quad \text{ for any } n\in \mathbb{N}. $$ Notice that for $n=1$ the previous characterization of $X^*$ coincides with the dual as Banach spaces\footnote{ For that it is necessary to consider the fact that $\mathcal{C}\mathcal{B}(X, \mathbb{C}) \simeq \mathcal{B}(X,\mathbb{C})$.}. As a last comment on operator spaces, we note that this duality allows us to endow $\mathcal{S}_1(\mathcal{H})$ with a natural o.s.s. as the dual of $\mathcal{B}(\mathcal{H})$.
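A concrete way to see that $R$ and $C$ differ is to compute a matrix-level norm. For $u = \sum_i a_i \otimes e_i$ with coefficient matrices $a_i$, the standard formulas (an assumption here, derived from the row and column embeddings above) read $\| u \|_{M_n(R)} = \| \sum_i a_i a_i^* \|^{1/2}$ and $\| u \|_{M_n(C)} = \| \sum_i a_i^* a_i \|^{1/2}$. With matrix units $a_i = |1\rangle\langle i|$ the two norms already differ by a factor $\sqrt{m}$, as the following sketch checks:

```python
import numpy as np

m = 4
# Coefficient matrices a_i = |1><i| (matrix units in M_m).
a = []
for i in range(m):
    e = np.zeros((m, m))
    e[0, i] = 1.0
    a.append(e)

row_sq = sum(ai @ ai.conj().T for ai in a)  # sum_i a_i a_i* = m |1><1|
col_sq = sum(ai.conj().T @ ai for ai in a)  # sum_i a_i* a_i = Id_m

row_norm = np.sqrt(np.linalg.norm(row_sq, 2))  # expected: sqrt(m)
col_norm = np.sqrt(np.linalg.norm(col_sq, 2))  # expected: 1
print(row_norm, col_norm)
```

Since the ratio $\sqrt{m}$ grows with $m$, no complete isomorphism between $R$ and $C$ can exist, even though both are isometric to $\ell_2^m$ as Banach spaces.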
Now we finally have all the ingredients to define: \begin{definition}\label{Def_cbWeakSchatten} Given an operator between operator spaces $f: X \rightarrow Y$ and $1\le p \le \infty$ we say that $f$ is of weak-cb Schatten-von Neumann type $\ell_p$ if $$ \| f \|_{\mathfrak{S}_{p}^{w-cb} (X,Y) } := \sup \left \lbrace \Big\| \left( s_i\left( g\circ f \circ h \right) \right)_i \Big\|_{\ell_p} \ : \ \begin{array}{l} \big \| \, g: Y \longrightarrow C\, \big \|_{cb} \le 1\\[0.5em] \big \| \, h: R \longrightarrow X \, \big \|_{cb} \le 1 \end{array} \right \rbrace <\infty, $$ where $( s_i ( g\circ f\circ h) \big )_i $ is the sequence of singular values of the operator $g\circ f\circ h: \ell_2 \longrightarrow \ell_2 $. We denote by $\mathfrak{S}_{p}^{w-cb} (X,Y)$ the space of operators $f: X \longrightarrow Y$ of weak-cb Schatten-von Neumann type $\ell_p$. Alternatively, in the tensor product notation, we refer to this space by $ X^* \otimes_{\mathfrak{S}_{p}^{w-cb}} Y$. \end{definition} \begin{remark}\label{Rmk_sigma^w} Since $B_{\mathcal{CB}(X,Y)} \subseteq B_{\mathcal{B}(X,Y)}$ for any operator spaces $X$, $Y$, it follows that \beq\label{Prop_OpI2} \| f \|_{\mathfrak{S}_{p}^{w-cb} (X,Y) } \le \| f \|_{\mathfrak{S}_{p}^{w} (X,Y) }, \eeq for any $1\le p \le \infty$ and any $f \in \mathfrak{S}_{p}^{w-cb} (X,Y) $. \end{remark} Before ending this section, we provide an alternative characterization of the norm introduced above when $X = M_{n,m} $, $Y=\mathcal{S}_1^{n, m}$ and $p=2$. That is the case appearing in our study of cheating strategies for PV in Section \ref{Sec4}. For that, we understand $\mathfrak{S}_{2}^{w-cb} ( M_{n,m} ,\mathcal{S}_1^{n, m})$ as the tensor product $\mathcal{S}_1^{n,m} \otimes_{\mathfrak{S}_2^{w-cb}} \mathcal{S}_1^{n,m}$. Then, \begin{lemma}\label{lemma_CharacSigma} Given a tensor $f\in \mathcal{S}_1^{n,m} \otimes \mathcal{S}_1^{n, m}$, where $\mathcal{S}_1^{n, m}$ is endowed with its natural o.s.s. 
(as the dual of $M_{n,m}$), we have that: $$ \| f \|_{\mathcal{S}_1^{n,m} \otimes_{\mathfrak{S}_2^{w-cb}} \mathcal{S}_1^{n,m} } = \sup_{ \begin{subarray}{c} r \in \mathbb{N}\\ g, h \in B_{ M_{nr,m} } \end{subarray} } \big \| ( h \otimes g)( f) \big\|_{\ell_2^{r^2}} . $$ Above, the action of $ h = \sum_{i=1}^n \sum_{j=1}^r \sum_{l=1}^m h_{ijl} |i j\rangle\langle l| \in M_{nr,m}$ on a tensor $ t = \sum_{i=1}^n \sum_{l=1}^m $ $t_{il} |i\rangle\langle l| \in \mathcal{S}_1^{n,m}$ is defined by $$ h( t) := \sum_{j=1}^r \left( \sum_{i=1}^n \sum_{l=1}^m h_{ijl} t_{il} \right) \, |j\rangle \in \ell_2^r. $$ \end{lemma} \begin{proof} The claim follows from the following observations: \begin{itemize} \item a standard argument shows that the supremum in Definition \ref{Def_cbWeakSchatten} can be taken over finite dimensional $C_r$ and $R_r$, where $r\in \mathbb{N} $ is arbitrarily large; \item for an operator between Hilbert spaces, such as $ g\circ f \circ h$ in Definition \ref{Def_cbWeakSchatten}, the $\ell_2$-norm of the sequence of singular values coincides with the Hilbert-Schmidt norm of the operator, which is the same as the Euclidean norm of the associated tensor. In our case, with a slight abuse of notation, the relevant tensor is $ ( h \otimes g)( f)$; \item finally, when we set $X = M_{n,m} $, $Y=\mathcal{S}_1^{n, m}$ in Definition \ref{Def_cbWeakSchatten}, the optimization is carried out over elements $g \in B_{ \mathcal{CB} (\mathcal{S}_1^{n,m},C_r) }$ and $ h \in B_{ \mathcal{CB} (R_r, M_{n,m}) }$. But now, it is again a standard result that the following are complete isometries \cite[Section 9.3]{RuanBook}: $ \mathcal{CB} (\mathcal{S}_1^{n,m} ,C_r ) \simeq M_{n r, m} \simeq \mathcal{CB} (R_r, M_{n,m} )$. The claim of the lemma is obtained acting with $g,\, h $ viewed as elements in $B_{ M_{n r, m} }$ as defined in the statement.
\end{itemize} \end{proof} \subsubsection{Interpolation of Banach spaces}\label{Sec.2.4.2} Properties of interpolation spaces allow us to obtain estimates for the type constants of certain spaces that are useful for our purposes in this work. Here we restrict ourselves to the study of the complex interpolation space $(X_0,X_1)_\theta$ for $0< \theta < 1 $ and finite dimensional Banach spaces $X_0$, $X_1$. We have decided to avoid here a full treatment of the rather cumbersome definition of these spaces and focus on stating some natural properties they display. That is enough for the scope of our work. We refer the interested reader to the classical references \cite{BerghLofstrom76,Triebel78}. In our case, in which $X_0$, $X_1$ are finite dimensional, the space $(X_0,X_1)_\theta$ can always be constructed. In the general case of arbitrary Banach spaces, if we can still define $(X_0,X_1)_\theta$ we say that the couple $(X_0,\, X_1)$ is compatible\footnote{ Technically, this condition is usually stated as the requirement that $X_0$ and $X_1$ embed continuously in a common Hausdorff topological vector space.}, so we fix this terminology from now on. For the sake of concreteness, here we will consider the case in which $X_0$, $X_1$ and $(X_0,X_1)_\theta$ are algebraically the same space but endowed with different norms. The complex interpolation method, which assigns to any compatible couple $(X_0,\, X_1)$ the space $(X_0,X_1)_\theta$, is an \emph{exact interpolation functor of exponent} $\theta$. This means that it satisfies the following: \begin{theorem}[\cite{BerghLofstrom76}, Thm.
4.1.2.]\label{Int_IntProp} For any compatible couples $(X_0,\, X_1)$, $(Y_0,\, Y_1)$, and any linear map $f : (X_0,X_1)_\theta \rightarrow (Y_0,Y_1)_\theta$: $$ \big \| f : (X_0,X_1)_\theta \rightarrow (Y_0,Y_1)_\theta \big\| \le \big \| f : X_0 \rightarrow Y_0 \big\|^{1-\theta} \ \big \| f : X_1 \rightarrow Y_1 \big\|^\theta, $$ where $\| \, \cdot \, \| $ above denotes the usual operator norm. \end{theorem} Now we turn our attention to the classical sequence spaces $\ell_p$. Interpolation in this case becomes remarkably natural. We have the isometric identification $ \ell_p = (\ell_\infty,\, \ell_1 )_{\sfrac{1}{p}} $ for any $1\le p \le \infty$. Indeed, such an identification holds in a much more general setting. For a Banach space $X$ and $p \in (0,\infty]$, let us denote by $L_p(X)$ the space of $p$-integrable $X$-valued functions on the unit interval, that is, measurable functions $f:[0,1] \rightarrow X$ such that $$ \| f \|_{L_p(X)} := \left( \int_0^1 \| f(t) \|_X^p \mathrm{d} \mu(t) \right)^{\frac{1}{p}} < \infty, $$ for an (implicitly) given measure $\mu$. With that we can state: \begin{theorem}[\cite{BerghLofstrom76}, Thm. 5.6.1.]\label{Int_Lp's} For any compatible couple $(X_0,\, X_1)$, $p_0,\, p_1 \in [1,\infty]$ and $\theta\in (0,1)$, the following holds with equal norms: $$ \Big( L_{p_0} (X_0),\, L_{p_1}(X_1) \Big)_\theta = L_p \Big( (X_0,\, X_1)_\theta \Big), $$ where $\frac{1}{p} =\frac{1 - \theta}{p_0} + \frac{\theta}{p_1}$. \end{theorem} Notice that the $\ell_p(X)$ spaces can be regarded as particular instances of $L_p(X)$ where the natural numbers are identified with a subset of the interval $[0,1]$ and $\mu$ is fixed as the discrete measure with unit weights on that subset. This allows us to translate the previous statement also to this case: \begin{equation}\label{Int_ellp's} \Big( \ell_{p_0} (X_0),\, \ell_{p_1}(X_1) \Big)_\theta = \ell_p \Big( (X_0,\, X_1)_\theta \Big), \end{equation} where $\frac{1}{p} =\frac{1 - \theta}{p_0} + \frac{\theta}{p_1}$.
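A hands-on consequence of the interpolation property (Theorem \ref{Int_IntProp}) combined with $\ell_p = (\ell_\infty,\, \ell_1)_{\sfrac{1}{p}}$ is the elementary estimate $\| x \|_{\ell_p} \le \| x \|_{\ell_\infty}^{1-\theta} \| x \|_{\ell_1}^{\theta}$ with $\theta = 1/p$; applied to sequences of singular values, it bounds Schatten norms against the two extreme classes. A quick numerical check (matrix size and $p$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 7
A = rng.normal(size=(n, n))
s = np.linalg.svd(A, compute_uv=False)  # singular values of A

p = 3.0
theta = 1.0 / p                          # 1/p = (1 - theta)/inf + theta/1

S_p = (s ** p).sum() ** (1.0 / p)        # ||A||_{S_p}
S_inf, S_1 = s[0], s.sum()               # ||A||_{S_inf} and ||A||_{S_1}

# Interpolation bound: ||A||_{S_p} <= ||A||_{S_inf}^{1-theta} ||A||_{S_1}^theta
print(S_p <= S_inf ** (1 - theta) * S_1 ** theta + 1e-9)
```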
Pleasantly, an analogue result for Schatten classes is also true. \begin{theorem}[\cite{Pisier98}, Cor. 1.4.]\label{Int_Sp's} For $p_0,\, p_1 \in [1,\infty]$ and $\theta\in (0,1)$ the following holds with equal norms: $$ ( \mathcal{S}_{p_0} ,\, \mathcal{S}_{p_1} )_\theta = \mathcal{S}_p, $$ where $\frac{1}{p} =\frac{1 - \theta}{p_0} + \frac{\theta}{p_1}$. When it applies, $\mathcal{S}_\infty$ must be understood as the Banach space (with the operator norm) of compact operators on a separable Hilbert space. \end{theorem} These are all the basic results we need regarding complex interpolation. To finish this section, we now relate some of the norms introduced in Section \ref{Sec.2.4.1} with the space $(X^* \otimes_\varepsilon Y, X^* \otimes_\pi Y)_{\frac{1}{2}}$. \begin{proposition}\label{Prop_RelNorms} Given finite dimensional Banach spaces $X$, $Y$, for any $f\in X \otimes Y$, $$ \big\| f \big\|_{ X \otimes_{\mathfrak{S}_2^{w-cb}} Y } \le \big\| f \big\|_{ X \otimes_{\mathfrak{S}_2^{w}} Y} \le \big \| f \big\|_{(X^* \otimes_\varepsilon Y, X^* \otimes_\pi Y)_{\frac{1}{2}}}. $$ \end{proposition} \begin{proof} The first inequality was already established in Remark \ref{Rmk_sigma^w}, so we focus on the second one. According to the definition of $\mathfrak{S}_2^{w} (X,Y)$, Definition \ref{Def_WeakSchatten}, we can directly write: $$ \big\| f \big\|_{ \mathfrak{S}_2^{w} (X,Y)} = \sup_{ \begin{subarray}{c} g \in B_{\mathcal{B}(Y,\ell_2)} \\ h \in B_{\mathcal{B}(\ell_2,X)} \end{subarray} } \| g\circ f \circ h \|_{\mathcal{S}_2} = \sup_{ \begin{subarray}{c} g \in B_{\mathcal{B}(Y,\ell_2)} \\ h \in B_{\mathcal{B}(\ell_2,X)} \end{subarray} } \| g\circ f \circ h \|_{(\mathcal{S}_\infty, \mathcal{S}_1)_{1/2}}, $$ where we have used Theorem \ref{Int_Sp's} to state the last equality. 
The map $g\circ f \circ h : \ell_2 \rightarrow \ell_2$ can be interpreted, as a tensor, as the image of the mapping $ h^*\otimes g: X^* \otimes Y \rightarrow \ell_2 \otimes \ell_2$ acting on $f$. With this, the previous expression can be rewritten as: \begin{align*} \big\| f \big\|_{ \mathfrak{S}_2^{w} (X,Y)} &= \sup_{ \begin{subarray}{c} g \in B_{\mathcal{B}(Y,\ell_2)} \\ h \in B_{\mathcal{B}(\ell_2,X)} \end{subarray} } \| (h^*\otimes g)(f) \|_{(\mathcal{S}_\infty, \mathcal{S}_1)_{\frac{1}{2}}} \\ &\le \| f \|_{(X^* \otimes_\varepsilon Y, X^* \otimes_\pi Y)_{\frac{1}{2}}} \ \sup_{ \begin{subarray}{c} g \in B_{\mathcal{B}(Y,\ell_2)} \\ h \in B_{\mathcal{B}(\ell_2, X)} \end{subarray} } \| h^*\otimes g : (X^* \otimes_\varepsilon Y, X^* \otimes_\pi Y)_{\frac{1}{2}}\rightarrow (\mathcal{S}_\infty, \mathcal{S}_1)_{\frac{1}{2}} \| . \end{align*} Now, it only remains to show that for any contractions $h^* : X^* \rightarrow \ell_2$, $g: Y \rightarrow \ell_2$ $$ \| h^*\otimes g : (X^* \otimes_\varepsilon Y, X^* \otimes_\pi Y)_{\frac{1}{2}}\rightarrow (\mathcal{S}_\infty, \mathcal{S}_1)_{\frac{1}{2}} \| \le 1. $$ This follows from the interpolation property, Theorem \ref{Int_IntProp}: $$ \| h^*\otimes g: (X^* \otimes_\varepsilon Y, X^* \otimes_\pi Y)_{\frac{1}{2}}\rightarrow (\mathcal{S}_\infty, \mathcal{S}_1)_{\frac{1}{2}} \| \le \| h^*\otimes g : X^* \otimes_\varepsilon Y \rightarrow \mathcal{S}_\infty \|^{\frac{1}{2}} \ \| h^*\otimes g : X^* \otimes_\pi Y \rightarrow \mathcal{S}_1 \|^{\frac{1}{2}} , $$ together with the understanding of $\mathcal{S}_\infty$ and $\mathcal{S}_1$ as the tensor products $\ell_2 \otimes_\varepsilon \ell_2$ and $\ell_2 \otimes_\pi \ell_2$, respectively. 
This allows us to bound $$ \| h^*\otimes g : X^* \otimes_\varepsilon Y \rightarrow \mathcal{S}_\infty \| = \| h^* : X^* \rightarrow \ell_2 \| \ \| g : Y \rightarrow \ell_2\| \le 1, $$ thanks to the metric mapping property displayed by the injective tensor norm, $\varepsilon$ \eqref{MetricMapProp}. Analogously, $$ \| h^*\otimes g : X^* \otimes_\pi Y \rightarrow \mathcal{S}_1 \| = \| h^* : X^* \rightarrow \ell_2 \| \ \| g : Y \rightarrow \ell_2\| \le 1. $$ Hence, the claim in the statement follows. \end{proof} Being more specific, when $X^* = Y = \mathcal{S}_1^{n,m}$, Proposition \ref{Prop_RelNorms} reads \begin{equation} \label{Eq_RelNorms} \| f \|_{\mathcal{S}_1^{n,m} \otimes_{\mathfrak{S}_2^{w-cb}} \mathcal{S}_1^{n,m}} \le \| f \|_{\mathcal{S}_1^{n,m} \otimes_{\mathfrak{S}_2^w} \mathcal{S}_1^{n,m}} \le \| f \|_{ (\mathcal{S}_1^{n,m} \otimes_\varepsilon \mathcal{S}_1^{n,m},\, \mathcal{S}_1^{n,m} \otimes_\pi \mathcal{S}_1^{n,m} )_{\frac{1}{2}} }. \end{equation} \subsubsection{Type/cotype of a Banach space}\label{Sec2.4.3} The key properties of a Banach space we study are its \emph{type} and \emph{cotype}. These are probabilistic notions in the local theory of Banach spaces that are built on \emph{Rademacher} random variables\footnote{ There also exists in the literature a \emph{gaussian} notion of type/cotype. See, e.g., \cite{Tomczak1989banach}. Both notions are in fact intimately related, but here we only consider the \emph{Rademacher} version of the story.}. We call a random variable $\varepsilon$ Rademacher if it takes the values $-1$ and $1$ with probability $1/2$ each. We denote by $\lbrace \varepsilon_i\rbrace_{i=1}^n$ a family of $n$ i.i.d. such random variables. Then, $\mathbb{E}_\varepsilon \, \phi(\varepsilon)$ denotes the expected value of a function $\phi$ over any combination of signs $\lbrace \varepsilon_i\rbrace_{i=1}^n$ with uniform weight $1/2^n$. \begin{definition}\label{typedef}Let $X$ be a Banach space and $1 \le p \le 2$. 
We say $X$ is of (Rademacher) type $p$ if there exists a positive constant $\mathrm{T}$ such that for every natural number $n$ and every sequence $\lbrace x_i \rbrace_{i=1}^n \subset X$ we have \beq \nonumber \hspace{-3mm} \left( \mathbb{E}_\varepsilon \Big[ \big\| \sum_{i=1}^n \varepsilon_i x_i \big\|_X^2 \Big] \right)^{1/2} \hspace{-1mm} \le \mathrm{T} \left( \sum_{i=1}^n \|x_i\|_X^p \right)^{1/p}. \eeq Moreover, we define the Rademacher type $p$ constant $\mathrm{T}_p(X)$ as the infimum of the constants $\mathrm{T}$ fulfilling the previous inequality.\end{definition} The notion of type of a normed space has a dual counterpart, that of cotype: \emph{ For $2 \le q < \infty$, the Rademacher cotype $q$ constant of $X$, $\mathrm{C}_q(X)$, is the infimum over the constants $\mathrm C$ (in case they exist) such that the following inequality holds for every natural number $n$ and every sequence $\lbrace x_i \rbrace_{i=1}^n \subset X$, \begin{align*}\mathrm{\mathrm C}^{-1}\Big( \sum_{i=1}^n \|x_i\|_X^q\Big)^{1/q} \le \left( \mathbb{E}_\varepsilon \Big[ \big\| \sum_{i=1}^n \varepsilon_i x_i \big\|_X^2 \Big] \right)^{1/2} . \end{align*} In parallel with the previous definition, we also say that $X$ is of cotype $q$ if $\mathrm{C}_q(X) < \infty$. } If the number of elements $x_i$ in the definitions above is restricted to be less than or equal to some natural number $m$, we obtain the related notion of \emph{type/cotype constants of $X$ with $m$ vectors}, denoted here as $\mathrm{T}_p^{(m)} (X)$ and $\mathrm{C}_q^{(m)} (X)$. This is the precise notion we will use later on. Although it will frequently be enough to work with the notion of type constants, sometimes we will need to make this distinction. Coming back to the better studied context of type and cotype (without any restriction on the number of elements), it is well known that $X$ being of type $p$ implies cotype $q$ for the dual, $X^*$, where $q$ is the conjugate exponent such that $1/p + 1/q = 1$. 
This can be subsumed in the inequality -- see, e.g., \cite{Maurey03}: \begin{equation} \label{typecotype_duality2} \mathrm{C}_q (X^*) \le \mathrm{T}_p (X), \qquad \text{ for } 1 < p \le 2,\ 2 \le q < \infty\ : \ \frac{1}{p} + \frac{1}{q} = 1. \end{equation} On the contrary, the reverse inequality fails in general -- and, in fact, the pair of spaces considered in this work, $(M_n,\, \mathcal{S}_1^n)$, is an instance of that phenomenon. However, it turns out that the reverse inequality can be made true \emph{up to logarithmic} factors \cite{Pisier82,Maurey03}: \beq \label{typecotype_duality} \mathrm{T}_p (X) \lesssim \log(\dim(X)) \, \mathrm{C}_q (X^*) , \qquad \text{ for } 1 < p \le 2,\ 2 \le q < \infty\ : \ \frac{1}{p} + \frac{1}{q} = 1. \eeq Our interest now turns to the interaction between type and interpolation. In fact, type constants behave well w.r.t. interpolation methods, a fact that will be extremely useful in the next section. We state the following general known result: \begin{proposition}\label{Prop_IntType} Let $X_0, \, X_1$ be a compatible couple, where $X_i$ has type $p_i$ for some $1\le p_i \le 2 $, $i=0,\,1$. Let $0 < \theta <1$ and $1 < p < 2 $ such that $\frac{1}{p} = \frac{1-\theta}{p_0} + \frac{\theta}{p_1}$. Then, $$ \mathrm{T}_p \big( (X_0,X_1)_{\theta} \big) \le \big( \mathrm{T}_{p_0} (X_0) \big)^{1-\theta} \left( \mathrm{T}_{p_1} (X_1) \right)^{\theta}. $$ \end{proposition} The proof follows easily from the interpolation properties of vector valued $\ell_p$ and $L_p$ spaces. We include a simple proof next without any claim of originality. \begin{proof} An alternative characterization of the type-$p$ constant of a Banach space $X$ is given by the norm of the mapping: $$ \begin{array}{cccc} \mathrm{Rad}:& \ell_p ( X ) & \longrightarrow & L_2 ( X ) \\ &(x_{i})_{i} &\mapsto & \sum_{i} \varepsilon_{i} \, x_{i} \end{array}, $$ where $\lbrace \varepsilon_i \rbrace_i$ are i.i.d. 
Rademacher random variables and\footnote{ Formally, to establish this identification we consider a realization of the random variables $\varepsilon_i$ as real valued functions on the interval $[0,1]$. A standard choice is setting $\varepsilon_i (t) = \mathrm{sign} \left(\sin(2^i \pi t) \right)$. In that way, for a function $\phi$ of the random variable $\varepsilon$, $\mathbb{E}_\varepsilon \phi(\varepsilon) = \int_0^1 \phi(\varepsilon(t))\mathrm{d} t$, which makes the connection with $L_p$ spaces.} $$ \| \sum_{i} \varepsilon_{i} \, x_{i} \|_{L_2(X)} := \left( \mathbb{E}_\varepsilon \big\| \sum_i \varepsilon_i x_i \big \|^2_X \right)^{\frac{1}{2}}. $$ Then, we write \begin{align*} \mathrm{T}_p \left( (X_0,X_1)_{\theta} \right) &= \left\| \mathrm{Rad}: \ell_p \left( (X_0,X_1)_{\theta} \right) \longrightarrow L_2 \left( (X_0,X_1)_{\theta} \right) \right\|. \end{align*} Taking into account the equivalences (Theorem \ref{Int_Lp's}): $$ \ell_p \left( (X_0,X_1)_{\theta} \right) = \left( \ell_{p_0} (X_0) , \ell_{p_1}( X_1 ) \right)_{\theta} , \qquad L_2 \left( (X_0,X_1)_{\theta} \right) = \left( L_{2} (X_0) , L_{2}( X_1 ) \right)_{\theta} , $$ we can bound: \begin{align*} \left\| \mathrm{Rad}: \ell_p \left( (X_0,X_1)_{\theta} \right) \longrightarrow L_2 \left( (X_0,X_1)_{\theta} \right) \right\|& \\ & \hspace{-2.4cm} \le \left\| \mathrm{Rad}: \ell_{p_0} (X_0) \longrightarrow L_{2} (X_0) \right\|^{1-\theta} \ \left\| \mathrm{Rad}: \ell_{p_1} (X_1) \longrightarrow L_{2} (X_1) \right\|^\theta \\ & \hspace{-2.4cm} = \left( \mathrm{T}_{p_0} (X_0) \right)^{1-\theta} \left( \mathrm{T}_{p_1} (X_1) \right)^{\theta} . \end{align*} \end{proof} \subsubsection{Vector valued maps on the Boolean hypercube}\label{Sec2.4.4} The main idea in this work consists in studying strategies to break a particular family of PV schemes -- defined in Section \ref{Sec3} -- as assignments on the boolean hypercube $\mathcal{Q}_m = \lbrace -1, 1 \rbrace^m$. 
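Before formalizing this idea, the discrete derivative on the hypercube used below, $\partial_i \Phi(\varepsilon) = \frac{1}{2}\big(\Phi(\varepsilon) - \Phi(\varepsilon^{\oplus i})\big)$, where $\varepsilon^{\oplus i}$ flips the $i$-th sign, can be computed explicitly in a toy case. The following numerical sketch (illustrative only; the linear map into $\ell_2^d$ with unit vectors $x_j$ is an arbitrary choice, not taken from the text) checks it on the map $\Phi(\varepsilon) = \frac{1}{m}\sum_j \varepsilon_j x_j$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
m, d = 5, 3
xs = rng.normal(size=(m, d))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)    # ||x_j||_2 = 1

def phi(eps):
    # Linear assignment on the hypercube: (1/m) sum_j eps_j x_j.
    return eps @ xs / m

grad_norms = []
for point in itertools.product((-1.0, 1.0), repeat=m):
    eps = np.array(point)
    sq = 0.0
    for i in range(m):
        flipped = eps.copy()
        flipped[i] *= -1
        d_i = (phi(eps) - phi(flipped)) / 2        # discrete derivative
        assert np.allclose(d_i, eps[i] * xs[i] / m)
        sq += np.dot(d_i, d_i)
    grad_norms.append(np.sqrt(sq))

avg = np.mean(grad_norms)
# For a linear map the gradient norm is constant:
# (sum_j ||x_j||^2)^{1/2} / m = m^{1/2} / m = m^{-1/2}.
assert np.allclose(avg, np.sqrt(m) / m)
```

Multiplying this constant value $m^{-1/2}$ by a $\log(m)$ prefactor reproduces the bound $\log(m)/m^{1/2}$ obtained for linear maps later in this section.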
We will associate to any cheating strategy a vector valued mapping $\Phi: \mathcal{Q}_m \rightarrow X$, where $X$ is some Banach space. Regular enough $\Phi$'s will lead to good lower bounds on the resources required by the cheaters, thereby shedding some new light on Question \ref{question1}. To quantify the regularity of this kind of maps we introduce the following parameter (depending also on the choice of the space $X$): \begin{definition}\label{Def_sigma} To any Banach-space valued map $\Phi: \mathcal{Q}_m \rightarrow X$ we associate the parameter: $$ \sigma_\Phi : = \log(m) \ \mathbb{E}_{\varepsilon\in \mathcal{Q}_m}\, \left( \sum_{i=1 }^m \| \partial_i \Phi(\varepsilon)\|^2_{X} \right)^{1/2}, $$ where $\partial_i \Phi(\varepsilon): = \frac{\Phi(\varepsilon_1,\ldots, \varepsilon_i ,\ldots ,\varepsilon_m)-\Phi(\varepsilon_1,\ldots, - \varepsilon_i ,\ldots ,\varepsilon_m)}{2}$ is the discrete derivative on the boolean hypercube in the $i$-th direction. \end{definition} Intuitively, $\sigma_\Phi$ is an average over both the point $\varepsilon$ and the direction $i$ (unnormalized in this last case) of the magnitude of the derivative of the map $\Phi$. The prefactor $\log(m)$ is of minor importance for our purposes and we added it to the definition of $\sigma_\Phi$ to obtain more compact expressions later on. \begin{example}\label{Example_sigma1} In order to gain some familiarity, let us compute the parameter $\sigma$ of a linear map $$ \begin{array}{rccc} \Phi : & \mathcal{Q}_{m} & \longrightarrow & X \\[1em] & \varepsilon & \mapsto & \Phi (\varepsilon): = \frac{1}{m} \sum_j \varepsilon_j x_j \end{array} , $$ where $x_j \in B_X$ for $j= 1,\ldots, m$. First, for any point $\varepsilon \in \mathcal{Q}_m$, and a direction $i\in[m]$: $$ \partial_i \Phi(\varepsilon) =\frac{1}{2m} \ \left( \sum_{j} \varepsilon_j x_j - \varepsilon_j (-1)^{\delta_{i,j}} x_j \right) = \frac{1}{m} \varepsilon_i \,x_i. 
$$ Therefore, $$ \sigma_\Phi = \frac{\log(m)}{m} \left( \sum_i \| x_i \|_X^2 \right)^{\frac{1}{2}} \le \frac{\log(m)}{m^{\frac{1}{2}}}. $$ This is the ideal case in which our results lead directly to powerful lower bounds on the resources required to break our PV schemes. \end{example} Ultimately, the motivation for the definition of $\sigma_\Phi$ is the bound in Corollary \ref{Cor1} below. This is a consequence of the following Sobolev-type inequality due to Pisier for vector-valued functions on the hypercube: \begin{lemma}[\cite{Pisier86}, Lemma 7.3] \label{lemmaPisier} In a Banach space $X$, let $p\ge 1 $, $\Phi: \mathcal{Q}_m \rightarrow X$ and $\varepsilon ,\, \tilde \varepsilon$ be independent random vectors uniformly distributed on $\mathcal{Q}_m$. Then, $$ \mathbb{E}_{\varepsilon} \Big \| \Phi (\varepsilon) - \mathbb{E}_{\varepsilon} \Phi (\varepsilon) \Big \|_X^p \le ( C \log m)^p \ \mathbb{E}_{\varepsilon,\tilde\varepsilon} \Big \| \sum_i \tilde \varepsilon_i \partial_i \Phi(\varepsilon) \Big{\|}^p_X , $$ where $\partial_i \Phi(\varepsilon): = \frac{\Phi(\varepsilon_1,\ldots, \varepsilon_i ,\ldots ,\varepsilon_m)-\Phi(\varepsilon_1,\ldots, - \varepsilon_i ,\ldots ,\varepsilon_m)}{2}$. \end{lemma} It is now very easy to combine this result with the type properties of $X$ in order to obtain: \begin{corollary}[of Lemma \ref{lemmaPisier}]\label{Cor1} Consider a function $\Phi: \mathcal{Q}_m \longrightarrow X$, where $X$ is a Banach space. Then $$ \mathbb{E}_\varepsilon\big \| \Phi(\varepsilon) \big\|_X \le \big \|\mathbb{E}_\varepsilon\Phi (\varepsilon) \big \|_X + C \ \sigma_\Phi \ \mathrm{T}^{(m)}_2 ( X ) , $$ where $C$ is an independent constant. \end{corollary} This is the cornerstone of the construction leading to Theorem \ref{mainThm}. \begin{proof}[Proof of Corollary \ref{Cor1}] Fix $p = 1 $ in Lemma \ref{lemmaPisier}. 
Then we have: $$ \mathbb{E}_{\varepsilon} \Big \| \Phi (\varepsilon) - \mathbb{E}_{\varepsilon} \Phi (\varepsilon) \Big \|_X \le ( C \log m) \ \mathbb{E}_{\varepsilon,\tilde \varepsilon} \Big \| \sum_i \tilde \varepsilon_i \partial_i \Phi(\varepsilon) \Big{\|}_X . $$ Additionally, we can trivially bound: $$ \mathbb{E}_{\varepsilon} \Big \| \Phi (\varepsilon) - \mathbb{E}_{\varepsilon} \Phi (\varepsilon) \Big \|_X \ge \mathbb{E}_{\varepsilon} \Big \| \Phi (\varepsilon) \Big \|_X - \Big \| \mathbb{E}_{\varepsilon} \Phi (\varepsilon) \Big \|_X. $$ On the other hand, according to the definition of the type-2 constant (with $m$ vectors if one wants to be more precise) of $X$ we also can say: $$ \mathbb{E}_{\varepsilon,\tilde \varepsilon} \Big \| \sum_i \tilde \varepsilon_i \partial_i \Phi(\varepsilon) \Big{\|}_X \le \mathrm{T}_2^{(m)}(X) \ \mathbb{E}_{\varepsilon} \, \left( \sum_{i} \| \partial_{i } \Phi(\varepsilon)\|^2_{X} \right)^{1/2}. $$ This is enough to obtain the statement. \end{proof} \subsubsection{Some key estimates of type constants} Corollary \ref{Cor1} provides us with a tool to upper bound the expected norm of the image of a map $\Phi: \mathcal{Q}_m \rightarrow X$, provided that we have some control over the RHS of the inequality in the statement. The only piece there that is independent of the map $\Phi$ is the type-2 constant (with $m$ vectors) $\mathrm{T}_2^{(m)}(X)$, to which the rest of this section is devoted. Later on, the normed spaces $M_{n,m}$ and $\mathcal{S}_1^{n,m} \otimes_{\mathfrak{S}_2^{w-cb}} \mathcal{S}_1^{n,m}$ will play a prominent role. The type and cotype properties of $M_{n,m}$ as well as $\mathcal{S}_1^{n,m}$ are well known. 
In particular the following estimates hold: \begin{equation}\label{Eq4_TypeConsts_1} \begin{array}{c} \mathrm{C}_2 (M_{n,m}) \approx \min(n^{1/2}, m^{1/2}), \qquad \mathrm{T}_2( M_{n,m} ) \approx \log^{1/2} (\min(n,m) ), \end{array} \end{equation} \begin{equation}\label{Eq4_TypeConsts_1.1} \begin{array}{c} \mathrm{C}_2 (\mathcal{S}_1^{n,m}) \approx 1, \qquad \mathrm{T}_2( \mathcal{S}_1^{n,m} ) \approx (\min(n,m) )^{1/2}. \end{array} \end{equation} For $\mathcal{S}_1^{n,m} \otimes_{\mathfrak{S}_2^{w-cb}} \mathcal{S}_1^{n,m}$ the situation is much less well understood. In fact, we were not able to obtain any non-trivial estimate for its type properties so far. Then, instead of dealing directly with this space, we will consider the interpolation space $ (\mathcal{S}_1^{n,m} \otimes_\varepsilon \mathcal{S}_1^{n,m},\mathcal{S}_1^{n,m} \otimes_\pi \mathcal{S}_1^{n,m} )_{\frac{1}{2}}$, whose norm turns out to be an upper bound for the one in $\mathcal{S}_1^{n,m} \otimes_{\mathfrak{S}_2^{w-cb}} \mathcal{S}_1^{n,m}$; recall Proposition \ref{Prop_RelNorms}. From now on we use the following notational short-cut: $ (\mathcal{S}_1^{n,m} \otimes_\varepsilon \mathcal{S}_1^{n,m},\mathcal{S}_1^{n,m} \otimes_\pi \mathcal{S}_1^{n,m} )_{\theta} = \mathcal{S}_1^{n,m} \otimes_{ (\varepsilon,\pi)_{\theta} } \mathcal{S}_1^{n,m}$. Thanks to the extra structure in $\mathcal{S}_1^{n,m} \otimes_{ (\varepsilon,\pi)_{1/2} } \mathcal{S}_1^{n,m} $ provided by interpolation, we are able to obtain a bound for its type constants. To simplify the presentation, we assume in the following that $\min(n,m) = n$. We can state: \begin{proposition}\label{Type_Inttheta} Given $0< \theta <1$, and natural numbers $n\le m$: $$ \mathrm{T}_{ \frac{2}{1+\theta} }\left( \mathcal{S}_1^{n,m} \otimes_{ (\varepsilon,\pi)_\theta } \mathcal{S}_1^{n,m} \right) \lesssim_{\log} n^{\frac{1-\theta}{2}}. 
$$ \end{proposition} An immediate consequence of the previous proposition is a bound for the type-2 constant with $n^2$ vectors: \begin{equation*} \mathrm{T}_{ 2 }^{(n^2)} \left( \mathcal{S}_1^{n,m} \otimes_{ (\varepsilon,\pi)_\theta } \mathcal{S}_1^{n,m} \right) \le n^{\theta} \ \mathrm{T}_{ \frac{2}{1+\theta} }\left( \mathcal{S}_1^{n,m} \otimes_{ (\varepsilon,\pi)_\theta } \mathcal{S}_1^{n,m} \right) \lesssim_{\log} n^{ \frac{1+\theta}{2} }. \end{equation*} Particularizing for $\theta = \frac{1}{2}$: \begin{equation}\label{Type_Int1/2} \mathrm{T}_{ 2 }^{(n^2)} \left( \mathcal{S}_1^{n,m} \otimes_{ (\varepsilon,\pi)_\frac{1}{2} } \mathcal{S}_1^{n,m} \right) \lesssim_{\log} n^{ \frac{3}{4}}. \end{equation} This is the key type-estimate to obtain part II. of the main Theorem \ref{mainThm}. For the sake of concreteness, we make explicit here the logarithmic corrections in \eqref{Type_Int1/2}: $$ \mathrm{T}_{ 2 }^{(n^2)} \big( \mathcal{S}_1^{n,m} \otimes_{ (\varepsilon,\pi)_\frac{1}{2} } \mathcal{S}_1^{n,m} \big) \lesssim n^{ 3/4 } \log^{1/2} (n m) \log(n) . $$ \begin{proof}[Proof of Proposition \ref{Type_Inttheta}] The proof proceeds in two steps. First, using techniques from \cite{Pisier90,Pisier1992}, we obtain the estimate \beq \label{type_epsilon} \mathrm{T}_2 (\mathcal{S}_1^{n,m} \otimes_\varepsilon \mathcal{S}_1^{n,m}) \lesssim_{\log} n ^{1/2} .\eeq With this at hand, Proposition \ref{Type_Inttheta} follows from how type constants interact with the complex interpolation method, Proposition \ref{Prop_IntType}. In particular, it is enough to fix $p_0 =2$, $p_1 = 1$ in that result and consider the trivial bound $\mathrm{T}_1 (\mathcal{S}_1^{n,m} \otimes_\pi \mathcal{S}_1^{n,m}) =1 $. Therefore, it remains to provide a proof for \eqref{type_epsilon}. To prove the stated estimate we bound the cotype-2 constant of the dual, $M_{n,m} \otimes_\pi M_{n,m}$. 
Then, from the duality between type and cotype, \eqref{typecotype_duality}, we obtain: $$ \mathrm{T}_2 (\mathcal{S}_1^{n,m} \otimes_\varepsilon \mathcal{S}_1^{n,m}) \lesssim \log(nm) \, \mathrm{C}_2(M_{n,m} \otimes_\pi M_{n,m}). $$ To estimate $ \mathrm{C}_2(M_{n,m} \otimes_\pi M_{n,m})$, we use the following bound on the cotype of the projective tensor product, implicit in \cite{Pisier90}\footnote{ The key result here is Theorem 5.1 in \cite{Pisier90}. The bound we use is obtained keeping track of the constants appearing in the isomorphic statement of that theorem. We are indebted to Jop Briët for kindly sharing with us some very useful private notes on Pisier's method.}: $$ \mathrm{C}_2(M_{n,m} \otimes_\pi M_{n,m}) \lesssim \mathrm{C}_2 (M_{n,m}) \, \mathrm{UMD}(M_{n,m})\, \mathrm{T}_2^2(M_{n,m}), $$ where $\mathrm{UMD}(X)$ is the analytic UMD (unconditional martingale difference) parameter of the Banach space $X$. We now bound each of the quantities in the RHS of the last inequality: \begin{itemize} \item recalling \eqref{Eq4_TypeConsts_1} we have that $\mathrm{C}_2 (M_{n,m}) \lesssim n^{1/2}$ and $ \mathrm{T}_2(M_{n,m}) \lesssim \log^{1/2}(n)$; \item we estimate $\mathrm{UMD}(M_{n,m})$ from known bounds for the UMD constant of the $p$-Schatten class $\mathcal{S}_p$, for $ 1<p <\infty$. It is known that these spaces are UMD and the following estimate for $\mathrm{UMD}(\mathcal{S}_p)$ is available \cite{Randrianantoanina2002}: $$ \mathrm{UMD}(\mathcal{S}_p) \lesssim p. $$ This also translates into the same bound for the subspace $\mathcal{S}_p^{n,m}$. Now, we take into account the following relation between the UMD constants of arbitrary spaces $X$ and $Y$ at Banach-Mazur distance $d(X,Y)$. This is a direct consequence of the geometric characterization of the UMD property due to Burkholder \cite{Burkholder81} -- see also \cite{Burkholder86}: $$ \mathrm{UMD}(X) \lesssim d(X,Y) \, \mathrm{UMD}(Y). 
$$ Finally, with this at hand, we obtain the bound $$ \mathrm{UMD}(M_{n,m}) \lesssim d(M_{n,m},\mathcal{S}_p^{n,m}) \, \mathrm{UMD}(\mathcal{S}_p^{n,m}) \lesssim n^{1/p} \, p. $$ Adjusting the parameter $p$ as $ p = \log(n)$ we obtain $$ \mathrm{UMD}(M_{n,m}) \lesssim \log(n) , $$ which is enough to conclude that $$ \mathrm{T}_2 (\mathcal{S}_1^{n,m} \otimes_\varepsilon \mathcal{S}_1^{n,m}) \lesssim \log(nm)\, \log^2(n)\, n ^{1/2} .$$ \end{itemize} \end{proof} \section{The game $\mathbf{G_{Rad}}$}\label{Sec3} In this section we describe the precise scheme we analyse in this work, which we denote $G_{Rad}$. For that, we come back to the setting fixed at the end of Section \ref{Sec2.3}, in Remark \ref{Rmk3}. But first, we introduce $G_{Rad}$ from the point of view of protocols for PV -- cf. Section \ref{Sec2.2} --. Actually, $G_{Rad}$ will rather refer to a family of protocols indexed by a natural number, $n$, which refers to the \emph{size} of the protocol. This will become clear shortly. We omit explicit reference to the index $n$ when there is no risk of confusion. Recall that in 1-D PV, we consider a privileged point $x$ and a couple of verifiers, $V_A$, $V_B$, at locations $ x \pm \delta$. In the dishonest scenario, which will be considered later, two cheaters hold locations $x \pm \delta'$ for some $0 < \delta' < \delta$. $G_{Rad}$ proceeds as follows: \begin{enumerate} \item The verifier prepares the state $ |\psi \rangle = \frac{1}{n} \sum_{i,j=1}^n |ij\rangle_{AB} \otimes |ij\rangle_{C} \in \mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_C $ and distributes registers $\mathcal{H}_A \otimes \mathcal{H}_B$ to $V_A$. He also chooses uniformly at random an $n^2$-dimensional sign vector $\varepsilon =(\varepsilon_{ij})_{i,j=1}^n \in \mathcal{Q}_{n^2}$ and informs $V_B$ of that choice. \item $V_A$ forwards the quantum system $\mathcal{H}_A \otimes \mathcal{H}_B$ to the prover at $x$. 
From the other side, $V_B$ communicates the classical information specifying the vector $\varepsilon$. \item After receiving both messages, an honest prover located at $x$ has to apply the unitary $ U_\varepsilon = \mathrm{diag}(\varepsilon_{11}, \ldots, \varepsilon_{nn})$ on the received system $\mathcal{H}_A \otimes \mathcal{H}_B$ and forward $\mathcal{H}_A$ back to $V_A$ and $\mathcal{H}_B$ \emph{to} $V_B$. \item At some later time, $V_A$ and $V_B$ perform the joint measurement defined by elements $\big\lbrace |\psi_\varepsilon\rangle\langle \psi_\varepsilon|, \mathrm{Id} - |\psi_\varepsilon\rangle\langle \psi_\varepsilon| \big\rbrace$, where we have defined $ |\psi_\varepsilon \rangle = U_\varepsilon \otimes \mathrm{Id}_C \, |\psi\rangle$. The verification is correct if the outcome of this final measurement is the one corresponding to $ |\psi_\varepsilon\rangle\langle \psi_\varepsilon| $ and the registers $\mathcal{H}_A \otimes \mathcal{H}_B$ were received on time. \end{enumerate} Consider now a coalition of attackers, Alice and Bob, trying to impersonate the honest prover by intercepting the communication with $V_A$ and $V_B$ at points $x-\delta'$, $x+\delta'$. As described in Section \ref{Sec2.3}, the cheaters' action can be understood as Alice and Bob playing a collaborative quantum game with suitable restrictions on their resources -- we consider the s2w scenario defined in Figure \ref{s2wStrategies}. The game associated with the PV scheme above turns out to be a MROQG, as introduced in Section \ref{Sec2.3}. This game, which we also denote as $G_{Rad}$, is defined by the tensors $\big\lbrace G_\varepsilon :=\mathrm{Tr}_C |\psi_\varepsilon\rangle\langle \psi| = \frac{1}{n^2} \sum_{i,j=1}^n \varepsilon_{ij} |ij\rangle\langle ij| \big\rbrace_{\varepsilon \in \mathcal{Q}_{n^2}}$ and the uniform probability distribution over $ \mathcal{Q}_{n^2} $. 
The game proceeds as follows: \begin{enumerate} \item The referee prepares the state $ |\psi \rangle = \frac{1}{n} \sum_{i,j=1}^n |ij\rangle_{AB} \otimes |ij\rangle_{C} \in \mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_C $ and samples uniformly at random an $n^2$-dimensional sign vector, $\varepsilon =(\varepsilon_{ij})_{i,j=1}^n \in \mathcal{Q}_{n^2}$. \item He sends registers $\mathcal{H}_A \otimes \mathcal{H}_B$ to Alice and the classical description of $\varepsilon$ to Bob. \item Alice and Bob apply a quantum operation with the information received and send to the referee the quantum messages resulting from that operation. Register $\mathcal{H}_A$ has to be communicated from Alice and $\mathcal{H}_B$ from Bob. Their action is restricted to be of the form of Figure \ref{s2wStrategies}. We study this scenario in detail below. \item The referee performs the measurement $\lbrace |\psi_\varepsilon\rangle\langle \psi_\varepsilon|, \mathrm{Id} - |\psi_\varepsilon\rangle\langle \psi_\varepsilon| \rbrace$ on registers $\mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_C$. He declares that the players win when the outcome of this last measurement is the one corresponding to $ |\psi_\varepsilon\rangle\langle \psi_\varepsilon|$. \end{enumerate} The main object of study in this work is the value of this game in the $s2w$ scenario, denoted by $\omega_{s2w} (G_{Rad})$. A strategy in this scenario is determined by -- cf. Figure \ref{s2wStrategies} and Definition \ref{Def:MROQG}: \begin{itemize} \item a shared entangled state $ \varphi \in \mathcal{D}(\mathcal{H}_{E_a} \otimes \mathcal{H}_{E_b} ) $ that we assume here to be pure\footnote{ It can be easily checked that, by convexity, the value achieved in $G_{Rad}$ by strategies using mixed states is always upper bounded by the value when using pure states. Since the quantity we are interested in is the optimal value of the game, restricting ourselves to strategies using pure states is enough. }. 
From now on we use interchangeably the notations $\varphi$ or $|\varphi\rangle\langle \varphi|$ to refer to that state; \item a family of tuples of four ``local'' channels, of which only the first-round channel $\mathcal{A}$ is independent of $\varepsilon$: for each $\varepsilon \in \mathcal{Q}_{n^2}$, $$\mathcal{A} \in \mathrm{CPTP}(\mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_{E_a}, \mathcal{H}_{A \shortrightarrow A }\otimes \mathcal{H}_{A \shortrightarrow B }),\quad \mathcal{B}_\varepsilon \in \mathrm{CPTP}( \mathcal{H}_{E_b}, \mathcal{H}_{B \shortrightarrow B } \otimes \mathcal{H}_{B \shortrightarrow A}), $$ $$ \tilde{\mathcal{A}}_\varepsilon \in \mathrm{CPTP}( \mathcal{H}_{A \shortrightarrow A }\otimes \mathcal{H}_{B \shortrightarrow A }, \mathcal{H}'_{A}), \quad \tilde \mathcal{B}_\varepsilon \in \mathrm{CPTP}( \mathcal{H}_{B \shortrightarrow B }\otimes \mathcal{H}_{A \shortrightarrow B }, \mathcal{H}'_{B} ) .$$ For verification, $\mathcal{H}'_A$, $\mathcal{H}'_B$ should be communicated to $V_A$ and $V_B$ respectively. Therefore, according to the definition of the game, these registers should be isomorphic to the originals $\mathcal{H}_A$ and $\mathcal{H}_B$. \end{itemize} Understood as a family of quantum channels, the strategy defined by these elements reads: \begin{equation}\label{StratS2w} \begin{array}{cccc} \mathcal{S}_\varepsilon: & \mathcal{D}( \mathcal{H}_A \otimes \mathcal{H}_B ) & \longrightarrow & \mathcal{D}(\mathcal{H}_A \otimes \mathcal{H}_B) \\[1em] & \psi & \mapsto & \mathcal{S}_\varepsilon (\psi ) = (\tilde{\mathcal{A}}_\varepsilon \otimes \tilde \mathcal{B}_\varepsilon)\circ(\mathcal{A} \otimes \mathcal{B}_\varepsilon) (\psi \otimes \varphi) \end{array} , \end{equation} for each $ \varepsilon \in \mathcal{Q}_{n^2}$. 
The value attained by such a strategy is given by \eqref{DefvalueStrat_MROQG}; that is, denoting by $\mathfrak{S}_{s2w}$ the set of strategies of the form \eqref{StratS2w}: \begin{align}\label{DefvalueStrat_MROQG_GRad1} \omega_{s2w}(G_{Rad}) &= \sup_{\lbrace \mathcal{S}_\varepsilon \rbrace_\varepsilon \in \mathfrak{S}_{s2w} } \omega (G_{Rad};\lbrace \mathcal{S}_\varepsilon \rbrace_\varepsilon ) \\ & = \sup_{\begin{subarray}{c} \mathcal{H}_{E_{a (b)}},\\ \mathcal{H}_{A (B) \shortrightarrow A (B)}, \, \mathcal{H}_{B (A) \shortrightarrow A (B) } \end{subarray}} \ \sup_{\begin{subarray}{c} \tilde{\mathcal{A}}_\varepsilon, \, \tilde \mathcal{B}_\varepsilon,\, \mathcal{A} ,\, \mathcal{B}_\varepsilon \\ \varphi \end{subarray}} \mathbb{E}_\varepsilon \ \omega \Big(G_\varepsilon; (\tilde{\mathcal{A}}_\varepsilon \otimes \tilde \mathcal{B}_\varepsilon)\circ(\mathcal{A} \otimes \mathcal{B}_\varepsilon) (\, \cdot \, \otimes \varphi)\Big), \nonumber \end{align} where $ \tilde{\mathcal{A}}_\varepsilon, \, \tilde \mathcal{B}_\varepsilon,\, \mathcal{A} ,\, \mathcal{B}_\varepsilon $ and $\varphi $ in the last supremum are as indicated above. For future reference, we recall here the expression of \eqref{DefvalueStrat_MROQG} in our particular case: consider the strategy defined by the family of channels \eqref{StratS2w}; then \begin{align}\label{DefvalueStrat_MROQG_GRad2} \omega (G_{Rad};\lbrace \mathcal{S}_\varepsilon \rbrace_\varepsilon ) &:= \mathbb{E}_{\varepsilon} \ \mathrm{Tr}\left[\: |\psi_{\varepsilon} \rangle\langle\psi_{\varepsilon} | \, (\mathrm{Id}_{C}\otimes \mathcal{S}_{\varepsilon}) (|\psi \rangle\langle \psi| ) \: \right], \end{align} \noindent where $|\psi_\varepsilon \rangle = \frac{1}{n} \sum_{i,j=1}^n \, \varepsilon_{ij} \, |i\rangle \otimes |j\rangle \otimes |ij \rangle, \, |\psi \rangle = \frac{1}{n} \sum_{i,j=1}^n |i\rangle \otimes |j\rangle \otimes |ij \rangle \in \mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_C$. 
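As a small consistency check of this expression (illustrative only, not part of the argument), for small $n$ one can verify numerically that the honest action $U_\varepsilon$ attains value $1$, while the trivial strategy that returns the registers untouched attains only $\mathbb{E}_\varepsilon |\langle \psi_\varepsilon | \psi \rangle|^2 = 1/n^2$:

```python
import itertools
import numpy as np

n = 2
D = n * n
# |psi> = (1/n) sum_k |k>_AB |k>_C as a vector in C^(D*D).
psi = np.eye(D).reshape(-1) / n

vals_honest, vals_idle = [], []
for eps in itertools.product((-1.0, 1.0), repeat=D):
    U = np.diag(eps)                                  # U_eps on register AB
    psi_eps = np.kron(U, np.eye(D)) @ psi             # |psi_eps> = (U_eps x Id_C)|psi>
    # Honest prover applies U_eps, so the post-strategy state is |psi_eps> itself:
    vals_honest.append(abs(psi_eps @ psi_eps) ** 2)
    # "Do nothing" strategy returns |psi> unchanged:
    vals_idle.append(abs(psi_eps @ psi) ** 2)

assert np.allclose(np.mean(vals_honest), 1.0)
assert np.allclose(np.mean(vals_idle), 1.0 / n**2)
```

The second assertion reflects that $\langle \psi_\varepsilon | \psi \rangle = \frac{1}{n^2}\sum_{i,j} \varepsilon_{ij}$, whose squared modulus averages to $1/n^2$ over $\mathcal{Q}_{n^2}$.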
In this language, the existence of general attacks for arbitrary PV schemes translates into the coincidence of the value in the $s2w$ scenario with the honest value\footnote{ In fact this coincidence holds in general for any MROQG.}: \beq \label{CoincidenceValues} \omega_{s2w}(G_{Rad}) =\omega_H(G_{Rad})=1. \eeq As we said in the introduction, the main question we are interested in is the amount of entanglement necessary to establish this equality. It is natural then to define a restricted version of $\omega_{s2w}(G_{Rad})$ considering only strategies using a limited amount of resources. Here, we restrict the local dimension at any time during the protocol: for $\tilde k,\, k\in \mathbb{N}$ we define: \begin{align} \omega_{s2w;\tilde k, k}(G_{Rad}) := \sup_{\begin{subarray}{c} \tilde{\mathcal{A}}_\varepsilon, \, \tilde \mathcal{B}_\varepsilon,\, \mathcal{A} ,\, \mathcal{B}_\varepsilon \\ \varphi \in \mathcal{D}(\mathbb{C}^k \otimes \mathbb{C}^k) \end{subarray}} \mathbb{E}_\varepsilon \ \omega \Big(G_\varepsilon; (\tilde{\mathcal{A}}_\varepsilon \otimes \tilde \mathcal{B}_\varepsilon)\circ(\mathcal{A} \otimes \mathcal{B}_\varepsilon) (\, \cdot \, \otimes \varphi)\Big), \label{eq:ws2w} \end{align} where we restrict $$\dim(\mathcal{H}_{E_{a (b)}})\le k, \qquad \dim(\mathcal{H}_{A (B) \shortrightarrow A(B) } ) \times \dim( \mathcal{H}_{A (B) \shortrightarrow B (A) } )\le \tilde k.$$ I.e., we restrict, for each $\varepsilon \in \mathcal{Q}_{n^2}$, $$\mathcal{A} \in \mathrm{CPTP}(\ell_2^{n^2 k}, \, \ell_2^{\tilde k}),\quad \mathcal{B}_\varepsilon \in \mathrm{CPTP}( \ell_2^{ k}, \, \ell_2^{\tilde k} ), $$ $$ \tilde{\mathcal{A}}_\varepsilon \in \mathrm{CPTP}( \ell_2^{\tilde k}, \, \ell_2^{n} ), \quad \tilde \mathcal{B}_\varepsilon \in \mathrm{CPTP}( \ell_2^{\tilde k}, \, \ell_2^{n} ) .$$ Clearly, \beq\label{eq:LimS2wH} \lim_{\tilde k, k \shortrightarrow \infty} \omega_{s2w; \tilde k,k}(G_{Rad}) = \omega_{s2w}(G_{Rad}) =\omega_H(G_{Rad}). 
\eeq We want to study the rate of convergence of this limit. To the best of our knowledge, it is not even known whether the limit is in general attained at finite $k,\, \tilde k$. We are concerned with lower bounds on $k,\, \tilde k$ required to achieve a given degree of approximation in \eqref{eq:LimS2wH}. More precisely, we lower bound the difference $ \omega_H(G_{Rad}) - \omega_{s2w;\tilde k, k}(G_{Rad})$ in terms of $k,\, \tilde k$ and properties of the strategies considered. We postpone those results until Section \ref{Sec4}. Before that, we provide two reductions of the kind of strategies we consider, in order to prepare the ground for the next section. \subsection{Use of classical communication in cheating strategies} First, we consider the role of classical communication between Alice and Bob. In our model, we regard this resource as free and, in fact, we built the free communication of the classical information about $\varepsilon$ into the structure of the considered strategies (in the second round of local operations this parameter was considered public). This is justified by the fact that our interest lies in bounding the \emph{quantum} resources used for attacking $G_{Rad}$, which are assumed to be much more expensive than classical communication. However, there is a potential problem with this approach: the players might use further classical communication apart from that of $\varepsilon$ -- \emph{extra classical communication} from now on. In our model, this extra classical communication would be included in the definition of the channels $\mathcal{A}$ and $\mathcal{B}_\varepsilon$. In the $\mathfrak{S}_{s2w,\tilde k , k}$ scenario, the dimension $\tilde k$ would then no longer be a reliable witness for the quantum resources spent by a given strategy: $\tilde k$ would also include the dimension of the extra classical messages shared by Alice and Bob.
Nonetheless, we show that the amount of \emph{useful} extra classical communication in our setting is bounded by the initial dimension of the quantum system manipulated by the players, that is, by $k$ and $n$. The following lemma lets us control the contribution of the classical part of the players' actions to $\tilde k$. \begin{lemma}\label{Lemma_ClassCom} The optimization over $\mathcal{S} \in \mathfrak{S}_{s2w,\tilde k , k} $ in \eqref{eq:ws2w} can be restricted to strategies using extra classical communication of local dimension $ \tilde k_{cl} \le n^4 k^2$. \end{lemma} \begin{proof} The result follows from convexity, taking into account that the extreme points of the set of instruments acting on a given Hilbert space of dimension $d$ have at most $d^2$ outcomes. See, for instance, \cite[Rmk. 7.9., p.158]{BuschBook}. Consider an arbitrary strategy $\mathcal{S} = \lbrace \tilde \mathcal{A}_\varepsilon, \tilde \mathcal{B}_\varepsilon,\mathcal{A},\mathcal{B}_\varepsilon, \varphi\rbrace \in \mathfrak{S}_{s2w,m , k} $ using extra classical communication of local dimension $m_{cl}$. The dimensions $m$, $m_{cl}$ are free parameters that will be fixed at the end of the proof. Accordingly, we can make these classical messages explicit in the structure of the channels $\mathcal{A}$ and $\mathcal{B}_\varepsilon$: $$ \mathcal{A}(\, \cdot \,) = \sum_{c_a=1}^{m_{cl}} \mathcal{A}^{c_a}(\, \cdot \,) \otimes |c_a\rangle\langle c_a|\quad : \quad \mathcal{A}^{c_a}\in\mathrm{CP}(\ell_2^{n^2 k},\ell_2^{m/m_{cl}}) \text{ for any }c_a, $$ $$ \mathcal{B}_\varepsilon(\, \cdot \,) = \sum_{c_b=1}^{m_{cl}} \mathcal{B}_\varepsilon^{c_b}(\, \cdot \,) \otimes |c_b\rangle\langle c_b|\quad : \quad \mathcal{B}^{c_b}_\varepsilon\in\mathrm{CP}(\ell_2^{k},\ell_2^{m/m_{cl}}) \text{ for any }c_b.
$$ These expressions are nothing but the description of certain instruments in $\mathrm{Ins}(\ell_2^k, \ell_2^{m/m_{cl}})$ ($\mathrm{Ins}(\ell_2^{n^2 k }, \ell_2^{m/m_{cl}})$ in the first case) with $m_{cl}$ outcomes each. As we said before, the extreme points of $\mathrm{Ins}(\ell_2^k, \ell_2^{m/m_{cl}})$ consist of instruments with at most $k^2$ outcomes ($n^4 k^2$ in the first case). Therefore, we can rewrite the channels $\mathcal{A}$, $\mathcal{B}_\varepsilon$ as convex combinations of such extreme points: $$ \mathcal{A}(\, \cdot \,) = \sum_s \alpha_s \mathcal{A}_s(\, \cdot \,), $$ $$ \mathcal{B}_\varepsilon(\, \cdot \,) = \sum_s \beta_{\varepsilon,s} \mathcal{B}_{\varepsilon,s} (\, \cdot \,), $$ where, for each $s, \, \varepsilon$: \begin{itemize} \item $0\le \alpha_s,\, \beta_{\varepsilon,s} \le 1$ : $\sum_s \alpha_s = 1 = \sum_s \beta_{\varepsilon,s}$; \item $\mathcal{A}_s \in \mathrm{Ins}(\ell_2^{n^2 k}, \ell_2^{m/m_{cl}}) $, $\mathcal{B}_{\varepsilon,s} \in \mathrm{Ins}(\ell_2^k, \ell_2^{m/m_{cl}})$ with at most $n^4 k^2$ and $k^2$ outcomes, respectively. For simplicity we bound $\tilde k_{cl}$ by the larger of these two bounds, $ \tilde k_{cl} \le n^4 k^2$. \end{itemize} Denote by $\mathcal{S}_{s,s'}$ the strategy specified by the elements $\lbrace \tilde \mathcal{A}_\varepsilon, \tilde \mathcal{B}_\varepsilon,\mathcal{A}_s,\mathcal{B}_{\varepsilon,s'}, \varphi \rbrace_\varepsilon$ and by $\mathcal{S}_{\varepsilon;s,s'}(\, \cdot \,)$ the corresponding channels, defined by the generic prescription \eqref{StratS2w}. Notice that $\mathcal{S}_\varepsilon (\, \cdot \,)= \sum_{s,s'} \, \alpha_s \beta_{\varepsilon,s'} \, \mathcal{S}_{\varepsilon;s,s'}(\, \cdot \,) $. Now, focus on the value achieved in $G_{Rad}$.
It turns out that $\omega(G_{Rad};\mathcal{S})$ is linear in $\mathcal{S}$, a fact that allows us to write: $$ \omega(G_{Rad};\mathcal{S}) = \sum_s \alpha_s \, \mathbb{E}_\varepsilon \, \sum_{s'} \, \beta_{\varepsilon,s'}\, \omega(G_{Rad}; \lbrace\mathcal{S}_{\varepsilon;s,s'}\rbrace_\varepsilon) \le \max_{s} \left\lbrace \mathbb{E}_\varepsilon \max_{s'} \left\lbrace \omega(G_{Rad}; \lbrace\mathcal{S}_{\varepsilon;s,s'}\rbrace_\varepsilon) \right\rbrace \right\rbrace. $$ Denoting by $s^*$, ${s'_\varepsilon}^*$ the indices at which the maxima above are attained, the strategy $\lbrace \tilde \mathcal{A}_\varepsilon, \tilde \mathcal{B}_\varepsilon,$ $\mathcal{A}_{s^*},\mathcal{B}_{\varepsilon,{s'_\varepsilon}^*}, \varphi \rbrace_\varepsilon$, which uses extra classical communication of local dimension at most $\tilde k_{cl} \le n^4 k^2$, can now be regarded as an element of $\mathfrak{S}_{s2w;\tilde k , k}$ with $\tilde k = m \tilde k_{cl}/m_{cl}$. This proves the claim. \end{proof} \subsection{Pure strategies} The second reduction consists of purifying arbitrary strategies. We start by fixing some notation.
We say that a strategy $\mathcal{S} = \lbrace \tilde \mathcal{A}_\varepsilon, \tilde \mathcal{B}_\varepsilon,\mathcal{A},\mathcal{B}_\varepsilon, \varphi\rbrace_\varepsilon \in \mathfrak{S}_{s2w}$ is \emph{pure} if the channels $\tilde \mathcal{A}_\varepsilon, \tilde \mathcal{B}_\varepsilon,\mathcal{A},\mathcal{B}_\varepsilon$ can be written as: \begin{align} \mathcal{A} (\, \cdot \, ) &= V (\, \cdot \, ) V^\dagger, &\mathcal{B}_\varepsilon (\, \cdot \, ) &= W_\varepsilon (\, \cdot \, ) W_\varepsilon^\dagger, \\ \tilde \mathcal{A}_\varepsilon (\, \cdot \, ) &= \mathrm{Tr}_{anc_a} \, \tilde V_\varepsilon (\, \cdot \, )\tilde V_\varepsilon^\dagger, &\tilde \mathcal{B}_\varepsilon (\, \cdot \, ) &= \mathrm{Tr}_{anc_b} \, \tilde W_\varepsilon (\, \cdot \, )\tilde W_\varepsilon^\dagger, \end{align} for some contractive operators $$ V:\mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_{E_a} \longrightarrow \mathcal{H}_{A \shortrightarrow A }\otimes \mathcal{H}_{A \shortrightarrow B }, \qquad W_\varepsilon: \mathcal{H}_{E_b} \longrightarrow \mathcal{H}_{B \shortrightarrow B }\otimes \mathcal{H}_{B \shortrightarrow A }, $$ $$ \tilde V_\varepsilon: \mathcal{H}_{A \shortrightarrow A }\otimes \mathcal{H}_{B \shortrightarrow A } \longrightarrow \mathcal{H}_A \otimes \mathcal{H}_{anc_a}, \qquad \tilde W_\varepsilon: \mathcal{H}_{B \shortrightarrow B }\otimes \mathcal{H}_{A \shortrightarrow B } \longrightarrow \mathcal{H}_B \otimes \mathcal{H}_{anc_b}, $$ where $\mathcal{H}_{anc_a}$, $\mathcal{H}_{anc_b}$ are arbitrary ancillary Hilbert spaces. In the restricted scenario s2w;$\tilde k , k$, these operators are of the form: \beq\label{pureStrategy} V:\ell_2^{n^2 k} \longrightarrow \ell_2^{\tilde k}, \qquad W_\varepsilon:\ell_2^{k} \longrightarrow \ell_2^{\tilde k}, \qquad \tilde V_\varepsilon,\,\tilde W_\varepsilon: \ell_2^{\tilde k} \longrightarrow \ell_2^{n r}, \eeq where $r$ is some natural number.
For convenience, we identify pure strategies with families of such \emph{pure} objects, setting the notation $\mathcal{S}^\mathcal{U} = \lbrace \tilde V_\varepsilon, \tilde W_\varepsilon, V,W_\varepsilon , |\varphi\rangle \rbrace_\varepsilon$. We further denote by $\mathfrak{S}^\mathcal{U}_{s2w}$ the subset of pure strategies in the s2w scenario and by $\mathfrak{S}^\mathcal{U}_{s2w;\tilde k, k}$ the corresponding subset in the model with limited dimension. Due to Stinespring's dilation theorem \cite{Stinespring55}, it turns out that $\mathfrak{S}^\mathcal{U}_{s2w} = \mathfrak{S}_{s2w} $. However, when we restrict the dimension of the considered strategies, the situation is a bit subtler: the Stinespring dilation of the channels involved affects the relevant dimensions defining the models $\mathfrak{S}_{s2w;\tilde k, k}$ and $\mathfrak{S}^\mathcal{U}_{s2w;\tilde k, k}$. This is taken care of by the following lemma: \begin{lemma}\label{Lemma_Purifications} Any strategy $\mathcal{S} \in \mathfrak{S}_{s2w;\tilde k, k}$ can be regarded as a pure strategy $\mathcal{S}^\mathcal{U} \in \mathfrak{S}^\mathcal{U}_{s2w;\tilde k', k}$ where $ \tilde k' \le n^2 k \tilde k^2. $ That is, the chain of containments $\mathfrak{S}^\mathcal{U}_{s2w;\tilde k, k} \subseteq \mathfrak{S}_{s2w;\tilde k, k} \subseteq \mathfrak{S}^\mathcal{U}_{s2w;\tilde k', k}$ holds. \end{lemma} \begin{proof} Fix a strategy $\mathcal{S} = \lbrace \tilde \mathcal{A}_\varepsilon, \tilde \mathcal{B}_\varepsilon,\mathcal{A},\mathcal{B}_\varepsilon, \varphi\rbrace $ in $ \mathfrak{S}_{s2w;\tilde k, k}$. We are going to use Stinespring dilations to purify the corresponding channels \begin{equation}\label{StratS2w2} \mathcal{S}_\varepsilon (\, \cdot \, ) = (\tilde{\mathcal{A}}_\varepsilon \otimes \tilde \mathcal{B}_\varepsilon)\circ(\mathcal{A} \otimes \mathcal{B}_\varepsilon) (\, \cdot \, \otimes \varphi).
\end{equation} We start with $$ \tilde{\mathcal{A}}_\varepsilon ,\, \tilde \mathcal{B}_\varepsilon \in \mathrm{CPTP}( \ell_2^{\tilde k}, \, \ell_2^{n} ). $$ These channels can be lifted (due to a Stinespring's dilation) to be of the form: $$ \tilde{\mathcal{A}}_\varepsilon (\, \cdot \, ) = \mathrm{Tr}_{\widetilde{anc}} \tilde V_\varepsilon (\, \cdot \, ) \tilde V_\varepsilon^\dagger , \qquad \tilde{\mathcal{B}}_\varepsilon (\, \cdot \, ) = \mathrm{Tr}_{\widetilde{anc}} \tilde W_\varepsilon (\, \cdot \, )\tilde W_\varepsilon^\dagger , $$ where $\tilde V_\varepsilon,\, \tilde W_\varepsilon: \ell_2^{\tilde k} \longrightarrow \ell_2^n \otimes \mathcal{H}_{\widetilde{anc}}$ are Stinespring's isometries and $\dim(\mathcal{H}_{\widetilde{anc}})$ can be upper bounded by $n \tilde k$. Proceeding similarly with $\mathcal{A} \in \mathrm{CPTP}(\ell_2^{n^2 k}, \, \ell_2^{\tilde k})$ and $\mathcal{B}_\varepsilon \in \mathrm{CPTP}( \ell_2^{ k}, \, \ell_2^{\tilde k} )$ we obtain: $$ \mathcal{A} (\, \cdot \, ) = \mathrm{Tr}_{anc_1} V (\, \cdot \, ) V^\dagger , \qquad \mathcal{B}_\varepsilon (\, \cdot \, ) = \mathrm{Tr}_{anc_2} W_\varepsilon (\, \cdot \, ) W_\varepsilon^\dagger , $$ for Stinespring's dilations $V : \ell_2^{n^2 k} \longrightarrow \ell_2^{\tilde k}\otimes \mathcal{H}_{anc_1} $, $ W_\varepsilon : \ell_2^k \longrightarrow \ell_2^{\tilde k}\otimes \mathcal{H}_{anc_2} $ such that $\dim( \mathcal{H}_{anc_1}) \le n^2 k \tilde k$, $\dim( \mathcal{H}_{anc_2}) \le k \tilde k$. 
With all that, and denoting $\mathcal{H}_{anc_a} \equiv \mathcal{H}_{anc_1} \otimes \mathcal{H}_{\widetilde{anc}}$, $\mathcal{H}_{anc_b} \equiv \mathcal{H}_{anc_2} \otimes \mathcal{H}_{\widetilde{anc}}$, we define the channels \begin{align*} \tilde \mathcal{A}_\varepsilon^\mathcal{U} (\, \cdot \, ) &:= \mathrm{Tr}_{anc_a} ( \tilde V_\varepsilon \otimes \mathrm{Id}_{anc_1} ) \, (\, \cdot \, ) \, ( \tilde V_\varepsilon^\dagger \otimes \mathrm{Id}_{anc_1}) , \\[0.2em] \tilde \mathcal{B}_\varepsilon^\mathcal{U} (\, \cdot \, ) &:= \mathrm{Tr}_{anc_b} ( \tilde W_\varepsilon \otimes \mathrm{Id}_{anc_2} )\, (\, \cdot \, ) \, ( \tilde W_\varepsilon^\dagger \otimes \mathrm{Id}_{anc_2} ), \end{align*} $$ \mathcal{A}^\mathcal{U} (\, \cdot \, ) := V (\, \cdot \, ) V^\dagger , \qquad \mathcal{B}_\varepsilon^\mathcal{U} (\, \cdot \, ) := W_\varepsilon (\, \cdot \, ) W_\varepsilon^\dagger . $$ Then, we can rewrite \eqref{StratS2w2} as: \begin{equation*} \mathcal{S}_\varepsilon (\, \cdot \, ) = (\tilde \mathcal{A}^\mathcal{U}_\varepsilon \otimes \tilde \mathcal{B}^\mathcal{U}_\varepsilon ) \circ ( \mathcal{A}^\mathcal{U} \otimes \mathcal{B}^\mathcal{U}_\varepsilon ) (\, \cdot \, \otimes |\varphi\rangle\langle \varphi| ) . \end{equation*} Clearly, the strategy $\mathcal{S}^\mathcal{U}:= \lbrace \tilde \mathcal{A}_\varepsilon^\mathcal{U}, \tilde \mathcal{B}_\varepsilon^\mathcal{U},\mathcal{A}^\mathcal{U},\mathcal{B}_\varepsilon^\mathcal{U}, \varphi\rbrace_\varepsilon$ is pure, and a careful look at the definition of its channels reveals that $\mathcal{S}^\mathcal{U} \in \mathfrak{S}^\mathcal{U}_{s2w;\tilde k',k} $ with $ \tilde k' \le n^2 k \tilde k^2 $, finishing the proof of the lemma. \end{proof} With Lemmas \ref{Lemma_ClassCom} and \ref{Lemma_Purifications} at hand, we can now focus on the study of strategies in $\mathfrak{S}^\mathcal{U}_{s2w;\tilde k',k} $.
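The dimension bookkeeping in the proof above rests on the standard Stinespring construction: from a Kraus decomposition $\lbrace K_k \rbrace$ of a channel in $\mathrm{CPTP}(\ell_2^{d_{in}},\ell_2^{d_{out}})$ one builds the isometry $V = \sum_k K_k \otimes |k\rangle$, so the environment dimension is at most $d_{in} \cdot d_{out}$. A hedged numerical illustration of this construction (the single-qubit depolarizing channel below is our own example, not a channel appearing in the text):

```python
import numpy as np

# Channel: single-qubit depolarizing with parameter p (illustrative choice)
p = 0.3
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kraus = [np.sqrt(1 - 3 * p / 4) * I2] + [np.sqrt(p / 4) * P for P in (X, Y, Z)]

# Stinespring isometry V = sum_k K_k ⊗ |k>, environment dim <= d_in * d_out = 4
d_env = len(kraus)
V = np.zeros((2 * d_env, 2), dtype=complex)
for k, K in enumerate(kraus):
    e_k = np.zeros((d_env, 1)); e_k[k, 0] = 1.0
    V += np.kron(K, e_k)
assert np.allclose(V.conj().T @ V, np.eye(2))   # V is an isometry

# Tr_env[V rho V†] reproduces the channel sum_k K_k rho K_k†
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
out = (V @ rho @ V.conj().T).reshape(2, d_env, 2, d_env)
dilated = np.einsum('iaja->ij', out)            # partial trace over the environment
kraus_out = sum(K @ rho @ K.conj().T for K in kraus)
assert np.allclose(dilated, kraus_out)
```

The same counting applied to $\mathcal{A}$ and $\tilde{\mathcal{A}}_\varepsilon$ gives the ancilla bounds $n^2 k \tilde k$ and $n \tilde k$ used in the proof.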
Given a general strategy $\mathcal{S} \in \mathfrak{S}_{s2w;\tilde k,k} $, Lemma \ref{Lemma_ClassCom} guarantees that $\mathcal{S}$ can be taken such that the dimension of the classical resources used is upper bounded by \begin{equation}\label{Corresponde_k-k'2} \tilde k_{cl} \le n^4 k^2. \end{equation} Then, Lemma \ref{Lemma_Purifications} allows us to relate $\mathcal{S}$ to a pure strategy $ \mathcal{S}^\mathcal{U} \in \mathfrak{S}^\mathcal{U}_{s2w;\tilde k',k}$ such that \begin{equation}\label{Corresponde_k-k'1} \tilde k' \le n^2 k \tilde k^2. \end{equation} Accordingly, in the rest of this manuscript we work in the model $\mathfrak{S}^\mathcal{U}_{s2w;\tilde k',k}$, referring the reader to \eqref{Corresponde_k-k'1} and \eqref{Corresponde_k-k'2} for the relation with the resources used by more general strategies. Notice, however, that these correspondences are at most polynomial in $n$, $k$ and $\tilde k$ and, in fact, will only introduce corrections by constant factors in the bounds we state later on. In this sense, the precise exponents in \eqref{Corresponde_k-k'1}, \eqref{Corresponde_k-k'2} are irrelevant. This will become clearer in the next section.
For convenience, we finish this section recalling the expression of $\omega(G_{Rad}; \mathcal{S}^\mathcal{U})$, Equation \eqref{DefvalueStrat_MROQG_GRad2}, particularized for pure strategies $\mathcal{S}^\mathcal{U} =\lbrace \tilde V_\varepsilon , \tilde W_\varepsilon, V, W_\varepsilon,\varphi \rbrace$: \begin{align}\label{DefvalueStrat_MROQG_GRad3} \omega (G_{Rad};\mathcal{S}^\mathcal{U} ) &= \mathbb{E}_{\varepsilon} \ \mathrm{Tr}\left[\: |\psi_{\varepsilon} \rangle\langle\psi_{\varepsilon} |\, (\mathrm{Id}_{C}\otimes \mathcal{S}^\mathcal{U}_{\varepsilon})\: \big(|\psi \rangle\langle \psi| \big) \: \right], \end{align} where now: $$ \mathcal{S}_\varepsilon^\mathcal{U} (\, \cdot \, ) =\mathrm{Tr}_{\mathcal{H}_{anc_a \otimes anc_b}}\, \left [ (\tilde V_\varepsilon \otimes \tilde W_\varepsilon) \, (V \otimes W_\varepsilon ) \, (\, \cdot \, \otimes |\varphi\rangle\langle \varphi| ) \, (V^\dagger \otimes W_\varepsilon^\dagger ) \, (\tilde V_\varepsilon^\dagger \otimes \tilde W_\varepsilon^\dagger )\right]. $$ Notice that for strategies in the more specific model $ \mathfrak{S}_{s2w;\tilde k',k}^\mathcal{U} $, the operators $\tilde V_\varepsilon , \tilde W_\varepsilon, V, W_\varepsilon$ are specified as in \eqref{pureStrategy} and, therefore, $\mathcal{H}_{anc_a}$ and $\mathcal{H}_{anc_b}$ in this case are identified with $\ell_2^{ r}$ for some $r\in \mathbb{N}$. \begin{remark} Alternatively, \eqref{DefvalueStrat_MROQG_GRad3} can be rewritten as: \begin{align}\label{DefvalueStrat_MROQG_GRad4} \omega (G_{Rad};\mathcal{S}^\mathcal{U} ) &= \, \mathbb{E}_\varepsilon \, \Big\| \frac{1}{n^2} \sum_{i,j} \varepsilon_{ij} (\langle i|\tilde V_\varepsilon \otimes \langle j|\tilde W_\varepsilon ) \, (V|ij\rangle \otimes W_\varepsilon ) \, |\varphi \rangle \Big\|^2_{\mathcal{H}_{anc_a}\otimes \mathcal{H}_{anc_b}}. \end{align} As commented before, in the $ \mathfrak{S}_{s2w;\tilde k',k}^\mathcal{U} $ scenario, $\mathcal{H}_{anc_a}\otimes \mathcal{H}_{anc_b} = \ell_2^{ r^2}$. 
Before giving the easy proof of this claim, let us clarify the notation used above. By $V|ij\rangle $ we mean the operator $V \in M_{\tilde k',n^2 k }$ with its indices corresponding to $\ell_2^{n^2}$ contracted with the vector $|ij\rangle $. That is, if we expand $V$ in coordinates, $V = \sum_{ a,b=1}^n \sum_{m=1}^{k} \sum_{p=1}^{ \tilde k'} V_{p,abm} |p \rangle\langle ab m| $, then $V|ij\rangle = \sum_{m=1}^{k} \sum_{p=1}^{ \tilde k' } V_{p,i j m} |p \rangle\langle m| \in M_{\tilde k', k}$. Similarly for $\langle i |\tilde{V}_{\varepsilon}$ and $\langle j|\tilde{ W}_{\varepsilon}$. \begin{proof} The proof is completely elementary. First, we notice that for any vectors $|\xi\rangle\in \mathcal{H}$, $|\eta\rangle\in\mathcal{H}' $ and any operator $U \in \mathcal{B}(\mathcal{H}',\mathcal{H}\otimes \mathcal{K})$ \begin{align*} \mathrm{Tr} \left[ |\xi\rangle\langle \xi | \,\mathrm{Tr}_\mathcal{K} U \, |\eta\rangle\langle \eta|\, U^\dagger \right] &= \mathrm{Tr} \left[( |\xi\rangle\langle \xi | \otimes \mathrm{Id}_\mathcal{K}) \, U \, |\eta\rangle\langle \eta| \, U^\dagger \right] \\ & = \langle \eta| U^\dagger ( |\xi \rangle \otimes \mathrm{Id}_\mathcal{K}) \ (\langle \xi | \otimes \mathrm{Id}_\mathcal{K}) \, U \, |\eta\rangle \\ &= \big\| (\langle \xi | \otimes \mathrm{Id}_\mathcal{K}) \, U \, |\eta\rangle \big\|^2_\mathcal{K}.
\end{align*} Applying this elementary identity to $|\psi_\varepsilon \rangle\in \mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_C \equiv \mathcal{H} $, $ |\psi\rangle \otimes |\varphi\rangle \in \mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_C \otimes \mathcal{H}_E\equiv \mathcal{H}'$ and the operator $\mathrm{Id}_C\otimes (\tilde V_\varepsilon \otimes \tilde W_\varepsilon)\,(V \otimes W_\varepsilon ) \in \mathcal{B}(\mathcal{H}', \mathcal{H} \otimes \mathcal{H}_{anc_a}\otimes \mathcal{H}_{anc_b})$ we have that, for each $\varepsilon\in \mathcal{Q}_{n^2}$: \begin{align*} &\mathrm{Tr}\left[\: |\psi_{\varepsilon} \rangle\langle\psi_{\varepsilon} |\, (\mathrm{Id}_{C}\otimes \mathcal{S}^\mathcal{U}_{\varepsilon})\: \big(|\psi \rangle\langle \psi| \big) \: \right] \\ &\qquad = \left\| ( \langle \psi_\varepsilon |\otimes \mathrm{Id}_{anc_a \otimes anc_b}) \left( \mathrm{Id}_C\otimes(\tilde V_\varepsilon \otimes \tilde W_\varepsilon)\, (V \otimes W_\varepsilon) \right)\, ( |\psi\rangle \otimes |\varphi \rangle ) \right\|^2_{\mathcal{H}_{anc_a}\otimes \mathcal{H}_{anc_b}}. \end{align*} Equation \eqref{DefvalueStrat_MROQG_GRad4} is obtained from the last line above by averaging over $\varepsilon$ and recalling the definitions $|\psi_\varepsilon \rangle = \frac{1}{n} \sum_{i,j=1}^n \varepsilon_{ij} |ij\rangle_{AB} \otimes |ij\rangle_C$ and $|\psi \rangle = \frac{1}{n} \sum_{i,j=1}^n |ij\rangle_{AB} \otimes |ij\rangle_C $. \end{proof} \end{remark} \section{Bounds for ``smooth'' strategies. Theorem \ref{mainThm}}\label{Sec4} This section is devoted to the proof of Theorem \ref{mainThm}, which provides lower bounds on the resources needed to break $G_{Rad}$ by strategies characterized by regularity measures based on the parameter $\sigma$ defined in Section \ref{Sec2.4.3}.
When we refer here to a cheating strategy for $G_{Rad}$, unless explicitly stated otherwise, we mean a pure strategy $\mathcal{S}^\mathcal{U} =\lbrace \tilde V_\varepsilon,\, \tilde W_\varepsilon, \, V,\, W_\varepsilon,\, |\varphi \rangle \rbrace_\varepsilon \in \mathfrak{S}^\mathcal{U}_{s2w;\tilde k', k}$. As explained in the introduction, the main idea leading to Theorem \ref{mainThm} is the understanding of cheating strategies for $G_{Rad}$ as assignments on the hypercube $\mathcal{Q}_{n^2}$, i.e., vector-valued functions $\Phi: \mathcal{Q}_{n^2} \rightarrow X$ where $X$ is a suitable Banach space. Given a strategy $\mathcal{S}^\mathcal{U}$, the corresponding assignment $\Phi_{\mathcal{S}^\mathcal{U}}$ must be related to the value $\omega(G_{Rad};\mathcal{S}^\mathcal{U}) $. Ideally, we hope to bound $\omega(G_{Rad};\mathcal{S}^\mathcal{U}) $ by the expected value of the norm of $\Phi_{\mathcal{S}^\mathcal{U}}$, a quantity for which we can use Corollary \ref{Cor1} to obtain upper bounds. Equation \eqref{DefvalueStrat_MROQG_GRad4} gives us a first hint on how to construct $\Phi_{\mathcal{S}^\mathcal{U}}$. Given $\mathcal{S}^\mathcal{U} =\lbrace \tilde V_\varepsilon,\, \tilde W_\varepsilon, \, V,\, W_\varepsilon,\, |\varphi \rangle \rbrace_\varepsilon$, consider the map: \begin{equation}\label{Def_Phi0} \begin{array}{rccc} \Phi_{\mathcal{S}^\mathcal{U}} : & \mathcal{Q}_{n^2} & \longrightarrow & \ell_2^{r^2} \\[1em] & \varepsilon & \mapsto & \Phi_{\mathcal{S}^\mathcal{U}} (\varepsilon) = \frac{1}{n^2} \, \sum_{ij} \, \, \varepsilon_{ij} \, (\langle i |\tilde{V}_{\varepsilon} \otimes \langle j|\tilde{ W}_{\varepsilon})\, (V|ij\rangle \otimes W_\varepsilon) |\varphi\rangle \end{array} , \end{equation} where $ r $ is determined by the strategy, recall \eqref{pureStrategy}.
In this language, Equation \eqref{DefvalueStrat_MROQG_GRad4} now reads: \beq\label{Equivalence_Phi-omega} \omega(G_{Rad}; \mathcal{S}^\mathcal{U}) = \mathbb{E}_\varepsilon \| \Phi_{\mathcal{S}^\mathcal{U}} (\varepsilon ) \|_{\ell_2^{ r^2}}^2 , \eeq so we are on the right track. Considering the trivial bound $\mathbb{E}_\varepsilon \| \Phi_{\mathcal{S}^\mathcal{U}} (\varepsilon ) \|_{\ell_2^{ r^2 }}^2 \le \mathbb{E}_\varepsilon \| \Phi_{\mathcal{S}^\mathcal{U}} (\varepsilon ) \|_{\ell_2^{ r^2}}$ (valid because each $\| \Phi_{\mathcal{S}^\mathcal{U}} (\varepsilon ) \|^2_{\ell_2^{ r^2}}$ is a probability, hence at most $1$) and Corollary \ref{Cor1}, we can obtain -- recall Definition \ref{Def_sigma} for $\sigma_{\Phi_{\mathcal{S}^\mathcal{U}}}$: \begin{equation*} \omega(G_{Rad}; \mathcal{S}^{\mathcal{U}}) \le \big \|\mathbb{E}_\varepsilon \Phi_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big\|_{\ell_2^{ r^2} } + C \ \sigma_{\Phi_{\mathcal{S}^\mathcal{U}}} \mathrm{T}^{(n^2)}_2 (\ell_2^{ r^2} ). \end{equation*} Furthermore, $\mathrm{T}^{(n^2)}_2 (\ell_2^{ r^2} ) =1$ since, more generally, $\mathrm{T}_2 (\ell_2^{ r^2} ) = 1 $. The main problem with this approach is that the quantity $\big \|\mathbb{E}_\varepsilon \Phi_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big\|_{\ell_2^{ r^2} }$ might be of the same order as $\omega(G_{Rad}; \mathcal{S}^{\mathcal{U}})$, making the previous bound trivial. The reason is that, if the strategy is near optimal, that is, $\omega(G_{Rad}; \mathcal{S}^{\mathcal{U}}) \approx 1$, then for most $\varepsilon \in \mathcal{Q}_{n^2}$, $\Phi_{\mathcal{S}^\mathcal{U}}(\varepsilon)$ is a vector with Euclidean norm $\approx 1$. Then, we can easily modify the map $\Phi_{\mathcal{S}^\mathcal{U}}$, \emph{without increasing} any relevant dimension, by composing $\Phi_{\mathcal{S}^\mathcal{U}}(\varepsilon)$ with an $\varepsilon$-dependent unitary that ``aligns'' all the vectors $\Phi_{\mathcal{S}^\mathcal{U}}(\varepsilon)$ in the same direction.
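The equality $\mathrm{T}_2(\ell_2^{r^2})=1$ invoked above is nothing but the parallelogram law: in any Hilbert space, $\mathbb{E}_\varepsilon \| \sum_i \varepsilon_i x_i\|^2 = \sum_i \|x_i\|^2$ when averaging over independent uniform signs. A quick numerical sanity check of this identity (the number of vectors and the ambient dimension below are arbitrary choices of ours):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
m, d = 5, 7                      # number of vectors and ambient dimension (arbitrary)
xs = rng.standard_normal((m, d))

# Exact sign-average of || sum_i eps_i x_i ||^2 over all 2^m sign patterns
avg = np.mean([
    np.linalg.norm(np.tensordot(np.array(eps), xs, axes=1)) ** 2
    for eps in itertools.product([-1.0, 1.0], repeat=m)
])
assert np.isclose(avg, np.sum(xs ** 2))  # equals sum_i ||x_i||^2: type-2 constant 1
```

The cross terms cancel exactly under the full sign average, which is why the type-2 inequality holds with constant $1$ in Hilbert space, with no dimension dependence.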
For this modified map, $\big \|\mathbb{E}_\varepsilon \Phi_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big\|_{\ell_2^{ r^2} } = \mathbb{E}_\varepsilon \big \| \Phi_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big\|_{\ell_2^{ r^2} } \ge \omega (G_{Rad}; \mathcal{S}^{\mathcal{U}}).$ The approach presented so far is unable to detect such an artefact, so we now look at alternative constructions for $\Phi_{\mathcal{S}^\mathcal{U}}$. What we do next is simplify the image of the map $\Phi_{\mathcal{S}^\mathcal{U}}$, considering more involved choices for the output Banach space. This allows us to preserve an equivalence in the spirit of \eqref{Equivalence_Phi-omega} while obtaining good upper bounds for $\big \|\mathbb{E}_\varepsilon \Phi_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big\|_{ X }$. Given a strategy $\mathcal{S}^\mathcal{U} =\lbrace \tilde V_\varepsilon,\, \tilde W_\varepsilon, \, V,\, W_\varepsilon,\, |\varphi \rangle \rbrace_\varepsilon$ we define the following two alternatives to $\Phi_{\mathcal{S}^\mathcal{U}}$: $$ \begin{array}{rccc} \Phi^{i}_{\mathcal{S}^\mathcal{U}} : & \mathcal{Q}_{n^2} & \longrightarrow & M_{ r^2, k \tilde k'} \\[1em] & \varepsilon & \mapsto & \Phi^{i}_{\mathcal{S}^\mathcal{U}} (\varepsilon) = \frac{1}{n^2} \, \sum_{ij} \, \, \varepsilon_{ij} \, (\langle i |\tilde{V}_{\varepsilon} \otimes \langle j|\tilde{ W}_{\varepsilon})\, (V|ij\rangle \otimes \mathrm{Id}_{\ell_2^{\tilde k'}}), \\[2em] \Phi^{ii}_{\mathcal{S}^\mathcal{U}} : & \mathcal{Q}_{n^2} & \longrightarrow & \mathcal{S}_1^{\tilde k'\!, n } \otimes_{ (\varepsilon, \pi)_{ \sfrac{1}{2} } } \mathcal{S}_1^{\tilde k'\!, n} \\[1em] & \varepsilon & \mapsto & \Phi^{ii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) = \frac{1}{n^2} \, \sum_{ij} \, \, \varepsilon_{ij} \, \langle i | \otimes \langle j | \otimes (V|ij\rangle \otimes W_\varepsilon ) \, |\varphi \rangle. \end{array} $$ These are the central objects we study to obtain Theorem \ref{mainThm}.
Recall that the output space in $\Phi^{ii}_{\mathcal{S}^\mathcal{U}}$ was defined at the end of Section \ref{Sec2.4.3} as the interpolation space $( \mathcal{S}_1^{\tilde k'\!, n } \otimes_{ \varepsilon } \mathcal{S}_1^{\tilde k'\!, n}, \mathcal{S}_1^{\tilde k'\!, n } \otimes_{ \pi } \mathcal{S}_1^{\tilde k'\!, n})_{1/2} $. Now we comment on the idea behind the definitions of these maps: recall that a strategy $\mathcal{S}^\mathcal{U} = \lbrace \tilde V_\varepsilon,\, \tilde W_\varepsilon, \, V,\, W_\varepsilon,\, |\varphi \rangle \rbrace_\varepsilon$ consists of two rounds of local operations with a communication stage in between. Fixing the first round, which is related to $V,\, W_\varepsilon$ and $ |\varphi \rangle $, and understanding the optimization over all possible $\tilde V_\varepsilon,\, \tilde W_\varepsilon$ as the computation of a particular norm leads to the definition of $\Phi^{ii}_{\mathcal{S}^\mathcal{U}}$. When we instead fix $\tilde V_\varepsilon,\, \tilde W_\varepsilon$ and $V$ -- the latter being $ \varepsilon$-independent -- and optimize over all possible $(\mathrm{Id}_{\ell_2^{k}} \otimes W_\varepsilon)|\varphi \rangle$, we obtain $\Phi^{i}_{\mathcal{S}^\mathcal{U}}$. Next we describe how these maps are related to $G_{Rad}$, postponing the proofs until later -- cf. Section \ref{Proof_Lemmas_Main}. \begin{lemma}\label{Lemma_Main1} For any strategy $\mathcal{S}^\mathcal{U}$, $$ \omega ( G_{Rad}; \mathcal{S}^\mathcal{U}) \le \, \mathbb{E}_\varepsilon \, \big \| \Phi^{i(ii)}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big \|_{X^{i(ii)}}, $$ where we have denoted $X^i = M_{ r^2, k \tilde k'}$ and $X^{ii} = \mathcal{S}_1^{\tilde k'\!, n } \otimes_{ (\varepsilon, \pi)_{ \sfrac{1}{2} } } \mathcal{S}_1^{\tilde k'\!, n}$.
\end{lemma} \begin{remark}\label{Lemma_Main1_Rmk} For $\Phi_{\mathcal{S}^\mathcal{U}}^{ii}$, the previous statement can be strengthened to $$ \omega ( G_{Rad}; \mathcal{S}^\mathcal{U}) \le \, \mathbb{E}_\varepsilon \, \big \| \Phi^{ii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big \|_{\tilde X^{ii}}, $$ where $\tilde X^{ii} = \mathcal{S}_1^{\tilde k'\!, n } \otimes_{ \mathfrak{S}_2^{w-cb} } \mathcal{S}_1^{\tilde k'\!, n}$. Recall Definition \ref{Def_cbWeakSchatten} for this last norm. \end{remark} The regularity of these maps can be characterized by the parameters $ \sigma^{i}_{\mathcal{S}^\mathcal{U}}:= \sigma_{\Phi^{i}_{\mathcal{S}^\mathcal{U}}} $ and $ \sigma^{ii}_{\mathcal{S}^\mathcal{U}}:= \sigma_{\Phi^{ii}_{\mathcal{S}^\mathcal{U}}}$ -- recall Definition \ref{Def_sigma}. More explicitly: \beq \label{Sigma_i.} \sigma^{i(ii)}_{\mathcal{S}^\mathcal{U}} = \log(n^2)\, \mathbb{E}_\varepsilon\, \left(\sum_{i,j} \| \partial_{ij} \Phi^{i(ii)}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \|_{ X^{i(ii)} }^2 \right)^{1/2}. \eeq The above expressions for $\sigma^{i}_{\mathcal{S}^\mathcal{U}}$, $\sigma^{ii}_{\mathcal{S}^\mathcal{U}}$ can be bounded by the simpler expressions appearing in the introduction. See Appendix \ref{Appendix_1} for details. In Equation \eqref{Sigma_i.} the analytic nature of these parameters is clearer, while the approximate expressions in Section \ref{Sec1} are closer to an operational interpretation of them. In the case of an arbitrary (possibly \emph{non-pure}) strategy $\mathcal{S} $, we can assign parameters $ \sigma^{i}_{\mathcal{S}}$, $\sigma^{ii}_{\mathcal{S}}$ via the simple prescription: $$ \sigma^{i(ii)}_{\mathcal{S} } :=\inf_{\begin{subarray}{c}\mathcal{S}^\mathcal{U} \\ \text{purifying } \mathcal{S} \end{subarray}} \sigma^{i(ii)}_{\mathcal{S}^{\mathcal{U}}}.
$$ With definition \eqref{Sigma_i.} at hand and taking into account Corollary \ref{Cor1}, we can obtain: \begin{lemma}\label{Lemma_Main2} For any strategy $\mathcal{S}^\mathcal{U}$, \begin{enumerate}[i.] \item $$ \omega(G_{Rad}; \mathcal{S}^{\mathcal{U}}) \le \big \|\mathbb{E}_\varepsilon \Phi^{i}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big\|_{X^i} + C \ \sigma^{i}_{\mathcal{S}^\mathcal{U}} \ {\mathrm{T}^{(n^2)}_2} \left( X^i \right), $$ \item $$ \omega(G_{Rad}; \mathcal{S}^{\mathcal{U}}) \le \big \|\mathbb{E}_\varepsilon \Phi^{ii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big\|_{\tilde X^{ii}} + C \ \sigma^{ii}_{\mathcal{S}^\mathcal{U}} \ {\mathrm{T}^{(n^2)}_2} \left( X^{ii} \right). $$ \end{enumerate} \end{lemma} \begin{comment} Notice the change of norms in the second item of the lemma. This refinement is needed later on in order to obtain Proposition \ref{Prop_Main2} below. \end{comment} Lemma \ref{Lemma_Main2} allows us to trade the lack of control on the behaviour of a general strategy for control of some properties of the Banach spaces involved. Bounding the quantities appearing there, we obtain our main result: \begin{theorem}[Formal statement of Theorem \ref{mainThm}] \label{mainThm_2} Given an arbitrary strategy $\mathcal{S} \in \mathfrak{S}_{s2w;\tilde k, k}$, \begin{enumerate}[I.] \item $$ \omega(G_{Rad};\mathcal{S}) \le C_1 + C_2 \ {\sigma^i_\mathcal{S}} \, \log^{1/2}( k \tilde k ) + \mathcal{O} \left(\frac{1}{n^{1/2}} \right) ; $$ \item $$ \omega(G_{Rad};\mathcal{S}) \le \tilde C_1 + C_3 \ \tilde \sigma^{ii}_\mathcal{S} \, \log^{1/2} (n k \tilde k ) + \mathcal{O} \left(\frac{1}{n^{1/2}} + \frac{\log(n) \log^{1/2}( n k \tilde k )}{n} \right) , $$ where we have denoted $\tilde \sigma^{ii}_\mathcal{S} = n^{3/4} \log(n) \, \sigma^{ii}_\mathcal{S}$. \end{enumerate} Above, $C_1,\, \tilde C_1 <1, \, C_2,\, C_3 $ are positive constants.
\end{theorem} \begin{proof} To obtain the statement of the theorem, as we already said, we start by considering Lemma \ref{Lemma_Main2}. Then, we need to bound: \begin{enumerate} \item the type constants ${\mathrm{T}^{(n^2)}_2}(X^{i})$ and ${\mathrm{T}^{(n^2)}_2}(X^{ii})$. These bounds are already provided in Equations \eqref{Eq4_TypeConsts_1} and \eqref{Type_Int1/2}, respectively. We recall these bounds here for the reader's convenience: $$ {\mathrm{T}^{(n^2)}_2}(X^{i}) \le {\mathrm{T}_2}(X^{i}) \lesssim \log^{1/2}(k\tilde k'), \qquad {\mathrm{T}^{(n^2)}_2}(X^{ii}) \lesssim n^{3/4} \log(n) \log^{1/2} (n \tilde k') ; $$ \item the terms $ \big \|\mathbb{E}_\varepsilon \Phi^{i}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big\|_{X^i}$ and $\big \|\mathbb{E}_\varepsilon \Phi^{ii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big\|_{\tilde X^{ii}}$. These quantities are controlled by Proposition \ref{Prop_Main2} below. \end{enumerate} With this we obtain the stated bound in the case of pure strategies. Furthermore, statements about pure strategies can be transformed into statements about general strategies by taking into account the relation \eqref{Corresponde_k-k'1}. As we said at the end of Section \ref{Sec3}, this relation is polynomial in the parameters involved and therefore, the change from pure to general strategies only induces corrections by constant factors that we absorb into the constants $C_2,\, C_3$ present in the statement. Similar considerations deal with the amount of classical communication included in $\tilde k$; in this case one has to recall Equation \eqref{Corresponde_k-k'2}. See Appendix \ref{Appendix_MainThm} for further details. \end{proof} In the rest of this section we first state the proposition alluded to in the previous proof and then give the proofs, in this order, of Lemma \ref{Lemma_Main1}, Lemma \ref{Lemma_Main2} and Proposition \ref{Prop_Main2}.
\begin{proposition} \label{Prop_Main2} For any pure strategy $\mathcal{S}^\mathcal{U} \in \mathfrak{S}_{s2w;\tilde k', k}$: \begin{enumerate}[i.] \item $$ \big \|\mathbb{E}_\varepsilon \Phi^{i}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big\|_{X^i} \le \frac{3}{4} + \frac{C}{\sqrt{n}}. $$ \item $$ \big \|\mathbb{E}_\varepsilon \Phi^{ii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big\|_{ \tilde X^{ii} } \le \frac{\sqrt{3}}{2} + \frac{C}{\sqrt{n}} + C' \frac{\log (n) \log^{1/2}(k \tilde k')}{ n}, $$ \end{enumerate} where $C,\, C'$ are universal constants. \end{proposition} \subsection{Proofs of Lemmas \ref{Lemma_Main1} and \ref{Lemma_Main2}}\label{Proof_Lemmas_Main} \begin{proof}[Proof of Lemma \ref{Lemma_Main1}] The proof of both items in the lemma follows the same structure. We start with the bound regarding $\Phi^i_{\mathcal{S}^\mathcal{U}}$: Recalling \eqref{DefvalueStrat_MROQG_GRad4}: \begin{align*} \omega(G_{Rad}; \mathcal{S}^{\mathcal{U}}) = \, \mathbb{E}_\varepsilon \, \Big\| \frac{1}{n^2} \sum_{i,j} \varepsilon_{ij} (\langle i|\tilde V_\varepsilon \otimes \langle j|\tilde W_\varepsilon ) \, (V|ij\rangle \otimes W_\varepsilon ) \, |\varphi \rangle \Big\|^2_{\ell_2^{ r^2}}. 
\end{align*} We bound this quantity as follows: \begin{align*} \omega(G_{Rad}; \mathcal{S}^{\mathcal{U}}) & = \, \mathbb{E}_\varepsilon \, \left\| \frac{1}{n^2} \sum_{i,j} \varepsilon_{ij} (\langle i|\tilde V_\varepsilon \otimes \langle j|\tilde W_\varepsilon ) \, (V|ij\rangle \otimes \mathrm{Id}_{\ell_2^{\tilde k'}} ) \, (\mathrm{Id}_{\ell_2^{k}} \otimes W_\varepsilon)\, |\varphi \rangle \right\|^2_{\ell_2^{ r^2}} \\ & \le \, \mathbb{E}_\varepsilon \, \sup_{ |\varphi \rangle \in B_{\ell_2^{k \tilde k'}} } \left\| \frac{1}{n^2} \sum_{i,j} \varepsilon_{ij} (\langle i|\tilde V_\varepsilon \otimes \langle j|\tilde W_\varepsilon ) \, (V|ij\rangle \otimes \mathrm{Id}_{\ell_2^{\tilde k'^2}} ) \, |\varphi \rangle \right\|^2_{\ell_2^{r^2}} \\ & =\, \mathbb{E}_\varepsilon \, \sup_{ |\varphi \rangle \in B_{\ell_2^{k \tilde k' }} } \left\| \Phi^{i}_{\mathcal{S}^\mathcal{U}} (\varepsilon) (|\varphi\rangle) \right\|^2_{\ell_2^{r^2}} = \, \mathbb{E}_\varepsilon \, \left\| \Phi^{i}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \right\|^2_{M_{r^2 , k \tilde k'}} \\ &\le\, \mathbb{E}_\varepsilon \, \left\| \Phi^{i}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \right\|_{M_{ r^2 , k \tilde k' }} \equiv \, \mathbb{E}_\varepsilon \, \left\| \Phi^{i}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \right\|_{X^i} . \end{align*} For $\Phi^{ii}_{\mathcal{S}^\mathcal{U}}$, we prove a stronger result. That is, considering the map $\Phi^{ii}_{\mathcal{S}^\mathcal{U}}$ taking values on the space $\tilde X^{ii} =\mathcal{S}_1^{\tilde k'\!, n } \otimes_{\mathfrak{S}^{w-cb}_2} \mathcal{S}_1^{\tilde k'\!, n}$, we show that: \beq\label{Bound2_Lemma_Main1} \omega ( G_{Rad}; \mathcal{S}^\mathcal{U}) \le \, \mathbb{E}_\varepsilon \, \big \| \Phi^{ii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big \|_{\tilde X^{ii}}. \eeq Since the norm in $\tilde X^{ii}$ is smaller than in $X^{ii}$, recall Proposition \ref{Prop_RelNorms}, the statement of the lemma is also true. 
Following the proof of the first item, we start by bounding: \begin{align*} \omega(G_{Rad}; \mathcal{S}^{\mathcal{U}}) & = \, \mathbb{E}_\varepsilon \, \left\| \frac{1}{n^2} \sum_{i,j} \varepsilon_{ij} (\langle i|\tilde V_\varepsilon \otimes \langle j|\tilde W_\varepsilon ) \, (V|ij\rangle \otimes W_\varepsilon ) \, |\varphi \rangle \right\|^2_{\ell_2^{r^2}} \\ & \le \, \mathbb{E}_\varepsilon \, \sup_{ \tilde V, \tilde W \in B_{M_{n r,\tilde k'}}} \left\| \frac{1}{n^2} \sum_{i,j} \varepsilon_{ij} (\langle i|\tilde V\otimes \langle j|\tilde W ) \, (V|ij\rangle \otimes W_\varepsilon ) \, |\varphi \rangle \right\|^2_{\ell_2^{r^2}} \\ & = \, \mathbb{E}_\varepsilon \, \sup_{ \begin{subarray}{c} \tilde V, \tilde W \in B_{M_{n r,\tilde k'}} \end{subarray}} \left\| ( \tilde V \otimes \tilde W) \left( \frac{1}{n^2} \sum_{i,j} \varepsilon_{ij}\, (\langle i | \otimes \langle j | )\, (V |ij\rangle \otimes W_\varepsilon) \, |\varphi \rangle \right) \right\|^2_{\ell_2^{r^2}} \\ & = \, \mathbb{E}_\varepsilon \, \sup_{ \begin{subarray}{c} \tilde V, \tilde W \in B_{M_{n r,\tilde k'}} \end{subarray}} \left\| ( \tilde V \otimes \tilde W) \left( \Phi^{ii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \right) \right\|^2_{\ell_2^{r^2}} \\& \hspace{-0.65cm} \! \stackrel{\text{(Lemma \ref{lemma_CharacSigma} )}}{\le} \, \mathbb{E}_\varepsilon \, \left\| \Phi^{ii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \right\|^2_{\mathcal{S}_1^{\tilde k'\!, n } \otimes_{\mathfrak{S}^{w-cb}_2} \mathcal{S}_1^{\tilde k'\!, n}} \\ & \le \, \mathbb{E}_\varepsilon \, \left\| \Phi^{ii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \right\|_{\mathcal{S}_1^{\tilde k'\!, n } \otimes_{\mathfrak{S}^{w-cb}_2} \mathcal{S}_1^{\tilde k'\!, n}} \equiv \, \mathbb{E}_\varepsilon \, \left\| \Phi^{ii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \right\|_{ \tilde X^{ii}}.
\end{align*} \end{proof} \begin{proof}[Proof of Lemma \ref{Lemma_Main2}] The first item is a direct consequence of Corollary \ref{Cor1} applied to the bound in Lemma \ref{Lemma_Main1}, i. The second item proceeds similarly but with a small detour. Applying Pisier's inequality, Lemma \ref{lemmaPisier} (with $p=1$ and a trivial triangle inequality, as in the proof of Corollary \ref{Cor1}), to the stronger inequality \eqref{Bound2_Lemma_Main1}, we obtain $$ \mathbb{E}_\varepsilon \, \left\| \Phi^{ii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \right\|_{ \tilde X^{ii} } \le \, \left\| \mathbb{E}_\varepsilon \Phi^{ii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \right\|_{\tilde X^{ii} } + C \, \log(n) \, \mathbb{E}_{\varepsilon,\tilde \varepsilon} \left\| \sum_{k,l=1}^n \tilde \varepsilon_{kl} \partial_{kl} \Phi^{ii}(\varepsilon) \right\|_{ \tilde X^{ii} }. $$ Now, according to Proposition \ref{Prop_RelNorms}, we can upper bound the last summand above by changing the norm $\tilde X^{ii}$ to $X^{ii}$. Considering that $$ \mathbb{E}_{\varepsilon,\tilde \varepsilon} \left\| \sum_{k,l=1}^n \tilde \varepsilon_{kl} \partial_{kl} \Phi^{ii}(\varepsilon) \right\|_{X^{ii}} \lesssim \mathrm{T}_2^{(n^2)} (X^{ii}) \ \ \mathbb{E}_\varepsilon \, \left ( \sum_{k,l=1}^n \| \partial_{kl} \Phi^{ii}(\varepsilon)\|_{X^{ii}}^2 \right)^{1/2}, $$ we have: $$ \omega(G_{Rad};\mathcal{S}^\mathcal{U}) \le \, \left\| \mathbb{E}_\varepsilon \Phi^{ii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \right\|_{\tilde X^{ii} } + C \, \log(n) \, \mathrm{T}_2^{(n^2)} (X^{ii}) \ \ \mathbb{E}_\varepsilon \, \left ( \sum_{k,l=1}^n \| \partial_{kl} \Phi^{ii}(\varepsilon)\|_{X^{ii}}^2 \right)^{1/2}. $$ We obtain Lemma \ref{Lemma_Main2}, ii., by identifying $\sigma_{\mathcal{S}^\mathcal{U}}^{ii}$ above. \end{proof} \subsection{Proof of Proposition \ref{Prop_Main2}} Finally, as promised, we prove Proposition \ref{Prop_Main2}: \begin{proof}[Proof of Proposition \ref{Prop_Main2}, i.] The norm in the L.H.S.
of Proposition \ref{Prop_Main2}, i., is attained at unit vectors $|\varphi\rangle \in \ell_2^{k \tilde k'},\, |\xi\rangle \in \ell_2^{ r^2}$ (independent of $\varepsilon$): $$ \big \|\mathbb{E}_\varepsilon \Phi^{i}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big\|_{M_{r ^2,k \tilde k'}} = \big| \mathbb{E}_\varepsilon\ \langle \xi|\, \Phi^{i}_{\mathcal{S}^\mathcal{U}} (\varepsilon)\, |\varphi\rangle \big| \le \mathbb{E}_\varepsilon\ \big| \langle \xi|\, \Phi^{i}_{\mathcal{S}^\mathcal{U}} (\varepsilon)\, |\varphi\rangle \big| . $$ Expanding this expression we have: $$ \| \mathbb{E}_\varepsilon\ \Phi^{i}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \|_{M_{ r^2,k \tilde k'}} \le \mathbb{E}_\varepsilon\ \big| \, \frac{1}{n^2} \sum_{i j} \varepsilon_{ij} \langle \xi| \, \big( \langle i |\tilde V_\varepsilon\otimes \langle j |\tilde W_\varepsilon \big) \, \big( V \ |ij\rangle \otimes \mathrm{Id}_{\ell_2^{\tilde k'}} \big) |\varphi\rangle\, \big| = \mathbb{E}_\varepsilon \ \big| \langle \overline \xi_\varepsilon \, |\, \overline \varphi \rangle \big|, $$ where we have defined the unit vectors: $$ \langle \overline \xi_\varepsilon | := \frac{1}{n} \sum_{i,j} \varepsilon_{ij} \langle ij|_{C} \otimes \langle \xi| \, \big( \langle i |\tilde V_\varepsilon\otimes \langle j |\tilde W_\varepsilon \big) , $$ $$ |\overline \varphi \rangle := \frac{1}{n} \sum_{i,j} |ij\rangle_{C} \otimes \big( V \ |ij\rangle \otimes \mathrm{Id}_{\ell_2^{\tilde k'}} \big) |\varphi\rangle. $$ Now, notice that there exists at least one $\varepsilon^*$ such that $ |\langle \overline \xi_{\varepsilon^*} \, |\, \overline \varphi \rangle | \ge \| \mathbb{E}_\varepsilon\ \Phi^{i}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \|_{M_{ r^2,k \tilde k'}}$. Consider this $\varepsilon^*$ to rewrite $|\overline \varphi \rangle = |\overline \xi_{\varepsilon^*} \rangle + (|\overline \varphi \rangle - |\overline \xi_{\varepsilon^*} \rangle )$. 
An application of the Cauchy-Schwarz inequality gives us the following: \begin{align}\label{eq3} \| \mathbb{E}_\varepsilon\ \Phi^{i}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \|_{M_{ r^2,k \tilde k'}} &\le \mathbb{E}_\varepsilon\ \big| \langle \overline \xi_\varepsilon |\overline \varphi \rangle \big| \le \mathbb{E}_\varepsilon\ \big| \langle \overline \xi_\varepsilon |\overline \xi_{\varepsilon^*} \rangle \big| + \big | \langle \overline \varphi-\overline \xi_{\varepsilon^*} |\overline \varphi-\overline \xi_{\varepsilon^*} \rangle\big|^{1/2}. \end{align} Now we bound both summands in the R.H.S. of the previous expression separately: \begin{itemize} \item For the second: \begin{align} \label{eq4} \big | \langle \overline \varphi-\overline \xi_{\varepsilon^*} |\overline \varphi-\overline \xi_{\varepsilon^*} \rangle\big|^{1/2} &\le \left( 2 ( 1- \langle \overline \xi_{\varepsilon^*}|\overline \varphi\rangle ) \right)^{1/2} \le \left( 2(1- \| \mathbb{E}_\varepsilon\ \Phi^{i}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \|_{M_{r^2,k \tilde k'}}) \right)^{1/2} \nonumber\\ &\le \frac{7}{4} - \frac{4}{3} \| \mathbb{E}_\varepsilon\ \Phi^{i}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \|_{M_{r^2,k \tilde k'}},\nonumber \end{align} where in the last step we used the elementary inequality $(2(1-t))^{1/2} \le \frac{7}{4} - \frac{4}{3}\, t$, valid for $t\in[0,1]$. \item For the first one, we have the following bound: \begin{align*} \mathbb{E}_\varepsilon\ \big| \langle \overline \xi_\varepsilon |\overline \xi_{\varepsilon^*} \rangle \big| &= \mathbb{E}_\varepsilon \ \Big| \frac{1}{n^2} \sum_{ij} \varepsilon_{ij} \varepsilon^*_{ij} \langle \xi |\, \big( \langle i | \tilde V_\varepsilon \, \tilde V_{\varepsilon^*}^\dagger |i\rangle \otimes \langle j | \tilde W_\varepsilon \, \tilde W_{\varepsilon^*}^\dagger|j\rangle \big) \, |\xi\rangle \Big| \\ & \le \mathbb{E}_\varepsilon \ \sup_{\begin{subarray}{c} |\xi_i \rangle, |\varphi_j\rangle \in B_{\ell_2^{r^2}}\\ \text{for }i,j = 1,\ldots, n \end{subarray}} \ \Big| \frac{1}{n^2} \sum_{ij} \varepsilon_{ij} \varepsilon^*_{ij} \langle \xi_i | \varphi_j\rangle \Big| \\ & \approx
\mathbb{E}_\varepsilon \ \Big\| \frac{1}{n^2} \sum_{ij} \varepsilon_{ij} \varepsilon^*_{ij} |i\rangle \otimes |j \rangle \Big\|_{\ell_1^n \otimes_{\varepsilon} \ell_1^n} = \mathbb{E}_\varepsilon \ \Big\| \frac{1}{n^2} \sum_{ij} \varepsilon_{ij} |i\rangle \otimes |j \rangle \Big\|_{\ell_1^n \otimes_{\varepsilon} \ell_1^n} . \end{align*} In the last line we have used, in this order, Grothendieck's inequality \cite{Grothendieck53} and the fact that $\lbrace \varepsilon_{ij} \varepsilon^*_{ij} \rbrace_{i,j} $ are i.i.d. Rademacher random variables for any fixed signs $\varepsilon^*_{ij}$. Finally, to conclude we can bound: $$ \mathbb{E}_\varepsilon \ \big\| \frac{1}{n^2} \sum_{ij} \varepsilon_{ij} |i\rangle \otimes |j \rangle \big\|_{\ell_1^n \otimes_{\varepsilon} \ell_1^n} \lesssim \frac{1}{\sqrt{n}} . $$ One way to see this is to consider the metric mapping property of $\otimes_\varepsilon$ and the estimate $\|\mathrm{Id}: \ell_2^n \rightarrow \ell_1^n \| = \sqrt{n}$. With this: $$ \mathbb{E}_\varepsilon \ \big\| \frac{1}{n^2} \sum_{ij} \varepsilon_{ij} |i\rangle \otimes |j \rangle \big\|_{\ell_1^n \otimes_{\varepsilon} \ell_1^n} \le n \ \mathbb{E}_\varepsilon \ \big\| \frac{1}{n^2} \sum_{ij} \varepsilon_{ij} |i\rangle \otimes |j \rangle \big\|_{\ell_2^n \otimes_{\varepsilon} \ell_2^n} = \frac{1}{n} \ \mathbb{E}_\varepsilon \ \big\| \sum_{ij} \varepsilon_{ij} |i\rangle\langle j| \big\|_{M_n} . $$ The well-known estimate $\mathbb{E}_\varepsilon \ \big\| \sum_{ij} \varepsilon_{ij} |i\rangle\langle j| \big\|_{M_n} \lesssim \sqrt{n}$ provides the desired bound. \end{itemize} Joining everything in \eqref{eq3} we obtain the bound in Proposition \ref{Prop_Main2}, i. \end{proof} \begin{proof}[Proof of Proposition \ref{Prop_Main2}, ii.] Notice first that, up to this point, we already have a full proof of Theorem \ref{mainThm_2}, i. It turns out that Proposition \ref{Prop_Main2}, ii. is a consequence of this first part of our main theorem.
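As an aside, the two elementary estimates invoked in the proof of the first item, namely the linearization $(2(1-t))^{1/2} \le \frac{7}{4} - \frac{4}{3}\, t$ on $[0,1]$ and the growth $\mathbb{E}_\varepsilon \, \| \sum_{ij} \varepsilon_{ij} |i\rangle\langle j| \|_{M_n} \lesssim \sqrt{n}$, admit a quick numerical sanity check. The following script is purely illustrative and plays no role in the argument:

```python
import numpy as np

rng = np.random.default_rng(0)

# (a) The linearization sqrt(2(1-t)) <= 7/4 - (4/3) t on [0, 1].
t = np.linspace(0.0, 1.0, 10001)
gap = (7.0 / 4.0 - 4.0 * t / 3.0) - np.sqrt(2.0 * (1.0 - t))
assert gap.min() > 0  # minimum gap is 1/24, attained at t = 23/32

# (b) E || sum_ij eps_ij |i><j| ||_{M_n} grows like sqrt(n): sample random
# sign matrices and compare their spectral norm with sqrt(n).
for n in (16, 32, 64):
    norms = [np.linalg.norm(rng.choice([-1.0, 1.0], size=(n, n)), 2)
             for _ in range(20)]
    ratio = np.mean(norms) / np.sqrt(n)
    assert 1.0 <= ratio < 3.0
```

For large $n$ the sampled ratio concentrates around the constant hidden in the $\lesssim$ above (close to $2$), consistent with the stated $\sqrt{n}$ growth.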
The key idea is to understand the norm $ \big \|\mathbb{E}_\varepsilon \Phi^{ii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big \|_{\tilde X^{ii}}$ as an optimization over some family of strategies with a small enough parameter $\sigma^i_{\mathcal{S}^\mathcal{U}}$. Concretely, considering the characterization of the norm $\tilde X^{ii}$ given in Lemma \ref{lemma_CharacSigma}, we can prove that \beq\label{Prop3.ii._eq1} \big \|\mathbb{E}_\varepsilon \Phi^{ii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big\|_{\tilde X^{ii}} \le \, \sup_{\begin{subarray}{c} r \in \mathbb{N}\\ \tilde V, \, \tilde W \in B_{M_{ n r , \tilde k'}} \end{subarray} } \, \omega^{1/2} \left(G_{Rad} ; \lbrace \tilde V,\, \tilde W, \, V,\, W_\varepsilon,\, |\varphi \rangle \rbrace_\varepsilon \right). \eeq The desired bound now follows from realizing that in the strategies over which this optimization is performed, the second round of local operations, $\tilde V \otimes \tilde W$, is $\varepsilon$-independent. Therefore, for these strategies, according to Example \ref{Example_sigma1}, $\sigma^{i} \approx \frac{\log(n)}{n}$, which, in conjunction with Theorem \ref{mainThm_2}, I., leads to the desired statement. To obtain the precise statement appearing there, we have considered the elementary inequality $( 1 + x )^{1/2} \le 1 + x/2$. Then, to finish, let us prove the claim \eqref{Prop3.ii._eq1}.
Recall that, according to Lemma \ref{lemma_CharacSigma}, we can write: \begin{align*} \big \| \mathbb{E}_\varepsilon \Phi^{ii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big\|_{\tilde X^{ii}} &= \sup_{ \begin{subarray}{c} r\in\mathbb{N} \\ \tilde V, \, \tilde W \in B_{M_{ n r , \tilde k'}} \end{subarray} } \Big \| \, \mathbb{E}_\varepsilon \, ( \tilde V \otimes \tilde W) \Big( \frac{1}{n^2} \sum_{i,j} \varepsilon_{ij}\, ( \langle i | \otimes \langle j | )\, (V |ij\rangle \otimes W_\varepsilon) \, |\varphi \rangle \Big) \Big\|_{\ell_2^{r^2}} \\ &\le \sup_{ \begin{subarray}{c} r\in\mathbb{N} \\ \tilde V, \, \tilde W \in B_{M_{ n r , \tilde k'}} \end{subarray} } \, \mathbb{E}_\varepsilon\, \Big \| \frac{1}{n^2} \sum_{i,j} \varepsilon_{ij}\, ( \langle i |\tilde V \otimes\langle j | \tilde W )\, (V |ij\rangle \otimes W_\varepsilon) \, |\varphi \rangle \Big\|_{\ell_2^{r^2}}. \end{align*} Furthermore, considering the elementary bound $ \, \mathbb{E}_\varepsilon \, \phi(\varepsilon) \le \Big( \mathbb{E}_\varepsilon \, {\phi(\varepsilon)}^2 \Big )^{\frac{1}{2}}\!, $ valid for any function $\phi:\mathcal{Q}_{n^2} \rightarrow \mathbb{R}$, we can finally write: \begin{align*} \big \| \mathbb{E}_\varepsilon \Phi^{ii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big\|_{\tilde X^{ii}} &\le \sup_{ \begin{subarray}{c} r\in\mathbb{N} \\ \tilde V, \, \tilde W \in B_{M_{ n r , \tilde k'}} \end{subarray} } \, \Big( \mathbb{E}_\varepsilon\, \Big \| \frac{1}{n^2} \sum_{i,j} \varepsilon_{ij}\, (\langle i |\tilde V \otimes \langle j |\tilde W )\, (V |ij\rangle \otimes W_\varepsilon) \, |\varphi \rangle \Big\|_{\ell_2^{r^2}}^2 \Big)^{\frac{1}{2}} \\ & = \sup_{\begin{subarray}{c} r\in\mathbb{N} \\ \tilde V, \, \tilde W \in B_{M_{ n r , \tilde k'}} \end{subarray} } \, \omega^{1/2} \left(G_{Rad} ; \lbrace \tilde V,\, \tilde W, \, V,\, W_\varepsilon,\, |\varphi \rangle \rbrace_\varepsilon \right), \end{align*} as claimed.
\end{proof} We make a final comment that, in some sense, connects with the next section, where we will discuss possible extensions of the approach presented up to this point. \begin{remark}\label{Rmk_normalizationInt1/2} The appearance of the norms $X^{i} = M_{ r^2,k \tilde k'},\ X^{ii} = \mathcal{S}_1^{\tilde k'\!, n} \otimes_{ (\varepsilon, \pi)_{ \sfrac{1}{2} } } \mathcal{S}_1^{\tilde k'\!, n}$ above might seem somewhat arbitrary, in the sense that we have used these norms to \emph{upper} bound the value $\omega(G_{Rad}, \mathcal{S}^\mathcal{U})$, even though these upper bounds are not tight in general. Part of the motivation to consider these spaces is the fact that we are able to properly understand their type properties. But we can wonder: is any norm upper bounding $\omega(G_{Rad}, \mathcal{S}^\mathcal{U})$ a reasonable choice, provided that we can control the relevant type constants? Obviously, this is not the case. Actually, in Section \ref{Sec6} we explore this issue further. For now, let us note that the chosen norms also satisfy some basic normalization conditions. In particular, it can be shown that the elements constituting $\Phi^{i}_{\mathcal{S}^\mathcal{U}} $, $\Phi^{ii}_{\mathcal{S}^\mathcal{U}}$ are well normalized when regarded as elements in $X^i$ and $ X^{ii}$, respectively. Concretely, for each $i,\, j \in [n]$ $$ \left \| (\langle i |\tilde{V}_{\varepsilon} \otimes \langle j|\tilde{ W}_{\varepsilon})\, (V|ij\rangle \otimes \mathrm{Id}_{\ell_2^{\tilde k'}}) \right \|_{X^i} \le 1 $$ and $$ \left \| | i \rangle \otimes | j \rangle \otimes (V|ij\rangle \otimes W_\varepsilon) |\varphi \rangle \right \|_{ X^{ii} } \ \le 1, $$ no matter which operators $\tilde{V}_{\varepsilon}, \, \tilde{W}_{\varepsilon}, \, V,\, W_\varepsilon$ we take, as long as they are contractive, and no matter which vector $|\varphi \rangle$, as long as its Euclidean norm is bounded by one. The first bound is straightforward.
Since $\tilde{V}_{\varepsilon} \otimes \tilde{ W}_{\varepsilon}$ and $V \otimes \mathrm{Id}_{\ell_2^{\tilde k'}}$ are contractive operators, $\langle i |\tilde{V}_{\varepsilon} \otimes \langle j|\tilde{ W}_{\varepsilon}$ and $V|ij\rangle \otimes \mathrm{Id}_{\ell_2^{\tilde k'}}$ are also contractive and the same applies to their composition. For the second bound, fixing $i,\, j$, we first notice that $|\tilde \varphi \rangle := (V|ij\rangle \otimes \mathrm{Id}_{\ell_2^{\tilde k'^2}}) (\mathrm{Id}_{\ell_2^{k}} \otimes W_\varepsilon)|\varphi \rangle$ has norm $\| |\tilde \varphi \rangle \|_{\ell_2^{\tilde k'^2}} \le 1$. Furthermore, considering the norm-one injections $\iota_i: \ell_2^{\tilde k'}\ni |\varphi \rangle \mapsto | i \rangle \otimes |\varphi \rangle \in \mathcal{S}_1^{\tilde k'\!, n}$, we have that $ | i \rangle \otimes | j \rangle \otimes |\tilde \varphi \rangle = \iota_i \otimes \iota_j \big( | \tilde \varphi \rangle \big)$. Therefore $$ \big \| | i \rangle \otimes | j \rangle \otimes |\tilde \varphi \rangle \big\|_{ \mathcal{S}_1^{\tilde k'\!, n} \otimes_{ (\varepsilon, \pi)_{ \sfrac{1}{2} } } \mathcal{S}_1^{\tilde k'\!, n} } \le \big \| |\tilde \varphi \rangle \big \|_{\ell_2^{\tilde k'^2}} \ \big \| \iota_i \otimes \iota_j: \ell_2^{\tilde k'} \rightarrow \mathcal{S}_1^{\tilde k'\!, n} \otimes_{ (\varepsilon, \pi)_{ \sfrac{1}{2} } } \mathcal{S}_1^{\tilde k'\!, n}\big \| \le 1. $$ It remains to justify that, in fact, $$ \big \| \iota_i \otimes \iota_j: \ell_2^{\tilde k'} \rightarrow \mathcal{S}_1^{\tilde k'\!, n} \otimes_{ (\varepsilon, \pi)_{ \sfrac{1}{2} } } \mathcal{S}_1^{\tilde k'\!, n}\big \| \le 1. 
$$ This can be proved recalling that $\mathcal{S}_1^{\tilde k'\!, n} \otimes_{ (\varepsilon, \pi)_{ \sfrac{1}{2} } } \mathcal{S}_1^{\tilde k'\!, n}$ is the interpolation space $ ( \mathcal{S}_1^{\tilde k'\!, n} \otimes_{ \varepsilon } \mathcal{S}_1^{\tilde k'\!, n}, $ $ \mathcal{S}_1^{\tilde k'\!, n} \otimes_{ \pi } \mathcal{S}_1^{\tilde k'\!, n} )_{\frac{1}{2}}$ and that $\ell_2^{\tilde k'^2}$ can also be regarded as the space $ \left( \ell_2^{\tilde k'} \otimes_{ \varepsilon } \ell_2^{\tilde k'}, \ell_2^{\tilde k'} \otimes_{ \pi } \ell_2^{\tilde k'} \right)_{\frac{1}{2}}$. Then, \begin{align*} &\big \| \iota_i \otimes \iota_j: \ell_2^{\tilde k'} \rightarrow \mathcal{S}_1^{\tilde k'\!, n} \otimes_{ (\varepsilon, \pi)_{ \sfrac{1}{2} } } \mathcal{S}_1^{\tilde k'\!, n}\big \| \hspace{-2cm} \\ &\qquad\le \big \| \iota_i \otimes \iota_j: \ell_2^{\tilde k'} \otimes_ \varepsilon \ell_2^{\tilde k'}\rightarrow \mathcal{S}_1^{\tilde k'\!, n} \otimes_{ \varepsilon } \mathcal{S}_1^{\tilde k'\!, n} \big \|^{\frac{1}{2}} \ \big\| \iota_i \otimes \iota_j: \ell_2^{\tilde k'} \otimes_\pi \ell_2^{\tilde k'}\rightarrow \mathcal{S}_1^{\tilde k'\!, n} \otimes_{ \pi} \mathcal{S}_1^{\tilde k'\!, n} \big \|^{\frac{1}{2}} \\ & \qquad \le \big \| \iota_i : \ell_2^{\tilde k'} \rightarrow \mathcal{S}_1^{\tilde k'\!, n} \big \|\ \big \| \iota_j : \ell_2^{\tilde k'} \rightarrow \mathcal{S}_1^{\tilde k'\!, n} \big \| \le 1. \end{align*} \end{remark} \section{A conjecture towards unconditional lower bounds}\label{Sec6} In the previous section, we have modified the naïve choice \eqref{Def_Phi0} for $\Phi_{\mathcal{S}^{\mathcal{U}}}$ in order to circumvent the problem that $\big \|\mathbb{E}_\varepsilon \Phi_{\mathcal{S}^\mathcal{U}} (\varepsilon) \big\|_{\ell_2^{r^2} } $ can in general be too large, rendering trivial the bounds obtained through Corollary \ref{Cor1}.
The variations $\Phi_{\mathcal{S}^{\mathcal{U}}}^i$, $\Phi_{\mathcal{S}^{\mathcal{U}}}^{ii}$ allowed us to obtain the bounds in Theorem \ref{mainThm_2}. An unsatisfactory feature of this result is that, in order to obtain concrete bounds on the quantum resources employed by a given strategy for $G_{Rad}$, we still need to make some additional assumption on that strategy. Recall that, in particular, the bounds in Theorem \ref{mainThm_2} depend on the regularity parameters $\sigma_{\Phi^{i}_{\mathcal{S}^\mathcal{U}}}$, $\sigma_{\Phi^{ii}_{\mathcal{S}^\mathcal{U}}}$. Ideally, we would like to obtain bounds depending only on the dimension of the quantum systems Alice and Bob manipulate. Following this line of thought, one could ask whether, given a strategy, it is possible to construct a corresponding assignment $\Phi_{\mathcal{S}^{\mathcal{U}}}$ that additionally displays the property of being regular enough, that is, with $\sigma_{\Phi_{\mathcal{S}^\mathcal{U}}}\lesssim_{\log} 1/n$. The answer is affirmative, but the cost of doing so is that the output Banach space of $\Phi_{\mathcal{S}^{\mathcal{U}}}$ becomes more involved and its type properties escape the techniques used in this work.
We define: \beq\label{Def_Phi3} \begin{array}{rccc} \Phi^{iii}_{\mathcal{S}^\mathcal{U}} : & \mathcal{Q}_{n^2} & \longrightarrow & \left( \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\mathfrak{S}_2^{cb-w}} \mathcal{S}_1^{\tilde k'\!, n} \right)\otimes_\varepsilon \ell_2^{k \tilde k'} \\[1em] & \varepsilon & \mapsto & \Phi^{iii}_{\mathcal{S}^\mathcal{U}} (\varepsilon) = \frac{1}{n^2} \, \sum_{ij} \, \, \varepsilon_{ij} \, \langle i | \otimes \langle j | \otimes (V|ij\rangle \otimes \mathrm{Id}_{\ell_2^{\tilde k'}}) \end{array}, \eeq which relates to the value of the game $G_{Rad}$ as stated in the following \begin{lemma}\label{Conjecture_Lemma1} For any pure strategy $\mathcal{S}^{\mathcal{U}} \in \mathfrak{S}_{s2w;\tilde k', k}$: $$ \omega^{1/2}(G_{Rad}; \mathcal{S}^\mathcal{U}) \lesssim \mathbb{E}_\varepsilon \ \| \Phi^{iii}_{\mathcal{S}^\mathcal{U}} (\varepsilon ) \|_{X^{iii} }, $$ where $X^{iii} := \left( \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\mathfrak{S}_2^{cb-w}} \mathcal{S}_1^{\tilde k'\!, n} \right)\otimes_\varepsilon \ell_2^{k \tilde k'} $. \end{lemma} \begin{proof} For each $\varepsilon\in \mathcal{Q}_{n^2}$, we have to interpret the tensor $\Phi^{iii}_{\mathcal{S}^\mathcal{U}} (\varepsilon )$ as the mapping: $$ \begin{array}{rccc} \Phi^{iii}_{\mathcal{S}^\mathcal{U}}(\varepsilon) : & \ell_2^{k \tilde k'} & \longrightarrow & \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\mathfrak{S}_2^{cb-w}} \mathcal{S}_1^{\tilde k'\!, n} \\[1em] & |\varphi\rangle & \mapsto & \Phi^{iii}_{\mathcal{S}^\mathcal{U}}(\varepsilon) ( |\varphi\rangle ) = \frac{1}{n^2} \, \sum_{ij} \, \, \varepsilon_{ij} \, \langle i | \otimes \langle j | \otimes (V|ij\rangle \otimes \mathrm{Id}_{\ell_2^{\tilde k'}})\,|\varphi\rangle \end{array} .
$$ Then, the norm of this map is \begin{align*} \|\Phi^{iii}_{\mathcal{S}^\mathcal{U}}(\varepsilon) \| &= \sup_{|\varphi \rangle \in B_{ \ell_2^{k \tilde k'} } } \Big\| \frac{1}{n^2} \, \sum_{ij} \, \, \varepsilon_{ij} \, \langle i | \otimes \langle j | \otimes (V|ij\rangle \otimes \mathrm{Id}_{\ell_2^{\tilde k'}})\,|\varphi\rangle \Big\|_{ \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\mathfrak{S}_2^{cb-w}} \mathcal{S}_1^{\tilde k'\!, n} } \\ &= \sup_{ W \in B_{ M_{ \tilde k'} } } \sup_{|\varphi \rangle \in B_{ \ell_2^{k \tilde k'} } } \Big\| \frac{1}{n^2} \, \sum_{ij} \, \, \varepsilon_{ij} \, \langle i | \otimes \langle j | \otimes (V|ij\rangle \otimes W)\,|\varphi\rangle \Big\|_{ \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\mathfrak{S}_2^{cb-w}} \mathcal{S}_1^{\tilde k'\!, n} } . \end{align*} Recalling now Lemma \ref{lemma_CharacSigma}, and proceeding similarly to the proof of Proposition \ref{Prop_Main2}, ii., we can write the norm above explicitly as: \begin{align*} \|\Phi^{iii}_{\mathcal{S}^\mathcal{U}}(\varepsilon) \| &= \sup_{ \begin{subarray}{c} m \in \mathbb{N} \\ \tilde V, \tilde W \in B_{M_{n m, \tilde k'}} \end{subarray} } \sup_{ \begin{subarray}{c} W \in B_{ M_{ \tilde k'} } \\ |\varphi \rangle \in B_{ \ell_2^{k \tilde k'} } \end{subarray} } \Big\| \frac{1}{n^2} \, \sum_{ij} \, \, \varepsilon_{ij} \,( \langle i | \tilde V \otimes\langle j | \tilde W) \, (V|ij\rangle \otimes W )\,|\varphi\rangle \Big\|_{ \ell_2^{m^2} } .
\end{align*} Finally, squaring this last expression and taking the expectation over $\varepsilon$ we conclude that: \begin{align*} \mathbb{E}_\varepsilon \, \|\Phi^{iii}_{\mathcal{S}^\mathcal{U}}(\varepsilon) \| ^2 &= \mathbb{E}_\varepsilon \, \sup_{ \begin{subarray}{c} m \in \mathbb{N} \\ \tilde V, \tilde W \in B_{M_{n m, \tilde k'}} \end{subarray} } \sup_{ \begin{subarray}{c} W \in B_{ M_{ \tilde k'} } \\ |\varphi \rangle \in B_{ \ell_2^{k \tilde k'} } \end{subarray} } \Big\| \frac{1}{n^2} \, \sum_{ij} \, \, \varepsilon_{ij} \,( \langle i | \tilde V \otimes\langle j | \tilde W) \, (V|ij\rangle \otimes W)\,|\varphi\rangle \Big\|_{ \ell_2^{m^2} } ^2 \\ & \ge \mathbb{E}_\varepsilon \, \Big\| \frac{1}{n^2} \, \sum_{ij} \, \, \varepsilon_{ij} \,( \langle i | \tilde V_\varepsilon \otimes\langle j | \tilde W_\varepsilon) \, (V|ij\rangle \otimes W_\varepsilon)\,|\varphi\rangle \Big\|_{ \ell_2^{m^2} } ^2 = \omega(G_{Rad}; \mathcal{S}^\mathcal{U}), \end{align*} where we have considered that $\mathcal{S}^\mathcal{U} = \lbrace \tilde V_\varepsilon,\tilde W_\varepsilon, V, W_\varepsilon, |\varphi\rangle \rbrace_\varepsilon$. With that we are almost done. This last expression is enough to obtain $$ \omega(G_{Rad}; \mathcal{S}^\mathcal{U}) \le \mathbb{E}_\varepsilon \, \|\Phi^{iii}_{\mathcal{S}^\mathcal{U}}(\varepsilon) \| ^2 \le \mathbb{E}_\varepsilon \, \|\Phi^{iii}_{\mathcal{S}^\mathcal{U}}(\varepsilon) \| . $$ Furthermore, using Kahane's inequality \cite{KahaneBook}, we can improve on that by taking into account the equivalence $\mathbb{E}_\varepsilon \, \|\Phi^{iii}_{\mathcal{S}^\mathcal{U}}(\varepsilon) \| ^2 \approx \left( \mathbb{E}_\varepsilon \, \|\Phi^{iii}_{\mathcal{S}^\mathcal{U}}(\varepsilon)\| \right)^2$. This allows us to obtain the statement of the lemma.
\end{proof} Now, notice that $\Phi^{iii}_{\mathcal{S}^\mathcal{U}}$ is by construction a linear map of the kind of Example \ref{Example_sigma1}, and, consequently, $\sigma_{\Phi^{iii}_{\mathcal{S}^\mathcal{U}}} \lesssim \log(n)/n $. Furthermore, by symmetry, $\mathbb{E}_\varepsilon \Phi^{iii}_{\mathcal{S}^\mathcal{U} } = 0$. Therefore, Corollary \ref{Cor1} applied to the statement of Lemma \ref{Conjecture_Lemma1} implies: \begin{equation}\label{Eq_Conjecture_1} \omega(G_{Rad}; \mathcal{S}^{\mathcal{U}}) \lesssim_{\log} \left(\frac{ \mathrm{T}_2^{(n^2)} (X^{iii})}{n} \right)^2. \end{equation} The problem now reduces to finding a good estimate for the type-2 constant in the last expression. We note that the norm $ X^{iii}$ is the smallest one for which we were able to prove an equivalent to Lemma \ref{Conjecture_Lemma1}. However, the whole argument from this lemma until here would be valid for any norm larger than $ X^{iii} $ fulfilling a normalization condition with respect to the elements that sum up to $\Phi^{iii}_{\mathcal{S}^{\mathcal{U}}}(\varepsilon)$. We will be more explicit later on. An example of such a norm is $ X^{ii} \otimes_\varepsilon \ell_2^{k \tilde k'}$ where $X^{ii} = \mathcal{S}_1^{\tilde k'\!, n } \otimes_{ (\varepsilon, \pi)_{\sfrac{1}{2}} } \mathcal{S}_1^{\tilde k'\!, n}$. Motivated by the result obtained previously about the type of $X^{ii}$, Equation \eqref{Type_Int1/2}, we are led to conjecture that: \begin{conjecture}[strongest form] \beq\label{Conjecture_1} \mathrm{T}_2^{(n^2)} \left( \big( \mathcal{S}_1^{\tilde k'\!, n } \otimes_{(\varepsilon, \pi)_{\sfrac{1}{2}} } \mathcal{S}_1^{\tilde k'\!, n } \big)\otimes_\varepsilon \ell_2^{k \tilde k'} \right) \lesssim_{\log} \mathrm{T}_2^{(n^2)} \left( \mathcal{S}_1^{\tilde k'\!, n } \otimes_{ (\varepsilon, \pi)_{\sfrac{1}{2}} } \mathcal{S}_1^{\tilde k'\!, n} \right) \lesssim_{\log} n^{3/4}.
\eeq \end{conjecture} A weaker conjecture which would also imply the desired bounds in the setting of PBC is: \begin{conjecture}[weaker form] \beq\label{Conjecture_1'} \mathrm{T}_2^{(n^2)} \left( \big( \mathcal{S}_1^{\tilde k'\!, n } \otimes_{ (\varepsilon, \pi)_{\sfrac{1}{2}} } \mathcal{S}_1^{\tilde k'\!, n} \big)\otimes_\varepsilon \ell_2^{ k \tilde k' } \right) \lesssim_{\log} n^{\beta} \qquad \text{for some } \beta <1. \eeq \end{conjecture} According to what we explained above, there is a plethora of norms for which the positive resolution of the corresponding conjecture would imply unconditional exponential lower bounds for the resources in attacks for PBC. Next, we formalize this discussion by characterizing those norms and then we rewrite the conjecture in a unified form. First, we characterize what we need from a norm $X$ in order to follow the previous argument with $X^{iii}$ substituted by this $X$. In this section we refer to $X$ as a \emph{valid norm} if it satisfies: \begin{enumerate}[P.i.] \item $X$ is a norm on the algebraic tensor product $\mathcal{S}_1^{\tilde k'\!, n } \otimes \mathcal{S}_1^{\tilde k'\!, n} \otimes \ell_2^{k \tilde k'}$; \item $\| x \|_X \gtrsim \| x \|_{X^{iii}}$ for any $ x \in X$; \item $ \big\| |i\rangle \otimes |j\rangle \otimes (V|ij\rangle \otimes \mathrm{Id}_{\ell_2^{\tilde k'}}) \big \|_X \le 1 $ for any $ V \in B_{M_{n^2 k, \tilde k'}} $ and any $i,\, j = 1,\ldots,n$. \end{enumerate} Notice that P.ii. guarantees a relation with the value of $G_{Rad}$ in analogy with Lemma \ref{Conjecture_Lemma1}, and P.iii. guarantees that $\Phi^{iii}_{\mathcal{S}^\mathcal{U}} : \mathcal{Q}_{n^2} \rightarrow X$ still falls in the setting of Example \ref{Example_sigma1}, i.e., we still have $\sigma_{\Phi^{iii}_{\mathcal{S}^\mathcal{U}}} \lesssim \log(n)/n $. These two properties therefore translate into the fact that the bound \eqref{Eq_Conjecture_1} is still true with the type-2 constant of any valid norm $X$ instead of that of $X^{iii} $.
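To spell out why any of these conjectures would suffice (a heuristic rendering of the preceding discussion, not a precise statement), suppose some valid norm $X$ satisfied $\mathrm{T}_2^{(n^2)}(X) \lesssim_{\log} n^{\beta}$ with $\beta < 1$. Plugging this into \eqref{Eq_Conjecture_1} would give

```latex
\omega(G_{Rad};\mathcal{S}^{\mathcal{U}})
  \;\lesssim_{\log}\; \left( \frac{\mathrm{T}_2^{(n^2)}(X)}{n} \right)^{\!2}
  \;\lesssim_{\log}\; n^{-2(1-\beta)} .
```

Assuming, as in the discussion above, that the factors hidden in $\lesssim_{\log}$ depend on the dimensions $k$, $\tilde k'$ at most polylogarithmically, a strategy attaining a value of order one would then force those polylogarithmic factors to compensate a polynomial decay in $n$, that is, exponential quantum resources.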
We can state \begin{conjecture}[even weaker form] For some valid norm, i.e. a norm $X$ satisfying properties \emph{P.i., P.ii.} and \emph{P.iii.} above, and some dimension independent constant $\beta <1:$ \beq\label{Conjecture_3} \mathrm{T}_2^{(n^2)} \left( X \right) \lesssim_{\log} n^{\beta}. \eeq \end{conjecture} Now, to state our conjecture in its weakest form we need to introduce the notion of the type constant of an operator $F:X\rightarrow Y$. The type-2 constant of a linear map $F:X\rightarrow Y$ is the infimum of the constants $\mathrm{T}$ such that \beq \nonumber \left( \mathbb{E}_\varepsilon \Big[ \big\| \sum_{i} \varepsilon_i F( x_i) \big\|_Y^2 \Big] \right)^{1/2} \hspace{-1mm} \le \mathrm{T} \left( \sum_{i} \|x_i\|_X^2 \right)^{1/2}, \eeq for any finite sequence $\lbrace x_i \rbrace_{i} \subset X$. In analogy with the case of the type constant of a Banach space, when the cardinality of this sequence is restricted, we refer to the type-2 constant with $m$ vectors of $F:X\rightarrow Y$ and denote it by $ \mathrm{T}_2^{(m)} (F:X\rightarrow Y) $. We are interested here in the type of the identity map $\mathrm{Id}: X \rightarrow X^{iii}$, with $X$ a \emph{valid norm}. In fact, the final statement of our conjecture is as follows: \begin{conjecture}[weakest form] For some valid norm, i.e. a norm $X$ satisfying properties \emph{P.i., P.ii.} and \emph{P.iii.} above, and some dimension independent constant $\beta <1:$ \beq\label{Conjecture_4} \mathrm{T}_2^{(n^2)} \left( \mathrm{Id}: X \rightarrow X^{iii} \right) \lesssim_{\log} n^{\beta}. \eeq \end{conjecture} \begin{remark} Notice that, in particular, $\mathrm{T}_2^{(n^2)} \left( \mathrm{Id}: X \rightarrow X^{iii} \right) \lesssim \mathrm{T}_2^{(n^2)} \left( Y \right) $ for any \emph{valid norm} $Y$ such that $\| x \|_{X^{iii}} \lesssim \| x \|_Y \lesssim \| x \|_X$. Therefore, the last statement of our conjecture, Equation \eqref{Conjecture_4}, is indeed weaker than the previous ones. 
\end{remark} Within the family of \emph{valid} norms characterized by properties P.i., P.ii., P.iii. we obviously find the spaces $X^{iii}$ and $\big( \mathcal{S}_1^{\tilde k'\!, n } \otimes_{ (\varepsilon, \pi)_{\sfrac{1}{2}} } \mathcal{S}_1^{\tilde k'\!, n} \big )\otimes_\varepsilon \ell_2^{ k \tilde k'}$. But also spaces such as $\big( \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\mathfrak{S}^{w}_2} \mathcal{S}_1^{\tilde k'\!, n} \big)\otimes_\varepsilon \ell_2^{ k \tilde k'}$ or $\big( \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\pi_2} \mathcal{S}_1^{\tilde k'\!, n} \big)\otimes_\varepsilon \ell_2^{ k \tilde k'}$; see Section \ref{Sec.2.4.1} for the definition of $\mathfrak{S}_2^{w}$ and $\pi_2$. An obstruction for the techniques used in this work to obtain upper bounds for the type constants of these spaces is the pathological behaviour of the injective tensor product with respect to interpolation methods \cite{Lemerdy98}. In order to support the validity of the stated conjecture, in the next subsections we explore the most direct approaches to disprove it, lower bounding the type-2 constant of the spaces involved. We find that these approaches do not lead to bounds stronger than $\mathrm{T}_2 (X) \gtrsim_{\log} n^{3/4}$ for at least some \emph{valid} norm $X$. \subsection{Type constant of subspaces} From the definition of the type constant of a normed space $X$, $\mathrm{T}_p(X)$, $1\le p \le 2$, it follows that $$\text{for any subspace }S\subseteq X,\qquad \mathrm{T}_p(X) \ge \mathrm{T}_p(S). $$ This applies as well to $\mathrm{T}_p^{(m)}$ instead of $\mathrm{T}_p$. Then, a way to disprove \eqref{Conjecture_3} is to find, for every valid norm $X$, a subspace with type-2 constant of order $n$ or greater. 
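As an elementary illustration of how subspaces witness such lower bounds, the standard basis of $\ell_1^n$ already shows $\mathrm{T}_2(\ell_1^n) \ge \sqrt{n}$: every sign pattern gives $\| \sum_i \varepsilon_i e_i \|_1 = n$, while $\big( \sum_i \|e_i\|_1^2 \big)^{1/2} = \sqrt{n}$. The following stdlib-only Python sketch (illustrative, not part of the argument) computes the Rademacher average exactly, over all sign patterns:

```python
import itertools
import math

def type2_ratio_l1_basis(n):
    """Exact Rademacher ratio for the standard basis e_1,...,e_n of l_1^n:
    (E_eps ||sum_i eps_i e_i||_1^2)^{1/2} / (sum_i ||e_i||_1^2)^{1/2}."""
    total = 0
    for eps in itertools.product((-1, 1), repeat=n):
        # sum_i eps_i e_i has coordinates eps_i, so its l_1 norm is always n
        total += sum(abs(e) for e in eps) ** 2
    mean_sq = total / 2 ** n
    return math.sqrt(mean_sq) / math.sqrt(n)  # each ||e_i||_1 = 1

print(type2_ratio_l1_basis(4))  # 2.0 = sqrt(4), so T_2(l_1^4) >= 2
```

The ratio is exactly $\sqrt{n}$, so an isometric copy of $\ell_1^n$ inside a valid norm only forces $\mathrm{T}_2 \gtrsim \sqrt{n}$, well below the order-$n$ growth that would be needed to disprove the conjecture.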
For the sake of concreteness, we now restrict our attention to norms of the form $\big( \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} \big) $ $\otimes_\varepsilon \ell_2^{ k \tilde k'}$, where $\mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} $ is a normed space satisfying that, for any $x \in \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n}$, $$ \| \, x \, \|_{ \mathcal{S}_1^{\tilde k'\!, n } \otimes_{ \mathfrak{S}_2^{w-cb} } \mathcal{S}_1^{\tilde k'\!, n} } \le \| \, x \, \|_{ \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} } \le \| \, x \, \|_{ \mathcal{S}_1^{\tilde k'\!, n } \otimes_{ (\varepsilon, \pi)_{\sfrac{1}{2}} } \mathcal{S}_1^{\tilde k'\!, n} } . $$ All these norms clearly satisfy properties P.i., P.ii., P.iii.\footnote{ For P.iii., recall Remark \ref{Rmk_normalizationInt1/2}.} What we do here is look at the most obvious subspaces of $\big( \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} \big)\otimes_\varepsilon \ell_2^{ k \tilde k'}$ and study their type properties. Concretely, we study the following subspaces (in increasing order of complexity): \begin{enumerate} \item first, we find copies of $\ell_2^{k\tilde k'}$ and $\mathcal{S}_1^{\tilde k'\!, n}$; \item at the next level, we also find the subspaces $\mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n}$ and $\mathcal{S}_1^{\tilde k'\!, n } \otimes_\varepsilon \ell_2^{ k \tilde k'}$; \item finally, we also study the subspaces $\big( \ell_2^{\tilde k' } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} \big)\otimes_\varepsilon \ell_2^{ k \tilde k'}$ and $\big( \ell_1^n \otimes_{\alpha} \ell_1^n \big)\otimes_\varepsilon \ell_2^{ k \tilde k'}$. We were not able to obtain good estimates for $\big( \ell_1^n \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} \big)\otimes_\varepsilon \ell_2^{ k \tilde k'}$. 
\end{enumerate} Next, we provide upper estimates compatible with the conjectured bound \eqref{Conjecture_3} for the type-2 constants of these spaces for some choices of $\alpha$. The limitation on the possible norms for which our bounds work comes from the limited scope of the techniques available to obtain such estimates. Nonetheless, it might be the case that these bounds are more general than stated here. In fact, we did not find any choice of $\alpha$ for which we have results contradicting \eqref{Conjecture_3}, so in principle any of these norms could be suitable for a positive solution of the conjecture. The first item above is already well understood. The following estimates are very well known: $$ \mathrm{T}_2(\ell_2) =1, \ \mathrm{T}_2(\mathcal{S}_1^{\tilde k'\!, n}) \le \sqrt{\min(n, \tilde k')}. $$ Continuing with the second item, in Section \ref{Sec2.4.3}, Equation \eqref{Type_Int1/2}, we have already obtained a satisfactory bound for the type constant (with $n^2$ vectors in this case) of $\mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n}$ with $\alpha = (\varepsilon, \pi)_{\sfrac{1}{2}} $. We do not rule out the possibility that a similar bound applies to other $\alpha$'s, but we were not able to find a proof of that. The reason why we manage to understand the case $\alpha = (\varepsilon, \pi)_{\sfrac{1}{2}} $ better is purely technical in origin, and it is due to the nice behaviour of type constants under interpolation methods. A bound for the type constant of $\mathcal{S}_1^{\tilde k'\!, n } \otimes_\varepsilon \ell_2^{ k \tilde k'}$ is easier to obtain. 
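The $\sqrt{n}$ factorization of the identity through the Hilbert--Schmidt norm, which drives this estimate, has a simple commutative analogue on diagonal matrices, where the Schatten norms reduce to $\ell_p$ norms: $\|x\|_2 \le \|x\|_1 \le \sqrt{n}\,\|x\|_2$ by Cauchy--Schwarz, with both inequalities attained. A small Python check (illustrative only, not part of the proof):

```python
import math
import random

def norm1(x):
    return sum(abs(t) for t in x)

def norm2(x):
    return math.sqrt(sum(t * t for t in x))

n = 16
random.seed(0)
# ||x||_2 <= ||x||_1 <= sqrt(n) ||x||_2 for every x in R^n, so
# ||Id: l_2^n -> l_1^n|| * ||Id: l_1^n -> l_2^n|| = sqrt(n).
for _ in range(1000):
    x = [random.uniform(-1.0, 1.0) for _ in range(n)]
    assert norm2(x) <= norm1(x) + 1e-12
    assert norm1(x) <= math.sqrt(n) * norm2(x) + 1e-12

ones = [1.0] * n
print(norm1(ones) / norm2(ones))  # 4.0 = sqrt(16): the sqrt(n) factor is attained
```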
Taking into account that $\| \mathrm{Id} : \mathcal{S}_2^{\tilde k'\!, n} \rightarrow \mathcal{S}_1^{\tilde k'\!, n} \| \, \| \mathrm{Id} : \mathcal{S}_1^{\tilde k'\!, n} \rightarrow \mathcal{S}_2^{\tilde k'\!, n} \| = \sqrt{n}$ and that $\ell_2^{n\tilde k'} \otimes_\varepsilon \ell_2^{k \tilde k'} = M_{n\tilde k',k \tilde k'}$, we obtain $$ \mathrm{T}_2 (\mathcal{S}_1^{\tilde k'\!, n } \otimes_\varepsilon \ell_2^{ k \tilde k'}) \lesssim_{\log} \sqrt{n}. $$ Finally, we state our findings regarding the third item in the form of two propositions: \begin{proposition} For $\alpha = \mathfrak{S}_2^w$ or $\pi_2$, $$ \mathrm{T}_2 \left( ( \ell_2^{\tilde k'} \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} ) \otimes_\varepsilon \ell_2^{k \tilde k'} \right) \lesssim_{\log} \sqrt{n}. $$ \end{proposition} \begin{proof} The proof is as simple as noting that $\| \mathrm{Id}: \mathcal{S}_1^{\tilde k'\!, n} \rightarrow \ell_2^{n\tilde k'} \| \, \| \mathrm{Id}: \ell_2^{n\tilde k'} \rightarrow \mathcal{S}_1^{\tilde k'\!, n} \| \le \sqrt{n} $ and, for the $\alpha$'s in the statement of the proposition, $\ell_2^{\tilde k'} \otimes_{\alpha} \ell_2^{n \tilde k'} = \ell_2^{n \tilde k'^2}$. This provides the following bound for the quantity of interest: $$ \mathrm{T}_2 \left( ( \ell_2^{\tilde k'} \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} ) \otimes_\varepsilon \ell_2^{k \tilde k'} \right) \le \sqrt{ n } \ \mathrm{T}_2 \left( \ell_2^{n \tilde k'^2} \otimes_\varepsilon \ell_2^{k \tilde k'} \right) \lesssim \sqrt{n \log(n k \tilde k'^2)}. $$ \end{proof} \begin{proposition} For any $\alpha$ in between $\varepsilon $ and $\pi_2$: $$ \mathrm{T}_2 \left( (\ell_1^{n} \otimes_{\alpha} \ell_1^n ) \otimes_\varepsilon \ell_2^{ k \tilde k'} \right) \lesssim_{\log} \sqrt{n}. 
$$ \end{proposition} \begin{proof} The reason why the claim is valid for $\alpha$ in a wide range of norms is Grothendieck's inequality, which makes all these norms collapse: $$ \ell_1^{n} \otimes_{\alpha} \ell_1^n \approx \ell_1^{n} \otimes_{\varepsilon} \ell_1^n . $$ Therefore, it is enough to study the type-2 constant of the space $ \ell_1^{n} \otimes_{\varepsilon} \ell_1^n \otimes_\varepsilon \ell_2^{k\tilde k'}$. For this, we can isomorphically embed $\ell_1^n $ into $\ell_\infty^{c^n}$ for some constant $c > 2$. With this, we obtain $ \ell_1^{n} \otimes_{\varepsilon} \ell_1^n \otimes_\varepsilon \ell_2^{k \tilde k'} \approx \ell_\infty^{c^n} \otimes_\varepsilon \ell_\infty^{c^n} \otimes_\varepsilon \ell_2^{k \tilde k'} = \ell_\infty^{c^{2n}} ( \ell_2^{ k \tilde k'} )$. To conclude, we note that the type-2 constant of this last space is, up to logarithmic factors, of order $ \sqrt{n}$. \end{proof} \subsection{Volume ratio} Although the Banach spaces that appear in this thesis are predominantly complex, for the sake of simplicity, we will restrict ourselves to real spaces in this section. There exist standard tools \cite{Michal41,Taylor43,Wenzel95,Munoz1999} to transfer results in this setting to the complex domain, although some technicalities might appear in that process \cite{Wenzel97}. Since our aim here is restricted to showing some evidence in favour of our conjecture, we do not think that these intricacies add anything of essential importance to the following discussion. A standard approach to understanding the type/cotype properties of a space $X$ consists in computing its volume ratio, $\mathrm{vr}(X)$, a notion that originated in \cite{Szarek_78,Szarek_80}. The reason is that this parameter provides a lower bound for the cotype-2 constant. 
This is the content of the following result due to Bourgain and Milman: \begin{theorem}[\cite{Bourgain_87}] For a Banach space $X$, $$ \mathrm{C}_2(X) \log\left( \mathrm{C}_2(X) \right) \gtrsim \mathrm{vr} (X) . $$ \end{theorem} Taking into account the duality between type and cotype constants, Equation \eqref{typecotype_duality2}, the last result translates into a lower bound for the type-2 constant of the dual space: $$ \mathrm{T}_2(X) \ge \mathrm{C}_2(X^*) \gtrsim_{\log} \mathrm{vr} (X^*), $$ giving us another technique to try to disprove Conjecture \eqref{Conjecture_3}. In this section we upper bound the volume ratio of various \emph{valid} norms, obtaining results again compatible with a positive resolution of the conjecture. We now make a brief digression about the relation between volume ratio and cotype. In a few words, this relation is still far from being well understood. In fact, it was asked in \cite{Szarek_80} whether $\mathrm{vr}(X)$ can be estimated from the cotype-2 constant of the space and, in the more recent work \cite{Tomczak16}, the authors comment on the question of whether bounded volume ratio implies cotype $q$ for every $q>2$. Studying these questions further is an extremely interesting avenue to tackle the problems we are concerned with in this work, at the same time shedding light on the relation between two very fundamental notions in local Banach space theory. We start by defining the volume ratio of a normed space $X$, $\mathrm{vr}(X)$. Given a $d$-dimensional Banach space $X$, \beq\label{Vr_Eq0} \mathrm{vr} (X) = \left( \frac{\mathrm{vol}_d (B_X) }{ \mathrm{vol}_d (\mathcal{E}_X) } \right)^{1/d}, \eeq where $\mathcal{E}_X$ is the ellipsoid of maximal volume contained in $B_X$ and $\mathrm{vol}_d( \, \cdot \, )$ denotes the $d$-dimensional Lebesgue measure. 
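As a toy instance of \eqref{Vr_Eq0}: for $X = \ell_\infty^d$ the unit ball is the cube $[-1,1]^d$, the ellipsoid of maximal volume is, by symmetry, the Euclidean unit ball, and $\mathrm{vr}(\ell_\infty^d)$ grows like $\sqrt{d}$. A stdlib-only numerical sketch (illustrative, not from the text):

```python
import math

def vr_linf(d):
    """Volume ratio of l_inf^d: the unit ball is the cube [-1,1]^d and the
    maximal inscribed ellipsoid is (by symmetry) the Euclidean unit ball."""
    vol_cube = 2.0 ** d
    vol_ball = math.pi ** (d / 2.0) / math.gamma(d / 2.0 + 1.0)
    return (vol_cube / vol_ball) ** (1.0 / d)

print(round(vr_linf(2), 4))           # 1.1284 = (4/pi)^{1/2}
print(round(vr_linf(100) / 10.0, 2))  # ~0.5: vr(l_inf^d) grows like sqrt(d)
```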
We focus again on spaces of the form $\big( \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} \big)\otimes_\varepsilon \ell_2^{ k \tilde k'}$, as in the previous section. We prove: \begin{theorem}\label{Thm_vr} Let $\alpha$ be a \emph{tensor norm} such that, for any $ x \in \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n}$: \begin{enumerate} \item $ \| x \|_{\mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} } \le \| x \|_{\mathcal{S}_1^{\tilde k'\!, n } \otimes_{\pi} \mathcal{S}_1^{\tilde k'\!, n} } ^{1/2} \| x \|_{\mathcal{S}_1^{\tilde k'\!, n } \otimes_{\varepsilon} \mathcal{S}_1^{\tilde k'\!, n} } ^{1/2} ; $ \item $ \| x \|_{ \ell_2^{n^2\tilde k'^2} } \le \| x \|_{\mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} } .$ \end{enumerate} Then, considering $X = \left( \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} \right) \otimes_\varepsilon \ell_2^{k \tilde k'}$, $$ \mathrm{vr} (X^*) \lesssim n^{3/4}. $$ \end{theorem} The proof uses several standard tools from geometric Banach space theory, mainly following the approach of \cite{Tomczak16}. But before going into the proof, we note that some of our \emph{valid} norms indeed fulfil the conditions of the theorem. Some examples are $\mathcal{S}_1^{\tilde k'\!, n } \otimes_{\mathfrak{S}_2^{w}} \mathcal{S}_1^{\tilde k'\!, n} $ or $\mathcal{S}_1^{\tilde k'\!, n } \otimes_{\pi_2} \mathcal{S}_1^{\tilde k'\!, n} $. An important feature of these spaces is the fact that, by construction, they \emph{have enough symmetries}. This will be exploited in the following proof with no further mention. The reader can find some additional information in Appendix \ref{Appendix_EnoughSym}. \begin{proof} We start by noticing that $\alpha$ being a tensor norm translates into the fact that $X$ \emph{has enough symmetries}. 
This means that the only operators on that space commuting with every isometry are the multiples of the identity. The same happens with the dual $X^*$. Next we give an alternative way to bound the volume ratio using this property. To simplify notation, denote $d = \dim(X) = n^2 k\tilde k'^{3}$. Then, we can bound \eqref{Vr_Eq0} as follows: \begin{align} \mathrm{vr} (X^*) &\stackrel{(i.)}{=} \left( \frac{\mathrm{vol}_d (B_{X^*}) }{ \mathrm{vol}_d (B_{\ell_2^d}) } \right)^{1/d} \left\| \mathrm{Id}: \ell_2^d \rightarrow X^* \right\| \nonumber\\ &\!\stackrel{(ii.)}{\le} \left( \frac{ \mathrm{vol}_d (B_{\ell_2^d}) }{ \mathrm{vol}_d (B_{X}) } \right)^{1/d} \left\| \mathrm{Id}: \ell_2^d \rightarrow X^* \right\| \nonumber \\ &\!\stackrel{(iii.)}{\le} \frac{\left\| \mathrm{Id}: \ell_2^d \rightarrow X^* \right\|}{\sqrt{d}} \left( \frac{ 1 }{ \mathrm{vol}_d (B_{X}) } \right)^{1/d} \nonumber \\\label{Vr_Eq1} &\!\stackrel{(iv.)}{\le} \frac{\left\| \mathrm{Id}: \ell_2^d \rightarrow X^* \right\|}{\sqrt{d}} \ \mathbb{E} \, \| G \|_{X} \, , \end{align} where $G = \sum_{i,j,k,l,m} g_{ijklm} |i\rangle\langle j|\otimes |k\rangle\langle l| \otimes \langle m |$ is a tensor in $X$ with i.i.d. Gaussian entries $g_{ijklm}$. The expectation is over these random variables. With respect to the chain of claims implicit in the previous manipulation: (i.) follows from the fact that the maximal volume ellipsoid $\mathcal{E}_{X^*}$ coincides with $\left\| \mathrm{Id}: \ell_2^d \rightarrow X^* \right\|^{-1} B_{\ell_2^d}$ when $X^*$ has enough symmetries \cite[Section 16]{Tomczak1989banach}; in (ii.) we have used the famous Blaschke-Santaló inequality \cite[Section 7]{pisier89_book}; in (iii.), the standard volume estimate for the Euclidean ball, $\mathrm{vol}_d (B_{\ell_2^d}) \approx d^{-d/2}$; and (iv.) follows from Lemma 3.4 in \cite{Tomczak16}. 
As a consequence, to obtain the stated bound we have to estimate the quantities $\| \mathrm{Id}: \ell_2^d \rightarrow X^* \|$ and $ \mathbb{E} \, \| G \|_{X}$. \begin{itemize} \item Upper bounding $\left\| \mathrm{Id}: \ell_2^d \rightarrow X^* \right\|$: We show two complementary bounds for this quantity. The first one uses the second condition in the statement of the theorem, that can be equivalently stated as: $ \big \| \mathrm{Id} : \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} \longrightarrow \ell_2^{n^2\tilde k'^2} \big\| \le 1.$ This allows us to bound: \begin{align*} \left\| \mathrm{Id}: \ell_2^d \rightarrow X^* \right\| &= \left\| \mathrm{Id}: X \rightarrow \ell_2^d \right\| \\ &= \left\| \mathrm{Id}: \left( \mathcal{S}_1^{\tilde k'\!, n} \otimes_\alpha \mathcal{S}_1^{\tilde k'\!, n} \right) \otimes_\varepsilon \ell_2^{k\tilde k'} \longrightarrow \ell_2^{n^2 \tilde k'^2 k\tilde k'} \right\| \\ & \le \left\| \mathrm{Id}: \ell_2^{n^2\tilde k'^2} \otimes_\varepsilon \ell_2^{k\tilde k'} \longrightarrow \ell_2^{n^2 \tilde k'^2 k\tilde k'} \right\| \\ &\le \sqrt{k \tilde k'}. \end{align*} The mentioned hypothesis was used in the first inequality above. Our second bound comes from the observation that the operator norm we want to bound is indeed upper bounded by the 2-summing norm of the identity between $\mathcal{S}_1^{\tilde k'\!, n} \otimes_\alpha \mathcal{S}_1^{\tilde k'\!, n}$ and $\ell_2^{ n^2 \tilde k'^2}$. 
We can alternatively understand the studied norm as: \begin{align*} \left\| \mathrm{Id}: \ell_2^d \rightarrow X^* \right\| &= \left\| \mathrm{Id}: X \rightarrow \ell_2^d \right\| \\ &= \left\| \mathrm{Id}: \ell_2^{k\tilde k'} \otimes_\varepsilon \left( \mathcal{S}_1^{\tilde k'\!, n} \otimes_\alpha \mathcal{S}_1^{\tilde k'\!, n} \right) \longrightarrow \ell_2^{k\tilde k'}( \ell_2^{n^2 \tilde k'^2} ) \right\| \\ & \le \sup_{ k\in\mathbb{N} } \left\| \mathrm{Id}: \ell_2^{k} \otimes_\varepsilon \left( \mathcal{S}_1^{\tilde k'\!, n} \otimes_\alpha \mathcal{S}_1^{\tilde k'\!, n} \right) \longrightarrow \ell_2^{k}( \ell_2^{n^2 \tilde k'^2} ) \right\| \\ &= \pi_2\left( \mathrm{Id}: \mathcal{S}_1^{\tilde k'\!, n} \otimes_\alpha \mathcal{S}_1^{\tilde k'\!, n} \longrightarrow \ell_2^{n^2 \tilde k'^2} \right), \end{align*} where the last equality is simply the definition of the 2-summing norm of the indicated map -- recall \eqref{Def_2summing}. While we no longer need the hypothesis used before, we now need to invoke the tensor norm properties of $\alpha$. Fortunately, thanks to this property\footnote{ See again Appendix \ref{Appendix_EnoughSym} for clarification.}, Lemma 5.2 of \cite{Defant06} provides us with a satisfactory way to compute the above norm. Since $\mathcal{S}_1^{\tilde k'\!, n} \otimes_\alpha \mathcal{S}_1^{\tilde k'\!, n}$ as well as $\ell_2^{n^2 \tilde k'^2}$ have \emph{enough symmetries in the orthogonal group} -- see Appendix \ref{Appendix_EnoughSym} --, the cited lemma allows us to write the following identity: $$ \pi_2\left( \mathrm{Id}: \mathcal{S}_1^{\tilde k'\!, n} \otimes_\alpha \mathcal{S}_1^{\tilde k'\!, n} \longrightarrow \ell_2^{n^2 \tilde k'^2} \right) = \frac{ n \tilde k' }{ \left\| \mathrm{Id}: \ell_2^{n^2 \tilde k'^2} \longrightarrow \mathcal{S}_1^{\tilde k'\!, n} \otimes_\alpha \mathcal{S}_1^{\tilde k'\!, n} \right\| }. 
$$ Taking into account the two bounds above, we can state that, under the conditions of the theorem: \beq\label{Vr_Eq2} \left\| \mathrm{Id}: \ell_2^d \rightarrow X^* \right\| \le \min\left(\sqrt{k\tilde k'}, \ \frac{ n \tilde k' }{ \left\| \mathrm{Id}: \ell_2^{n^2 \tilde k'^2} \longrightarrow \mathcal{S}_1^{\tilde k'\!, n} \otimes_\alpha \mathcal{S}_1^{\tilde k'\!, n} \right\| } \right). \eeq \item Upper bounding $ \mathbb{E} \, \| G \|_{X}$: The upper estimate of this quantity follows from Chevet's inequality \cite{Chevet78}; see also \cite[Section 43]{Tomczak1989banach}. According to it: \begin{align*} \mathbb{E} \, \| G \|_{X} &= \mathbb{E} \, \| G \|_{\left( \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} \right) \otimes_\varepsilon \ell_2^{k \tilde k'}} \\ &\le \sup_{\varphi \in B_{\left( \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} \right)^*} } \left( \sum_{i,j,k,l} \Big| \varphi\left( |i\rangle\langle j|\otimes |k\rangle\langle l| \right) \Big|^2 \right)^{1/2} \, \mathbb{E} \, \Big\| \, \sum_m g_m \langle m | \, \Big\|_{ \ell_2^{k \tilde k'} } \\ & + \sup_{\varphi \in B_{\left( \ell_2^{k\tilde k'} \right)^*} } \left( \sum_{m} \left| \varphi\left( \langle m | \right) \right|^2 \right)^{1/2} \, \mathbb{E} \, \Big\| \, \sum_{i,j,k,l} g_{ijkl} |i\rangle\langle j|\otimes |k\rangle\langle l| \, \Big\|_{ \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} }. 
\end{align*} Here we note that the 2-sums above coincide with the norms of the following identity maps: \begin{align*} \sup_{\varphi \in B_{\left( \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} \right)^*} } \left( \sum_{i,j,k,l} \left| \varphi\left( |i\rangle\langle j|\otimes |k\rangle\langle l| \right) \right|^2 \right)^{1/2} &= \big\| \mathrm{Id}: \left( \mathcal{S}_1^{\tilde k'\!, n} \otimes_\alpha \mathcal{S}_1^{\tilde k'\!, n} \right)^* \longrightarrow \ell_2^{n^2 \tilde k'^2} \big\|, \\ \sup_{\varphi \in B_{\left( \ell_2^{k\tilde k'} \right)^*} } \left( \sum_{m} \left| \varphi\left( \langle m | \right) \right|^2 \right)^{1/2} &= \big\| \mathrm{Id}: \ell_2^{k\tilde k'} \longrightarrow \ell_2^{k \tilde k'} \big\| =1 . \end{align*} Furthermore, to simplify the presentation we also introduce the notation $ \boldsymbol{g}= \sum_{i,j,k,l} g_{ijkl} |i\rangle\langle j|\otimes |k\rangle\langle l| $. With these comments, we can write \begin{align*} \mathbb{E} \, \| G \|_{X} & \le \big\| \mathrm{Id}: \left( \mathcal{S}_1^{\tilde k'\!, n} \otimes_\alpha \mathcal{S}_1^{\tilde k'\!, n} \right)^* \longrightarrow \ell_2^{n^2 \tilde k'^2} \big\| \ \mathbb{E} \ \Big\| \, \sum_m g_m \langle m | \, \Big\|_{ \ell_2^{k \tilde k'} } \ \\ & + \ \big\| \mathrm{Id}: \ell_2^{k\tilde k'} \longrightarrow \ell_2^{k \tilde k'} \big\| \ \mathbb{E} \ \| \boldsymbol{g} \|_{ \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} } \\ & \approx \big\| \mathrm{Id}: \left( \mathcal{S}_1^{\tilde k'\!, n} \otimes_\alpha \mathcal{S}_1^{\tilde k'\!, n} \right)^* \longrightarrow \ell_2^{n^2 \tilde k'^2} \big\| \ \sqrt{k\tilde k'} \ + \ \mathbb{E} \ \| \boldsymbol{g} \|_{ \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} }. \end{align*} Now, it only remains to bound $\mathbb{E} \, \| \boldsymbol{g} \|_{ \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} }$. 
For that, we make use of hypothesis 1. in the statement, that is: \begin{align*} \mathbb{E} \, \| \boldsymbol{g} \|_{ \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\alpha} \mathcal{S}_1^{\tilde k'\!, n} } &\le \mathbb{E} \, \left( \| \boldsymbol{g} \|_{ \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\pi} \mathcal{S}_1^{\tilde k'\!, n} }^{1/2} \| \boldsymbol{g} \|_{ \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\varepsilon} \mathcal{S}_1^{\tilde k'\!, n} }^{1/2} \right) \\ &\le \left( \mathbb{E} \, \| \boldsymbol{g} \|_{ \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\pi} \mathcal{S}_1^{\tilde k'\!, n} } \right)^{1/2} \left( \mathbb{E} \, \| \boldsymbol{g} \|_{ \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\varepsilon} \mathcal{S}_1^{\tilde k'\!, n} } \right)^{1/2} . \end{align*} The first term can be bounded as follows: \begin{align*} \mathbb{E} \, \| \boldsymbol{g} \|_{ \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\pi} \mathcal{S}_1^{\tilde k'\!, n} } &= \mathbb{E} \, \| \boldsymbol{g} \|_{ \ell_2^n \otimes_\pi \ell_2^n \otimes_\pi \mathcal{S}_1^{\tilde k'} } \le \sqrt{\tilde k'} \ \mathbb{E} \, \| \boldsymbol{g} \|_{ \ell_1^{n^2} ( \mathcal{S}_2^{\tilde k'} ) } \le n \sqrt{\tilde k'} \ \mathbb{E} \, \| \boldsymbol{g} \|_{ \ell_2^{n^2} ( \mathcal{S}_2^{\tilde k'} ) } \\ & = n \sqrt{\tilde k'} \ \mathbb{E} \, \| \boldsymbol{g} \|_{ \ell_2^{n^2 \tilde k'^2} } \lesssim n \sqrt{\tilde k'} n\tilde k' = n^{2} \tilde k'^{3/2}. 
\end{align*} For the other term, we use again Chevet's inequality: \begin{align*} \mathbb{E} \, \Big\| \, \sum_{i,j,k,l} g_{ijkl} |i\rangle\langle j|\otimes |k\rangle\langle l| \, \Big\|_{ \mathcal{S}_1^{\tilde k'\!, n } \otimes_{\varepsilon} \mathcal{S}_1^{\tilde k'\!, n} } &\le 2 \left\| \mathrm{Id}: \big( \mathcal{S}_1^{\tilde k'\!, n} \big)^* \longrightarrow \ell_2^{n \tilde k'} \right\| \ \mathbb{E} \, \Big\| \, \sum_{i,j} g_{ij} |i\rangle\langle j| \, \Big\|_{ \mathcal{S}_1^{\tilde k'\!, n } } \\ & = 2 \sqrt{n} \ \mathbb{E} \, \Big\| \, \sum_{i,j} g_{ij} |i\rangle\langle j| \, \Big\|_{ \mathcal{S}_1^{\tilde k'\!, n } } \\ & \lesssim \sqrt{n} \, n \sqrt{\tilde k'} = n^{3/2} \tilde k'^{1/2} . \end{align*} With the previous bounds, we obtain: \begin{equation}\label{Vr_Eq3} \mathbb{E} \, \| G \|_{X} \ \lesssim \ \big\| \mathrm{Id}: \ell_2^{n^2 \tilde k'^2} \longrightarrow \mathcal{S}_1^{\tilde k'\!, n} \otimes_\alpha \mathcal{S}_1^{\tilde k'\!, n} \big\| \ \sqrt{k\tilde k'} + n^{7/4} \tilde k'. 
\end{equation} \end{itemize} To finish, we plug the information given by \eqref{Vr_Eq2} and \eqref{Vr_Eq3} into \eqref{Vr_Eq1}: \begin{align*} \mathrm{vr} (X^*) &{\le} \frac{\left\| \mathrm{Id}: \ell_2^d \rightarrow X^* \right\|}{\sqrt{d}} \ \mathbb{E} \, \| G \|_{X} \, \\ &\le \frac{ \min\left(\sqrt{k\tilde k'}, \ \frac{ n \tilde k' }{ \left\| \mathrm{Id}: \ell_2^{n^2 \tilde k'^2} \longrightarrow \mathcal{S}_1^{\tilde k'\!, n} \otimes_\alpha \mathcal{S}_1^{\tilde k'\!, n} \right\| } \right) }{n \tilde k' \sqrt{k \tilde k'}} \ \left( \big\| \mathrm{Id}: \ell_2^{n^2 \tilde k'^2} \longrightarrow \mathcal{S}_1^{\tilde k'\!, n} \otimes_\alpha \mathcal{S}_1^{\tilde k'\!, n} \big\| \ \sqrt{k\tilde k'} \ + \ n^{7/4} \tilde k' \right) \\ & \le \frac{ n \tilde k' }{n \tilde k' \sqrt{k \tilde k'} \left\| \mathrm{Id}: \ell_2^{n^2 \tilde k'^2} \longrightarrow \mathcal{S}_1^{\tilde k'\!, n} \otimes_\alpha \mathcal{S}_1^{\tilde k'\!, n} \right\| } \ \big\| \mathrm{Id}: \ell_2^{n^2 \tilde k'^2} \longrightarrow \mathcal{S}_1^{\tilde k'\!, n} \otimes_\alpha \mathcal{S}_1^{\tilde k'\!, n} \big\| \ \sqrt{k\tilde k'} \\&+ \frac{\sqrt{k\tilde k'}}{n \tilde k' \sqrt{k \tilde k'}} n^{7/4} \tilde k' = 1 + n^{3/4}, \end{align*} which is enough to conclude the proof of the theorem. \end{proof} \section{Discussion}\label{Sec7} In this work we have proposed a protocol for PV, referred to as $G_{Rad}$ throughout the text, and proved lower bounds on the quantum resources necessary to break it. Our bounds, appearing in Theorem \ref{mainThm_2}, do not answer Question \ref{question1} in a definite way and, in particular, are not enough to prove $G_{Rad}$ \emph{secure for all practical purposes}. 
The reason is that the bounds presented in Theorem \ref{mainThm_2} depend on some additional properties of the strategy under consideration: the parameters $\sigma_{\mathcal{S}}^i$, $\sigma_{\mathcal{S}}^{ii}$, related to the regularity of the strategy when regarded as a vector-valued assignment on the Boolean hypercube, cf. Section \ref{Sec4}. However, our Theorem \ref{mainThm_2} is strong enough to encapsulate some previous results. As mentioned in Section \ref{Sec1}, the hypotheses of Corollary \ref{mainCor} are satisfied by the teleportation-based attacks of \cite{Buhrman2011} and \cite{KonigBeigi} and also by Universal Programmable Quantum Processors, thereby rederiving some results in \cite{Buhrman2011,KonigBeigi,Kubicki_19}. Furthermore, we have related the final solution of Question \ref{question1} to the type/cotype properties of specific Banach spaces and, in fact, the obtained results led us to put forward a conjecture about these mathematical objects. The positive solution of this conjecture would imply the \emph{security for all practical purposes} of $G_{Rad}$. This would represent major progress towards answering Question \ref{question1} -- see Section \ref{Sec6} for a formal statement of the conjecture and details about the connection with the security of $G_{Rad}$. In that section we have also provided some estimates supporting the conjecture. Concretely, we have obtained bounds for the type-2 constants of some subspaces involved in the conjecture, as well as bounds for the volume ratio of the duals of the spaces appearing there. This last estimate relates our conjecture, and therefore the problem of the security of $G_{Rad}$, to open problems in Banach space theory concerning the relation between cotype and volume ratio. The future direction for this work is clear: trying to resolve the status of the security of $G_{Rad}$. 
Starting with the setting we introduced in Section \ref{Sec6}, the most direct approach consists in developing new techniques to estimate the type/cotype constants of tensor norm spaces. This is in fact an interesting avenue also in the context of local Banach space theory, and we hope that this work can serve as motivation to pursue it. Extending the family of spaces whose type/cotype constants can be accurately estimated might shed new light on several poorly understood questions in this context, such as the relation between volume ratio and cotype or the prevalence of type/cotype in tensor norms. Coming back to our $\sigma$--dependent bounds, Theorem \ref{mainThm_2}, it would also be desirable to achieve a better understanding of the regularity parameters introduced there, $\sigma_{\mathcal{S}}^i$ and $\sigma_{\mathcal{S}}^{ii}$. For example, it would be very illuminating to understand how the structure of strategies is restricted under the assumption that these parameters are small (in the sense of Corollary \ref{mainCor}), or whether general strategies can be \emph{made more regular} in order to have a better behaviour in terms of these parameters. Another interesting question in this direction is understanding whether $\sigma_{\mathcal{S}}^i$, $\sigma_{\mathcal{S}}^{ii}$ can be related to some physical properties of the strategies involved, such as their robustness against noise or the complexity of the operations performed. Beyond the specific setting studied here, we have introduced a whole toolbox of constructions and connections that can be of interest in other related contexts. Firstly, most of the ideas we have used to study $G_{Rad}$ can be explored in other quantum games. More speculatively, the recent connection between PBC and AdS/CFT \cite{May_2019,May_2020} seems to indicate that the tools we use here might have potential application to the understanding of holographic duality. 
Along this line, we can ask, for example, whether the notions of regularity studied here can be related to properties of the mapping between bulk and boundary theories in this context. In \cite{May_2020} it was claimed that properties of the AdS/CFT holographic correspondence allow one to find cheating strategies that break PBC with polynomial resources. According to that, the exponential lower bounds in Corollary \ref{mainCor} open the possibility of imposing restrictions on the regularity of such a holographic correspondence. This would be in consonance with a recent result of Kliesch and Koenig \cite{Kliesch20}, based on previous work of Jones \cite{Jones18}. In \cite{Kliesch20}, the continuum limit of discrete tensor-network toy models for holography was studied, finding that, generically, this limit is extremely discontinuous. \paragraph{Acknowledgements.} We thank Jop Briët for kindly sharing personal notes on Pisier's method for bounding the cotype-2 constant of the projective tensor product of type-2 spaces. We acknowledge financial support from MICINN (grants MTM2017-88385-P and SEV-2015-0554), from Comunidad de Madrid (grant QUITEMAD-CM, ref. S2018/TCS-4342), and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 648913). A.M.K. also acknowledges support from Spanish MICINN project MTM2014-57838-C2-2-P.
\section{Introduction} For the past several years, there has been much interest in applying the powerful field theory/gravity dualities developed in the late 90s and early 2000s to field theories without Lorentz invariance. These non-relativistic forms of AdS/CFT, often collectively referred to as AdS/CMT due to their relevance for condensed matter systems, have provided a new tool for examining strongly coupled non-relativistic systems (see, e.g.~\cite{Hartnoll:2009sz,McGreevy:2009xe,Huijse:2011hp,Sachdev:2010ch} and references therein). Although many aspects of the bulk-boundary dictionary familiar from AdS/CFT carry forward to these systems without alteration, some aspects differ strongly. The first obvious difference is the symmetry group; since the goal is to consider spacetime duals to nonrelativistic systems, the asymptotic symmetries of the spacetime should be nonrelativistic. As a consequence, the spatial and temporal components of the metric near the boundary must scale differently with the radius. This different scaling in fact means the notion of a boundary itself is altered; using the Penrose definition of a conformal boundary leads to a degenerate boundary metric. However, more careful treatments have shown that there is still a reasonable notion of a boundary as the location where metric components go to infinity, and holographic calculations can be performed using suitable prescriptions \cite{Baggio:2011cp,Papadimitriou:2010as,Papadimitriou:2011qb,Ross:2009ar,Ross:2011gu,Mann:2011hg,Chemissany:2012du}. The spacetimes studied as possible nonrelativistic duals fall into two main classes: those which have Lifshitz scaling symmetry, and those which have the larger Schr\"odinger symmetry. There are also other spacetimes in the literature, including the warped AdS spacetimes \cite{Anninos:2008fx,Chakrabarti:2009ww,Liu:2009kc,Anninos:2010pm,Compere:2013bya}, which exhibit anisotropy between time and space. 
In this paper, we will concentrate on spacetimes which have Lifshitz symmetry at least in some region, but many of our conclusions apply to more general spacetimes with scaling differences between space and time. One of the best studied examples of a boundary-bulk duality system with space/time anisotropy is the so-called Lifshitz spacetime, given by \begin{equation} ds_{d+2}^{2}=-\left(\fft{L}r\right)^{2z}dt^{2}+\left(\fft{L}r\right)^{2}(d\vec{x}_d^{2}+dr^{2}). \end{equation} It was first proposed in \cite{Kachru:2008yh} and has been extensively studied since. In order to remove some of the concerns about degenerate boundary behavior, \cite{Braviner:2011kz,Singh:2010cj,Singh:2013iba} have considered replacing the near-boundary UV region of the spacetime with an asymptotically AdS spacetime. Other numerical constructions of these backgrounds are available in \cite{Goldstein:2009cv,Harrison:2012vy,Bhattacharya:2012zu,Knodel:2013fua}. Additionally, there are a set of ``hyperscaling-violating'' solutions which still have a Lifshitz-like symmetry, proposed in \cite{Narayan:2012hk} and studied further in \cite{Dong:2012se,Bueno:2012sd,Dey:2013oba,Edalati:2012tc}. We will consider an ansatz which allows for analysis of all these cases. Much recent progress has been made in creating a complete bulk/boundary dictionary for nonrelativistic systems \cite{Ross:2009ar,Baggio:2011cp,Ross:2011gu,Mann:2011hg}. In the well studied case of Lorentzian AdS/CFT, an important part of this dictionary is the correspondence between normalizable modes, which scale as $r^{\Delta_{+}}$ near the boundary, and states in the Hilbert space of the dual field theory. In particular, a quantized bulk field $\phi$ can be mapped to its corresponding boundary operator $O$ via \begin{equation} \phi\mapsto O=\lim_{r\rightarrow0}r^{-\Delta_{+}}\phi\label{eq:bulk to boundary map}. 
\end{equation} The remarkable fact here is that both operators can be quantized in terms of the same creation/annihilation operators, which implies an isomorphism between the Fock space representations of bulk and boundary Hilbert spaces \cite{Balasubramanian:1998sn,Balasubramanian:1998de}. Moreover, the map \eqref{eq:bulk to boundary map} can be inverted in position space. As a result, local quantum fields in the bulk can be expressed in terms of boundary operators with the help of a so-called smearing function $K$ \cite{Bena:1999jv,Hamilton:2005ju,Hamilton:2006az}. Consequently, we can study CFTs to learn something about their gravitational duals \cite{Balasubramanian:1999ri,Banks:1998dd,Fitzpatrick:2011jn}. If AdS/CMT is to be understood as a `true' equivalence between a field theory and a gravitational theory, rather than just a set of prescriptions to compute condensed matter quantities, one should expect that a similar statement can be made for nonrelativistic systems. In other words, the field theory should somehow contain all the relevant information about the gravitational theory. In this paper, we address this issue by investigating the extent of the reconstructability of bulk information from boundary data in nonrelativistic spacetimes. A simple argument for why this procedure is not straightforward can be made by studying geodesics in the corresponding backgrounds. For Lifshitz spacetime, the effective potential is given by \begin{equation} V_{{\rm eff}}(r)=\left(\fft{L}r\right)^{2z}\kappa+\left(\fft{L}r\right)^{2(z-1)}\vec{p}\,^{2}. \label{eq:Veff geo for lif} \end{equation} Null geodesics ($\kappa=0$) with nonzero transverse momentum $p$ turn around at finite $r$ and never reach the boundary (see Figures~\ref{fig:Vpnull} and \ref{fig:nullcurves}). This is a result of the nonrelativistic nature of the dual theory, which manifests itself in the fact that the effective speed of light $g_{tt}/g_{xx}$ diverges as $r\rightarrow0$. 
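The turnaround is easy to make quantitative. Setting $V_{\rm eff}(r_0)=E^2$ for a null geodesic gives $r_0=L\,(p^2/E^2)^{1/(2(z-1))}$ for $z>1$, so every mode with $\vec p\neq0$ has a finite turning radius, which approaches the boundary only as a fractional power of $E/p$. A minimal Python sketch of this (the function names are ours, not part of the analysis):

```python
import math

def v_eff_null(r, p2, z, L=1.0):
    # Effective potential for null geodesics: (L/r)^(2(z-1)) * p^2
    return (L / r) ** (2 * (z - 1)) * p2

def turning_radius(E, p2, z, L=1.0):
    # Solve V_eff(r0) = E^2 for z > 1: r0 = L * (p^2/E^2)^(1/(2(z-1))).
    # For z = 1 the momentum term is flat, and a null geodesic with
    # p^2 < E^2 reaches the boundary r -> 0 without turning around.
    assert z > 1 and p2 > 0
    return L * (p2 / E ** 2) ** (1.0 / (2 * (z - 1)))

r0 = turning_radius(E=10.0, p2=1.0, z=2.0)
# At the turning point the radial kinetic term E^2 - V_eff vanishes:
# v_eff_null(r0, 1.0, 2.0) == 100.0 == E^2, up to rounding
```

Larger transverse momentum pushes the turning point deeper into the bulk, which is the classical seed of the trapped modes discussed later in the text.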
Therefore, information about the transverse direction of the bulk geometry can never reach an observer at the boundary. \begin{figure}[t] \centering \includegraphics[height=6cm]{lifgeopotential} \caption{\label{fig:Vpnull}Effective potential \eqref{eq:Veff geo for lif} for null geodesics ($\kappa=0$) in AdS ($z=1$) and Lifshitz spacetimes ($z=2,3,4$). In Lifshitz, light rays sent from the bulk in any nonradial direction have to turn around at finite $r$ and can never reach the boundary.} \end{figure} Quantum mechanically the picture is different. In general, wavefunctions are allowed to tunnel through any classically forbidden region to reach the boundary, so there is hope that bulk reconstruction is possible after all. However, as we will demonstrate, at large momenta the imprint these tunneling modes leave at the boundary is exponentially small and as a consequence, a smearing function cannot be constructed. Our arguments closely follow those of \cite{Bousso:2012mh,Leichenauer:2013kaa}, where first steps towards generalizing smearing functions to spaces other than pure AdS were made. Our analysis for the case of pure Lifshitz spacetime can be easily generalized to show that smearing functions do not exist for any geometry that allows for `trapped modes', that is, modes that have to tunnel through a momentum-barrier in the potential to reach the boundary. In \cite{Leichenauer:2013kaa}, the authors show that the smearing function in their spherically symmetric spacetimes can indeed become well-defined, at least in some bulk region, once they change from an AdS-Schwarzschild solution to a nonsingular asymptotically AdS spacetime. Our case, however, does not allow such a resolution. Importantly, the smearing function in Lifshitz remains ill-defined everywhere if we resolve the tidal singularity \cite{Kachru:2008yh,Copsey:2010ya,Horowitz:2011gh} into an $\mathrm{AdS}_{2}\times\mathbb{R}^{d}$ or $\mathrm{AdS}_{d+2}$ region. 
It also remains ill-defined everywhere if we replace the near-boundary region with an asymptotic $\mathrm{AdS}_{d+2}$ region, or if we do both replacements at once \cite{Harrison:2012vy,Bhattacharya:2012zu,Knodel:2013fua}. The problem we encounter when trying to construct a smearing function is related to modes with large transverse momentum. Introducing a momentum-cutoff $\Lambda$, however, will force us to give up the ability to reconstruct full bulk locality in the transverse direction. The outline of this paper is as follows: In section \ref{sec:classical}, we discuss the idea of bulk reconstruction via classical geodesics in Lifshitz spacetimes. We show that there are null geodesics that cannot reach the boundary. We generalize this statement to flows involving Lifshitz regions, as well as more general nonrelativistic spacetimes with planar symmetry, considering the constraints arising from the null energy condition. In section \ref{sec:quantum}, we turn to the quantum picture and study solutions of the scalar field equations for the same class of spacetimes. In particular, we show analytically that for $z=2$ Lifshitz, there are modes that have to tunnel through a momentum-barrier in the potential to reach the boundary and are thus exponentially suppressed. We generalize this result to arbitrary $z$ using the WKB approximation. In section \ref{sec:sflifshitz}, we review the construction of smearing functions via the mode-sum approach and attempt to construct a Lifshitz smearing function. Using WKB methods, we show that this attempt fails due to the existence of `trapped modes', which have exponentially small boundary imprint. In section \ref{sec:sfgeneral}, we generalize our findings to show that smearing functions do not exist for a large class of nonrelativistic spacetimes. Finally, in section \ref{sec:modifyingbb} we interpret our results and their implications for bulk locality. 
We argue that only a hard momentum cutoff allows bulk reconstruction, at the cost of giving up locality in the transverse direction. \section{\label{sec:classical}The Classical Picture: Bulk reconstruction via light signals } We now set our notation and discuss the classical paths of geodesics within the spacetimes we study. Specifically, we consider planar metrics of the form \begin{equation} ds_{d+2}^{2}=-e^{2A(r)}dt^{2}+e^{2B(r)}d\vec{x}_{d}^{2}+e^{2C(r)}dr^{2}.\label{eq:metans} \end{equation} This ansatz is sufficiently general to include AdS, Lifshitz with general $z$ (with or without hyperscaling violation), $\mathrm{AdS}_{2}\times\mathbb{R}^{d}$ and spacetimes which interpolate among them. Note that one of the three functions $A,B$ and $C$ can always be eliminated by a suitable gauge choice. However, it is convenient to keep these functions arbitrary for now, so that we can more easily accommodate the various gauge choices that have been used for AdS and Lifshitz metrics in the literature. The metric (\ref{eq:metans}) can be trivially rewritten as \begin{equation} ds_{d+2}^{2}=e^{2B(r)}[-e^{2W(r)}dt^{2}+d\vec{x}_{d}^{2}]+e^{2C(r)}dr^{2}.\label{eq:metans with W,B,C} \end{equation} where we defined $W\equiv A-B$. For $W=0$, the $(d+1)$-dimensional metric at constant $r$ is Lorentz invariant. This encompasses the pure AdS case as well as Lorentz invariant domain wall flows. The $W\neq0$ case allows for `non-relativistic' backgrounds such as pure or asymptotic Lifshitz backgrounds as well as for planar black holes. In this case, we may interpret $e^{-W}$ as the gravitational redshift factor\footnote{Note that this assumes that there is an asymptotic reference region where $W=0$, so that $(d+1)$-dimensional Lorentz invariance is restored. This would occur, for example, in an AdS to Lifshitz flow.}. The global behavior of the metric is constrained by the null energy condition (subsequently NEC; for previous work see \cite{Hoyos:2010at,Liu:2012wf}). 
The two independent conditions are \begin{align} -R_{\, t}^{t}+R_{\, r}^{r} & =de^{W-C}\partial_{r}\left(-e^{-W-C}\partial_{r}B\right)\geq0,\label{eq:nullcondrr} \\ -R_{\, t}^{t}+R_{\, x_{1}}^{x_{1}} & =e^{-W-(d+1)B-C}\partial_{r}\left(e^{W+(d+1)B-C}\partial_{r}W\right)\geq0.\label{eq:nullcondxx} \end{align} Here $x_{1}$ is any one of the $\vec{x}$ transverse directions. If we choose a gauge where $A=C$, or equivalently $W=C-B$, these conditions simplify to \begin{align} \left((e^{-B})'e^{-2W}\right)' & \geq0,\label{eq:nullrrsimple}\\ \left(W'e^{dB}\right)' & \geq0,\label{eq:nullxxsimple} \end{align} where $'$ denotes derivatives with respect to the radial coordinate $\rho$ in the corresponding gauge. Since $e^{dB}\geq0$, we can use the second condition to deduce the following statements about $W$ (see Figure~\ref{fig:possibleNEC}): \begin{align} \text{If }W'|_{\rho_{-}} \leq 0 &\quad\Rightarrow\quad W'|_{\rho\leq\rho_{-}}\leq0;\nn\\ \text{If }W'|_{\rho_{+}}\geq 0 & \quad\Rightarrow\quad W'|_{\rho\geq\rho_{+}}\geq 0. \end{align} From (\ref{eq:nullrrsimple}) we can deduce similar equations for $e^{-B}$. If we combine the two constraints (\ref{eq:nullrrsimple}) and (\ref{eq:nullxxsimple}), we learn about the second derivatives of $W$ and $e^{-B}$ when their first derivatives have the same sign: \begin{align} \text{If }W'|_{\rho_{-}} \leq 0\text{ and }(e^{-B})'|_{\rho_{-}} \leq 0, &\quad \Rightarrow\quad W''|_{\rho\leq\rho_{-}} \geq 0\text{ and }(e^{-B})''|_{\rho\leq\rho_{-}} \geq 0;\nn\\ \text{If }W'|_{\rho_{+}} \geq 0\text{ and }(e^{-B})'|_{\rho_{+}} \geq 0, &\quad \Rightarrow\quad W''|_{\rho\geq\rho_{+}} \geq 0\text{ and }(e^{-B})''|_{\rho\geq\rho_{+}} \geq 0. \label{eq:concavity} \end{align} These conditions will constrain the bulk geometry, and in particular the behavior of the redshift factor $e^{-W}$. 
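As a concrete check of these conditions, pure Lifshitz spacetime in the $A=C$ gauge has $W=-\frac{z-1}{z}\log(z\rho/L)$ and $B=-\frac{1}{z}\log(z\rho/L)$, and both combinations in \eqref{eq:nullrrsimple} and \eqref{eq:nullxxsimple} work out to be proportional to $z-1\geq0$. A small finite-difference sketch in Python (our own helper names; the sample values of $z$ and $d$ are arbitrary):

```python
import math

Z, D, L = 2.0, 2.0, 1.0   # sample z, number of transverse dimensions, scale

def W(rho):
    # W = -((z-1)/z) log(z rho / L) for pure Lifshitz in the A = C gauge
    return -(Z - 1) / Z * math.log(Z * rho / L)

def B(rho):
    # B = -(1/z) log(z rho / L)
    return -math.log(Z * rho / L) / Z

def diff(f, x, h=1e-4):
    # Central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

def nec_rr(rho):
    # Left-hand side of ((e^{-B})' e^{-2W})' >= 0
    g = lambda x: diff(lambda y: math.exp(-B(y)), x) * math.exp(-2 * W(x))
    return diff(g, rho)

def nec_xx(rho):
    # Left-hand side of (W' e^{dB})' >= 0
    g = lambda x: diff(W, x) * math.exp(D * B(x))
    return diff(g, rho)

# Both quantities are non-negative for all rho, as required for z >= 1.
```

Analytically, $\bigl((e^{-B})'e^{-2W}\bigr)'=\frac{z-1}{L^2}(z\rho/L)^{-1/z}$ and $\bigl(W'e^{dB}\bigr)'=\frac{(z-1)(1+d/z)}{z\rho^2}(z\rho/L)^{-d/z}$, so the NEC is saturated exactly at $z=1$.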
\begin{figure}[t] \begin{minipage}{.45\textwidth} \centering \includegraphics[height=6cm]{WBpicture} \end{minipage}\begin{minipage}{.55\textwidth} \centering \includegraphics[height=5cm]{lifNECsketch} \end{minipage} \caption{\label{fig:possibleNEC}Two sketches of functions $W$ and $e^{-B}$ which obey the null energy conditions (\ref{eq:nullrrsimple}) and (\ref{eq:nullxxsimple}). The figure on the right approaches Lifshitz asymptotics at $\rho_{\text{bdy}}$.} \end{figure} As mentioned in the introduction, we may gain insight about the bulk spacetime by considering null geodesics. Such geodesics are easily obtained by noting that the metric (\ref{eq:metans}) admits Killing vectors \beq\label{eq:killings} \fft{\partial}{\partial t},\qquad{}\fft{\partial}{\partial x^{i}}. \eeq This allows us to define the conserved energy and momentum \begin{equation} E\equiv e^{2A}\dot{t},\qquad\vec{p}\equiv e^{2B}\dot{\vec{x}}, \end{equation} where a dot indicates a derivative with respect to the affine parameter $\lambda$. Geodesics then obey \begin{equation} -\kappa=\left(\fft{ds}{d\lambda}\right)^{2}=-e^{-2(W+B)}E^{2}+e^{-2B}\vec{p}\,^{2}+e^{2C}\dot{r}^{2}. \end{equation} If we define \begin{equation} V_{{\rm eff}}\equiv e^{2(W+B)}\kappa+e^{2W}\vec{p}^{2}, \label{eq:veff} \end{equation} with $\kappa=1$ for timelike and $\kappa=0$ for null geodesics, then we find \begin{equation} e^{2(W+B+C)}\dot{r}^{2}=E^{2}-V_{{\rm eff}}. \end{equation} This is of the form of an energy conservation equation, $E_{{\rm tot}}=E_{{\rm kin}}+V_{{\rm eff}}$, where \begin{equation} E_{\mathrm{kin}}=e^{2(W+B+C)}\dot{r}^{2}. \end{equation} \subsection{\label{sub:lifgeos}Lifshitz geodesics} We now study specifically Lifshitz spacetimes. Pure Lifshitz spacetime corresponds to taking \begin{equation} W=-(z-1)\log(r/L),\qquad B=-\log(r/L),\qquad C=-\log(r/L) \end{equation} in the metric ansatz \eqref{eq:metans with W,B,C}. Note that the `horizon' is at $r=\infty$, while the boundary is at $r=0$. 
The effective potential for geodesics is \begin{equation} V_{{\rm eff}}(r)=\left(\fft{L}r\right)^{2z}\kappa+\left(\fft{L}r\right)^{2(z-1)}\vec{p}\,^{2}. \label{eq:Veff lifshitz} \end{equation} The behavior of the second term depends on the value of $z$. For $z=1$, this term is a constant, and just shifts the overall potential. For $z>1$, the second term still grows as $r^{-2(z-1)}$, but this growth is slower than that of the $\kappa$ term. In addition, it vanishes at the horizon, $r\rightarrow\infty$. For null geodesics ($\kappa=0$), the effective potential is completely determined by this term. Radial null geodesics ($\vec{p}=0$) do not feel any effective potential. For $z>1$, non-radial geodesics, on the other hand, cannot reach the boundary. In Figure~\ref{fig:Lifgeos}, we have plotted several such light rays which all converge on one point in space; these rays delineate the causal past of that point. As we can see in the figure, only the null geodesic which stays at constant $x=0$ can reach the boundary at $r=0$; all others turn around at some minimum $r$. \begin{figure}[t] \centering \subfigure[Looking from a future vantage point.]{ \includegraphics[width=0.45\textwidth]{topr0onehalfnullcurves}} \subfigure[A ``side'' view.]{ \includegraphics[width=0.45\textwidth]{sider0onehalfnullcurves}} \caption{\label{fig:nullcurves}Plot of null curves through the point $t=0$, $\vec{x}=0$, $r=1/2$ in Lifshitz space with $z=3$.} \label{fig:Lifgeos} \end{figure} The result is that a full classical\footnote{Note that `classical' in this case refers to the geometric optics limit, as opposed to just the $N\rightarrow\infty$ limit.} reconstruction of the bulk from the boundary is not possible. An observer at the boundary can never receive any signals from the bulk which travel with a nonzero momentum in the transverse direction. Consequently, this observer will not be able to `resolve' transverse length scales in the bulk. 
Of course, this picture is somewhat naive and cannot be taken as a proof that bulk reconstruction is impossible. However, as we will show in the next section, the picture carries forward to the quantum case, even though tunneling through classically forbidden regions is possible. Two comments are in order at this point. First, notice that pure Lifshitz spacetime has a pathology at $r\rightarrow\infty$. An infalling extended object experiences infinitely strong tidal forces. To see this, consider two parallel radial geodesics with energy $E$ travelling in the background \eqref{eq:metans with W,B,C}. The geodesic deviation equation for the transverse separation $X^{i}$ reads \begin{equation} \frac{D^{2}X^{i}}{Dt^{2}}=X^{i}E^{2}e^{-2\left(W+B+C\right)}\left[-B^{\prime}\left(W^{\prime}+C^{\prime}\right)+B^{\prime\prime}-\frac{\kappa}{E^{2}}e^{2\left(W+B\right)}\left(B^{\prime}\left(B^{\prime}-C^{\prime}\right)+B^{\prime\prime}\right)\right]. \end{equation} For Lifshitz spacetime, we have \begin{equation} \frac{D^{2}X^{i}}{Dt^{2}}=X^{i}\frac{E^{2}}{L^2}\left[\left(1-z\right)\left(\frac{r}{L}\right)^{2z}-\frac{\kappa}{E^{2}}\right]. \end{equation} For $z\neq1$, the relative acceleration diverges near the horizon and the result is an infinitely strong tidal force. By now, there are several known ways to resolve this issue \cite{Harrison:2012vy,Bhattacharya:2012zu,Knodel:2013fua,Bao:2012yt}. For solutions which involve a running dilaton, a natural resolution is to avoid the singularity by deforming the geometry such that it flows to $\mathrm{AdS}_{2}\times\mathbb{R}^{d}$ in the deep infrared. More generally, one can imagine several possible IR deformations that change the behavior of the metric functions $W$, $B$ and $C$ at large $r$. These deformations have to be consistent with the NECs in (\ref{eq:nullrrsimple}) and (\ref{eq:nullxxsimple}) above. 
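The algebra behind the Lifshitz geodesic-deviation formula above can be spot-checked numerically. The sketch below (Python, with our own helper names; the sample values of $z$, $E$ and $\kappa$ are arbitrary) compares the general bracket expression to the pure-Lifshitz closed form using finite differences:

```python
import math

Z, L, E, KAPPA = 3.0, 1.0, 2.0, 1.0   # arbitrary sample values

def W(r): return -(Z - 1) * math.log(r / L)
def B(r): return -math.log(r / L)
def C(r): return -math.log(r / L)

def diff(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def diff2(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

def tidal_general(r):
    # General geodesic-deviation coefficient (acceleration per unit X^i)
    pre = E ** 2 * math.exp(-2 * (W(r) + B(r) + C(r)))
    brak = (-diff(B, r) * (diff(W, r) + diff(C, r)) + diff2(B, r)
            - KAPPA / E ** 2 * math.exp(2 * (W(r) + B(r)))
              * (diff(B, r) * (diff(B, r) - diff(C, r)) + diff2(B, r)))
    return pre * brak

def tidal_lifshitz(r):
    # Closed form quoted in the text for pure Lifshitz
    return E ** 2 / L ** 2 * ((1 - Z) * (r / L) ** (2 * Z) - KAPPA / E ** 2)
```

The two expressions agree at every sample radius, and the $(1-z)(r/L)^{2z}$ term makes the divergence of the relative acceleration at $r\to\infty$ manifest for $z\neq1$.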
However, it is clear that while these procedures might cure the problems encountered near the horizon, they do not change the fact that geodesics sent towards the boundary still cannot overcome the Lifshitz barrier \eqref{eq:Veff lifshitz}. On the other hand, one could imagine that deforming the geometry in the UV might help null geodesics to reach the boundary. Deformations which replace the UV with an AdS region have the benefit of clarifying the holographic prescription\footnote{See, however, \cite{Baggio:2011cp,Papadimitriou:2010as,Papadimitriou:2011qb,Ross:2009ar,Ross:2011gu,Mann:2011hg} for different approaches to holography in Lifshitz spacetimes.}. If we imagine a geometry that is approximately Lifshitz at some $\rho_{-}$, then $W'(\rho_-)<0$. The NECs thus dictate that $e^{W}$ has to either continue increasing or asymptote to a constant as $\rho\rightarrow0$. The latter case would correspond to an AdS to Lifshitz flow. For fixed transverse momentum $p$, geodesics with large enough energy can now escape the potential and reach the boundary. However, at fixed $E$, the height of the potential barrier is controlled by $p^{2}$, so geodesics with large transverse momentum remain trapped inside the bulk. We conclude that for any spacetime that is approximately Lifshitz in some region, part of the information about the bulk will always be hidden from a classical boundary observer. The part that is missing describes physics at large $p$, or equivalently small transverse length scales. Again, we will see in the subsequent sections that this statement has an exact equivalent in the quantum case. \section{\label{sec:quantum}The Quantum Picture: Bulk reconstruction for scalar fields } While the geometric optics picture of the previous section already captures some important physical properties of nonrelativistic gauge/gravity dualities, a full analysis of the problem of bulk reconstruction from the boundary clearly requires a treatment of quantum operators. 
To this end, we consider solutions to the scalar field equations and investigate what kind of imprint they can leave at the boundary. Specifically, we examine the amplitude of scalar modes near the UV boundary in terms of the size of fluctuations deep in the IR. We begin by studying the Klein-Gordon equation for a scalar in the fixed background (\ref{eq:metans with W,B,C}) \begin{equation} [e^{-W-(d+1)B-C}\partial_{M}e^{W+(d+1)B+C}g^{MN}\partial_{N}-m^{2}]\phi=0.\label{eq:KGe} \end{equation} Because of the Killing vectors (\ref{eq:killings}) present in our metric ansatz, the wave equation is separable and we can write \begin{equation} \phi(t,\vec{x},r)=e^{i(\vec{p}\cdot\vec{x}-Et)}f(r). \end{equation} Then the Klein-Gordon equation (\ref{eq:KGe}) becomes \begin{equation} \left[e^{2(W+B-C)}\left(\partial_{r}^{2}+\fft{d(W+(d+1)B-C)}{dr}\partial_{r}\right)+E^{2} -e^{2W}\vec{p}\,^{2}-e^{2(W+B)}m^{2}\right]f=0. \end{equation} Let us choose a gauge where $A=C$, or $W=C-B$. Equivalently, starting in any given gauge we can introduce a new radial coordinate $\rho$ such that \begin{equation} e^{C-B-W}dr=d\rho.\label{eq:dxdr} \end{equation} Note that $\rho$ is a tortoise coordinate for our metric ansatz. This gives \begin{equation} [\partial_{\rho}^{2}+dB'\partial_{\rho}+E^{2}-e^{2W}\vec{p}\,^{2}-e^{2\left(W+B\right)}m^{2}]f=0, \end{equation} where primes denote derivatives with respect to $\rho$. If we now let \begin{equation} f=e^{-dB/2}\psi,\label{eq:spsidef} \end{equation} we end up with a Schr\"odinger-type equation \begin{equation} -\psi''+U\psi=E^{2}\psi,\label{eq:schr} \end{equation} where \begin{equation} U=V_{m}+V_{p}+V_{{\rm cos}}, \end{equation} with \begin{equation} V_{m}= e^{2(W+B)}m^{2},\qquad V_{p}= e^{2W}\vec{p}\,^{2},\qquad V_{\mathrm{cos}}=(d/2)B''+(d/2)^{2}B'^{2}. \end{equation} Here $V_{m}$ and $V_{p}$ together form the effective potential \eqref{eq:veff} for geodesics, with $\kappa$ replaced by $m^{2}$. 
The third term, $V_{\mathrm{cos}}$, is an additional `cosmological' potential that is absent in the classical picture. \subsection{\label{sub:scalarslifshitz}Scalars in Lifshitz spacetime} For Lifshitz backgrounds, the Schr\"odinger potential can be written as \begin{equation} U=\left(\fft{L}{z\rho}\right)^{2}\left(m^{2}+\fft{d(d+2z)}{4L^{2}}\right) +\left(\fft{L}{z\rho}\right)^{2(1-1/z)}\vec{p}\,^{2}, \end{equation} where we introduced a new radial coordinate according to \eqref{eq:dxdr}. Explicitly, we have \begin{equation} \rho=\fft{L}z\left(\fft{r}L\right)^{z}.\label{eq:rho from r lifshitz} \end{equation} Note that both $V_{m}$ and the entirety of $V_{{\rm cos}}$ contribute to the $1/\rho^{2}$ blowup as $\rho\to0$ (corresponding to the boundary). The fact that these two pieces scale with the same power of $\rho$ is a feature of Lifshitz spacetime; it will not continue to be true for more complicated spacetimes such as the AdS-Lifshitz flows studied in section \ref{sub:adslifads}. The qualitative behavior of solutions to the Schr\"odinger equation is roughly as follows: The wavefunction starts out oscillating deep in the bulk ($\rho\rightarrow\infty$) and crosses the potential barrier at the classical turning point $\rho_{0}$. For $\rho<\rho_{0}$, the mode must tunnel under the barrier, and thus the wavefunction will in general be a superposition of exponentially growing and suppressed modes. We will only be interested in the mass ranges where the growing solution is non-normalizable. Thus, the normalizable modes relevant for canonical quantization are exponentially suppressed in the region of this barrier at small $\rho$. For $z=1$, $V_{p}$ is a constant, but for $z>1$ it blows up near the boundary, though more slowly than the other terms in the potential. Specifically, $V_p/V_m \propto e^{-2B}$. 
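The closed-form Lifshitz potential quoted above can be checked against the general expressions for $V_m$, $V_p$ and $V_{\rm cos}$ directly; a short finite-difference sketch in Python (our own conventions, arbitrary sample values):

```python
import math

Z, D, L, M, P2 = 2.0, 2.0, 1.0, 1.5, 0.7   # arbitrary sample values

def W(rho): return -(Z - 1) / Z * math.log(Z * rho / L)
def B(rho): return -math.log(Z * rho / L) / Z

def diff(f, x, h=1e-5): return (f(x + h) - f(x - h)) / (2 * h)
def diff2(f, x, h=1e-4): return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

def U_general(rho):
    # V_m + V_p + V_cos built from the general expressions
    vm = math.exp(2 * (W(rho) + B(rho))) * M ** 2
    vp = math.exp(2 * W(rho)) * P2
    vcos = D / 2 * diff2(B, rho) + (D / 2) ** 2 * diff(B, rho) ** 2
    return vm + vp + vcos

def U_lifshitz(rho):
    # Closed-form Lifshitz potential quoted in the text
    return ((L / (Z * rho)) ** 2 * (M ** 2 + D * (D + 2 * Z) / (4 * L ** 2))
            + (L / (Z * rho)) ** (2 * (1 - 1 / Z)) * P2)
```

In particular, $V_{\rm cos}=\frac{d(d+2z)}{4z^2\rho^2}$ for pure Lifshitz, which is exactly the $\frac{d(d+2z)}{4L^2}$ shift of $m^2$ in the closed form.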
For spacetimes with Lifshitz asymptotics, \beq \partial_\rho \left(e^{-B}\right)\biggr|_{\rho_{\text{bdy}}}=\left.\partial_\rho\left(\frac{z\rho}{L}\right)^{1/z}\right|_{\rho_{\text{bdy}}}>0. \eeq Consequently, $\partial_\rho e^{-B}>0$ throughout the spacetime. Near the boundary, the mass term $V_m$ will always dominate, but $V_p$ will increase in relative importance as we head in towards the IR region. Because of the different behavior of the mass/cosmological and momentum-dependent terms, it is crucial to distinguish between two qualitatively different `types' of tunneling. If, at a given energy, the momentum $\vec{p}$ is sufficiently small, the wavefunction crosses the barrier at a point where $V_{p}$ is subdominant compared to the other terms in the potential. Consequently, the $1/\rho^{2}$ part of $U$ will control the suppression near the boundary. We shall refer to those modes as \textit{free modes}. This name is justified, because even though they are tunneling, classically they correspond to null geodesics that can reach the boundary. If $\vec{p}$ is large, the wavefunction crosses the barrier already at a point where $U\approx V_{p}$, and the wavefunction will receive an additional suppression by an exponential in $\vec{p}$, due to tunneling through this thicker barrier. We shall refer to this class of solutions as \textit{trapped modes}. They play a crucial role in our analysis, as they are the quantum equivalent to nonradial null-geodesics that cannot reach the boundary. We may study the behavior of these free and trapped modes by solving the Schr\"odinger equation \eqref{eq:schr} in a Lifshitz background. 
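A back-of-the-envelope WKB estimate already makes the trapped-mode suppression concrete. For $z=2$, keeping only the momentum barrier in the dimensionless variables introduced below (where the barrier is $\alpha/\zeta$ with $\zeta=E\rho$ and $\alpha\propto\vec p^{\,2}/E$), the tunneling exponent is $\int_0^\alpha\sqrt{\alpha/\zeta-1}\,d\zeta=\pi\alpha/2$, so the boundary imprint falls off like $e^{-\pi\alpha/2}$, i.e. exponentially in $\vec p^{\,2}/E$. A small Python sketch (our own code) verifies the integral numerically:

```python
import math

def wkb_exponent(alpha, n=200000):
    # Midpoint rule for I = int_0^alpha sqrt(alpha/zeta - 1) dzeta,
    # after the substitution zeta = u^2 (which removes the integrable
    # 1/sqrt(zeta) singularity):  I = int_0^sqrt(alpha) 2*sqrt(alpha - u^2) du.
    umax = math.sqrt(alpha)
    h = umax / n
    return sum(2.0 * math.sqrt(alpha - (i + 0.5) ** 2 * h ** 2) * h
               for i in range(n))

# The integral equals pi*alpha/2 (twice a quarter-circle area of radius
# sqrt(alpha)), so the WKB suppression e^{-I} is exponential in alpha.
```

Doubling the transverse momentum quadruples $\alpha$ and hence the tunneling exponent, which is the quantum counterpart of the classical trapping of non-radial geodesics.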
It is convenient to scale out the energy $E$ by introducing the dimensionless coordinate \begin{equation} \zeta=E\rho.\label{eq:zetadef} \end{equation} Then \eqref{eq:schr} becomes $-\psi''(\zeta)+(U-1)\psi(\zeta)=0$ where \begin{equation} U=\fft{\nu_{z}^{2}-1/4}{\zeta^{2}}+\fft\alpha{\zeta^{k}},\label{eq:dimpot} \end{equation} with \begin{equation} \nu_{z}=\fft1z\sqrt{(mL)^{2}+(d+z)^{2}/4},\qquad\alpha=\left(\fft{EL}z\right)^{k}\biggl(\fft{\vec{p}}E\biggr)^{2},\qquad k=2(1-1/z).\label{eq:nualphadef} \end{equation} Since the null energy condition demands $z\ge1$, we generally focus on the case $0<k<2$. (The $k=0$, or pure AdS, case is familiar and can be treated by standard methods.) In this case, the boundary ($\zeta\to0$) behavior of $U$ is $\sim1/\zeta^{2}$, while the horizon ($\zeta\to\infty$) behavior is $\sim1/\zeta^{k}$. Near the boundary, we have \begin{equation} -\psi''+\fft{\nu^{2}-1/4}{\zeta^{2}}\psi\approx0\qquad\Rightarrow\qquad\psi\sim A\zeta^{1/2-\nu}+B\zeta^{1/2+\nu}.\label{eq:ABdef} \end{equation} Using \eqref{eq:rho from r lifshitz}, \eqref{eq:zetadef} and (\ref{eq:spsidef}), we can express the behavior of the original Klein-Gordon field in terms of the original coordinate $r$ as \begin{equation} \phi\sim\hat{A}\left(\fft{r}L\right)^{\Delta_{-}}+\hat{B}\left(\fft{r}L\right)^{\Delta_{+}}, \end{equation} where \begin{equation} \hat{A}=A\left(\fft{EL}z\right)^{1/2-\nu},\quad\hat{B}=B\left(\fft{EL}z\right)^{1/2+\nu},\quad\Delta_{\pm}=\fft{d+z}2\pm\sqrt{(mL)^{2}+\left(\fft{d+z}2\right)^{2}}. \end{equation} We will consider only the mass range where the first solution (related to $A$) is non-normalizable with respect to the Klein-Gordon norm, while the second solution (related to $B$) is normalizable. Via the AdS/CFT correspondence, non-normalizable modes represent classical sources of an operator $O$ at the boundary, which redefine the Hamiltonian of the field theory \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj}. 
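As a consistency check on these exponents (a short Python sketch; the sample values are arbitrary): restoring the factor $e^{-dB/2}\sim r^{d/2}$ from \eqref{eq:spsidef} and using $\zeta\propto r^z$, the Schr\"odinger behaviors $\psi\sim\zeta^{1/2\pm\nu_z}$ reproduce $\phi\sim r^{\Delta_\pm}$ precisely because $\Delta_\pm=\frac{d+z}{2}\pm z\nu_z$, which holds identically for the definitions above:

```python
import math

def nu_z(m, L, d, z):
    # nu_z = (1/z) * sqrt((mL)^2 + (d+z)^2/4), as in the text
    return math.sqrt((m * L) ** 2 + (d + z) ** 2 / 4) / z

def delta_pm(m, L, d, z):
    # Delta_pm = (d+z)/2 +- sqrt((mL)^2 + ((d+z)/2)^2)
    root = math.sqrt((m * L) ** 2 + ((d + z) / 2) ** 2)
    return (d + z) / 2 + root, (d + z) / 2 - root

m, L, d, z = 1.7, 1.0, 2, 2.0   # arbitrary sample point
dp, dm = delta_pm(m, L, d, z)
# dp == (d+z)/2 + z*nu_z(m, L, d, z)  and  dm == (d+z)/2 - z*nu_z(m, L, d, z)
```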
Normalizable fluctuations are placed on top of these classical sources and they correspond to different states in the field theory, or equivalently expectation values of $O$ \cite{Balasubramanian:1998de,Balasubramanian:1998sn}.\footnote{Lifshitz spacetimes present some subtleties when considering alternate quantizations. The range of masses for which both boundary conditions are normalizable is larger than in the AdS case, but modes which would not be normalizable in AdS (but apparently are in Lifshitz) suffer from a novel instability. Particularly in these cases it appears more difficult to redefine the Hamiltonian in the usual way \cite{Keeler:2012mb,Andrade:2012xy,Andrade:2013wsa}.} We will only be interested in the situation where the boundary Hamiltonian is fixed, so we will consequently treat non-normalizable solutions as non-fluctuating. The fluctuating modes to be quantized are thus the normalizable modes given by $B$. As a result, we will end up setting $A=0$ and investigating the consequences of doing so.\footnote{Note that this is in contrast with the computation of AdS/CFT correlators, where $B$ is interpreted as the response to turning on a source $A$.} Turning now to the horizon, we see that both terms in \eqref{eq:dimpot} fall off as $\zeta\to\infty$. Hence the horizon behavior is given by\footnote{For simplicity, we have assumed $1<k<2$. For $0<k\le1$, the horizon falloff $\sim1/\zeta^k$ is insufficiently fast, and the potential becomes long-ranged. This introduces a correction to the horizon behavior of the wavefunction. 
However, this is unimportant for our discussion, as we have no need for the asymptotic phase of $\psi$ in the classically allowed region.} \begin{equation} -\psi''-\psi\approx0\qquad\Rightarrow\qquad\psi\sim ae^{i\zeta}+be^{-i\zeta}.\label{eq:sabdef} \end{equation} In terms of the original $r$ coordinate, this becomes \begin{equation} \psi\sim a\exp\left(i\fft{EL}z\left(\fft{r}L\right)^{z}\right)+b\exp\left(-i\fft{EL}z\left(\fft{r}L\right)^{z}\right), \end{equation} so that \begin{equation} \phi\sim a\left(\fft{r}L\right)^{d/2}\exp\left(i\fft{EL}z\left(\fft{r}L\right)^{z}\right)+b\left(\fft{r}L\right)^{d/2}\exp\left(-i\fft{EL}z\left(\fft{r}L\right)^{z}\right). \end{equation} The horizon modes correspond to infalling and outgoing waves, given by $a$ and $b$, respectively. Since the wave equation is second order and linear, the boundary data $(A,B)$ must be linearly related to the horizon data $(a,b)$. AdS/CFT correlators are generally computed by taking infalling conditions at the horizon, corresponding to $b=0$, while bulk normalizable modes are given instead by taking $A=0$ at the boundary. Of course, the precise relation between boundary and horizon data can only be obtained by solving the wave equation. While this cannot be performed in general, the exact solution is known for $z=2$, where the potential $U$ is analytic. We now turn to this case, as it provides a clean example of the behavior of trapped modes and in particular the exponential suppression that they receive when tunneling under the barrier in the potential. \subsection{A Specific Example: $z=2$ Lifshitz} \label{subsec:z=2Lif} For a pure Lifshitz background with $z=2$, or $k=1$, the potential (\ref{eq:dimpot}) is analytic in $\zeta$ and the Schr\"odinger equation takes the form \begin{equation} -\psi''+\left(\fft{\nu^{2}-1/4}{\zeta^{2}}+\fft\alpha\zeta-1\right)\psi=0, \end{equation} where $\alpha=\vec p\,^2L/2E$. 
As this is essentially Whittaker's equation, the solution can be written in terms of the Whittaker functions $M_{-i\alpha/2,\nu}(-2i\zeta)$ and $W_{-i\alpha/2,\nu}(-2i\zeta)$, or equivalently in terms of confluent hypergeometric functions \cite{Kachru:2008yh}. Expanding for $\zeta\to0$ and demanding that $\psi$ satisfies the boundary asymptotics \eqref{eq:ABdef} for normalizable and nonnormalizable modes gives \begin{align} \psi =& \left[ \left(\fft{i}2\right)^{\fft12+\nu}B-\left(\fft{i}2\right)^{\fft12-\nu}\fft{\Gamma(-2\nu) \Gamma(\fft12+\nu+\fft{i\alpha}2)}{\Gamma(2\nu)\Gamma(\fft12-\nu+\fft{i\alpha}2)}A\right] M_{-i\alpha/2,\nu}(-2i\zeta)\nn\\ &+\left[ \left(\fft{i}2\right)^{\fft12-\nu}\fft{\Gamma(\fft12+\nu+\fft{i\alpha}2)}{\Gamma(2\nu)}A \right]W_{-i\alpha/2,\nu}(-2i\zeta). \label{eq:MWpsi} \end{align} For the horizon, we expand for large $\zeta$ and compare with \eqref{eq:sabdef} to obtain \begin{eqnarray} \psi&=&\left[e^{-\pi\alpha/4}\fft{\Gamma(\fft12+\nu+\fft{i\alpha}2)}{\Gamma(1+2\nu)}2^{-i\alpha/2} b\right]M_{-i\alpha/2,\nu}(-2i\zeta)\nn\\ &&+\left[e^{\pi\alpha/4}2^{i\alpha/2}a+e^{i\pi(\fft12-\nu)} e^{\pi\alpha/4}\fft{\Gamma(\fft12+\nu+\fft{i\alpha}2)}{\Gamma(\fft12+\nu-\fft{i\alpha}2)} 2^{-i\alpha/2}b\right]W_{-i\alpha/2,\nu}(-2i\zeta). 
\label{eq:MWhoriz} \end{eqnarray} Comparing \eqref{eq:MWpsi} with \eqref{eq:MWhoriz} gives the relation between horizon and boundary coefficients \begin{eqnarray} A & = & (2i)^{\fft12-\nu}\fft{\Gamma(2\nu)}{\Gamma(\fft12+\nu-\fft{i\alpha}2)}e^{\pi\alpha/4}\left(2^{-i\alpha/2}b-e^{i\pi(\fft12+\nu)}\fft{\Gamma(\fft12+\nu-\fft{i\alpha}2)}{\Gamma(\fft12+\nu+\fft{i\alpha}2)}2^{i\alpha/2}a\right),\nn \\ B & = & (2i)^{\fft12+\nu}\fft{\Gamma(-2\nu)}{\Gamma(\fft12-\nu-\fft{i\alpha}2)}e^{\pi\alpha/4}\left(2^{-i\alpha/2}b-e^{i\pi(\fft12-\nu)}\fft{\Gamma(\fft12-\nu-\fft{i\alpha}2)}{\Gamma(\fft12-\nu+\fft{i\alpha}2)}2^{i\alpha/2}a\right).\label{eq:ABfromab} \end{eqnarray} Although we are primarily interested in normalizable modes in the Lifshitz bulk, we first note that the usual computation of the retarded Green's function proceeds by taking infalling boundary conditions at the horizon, namely $b=0$. Then \eqref{eq:MWhoriz} immediately gives \begin{equation} \psi_{\rm infalling}\sim W_{-i\alpha/2,\nu}(-2i\zeta). \end{equation} We now demand that the coefficient of $M_{-i\alpha/2,\nu}(-2i\zeta)$ in \eqref{eq:MWpsi} vanishes, from which we obtain \begin{equation} G_R(E,\vec p\,)\sim\fft{\hat B}{\hat A}=\left(\fft{EL}2\right)^{2\nu}\fft{B}A =\left(\fft{EL}i\right)^{2\nu}\fft{\Gamma(-2\nu) \Gamma(\fft12+\nu+\fft{i\alpha}2)}{\Gamma(2\nu)\Gamma(\fft12-\nu+\fft{i\alpha}2)}, \end{equation} in agreement with \cite{Kachru:2008yh} when continued to Euclidean space. Note that in the large momentum limit, $p\to\infty$ (or more precisely for $\alpha\gg\nu$), the Whittaker function $W_{-i\alpha/2,\nu}(-2i\zeta)$ is only large near the boundary, and decays exponentially into the bulk. This matches with the heuristic picture of AdS/CFT, where the CFT `lives' on the boundary. In the relativistic case, corresponding to an AdS geometry, the boundary data has a power law falloff as it penetrates into the bulk. However, for this Lifshitz geometry, the falloff is exponential. 
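As a quick numerical cross-check (a throwaway script of ours, assuming the \texttt{mpmath} library; it is not part of the derivation), one can verify that setting $b=0$ in \eqref{eq:ABfromab} indeed reproduces the Gamma-function ratio appearing in $G_R$ above:

```python
# Cross-check of eq. (ABfromab): with infalling data (b=0), the ratio B/A
# should equal (2/i)^{2nu} Gamma(-2nu)Gamma(1/2+nu+i a/2)
#   / [Gamma(2nu)Gamma(1/2-nu+i a/2)], matching the G_R formula in the text.
from mpmath import mp, gamma, exp, pi, power, mpc

mp.dps = 30
nu = mp.mpf("0.3")     # sample value (our choice)
alpha = mp.mpf("1.7")  # sample value of alpha = p^2 L / 2E (our choice)
a, b = 1, 0            # infalling horizon data

def boundary_coeff(n):
    # A corresponds to n=+nu, B to n=-nu in eq. (ABfromab)
    return (power(2j, mp.mpf("0.5") - n) * gamma(2 * n)
            / gamma(0.5 + n - 1j * alpha / 2) * exp(pi * alpha / 4)
            * (power(2, -1j * alpha / 2) * b
               - exp(1j * pi * (0.5 + n)) * gamma(0.5 + n - 1j * alpha / 2)
               / gamma(0.5 + n + 1j * alpha / 2)
               * power(2, 1j * alpha / 2) * a))

ratio = boundary_coeff(-nu) / boundary_coeff(nu)  # B/A
target = (power(mpc(2) / 1j, 2 * nu) * gamma(-2 * nu)
          * gamma(0.5 + nu + 1j * alpha / 2)
          / (gamma(2 * nu) * gamma(0.5 - nu + 1j * alpha / 2)))
err = abs(ratio - target)
```

With principal branches used consistently throughout, the two expressions agree to working precision.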
Of course, for the bulk reconstruction that we are interested in, we actually want to consider the space of normalizable modes, as they are the ones that span the Hilbert space in the bulk. From the Hamiltonian picture, the natural norm is the Klein-Gordon norm, which is in fact compatible with the norm for the Schr\"odinger equation \eqref{eq:schr}. Normalizable modes correspond to taking $A=0$, so that \begin{equation} \psi_{\rm normalizable}\sim M_{-i\alpha/2,\nu}(-2i\zeta). \end{equation} Comparing \eqref{eq:MWpsi} with \eqref{eq:MWhoriz} then gives the relation between bulk and boundary coefficients for normalizable modes \begin{equation} \fft{B}b=2^{-i\alpha/2}\left(\frac{2}{i}\right)^{\frac{1}{2}+\nu}\frac{\Gamma(\fft12+\nu+\fft{i\alpha}2)}{\Gamma\left(1+2\nu\right)}e^{-\pi\alpha/4}. \label{eq:Bfrombexact} \end{equation} Note that $M_{-i\alpha/2,\nu}(-2i\zeta)$ is essentially a standing wave solution in the classically allowed region $\zeta>\zeta_0$, where $\zeta_0$ is the classical turning point. Since this interval is semi-infinite, the wavefunction must be normalized by fixing the amplitude $b$ of these oscillations. Hence the ratio $B/b$ is a direct measure of the amplitude of properly normalized wavefunctions at the boundary. Recall our previous distinction between the two different types of tunneling solutions: `free' vs.\ `trapped' modes. Modes with small momenta $p$ at fixed $E$ ($\alpha\ll\nu$) are `free modes'. For these modes, we have, up to an overall phase \begin{equation} \frac{|B|}{|b|}\approx\frac{2^{\nu+\frac{1}{2}}\Gamma\left(\frac{1}{2}+\nu\right)} {\Gamma\left(1+2\nu\right)}. \end{equation} The tunneling process produces the typical scaling behavior $\sim\rho^{\Delta_{+}}$ at the boundary, but there is no exponential suppression. 
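Both this free-mode plateau and the exponentially suppressed trapped-mode behavior discussed next can be read off numerically from \eqref{eq:Bfrombexact}; the following sketch (ours, using \texttt{mpmath}) evaluates $|B/b|$ directly:

```python
# |B/b| from eq. (Bfrombexact); only the moduli survive:
# |2^{-i a/2}| = 1 and |(2/i)^{1/2+nu}| = 2^{1/2+nu}.
from mpmath import mp, gamma, exp, pi, fabs

mp.dps = 25
nu = mp.mpf(1)  # the value used in Fig.~\ref{fig:nu=1z=2}

def Bb(alpha):
    return (2**(mp.mpf("0.5") + nu) * fabs(gamma(0.5 + nu + 1j * alpha / 2))
            / gamma(1 + 2 * nu) * exp(-pi * alpha / 4))

# free-mode plateau at small alpha:
plateau = 2**(nu + mp.mpf("0.5")) * gamma(0.5 + nu) / gamma(1 + 2 * nu)
err_free = fabs(Bb(mp.mpf("1e-8")) - plateau)

# trapped-mode suppression ~ alpha^nu e^{-pi alpha/2} at large alpha:
# between alpha=60 and alpha=120 the ratio should be ~ 2^nu e^{-30 pi}
r = Bb(mp.mpf(120)) / Bb(mp.mpf(60)) * exp(30 * pi) / 2**nu
```

The small-$\alpha$ value reproduces the plateau above, while the large-$\alpha$ ratio confirms the $\alpha^{\nu}e^{-\pi\alpha/2}$ scaling up to small Stirling corrections.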
For large momenta ($\alpha\gg\nu$) the modes are `trapped', and we find instead \begin{equation} \frac{|B|}{|b|}\approx\frac{\sqrt{4\pi}e^{-\left(\nu+\frac{1}{2}\right)}} {\Gamma\left(1+2\nu\right)}\alpha^{\nu}e^{-\pi\alpha/2}. \label{eq:B/b large alpha} \end{equation} These modes have to tunnel not only through the $1/\rho^{2}$ potential near the boundary, but also through the wider momentum barrier $V_{p}\sim p^{2}/\rho$ at larger $\rho$. This causes the solution to be exponentially suppressed when it reaches the boundary. We conclude that the $z=2$ Lifshitz metric allows for `trapped modes', which have arbitrarily small boundary imprint for large $p$. Clearly, we could have obtained the exponential suppression factor $e^{-\pi\alpha/2}$ in \eqref{eq:B/b large alpha} by simply setting $V_{m}=V_{\mathrm{cos}}=0$ in the Schr\"odinger potential. More generally, since the size of $V_{p}$ is controlled by $p^{2}$, in any interval $[\rho_{1},\rho_{2}]$ away from the boundary, i.e.\ in any region where the potential $U$ is bounded, at large enough $p$ the difference in amplitudes between the points $\rho_{1}$ and $\rho_{2}$ will always be governed by an exponential relation like \eqref{eq:B/b large alpha}. For the purpose of determining whether or not trapped modes exist in a given spacetime, it will therefore be enough to study the equivalent tunneling problem in the potential $U\equiv V_{p}$. We will come back to this issue later. \subsection{WKB Approximation} In order to study the existence of trapped modes in spacetimes beyond exact $z=2$ Lifshitz, it will be useful to have a formalism that provides a qualitative description of the behavior of tunneling modes even for cases where an analytic solution might not exist. This will allow us to study Lifshitz with $z\neq2$, as well as more general backgrounds \eqref{eq:metans with W,B,C} with nontrivial $W$, $B$ and $C$. The WKB method provides us with just such a formalism. 
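Indeed, keeping only $V_p$ (in the dimensionless variables above, $U=\alpha/\zeta$ with $E^{2}\to1$ and turning point $\zeta_{0}=\alpha$), the tunneling integral evaluates to $\pi\alpha/2$; a short numerical check of ours, using \texttt{mpmath}:

```python
# Barrier integral with V_m = V_cos = 0:
#   S = Int_0^alpha sqrt(alpha/zeta - 1) dzeta
#     = alpha * Int_0^1 sqrt((1-u)/u) du   (substituting zeta = alpha*u),
# which should equal pi*alpha/2, i.e. the suppression factor e^{-pi*alpha/2}.
from mpmath import mp, quad, sqrt, pi, fabs

mp.dps = 20
I = quad(lambda u: sqrt((1 - u) / u), [0, 1])
err = fabs(I - pi / 2)
```

The endpoint singularities are integrable and pose no problem for \texttt{mpmath}'s default tanh-sinh quadrature.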
We make the standard ansatz \begin{equation} \psi\sim\frac{1}{\sqrt{P(\rho)}}e^{\int d\rho^{\prime}P(\rho^{\prime})}. \end{equation} For slowly-varying potentials, we can plug this back into \eqref{eq:schr} and solve perturbatively for $P$. The details of this calculation can be found in appendix~\ref{sec:WKB}. To lowest order, $P^{2}\approx U-E^{2}$ and the solution interpolates between an oscillating region in the bulk and a tunneling region near the boundary. More explicitly, we have \begin{equation} \psi\left(\rho\right)=\begin{cases} \left(U-E^{2}\right)^{-\frac{1}{4}}\left[Ce^{S(\rho)}+De^{-S(\rho)}\right], & \rho<\rho_{0};\\ \left(E^{2}-U\right)^{-\frac{1}{4}}\left[ae^{i\Phi(\rho)}+be^{-i\Phi(\rho)}\right], &\rho>\rho_{0}, \end{cases}\label{eq:WKB ansatz} \end{equation} where $\rho_{0}$ is the classical turning point and we have defined the action $S\left(\rho\right)=\int_{\rho}^{\rho_{0}}d\rho^{\prime}\sqrt{U-E^{2}}$ and the phase $\Phi\left(\rho\right)=\int_{\rho_{0}}^{\rho}d\rho^{\prime}\sqrt{E^{2}-U}$. For potentials that behave as $U\sim 1/\rho^{2}$ near the boundary (which includes both asymptotically AdS and Lifshitz spacetimes), one has to include an additional correction term $U\rightarrow U+1/(2\rho)^{2}$ (see appendix~\ref{sec:WKB} for more details). Using the WKB matching procedure between the two asymptotic regions, we find \begin{eqnarray} C & = & \left(e^{-i\frac{\pi}{4}}a+e^{i\frac{\pi}{4}}b\right),\nonumber \\ D & = & \frac{i}{2}\left(e^{-i\frac{\pi}{4}}a-e^{i\frac{\pi}{4}}b\right). \end{eqnarray} The exponential growth/decay of the solution in the classically forbidden region is manifest in the dependence on $S$ in \eqref{eq:WKB ansatz}, which roughly corresponds to the area of the tunneling barrier. The wider/higher the barrier, the larger the corresponding factor $e^{S}$ is. We are only interested in the normalizable, or decaying, solution near the boundary, so we will have to set $C=0$. 
Up to a finite error, the WKB approximation then accurately captures the boundary behavior of this solution, and in particular the exponential suppression between bulk and boundary amplitudes.\footnote{Notice, however, that calculating the ratio $B/A$, which is needed to compute the standard field theory Green's function, would not be possible. This is because for a general solution the normalizable part $\sim e^{-S}$ can `hide' under the non-normalizable part $\sim e^{S}$, which grows much faster as $\rho\rightarrow0$.} \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{lifz=2n} \caption{\label{fig:nu=1z=2} Plot of the WKB (dashed) and exact (solid) boundary normalization factor $|B|/|b|$ as a function of $\alpha$. Here we have taken $z=2$ and $\nu=1$. The large $\alpha$ behavior is exponentially suppressed, $|B|/|b|\sim \alpha^\nu e^{-\pi\alpha/2}$.} \end{figure} We can compare this WKB approximation with the exact solution for $z=2$ from section \ref{subsec:z=2Lif}. Figure~\ref{fig:nu=1z=2} shows a plot of the WKB solution for $z=2$ Lifshitz, compared to the exact solution. As we can see, the WKB approximation accurately captures the exponential momentum-suppression at large $\alpha$. (See also appendix~\ref{sec:WKB} for further `benchmark tests'.) In the next section, we will use the WKB formalism to investigate for which spacetimes smearing functions exist. \section{\label{sec:sflifshitz}Smearing Functions in Lifshitz spacetimes } In this section, we introduce smearing functions as a way to reconstruct bulk physics from boundary dynamics. Using the WKB formalism developed in appendix~\ref{sec:WKB}, we will show that for Lifshitz spacetimes, and more generally for any flow involving Lifshitz, such reconstruction is not possible. 
First, recall that the normalizable solutions of the Klein-Gordon equation can be used to construct the Hilbert space of the bulk theory in the following way: We decompose the scalar as \begin{equation} \phi\left(t,\vec{x},r\right)=\int dEd^{d}p\frac{1}{N_{E,p}}\left(\phi_{E,p}\left(t,\vec{x},r\right)a_{E,p}+\phi_{E,p}^{*}\left(t,\vec{x},r\right)a_{E,p}^{\dagger}\right),\label{eq:phi expansion} \end{equation} where $a_{E,p}$ are operators, $N_{E,p}\equiv\Braket{\phi_{E,p},\phi_{E,p}}^{\frac{1}{2}}$ and $\Braket{\cdot,\cdot}$ is the Klein-Gordon inner product, defined by \begin{equation} \Braket{f,g}\equiv i\int_{\Sigma}d^{d}xdr\sqrt{-g}g^{00}\left(f^{*}\partial_{t}g-\left(\partial_{t}f^{*}\right)g\right).\label{eq:KGNorm} \end{equation} Here, the integral is to be taken over a spacelike slice $\Sigma$.\footnote{This norm accords with the norm preserved by the effective Schr\"odinger equation in \eqref{eq:schr}.} If we choose $\Braket{\phi_{E,p},\phi_{E,p}^{*}}=0$, i.e.\ pick definite frequency solutions, the $a$ and $a^{\dagger}$ are the usual creation/annihilation operators for particles with wavefunction $\phi_{E,p}$. We can create all possible states in the Fock space by repeatedly acting with $a^{\dagger}$ on the vacuum $\Ket{0}_{\mathrm{AdS}}$. In Lorentzian AdS/CFT, the bulk-boundary dictionary states that there exists a boundary operator defined by \begin{equation} O\left(t,\vec{x}\right)\equiv\lim_{r\rightarrow0}r^{-\Delta_{+}}\phi\left(t,\vec{x},r\right),\label{eq: O from phi} \end{equation} which is sourced by the classical, non-normalizable solution $\phi_{\mathrm{cl}}$ behaving as $r^{\Delta_{-}}$ at the boundary. Taking the above limit in \eqref{eq:phi expansion}, we arrive at \begin{equation} O\left(t,\vec{x}\right)=\int dEd^{d}p\frac{1}{N_{E,p}}\left(\varphi_{E,p}\left(t,\vec{x}\right)a_{E,p}+\varphi_{E,p}^{*}\left(t,\vec{x}\right)a_{E,p}^{\dagger}\right). \end{equation} Here $\varphi_{E,p}\equiv\lim_{r\rightarrow0}r^{-\Delta_{+}}\phi_{E,p}$. 
The remarkable fact is that the boundary operator can be expanded in terms of \textit{the same} $a$,$a^{\dagger}$ as the bulk field. Thus, to create an arbitrary state in the bulk we can use either bulk operators or boundary operators that are `smeared' over $\vec{x}$ and $t$ in an appropriate way. For example, for a single-particle state we have \begin{equation} a_{E,p}=\int dt^{\prime}d^{d}x^{\prime}N_{E,p}\varphi_{E,p}^{*}\left(t^{\prime},\vec{x}^{\prime}\right)O\left(t^{\prime},\vec{x}^{\prime}\right),\label{eq:a from O} \end{equation} so the state $\Ket{E,p}_{\mathrm{AdS}}$ can be built entirely out of boundary operators, and so on. Here we need to assume that the $\varphi$ are normalized such that \begin{equation} \int dEd^{d}p\varphi_{E,p}^{*}\left(t,\vec{x}\right)\varphi_{E,p}\left(t^{\prime},\vec{x}^{\prime}\right)=\delta\left(t-t^{\prime}\right)\delta\left(\vec{x}-\vec{x}^{\prime}\right).\label{eq:boundary normalization} \end{equation} Notice that \eqref{eq:boundary normalization}, and not \eqref{eq:KGNorm}, is the relevant inner product here. This is because the $\varphi_{E,p}$ are not solutions to any equation of motion at the boundary; rather, they form a complete set of functions.\footnote{In other words, $O$ is an off-shell operator.} The condition \eqref{eq:boundary normalization} is not in tension with the Klein-Gordon normalization condition in the bulk, since we have explicitly factored out $N_{E,p}$ in \eqref{eq:phi expansion}. Equation \eqref{eq:a from O} induces an isomorphism between the Fock-space representations of the bulk and boundary Hilbert spaces. The question we would like to answer is whether we can express any operator in the bulk entirely in terms of boundary operators. In particular, we would like to reconstruct $\phi$ from its corresponding boundary operator $O$. 
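Before doing so, note that the inversion \eqref{eq:a from O} is simply completeness of the $\varphi_{E,p}$ at work. A discrete toy analogue (entirely our illustration: a DFT basis stands in for the boundary functions) makes this transparent:

```python
# Discrete analogue of eq. (a from O): smearing the "boundary data" O(t)
# against the conjugate basis functions recovers the mode coefficients,
# provided the basis is complete and orthonormal (here: a DFT basis).
import numpy as np

rng = np.random.default_rng(0)
N = 64
k, t = np.arange(N), np.arange(N)
phi = np.exp(-2j * np.pi * np.outer(k, t) / N) / np.sqrt(N)  # phi_k(t)

a_true = rng.normal(size=N) + 1j * rng.normal(size=N)  # mode coefficients
O = phi.T @ a_true                   # boundary data O(t) = sum_k phi_k(t) a_k
a_rec = np.conj(phi) @ O             # "smeared" reconstruction of a_k
```

Orthonormality of the basis plays the role of \eqref{eq:boundary normalization}; without it, the smearing would not invert the expansion.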
We make the ansatz \begin{equation} \phi\left(t,\vec{x},r\right)=\int dt^{\prime}d^{d}x^{\prime}K\left(t,\vec{x},r|t^{\prime},\vec{x}^{\prime}\right)O\left(t^{\prime},\vec{x}^{\prime}\right),\label{eq:phi ansatz} \end{equation} where $K$ is called a smearing function. We can plug \eqref{eq:a from O} back into \eqref{eq:phi expansion} to obtain: \begin{equation} K\left(t,\vec{x},r|t^{\prime},\vec{x}^{\prime}\right)=\int dEd^{d}p\phi_{E,p}\left(t,\vec{x},r\right)\varphi_{E,p}^{*}\left(t^{\prime},\vec{x}^{\prime}\right).\label{eq:candidate K} \end{equation} Note that this $K$ differs from the usual bulk-to-boundary propagator in that it is a relationship among normalizable modes. Throughout this paper, we will assume that $K$ has a well-defined Fourier transform, which allows us to interchange the order of integration above. We will comment on some mathematical details and the precise definition of $K$ in section \ref{sec:modifyingbb}. In Lifshitz spacetime, the normalizable solutions are given by \begin{equation} \phi_{E,p}=e^{-i\left(Et-\vec{p}\cdot\vec{x}\right)}f_{E,p}=e^{-i\left(Et-\vec{p}\cdot\vec{x}\right)}e^{-\frac{d}{2}B}\psi_{E,p}. \end{equation} Near the boundary, \begin{equation} \psi\approx B_{E,p}\zeta^{\frac{1}{2}+\nu}\equiv\hat{B}_{E,p}r^{z\left(\frac{1}{2}+\nu\right)}, \end{equation} so that \begin{equation} \varphi_{E,p}=\lim_{r\rightarrow0}r^{-\Delta_{+}}\phi=e^{-i\left(Et-\vec{p}\cdot\vec{x}\right)}\hat{B}_{E,p}. \end{equation} The normalization condition \eqref{eq:boundary normalization} then requires $|\hat{B}_{E,p}|=\left(2\pi\right)^{-(d+1)/2}$. Let us now use the WKB approximation. For normalizable solutions, we have $C=0$, or $a=-ib$, so the normalization of the wavefunction is fixed by \begin{equation} |b|=\nu^{\frac{1}{2}}z^{\frac{1}{2}+\nu}\left(2\pi\right)^{-\frac{d+1}{2}}\lim_{y\rightarrow0}y^{\nu}e^{S\left(y\right)}. 
\end{equation} The properly normalized WKB solution is then given by \begin{equation} \psi_{E,p}\left(\rho\right)= \begin{cases} \left(2\pi\right)^{-\frac{d+1}{2}}\nu^{\frac{1}{2}}z^{\frac{1}{2}+\nu}\left(U+\Delta U-E^{2}\right)^{-\frac{1}{4}}\lim_{y\rightarrow0}y^{\nu}e^{S\left(y\right)-S\left(\rho\right)}, & \,\rho<\rho_{0}; \\ e^{i\frac{\pi}{4}}\left(2\pi\right)^{-\frac{d+1}{2}}\nu^{\frac{1}{2}}z^{\frac{1}{2}+\nu}\left(E^{2}-U-\Delta U\right)^{-\frac{1}{4}}\lim_{y\rightarrow0}y^{\nu}e^{S\left(y\right)}\left[e^{-i\Phi\left(\rho\right)}-ie^{i\Phi\left(\rho\right)}\right], & \,\rho>\rho_{0}, \end{cases} \end{equation} where $S\left(\rho\right)=\int_{\rho}^{\rho_{0}}d\rho^{\prime}\sqrt{U+\Delta U-E^{2}}$, $\Phi\left(\rho\right)=\int_{\rho_{0}}^{\rho}d\rho^{\prime}\sqrt{E^{2}-U-\Delta U}$ and $\Delta U\equiv 1/\left(2\rho^{\prime}\right)^{2}$ (see appendix~\ref{sec:WKB}). Using this result, we can write our candidate smearing function as \begin{equation} K=e^{-\frac{d}{2}B}\int\frac{dE}{\left(2\pi\right)^{\frac{1}{2}}}\frac{d^{d}p}{\left(2\pi\right)^{\frac{d}{2}}}e^{i\left(E\left(t^{\prime}-t\right)-\vec{p}\cdot\left(\vec{x}^{\prime}-\vec{x}\right)\right)}\psi_{E,p}.\label{eq:K as FT} \end{equation} We recognize this integral as the inverse Fourier transform of $\psi_{E,p}$. We will now show that this object does not exist\footnote{For a precise definition of what we mean by nonexistence, see section~\ref{sec:modifyingbb}.} because $\psi$ grows exponentially with momentum $p$. First, let $E$ and $\rho$ be fixed. We then choose $p$ large enough so that $\rho<\rho_{0}$, i.e.\ so that the $\rho$ we are considering is in the tunneling region. This choice is possible for any $\rho$. For concreteness, we can choose \begin{equation} p^{2}>E^{2}\rho^{k}. 
\label{eq:pbound1} \end{equation} Then \begin{equation} \left|\lim_{y\rightarrow0}y^{\nu}e^{S\left(y\right)-S\left(\rho\right)}\right|=\lim_{y\rightarrow0}y^{\nu}\exp\left(\int_{y}^{\rho}d\rho^{\prime}\sqrt{\frac{\nu^{2}}{\left(\rho^{\prime}\right)^{2}}+\frac{p^{2}}{\left(\rho^{\prime}\right)^{k}}-E^{2}}\right),\label{eq:relevant integral} \end{equation} and the integral is real-valued. Now let $0<\lambda<1$ such that $y<\lambda\rho<\rho$ and split the integral accordingly: \begin{equation} \int_{y}^{\rho}=\int_{y}^{\lambda\rho}+\int_{\lambda\rho}^{\rho}\label{eq:split integral}. \end{equation} Roughly speaking, the first integral provides the boundary data with the correct asymptotic $y$-dependence, while the second integral is responsible for the exponential behavior in $p$. In the first integral, using \eqref{eq:pbound1}, we find \begin{align} \int_{y}^{\lambda\rho}d\rho^{\prime}\sqrt{\frac{\nu^{2}}{\left(\rho^{\prime}\right)^{2}}+\frac{p^{2}}{\left(\rho^{\prime}\right)^{k}}-E^{2}} & >\nu\log\left(\frac{\lambda\rho}{y}\right).\label{eq:bound1} \end{align} In the second integral, for $p$ large enough\footnote{For concreteness, choose e.g.\ $p^{2}>E^{2}\rho^{k}/(1-c^{2})$.} we can find a constant $0<c<1$ such that \begin{equation} \int_{\lambda\rho}^{\rho}d\rho^{\prime}\sqrt{\frac{\nu^{2}}{\left(\rho^{\prime}\right)^{2}}+\frac{p^{2}}{\left(\rho^{\prime}\right)^{k}}-E^{2}}>\int_{\lambda\rho}^{\rho}d\rho^{\prime}\frac{cp}{\left(\rho^{\prime}\right)^{\frac{k}{2}}}=cz\rho^{\frac{1}{z}}\left(1-\lambda^{\frac{1}{z}}\right)p\label{eq:bound 2}. \end{equation} Putting everything together, we conclude that for $E$ and $\rho$ fixed, there exist $c,\lambda \in (0,1)$ and $p_{0}$ such that \begin{equation} \left|\lim_{y\rightarrow0}y^{\nu}e^{S\left(y\right)-S\left(\rho\right)}\right|>\left(\lambda\rho\right)^{\nu}\exp\left[cz\rho^{\frac{1}{z}}(1-\lambda^{\frac{1}{z}})p\right],\label{eq:lowerbound pure lifshitz} \end{equation} for all $p>p_{0}$. 
Hence the function $\psi_{E,p}$ grows exponentially with $p$ and the smearing function defined in \eqref{eq:candidate K} does not exist.\footnote{This exponential behavior in $p$ is distinct from the behavior of $|B|/|b|$ in $\alpha$ (see e.g.\ \eqref{eq:B/b large alpha}), since here we are interested in the amplitude of the wavefunction at a fixed radial location $\rho$, and not its overall normalization.} The inability to construct a smearing function is due to the existence of trapped modes, which have to tunnel through $V_{p}$ to reach the boundary. The boundary imprint of these modes is suppressed by a factor of $e^{-cp}$, where $c$ is some positive constant depending on the geometry. However, the normalization condition \eqref{eq:boundary normalization} turns this suppression into an exponential amplification: For any given mode the smearing function takes the corresponding boundary data and amplifies it by an appropriate factor to reconstruct bulk information. Consequently, trapped modes receive a contribution $e^{+cp}$ in the smearing function integral. As $p\rightarrow\infty$, the boundary imprint of trapped modes becomes arbitrarily small, and as a result the smearing function integral diverges. The splitting of the domain of integration into a near-boundary region $[0,\lambda\rho]$ and a bulk region $[\lambda\rho,\rho]$ is crucial for our proof: In the near-boundary region, we use the fact that no matter how large $p$ is, we can make $\rho^{\prime}$ small enough such that the cosmological and mass terms in the potential dominate over $V_{p}$ and we can approximate $U\approx \nu^{2}/(\rho^{\prime})^{2}$. Modes that tunnel through this part do not contribute an exponential factor $\sim e^{p}$, but rather produce the correct boundary scaling $y^{-\nu}$. This scaling is consequently stripped off by the $y^{\nu}$ factor in \eqref{eq:relevant integral}. 
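The exponential growth in $p$ is easy to exhibit numerically. In the following sketch (ours; the parameter values and the small cutoff $y$ are arbitrary choices), the integral in \eqref{eq:relevant integral} is evaluated at fixed $E$ for successive doublings of $p$:

```python
# Numerical evaluation of the tunneling integral in eq. (relevant integral)
# for z=2 Lifshitz (k=1), at a fixed small cutoff y; doubling p should
# increase the integral by roughly equal, p-proportional steps.
from mpmath import mp, quad, sqrt

mp.dps = 15
nu, k, E = 1, 1, 1
y, rho = mp.mpf("1e-4"), 1  # arbitrary cutoff and bulk point (ours)

def S(p):
    f = lambda r: sqrt(nu**2 / r**2 + p**2 / r**k - E**2)
    return quad(f, [y, mp.mpf("0.01"), rho])  # split to help the quadrature

d1 = S(200) - S(100)
d2 = S(400) - S(200)
```

Here $d_2/d_1\approx2$, i.e.\ the integral grows linearly in $p$, as the bound \eqref{eq:lowerbound pure lifshitz} requires.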
In the bulk region near $\rho$, however, there is a minimum value that $\rho^{\prime}$ can take, so as we drive $p$ to infinity, eventually $U\approx p^{2}/(\rho^{\prime})^{k}$ becomes a very good approximation. This is what produces the exponential factor in \eqref{eq:lowerbound pure lifshitz}. We see that there are two qualitatively different limits of the potential: $\rho\rightarrow0$ and $p\rightarrow\infty$. Both of them are important for understanding the behavior of \eqref{eq:relevant integral}, which is why we need to pick $0<\lambda<1$ to get a lower bound that reflects this behavior. Simply setting $\lambda=0$ corresponds to approximating $U\approx p^{2}/(\rho^{\prime})^{k}$ everywhere. However, in doing so we would be neglecting the boundary scaling $y^{-\nu}$, and consequently the lower bound \eqref{eq:lowerbound pure lifshitz} would be zero. Similarly, $\lambda=1$ corresponds to approximating $U\approx \nu^{2}/(\rho^{\prime})^{2}$ everywhere. While this is certainly true for small $\rho^{\prime}$, we would be missing the fact that the momentum part $V_{p}$ of the potential can still dominate in any interval away from the boundary (i.e.\ close to $\rho$) and lead to exponential growth. The bound \eqref{eq:lowerbound pure lifshitz} would just be a constant independent of $p$ and we would not be able to make the same conclusion about the smearing function. \subsection{\label{sub:Phase-space-analysis}Momentum-space analysis} It is instructive to analyze the behavior of the integral \eqref{eq:K as FT} at large momenta in the $(E,|p|)$-plane. We already saw that for fixed energy $E$, the smearing function diverges exponentially with $|p|$, as the tunneling barrier becomes arbitrarily large at high momenta. However, this is not necessarily the only direction along which the integral diverges. Let us introduce polar coordinates \begin{align} |p| & =q\cos\theta\nonumber \\ E & =q\sin\theta. 
\label{eq:polar coords} \end{align} Figure~\ref{fig:E-p plane} shows a sketch of the spectrum in the $(E,|p|)$-plane: The solid line divides trapped modes, which have to tunnel through $V_{p}$, from `free' modes, which only tunnel through $U\sim1/\rho^{2}$. If we imagine cutting off Lifshitz at some small value $\lambda\rho$ with $\lambda<1$, all modes with $E<\left(\lambda\rho\right)^{-\frac{1}{2}}|p|$ (yellow region) are trapped modes.\footnote{Notice that the choice of $\lambda$ is arbitrary. In particular, along any line $E=\tan\theta|p|$, there is a choice of $\lambda$ such that all modes are below the momentum-barrier for large enough $|p|$. Nevertheless, because of the subtleties discussed at the end of the previous section, we should not simply take $\lambda\rightarrow0$ but instead work with a small but finite value.} Let us study the integral which defines the smearing function. If we perform this integral along any direction $\theta$ over these modes (i.e.\ $\tan\theta<\left(\lambda\rho\right)^{-\frac{1}{2}}$), the exponential term in the integrand behaves as \begin{equation} \mathrm{Re}\left(S\left(y\right)-S\left(\rho\right)\right) =\int_{y}^{\rho}d\rho^{\prime}\sqrt{\frac{\nu_{z}^{2}}{(\rho^{\prime})^{2}}+\left(\frac{1}{(\rho^{\prime})^{k}}-\tan^{2}\theta\right)q^2\cos^{2}\theta }. \end{equation} For $q$ large enough, this term grows linearly and the smearing function is exponentially divergent. We see that the variable that controls the suppression (or amplification) due to tunneling is in fact $q=\sqrt{E^{2}+p^{2}}$, as opposed to just $|p|$. \begin{figure}[t] \centering \includegraphics[height=6cm]{freevstrapped_general} \caption{\label{fig:E-p plane}Sketch of free (F) and trapped (T) modes for the general case. 
Deforming the geometry in the IR may introduce a cutoff (dotted line), but this line will always remain below the solid line, and some trapped modes survive.} \end{figure} \subsection{\label{sub:adslifads}No smearing function $\Leftrightarrow$ singularities?} The divergence of the smearing function is due to trapped modes, which correspond to classical geodesics that cannot reach the boundary. However, those are precisely the trajectories that start and end at the tidal singularity at $\rho\rightarrow\infty$, so their fate is not well-understood even on the classical level. Therefore, one might wonder if the inability to construct smearing functions is simply due to the presence of singularities. This question has been raised before in the case of black hole solutions in AdS \cite{Bousso:2012mh,Leichenauer:2013kaa}.\footnote{However, we should point out that the two types of singularities encountered here are qualitatively different. In the Lifshitz case, the singularity is `mild', in the sense that all curvature invariants remain finite. It is, however, felt by strings that fall towards the horizon \cite{Horowitz:2011gh}.} Fortunately, in our case there are known ways to resolve the singularity, so we can directly test the conjecture that non-existence of smearing functions is related to singularities. In the context of Einstein-Maxwell-dilaton systems \cite{Goldstein:2009cv}, the Lifshitz singularity can be resolved by including corrections to the dilaton effective potential. For magnetically charged branes, the dilaton runs towards strong coupling in the IR. Using a toy-model of the quantum-corrected action, the authors of \cite{Harrison:2012vy} showed that the Lifshitz geometry can be resolved into an $\mathrm{AdS}_{2}\times\mathbb{R}^{2}$ region in the deep IR. For electrically charged solutions, the dilaton runs towards weak coupling near the horizon, and higher derivative corrections become important. 
In \cite{Knodel:2013fua}, two of the current authors showed that by coupling the dilaton to higher curvature terms in an appropriate way, the singularity can be resolved in a similar fashion. In particular, numerical solutions were constructed that interpolate from $\mathrm{AdS}_{4}$ in the UV to Lifshitz in some intermediate regime, and finally to $\mathrm{AdS}_{2}\times\mathbb{R}^{2}$ in the deep IR. We would like to use these numerical flows to test whether resolving the singularity can make the smearing function well-defined. As a warm-up, consider the following analytical toy-model describing such a flow: \begin{align} e^{2A} & =\frac{1}{\rho^{2}},\nonumber \\ e^{2B} & =\begin{cases} \frac{1}{\rho^{2}}, & 0<\rho< R_{1}; \\ \frac{1}{R_{1}^{k}\rho^{2-k}}, & R_{1}<\rho<R_{2}; \\ \frac{1}{R_{1}^{k}R_{2}^{2-k}}, & R_{2}<\rho, \end{cases}\nonumber \\ C & =A.\label{eq:adslifads toy model metric} \end{align} The last condition is a gauge choice, which fixes our radial coordinate to be $\rho$, as defined in \eqref{eq:dxdr}. The potential is given by \begin{equation} U\left(\rho\right)=\begin{cases} \frac{\nu_{1}^{2}-\frac{1}{4}}{\rho^{2}}+p^{2}, & 0<\rho<R_{1}; \\ \frac{\nu_{z}^{2}-\frac{1}{4}}{\rho^{2}}+p^{2}\left(\frac{R_{1}}{\rho}\right)^{k}, & R_{1}<\rho<R_{2}; \\ \frac{\nu_{\infty}^{2}-\frac{1}{4}}{\rho^{2}}+p^{2}\left(\frac{R_{1}}{R_{2}}\right)^{k}\left(\frac{R_{2}}{\rho}\right)^{2}, & R_{2}<\rho, \end{cases}\label{eq:adslifads toy model U} \end{equation} where $\nu_{z}$ was defined in \eqref{eq:nualphadef}, and $0<k<2$. All modes with $p>E$, or equivalently $\tan\theta<1$, are trapped. It is interesting to note that since the potential goes to zero as $\rho\rightarrow\infty$, there are now modes that are below the barrier in the $\mathrm{AdS}_{d+2}$ region. For pure AdS, this is not possible, as the wavefunction cannot be below the barrier everywhere. Let us see if a smearing function exists for any point $\rho$ in the bulk. 
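It is convenient to have the piecewise potential in executable form; in the following throwaway script (ours), all numerical values, including the $\nu$'s, are arbitrary:

```python
# Toy-model potential, eq. (adslifads toy model U); the momentum barrier
# V_p (the p^2 piece) is continuous across R1 and R2, while the nu-terms
# jump because the toy metric has kinks there.
R1, R2, k = 1.0, 10.0, 1.0          # arbitrary values (ours)
nu1, nuz, nuinf = 1.0, 1.2, 1.5     # arbitrary values (ours)

def U(rho, p):
    if rho < R1:      # AdS_{d+2} region
        return (nu1**2 - 0.25) / rho**2 + p**2
    if rho < R2:      # Lifshitz region
        return (nuz**2 - 0.25) / rho**2 + p**2 * (R1 / rho)**k
    # AdS_2 x R^d region
    return (nuinf**2 - 0.25) / rho**2 + p**2 * (R1 / R2)**k * (R2 / rho)**2

def Vp(rho, p):
    return U(rho, p) - U(rho, 0.0)  # isolate the momentum barrier

p, E = 7.0, 2.0
jump = abs(Vp(R1 * (1 - 1e-9), p) - Vp(R1 * (1 + 1e-9), p))
trapped = U(0.5, p) > E**2          # a p > E mode is under the barrier
```

The check confirms that $V_p$ matches smoothly across the transition radii and that a mode with $p>E$ is indeed below the barrier near the boundary.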
For $0<\rho<R_{1}$, we need to compute \begin{equation} \left|\lim_{y\rightarrow0}y^{\nu_{1}}e^{S\left(y\right)-S\left(\rho\right)}\right| = \lim_{y\rightarrow0}y^{\nu_{1}}\exp\left(\mathrm{Re}\int_{y}^{\rho}d\rho^{\prime}\sqrt{\frac{\nu_{1}^{2}}{\rho^{\prime2}}+\left(1-\tan^{2}\theta\right)q^2\cos^{2}\theta}\right).\label{eq:e^S in AdS} \end{equation} Naively, one might expect that since we are integrating all the way up to the boundary at $\rho=0$, the $1/\rho^{2}$-term will eventually dominate and there is no $q$-divergence. However, we have seen before that it is necessary to split the integral into a near-boundary region and a bulk region, according to \eqref{eq:split integral}. The near-boundary integral will then produce the typical boundary scaling $y^{-\nu_{1}}$, while the bulk integral will grow linearly for trapped modes. In complete analogy with \eqref{eq:lowerbound pure lifshitz} we find that there exist constants $q_{0},c>0$ and $\lambda\in(0,1)$ such that \[ \left|\lim_{y\rightarrow0}y^{\nu_{1}}e^{S\left(y\right)-S\left(\rho\right)}\right|>\left(\lambda\rho\right)^{\nu_{1}}e^{cq}, \] for all $q>q_{0}$. Again, even though the $1/\rho^{2}$ part of the potential dominates near the boundary, there is still an exponential divergence due to trapped modes, and the smearing function does not exist in the AdS region. For points within the Lifshitz region ($R_{1}<\rho<R_{2}$), the relevant integral contains an integral over the $\mathrm{AdS}_{d+2}$ region, which is divergent by itself, plus an additional term \begin{equation} \int_{R_{1}}^{\rho}d\rho^{\prime}\sqrt{\frac{\nu_{z}^{2}}{\rho^{\prime2}}+\left(\left(\frac{R_{1}}{\rho^{\prime}}\right)^{k}-\tan^{2}\theta\right) q^2 \cos^{2}\theta}. \end{equation} This integral gives a real contribution for $\tan\theta<\left(R_{1}/\rho\right)^{k/2}$, which grows linearly with large $q$. 
Hence the smearing function still grows like $e^{c^{\prime}q}$, but now $c^{\prime}>c$ and it diverges even faster than in the $\mathrm{AdS}_{d+2}$ part. The same logic can be applied to a point within the $\mathrm{AdS}_{2}\times\mathbb{R}^{d}$ region in the IR ($\rho>R_{2}$). In this case there is a contribution from both $\mathrm{AdS}_{d+2}$ and Lifshitz, plus a contribution \begin{equation} \int_{R_{2}}^{\rho}d\rho^{\prime}\sqrt{\frac{\nu_{\infty}^{2}}{\rho^{\prime2}}+\left(\left(\frac{R_{1}}{R_{2}}\right)^{k}\left(\frac{R_{2}}{\rho^{\prime}}\right)^{2}-\tan^{2}\theta\right)q^2 \cos^{2}\theta }. \end{equation} Modes with $\tan\theta<\left(R_{1}/R_{2}\right)^{k/2}R_{2}/\rho$ begin to tunnel already in the $\mathrm{AdS}_{2}\times\mathbb{R}^{d}$ part of the potential, and so the smearing function will diverge even faster at large $q$. The final result is that there is no smearing function for any point $\rho$ in the bulk. The trapped modes lead to an exponential divergence which becomes worse the deeper we try to reach into the bulk. Let us now check that the result obtained for the toy-model \eqref{eq:adslifads toy model metric} also holds for the exact numerical solution found in \cite{Knodel:2013fua} (here $d=2$). The effective potential is plotted in Figure~\ref{fig:U for adslifads}. As $p$ increases, the potential becomes better and better approximated by $V_{p}$ (shown in Figure~\ref{fig:adslifads Vp}). The metric coefficients and potential are of the form given in \eqref{eq:adslifads toy model metric} and \eqref{eq:adslifads toy model U}, except that now there is a smooth transition between the three regions. \begin{figure}[t] \centering \includegraphics[height=6cm]{U_num} \caption{\label{fig:U for adslifads}Effective potential $U$ for the numerical flow found in \cite{Knodel:2013fua}, for $m=1$. The momentum increases from bottom to top, with $p=0$ (black), $10^{2}$ (blue), $10^{4}$ (red), $10^{5}$ (green). 
At large momenta, the potential is well approximated by $V_{p}=e^{2W}p^{2}$.} \end{figure} \begin{figure}[t] \centering \hspace{-0.5cm}\includegraphics[height=6cm]{Vp_num} \caption{\label{fig:adslifads Vp}The factor $e^{2W}$ for the same numerical solution. The solution flows from $\mathrm{AdS}_{4}$ ($e^{2W}\approx \mathrm{const.}$), to Lifshitz ($e^{2W}\sim\rho^{-1.45}$, corresponding to $z\approx3.68$), to $\mathrm{AdS}_{2}\times\mathbb{R}^{2}$ ($e^{2W}\sim\rho^{-2}$).} \end{figure} Figures~\ref{fig:ReSEpplaneAdS4}--\ref{fig:ReSEpplaneAdS2} show the real part of $S\left(y\right)-S\left(\rho\right)$ in the ($E$,$|p|$)-plane. Instead of taking $y$ to zero, we choose $y\approx10^{-15}$, which we may think of as disregarding the near-boundary region of the $\rho^{\prime}$-integral and starting at $y=\lambda\rho$. The thick line divides free (blue) modes from trapped (yellow) modes. The contours represent lines along which $\mathrm{Re}\left(S(y)-S(\rho)\right)$ is constant. If we keep $E$ fixed and increase $p$, we cross the contours at approximately equal distances, so the integral grows linearly in $p$. This is true not only for lines of constant $E$, but for any line within the trapped region (i.e.\ any line that stays below the black solid line). Hence the integral indeed diverges linearly with $q=\sqrt{E^{2}+p^{2}}$, as was anticipated in section \ref{sub:Phase-space-analysis}. Figure~\ref{fig:Re(S) all points} shows $\mathrm{Re}\left(S\left(y\right)-S\left(\rho\right)\right)$ for three points representing $\mathrm{AdS}_{4}$, Lifshitz and $\mathrm{AdS}_{2}\times\mathbb{R}^{2}$. The energy is held fixed at $E=10^{16}$, such that at small $p$, the wavefunction is oscillating everywhere. As we increase $p$, the mode eventually becomes trapped and the real part of the integral grows linearly. Note that in the log-log plot used here, the three curves lie nicely on top of each other.
This fact confirms our prediction that the smearing function diverges faster the deeper we try to reach into the bulk. \begin{figure}[t] \centering \includegraphics[height=5.7cm]{Re_S_pointnearAdS4_contour} \vspace{-.4cm} \caption{\label{fig:ReSEpplaneAdS4}Plot of $\mathrm{Re}\left(S(y)-S(\rho)\right)$ for a point within the $\mathrm{AdS}_{4}$ region ($\rho\approx1.3\cdot10^{-15}$). The black solid line represents $V_{p}=E^{2}$ and divides free (blue) from trapped modes (yellow). Contours indicate lines of constant $\mathrm{Re}\left(S(y)-S(\rho)\right)$, with values increasing linearly from one contour to the next.} \end{figure} \begin{figure}[t] \centering \includegraphics[height=5.7cm]{Re_S_pointnearLif_contour} \vspace{-.4cm} \caption{\label{fig:ReSEpplaneLif}Plot of $\mathrm{Re}\left(S(y)-S(\rho)\right)$ for a point within the Lifshitz region ($\rho\approx9\cdot10^{-8}$).} \end{figure} \begin{figure}[t] \centering \includegraphics[height=5.7cm]{Re_S_pointnearAdS2_contour} \vspace{-.4cm} \caption{\label{fig:ReSEpplaneAdS2}Plot of $\mathrm{Re}\left(S(y)-S(\rho)\right)$ for a point within the $\mathrm{AdS}_{2}\times\mathbb{R}^{2}$ region ($\rho\approx1$).} \end{figure} We conclude that resolving the tidal singularity is not enough to make the smearing function well defined. The $\mathrm{AdS}_{2}\times\mathbb{R}^{2}$ region in the IR can be thought of as the $z\rightarrow\infty$ limit of Lifshitz spacetime. As a consequence, $V_{p}\sim\rho^{-2}$, and there are still trapped modes with arbitrarily small boundary imprint. It is also worth commenting on the addition of an AdS region in the UV, as in \eqref{eq:adslifads toy model metric}, which may seem desirable to make the holographic renormalization procedure better defined. We have seen explicitly that the integral in \eqref{eq:e^S in AdS} is still divergent at large momenta and a smearing function does not exist, even for points close to the boundary.
This is the quantum equivalent of the observation made at the end of section \ref{sub:lifgeos}, that null geodesics with large enough $p$ still see a `Lifshitz barrier' and remain trapped inside the bulk, regardless of the near-boundary geometry. \begin{figure}[t] \centering \hspace{-4cm}\includegraphics[height=7cm]{Re_S_allpoints_loglog} \caption{\label{fig:Re(S) all points}Plot of the real part of $S(y)-S(\rho)$ vs.\ $p$ at three different positions within the $\mathrm{AdS}_{4}$ ($\rho\approx1.3\cdot10^{-15}$), Lifshitz ($\rho\approx9\cdot10^{-8}$) and $\mathrm{AdS}_{2}\times\mathbb{R}^{2}$ ($\rho\approx1$) regions (from bottom to top). The energy is fixed at $E=10^{16}$ and we chose $m=1$. For large momenta, the solution begins to tunnel and contributes an exponential factor in $K$.} \end{figure} \subsection{\label{sub:lifads}Other flows involving Lifshitz} The $\mathrm{AdS}_{2}\times\mathbb{R}^{d}$ geometry considered in the previous section is not the only possible IR endpoint of the RG-flow for Lifshitz solutions. Refs.~\cite{Braviner:2011kz,Singh:2010cj,Singh:2013iba} have considered flows from Lifshitz in the UV to an $\mathrm{AdS}_{d+2}$ fixed point in the IR. These flows are of particular interest to us, since $V_{p}$ does not go to zero as $\rho \rightarrow \infty$, but instead reaches a constant value corresponding to the AdS geometry at the horizon. Consequently, some of the problematic trapped modes never oscillate, and are thus removed from the spectrum.
To see how this works, consider the following toy-model of such a Lifshitz to $\mathrm{AdS}_{d+2}$ flow: \begin{align} e^{2A} & =\frac{1}{\rho^{2}},\nonumber \\ e^{2B} & =\begin{cases} \frac{1}{\rho^{2-k}}, & 0<\rho\leq R_{1};\\ \frac{R_{1}^{k}}{\rho^{2}}, & \rho>R_{1}, \end{cases}\nonumber \\ C & \equiv A.\label{eq:lifads4} \end{align} The potential is given by \begin{equation} U\left(\rho\right)=\begin{cases} \frac{\nu_{z}^{2}-\frac{1}{4}}{\rho^{2}}+\frac{p^{2}}{\rho^{k}}, & 0<\rho\leq R_{1};\\ \frac{\nu_{1}^{2}-\frac{1}{4}}{\rho^{2}}+\frac{p^{2}}{R_{1}^{k}}, & \rho>R_{1}. \end{cases} \end{equation} To compute the smearing function at some fixed $\rho\leq R_{1}$, we again split the interval $[0,\rho]$ into a near-boundary region $[0,\lambda\rho]$ and a bulk region $[\lambda\rho,\rho]$, where $\lambda<1$. In the bulk region, the potential can be approximated by $V_{p}=p^{2}/\rho^{k}$ for $p$ large enough. Then, modes with $p>\left(\lambda\rho\right)^{k/2}E$ are trapped by $V_{p}$. For $\rho>R_{1}$, the potential takes a constant value. In pure Lifshitz, modes with $p>R_{1}^{k/2}E$ would eventually have been oscillating in this region. However, these modes are now completely under the barrier and therefore have to be excluded from the spectrum. The $\mathrm{AdS}_{d+2}$ region in the IR thus introduces a natural (energy-dependent) momentum cutoff. Nevertheless, there is still a finite wedge of trapped modes with $R_{1}^{-k/2}<\tan\theta<\left(\lambda\rho\right)^{-k/2}$ (cf.\ Figure~\ref{fig:E-p plane}), and integrating up to $q=\infty$ will produce the same divergent behavior as before. In section \ref{sub:removing trapped modes}, we will give a general argument as to why this has to be the case, and show that no smooth IR-deformation can remove all trapped modes from the spectrum.
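As a simple numerical sanity check of the linear-in-$q$ growth driving these divergences, one can evaluate the bulk-region WKB integral for the Lifshitz-region potential above at a trapped angle. This is only a rough sketch: all parameter values below are hypothetical demo choices, and the code is not part of the analysis above.

```python
import math

# Rough numerical sketch (hypothetical parameters, not the paper's numerics):
# midpoint-rule evaluation of the bulk-region WKB integral for the
# Lifshitz-region potential of the toy model,
#   U - E^2 = nu_z^2/rho'^2 + (rho'^(-k) - tan^2(theta)) q^2 cos^2(theta),
# with E = q sin(theta), p = q cos(theta).
NU, K = 1.0, 2.0        # nu_z and the Lifshitz exponent k (demo choices)
LAM, RHO = 0.5, 1.0     # bulk region [LAM*RHO, RHO] with lambda < 1
TAN_TH = 0.1            # tan(theta) < (LAM*RHO)^(-k/2): a trapped direction

def re_s(q, n=100_000):
    """Approximate Re(S(lambda*rho) - S(rho)) by a midpoint Riemann sum."""
    a, b = LAM * RHO, RHO
    h = (b - a) / n
    cos2 = 1.0 / (1.0 + TAN_TH ** 2)  # cos^2(theta)
    total = 0.0
    for i in range(n):
        rp = a + (i + 0.5) * h
        val = NU ** 2 / rp ** 2 + (rp ** (-K) - TAN_TH ** 2) * q ** 2 * cos2
        total += math.sqrt(val) if val > 0.0 else 0.0
    return h * total

# Linear growth in q: re_s(q)/q approaches a positive constant at large q.
ratios = [re_s(q) / q for q in (1e2, 1e3, 1e4)]
```

For these values the three ratios agree to well within a percent, consistent with $\mathrm{Re}\left(S(\lambda\rho)-S(\rho)\right)\approx cq$ at large $q$.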
\section{\label{sec:sfgeneral}Generalization} We have seen that the construction of smearing functions can fail if there are modes that have to tunnel through a momentum barrier in the potential. The integral \eqref{eq:candidate K} diverges if such modes exist at arbitrarily large $q=\sqrt{E^{2}+p^{2}}$. In this section, we will generalize our previous findings to prove that smearing functions do not exist for any geometry that admits trapped modes. Consider a background that satisfies \begin{equation} \partial_{\rho}e^{W}<0\;\mathrm{for}\;\rho\in[\rho_{1},\rho_{2}].\label{eq:preliminary crit} \end{equation} We would like to compute the smearing function at a bulk point $\rho>\rho_{1}$. All modes with $V_{p}\left(\rho_{1}\right)>E^{2}$ have to tunnel through some part of $V_{p}$ and are therefore trapped modes. Let us write the integral defining the smearing function in \eqref{eq:candidate K} as $\int dE\,d|p|\int d\Omega_{d-1}$ and focus on the integral in the ($E$,$|p|$)-plane. The domain of integration is shown in Figure~\ref{fig:E-p plane}, where free and trapped modes are separated by the solid line $E^{2}=V_{p}\left(\rho_{1}\right)$. Choosing polar coordinates \eqref{eq:polar coords}, we find that the exponential part of the integrand satisfies \[ \mathrm{Re}\left(S\left(y\right)-S\left(\rho\right)\right)>\mathrm{Re}\int_{\rho_{1}}^{\rho_{2}}d\rho^{\prime}\sqrt{V_{\mathrm{m}}(\rho^{\prime})+V_{\mathrm{cos}}(\rho^{\prime})+\left(e^{2W(\rho^{\prime})}-\tan^{2}\theta\right)q^{2}\cos^{2}\theta}. \] Since the integration domain does not include the boundary, the first two terms under the square root are bounded. Thus, for $\tan\theta<e^{W(\rho_{1})}$, the integral grows linearly with large $q$ and the smearing function diverges exponentially. The divergence appears not only at fixed $E$, but under any angle in the yellow region of Figure~\ref{fig:E-p plane}.
Consequently, if a geometry has trapped modes that are below the barrier at some $\rho_{1}$, a smearing function does not exist for any $\rho>\rho_{1}$. From the null energy condition \eqref{eq:nullxxsimple} and the discussion thereafter, we know that once $\partial_\rho e^{W}$ is negative at some $\rho_1$, it cannot be positive for any $\rho<\rho_{1}$. Thus, once the wavefunction is below the $V_{p}$ barrier, it will stay below it as we go towards the boundary. Using the terminology introduced in section \ref{sec:classical}, trapped modes cannot become free near the boundary. Therefore, when computing the smearing function $K\left(t,x,\rho|t^{\prime},x^{\prime}\right)$, there is an exponential contribution from trapped modes regardless of which bulk point $\rho$ we consider. The condition \eqref{eq:preliminary crit} makes it easy to identify geometries without smearing functions. Clearly, Lifshitz has $\partial_\rho e^{W}<0$ everywhere, and as we saw earlier, $K$ does not exist. If we instead consider flows that involve only a finite region with broken Lorentz invariance, such that \eqref{eq:preliminary crit} is satisfied in some region, we still have trapped modes, and the smearing function will not exist. This analysis includes flows involving a Lifshitz region, as well as hyperscaling-violating geometries with Lifshitz scaling. Our analysis above shows that none of these geometries admit smearing functions, provided the spacetime satisfies the NEC. \subsection{\label{sub:removing trapped modes}Removing trapped modes via deformations} In our discussion above, we always assumed that the momentum-space integral \eqref{eq:candidate K} does in fact include trapped modes with arbitrarily large $q$ on some set of nonzero measure. This is clearly the case in the examples mentioned above.
On the other hand, the smearing function for $\mathrm{AdS}$ converges because modes with $p^{2}>E^{2}$ are simply not part of the spectrum, as the corresponding wavefunction would have to be below the potential globally. One might wonder if it is possible to `fix' a geometry which a priori does not admit a smearing function, by removing all trapped modes from the spectrum in a physical way. The AdS example gives us a hint as to how one might accomplish this task: if the geometry is deformed in the deep IR such that would-be trapped modes never actually oscillate, they would simply not be allowed. From our discussion of the null energy condition in section \ref{sec:classical}, it follows that there are only three relevant IR asymptotics that we need to consider: \begin{enumerate} \item $e^{W}$ decreases monotonically to a constant value $\m>0$. \item $e^{W}$ attains a minimum value $\m>0$, but then goes to a constant $M>\m$. \item $e^{W}$ attains a minimum value $\m>0$, but then goes to infinity. \end{enumerate} Trapped states are equivalent to tunneling states in the potential $V_{p}=p^{2}e^{2W}$. For $p$ large enough, these states always exist \cite{Brau:2003}. This can be seen heuristically by bounding the potential from above with an appropriate square-well potential $\widetilde{U}\left(\rho\right)$ (see Figure~\ref{fig:minmax}). Therefore, no smooth deformation can ever remove all trapped modes from the spectrum. As an example, consider case 1, which captures the case of the Lifshitz to $\mathrm{AdS}_{d+2}$ flow discussed in section \ref{sub:lifads}. The AdS region introduces an energy-dependent momentum cutoff $p<E/\m$. However, since $\m$ is by definition a global minimum and \eqref{eq:preliminary crit} holds, we clearly have $\m<e^{W(\rho_{1})}$. Although the cutoff may remove some trapped modes from the spectrum, there will always remain a wedge of trapped modes that gives a divergent contribution when integrated over (see Figure~\ref{fig:E-p plane}).
We conclude that spaces without a smearing function cannot be deformed smoothly to make the smearing function well-defined. \begin{figure}[th] \noindent \begin{centering} \includegraphics[height=6cm]{minmax} \par\end{centering} \caption{\label{fig:minmax}Sketch of $V_{p}$ for a potential satisfying \eqref{eq:preliminary crit}. This includes deformations of AdS and flows involving Lifshitz. Using the min-max principle, the energy levels are bounded from above by those of a square-well potential. In the large $p$ limit, there are always trapped modes. The near-horizon behavior of the potential is irrelevant for our discussion.} \end{figure} \subsection{Adding trapped modes via deformations} Another interesting question is what happens if we take a geometry with a smearing function, such as AdS \cite{Bena:1999jv,Hamilton:2005ju,Hamilton:2006az}, and add a small (planar) perturbation in the IR. It can be seen from \eqref{eq:nullxxsimple} that $e^W$ must start with non-positive slope at the boundary for any background that is asymptotically AdS\footnote{ If we do not insist on AdS asymptotics, then we could choose $e^W$ to immediately have a positive slope. If $e^{W}$ has positive slope at some $\rho_{+}$, the NEC dictates that $e^W$ cannot begin to decrease at any larger $\rho$. Thus, in this scenario no trapped modes are introduced, and the smearing function will continue to exist everywhere. In particular, we cannot have a situation akin to Figure 5 in \cite{Leichenauer:2013kaa}, where the potential has a dip allowing trapped modes to become oscillating again close to the boundary. }. Since the potential scales with $p$, such a perturbation will always introduce new trapped modes. In particular, the momentum-potential $V_{p}=p^{2}e^{2W}$ can always be bounded from above by a semi-infinite square-well potential of width $l$ and height $h=p^{2}h_{0}$, where $h_{0}$ is some constant (see Figure~\ref{fig:minmax}).
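A rough count makes the growth of the number of trapped states explicit (a sketch based on the standard textbook square-well/WKB estimate, in units where the Schr\"odinger operator is $-\partial_{\rho}^{2}+V$; the prefactor is only indicative here): a well of depth $h=p^{2}h_{0}$ and width $l$ supports approximately \[ N\left(p\right)\approx\frac{l\sqrt{h}}{\pi}=\frac{l\sqrt{h_{0}}}{\pi}\,p \] bound states, a number that grows without bound as $p\rightarrow\infty$.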
For large enough $p$, the square-well always admits bound states with $p^{2}\left(1-h_{0}\right)<E^{2}<p^{2}$ and, via the min-max principle, so will $V_{p}$. As a result, the smearing function would be destroyed anytime the metric is deformed by such a perturbation. This result is interesting, as it opens up the possibility that `small' perturbations of AdS can make the smearing function ill-defined by introducing new trapped states. However, we should keep in mind that our ansatz only allows for planar perturbations; we cannot consider localized disturbances. It would be interesting to study the effect of such perturbations in a more general setup. Again, notice that the ultimate IR fate of the geometry with AdS behavior in the UV is not important for this discussion. In particular, whether or not there is a singularity at $r\rightarrow\infty$ does not change the qualitative result. \subsection{Relativistic domain wall flows} Given the above considerations, one may get the impression that the smearing function no longer exists for any geometry other than pure AdS. However, it is important to realize that such a conclusion is in fact unwarranted. What we have seen is that the non-existence of the smearing function is intimately tied to the presence of trapped modes with exponentially small imprint on the boundary. Since such modes arise from the large $p$ limit of $V_p=e^{2W}\vec p\,^2$, they are naturally absent when $W=0$, corresponding to flows preserving $(d+1)$-dimensional Lorentz symmetry \begin{equation} ds_{d+2}^2=e^{2B(r)}[-dt^2+d\vec x_d^2]+e^{2C(r)}dr^2. \end{equation} In this case, the Schr\"odinger equation \eqref{eq:schr} is more naturally written as \begin{equation} -\psi''+(V_m+V_{\rm cos})\psi=(E^2-\vec p\,^2)\psi. \label{eq:relsch} \end{equation} In particular, the effective potential $\hat U=V_m+V_{\rm cos}$ no longer scales with $p$. In general, $\hat U$ may admit bound states and/or modes trapped at the horizon. 
Although bound states fall off exponentially outside the classically allowed region, since such states occur only at fixed values of $Q^2\equiv E^2-\vec p\,^2$, they will always have a non-vanishing (although small) amplitude at the boundary. Hence the presence of such states does not present an obstruction to the existence of a smearing function. Trapped modes at the horizon, on the other hand, are potentially more troubling, as they may form a continuous spectrum with a limit of vanishing amplitude on the boundary. However, it turns out that this possibility does not prevent the construction of a well-defined smearing function $K(t,x,r|t',x')$ for any fixed value of $r$. The point here is that since $\hat U$ is independent of $Q$, the maximum suppression factor for tunneling from the boundary to $r$ is bounded by setting $Q=0$ in \eqref{eq:relsch}. As a result, it is impossible to make the suppression factor arbitrarily small. Hence we conclude that the smearing function exists for finite $r$ in the case of relativistic domain wall flows, although the $r\to\infty$ limit of $K$ may not exist if there are trapped modes that live arbitrarily far from the boundary. We see that it is generally possible to define a smearing function only for relativistic flows, where $W=0$ along the entire flow. Furthermore, for the case of $\mathrm{AdS}_{d+2}\to \mathrm{AdS}_{d+2}$ flows, the effective potential $\hat U$ falls off as $1/\rho^2$ both in the UV and in the IR. Since this potential is too steep to admit trapped modes in the deep IR, there are no modes completely removed from the boundary, and hence the $r\to\infty$ limit of the smearing function is well-defined. Thus, in this case, the entire bulk may be reconstructed. \section{\label{sec:modifyingbb}Modifying the bulk-boundary dictionary} We have seen that for transverse Lorentz-breaking spacetimes with a locally decreasing transverse speed of light, the smearing function is not well defined, even after resolving potential singularities.
Thus, we are left with the option of loosening some of our initial assumptions about this function and its corresponding entry in the bulk-boundary dictionary. In particular, we need to reexamine our implicit assumption that $K$ can reconstruct the bulk up to arbitrarily small transverse length scales. Let us be a bit more precise about what kind of mathematical object the smearing function really is, and what we mean by saying that $K$ does or does not exist. The most general possible definition is to let the smearing function be any map from boundary operators to bulk fields. However, a reasonable condition is that $K$ defines a continuous, linear functional on the space of boundary operators. Continuity means that for any convergent sequence of boundary operators $O_{n}$ we have \begin{equation} \lim_{n\rightarrow\infty}K\left[O_{n}\right]=K\left[\lim_{n\rightarrow\infty}O_{n}\right].\label{eq:continuity} \end{equation} The difficulty in constructing such a $K$ is due to the fact that the two limits are defined with respect to very different norms. The bulk norm relevant for the left hand side is the Klein-Gordon norm \eqref{eq:KGNorm}, while the boundary norm for $O$ is given by \eqref{eq:boundary normalization}. We have seen that in spacetimes with $\partial_\rho e^{W}<0$ locally, there exist nonzero bulk solutions that have exponentially small boundary imprint, which provide an obstruction for constructing continuous smearing functions. Our strategy in this paper was to calculate a candidate smearing function $\widehat{K}$ in momentum space, and ask whether it defines a well-behaved object in position space. The problematic case is when the function defined in this way grows exponentially, i.e.\ $\widehat{K}\approx e^{cp}$. Its action on a boundary field can be written in momentum space as \begin{equation} K\left[O\right]\sim\int dp\,\widehat{K}\left(p\right)\widehat{O}\left(p\right)\label{eq:K(O)pspace}. 
\end{equation} Whether or not this integral is well-defined clearly depends on what we allow $\widehat{O}$ to be: if $\widehat{O}$ is a square-integrable function, the smearing function has to be square-integrable as well, which is clearly not the case here. What if we impose a stricter fall-off condition at $p\rightarrow\infty$? One rather strict condition would be that $\widehat{O}$ falls off faster than any inverse power of $p$ at infinity\footnote{In other words, $O$ is a Schwartz function and $K$ is a tempered distribution.}. A classic example of such a function is a Gaussian $\sim e^{-p^{2}}$. However, $e^{cp}$ is not a well-defined functional on this space either. This can be seen by explicitly constructing a sequence of functions with `arbitrarily small' boundary imprint, i.e.\ a sequence that goes to zero in the boundary norm. For example, consider \begin{equation} \widehat{O}_{n}\left(p\right)\equiv e^{-cn}\Psi\left(p-n\right), \end{equation} where $\Psi$ is some bump function. Attempting to reconstruct the corresponding bulk solution yields $K[O_{n}]\sim\int dp\,\Psi\left(p\right)$, which is independent of $n$, and in particular never equal to zero. By \eqref{eq:continuity}, this means that the smearing function is not continuous. The only way to make sense of the smearing function is to completely avoid configurations with arbitrarily small boundary imprint. This can only be achieved by introducing a hard momentum cutoff $\Lambda$. In other words, we attempt to invert the bulk-boundary map $\phi\mapsto O$ only for configurations with $\widehat{O}(p>\Lambda)=0$. Acting on these functions, the exponential $e^{cp}$ is indeed a well-defined continuous functional, and the integral \eqref{eq:K(O)pspace} converges. There is, however, a price to pay: as is well known, the Fourier transform of such compactly supported functions does not have compact support.
The position space wavefunction necessarily has to `leak out' to infinity, and thus full localization in the transverse direction can never be achieved\footnote{Here we have taken the necessity of smearing $\phi$ in position space as an indication of nonlocality. However, from a quantum point of view, a more proper indication of nonlocality would be the nonvanishing of the commutator outside of the lightcone.}. \section{\label{sec:conclusion}Conclusion} Motivated by some of the difficulties that have been observed in trying to understand the global structure of Lifshitz spacetimes, we have studied the possibility of bulk reconstruction from boundary information. At the classical level, the presence of non-radial null geodesics that do not reach the Lifshitz boundary suggests that much of the bulk data is inaccessible from the boundary. We have confirmed this heuristic picture by studying smearing functions for a bulk scalar field and demonstrating that they do not exist for Lifshitz spacetimes with $z>1$. The reason for this is that there will always be trapped modes in the bulk that have exponentially vanishing imprint on the boundary. It is these modes, and the information that they contain, that cannot be reconstructed from any local boundary data. Of course, it is well known that a pure Lifshitz background has a tidal singularity at the horizon. Since the trapped modes begin and end in the tidal singularity, we had initially conjectured that resolving the Lifshitz singularity would remove such modes and lead to a well-defined smearing function. However, as we have seen, this is not the case; even with a regular horizon such as $\mathrm{AdS}_2\times \mathbb R^d$ or $\mathrm{AdS}_{d+2}$, there will be trapped modes with vanishing imprint on the boundary as the transverse momentum is taken to infinity. Thus the existence or non-existence of a smearing function is independent of the nature of the horizon, and in particular of whether it is singular or not.
More generally, we have seen that the constructibility of the smearing function depends crucially on whether there exists a family of trapped modes whose boundary imprint becomes arbitrarily suppressed. The only way this can arise is if the momentum-dependent part of the effective Schr\"odinger potential $V_p=e^{2W}\vec p\,^2$ has a local minimum or a barrier that grows as $p\to\infty$. Thus the question of whether the smearing function exists is closely related to the behavior of the gravitational redshift factor $e^{-W}$. In general, all non-relativistic backgrounds such as Lifshitz and ones with hyperscaling violation (including flows with such regions) do not admit smearing functions. The same is true for geometries such as Schwarzschild-AdS, where $e^{2W}$ starts out as unity on the boundary, but vanishes at the horizon \cite{Leichenauer:2013kaa}. On the other hand, smearing functions are expected to exist for backgrounds with $W=0$, i.e.\ ones preserving $(d+1)$-dimensional Lorentz invariance along the entire flow. The scaling of $V_p$ with $\vec p\,^2$ has the important consequence that any trapped mode will always be completely suppressed on the boundary with a factor $\sim e^{-cq}$ as $q\to\infty$, where $q^2=E^2+\vec p\,^2$ and $c$ is a geometry- and radial-location-dependent positive constant. This gives rise to the perhaps somewhat unexpected feature that, in the presence of trapped modes, the smearing function $K(t,x,r|t',x')$ cannot exist even in an asymptotic AdS$_{d+2}$ region near the boundary, so long as $r$ is at a fixed location. One may wonder why the presence of trapped modes living in the IR would destroy the possibility of reconstruction of the UV region near the boundary. The reason for this is that, while a trapped mode in the IR indeed has to tunnel to reach the boundary, its amplitude does not immediately vanish in the interior of the bulk geometry. Moreover, these modes can live at a finite distance from the boundary.
Hence they can have an imprint at any fixed $r$ in the bulk, and yet vanish on the boundary. It follows that the bulk information corresponding to such modes cannot be obtained from the boundary, and thus the smearing function would not exist for any fixed value of $r$. Since the existence of trapped modes with arbitrarily large values of $q$ provides an obstruction to the construction of a smearing function, one way around this difficulty is to remove such modes by considering a hard momentum cutoff $\Lambda$. Another way to think about this is that it may indeed be possible to reconstruct the bulk data from the boundary information, but only up to a fixed momentum $\Lambda$. As $\Lambda$ is taken larger, the reconstruction becomes more difficult, as there would be larger amplification in going from the boundary to the bulk due to the presence of trapped modes with larger values of $q$. With such a cutoff, one would have good control of the near-boundary region in the bulk. However, one would lose complete localization in the transverse directions. Finally, let us try to give at least a partial answer to the question raised in the title of this paper. If we limit ourselves to a minimum spatial resolution, local operators in the non-relativistic CFT do indeed contain all the relevant information about fields in the bulk of Lifshitz and other `non-relativistic' spacetimes. However, full locality in the transverse direction cannot be achieved using smearing functions alone, due to the presence of modes with vanishing boundary imprint. If and how the missing local bulk information can be extracted from the field theory remains an interesting open question. One possibility that comes to mind is to make use of non-local operators in the field theory, such as Wilson loops \cite{Susskind:1999ey}.
At the very least, our analysis demonstrates that some parts of the holographic dictionary for non-relativistic gauge/gravity dualities are more intricate than in the well-understood AdS/CFT case. \section*{Acknowledgments} The authors would like to thank Tomas Andrade, Sheer El-Showk, Blaise Gouteraux, Monica Guica, Peter Koroteev, Leo Pando Zayas, Ioannis Papadimitriou, Simon Ross and Benson Way for fruitful discussions. CAK also thanks the Centro de Ciencias de Benasque Pedro Pascual for its hospitality during the July 2013 Gravity Workshop. This work was supported in part by the US Department of Energy under grant DE-SC0007859.
\section{Introduction} \label{sec:intro} Distributed computing systems are becoming increasingly dynamic. The static and relatively stable models of computation can no longer represent the plethora of recently established and rapidly emerging information and communication technologies. In recent years, we have seen a tremendous increase in the number of new mobile computing devices. Most of these devices are equipped with some sort of communication, sensing, and mobility capabilities. Even the Internet has become mobile. The design is now focused on complex collections of heterogeneous devices that should be robust, adaptive, and self-organizing, possibly moving around and serving requests that vary with time. Delay-tolerant networks are highly dynamic, infrastructure-less networks whose essential characteristic is a possible absence of end-to-end communication routes at any instant. Mobility can vary from being completely predictable to being completely unpredictable. Gossip-based communication mechanisms, e-mail exchanges, peer-to-peer networks, and many other contemporary communication networks all assume or induce some sort of highly dynamic communication network. The formal study of dynamic communication networks is hardly a new area of research. There is a huge amount of work in distributed computing that deals with causes of dynamicity such as failures and changes in the topology that are rather slow and usually eventually stabilize (like, for example, in self-stabilizing systems \cite{Do00}). However, the low rate of topological changes that is usually assumed there is unsuitable for reasoning about truly dynamic networks. Even graph-theoretic techniques need to be revisited: the suitable graph model is now that of a \emph{dynamic graph} (a.k.a. \emph{temporal graph} or \emph{time-varying graph}) (see e.g. \cite{KKK00,Ko09,CFQS11}), in which each edge has an associated set of time-labels indicating availability times.
Even fundamental properties of classical graphs do not carry over to their temporal counterparts. See, for example, \cite{KKK00} for a violation of Menger's theorem and \cite{AKL08} for the unsuitability of the standard network diameter metric. In this work, we adopt as our dynamic network model the \emph{$1$-interval connectivity} model that was proposed in the seminal STOC paper of Kuhn \emph{et al.} \cite{KLO10}, building upon previous work of O'Dell and Wattenhofer \cite{OW05}. In this model, nodes proceed in \emph{synchronous rounds} and communicate by \emph{interchanging messages}. Message transmission is by \emph{broadcast}: in every round, each node issues a single message to be delivered to all its neighbors. In this model, the network may change arbitrarily from round to round, subject to the condition that in each round the network is connected. We only consider deterministic algorithms. We focus on networks in which nodes are initially identical and, unless necessary, do not have any information about the network. In any case, nodes do not know the size $n$ of the network. By \emph{identical} we mean that they do not have unique identities (ids) and execute identical programs. So, this is some sort of minimal reliable distributed system, like, for example, a collection of particularly cheap and bulk-produced wireless sensor nodes. Nodes may execute the same program because it is too costly to program them individually, and their lack of ids may be due to the fact that ids require customization beyond the capabilities of mass production \cite{AFFR06}. Our only assumption is the existence of a unique leader that introduces some symmetry breaking. To further break the symmetry introduced by broadcast message transmission, and in order to solve naming in dynamic networks, we allow the nodes to send a different message to each one of their neighbors.
\section{Related Work} \label{sec:rw} Distributed systems with worst-case dynamicity were first studied in \cite{OW05}. Their outstanding novelty was to assume a communication network that may change arbitrarily from time to time, subject to the condition that each instance of the network is connected. They studied asynchronous communication and allowed nodes to detect local neighborhood changes. They studied the \emph{flooding} and \emph{routing} problems in this setting and, among others, provided a uniform protocol for flooding that terminates in $O(Tn^2)$ rounds using $O(\log n)$ bit storage and message overhead, where $T$ is the maximum time it takes to transmit a message. Computation under worst-case dynamicity was further and extensively studied in a series of works by Kuhn \emph{et al.} in the synchronous case. In \cite{KLO10}, among others, \emph{counting} (in which nodes must determine the size of the network) and \emph{all-to-all token dissemination} (in which $n$ different pieces of information, called tokens, are handed out to the $n$ nodes of the network, each node being assigned one token, and all nodes must collect all $n$ tokens) were solved in $O(n^2)$ rounds using $O(\log n)$ bits per message. Several variants of \emph{coordinated consensus} in 1-interval connected networks were studied in \cite{KOM11}. Requiring continuous connectivity has been supported by the findings of \cite{CKLL09}, where a \emph{connectivity service} for mobile robot swarms was proposed that encapsulates an arbitrary motion planner and can refine any plan to preserve connectivity while ensuring progress. Some recent works \cite{Ha11,HK11} present information spreading algorithms in worst-case dynamic networks based on \emph{network coding}. An \emph{open} setting in which nodes constantly join and leave has very recently been considered in \cite{APRU12}. For an excellent introduction to distributed computation under worst-case dynamicity see \cite{KO11}.
Two very thorough surveys on dynamic networks are \cite{Sc02,CFQS11}. The question concerning which problems can be solved by a distributed system when all processors use the same algorithm and start from the same state has a long history, with its roots dating back to the seminal work of Angluin \cite{An80}, who investigated the problem of establishing a ``center''. She was the first to realize the connection with the theory of graph coverings, which would go on to provide, in particular through the work of Yamashita and Kameda \cite{YK96}, several characterizations of the problems that are solvable under certain topological constraints. Further investigation led to the classification of computable functions \cite{YK96,ASW88}. \cite{BV99} removed the, until then, standard assumption of knowing the network size $n$ and provided characterizations of the relations that can be computed with arbitrary knowledge. Other well-known studies on unknown networks have dealt with the problems of robot-exploration and map-drawing of an unknown graph \cite{AH00,DP90,PP98} and with information dissemination \cite{AGVP90}. Sakamoto \cite{Sa99} studied the ``usefulness'' of initial conditions for distributed algorithms (e.g. a leader or knowledge of $n$) on anonymous networks by presenting a transformation algorithm from one initial condition to another. Fraigniaud \emph{et al.} \cite{FPP00} assumed a unique leader in order to break symmetry and assign short labels as fast as possible. To circumvent the further symmetry introduced by broadcast message transmission, they also studied other natural message transmission models, such as sending only one message to a single neighbor. Recently, and independently of our work, Chalopin \emph{et al.} \cite{CMM12} have studied the problem of naming anonymous networks in the context of snapshot computation.
Finally, Aspnes \emph{et al.} \cite{AFFR06} studied the relative powers of reliable anonymous distributed systems with different communication mechanisms: anonymous broadcast, read-write registers, or read-write registers plus additional shared-memory objects. \section{Contribution} \label{sec:con} We begin, in Section \ref{sec:prel}, by formally describing our distributed models. In Section \ref{sec:prob}, we formally define the problems under consideration, that is, naming, counting, and some variations of these. Our study begins, in Section \ref{sec:static}, with static networks with broadcast. The reason for considering static networks is to arrive at some impossibility results that also carry over to dynamic networks, as a static network is a special case of a dynamic network. In particular, we prove that naming is impossible to solve under these assumptions even if a unique leader exists and even if all nodes know $n$. Then we prove that, without a leader, counting is also impossible to solve; naturally, in the sequel, we assume the existence of a unique leader. We provide an algorithm based on the \emph{eccentricity} of the leader (greatest distance of a node from the leader) that solves counting in linear time (inspired by the findings in \cite{FPP00}). Then, in Section \ref{sec:dynbr}, we move on to dynamic networks with broadcast. We begin with a conjecture (and give evidence for it) essentially stating that dynamicity renders nontrivial computations impossible even in the presence of a unique leader.\footnote{By \emph{nontrivial computation} we mean the ability to decide any language $L$ on input assignments s.t. $L\neq \Sigma^*$ and $L\neq\emptyset$, where input symbols are chosen from some alphabet $\Sigma$. For example, deciding the existence of any symbol in the input is considered nontrivial.} In view of this, we allow the nodes some minimal initial knowledge, which is an upper bound on the maximum degree that any instance will ever have.
This could, for example, be some natural constraint on the capacity of the network. We provide a protocol that exploits this information to compute an upper bound on the size of the network. However, w.r.t. naming, the strong impossibility from Section \ref{sec:static} still persists (after all, knowledge of $n$ does not help in labeling the nodes). To circumvent this, in Section \ref{sec:seltr}, we relax our message transmission model to \emph{one-to-each}, which allows each node to send a different message to each one of its neighbors. This is an alternative communication model that has been considered in several important works, like \cite{Ha11}, however in different contexts than ours. This further symmetry breaking, though minimal, allows us, by exploiting a leader, to uniquely label the nodes. By this, we establish that this model is equivalent to a full-knowledge model in which unique names exist and the size of the network is known. To arrive at this result, we provide four distinct naming protocols, each with its own incremental value. The first presents how to assign ids in a fair context in which the leader will eventually meet every other node. The second improves on the first by allowing all nodes to assign ids in a context where no one is guaranteed to meet everybody else, but where connectivity guarantees progress. Both of these are correct stabilizing solutions that do not guarantee termination. Then we provide a third protocol that builds upon the first two and manages to assign unique ids in 1-interval connected graphs while terminating in linear time. Since its drawback is that messages may be $\Omega(n^2)$ bits long, we refine it to a more involved fourth protocol that reduces the bits per message to $\Theta(\log n)$ at the cost of only a small increase in termination time.
\section{Preliminaries} \label{sec:prel} \subsection{The models} \label{subsec:mod} A \emph{dynamic network} is modeled by a \emph{dynamic graph} $G=(V,E)$, where $V$ is a set of $n$ nodes (or processors) and $E:\bbbn\rightarrow \mathcal{P}(E^\prime)$, where $E^\prime=\{\{u,v\}:u,v\in V\}$, (wherever we use $\bbbn$ we mean $\bbbn_{\geq 1}$) is a function mapping a round number $r\in\bbbn$ to a set $E(r)$ of bidirectional links drawn from $E^\prime$. Intuitively, a dynamic graph $G$ is an infinite sequence $G(1),G(2),\ldots$ of \emph{instantaneous graphs}, whose edge sets are subsets of $E^\prime$ chosen by a \emph{worst-case adversary}. A \emph{static network} is just a special case of a dynamic network in which $E(i+1)=E(i)$ for all $i\in\bbbn$. The set $V$ is assumed throughout this work to be \emph{static}, that is, it remains the same throughout the execution. A dynamic graph/network $G=(V,E)$ is said to be \emph{$1$-interval connected} if, for all $r\in\bbbn$, the static graph $G(r)$ is connected \cite{KLO10}. Note that this allows the network to change arbitrarily from round to round, always subject to the condition that it remains connected. In this work, we focus on $1$-interval connected dynamic networks, which also implies that we deal with connected networks in the static-network case. Nodes in $V$ are \emph{anonymous}, that is, they do not initially have any ids and they do not know the topology or the size of the network, apart from some minimal knowledge when necessary (i.e. we say that \emph{the network is unknown}). However, nodes have unlimited local storage. In several cases, and in order to break symmetry, we may assume a unique \emph{leader node} (or \emph{source}) $l$. If this is the case, then we assume that $l$ starts from a unique initial state $l_0$ (e.g. 0) while all other nodes start from the same initial state $q_0$ (e.g. $\perp$). All nodes but the leader execute identical programs.
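As a concrete illustration of the model (a minimal Python sketch; the function and variable names are ours and not part of the formal model), a dynamic graph can be represented as a function from round numbers to edge sets, and $1$-interval connectivity can be checked by testing each instantaneous graph for connectivity:

```python
def is_one_interval_connected(n, E, rounds):
    """Check 1-interval connectivity: every instantaneous graph G(r),
    r = 1, ..., rounds, must be connected on the static node set
    {0, ..., n-1}.  E is a function mapping a round to an edge set."""
    for r in range(1, rounds + 1):
        # union-find connectivity test for the instance G(r)
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for u, v in E(r):
            parent[find(u)] = find(v)
        if len({find(v) for v in range(n)}) != 1:
            return False
    return True

# A worst-case adversary may rewire arbitrarily each round as long as
# every instance stays connected: here, a spanning line whose node
# order rotates from round to round (a hypothetical example).
n = 5
def E(r):
    order = [(v + r) % n for v in range(n)]
    return {tuple(sorted((order[i], order[i + 1]))) for i in range(n - 1)}

assert is_one_interval_connected(n, E, rounds=10)
```

Note that the adversary above is only one of infinitely many admissible ones; the definition quantifies over all edge functions whose every instance is connected.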
Communication is \emph{synchronous message passing} \cite{Ly96,AW04}, meaning that it is executed in discrete rounds controlled by a global clock that is available to the nodes and that nodes communicate by sending and receiving messages. Thus all nodes have access to the current round number via a local variable that we usually denote by $r$. We consider two different models of message transmission. One is \emph{anonymous broadcast}, in which, in every round $r$, each node $u$ generates a single message $m_u(r)$ to be delivered to all its current neighbors in $N_u(r)=\{v:\{u,v\}\in E(r)\}$. The other is \emph{one-to-each}, in which a different message $m_{(u,i)}(r)$, $1\leq i\leq d_u(r)$, where $d_u(r)\mathrel{\mathop:}=|N_u(r)|$ is the degree of $u$ in round $r$, may be generated for each neighbor $v_i$. In every round, the adversary first chooses the edges for the round; for this choice it can see the internal states of the nodes at the beginning of the round. In the one-to-each message transmission model we additionally assume that the adversary also reveals to each node $u$ a set of locally unique edge-labels $1,2,\ldots,d_u(r)$, one for each of the edges currently incident to it. Note that these labels can be reselected arbitrarily in each round, so that a node cannot infer what the internal state of a neighbor is based solely on the corresponding local edge-name. Then each node transitions to a new state based on its internal state (containing the messages received in the previous round) and generates its messages for the current round: in anonymous broadcast a single message is generated, and in one-to-each a different message is generated for each neighbor of a node. Note that, in both models, a node does not have any information about the internal state of its neighbors when generating its messages. Deterministic algorithms generate messages based only on the current internal state.
This implies that the adversary can infer the messages that will be generated in the current round before choosing the edges. Messages are then delivered to the corresponding neighbors. In one-to-each, we assume that each message $m_i$ received by some node $u$ is accompanied with $u$'s local label $i$ of the corresponding edge, so that a node can associate a message sent through edge $i$ with a message received from edge $i$. These messages will be processed by the nodes in the subsequent round so we typically begin rounds with a ``receive'' command referring to the messages received in the previous round. Then the next round begins. \subsection{Causal Influence} Probably the most important notion associated with a dynamic graph is the \emph{causal influence}, which formalizes the notion of one node ``influencing'' another through a chain of messages originating at the former node and ending at the latter (possibly going through other nodes in between). We use $(u,r)\rightsquigarrow (v, r^\prime)$ to denote the fact that node $u$'s state in round $r$ ($r$-state of $u$) influences node $v$'s state in round $r^\prime$. Formally: \begin{definition} [\cite{La78}] Given a dynamic graph $G=(V,E)$ we define an order $\rightarrow\subseteq (V\times\bbbn_{\geq 0})^2$, where $(u,r)\rightarrow (v,r+1)$ iff $u=v$ or $\{u,v\}\in E(r+1)$. The \emph{causal order} $\rightsquigarrow\subseteq (V\times\bbbn_{\geq 0})^2$ is defined to be the reflexive and transitive closure of $\rightarrow$. \end{definition} A very important aspect of 1-interval connectivity, that will be invoked in all our proof arguments in dynamic networks, is that it guarantees that the state of a node causally influences the state of another uninfluenced node in every round (if one exists). To get an intuitive feeling of this fact, consider a partitioning of the set of nodes $V$ to a subset $V_1$ of nodes that know the $r$-state of some node $u$ and to a subset $V_2=V\backslash V_1$ of nodes that do not know it. 
Connectivity asserts that there is always an edge in the cut between $V_1$ and $V_2$, consequently, if nodes that know the $r$-state of $u$ broadcast it in every round, then in every round at least one node moves from $V_2$ to $V_1$. This is formally captured by the following lemma from \cite{KLO10}. \begin{lemma} [\cite{KLO10}] \label{lem:inf} For any node $u\in V$ and $r\geq 0$ we have \begin{enumerate} \item $|\{v\in V : (u,0)\rightsquigarrow (v,r)\}|\geq\min\{r + 1, n\}$, \item $|\{v\in V : (v,0)\rightsquigarrow (u,r)\}|\geq\min\{r + 1, n\}$. \end{enumerate} \end{lemma} \section{Problem Definitions} \label{sec:prob} \noindent\textbf{$k$-labeling}. An algorithm is said to solve the $k$-labeling problem if whenever it is executed on a network comprising $n$ nodes each node $u$ eventually terminates and outputs a \emph{label} (or \emph{name} or \emph{id}) $id_u$ so that $|\{id_u: u\in V\}|\geq k$. \vspace{0.5cm} \noindent\textbf{Naming}. The naming problem is a special case of the $k$-labeling problem in which it must additionally hold that $k=n$. This, in turn, implies that $id_u\neq id_v$ for all distinct $u,v\in V$ (so, unique labels are required for the nodes). \vspace{0.5cm} \noindent\textbf{Minimal (Consecutive) Naming}. It is a special case of naming in which it must additionally hold that the $n$ nodes output the labels $\{0,1,\ldots,n-1\}$. \vspace{0.5cm} \noindent\textbf{Counting Upper Bound}. Nodes must determine an upper bound $k$ on the network size $n$. \vspace{0.5cm} \noindent\textbf{Counting}. A special case of counting upper bound in which it must hold that $k=n$. \section{Static networks with broadcast} \label{sec:static} We here assume that the network is described by a static graph $G=(V,E)$, where $E\subseteq\{\{u,v\}:u,v\in V\}$. Moreover, the message transmission model is broadcast, that is, in every round, each node $u$ generates a single message to be delivered to all its neighbors. 
Note that any impossibility result established for static networks is also valid for dynamic networks as a static network is a special case of a dynamic network. First of all, note that if all nodes start from the same initial state then, if we restrict ourselves to deterministic algorithms, naming is impossible to solve in general static networks, even if nodes know $n$. The reason is that in the worst-case they may be arranged in a ring (in which each node has precisely 2 neighbors) and it is a well-known fact \cite{An80,Ly96,AW04} that, in this case, in every round $r$, all nodes are in identical states. We show now that impossibility persists even if we allow a unique leader and even if nodes have complete knowledge of the network. \begin{theorem} \label{the:namimp} Naming is impossible to solve by deterministic algorithms in general anonymous (static) networks with broadcast even in the presence of a leader and even if nodes have complete knowledge of the network. \end{theorem} \begin{proof} Consider a star graph with the leader in the center (see Appendix \ref{app:star}). \qed \end{proof} An obvious generalization is that, under the same assumptions as in the statement of the theorem, it is impossible to solve $k$-labeling for any $k\geq 3$. In Appendix \ref{app:deg}, we also provide some thoughts on a degree-based labeling. We now turn our attention to the simpler counting problem. First we establish the necessity of assuming a unique leader. \begin{theorem} \label{the:impcoun} Without a leader, counting is impossible to solve by deterministic algorithms in general anonymous networks with broadcast. \end{theorem} \begin{proof} If some algorithm counts in $k$ rounds the $n$ nodes of a static ring, then it fails on a ring of $k+1$ nodes (see Appendix \ref{app:ring}). \qed \end{proof} In view of Theorem \ref{the:impcoun}, we assume again a unique leader in order to solve counting. 
Recall that the \emph{eccentricity} of a node $u$ is defined as the greatest geodesic distance between $u$ and $v$, over all $v\in V\backslash\{u\}$, where ``distance'' denotes the length of a shortest path. We first describe a protocol \textit{Leader\_Eccentricity} (inspired by the $Wake\& Label$ set of algorithms of \cite{FPP00}) that assigns to every node a label equal to its distance from the leader, and then we exploit this to solve counting. We assume that all nodes have access to the current round number via a variable $r$. \noindent\textbf{Protocol \textit{Leader\_Eccentricity}.} The leader begins with $label\leftarrow 0$ and $max\_asgned\leftarrow 0$ and all other nodes with $label\leftarrow\perp$. In the first round, the leader broadcasts an $assign$ $(1)$ message. Upon reception of an $assign$ $(i)$ message, a node that has $label=\perp$ sets $label\leftarrow i$ and broadcasts to its neighbors an $assign$ $(i+1)$ message and an $ack$ $(i)$ message. Upon reception of an $ack$ $(i)$ message, a node with $label\neq\perp$ and $label<i$ broadcasts it. Upon reception of an $ack$ $(i)$ message, the leader sets $max\_asgned\leftarrow i$ and, if $r > 2\cdot (max\_asgned+1)$, then it broadcasts a $halt$ message, outputs its label, and halts. Upon reception of a $halt$ message, a node broadcasts $halt$, outputs its label, and halts. \begin{theorem} In $Leader\_Eccentricity$ nodes output $\epsilon+1$ distinct labels, where $\epsilon$ is the eccentricity of the leader. In particular, every node outputs its distance from the leader. \end{theorem} \begin{proof} At time $2$, nodes at distance $1$ from the leader receive $assign$ $(1)$ and set their label to $1$. By induction on the distance, nodes at distance $i$ get label $i$ at round $i+1$. In the same round, they send an ack that must arrive at the leader at round $2i+1$. If no such ack arrives, then there is no node at distance $i$.
\qed \end{proof} We now use $Leader\_Eccentricity$ to solve counting in anonymous unknown static networks with a leader. We additionally assume that at the end of the $Leader\_Eccentricity$ process each node $u$ knows the number of neighbors $up(u)=|\{\{v,u\}\in E: label(v)=label(u)-1\}|$ it has to its upper level (it can store this during the $Leader\_Eccentricity$ process by counting the number of $assign$ messages that arrived at it from its upper-level neighbors). Moreover, we assume that all nodes know the leader's eccentricity $\epsilon$ (just have the leader include $max\_asgned$ in its $halt$ message). Finally, let, for simplicity, the first round just after the completion of the above process be round $r=1$. For this, we just need all nodes to end the $Leader\_Eccentricity$ process concurrently. This is done by having a node with label $i$ that receives or creates (the latter holds for the leader) a $halt$ message in round $r$ halt in round $(r+max\_asgned-i)$. Then the nodes just reset their round counters. \noindent\textbf{Protocol \textit{Anonymous\_Counting}.} Nodes first execute the modified $Leader\_Eccentricity$. When $\epsilon-r+1=label(u)$, a non-leader node $u$ receives a possibly empty (in case of no lower-level neighbors) set of $partial\_count_i$ $(rval_i)$ messages and broadcasts a $partial\_count$ $((1+\sum_i rval_i)/up(u))$ message. When $r=\epsilon+1$, the leader receives a set of $partial\_count_i$ $(rval_i)$ messages, sets $count\leftarrow 1+\sum_i rval_i$, broadcasts a $halt$ $(count)$ message, outputs $count$, and halts. When a non-leader $u$ receives a $halt$ $(count)$ message, it outputs $count$ and halts. For a given round $r$, we denote by $rval_i(u)$ the $i$th message received by node $u$. \begin{theorem} $Anonymous\_Counting$ solves the counting problem in anonymous static networks with broadcast under the assumption of a unique leader. All nodes terminate in $O(n)$ rounds and use messages of size $O(\log n)$.
\end{theorem} \begin{proof} By induction on the round number $r$, in the beginning of round $r\geq 2$, it holds that $\sum_{u:label(u)=\epsilon-r+1} \left ( 1+\sum_{i} rval_i(u)\right )=|\{u:label(u)\geq \epsilon -r+1\}|$. Clearly, in round $\epsilon+1$ it holds that $count=1+\sum_i rval_i(leader)=|\{u:label(u)\geq 0\}|=n$. \qed \end{proof} \section{Dynamic Networks with Broadcast} \label{sec:dynbr} We now turn our attention to the more general case of 1-interval connected dynamic networks with broadcast. We begin with a conjecture stating that dynamicity renders nontrivial computation impossible (evidence for this conjecture can be found in Appendix \ref{app:conj}; see also \cite{OW05} for a similar conjecture in a quite different setting). Then we naturally strengthen the model to allow some computation. \begin{conjecture} \label{conj:pred} It is impossible to compute (even with a leader) the predicate $N_a\geq 1$, that is ``there exists an $a$ in the input'', in general anonymous unknown dynamic networks with broadcast. \end{conjecture} In view of Theorem \ref{the:namimp}, which establishes that we cannot name the nodes of a static, and thus also of a dynamic, network if broadcast communication is assumed, and of the above conjecture, implying that in dynamic networks we cannot count even with a leader\footnote{This is implied because, if we could count, we could have a node wait at most $n-1$ rounds until it hears of an $a$ (provided that all nodes that have heard of an $a$ forward it) and, if it does not, reject.}, we start strengthening our initial model. Let us assume that there is a unique leader $l$ that knows an upper bound $d$ on the maximum degree ever to appear in the dynamic network, that is, $d\geq\max_{u\in V,r\in\bbbn}\{d_u(r)\}$. We keep the broadcast message transmission. Note first that the impossibility of naming persists. However, we show that obtaining an upper bound on the size of the network now becomes possible, though exponential in the worst case.
\noindent\textbf{Protocol \textit{Degree\_Counting}.} The leader stores in $d$ the maximum degree that will ever appear and begins with $label\leftarrow 0$, $count\leftarrow 1$, $latest\_event\leftarrow 0$, $max\_label\leftarrow 0$, and $r\leftarrow 0$, while all other nodes begin with $label\leftarrow\perp$, $count\leftarrow 0$, and $r\leftarrow 0$. In the beginning of each round each node increments by one its round counter $r$. The leader in each round $r$ broadcasts $assign$ $(r)$. Upon reception of an $assign$ $(r\_label)$ message, a node with $label=\perp$ sets $label\leftarrow r\_label$ and from then on in each round $r$ broadcasts $assign$ $(r)$ and $my\_label$ $(label)$. A node with $label=\perp$ that did not receive an $assign$ message sends an $unassigned$ $(r)$ message. All nodes continuously broadcast the maximum $my\_label$ and $unassigned$ messages that they have received so far. Upon reception of an $unassigned$ $(i)$ message, the leader, if $i>latest\_event$, sets $count\leftarrow 1$ and, for $k=1,\ldots,i$, $count\leftarrow count+d\cdot count$, $max\_label\leftarrow i$, and $latest\_event\leftarrow r$; upon reception of a $my\_label$ $(j)$ message, if $j>max\_label$, it sets $count\leftarrow 1$ and, for $k=1,\ldots,j$, $count\leftarrow count+d\cdot count$, $latest\_event\leftarrow r$, and $max\_label\leftarrow j$ (if it receives both $i$ and $j$, it does this for $\max\{i,j\}$). When it holds that $r> count+latest\_event-1$ (which must eventually occur), the leader broadcasts a $halt$ $(count)$ message for $count$ rounds and then outputs $count$ and halts. Each node that receives a $halt$ $(r\_count)$ message sets $count\leftarrow r\_count$, broadcasts a $halt$ $(count)$ message for $count$ rounds, and then outputs $count$ and halts. \begin{theorem} $Degree\_Counting$ solves the counting upper bound problem in anonymous dynamic networks with broadcast under the assumption of a unique leader. The obtained upper bound is $O(d^n)$ (in the worst case).
\end{theorem} \begin{proof} In the first round, the leader assigns the label $1$ to its neighbors and obtains an $unassigned$ $(1)$ message from each one of them. So, it sets $count\leftarrow (d+1)$ (in fact, note that in the first step it can simply set $count\leftarrow d_u(1)+1$, but this is minor), $latest\_event\leftarrow 1$, and $max\_label\leftarrow 1$. Now, if there are further nodes, at most by round $count+latest\_event-1$ it must have received an $unassigned$ $(i)$ message with $i>latest\_event$ or a $my\_label$ $(j)$ with $j>max\_label$. Note that the reception of an $unassigned$ $(i)$ message implies that at least $i+1$ distinct labels have been assigned because, as long as there are unlabeled nodes, one new label is assigned in each round to at least one node (this is implied by Lemma \ref{lem:inf} and the fact that all nodes with labels constantly assign new labels). Initially, one node (the leader) assigned to at most $d$ nodes label $1$. Then the $d+1$ labeled nodes assigned to at most $(d+1)d$ unlabeled nodes the label $2$, totalling $(d+1)+(d+1)d$, and so on. In the worst case, each label in $\{0,1,\ldots,n-1\}$ is assigned to precisely one node (e.g., consider a static line with the leader at one endpoint). In this case, the count is $O(d^n)$. \qed \end{proof} We point out that if nodes have access to more drastic initial knowledge such as an upper bound $e$ on the \emph{maximum expansion}, defined as $\max_{u,r,r^\prime}\{|\mathrm{future}_{u,r}(r^\prime +1)|-|\mathrm{future}_{u,r}(r^\prime)|\}$ (maximum number of concurrent new influences ever occurring), where $\mathrm{future}_{(u,r)}(r^\prime)\mathrel{\mathop:}= \{v\in V : (u,r)\rightsquigarrow (v,r^{\prime})\}$, for $r\leq r^\prime$, then essentially the same protocol as above provides an $O(n\cdot e)$ upper bound.
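To make the growth of the leader's estimate concrete, note that the update rule $count\leftarrow count+d\cdot count$, applied once per assigned label, multiplies $count$ by $(d+1)$ each time; this closed form is exactly where the $O(d^n)$ worst-case bound comes from. A minimal Python sketch (the function name is ours):

```python
def degree_count_bound(d, max_label):
    """Upper bound maintained by the leader in Degree_Counting:
    starting from count = 1, each of the max_label label-assignment
    waves can multiply the population by at most (d + 1)."""
    count = 1
    for _ in range(max_label):
        count = count + d * count   # count <- (d + 1) * count
    return count

# closed form: (d + 1) ** max_label
assert degree_count_bound(3, 5) == 4 ** 5
# worst case (static line, labels 0, ..., n-1): (d + 1)^(n-1) = O(d^n)
assert degree_count_bound(2, 9) == 3 ** 9
```

On a static line with $n$ nodes, $max\_label$ reaches $n-1$, so the reported bound is $(d+1)^{n-1}$ even though the true size is only $n$.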
\section{Dynamic Networks with One-to-Each} \label{sec:seltr} The result of Theorem \ref{the:namimp}, in the light of (a) Conjecture \ref{conj:pred}, and (b) the assumption of a broadcast message transmission model, clearly indicates that nontrivial computations in anonymous unknown dynamic networks are impossible even under the assumption of a unique leader. We now relax these assumptions so that we can state a correct naming protocol. We start by relaxing the assumption of a broadcast message transmission medium by offering to nodes access to a \emph{one-to-each message transmission mechanism}. We also assume a unique leader; without a leader, even under a one-to-each model, impossibility still persists, as any pair of nodes that form a static ring will not be able to break symmetry. \subsection*{1st Version - Protocol $Fair$} We now present protocol $Fair$, in which the unique leader assigns distinct labels to each node of the network. The labels assigned are tuples $(r,h,i)$, where $r$ is the round during which the label was assigned, $h$ is the label of the leader node and $i$ is a unique number assigned by the leader. The labels can be uniquely ordered first by $r$, then by $h$ and finally by $i$ (in ascending order). Each node maintains the following local variables: $clock$, for counting the rounds of execution of the protocol (implemented due to synchronous communication, see Sec. \ref{subsec:mod}), $label$, for storing the label assigned by the leader, $state$, for storing the local state that can be set to $\{anonymous, named, leader\}$, and $counter$, for storing the number of labels generated. All nodes are initialized to $clock\leftarrow 0$, $id\leftarrow (0,\perp,\perp)$, $state\leftarrow anonymous$, and $counter\leftarrow 0$, except for the leader, which is initialized to $clock\leftarrow 0$, $id\leftarrow (0,1,1)$, $state\leftarrow leader$, and $counter\leftarrow 1$.
Each turn, the leader $u$ consults the one-to-each transmission mechanism and identifies a set of locally unique edge-labels $1,2,\ldots,d(u)$, one for each of the edges incident to it.\footnote{Recall from Section \ref{subsec:mod} that these edge-labels can be reselected arbitrarily in each round (even if the neighbors remain the same) by the adversary, so that a node cannot infer what the internal state of a neighbor is based solely on the corresponding local edge-name.} The leader iterates over the edge-label set and transmits to each neighboring node a different message $m_i$, $1 \le i \le d(u)$, that contains the unique label $(clock, label, counter + i)$. When the transmission is complete, it increases the variable $counter$ by $d(u)$. All the other nodes of the network do not transmit any messages (or transmit a null message if message transmission is compulsory). All nodes under $state=anonymous$, upon receiving a (non-null) message, set the local $label$ to the contents of the message and change $state$ to $named$. All the other nodes of the network simply ignore all the messages received. At the end of the turn, all nodes do $clock++$ (where `++' is interpreted as ``increment by one''). Recall that a naming assignment is correct if \emph{all nodes} are assigned \emph{unique} labels. It is clear that $Fair$ is a non-terminating correct protocol, given the following \emph{fairness assumption}: the leader node at some point has become directly connected with each other node of the network (i.e., eventually meets all nodes). \begin{lemma} With one-to-each transmission, under the fairness assumption, and in the presence of a unique leader, protocol $Fair$ eventually computes a unique assignment for all the nodes in any anonymous unknown dynamic network. \end{lemma} \subsection*{2nd Version - Protocol $Delegate$} We now proceed by presenting a stronger protocol, $Delegate$ (based on $Fair$), that is correct even without the fairness assumption.
To achieve correctness, the leader node delegates the role of assignment of labels to all the nodes that it encounters. Thus, without loss of generality, even if the leader does not encounter all other nodes of the network, due to the \emph{connectivity property}, all nodes will eventually hear from the leader. Therefore, all nodes will receive a unique label, either from the leader or from another labeled node. The uniqueness among the labels generated is guaranteed since each label can be traced back to the node that issued it using the $h$ parameter. In $Delegate$ the nodes maintain the same variables as in $Fair$. Each turn, the leader performs the same actions as in $Fair$. Also similarly to $Fair$, each node that is in $state=anonymous$ does not transmit any message (or transmits a null message if message transmission is compulsory). Each node $u$ that is in $state=named$ performs similar actions as the leader node and transmits on each edge with label $i$ a message containing the unique label $(clock_u, label_u, counter_u + i)$ and then increases the variable $counter_u$ by $d(u)$. All nodes under $state=anonymous$, upon receiving one or more (non-null) messages that contain a label, select the message that contains the lowest label (i.e., the one with the lowest $h$ parameter), set the local $label$ to the contents of the message, and change $state$ to $named$. At the end of the turn, all nodes do $clock++$. \begin{lemma} With one-to-each transmission, and in the presence of a unique leader, protocol $Delegate$ correctly computes a unique assignment for all the nodes in any anonymous unknown dynamic network. \end{lemma} \subsection*{3rd Version - Protocol $Dynamic\_Naming$ (terminating)} The protocols $Fair$ and $Delegate$ compute a correct naming assignment (based on different assumptions) but do not terminate. Essentially, the nodes continue to transmit labels forever.
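Before turning to termination, the delegation mechanism can be illustrated with a toy Python simulation (ours, not part of the protocol specification: the adversary is simplified to a random spanning path per round, and the ``lowest label'' is taken by comparing whole label tuples lexicographically):

```python
import random

def delegate_naming(n, rounds, seed=0):
    """Toy simulation of protocol Delegate (one-to-each, unique leader).
    Named nodes hand the fresh label (clock, own_label, counter + i) to
    their i-th neighbor; anonymous nodes adopt the smallest label they
    receive.  Returns the list of assigned labels (None = anonymous)."""
    rng = random.Random(seed)
    label = [None] * n
    counter = [0] * n
    label[0] = (0, 1, 1)             # the unique leader
    for clock in range(1, rounds + 1):
        # adversary: a random connected instance (a spanning path)
        order = list(range(n))
        rng.shuffle(order)
        edges = [(order[i], order[i + 1]) for i in range(n - 1)]
        inbox = {v: [] for v in range(n)}
        for u in range(n):
            if label[u] is None:     # anonymous nodes stay silent
                continue
            nbrs = [b if a == u else a for a, b in edges if u in (a, b)]
            for i, v in enumerate(nbrs, start=1):
                inbox[v].append((clock, label[u], counter[u] + i))
            counter[u] += len(nbrs)
        for v in range(n):           # adopt the lowest label received
            if label[v] is None and inbox[v]:
                label[v] = min(inbox[v])
    return label

labels = delegate_naming(8, rounds=8)
assert None not in labels               # connectivity guarantees progress
assert len(set(labels)) == len(labels)  # the assigned labels are unique
```

Since every instance is connected, in each round at least one anonymous node is adjacent to a named node, so at most $n-1$ rounds suffice for all nodes to obtain distinct labels; uniqueness holds because each label is sent over exactly one edge and carries its issuer's label.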
We now present protocol $Dynamic\_Naming$ (based on $Delegate, Fair$) that manages to terminate. $Dynamic\_Naming$ is an $O(n)$-time protocol that assigns unique ids to the nodes and informs them of $n$. As usual, there is a unique leader $l$ with id $0$ while all other nodes have id $\perp$. The idea here is as follows. Similarly to $Delegate$, all nodes that have obtained an id assign ids and these ids are guaranteed to be unique. In addition to $Delegate$, nodes that have obtained an id acknowledge it to the leader. Thus, all nodes send their ids and all nodes continuously forward the received ids so that they eventually arrive at the leader (simple flooding mechanism). So, at some round $r$, the leader knows a set of assigned ids $K(r)$. We now describe the termination criterion. If $|K(r)|\neq |V|$ then in at most $|K(r)|$ additional rounds the leader must hear (be causally influenced) from a node outside $K(r)$ (to see why, see Lemma \ref{lem:inf}). Such a node either has an id that the leader first hears of, or has no id yet. In the first case, the leader updates $K(r)$ and in the second waits until it hears of a new id (which is guaranteed to appear in the future). On the other hand, if $|K(r)|=|V|$ no new information will ever arrive at the leader in the future and the leader may terminate after the $|K(r)|$-round waiting period elapses. \noindent\textbf{Protocol \textit{Dynamic\_Naming}.} Initially, every node has three variables $count\leftarrow 0$, $acks\leftarrow\emptyset$, and $latest\_unassigned\leftarrow 0$ and the leader additionally has $latest\_new\leftarrow 0$, $time\_bound\leftarrow 1$, and $known\_ids\leftarrow\{0\}$. A node with $id\neq\perp$ for $1\leq i\leq k$ sends $assign$ $(id,count+i)$ message to its $i$th neighbor and sets $count\leftarrow count+k$. In the first round, the leader additionally sets $known\_ids\leftarrow\{0,(0,1),(0,2),\ldots,(0,k)\}$, $latest\_new\leftarrow 1$, and $time\_bound\leftarrow 1+|known\_ids|$. 
Upon receipt of $l$ $assign$ messages $(rid_j)$, a node with $id=\perp$ sets $id\leftarrow \min_j\{rid_j\}$ (in number of bits), $acks\leftarrow acks\cup id$, sends an $ack$ $(acks)$ message to all its $k$ current neighbors, for $1\leq i\leq k$ sends $assign$ $(id,count+i)$ message to its $i$th neighbor, and sets $count\leftarrow count+k$. Upon receipt of $l$ $ack$ messages $(acks_j)$, a nonleader sets $acks\leftarrow acks\cup (\bigcup_j acks_j)$ and sends $ack$ $(acks)$. A node with $id=\perp$ sends $unassigned$ $(current\_round)$. Upon receipt of $l\geq 0$ $unassigned$ messages $(val_j)$, a node with $id\notin\{0,\perp\}$ sets $latest\_unassigned\leftarrow\max\{latest\_unassigned,\max_j\{val_j\}\}$ and sends $unassigned$ $(latest\_unassigned)$. Upon receipt of $l$ $ack$ messages $(acks_j)$, the leader if $(\bigcup_j acks_j)\backslash known\_ids\neq\emptyset$ sets $known\_ids\leftarrow known\_ids\cup (\bigcup_j acks_j)$, $latest\_new\leftarrow current\_round$ and $time\_bound\leftarrow current\_round+|known\_ids|$ and upon receipt of $l$ $unassigned$ messages $(val_j)$, it sets $latest\_unassigned\leftarrow\max\{latest\_unassigned,\max_j\{val_j\}\}$. If, at some round $r$, it holds at the leader that $r>time\_bound$ and $latest\_unassigned<latest\_new$, the leader sends a $halt$ $(|known\_ids|)$ message for $|known\_ids|-1$ rounds and then outputs $id$ and halts. Any node that receives a $halt$ $(n)$ message, sends $halt$ $(n)$ for $n-2$ rounds and then outputs $id$ and halts. Denote by $S(r)=\{v\in V:(l,0)\rightsquigarrow (v,r)\}$ the set of nodes that have obtained an id at round $r$ and by $K(r)$ those nodes in $S(r)$ whose id is known by the leader at round $r$, that is $K(r)=\{u\in V: \exists r^\prime \text{ s.t. } u\in S(r^\prime) \text{ and } (u,r^\prime)\rightsquigarrow (l,r)\}$. \begin{theorem} $Dynamic\_Naming$ solves the naming problem in anonymous unknown dynamic networks under the assumptions of one-to-each message transmission and of a unique leader. 
All nodes terminate in $O(n)$ rounds and use messages of size $\Theta(n^2)$. \end{theorem} \begin{proof} Unique names are guaranteed as in $Delegate$. Termination is as follows. Clearly, if $V\backslash K(r)\neq\emptyset$, either $|K(r+|K(r)|)|\geq |K(r)|+1$ or $(u,r)\rightsquigarrow (l,r+|K(r)|)$ for some $u\in V\backslash S(r)$. The former is recognized by the leader by the arrival of a new id and the latter by the arrival of an $unassigned$ $(timestamp)$ message, where $timestamp\geq r$. On the other hand, if $K(r)=V$ then $|K(r+|K(r)|)|=|K(r)|$ and $\nexists u\in V\backslash S(r)$ s.t. $(u,r)\rightsquigarrow (l,r+|K(r)|)$ as $V\backslash S(r)=\emptyset$. Finally, note that connectivity implies that $|S(r+1)|\geq \min\{|S(r)|+1,n\}$ which in turn implies $O(n)$ rounds until unique ids are assigned. Then another $O(n)$ rounds are required until nodes terminate. \qed \end{proof} Clearly, by executing a simple $O(n)$-time process after $Dynamic\_Naming$ we can easily reassign minimal (consecutive) names to the nodes. The leader just floods a list of $(old\_id,new\_id)$ pairs, one for each node in the network. Though $Dynamic\_Naming$ is a correct and time-efficient terminating protocol for the naming problem, it still has an important drawback. The messages sent may be of size $\Omega(n^2)$. We now refine $Dynamic\_Naming$ to arrive at a more involved construction that reduces the message size to $\Theta(\log n)$ at the cost of a small increase in termination time. We call this 4th version of our naming protocols $Individual\_Conversations$. Due to space restrictions, we only give the main idea here. The full presentation can be found in Appendix \ref{app:indcon}. 
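The growth bound $|S(r+1)|\geq \min\{|S(r)|+1,n\}$ used in the proof above can be checked on a toy dynamic graph: since the instantaneous graph is connected in every round, at least one edge crosses the cut between named and unnamed nodes, so the named set gains at least one node per round. The spanning-tree graph generator below is an assumption made purely for illustration; any connected instantaneous graph would do.

```python
import random

def random_connected_adj(n, rng):
    # each round: a fresh random spanning tree, so the graph is connected
    order = list(range(n))
    rng.shuffle(order)
    adj = {u: set() for u in range(n)}
    for i in range(1, n):
        u, v = order[i], rng.choice(order[:i])
        adj[u].add(v)
        adj[v].add(u)
    return adj

rng = random.Random(7)
n = 30
named = {0}                 # S(0): only the leader holds an id
rounds = 0
while len(named) < n:
    adj = random_connected_adj(n, rng)
    named |= {v for u in named for v in adj[u]}   # ids cross the cut
    rounds += 1
    assert len(named) >= min(rounds + 1, n)       # |S(r)| >= min(|S(0)|+r, n)

assert rounds <= n - 1      # O(n) rounds until every node holds an id
```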
\noindent\textbf{Protocol \textit{Individual\_Conversations} [Main Idea].} To reduce the size of the messages, (i) the assigned names are now of the form $k\cdot d+id$, where $id$ is the id of the node, $d$ is the number of \emph{unique consecutive} ids that the leader knows so far, and $k\geq 1$ is a name counter; (ii) any time the leader wants to communicate with a remote node that has a unique id, it sends a message with the id of that node and a timestamp equal to the current round. The timestamp allows all nodes to prefer this message over previous ones, so that the gain is twofold: the message is delivered and no node ever issues a message containing more than one id. The remote node can then reply in the same way. For the assignment formula to work, nodes that obtain ids are not allowed to further assign ids until the leader freezes all named nodes and reassigns to them unique consecutive ids. During freezing, the leader is informed of any new assignments by the named nodes and terminates if all report that no further assignments were performed. \begin{theorem} $Individual\_Conversations$ solves the (minimal) naming problem in $O(n^3)$ rounds using messages of size $\Theta(\log n)$. \end{theorem} Finally, in Appendix \ref{app:hdy}, we discuss how a high-dynamicity assumption can help in breaking the impossibility of Conjecture \ref{conj:pred} and exploit the above algorithmic ideas to solve naming under broadcast communication.
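A small numeric check (our own illustration, with a hypothetical value of $d$) of why the assignment formula $k\cdot d+id$ produces unique names, and why a name can be decoded back to its issuer and its counter:

```python
# Names of the form k*d + id: if the d ids 0, ..., d-1 are unique and
# consecutive, then every generated name is unique, and
# (name % d, name // d) recovers (issuer id, counter k).
d = 5                                   # ids 0..4 known to the leader
names = {k * d + i for k in range(1, 40) for i in range(d)}
assert len(names) == 39 * d             # no collisions across issuers/counters
for name in names:
    issuer, k = name % d, name // d     # a name decodes to its issuer
    assert 1 <= k and 0 <= issuer < d
```

Uniqueness is exactly the uniqueness of the quotient/remainder pair in division by $d$, which is why the leader must first make the known ids consecutive before allowing new assignments.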
\section{Introduction} With a surge in the range of applications from economics, finance, environmental science, social science and epidemiology, there has been renewed interest in developing models for time series of counts. The majority of these models assume that the observations follow a Poisson distribution conditioned on an accompanying intensity process that drives the dynamics of the models, e.g., \cite{Davis03}, \cite{Fokianos}, \cite{Neumann}, \cite{Sarah} and \cite{WeakDepPoisAR}. According to whether the evolution of the intensity process depends on the observations or solely on an external process, \cite{Cox81} classified the models into observation-driven and parameter-driven. This paper focuses on the theory and inference for a particular class of observation-driven models. Many of the commonly used models, such as the Poisson integer-valued GARCH (INGARCH), are special cases of our model. For an INGARCH, the observations $\{Y_t\}$ given the intensity process $\{\lambda_t\}$ follow a Poisson distribution and $\lambda_t$ is a linear combination of its lagged values and lagged $Y_t$. The model is capable of capturing positive temporal correlation in the observations and it is relatively easy to fit via maximum likelihood. \cite{Ferland} showed the second moment stationarity through a sequence of approximating processes and \cite{Fokianos} established the consistency and asymptotic normality of the MLE by introducing a perturbed model. However, all the above results rely heavily on the Poisson assumption and the GARCH-like dynamics of $\lambda_t$. Later \cite{Neumann} relaxed the linear assumption to a general contracting evolution rule and proved the absolute regularity for this Poisson count process and \cite{WeakDepPoisAR} showed the existence of moments under similar conditions by utilizing the concept of weak dependence. 
In our study the conditional distribution of the observation $Y_t$ given the past is assumed to follow a one-parameter exponential family. The temporal dependence in the model is defined through recursions relating the conditional mean process $X_t$ with its lagged values and lagged observations. Theory from iterated random functions (IRF), see e.g., \cite{Diaconis} and \cite{Weibiao04}, is utilized to establish some key stability properties, such as existence of a stationary and mixing solution. This theory allows us to consider both linear and nonlinear dynamic models as well as inference questions. In particular, the asymptotic normality of the maximum likelihood estimates can be established. The nonlinear dynamic models are also investigated in a simulation study and both linear and nonlinear models are applied to two real datasets. The organization of the paper is as follows. Section 2 formulates the model and establishes stability properties. The maximum likelihood estimates of the parameters and the relevant asymptotic theory are derived in Section 3. Examples of both linear and nonlinear dynamic models are considered in Section 4. Numerical results, including a simulation study and two data applications are given in Section 5, where the models are applied to the number of transactions per minute of Ericsson stock and to the return times of extreme events of Goldman Sachs Group (GS) stock. Some diagnostic tools for assessing and comparing model performance are also given in Section 5. Appendix A reviews some standard properties of the one-parameter exponential family and the proofs of the key results in Sections 2-4 are deferred to Appendix B. 
\section{Model formulation and stability properties} \subsection{One-parameter exponential family} A random variable $Y$ is said to follow a distribution of the one-parameter exponential family if its probability density function with respect to some $\sigma$-finite measure $\mu$ is given by \begin{eqnarray} p(y|\eta)=\exp\{\eta y-A(\eta)\}h(y), ~~~ y\ge 0, \label{eq:expfamily} \end{eqnarray} where $\eta$ is the natural parameter, and $A(\eta)$ and $h(y)$ are known functions. If $B(\eta)=A'(\eta)$, then it is known that $\mbox{E} Y=B(\eta)$ and $\mbox{Var}(Y)=B'(\eta)$. The derivative of $A(\eta)$ generally exists for the exponential family, see e.g., \cite{TPE}. Since $B'(\eta)=\mbox{Var}(Y)>0$, $B(\eta)$ is strictly increasing, which establishes a one-to-one correspondence between the values of $\eta$ and $B(\eta)$. Moreover, because we assume throughout this paper that the support of $Y$ is non-negative, $B(\eta)=\mbox{E} Y>0$, which implies that $A(\eta)$ is strictly increasing. Other properties of this family of distributions are presented in Appendix A. Many familiar distributions belong to this family, including the Poisson, negative binomial, Bernoulli, and exponential distributions. If the shape parameter is fixed, then the gamma distribution is also a member of this family. While we restrict consideration to only the univariate case, extensions to the multi-parameter exponential family are a topic of future research. \subsection{Model formulation} Set $\mathcal{F}_0=\sigma\{\eta_1\}$, where $\eta_1$ is a natural parameter of (\ref{eq:expfamily}) and assumed fixed for the moment. 
Let $Y_1, Y_2, \ldots$ be observations from a model that is defined recursively in the following fashion, \begin{eqnarray} Y_t|\mathcal{F}_{t-1}\sim p(y|\eta_t),~~~X_t=g_{\theta}(X_{t-1}, Y_{t-1}), \label{eq:expmodel} \end{eqnarray} for all $t\ge 1$, where $p(y|\eta_t)$ is defined in (\ref{eq:expfamily}), $\mathcal{F}_t=\sigma\{\eta_1, Y_1,\ldots, Y_t\}$ and $X_t$ is the conditional mean process, i.e., $X_t=B(\eta_t)=\mbox{E}(Y_t|\mathcal{F}_{t-1})$. Here $g_{\theta}(x, y)$ is a non-negative bivariate function defined on $[0, \infty)\times [0,\infty)$ when $Y_t$ has a continuous conditional distribution or on $[0,\infty)\times \mathbb{N}_0$, where $\mathbb{N}_0=\{0, 1,\ldots\}$, when $Y_t$ only takes non-negative integers. Throughout, we assume that the function $g_{\theta}$ satisfies a contraction condition, i.e., for any $x,x'\ge 0$, and $y,y'\in [0,\infty)~\mbox{or}~\mathbb{N}_0$, \begin{eqnarray} |g_{\theta}(x,y)-g_{\theta}(x',y')|\le a|x-x'|+b|y-y'|, \label{ContractionFunction} \end{eqnarray} where $a$ and $b$ are non-negative constants with $a+b<1$. Note that (\ref{ContractionFunction}) implies \begin{eqnarray} g_{\theta}(x,y)\le g_{\theta}(0,0)+ax+by, ~~\mbox{for any}~~x,y\ge 0. \label{eq:BoundOfG} \end{eqnarray} We point out that model (\ref{eq:expmodel}) with the function $g_{\theta}$ satisfying (\ref{ContractionFunction}) includes the Poisson INGARCH model (see Example \ref{PoissonIngarchExample}) and the exponential autoregressive model (\ref{eq:PoisExpModel}) as special cases under some restrictions on the parameter space. The generalized linear autoregressive moving average model (GLARMA) (see \cite{Davis03}) also belongs to this class, although the contraction condition is not necessarily satisfied. Only under very simple model specifications have the stability properties of GLARMA been established and the relevant work is still ongoing. 
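As a sanity check of the contraction condition, the linear recursion $g_{\theta}(x,y)=\delta+\alpha x+\beta y$ (the GARCH-like case studied below) satisfies (\ref{ContractionFunction}) with $a=\alpha$ and $b=\beta$ by the triangle inequality; the parameter values in this sketch are arbitrary, chosen only so that $a+b<1$.

```python
import random

# |g(x,y) - g(x',y')| = |alpha*(x-x') + beta*(y-y')|
#                    <= alpha*|x-x'| + beta*|y-y'|   (triangle inequality)
delta, alpha, beta = 1.0, 0.3, 0.4      # alpha + beta < 1

def g(x, y):
    return delta + alpha * x + beta * y

rng = random.Random(0)
for _ in range(1000):
    x, xp = rng.uniform(0, 50), rng.uniform(0, 50)
    y, yp = rng.randrange(50), rng.randrange(50)  # integer-valued observations
    lhs = abs(g(x, y) - g(xp, yp))
    rhs = alpha * abs(x - xp) + beta * abs(y - yp)
    assert lhs <= rhs + 1e-12           # contraction condition holds
```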
The primary focus of this paper is on the conditional mean process $\{X_t\}$, which is easily seen to be a time-homogeneous Markov chain. Note that the observation process $\{Y_t\}$ is not a Markov chain itself. \subsection{Strict stationarity} The iterated random function approach (see e.g., \cite{Diaconis} and \cite{Weibiao04}) provides a useful tool when investigating the stability properties of Markov chains and turns out to be particularly instrumental in our research. In the definition of iterated random functions (IRF), the state space $(\mathcal{W}, \rho)$ is assumed to be a complete and separable metric space. Then a sequence of \emph{iterated random functions} $\{f_{\theta_t}\}$ is defined through \begin{eqnarray*} W_t=f_{\theta_t}(W_{t-1}),~~ t\in \mathbb{N}, \end{eqnarray*} where $\{\theta_t\}_{t\ge 1}$ take values in another measurable space $\Theta$ and are independently distributed with identical marginal distribution, and $W_0$ is independent of $\{\theta_t\}_{t\ge 1}$. In working with iterated random functions, \cite{Weibiao04} introduces the idea of geometric moment contraction (GMC), which is useful for deriving further properties of IRF. Our analysis also relies heavily on GMC. Suppose there exists a stationary solution to the Markov chain $\{W_t\}$, denoted by $\varpi$, let $W_0, W_0'\sim \varpi$ be independent of each other and of $\{\theta_t\}_{t\ge 1}$, and define $W_t(w)=f_{\theta_t}\circ f_{\theta_{t-1}}\circ\ldots\circ f_{\theta_1}(w)$. Then $\{W_t\}$ is said to be \emph{geometric moment contracting} if there exist an $\alpha>0$, a $C=C(\alpha)>0$ and an $r=r(\alpha)\in (0,1)$ such that, for all $n\in \mathbb{N}$, \begin{eqnarray*} \mbox{E}\{\rho^{\alpha}(W_n(W_0), W_n(W_0'))\}\le Cr^n. \end{eqnarray*} The conditional mean process $\{X_t\}$ specified in (\ref{eq:expmodel}) can be embedded into the framework of IRF and shown to be GMC. 
In this section and the next we use $g$ to represent the function $g_{\theta}$ in (\ref{eq:expmodel}) evaluated at the true parameter. For any $u\in (0, 1)$, the random function $f_{u}(x)$ is defined as \begin{eqnarray} f_{u}(x):=g\bigr(x, F^{-1}_{x}(u)\bigr), \label{eq:IRFexp} \end{eqnarray} where $F_x$ is the cumulative distribution function of $p(y|\eta)$ in (\ref{eq:expfamily}) with $x=B(\eta)$, and its inverse $F_x^{-1}(u):=\inf\{t\ge 0: F_x(t)\ge u\}$ for $u\in [0,1]$. Let $\{U_t\}$ be a sequence of independent and identically distributed (iid) uniform $(0,1)$ random variables, then the Markov chain $\{X_t\}$ defined in (\ref{eq:expmodel}) starting from $X_0=x$ can be represented as the so-called forward process $X_t(x)=(f_{U_t}\circ f_{U_{t-1}}\circ\ldots\circ f_{U_1})(x)$. The corresponding backward process is defined as $Z_t(x)=(f_{U_1}\circ f_{U_2}\circ\ldots\circ f_{U_t})(x)$, which has the same distribution as $X_t(x)$ for any $t$. \begin{prop} \label{modelgmc} Assume model (\ref{eq:expmodel}) and that the function $g$ satisfies the contraction condition (\ref{ContractionFunction}). Then \begin{enumerate} \item There exists a random variable $Z_{\infty}$ such that, for all $x\in S$, $Z_n(x)\rightarrow Z_{\infty}$ almost surely. The limit $Z_{\infty}$ does not depend on $x$ and has distribution $\pi$, which is the stationary distribution of $\{X_t\}$. \item The Markov chain $\{X_t, t\ge 1\}$ is geometric moment contracting with $\pi$ as its unique stationary distribution. In addition, $\mbox{E}_{\pi}X_1<\infty$. \item If $\{X_t, t\ge 1\}$ starts from $\pi$, i.e., $X_1\sim \pi$, then $\{Y_t, t\ge 1\}$ is a stationary time series. \end{enumerate} \end{prop} Proposition \ref{modelgmc} implies that starting from any state $x$, the limiting distribution of the Markov chain $X_n(x)$ exists and the $n$-step transition probability measure $P^n(x,\cdot)$ converges weakly to $\pi$, as $n\rightarrow\infty$. 
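A toy illustration of the backward process: for the Poisson case with the linear recursion $g(x,y)=\delta+\alpha x+\beta y$ (a hypothetical parameter choice with $a+b<1$; the quantile routine, starting states, and tolerance are our own choices), two backward iterations started from different states coalesce, as Proposition \ref{modelgmc} predicts.

```python
import math
import random

# Backward iterations Z_t(x) = f_{U_1} o ... o f_{U_t}(x) for the Poisson
# case: f_u(x) = g(x, F_x^{-1}(u)) with g(x, y) = delta + alpha*x + beta*y.
delta, alpha, beta = 1.0, 0.3, 0.3

def pois_quantile(lam, u):
    """F_lam^{-1}(u) for the Poisson distribution, by summing the pmf."""
    k, p = 0, math.exp(-lam)
    cdf = p
    while cdf < u and k < 10_000:       # cap guards against float round-off
        k += 1
        p *= lam / k
        cdf += p
    return k

def backward(x, us):
    """Apply f_{U_t} first and f_{U_1} last, as in Z_t(x)."""
    for u in reversed(us):
        x = delta + alpha * x + beta * pois_quantile(x, u)
    return x

rng = random.Random(3)
us = [rng.random() for _ in range(500)]
z1, z2 = backward(0.5, us), backward(25.0, us)  # two different starting states
assert abs(z1 - z2) < 1e-6   # Z_n(x) forgets x: convergence to Z_infinity
```

With $\alpha+\beta=0.6$ the gap between the two trajectories contracts roughly geometrically, so after a few hundred iterations they agree to machine precision, illustrating why the limit $Z_{\infty}$ does not depend on the starting state.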
\subsection{Ergodicity} In this section we further investigate the stability properties, including ergodicity and mixing for model (\ref{eq:expmodel}). Under the conditions of Proposition \ref{modelgmc}, the process $\{(X_t, Y_t)\}$ is strictly stationary, so we can extend it to be indexed by all the integers. The following proposition establishes ergodicity and absolute regularity when $Y_t$ is discrete. \begin{prop} \label{discreteergodicity} Assume model (\ref{eq:expmodel}) where the support of $Y_t$ is a subset of $\mathbb{N}_0=\{0,1,\ldots\}$, and that $g$ satisfies the contraction condition (\ref{ContractionFunction}). Then \begin{enumerate} \item There exists a measurable function $g_{\infty}:\mathbb{N}_0^{\infty}=\{(n_1, n_2, \ldots), n_i\in \mathbb{N}_0, i=1,2,\ldots\}\longrightarrow [0,\infty)$ such that $X_t=g_{\infty}(Y_{t-1}, Y_{t-2},\ldots)$ almost surely. \item The count process $\{Y_t\}$ is absolutely regular with coefficients satisfying \begin{eqnarray*} \beta(n)\le (a+b)^n/(1-(a+b)), \end{eqnarray*} and hence $\{(X_t, Y_t)\}$ is ergodic. \end{enumerate} \end{prop} When $Y_t$ has a continuous distribution, geometric ergodicity of $\{X_t\}$ can be established under stronger conditions on $g$. The proof of the result relies on classic Markov chain theory since $\{X_t\}$ is $\phi$-irreducible due to the continuity of the distribution in this situation. \begin{prop} \label{ContinuousErgodicity} Assume model (\ref{eq:expmodel}) where the support of $Y_t$ is $[0, \infty)$, and that the function $g$ satisfies the contraction condition (\ref{ContractionFunction}). Moreover if $g$ is increasing and continuous in $(x, y)$, then \begin{enumerate} \item There exists $g_{\infty}:[0,\infty)^{\infty}\rightarrow [0,\infty)$ such that $X_t=g_{\infty}(Y_{t-1}, Y_{t-2},\ldots)$ almost surely. \item The Markov chain $\{X_t, t\ge 1\}$ is geometrically ergodic provided that $a+b<1$, and hence $\{(X_t, Y_t)\}$ is stationary and ergodic. 
\end{enumerate} \end{prop} \section{Likelihood Inference} In this section, we consider maximum likelihood estimates of the parameters and study their asymptotic behavior, including consistency and asymptotic normality. Denote the $d$-dimensional parameter vector by $\theta\in \mathbb{R}^d$, i.e., $\theta=(\theta_1,\ldots, \theta_d)^T$, and the true parameter vector by $\theta_0=(\theta_1^0,\ldots,\theta_d^0)^T$. Then the likelihood function of model (\ref{eq:expmodel}) conditioned on $\eta_1$ and based on the observations $Y_1, \ldots, Y_n$ is given by \begin{eqnarray*} L(\theta|Y_1,\ldots,Y_n,\eta_1)=\displaystyle\prod_{t=1}^n \exp\{\eta_t(\theta) Y_t-A(\eta_t(\theta))\}h(Y_t), \end{eqnarray*} where $\eta_t(\theta)=B^{-1}(X_t(\theta))$ is updated through the iterations $X_t=g_{\theta}(X_{t-1}, Y_{t-1})$. The log-likelihood function, up to a constant independent of $\theta$, is given by \begin{eqnarray} l(\theta)=\displaystyle\sum_{t=1}^n l_t(\theta)=\sum_{t=1}^n \{\eta_t(\theta) Y_t-A(\eta_t(\theta))\},\label{eq:loglikeexp} \end{eqnarray} with score function \begin{eqnarray} S_n(\theta)=\frac{\partial l(\theta)}{\partial \theta}=\sum_{t=1}^n \{Y_t-B(\eta_t(\theta))\}\frac{\partial \eta_t(\theta)}{\partial \theta}. \label{eq:scoreexp} \end{eqnarray} The maximum likelihood estimator $\hat{\theta}_n$ is a solution to the equation $S_n(\theta)=0$. Let $P_{\theta_0}$ be the probability measure under the true parameter $\theta_0$ and unless otherwise indicated, $\mbox{E}[\cdot]$ is taken under $\theta_0$. Recall that $X_t=g_{\infty}^{\theta}(Y_{t-1}, Y_{t-2},\ldots)$ according to part (a) of Propositions \ref{discreteergodicity} and \ref{ContinuousErgodicity}. We will derive the asymptotic properties of the maximum likelihood estimator $\hat{\theta}_n$ based on a set of regularity conditions: \begin{enumerate} \item[(A0)] $\theta_0$ is an interior point in the compact parameter space $\Theta\subset\mathbb{R}^d$. 
\item[(A1)] For any $\theta\in \Theta$, $g^{\theta}_{\infty}\ge x_{\theta}^{\ast}\in \mathcal{R}(B)$, where $\mathcal{R}(B)$ is the range of $B(\eta)$. Moreover $x_{\theta}^{\ast}\ge x^{\ast}\in \mathcal{R}(B)$ for all $\theta$. \item[(A2)] For any $\mathbf{y}\in [0,\infty)^{\infty}$ or $\mathbb{N}_0^{\infty}$, the mapping $\theta\mapsto g_{\infty}^{\theta}(\mathbf{y})$ is continuous. \item[(A3)] $g(x,y)$ is increasing in $(x, y)$ if $Y_t$ given $\mathcal{F}_{t-1}$ has a continuous distribution. \item[(A4)] $\mbox{E}\{Y_1\sup_{\theta\in \Theta}B^{-1}(g_{\infty}^{\theta}(Y_0,Y_{-1},\ldots))\}<\infty$. \item[(A5)] If there exists a $t\ge 1$ such that $X_t(\theta)=X_t(\theta_0)$, $P_{\theta_0}$-a.s., then $\theta=\theta_0$. \item[(A6)] The mapping $\theta\mapsto g_{\infty}^{\theta}$ is twice continuously differentiable. \item[(A7)] $\mbox{E}\{B'(\eta_1(\theta_0))(\partial \eta_1(\theta)/\partial \theta_i)^2|_{\theta=\theta_0}\}<\infty$, for $i=1,\ldots,d$. \end{enumerate} Strong consistency of the estimates is derived according to the lemma below, which is adapted from Lemma 3.11 in \cite{Pfanzagl69}. \begin{lemma} \label{WaldConsistency} Assume that $\Theta\subset \mathbb{R}^d$ is a compact set, and that $(\Omega, \mathcal{F}, P)$ is a probability space. Let $\{f_{\theta}: \mathbb{R}^{\infty}\mapsto [-\infty,\infty], \theta\in \Theta\}$ be a family of Borel measurable functions such that: \begin{enumerate} \item $\theta\mapsto f_{\theta}(\mathbf{x})$ is upper-semicontinuous for all $\mathbf{\mathbf{x}}\in \mathbb{R}^{\infty}$. \item $\sup_{\theta\in C}f_{\theta}(\mathbf{x})$ is Borel measurable for any compact set $C\subset \Theta$. \item $\mbox{E}\{\sup_{\theta\in\Theta}f_{\theta}(X)\}<\infty$ for some random variable $X$ defined on $(\Omega, \mathcal{F}, P)$. \end{enumerate} Then \begin{enumerate} \item $\theta\mapsto \mbox{E}[f_{\theta}(X)]$ is upper-semicontinuous. 
\item If $\{X_t: \Omega\mapsto \mathbb{R}^{\infty}, t\in \mathbb{Z}\}$ is an ergodic stationary process defined on $(\Omega, \mathcal{F}, P)$, and for all $t$, $X_t$ has the same distribution as $X$, then \begin{eqnarray*} \limsup_{n\rightarrow\infty}\sup_{\theta\in C}\frac{1}{n}\sum_{i=1}^n f_{\theta}(X_i)\le \sup_{\theta\in C}\mbox{E}\{f_{\theta}(X_1)\},~~\mbox{a.s.-}P, \end{eqnarray*} for any compact set $C$. \end{enumerate} \end{lemma} \cite{Pfanzagl69} proved the result assuming the independent structure of $\{X_t\}$, but the same result proves to be true provided that the strong law of large numbers can be applied. By virtue of Lemma \ref{WaldConsistency}, we can derive the strong consistency of the estimates. \begin{thm} \label{Consistency} Assume model (\ref{eq:expmodel}) with the function $g$ satisfying the contraction condition (\ref{ContractionFunction}), and that assumptions (A0)-(A5) hold. Then the maximum likelihood estimator $\hat{\theta}_n$ is strongly consistent, that is, \begin{eqnarray*} \hat{\theta}_n\stackrel{a.s.}{\longrightarrow}\theta_0,~~ \mbox{as}~n\rightarrow\infty. \end{eqnarray*} \end{thm} The following theorem addresses the asymptotic distribution of the MLE and the idea of proof is similar to that in \cite{Davis03}. Unless otherwise indicated, $\eta_t$ and $\dot{\eta}_t$ are both evaluated at $\theta_0$, i.e., $\eta_t=\eta_t(\theta_0)$ and $\dot{\eta_t}=(\partial \eta_t/\partial \theta)|_{\theta=\theta_0}$. \begin{thm} \label{AsympNormal} Assume model (\ref{eq:expmodel}) with the function $g$ satisfying the contraction condition (\ref{ContractionFunction}), and that assumptions (A0)-(A7) hold. Then the maximum likelihood estimator $\hat{\theta}_n$ is asymptotically normal, i.e., \begin{eqnarray*} \sqrt{n}(\hat{\theta}_n-\theta_0)\stackrel{\mathcal{L}}{\longrightarrow}N(0, \Omega^{-1}),~~~ \mbox{as}~~n\rightarrow\infty, \end{eqnarray*} where $\Omega=\mbox{E}\{B'(\eta_t)\dot{\eta}_t\dot{\eta}_t^T\}$. 
\\ \end{thm} We remark that in practice, the population quantities in $\Omega$ can be replaced by their estimated counterparts. Examples of such substitution will be illustrated below in specific models. \section{Examples} \subsection{Linear dynamic models} The conditional mean process $\{X_t\}$ in these models has GARCH-like dynamics. Specifically they are described as \begin{eqnarray} Y_t|\mathcal{F}_{t-1}\sim p(y|\eta_t),~~~X_t=\delta+\alpha X_{t-1}+\beta Y_{t-1}, \label{eq:LinearModel} \end{eqnarray} where $X_t=B(\eta_t)=\mbox{E}(Y_t|\mathcal{F}_{t-1})$, and $\delta>0, \alpha, \beta\ge 0$ are parameters. Observe that model (\ref{eq:LinearModel}) is a special case of model (\ref{eq:expmodel}) by defining the function $g_{\theta}$ as \begin{eqnarray} g_{\theta}(x,y)=\delta+\alpha x+\beta y, \label{eq:LinearG} \end{eqnarray} with $\theta=(\delta,\alpha,\beta)^T$ and the contraction condition (\ref{ContractionFunction}) corresponds to $\alpha+\beta<1$. Note that by recursion we have, for all $t$, \begin{eqnarray} X_t(\theta)=\delta/(1-\alpha)+\beta\displaystyle\sum_{k=0}^{\infty}\alpha^k Y_{t-1-k}. \label{eq:InfinitePastRep} \end{eqnarray} It follows that $X_t(\theta)\ge x^{\ast}=\delta/(1-\alpha)$ since $Y_t$ only takes non-negative values. A direct application of Propositions \ref{modelgmc}, \ref{discreteergodicity} and \ref{ContinuousErgodicity} gives the stability properties of model (\ref{eq:LinearModel}). \begin{prop} \label{LinearStability} Assume model (\ref{eq:LinearModel}) with $\alpha+\beta<1$. Then the process $\{X_t, t\ge 1\}$ has a unique stationary distribution $\pi$, and $\{(X_t, Y_t), t\ge 1\}$ is ergodic if $X_1\sim \pi$. 
\end{prop} \medskip If $\theta_0=(\delta_0, \alpha_0, \beta_0)^T$ denotes the true parameter vector, then the log-likelihood function $l(\theta)$ and the score function $S_n(\theta)$ of model (\ref{eq:LinearModel}) are given by (\ref{eq:loglikeexp}) and (\ref{eq:scoreexp}) respectively, where $\partial \eta_t(\theta)/\partial \theta=(\partial \eta_t/\partial \delta, \partial \eta_t/\partial \alpha, \partial \eta_t/\partial \beta)^T$ is determined recursively by \begin{eqnarray} \frac{\partial \eta_t}{\partial \theta}=\begin{pmatrix} 1 \\ B(\eta_{t-1})\\ Y_{t-1} \end{pmatrix}/B'(\eta_t)+\alpha\frac{B'(\eta_{t-1})}{B'(\eta_t)}\frac{\partial \eta_{t-1}}{\partial \theta}. \end{eqnarray} The maximum likelihood estimator $\hat{\theta}_n$ is a solution of the equation $S_n(\theta)=0$. Furthermore, the Hessian matrix can be found by taking derivatives of the score function, i.e., \begin{eqnarray*} H_n(\theta)=\frac{\partial^2 l(\theta)}{\partial \theta\partial \theta^T}=\sum_{t=1}^n [-B'(\eta_t(\theta))\frac{\partial\eta_t(\theta)}{\partial\theta}\frac{\partial \eta_t(\theta)}{\partial\theta^T}+\{Y_t-B(\eta_t(\theta))\}\frac{\partial^2\eta_t(\theta)}{\partial\theta\partial \theta^T}], \end{eqnarray*} where \small \begin{eqnarray*} \frac{\partial^2\eta_t}{\partial\theta\partial\theta^T}&=&\biggr(\frac{B''(\eta_t)}{(B'(\eta_t))^2}\frac{\partial\eta_t}{\partial \theta}~~~\frac{B'(\eta_{t-1})B'(\eta_t)}{(B'(\eta_t))^2}\frac{\partial\eta_{t-1}}{\partial \theta}-\frac{B'(\eta_{t-1})B''(\eta_t)}{(B'(\eta_t))^2}\frac{\partial\eta_{t}}{\partial \theta}\\ &&\frac{-Y_{t-1}B''(\eta_t)}{(B'(\eta_t))^2}\frac{\partial\eta_t}{\partial\theta}\biggr)+(0~~~1~~~0)^T\frac{B'(\eta_{t-1})}{B'(\eta_t)}\frac{\partial\eta_{t-1}}{\partial\theta^T}+\alpha\frac{B''(\eta_{t-1})B'(\eta_t)}{(B'(\eta_t))^2}\\ &&\frac{\partial\eta_{t-1}}{\partial\theta}\frac{\partial \eta_{t-1}}{\partial\theta^T}-\alpha\frac{B'(\eta_{t-1})B''(\eta_t)}{(B'(\eta_t))^2}\frac{\partial\eta_t}{\partial\theta}\frac{\partial 
\eta_t}{\partial\theta^T}+\alpha\frac{B'(\eta_{t-1})}{B'(\eta_t)}\frac{\partial^2\eta_{t-1}}{\partial\theta \partial\theta^T}. \end{eqnarray*} \normalsize It follows from the representation with the infinite past (\ref{eq:InfinitePastRep}) that assumptions (A1)-(A3) and (A6) are satisfied. In order to apply Theorem \ref{AsympNormal} when investigating the asymptotic behavior of the MLE, we need to impose the following regularity conditions: \begin{enumerate} \item[(L0)] The true parameter vector $\theta_0$ lies in a compact neighborhood $\Theta\subset \mathbb{R}_+^3$ of $\theta_0$, where $\Theta=\{\theta=(\delta, \alpha, \beta)^T\in \mathbb{R}_+^3: 0<\delta_L\le \delta\le \delta_U, \epsilon\le \alpha+\beta\le 1-\epsilon\}$ for some $\epsilon>0$. \item[(L1)] $\mbox{E}\{Y_1\sup_{\theta\in \Theta}B^{-1}(\delta/(1-\alpha)+\beta\sum_{k=0}^{\infty}\alpha^k Y_{-k})\}<\infty$. \item[(L2)] $\mbox{E}\{B'(\eta_1(\theta_0))(\partial \eta_1(\theta)/\partial \theta_i)^2|_{\theta=\theta_0}\}<\infty$, for $i=1,2,3$. \end{enumerate} \begin{thm} \label{LinearAsymp} Assume model (\ref{eq:LinearModel}) and that assumptions (L0)-(L2) hold. Then the maximum likelihood estimator $\hat{\theta}_n$ is strongly consistent and asymptotically normal, i.e., \begin{eqnarray*} \sqrt{n}(\hat{\theta}_n-\theta_0)\stackrel{\mathcal{L}}{\longrightarrow}N(0, \Omega^{-1}),~~~ \mbox{as}~~n\rightarrow\infty, \end{eqnarray*} where $\Omega=\mbox{E}\{B'(\eta_t)\dot{\eta}_t\dot{\eta}_t^T\}$ with $\eta_t=\eta_t(\theta_0)$ and $\dot{\eta_t}=\frac{\partial \eta_t}{\partial \theta}|_{\theta=\theta_0}$. \end{thm} \medskip \begin{remark} \label{ARMARemark} Under the contraction condition $\alpha+\beta<1$, $\{Y_t\}$ can be represented as a causal ARMA(1,1) process. To see this, let $d_t=Y_t-X_t$; then it follows from $\mbox{E}(d_t|\mathcal{F}_{t-1})=0$ that $\{d_t, t\in \mathbb{Z}\}$ is a martingale difference sequence. 
Therefore model (\ref{eq:LinearModel}) can be written as \begin{eqnarray} Y_t-(\alpha+\beta)Y_{t-1}=\delta+d_t-\alpha d_{t-1}. \label{eq:ARMArepresentation} \end{eqnarray} Denote by $\gamma_Y(h)$ the autocovariance function of $\{Y_t\}$. If $\gamma_Y(0)<\infty$, then $\gamma_Y(h)=(\alpha+\beta)^{h-1}\gamma_Y(1)$, for $h\ge 1$, see for example \cite{TSTM}. \end{remark} In practice, it can be difficult to verify assumptions (L1) and (L2), so we provide some alternative sufficient conditions for them in the following two remarks. \begin{remark} \label{L1Remark} A sufficient condition for assumption (L1) is \begin{eqnarray*} \mbox{E}\{Y_1B^{-1}(\delta_U/\epsilon+\displaystyle\sum_{k=1}^{\infty}(1-\epsilon)^k Y_{1-k})\}<\infty, \end{eqnarray*} provided that $\delta_U/\epsilon+\sum_{k=1}^{\infty}(1-\epsilon)^k Y_{1-k}$ is in the range of $B(\eta)$. This can be seen by noting that $X_1(\theta)\le \delta_U/\epsilon+\sum_{k=1}^{\infty}(1-\epsilon)^k Y_{1-k}$. \end{remark} \begin{remark} \label{L2Remark} If $A''(\eta_t)\ge \underline{c}$ for some $\underline{c}>0$ (this is true, for example, when $A''(\eta)$ is increasing and $A''(B^{-1}(\delta_L))>0$), then a sufficient condition for assumption (L2) is $\gamma_Y(0)<\infty$. \end{remark} \medskip Next we consider some specific models belonging to class (\ref{eq:LinearModel}), most of which are geared towards modeling time series of counts. \begin{example} \label{PoissonIngarchExample} \noindent As a special case of the linear dynamic model (\ref{eq:LinearModel}) with $\eta_t=\log\lambda_t$ and $A(\eta_t)=e^{\eta_t}$, the Poisson INGARCH$(1, 1)$ model is given by \begin{eqnarray} Y_t|\mathcal{F}_{t-1} \sim \mbox{Pois}(\lambda_t),~~\lambda_t=\delta+\alpha\lambda_{t-1}+\beta Y_{t-1}, \label{eq:poisingarch11} \end{eqnarray} where $\delta>0, \alpha, \beta\ge 0$ are parameters. 
According to Proposition \ref{LinearStability}, it is easy to see that if $\alpha+\beta<1$, then $\{\lambda_t\}$ is geometric moment contracting and has a unique stationary distribution $\pi$; moreover, if $\lambda_1\sim \pi$, then $\{(Y_t, \lambda_t), t\ge 1\}$ is an ergodic stationary process. As for inference, the MLE $\hat{\theta}_n$ is strongly consistent and asymptotically normal according to Theorem \ref{LinearAsymp}, i.e., $\sqrt{n}(\hat{\theta}_n-\theta_0)\stackrel{\mathcal{L}}{\longrightarrow}N(0, \Omega^{-1})$, as $n\rightarrow\infty$, where $\Omega=\mbox{E}\{1/\lambda_t(\partial\lambda_t/\partial\theta)(\partial\lambda_t/\partial\theta)^T\}$. To see this, we only need to verify assumptions (L1) and (L2). Note that by \cite{Fokianos}, we have $\gamma_Y(0)=\mu\{1-(\alpha+\beta)^2+\beta^2\}/\{1-(\alpha+\beta)^2\}$ and $\gamma_Y(h)=\mu C(\theta)(\alpha+\beta)^{h-1}$ for $h\ge 1$, where $\mu=\mbox{E} Y_t=\delta/(1-\alpha-\beta)$ and $C(\theta)$ is a positive constant dependent on $\theta$. Hence by the monotone convergence theorem, we have \begin{eqnarray*} \mbox{E}[Y_1\log\{\delta_U/\epsilon+\displaystyle\sum_{k=1}^{\infty}(1-\epsilon)^k Y_{1-k}\}]&\le&\mbox{E}[Y_1\{\delta_U/\epsilon+\displaystyle\sum_{k=1}^{\infty}(1-\epsilon)^k Y_{1-k}\}]\\ &=&\frac{\delta_U}{\epsilon} \mbox{E} Y_1+\displaystyle\sum_{k=1}^{\infty}(1-\epsilon)^k\mbox{E} Y_1Y_{1-k}\\ &=&\mu\frac{\delta_U}{\epsilon}+\displaystyle\sum_{k=1}^{\infty}(1-\epsilon)^k\{\gamma_Y(k)+\mu^2\}<\infty. \end{eqnarray*} Hence assumption (L1) holds according to Remark \ref{L1Remark}. Notice that $B(\eta_t)=\lambda_t\ge \lambda^{\ast}:=\delta/(1-\alpha)$ for all $t$, so $A''(\eta_t)=e^{\eta_t}$ is bounded away from 0, and assumption (L2) holds according to Remark \ref{L2Remark}. \end{example} \medskip Moreover, the iterated random function approach can be used to study the properties of INGARCH models with higher orders. 
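Before turning to higher orders, the stationary mean $\mu=\delta/(1-\alpha-\beta)$ implied by model (\ref{eq:poisingarch11}) can be checked by direct simulation. The following is a minimal sketch (assuming NumPy is available; the parameter values are purely illustrative):

```python
import numpy as np

def simulate_poisson_ingarch(delta, alpha, beta, n, burn_in=500, seed=0):
    """Simulate Y_t | F_{t-1} ~ Pois(lambda_t) with
    lambda_t = delta + alpha*lambda_{t-1} + beta*Y_{t-1}."""
    rng = np.random.default_rng(seed)
    lam = delta / (1.0 - alpha - beta)   # start at the stationary mean
    y = rng.poisson(lam)
    out = np.empty(n)
    for t in range(n + burn_in):
        lam = delta + alpha * lam + beta * y
        y = rng.poisson(lam)
        if t >= burn_in:
            out[t - burn_in] = y
    return out

# with delta = 0.5, alpha = 0.5, beta = 0.4, the stationary mean is 0.5/0.1 = 5
y = simulate_poisson_ingarch(delta=0.5, alpha=0.5, beta=0.4, n=200_000)
```

After the burn-in period, the sample mean of the simulated path settles near $5$, in agreement with $\mu=\delta/(1-\alpha-\beta)$.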
A Poisson INGARCH($p,q$) model takes the form \begin{eqnarray} Y_t|\mathcal{F}_{t-1}\sim \mbox{Pois}(\lambda_t),~~\lambda_t=\delta+\displaystyle\sum_{i=1}^p \alpha_i\lambda_{t-i}+\sum_{j=1}^q \beta_j Y_{t-j}, \label{eq:PoisIngarchpq} \end{eqnarray} where $\delta>0, \alpha_i, \beta_j\ge 0, i=1,\ldots, p$; $j=1,\ldots, q$. Applying ideas similar to those used in the INGARCH($1,1$) case, we have the following stationarity result. \begin{prop} \label{poissonpq} Consider the INGARCH$(p,q)$ model (\ref{eq:PoisIngarchpq}) and suppose $\sum_{i=1}^p \alpha_i + \sum_{j=1}^q \beta_j<1$. Then $\{\lambda_t\}$ is geometric moment contracting and has a unique stationary distribution. \end{prop} \medskip \begin{example} \label{NbIngarchExample} The negative binomial INGARCH$(1, 1)$ model (NB-INGARCH) is defined as \begin{eqnarray} Y_t|\mathcal{F}_{t-1}\sim \mbox{NB}(r,p_t), ~~X_t=\delta+\alpha X_{t-1}+\beta Y_{t-1}, \label{eq:nbingarch11} \end{eqnarray} where $X_t=r(1-p_t)/p_t$, $\delta>0,\alpha,\beta\ge 0$ are parameters and the notation $Y\sim\mbox{NB}(r, p)$ represents the negative binomial distribution with probability mass function given by \begin{eqnarray*} P(Y=k)={k+r-1 \choose r-1}(1-p)^k p^r, ~~~~~~ k=0,1,2,\ldots. \end{eqnarray*} When $r=1$, the conditional distribution of $Y_t$ becomes a geometric distribution with probability of success $p_t$, in which case (\ref{eq:nbingarch11}) reduces to a geometric INGARCH model. By virtue of Proposition \ref{LinearStability}, if $\alpha+\beta<1$, then $\{X_t, t\ge 1\}$ is a geometric moment contracting Markov chain and has a unique stationary distribution $\pi$; and when $X_1\sim \pi$, $\{(X_t, Y_t), t\ge 1\}$ is ergodic. As for inference, we can first estimate $\theta=(\delta, \alpha, \beta)^T$ for $r$ fixed and calculate the profile likelihood as a function of $r$. Then $r$ is estimated by choosing the value which maximizes the profile likelihood, and $\hat{\theta}$ can be obtained correspondingly. 
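The NB-INGARCH recursion can be simulated in the same spirit, drawing $Y_t\sim \mbox{NB}(r, p_t)$ with $p_t=r/(X_t+r)$. A sketch follows (assuming NumPy, whose negative binomial sampler counts failures before the $r$-th success, matching the parametrization above; the parameter values are illustrative):

```python
import numpy as np

def simulate_nb_ingarch(delta, alpha, beta, r, n, burn_in=500, seed=0):
    """Simulate Y_t | F_{t-1} ~ NB(r, p_t), where X_t = r(1-p_t)/p_t follows
    X_t = delta + alpha*X_{t-1} + beta*Y_{t-1}, i.e. p_t = r/(X_t + r)."""
    rng = np.random.default_rng(seed)
    x = delta / (1.0 - alpha - beta)           # stationary mean of X_t
    y = rng.negative_binomial(r, r / (x + r))
    out = np.empty(n)
    for t in range(n + burn_in):
        x = delta + alpha * x + beta * y
        y = rng.negative_binomial(r, r / (x + r))
        if t >= burn_in:
            out[t - burn_in] = y
    return out

# delta = 1, alpha = 0.3, beta = 0.4, r = 4: stationary mean 1/(1 - 0.7) = 10/3
y = simulate_nb_ingarch(delta=1.0, alpha=0.3, beta=0.4, r=4, n=200_000)
```

Here $(\alpha+\beta)^2+\beta^2/r=0.53<1$, so the simulated series also has finite variance by the calculation given below.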
Moreover, if we assume $r$ is known and $(\alpha+\beta)^2+\beta^2/r<1$, then under assumption (L0), the maximum likelihood estimator $\hat{\theta}_n$ is strongly consistent and asymptotically normal with mean $\theta_0$ and covariance matrix $\Omega^{-1}/n$, where $\Omega=\mbox{E}[r/\{X_t(X_t+r)\}(\partial X_t/\partial \theta)(\partial X_t/\partial\theta)^T]$. Verification of assumptions (L1) and (L2) is sufficient to demonstrate the result. Since $B^{-1}(x)=\log\{x/(x+r)\}<0$, assumption (L1) holds according to Remark \ref{L1Remark}. Note that $A''(\eta_t)=re^{\eta_t}/(1-e^{\eta_t})^2$ is increasing, so assumption (L2) holds provided $\gamma_Y(0)<\infty$ according to Remark \ref{L2Remark}. Because $\mbox{Var}(X_1)=\alpha^2\mbox{Var}(X_0)+\beta^2\mbox{Var}(Y_0)+2\alpha\beta\mbox{Cov}(X_0, Y_0)$, where \begin{eqnarray*} \mbox{Var}(Y_0)&=&\mbox{E}\{\mbox{Var}(Y_0|X_0)\}+\mbox{Var}\{\mbox{E}(Y_0|X_0)\}\\ &=&\mbox{E}\{r(1-p_0)/p_0^2\}+\mbox{Var}(X_0)=\mu+\mbox{E} X_0^2/r+\mbox{Var}(X_0), \end{eqnarray*} and $\mbox{Cov}(X_1, Y_1)=\mbox{E} Y_1X_1-\mu^2=\mbox{E} X_1^2-\mu^2=\mbox{Var}(X_1)$, it follows from the stationarity that \begin{eqnarray*} \mbox{Var}(X_0)=\frac{\beta^2\mu(1+\mu/r)}{1-(\alpha+\beta)^2-\beta^2/r}. \end{eqnarray*} Hence $\gamma_Y(0)<\infty$ provided $(\alpha+\beta)^2+\beta^2/r<1$. \end{example} \medskip \begin{example} We define the binomial INGARCH$(1, 1)$ model as \begin{eqnarray} Y_t|\mathcal{F}_{t-1}\sim \mbox{B}(m, p_t),~~mp_t=\delta+\alpha mp_{t-1}+\beta Y_{t-1}, \label{eq:BinomIngarch11} \end{eqnarray} where $\delta>0, \alpha, \beta\ge 0$ are parameters and $\delta+\alpha m+\beta m\le m$ since $p_t\in (0, 1)$. This implies the contraction condition $\alpha+\beta<1$. In particular, when $m=1$, it models time series of binary data, and is called a Bernoulli INGARCH model. 
If $\delta+\alpha m+\beta m\le m$, then $\{X_t=mp_t, t\ge 1\}$ is geometric moment contracting and has a unique stationary distribution $\pi$; furthermore, $\{(X_t, Y_t), t\ge 1\}$ is ergodic when $X_1\sim \pi$. We now consider inference for the model. First, because of the special constraint $p_t\in (0, 1)$, the parameter space becomes \begin{eqnarray*} \Theta=\{(\delta,\alpha,\beta)^T: 0<\delta_L\le \delta\le \delta_U, \epsilon\le \alpha+\beta\le 1-\epsilon\}~~\mbox{for some}~~\epsilon>\delta_U/m. \end{eqnarray*} Since $Y_t\le m$, we have $X_1(\theta)\le (\delta+\beta m)/(1-\alpha)$ and $B^{-1}(X_1(\theta))\le \log\{(\delta_U+(1-\epsilon)m)/(\epsilon m-\delta_U)\}$. Hence assumption (L1) holds. Notice that $A''(\eta_t)=mp_t(1-p_t)$ and $p_t\in[\delta_L/m, (\delta+\beta m)/(m(1-\alpha))]\subsetneq [0, 1]$, so $A''(\eta_t)$ is bounded away from 0. Similar to the proof in Example \ref{NbIngarchExample}, one can show that $\gamma_Y(0)<\infty$ provided that $(\alpha+\beta)^2+\beta^2/m<1$. So assuming $m$ is known and $(\alpha+\beta)^2+\beta^2/m<1$, the maximum likelihood estimator $\hat{\theta}_n$ is strongly consistent and asymptotically normal with mean $\theta_0$ and covariance matrix $\Omega^{-1}/n$, where $\Omega=\mbox{E}[m/\{X_t(m-X_t)\}(\partial X_t/\partial\theta)(\partial X_t/\partial\theta)^T]$. \end{example} \begin{example} The gamma INGARCH model, which has a continuous response, is given by \begin{eqnarray} Y_t|\mathcal{F}_{t-1}\sim \Gamma(\kappa, s_t),~~s_t=\delta/\kappa+\alpha s_{t-1}+(\beta/\kappa) Y_{t-1}, \label{eq:GammaIngarch11} \end{eqnarray} where $\kappa$ and $s_t$ are the shape and scale parameters of the gamma distribution respectively and $\delta>0,\alpha,\beta\ge 0$ are parameters. Here the natural parameter is $\eta_t=-1/s_t$ and the Markov chain $X_t=B(\eta_t)=-\kappa/\eta_t$. 
If $\alpha+\beta<1$, then $\{X_t=\kappa s_t, t\ge 1\}$ is geometric moment contracting and has a unique stationary distribution $\pi$; furthermore, $\{(Y_t, X_t), t\ge 1\}$ is an ergodic stationary process if $X_1\sim \pi$. As for the inference in this model, assume $\kappa$ is known and $(\alpha+\beta)^2+\beta^2/\kappa<1$. Then the maximum likelihood estimator $\hat{\theta}_n$ is strongly consistent and asymptotically normal with mean $\theta_0$ and covariance matrix $\Omega^{-1}/n$ where $\Omega=\mbox{E}\{\kappa/s_t^2(\partial s_t/\partial \theta)(\partial s_t/\partial\theta)^T\}$. To see this, note that $B^{-1}(x)=-\kappa/x<0$ when $x>0$, which verifies assumption (L1) according to Remark \ref{L1Remark}. Similar to the proof in Example \ref{NbIngarchExample}, one can show that $\gamma_Y(0)=(1/\kappa+1)\gamma_X(0)+\mu^2/\kappa$ and $\gamma_X(0)=(\beta^2\mu^2/\kappa)/\{1-(\alpha+\beta)^2-\beta^2/\kappa\}$. Hence as long as $(\alpha+\beta)^2+\beta^2/\kappa<1$, we have $\gamma_Y(0)<\infty$. Since $A''(\eta_t)=\kappa/\eta_t^2\ge \delta_L^2/\kappa>0$, assumption (L2) holds according to Remark \ref{L2Remark}. \end{example} \subsection{Nonlinear dynamic models} It is possible to generalize (\ref{eq:LinearModel}) to nonlinear dynamic models. One approach is based on the idea of spline basis functions, see for example, \cite{SemiRegression}. In this framework, the model specification is given by \begin{eqnarray} Y_t|\mathcal{F}_{t-1}\sim p(y|\eta_t),~~X_t=\delta+\alpha X_{t-1}+\beta Y_{t-1}+\displaystyle\sum_{k=1}^K\beta_k(Y_{t-1}-\xi_k)^+, \label{eq:NonLinear} \end{eqnarray} where $K\in \mathbb{N}_0$, $\delta>0, \alpha,\beta\ge 0, \beta_1,\ldots, \beta_K$ are parameters, $\{\xi_k\}_{k=1}^K$ are the so-called \emph{knots}, and $x^+$ is the positive part of $x$. In particular, when $K=0$, (\ref{eq:NonLinear}) reduces to the linear model (\ref{eq:LinearModel}). 
It is easy to see that model (\ref{eq:NonLinear}) is a special case of model (\ref{eq:expmodel}) by defining $g_{\theta}(x, y)=\delta+\alpha x+\beta y+\sum_{k=1}^K \beta_k(y-\xi_k)^+,$ where $\theta=(\delta, \alpha, \beta, \beta_1,\ldots, \beta_K)^T$. Note that in each of the pieces segmented by the knots, (\ref{eq:NonLinear}) has INGARCH-like dynamics. For example, if $Y_{t-1}\in [\xi_s, \xi_{s+1})$ for some $s< K$, then $X_t=(\delta-\sum_{k=1}^s \beta_k \xi_k) + \alpha X_{t-1}+ (\beta+\sum_{k=1}^s\beta_k) Y_{t-1}$. This can be viewed as one of the generalizations (e.g., \cite{ThreshGLM}) of the threshold autoregressive model (\cite{Tong90}). According to Propositions \ref{modelgmc}, \ref{discreteergodicity} and \ref{ContinuousErgodicity}, we can establish the stability properties of the model. \begin{prop} \label{NonLinearStability} Consider model (\ref{eq:NonLinear}) with parameters satisfying $\alpha+\beta<1, \beta+\sum_{k=1}^s \beta_k\ge 0$ and $\alpha+\beta+\sum_{k=1}^s \beta_k<1$ for $s=1,\ldots, K$. Then $\{X_t\}$ is geometric moment contracting and has a unique stationary distribution $\pi$. Moreover, if $X_1\sim \pi$, then $\{(X_t, Y_t), t\ge 1\}$ is ergodic. \end{prop} \medskip We now consider inference for this model. Assume the knots $\{\xi_k\}_{k=1}^K$ are known for $K$ fixed. Then the parameter vector $\theta=(\delta, \alpha, \beta, \beta_1,\ldots, \beta_K)^T$ can be estimated by maximizing the conditional log-likelihood function, which is available according to (\ref{eq:loglikeexp}). The number of knots $K$ can be selected by virtue of an information criterion, such as AIC or BIC. As for the locations of the knots, there are different strategies one can adopt for choosing them. One method is to place the knots at the $\{j / (K + 1), j = 1, \ldots, K\}$ quantiles of the population, which can be estimated from the data. A second method is to choose the locations that maximize the log likelihood. 
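The first placement rule amounts to one line of code; a sketch (assuming NumPy) places $K$ knots at the $j/(K+1)$ sample quantiles:

```python
import numpy as np

def quantile_knots(y, K):
    """Place K knots at the j/(K+1), j = 1, ..., K, sample quantiles of y."""
    return np.quantile(y, [j / (K + 1) for j in range(1, K + 1)])

# three knots for the toy sample 1, 2, ..., 100
knots = quantile_knots(np.arange(1, 101), 3)
```

For this toy sample, NumPy's default (linearly interpolated) quantiles give knots $25.75$, $50.5$ and $75.25$.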
We will employ both procedures on real datasets in the next section. To study the asymptotic behavior of the estimates, first note that by iterating the recursion, \begin{eqnarray} X_t&=&\delta/(1-\alpha)+\beta\displaystyle\sum_{i=0}^{\infty}\alpha^i Y_{t-1-i}+\sum_{k=1}^K \beta_k\sum_{i=0}^{\infty}\alpha^i (Y_{t-1-i}-\xi_k)^+ \nonumber\\ &=&\delta/(1-\alpha)+\displaystyle\sum_{i=0}^{\infty}\alpha^i\{\beta Y_{t-1-i}+\sum_{k=1}^K \beta_k(Y_{t-1-i}-\xi_k)^+\}. \end{eqnarray} This defines the function $g_{\infty}^{\theta}$ as in $X_t=g_{\infty}^{\theta}(Y_{t-1}, Y_{t-2},\ldots)$ and also verifies assumptions (A1)-(A3). Hence in order to apply Theorem \ref{AsympNormal}, we only need to impose the following regularity assumptions for the nonlinear model (\ref{eq:NonLinear}): \begin{enumerate} \item[(NL0)] $\theta_0$ is an interior point in the parameter space $\Theta$, which is a compact subset of the parameter set satisfying the conditions in Proposition \ref{NonLinearStability}. \item[(NL1)] $\mbox{E}[Y_1\displaystyle\sup_{\theta\in \Theta}B^{-1}(\delta/(1-\alpha)+\sum_{i=0}^{\infty}\alpha^i\{\beta Y_{-i}+\sum_{k=1}^K \beta_k(Y_{-i}-\xi_k)^+\})]<\infty$. \item[(NL2)] $\mbox{E}[B'(\eta_1(\theta_0))\{\partial \eta_1(\theta)/\partial \theta_i\}^2|_{\theta=\theta_0}]<\infty$, for $i=1,\ldots, K+3$. \end{enumerate} Sufficient conditions for assumptions (NL1) and (NL2) can be established similarly to those given in Remarks \ref{L1Remark} and \ref{L2Remark}. The asymptotic properties of the MLE are summarized in the following theorem. 
\begin{thm} \label{NonLinearAsympNormal} For model (\ref{eq:NonLinear}), suppose that the placement of the knots is known and that assumptions (NL0)-(NL2) hold. Then the maximum likelihood estimator $\hat{\theta}_n$ is strongly consistent and asymptotically normal, i.e., \begin{eqnarray*} \sqrt{n}(\hat{\theta}_n-\theta_0)\stackrel{\mathcal{L}}{\longrightarrow}N(0, \Omega^{-1}),~~\mbox{as}~~n\rightarrow\infty, \end{eqnarray*} where $\Omega=\mbox{E}\{B'(\eta_t)\dot{\eta}_t\dot{\eta}_t^T\}$. \end{thm} \medskip We use the Poisson nonlinear dynamic model as an illustrative example of the above results and refer readers to Section 5 for implementation of the estimation procedure. The model is defined as \begin{eqnarray} Y_t|\mathcal{F}_{t-1}\sim \mbox{Pois}(\lambda_t),~~\lambda_t=\delta+\alpha \lambda_{t-1}+\beta Y_{t-1}+\displaystyle\sum_{k=1}^K\beta_k(Y_{t-1}-\xi_k)^+. \label{eq:PoisNonLinear} \end{eqnarray} It follows that, under the conditions of Proposition \ref{NonLinearStability} and Theorem \ref{NonLinearAsympNormal}, $\{(\lambda_t, Y_t), t\ge 1\}$ is a stationary and ergodic process, and the estimates are strongly consistent and asymptotically normal. In practice the covariance matrix of the estimates can be obtained by recursively applying \small \begin{eqnarray*} \frac{\partial\lambda_t}{\partial\theta}=\begin{pmatrix} 1 & \lambda_{t-1} & Y_{t-1} & (Y_{t-1}-\xi_1)^+ & \ldots & (Y_{t-1}-\xi_K)^+ \end{pmatrix}^T + \alpha\frac{\partial\lambda_{t-1}}{\partial\theta}. \end{eqnarray*} \normalsize Another example of nonlinear dynamic models is the Poisson exponential autoregressive model proposed by \cite{Fokianos}, given by \begin{eqnarray} \label{eq:PoisExpModel} Y_t|\mathcal{F}_{t-1}\sim\mbox{Pois}(\lambda_t),~~\lambda_t=(\alpha_0+\alpha_1 \exp\{-\gamma \lambda_{t-1}^2\})\lambda_{t-1}+\beta Y_{t-1}, \end{eqnarray} where $\alpha_0,\alpha_1,\beta, \gamma>0$ are parameters. 
We point out that if $\alpha_0+\alpha_1+\beta<1$, then model (\ref{eq:PoisExpModel}) belongs to the class of models (\ref{eq:expmodel}) and hence enjoys the stability properties stated in Propositions \ref{modelgmc} and \ref{discreteergodicity}. For inference in this model, we refer readers to \cite{Fokianos} for details. \section{Numerical results} The performance of the estimation procedure for the Poisson nonlinear dynamic model is illustrated in a simulation study. The MLE is obtained by maximizing the log-likelihood function (\ref{eq:loglikeexp}) using a Newton-Raphson method. Simulation results for the Poisson INGARCH model can be found in \cite{Fokianos}. Other models, including the negative binomial linear and nonlinear dynamic models and the exponential autoregressive model (\ref{eq:PoisExpModel}), will be applied to two real datasets, and tools for checking goodness of fit will be considered. \subsection{Simulation for the nonlinear model} As specified in (\ref{eq:PoisNonLinear}), a 1-knot nonlinear dynamic model is simulated according to \begin{eqnarray*} Y_t|\mathcal{F}_{t-1}\sim \mbox{Pois}(\lambda_t),~~\lambda_t=0.5+0.5\lambda_{t-1}+0.4Y_{t-1}-0.2(Y_{t-1}-5)^+ \end{eqnarray*} with different sample sizes. Each sample size and parameter configuration is replicated $1000$ times. For each realization, the first $500$ simulated observations are discarded as burn-in in order to let the process reach its stationary regime. We first estimate the parameters assuming that the location of the knot is known, i.e., the true underlying model is (\ref{eq:NonLinear}) with only one knot at 5. The means and standard errors of the estimates from all 1000 runs are summarized in Table \ref{tab:simulation} and the histograms of the estimates are depicted in Figure \ref{fig:1_known_knot}. The performance of these estimates is reasonably good and consistent with the theory described in Theorem \ref{NonLinearAsympNormal}. 
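To make the fitting step concrete, the sketch below simulates the 1-knot model above and maximizes the conditional Poisson log-likelihood numerically (assuming NumPy and SciPy; for simplicity we use the L-BFGS-B optimizer rather than Newton-Raphson, and the starting values, bounds and sample size are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def simulate(theta, knot, n, seed=1):
    """Simulate the 1-knot Poisson nonlinear dynamic model."""
    delta, alpha, beta, beta1 = theta
    rng = np.random.default_rng(seed)
    lam, y = delta / (1.0 - alpha - beta), 0
    ys = []
    for t in range(n + 500):                       # 500 burn-in draws
        lam = delta + alpha*lam + beta*y + beta1*max(y - knot, 0)
        y = rng.poisson(lam)
        if t >= 500:
            ys.append(y)
    return np.array(ys)

def nll(theta, y, knot):
    """Negative conditional Poisson log-likelihood (constant terms dropped)."""
    delta, alpha, beta, beta1 = theta
    lam, out = y.mean(), 0.0
    for t in range(1, len(y)):
        lam = delta + alpha*lam + beta*y[t-1] + beta1*max(y[t-1] - knot, 0)
        lam = max(lam, 1e-10)                      # guard against lam <= 0
        out -= y[t]*np.log(lam) - lam
    return out

y = simulate((0.5, 0.5, 0.4, -0.2), knot=5, n=5000)
fit = minimize(nll, x0=(0.3, 0.3, 0.3, 0.0), args=(y, 5), method="L-BFGS-B",
               bounds=[(1e-4, 10.0), (0.0, 0.95), (0.0, 0.95), (-0.5, 0.5)])
```

With this sample size, \texttt{fit.x} should recover $(\delta,\alpha,\beta,\beta_1)=(0.5, 0.5, 0.4, -0.2)$ up to sampling error.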
As for estimating the parameters without knowing the location of the knots, the corresponding results of the MLE obtained by fitting a 1-knot model to all the 1000 replications are summarized in Table \ref{tab:1_unknown}. Here the locations of the knots are determined by sample quantiles. Not surprisingly, the performance of the maximum likelihood estimates of $\beta$ and $\beta_1$ is not as good as in the known knot case. However, the overall model performance, as reflected in the computation of the scoring rules (described in the next section), is competitive with the known knot case. For instance when $n=1000$, the means of ranked probability scores (RPS) for known and unknown knot cases are $1.0906$ and $1.0914$, respectively. \begin{table} \caption{\label{tab:simulation}Estimation results for 1-knot model with known knot location} \centering \resizebox{11cm}{!}{ \begin{tabular}{| c | c c c c c| } \hline & $\delta$ & $\alpha$ & $\beta$ & $\beta_1$ & $n$ \\ \hline True & 0.5 & 0.5 & 0.4 & -0.2 & \\ Estimates & 0.5596 & 0.4861 & 0.3990 & -0.2009 & 500 \\ s.e. & (0.0087) & (0.0030) & (0.0026) & (0.0051) & \\ Estimates & 0.5265 & 0.4944 & 0.3991 & -0.2016 & 1000 \\ s.e. & (0.0041) & (0.0016) & (0.0013) & (0.0025) &\\ \hline \end{tabular}} \end{table} \begin{table} \caption{\label{tab:1_unknown}Estimation for 1-knot model with unknown knot location} \centering \resizebox{11cm}{!}{ \begin{tabular}{| c | c c c c c | } \hline & $\delta$ & $\alpha$ & $\beta$ & $\beta_1$ & $n$ \\ \hline True & 0.5 & 0.5 & 0.4 & -0.2 & \\ Estimates & 0.5387 & 0.4852 & 0.4187 & -0.1614 & 500\\ s.e. & (0.0089) & (0.0030) & (0.0031) & (0.0047) & \\ Estimates & 0.5002 & 0.4943 & 0.4197 & -0.1679 & 1000 \\ s.e. & (0.0042) & (0.0016) & (0.0015) & (0.0023) & \\ \hline \end{tabular}} \end{table} \begin{figure} \centering \makebox{\includegraphics[scale=.6]{sim_1_known}} \caption{Histograms of the 1-knot model with sample size 1000 assuming the knot is known. 
The overlaying curves are the density estimates and the dashed vertical lines represent the true values of the parameters.} \label{fig:1_known_knot} \end{figure} Next we turn to the problem of selecting the number of knots using an information criterion. Simulations with different sample sizes are implemented and the model selection results are summarized in Table \ref{tab:sim_1_unknown}. Entries in the table give the proportion of times that each particular model is selected in the 1000 runs. For AIC, the 1-knot model is selected most often for both sample sizes, followed by the 2-knot model when $n=1000$. \begin{table} \caption{\label{tab:sim_1_unknown}Model selection of 1-knot simulation} \centering \resizebox{13.5cm}{!}{ \begin{tabular}{| c | c c c c c c|} \hline Criteria & 0 knot & 1 knot & 2 knots & 3 knots & $\ge4$ knots & $n$\\ \hline AIC & $34.3\%$ & $37.6\%$ & $20.9\%$ & $5.2\%$ & $2.0\%$ & 500\\ BIC & $80.5\%$ & $18.8\%$ & $0.6\%$ & $0.1\%$ & 0 & \\ \hline AIC & $12.4\%$ & $45.0\%$ & $29.9\%$ & $8.3\%$ & $4.4\%$ & 1000\\ BIC & $59.4\%$ & $38.4\%$ & $2.0\%$ & $0.2\%$ & 0 & \\ \hline \end{tabular}} \end{table} \normalsize In light of the idea of interpolating the nonlinear dynamics of $\lambda_t$ by a piecewise linear function, we plot in Figure \ref{fig:curve_1_unknown} the fitted functions $\hat{\beta}y+\sum_{k=1}^K \hat{\beta}_k (y-\hat{\xi}_k)^+$ for each run of the simulations against the true form $0.4y-0.2(y-5)^+$. From the graph, we can see that the piecewise linear function fitted by the 1-knot model is closest to the true curve. 
\begin{figure} \centering \makebox{\includegraphics[scale=.8]{curve_1_unknown}} \caption{Left: the black curve is the true function $0.4y-0.2(y-5)^+$, and the other curves are the piecewise linear functions fitted in each simulation where the number of knots $K$ is selected via AIC; Right: for each value of $K$, we plot the fitted curve from one specific run that chooses the particular number of knots.} \label{fig:curve_1_unknown} \end{figure} \subsection{Two data applications} \subsubsection*{1. Number of transactions of Ericsson stock} As an illustrative example, both linear and nonlinear dynamic models are employed to fit the number of transactions per minute for the stock Ericsson B during July 2nd, 2002; the series consists of 460 observations. Figure \ref{fig:Ericsson_data} plots the data and the autocorrelation function. The positive dependence displayed in the data suggests the application of the models in our study. \begin{figure} \centering \makebox{\includegraphics[scale = .48]{Ericsson_data}} \caption{Top: Number of transactions per minute of the stock Ericsson B during July 2nd 2002; Bottom: ACF of the data.} \label{fig:Ericsson_data} \end{figure} By computing the MLE of the parameters, we obtain the fitted Poisson INGARCH model \begin{eqnarray*} \hat{\lambda}_t&=&0.2912+0.8312\hat{\lambda}_{t-1}+0.1395Y_{t-1},\\ &&(0.1000)~(0.0242)~~~~~~~(0.0188) \end{eqnarray*} and the fitted NB-INGARCH model \begin{eqnarray*} Y_t|\mathcal{F}_{t-1}\sim \mbox{NB}(8, \hat{p}_t),~~\hat{X}_t&=& 0.2676+ 0.8447\hat{X}_{t-1}+0.1282Y_{t-1},\\ &&(0.1406)~(0.0350)~~~~~~~(0.0274) \end{eqnarray*} where $\hat{X}_t=8(1-\hat{p}_t)/\hat{p}_t$. The standard deviations in the parentheses are calculated according to the remark after Theorem \ref{AsympNormal}. As for the Poisson nonlinear dynamic model, AIC and BIC are used to help select the number of knots among 0 to 5; the values are reported in Table \ref{tab:infor_criteria}. 
\begin{table} \caption{\label{tab:infor_criteria}Model selection results for Ericsson data} \centering \resizebox{13.5cm}{!}{ \begin{tabular}{| l | c c c c c c |} \hline & 0-knot & 1-knot & 2-knot & 3-knot & 4-knot & 5-knot \\ \hline LogL & -1433.19 & -1431.21 & -1431.08& -1430.58& $\boldsymbol{-1429.65}$ & -1431.12 \\ AIC & 2874.38& $\boldsymbol{2872.41}$ & 2874.17 & 2875.17 & 2875.30 & 2880.25 \\ BIC & $\boldsymbol{2890.90}$& 2893.07 & 2898.95& 2904.08 & 2908.35 & 2917.43 \\ \hline \end{tabular}} \end{table} The fitted 1-knot Poisson model, which has the smallest AIC, is given by \begin{eqnarray*} \hat{\lambda}_t&=&0.5837+0.8319\hat{\lambda}_{t-1}+0.0906Y_{t-1}+0.0722(Y_{t-1}-9)^+.\\ &&(0.1884)~(0.0241)~~~~~~~(0.0295)~~~~~~~(0.0373) \end{eqnarray*} Note that the AIC values of the 2-knot and 3-knot models are both close to that of the 1-knot model, and they are therefore used as a basis for comparison with the minimum AIC model. These models are given by $\hat{\lambda}_t=0.5519+0.8326\hat{\lambda}_{t-1}+0.0961Y_{t-1}+0.0154(Y_{t-1}-7)^++0.0559(Y_{t-1}-11)^+$ and $\hat{\lambda}_t=0.3614+0.8361\hat{\lambda}_{t-1}+0.1206Y_{t-1}+0.0433(Y_{t-1}-6)^+-0.0914(Y_{t-1}-9)^++0.0914(Y_{t-1}-13)^+$, respectively. As can be seen from the model checking below, the negative binomial INGARCH model seems to outperform the Poisson-based models. This could be explained by the over-dispersion exhibited by the data, since the sample mean and variance are 9.91 and 32.84, respectively. To this end, we fit the nonlinear negative binomial models and select the number of knots by minimizing the AIC. It turns out that the AIC value of a 1-knot model is the second smallest among all the candidates, at 2674.69 compared to the smallest value of 2674.04, which is attained by the negative binomial INGARCH model fitted above. 
The fitted 1-knot negative binomial nonlinear model is given by $Y_t|\mathcal{F}_{t-1}\sim \mbox{NB}(8, \hat{p}_t)$, where $\hat{X}_t=8(1-\hat{p}_t)/\hat{p}_t$ follows \begin{eqnarray*} \hat{X}_t&=&0.4931+0.8444\hat{X}_{t-1}+0.0903Y_{t-1}+0.0603(Y_{t-1}-9)^+.\\ &&(0.2559)~(0.0350)~~~~~~~(0.0412)~~~~~~~(0.0546) \end{eqnarray*} Here the locations of knots for the nonlinear dynamic model are all estimated by the corresponding sample quantiles. We also tried estimating the knots by maximizing the likelihood, and in this application, the results by both methods are nearly identical. The exponential autoregressive model (\ref{eq:PoisExpModel}) is also applied to this dataset by \cite{Fokianos} and is given by \begin{eqnarray*} \hat{\lambda}_t&=&(0.8303+7.030\exp\{-0.1675\hat{\lambda}_{t-1}^2\})\hat{\lambda}_{t-1}+0.1551Y_{t-1}.\\ &&(0.0232)~(3.0732)~~~~~~(0.0592)~~~~~~~~~~~~~~~(0.0218) \end{eqnarray*} To assess the adequacy of the fit by all of the above models, we will consider an array of graphical and quantitative diagnostic tools for time series, some of which are specifically designed for time series of counts. Readers can refer to \cite{Davis03} and \cite{Jung11} for a comprehensive treatment of the tools. In our study, we first consider the standardized Pearson residuals $e_t=(Y_t-\mbox{E}(Y_t|\mathcal{F}_{t-1}))/\sqrt{\mbox{Var}(Y_t|\mathcal{F}_{t-1})}$ which can be obtained by replacing the population quantities by their estimated counterparts. If the model is correctly specified, then the residuals $\{\hat{e}_t\}$ should be a white noise sequence with constant variance. It turns out that all the models considered above give very similar fitted conditional mean processes and the standardized Pearson residuals appear to be white. Figure \ref{fig:ericsson_fit} displays the fitted result for the 1-knot negative binomial model. 
\begin{figure} \centering \makebox{\includegraphics[scale=.45]{Ericsson_NB_nonlinear.eps}} \caption{Top: The dotted curve represents the number of transactions of Ericsson stock, and the black curve is the conditional mean process fitted by the 1-knot NB-based model; Bottom: ACF of the standardized Pearson residuals.} \label{fig:ericsson_fit} \end{figure} Another tool for model checking is the probability integral transform (PIT). When the underlying distribution is continuous, it is well known that the PIT follows the standard uniform distribution. However, if the underlying distribution is discrete, some adjustments are required, and the so-called randomized PIT is therefore introduced by perturbing the step-function CDF characteristic of discrete random variables (see \cite{Brockwell06}). More recently, \cite{Czado} proposed a non-randomized version of the PIT as an alternative adjustment. Since it usually leads to the same conclusion for model checking, we do not provide the non-randomized version here. For any $t$, the randomized PIT is defined by \begin{eqnarray*} \tilde{u}_t:=F_t(Y_t-1)+\nu_t \bigl[F_t(Y_t)-F_t(Y_t-1)\bigr], \end{eqnarray*} where $\{\nu_t\}$ is a sequence of iid uniform $(0,1)$ random variables and $F_t(\cdot)$ is the predictive cumulative distribution function. In our situation, $F_t(\cdot)$ is simply the CDF of a Poisson or a negative binomial distribution. If the model is correct, then $\{\tilde{u}_t\}$ is an iid sequence of uniform $(0,1)$ random variables. \cite{Jung11} reviewed several ways to depict this and we adopt their method in our study. To test whether the PIT follows the uniform distribution on $(0,1)$, the histograms of the PIT from different models are plotted and a Kolmogorov-Smirnov test is carried out. The results are summarized in Figure \ref{fig:Ericsson_PIT}, and the $p$-values are reported in Table \ref{tab:Ericsson_scores}. It can be seen that both negative binomial-based models pass the PIT test, while none of the Poisson-based models does. 
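The randomized PIT itself is simple to compute. Below is a sketch (assuming NumPy and SciPy), sanity-checked on an iid Poisson series for which the predictive distribution is correctly specified:

```python
import numpy as np
from scipy.stats import poisson

def randomized_pit(y, cdf, seed=0):
    """u_t = F_t(y_t - 1) + v_t * {F_t(y_t) - F_t(y_t - 1)}, v_t iid U(0,1).
    `cdf(k, t)` returns the predictive CDF at k for time t."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(size=len(y))
    lo = np.array([cdf(yt - 1, t) for t, yt in enumerate(y)])
    hi = np.array([cdf(yt, t) for t, yt in enumerate(y)])
    return lo + v * (hi - lo)

# under the true model Y_t ~ Pois(4) iid, the PIT should look U(0,1)
y = np.random.default_rng(1).poisson(4.0, size=20_000)
u = randomized_pit(y, lambda k, t: poisson.cdf(k, 4.0))
```

In a real application, `cdf` would return the fitted predictive CDF (Poisson or negative binomial with the estimated time-varying parameter) at each $t$.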
The failure of the Poisson-based models could be explained, as mentioned above, by the over-dispersion in the data. \begin{figure} \centering \makebox{\includegraphics[scale=.5]{Ericsson_PIT}} \caption{Left: histograms of randomized PIT's for all of the models fitted to the Ericsson stock data; Right: QQ-plots of $\tilde{u}_t$ against standard uniform distribution for the corresponding models, where the straight line is the $45^{\circ}$ line with zero intercept.} \label{fig:Ericsson_PIT} \end{figure} To measure the predictive power of the models, various scoring rules have been proposed in the literature; see, e.g., \cite{Czado} and \cite{Jung11}. Most of them are computed as averages of quantities related to the predictions and take the form $(n-1)^{-1}\sum_{t=2}^n s(F_t, Y_t)$, where $F_t$ is the prediction distribution and $s$ denotes a scoring rule. In this paper we calculate three scoring rules: the logarithmic score (LS), the quadratic score (QS) and the ranked probability score (RPS), as a basis for evaluating the relative performance of our fitted models. For the definitions of these scores, see \cite{Jung11}. Table \ref{tab:Ericsson_scores} summarizes these scores for all of the fitted models. As seen from the table, most of the diagnostic tools favor the one-knot negative binomial model for the Ericsson data. 
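For completeness, the three scores admit short implementations. The sketch below uses the definitions standard in this literature (logarithmic score $-\log p_y$, quadratic score $-2p_y+\sum_k p_k^2$, and $\sum_k \{F(k)-\mathbf{1}(y\le k)\}^2$ for the RPS); these conventions should be checked against \cite{Jung11}:

```python
import math

def scores(pmf, y, kmax=100):
    """Logarithmic, quadratic and ranked probability scores of one observation
    y under a predictive pmf supported (effectively) on {0, ..., kmax}."""
    p = [pmf(k) for k in range(kmax + 1)]
    ls = -math.log(p[y])                            # logarithmic score
    qs = -2.0 * p[y] + sum(pk * pk for pk in p)     # quadratic score
    F, rps = 0.0, 0.0
    for k in range(kmax + 1):                       # ranked probability score
        F += p[k]
        rps += (F - (1.0 if y <= k else 0.0)) ** 2
    return ls, qs, rps

# example: Poisson(2) predictive distribution, observed y = 1
pois2 = lambda k: math.exp(-2.0) * 2.0 ** k / math.factorial(k)
ls, qs, rps = scores(pois2, 1)
```

Averaging such quantities over $t$ yields overall scores of the kind reported in Table \ref{tab:Ericsson_scores}.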
\begin{table} \caption{\label{tab:Ericsson_scores}Quantitative model checking for Ericsson data} \centering \resizebox{13.5cm}{!}{ \begin{tabular}{| l c c c c c|} \hline Model & log likelihood &$p$-value of PIT & LS & QS & RPS \\ \hline Poisson INGARCH & -1433.19 & $<10^{-5}$ & 3.1167 & -0.0576 & 2.6883 \\ NB INGARCH & -1332.02 & 0.7386 & 2.8958 & -0.0671 & 2.6063 \\ 1-knot Poisson model & -1431.21 & $<10^{-5}$& 3.1123 & -0.0573 & 2.6848 \\ 2-knot Poisson model & -1431.08 & $<10^{-5}$ & 3.1121 & -0.0575 & 2.6843 \\ 3-knot Poisson model & -1430.58 & $<10^{-5}$ & 3.1110 & -0.0580 & 2.6779 \\ 1-knot NB model & $\boldsymbol{-1331.34}$ & 0.8494 & $\boldsymbol{2.8942}$ & $\boldsymbol{-0.0671}$ & $\boldsymbol{2.6021}$ \\ Exp-auto model & -1448.69 & $<10^{-5}$& 3.1504 & $-0.0600$ & 2.6924 \\ \hline \end{tabular}} \end{table} \normalsize \subsubsection*{2. Return times of extreme events of Goldman Sachs Group (GS) stock} \medskip As a second example, we construct a time series based on daily log-returns of Goldman Sachs Group (GS) stock from May 4th, 1999 to March 16th, 2012. We first calculate the hitting times, $\tau_1,\tau_2,\ldots$, at which the log-returns of GS stock fall outside the $0.05$ and $0.95$ quantiles of the data. The discrete time series of interest will be the return (or inter-arrival) times $Y_t=\tau_t-\tau_{t-1}$. If the data are in fact iid, or do not exhibit clustering of large values, then the $Y_t$'s should be independent and geometrically distributed with probability of success $p=0.1$ (\cite{ChangPhD}). Figure \ref{fig:gs_return_times} plots the return times of the stock, and the ACF and histogram of the return times. Note that in order to ameliorate the visual effect of some extremely large observations, the time series is also plotted in the top right panel of Figure \ref{fig:gs_return_times} on a reduced vertical scale, in which it is truncated at 80 and the five observations that are affected are depicted by solid triangles. 
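The construction of the return-time series can be sketched as follows (assuming NumPy; how ties with the empirical quantiles are treated is a detail we gloss over here). For iid data, the mean return time should be about $1/0.1=10$:

```python
import numpy as np

def return_times(logret, lo_q=0.05, hi_q=0.95):
    """Return (inter-arrival) times between hits outside the empirical
    (lo_q, hi_q) quantiles of a log-return series."""
    lo, hi = np.quantile(logret, [lo_q, hi_q])
    tau = np.flatnonzero((logret < lo) | (logret > hi))   # hitting times
    return np.diff(tau)

# iid Gaussian benchmark: roughly 10% of the points are exceedances
y = return_times(np.random.default_rng(0).standard_normal(5000))
```

By construction $Y_t\ge 1$, which is why the trial-counting version of the geometric distribution is the natural reference model below.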
\begin{figure} \centering \makebox{\includegraphics[scale = .7]{gs_return_times}} \caption{Top left: Return times of GS stock, where the dashed horizontal line is drawn at 80; Top right: Return times truncated at 80 in order to ameliorate the visual effect of the five large observations that are represented by solid triangles; Bottom left: ACF of the return times; Bottom right: Histogram of the return times, where the curve overlaid is the density function of a geometric distribution with $p=0.1$.} \label{fig:gs_return_times} \end{figure} To explore this time series, three models are fitted to the data: the geometric INGARCH (the negative binomial INGARCH (\ref{eq:nbingarch11}) with $r=1$) and the 1-knot and 2-knot geometric-based models. The number of knots for the nonlinear dynamic models is chosen by minimizing the AIC, and the locations of the knots are estimated by maximizing the likelihood based on a grid search. In addition, the following constraint is imposed: there should be at least 30 observations in each of the regimes segmented by the knots in order to guarantee that there are sufficient observations to obtain quality estimates of the parameters. The sample quantile method for estimating knot locations did not perform as well. Since it follows from the definition of return times that $Y_t\ge 1$ for any $t$, we use a version of the geometric distribution that counts the total number of trials, instead of only the failures. In particular, the fitted 1-knot geometric-based model is given by $Y_t-1|\mathcal{F}_{t-1}\sim \mbox{Geom}(p_t)$, where \begin{eqnarray*} X_t=0.5042+0.4729X_{t-1} + 0.5271(Y_{t-1}-1)-0.0526(Y_{t-1}-5)^+, \end{eqnarray*} and the fitted 2-knot geometric-based model is \begin{eqnarray*} X_t=0.5414+0.4531X_{t-1}+0.5469Y_{t-1}-0.2333(Y_{t-1}-9)^++0.2332(Y_{t-1}-18)^+, \end{eqnarray*} where $X_t=(1-p_t)/p_t$. 
Notice that in both models, $\hat{\alpha}+\hat{\beta}$ is very close to unity, i.e., the estimated parameters are close to the boundary of the parameter space. This is similar to the integrated GARCH (IGARCH) model in which $\alpha+\beta=1$. In our application, the mean of the time series of return times is about 10, while the variance is 1101. A simple simulation from the fitted model yields a mean and median very close to those of the data, while the variance of the simulated data is extraordinarily large, mirroring this feature of the observed data. This is because, although the fitted models are still stationary, the parameters no longer satisfy the conditions specified in Theorem \ref{NonLinearAsympNormal} that ensure a finite variance. It turns out that the geometric-based models fitted above are capable of capturing the high-volatility part of the data. Their standardized Pearson residuals are also calculated and appear to be white. Results of the PIT test are depicted in Figure \ref{fig:exceedance_PIT}, and the prediction scores and the $p$-values of the PIT test are summarized in Table \ref{tab:exceedance_scores}. Two Poisson-based models are also included for comparison, and as expected, they do not perform as well as the geometric-based models. 
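The simulation mentioned above can be reproduced in outline. The sketch below iterates the fitted 1-knot recursion (coefficients copied from the display above), drawing $Y_t-1$ from a geometric distribution with success probability $p_t=1/(1+X_t)$, which follows from $X_t=(1-p_t)/p_t$; the seed, sample size, and initial values are arbitrary choices.

```python
import numpy as np

def simulate_1knot_geom(n, seed=0):
    """Simulate the fitted 1-knot geometric-based model:
    X_t = 0.5042 + 0.4729 X_{t-1} + 0.5271 (Y_{t-1}-1) - 0.0526 (Y_{t-1}-5)^+,
    with Y_t - 1 | F_{t-1} ~ Geom(p_t) and p_t = 1/(1 + X_t)."""
    rng = np.random.default_rng(seed)
    x, y = 9.0, 10.0  # arbitrary starting values near the sample mean
    out = np.empty(n)
    for t in range(n):
        x = 0.5042 + 0.4729 * x + 0.5271 * (y - 1) - 0.0526 * max(y - 5, 0)
        p = 1.0 / (1.0 + x)  # X_t = (1 - p_t)/p_t
        # rng.geometric counts trials (support {1, 2, ...}), so a single
        # draw equals Y_t = (Y_t - 1) + 1 with Y_t - 1 counting failures
        y = rng.geometric(p)
        out[t] = y
    return out

sim = simulate_1knot_geom(5000)
# Summaries of sim can be compared with the data: the text reports that
# such simulations match the mean and median (about 10) while producing
# an extraordinarily large sample variance.
```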
\begin{figure} \centering \makebox{\includegraphics[scale=.5]{gs_exceedance_PIT.eps}} \caption{Left: histograms of randomized PIT's for the models fitted to GS return times; Right: QQ-plots of $\tilde{u}_t$ against standard uniform distribution for the corresponding models, where the straight line is the $45^{\circ}$ line with zero intercept.} \label{fig:exceedance_PIT} \end{figure} \begin{table} \caption{\label{tab:exceedance_scores}Quantitative model checking for GS return times} \centering \resizebox{13.5cm}{!}{ \begin{tabular}{| l c c c c c|} \hline Model & log likelihood &$p$-value of PIT & LS & QS & RPS \\ \hline Poisson INGARCH & -2681.06 & $<10^{-5}$ & 8.2842 & -0.0675 & 4.1373 \\ Geom INGARCH & -857.73 & 0.2581 & 2.6477 & -0.1436 & 3.4100 \\ 3-knot Poisson model & -2670.33 & $<10^{-5}$ & 8.2510 & -0.0693 & 4.1400 \\ 1-knot Geom model & -857.58 & 0.3988 & 2.6472 & $\boldsymbol{-0.1436}$ & 3.4041 \\ 2-knot Geom model & $\boldsymbol{-857.42}$ & 0.2006 & $\boldsymbol{2.6468}$ & -0.1435 & $\boldsymbol{3.3939}$ \\ \hline \end{tabular}} \end{table} \normalsize \section*{Acknowledgement} This research is supported in part by NSF grant DMS-1107031. \section*{Appendix A. Properties of the exponential family} An important property of the one-parameter exponential family that is heavily used in this paper is stochastic monotonicity. A random variable $X$ is said to be stochastically smaller than a random variable $Y$ (written as $X\le_{ST} Y$) if $F(x)\ge G(x)$ for all $x$, where $F(x)$ and $G(x)$ are the cumulative distribution functions of $X$ and $Y$ respectively. We refer readers to \cite{YamingYu} for the related theory. \begin{prop} \label{STexponential} Suppose two random variables $Y'$ and $Y''$ follow distributions belonging to the one-parameter exponential family (\ref{eq:expfamily}) with the same $A, h$ and $\mu$, but with natural parameters $\eta'$ and $\eta''$ respectively. If $\eta'\le \eta''$, then $Y'$ is stochastically smaller than $Y''$. 
\end{prop} \begin{proof} Denote the probability density functions of $Y'$ and $Y''$ as $p(y|\eta')$ and $p(y|\eta'')$ defined in (\ref{eq:expfamily}), respectively. Then the log ratio of the two densities is \begin{eqnarray*} l(y)&=&\log\frac{p(y|\eta')}{p(y|\eta'')}=\log\frac{\exp\{\eta' y-A(\eta')\}h(y)}{\exp\{\eta'' y-A(\eta'')\}h(y)}\\ &=&y(\eta'-\eta'')+[A(\eta'')-A(\eta')], \end{eqnarray*} which is linear, and hence concave, in $y$. So it follows from Definition 2 in \cite{YamingYu} that $Y'$ is log concave relative to $Y''$, i.e., $Y'\le_{lc} Y''$. Moreover, since $A(\eta)$ is increasing in $\eta$, we have $\lim_{y\downarrow 0}l(y)=A(\eta'')-A(\eta')\ge 0$ for continuous $p(y|\eta)$, and $p(0|\eta')/p(0|\eta'')\ge1$ for discrete $p(y|\eta)$. Hence according to Theorem 1 in \cite{YamingYu}, $Y'$ is stochastically smaller than $Y''$, i.e., $Y'\le_{ST} Y''$. \end{proof} Denote by $F_x$ the cumulative distribution function of $p(y|\eta)$ in (\ref{eq:expfamily}) with $x=B(\eta)$, and define its inverse $F_x^{-1}(u):=\inf\{t\ge 0: F_x(t)\ge u\}$ for $u\in [0,1]$. The result below provides a useful tool for the coupling technique employed to establish mixing conditions for the observation process. \begin{prop} \label{SameTheta} Suppose that $U$ is a uniform $(0, 1)$ random variable, and define two random variables $Y'$ and $Y''$ as \begin{eqnarray*} Y'=F_{x'}^{-1}(U)~~~\mbox{and}~~~ Y''=F_{x''}^{-1}(U), \end{eqnarray*} where $x'=B(\eta')$ and $x''=B(\eta'')$. Then $\mbox{E} |Y'-Y''|=|x'-x''|$. \end{prop} \begin{proof} It follows from the construction of $Y'$ and $Y''$ that they follow the one-parameter exponential family (\ref{eq:expfamily}) with natural parameters $\eta'$ and $\eta''$ respectively, and $\mbox{E} Y'=x'$, $\mbox{E} Y''=x''$. If $x'\le x''$, then $Y'$ is stochastically smaller than $Y''$ by virtue of Proposition \ref{STexponential}. It follows that $F_{x'}^{-1}(u)\le F_{x''}^{-1}(u)$ for $u\in (0,1)$, i.e., $Y'\le Y''$. 
This implies $\mbox{E}|Y'-Y''|=\mbox{E}(Y''-Y')=x''-x'$. Similarly if $x'\ge x''$, then $\mbox{E}|Y'-Y''|=x'-x''$. Hence we have $\mbox{E}|Y'-Y''|=|x'-x''|$. \end{proof} \section*{Appendix B. Proofs} \subsection*{B.1. Proof of Proposition \ref{modelgmc}} It suffices to verify the two conditions formulated in \cite{Weibiao04}. For any $y_0$ in the state space $S$, $\mbox{E}|y_0-f_{u}(y_0)|=\int_0^1 |y_0-g(y_0, F^{-1}_{y_0}(u))|du\le y_0+g(0,0)+a y_0+b\int_0^1 F_{y_0}^{-1}(u)du\le g(0,0)+(1+a+b)y_0<\infty$. Next for a fixed $x_0\in S$, there exists a unique $\eta_0$ such that $x_0=B(\eta_0)$ due to the strict monotonicity of $B(\eta)$. For any $x\ge x_0$, there exists a unique $\eta\ge \eta_0$ such that $x=B(\eta)\ge B(\eta_0)=x_0$. Hence by the contraction condition (\ref{ContractionFunction}), we have \begin{eqnarray} \mbox{E}|X_1(x)-X_1(x_0)|&=&\int_0^1\bigr|g\bigr(x, F_{x}^{-1}(u)\bigr)-g\bigr(x_0, F_{x_0}^{-1}(u)\bigr)\bigr|du \nonumber\\ &\le&a|x-x_0|+b\int_0^1\bigr|F_x^{-1}(u)-F_{x_0}^{-1}(u)\bigr|du. \label{eq:gmc2} \end{eqnarray} It follows from $x\ge x_0$ and Proposition \ref{STexponential} that for any $u\in (0,1)$, $F_{x_0}^{-1}(u)\le F_x^{-1}(u)$. Therefore \begin{eqnarray*} \mbox{E}|X_1(x)-X_1(x_0)|&\le&a(x-x_0)+b\{\int_0^1 F_x^{-1}(u)du-\int_0^1 F_{x_0}^{-1}(u)du\}\\ &=&(a+b)(x-x_0). \end{eqnarray*} Similarly for $x<x_0$, we have $\mbox{E}|X_1(x)-X_1(x_0)|\le(a+b)(x_0-x)$. So for any $x\in S$, we have $\mbox{E}|X_1(x)-X_1(x_0)|\le(a+b)|x-x_0|$. Now suppose $\mbox{E}|X_n(x)-X_n(x_0)|\le(a+b)^n|x-x_0|$, then \small \begin{eqnarray*} \mbox{E}|X_{n+1}(x)-X_{n+1}(x_0)|&=&\mbox{E}[\mbox{E}\{|X_{n+1}(X_n(x))-X_{n+1}(X_n(x_0))|\bigr|U_1,\ldots, U_n\}]\\ &\le&\mbox{E}\{(a+b)|X_n(x)-X_n(x_0)|\}\\ &\le&(a+b)^{n+1}|x-x_0|. \end{eqnarray*} \normalsize By induction, $\{X_t\}$ is geometric moment contracting and as a result, $\pi$ is its unique stationary distribution. 
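The identity $\int_0^1|F_x^{-1}(u)-F_{x_0}^{-1}(u)|\,du=|x-x_0|$ exploited in the contraction step above (Proposition \ref{SameTheta}) can also be checked numerically. The sketch below is an illustration only, assuming the Poisson family, for which $B(\eta)=e^{\eta}$ so that $x$ is the mean; the helper \texttt{poisson\_quantile} and all numerical values are hypothetical choices, not part of the proof.

```python
import numpy as np

def poisson_quantile(u, mu, kmax=200):
    """Inverse CDF F_mu^{-1}(u) of Poisson(mu), via a tabulated CDF."""
    k = np.arange(kmax + 1)
    log_pmf = k * np.log(mu) - mu - np.cumsum(np.log(np.maximum(k, 1)))
    cdf = np.cumsum(np.exp(log_pmf))
    return np.searchsorted(cdf, u)  # smallest k with F(k) >= u

rng = np.random.default_rng(0)
u = rng.uniform(size=200_000)  # one common uniform per coupled draw
y1 = poisson_quantile(u, 3.0)  # Y'  = F_{x'}^{-1}(U),  x'  = 3.0
y2 = poisson_quantile(u, 4.5)  # Y'' = F_{x''}^{-1}(U), x'' = 4.5

# Stochastic monotonicity forces the coupled draws to be ordered pointwise,
assert np.all(y1 <= y2)
# and the mean gap approximates |x' - x''| = 1.5 up to Monte Carlo error.
gap = float(np.abs(y1 - y2).mean())
```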
To show that $\mbox{E}_{\pi}X_1<\infty$, notice that by taking conditional expectation on both sides of (\ref{eq:BoundOfG}), we have $\mbox{E}(X_t|X_{t-1})\le g(0,0)+(a+b)X_{t-1}$. Inductively one can show that for any $t\ge 1$, \begin{eqnarray*} \mbox{E}(X_t|X_1)\le \frac{1-(a+b)^{t-1}}{1-(a+b)}g(0,0)+(a+b)^{t-1}X_1. \end{eqnarray*} Since for any $x\in S$, $X_t(x)\stackrel{\mathcal{L}}{\longrightarrow}X_1\sim\pi$ as $t\rightarrow \infty$, and in particular $X_t(0)\stackrel{\mathcal{L}}{\longrightarrow}X_1\sim\pi$, by Theorem 3.4 in \cite{Billingsley99} we have \begin{eqnarray*} \mbox{E}_{\pi}X_1\le \displaystyle\liminf_{t\rightarrow\infty}\mbox{E}(X_t|X_1=0)\le \frac{g(0,0)}{1-(a+b)}<\infty. \end{eqnarray*} To prove (c), let $\{\xi_t, t\ge 1\}$ be a sequence of independent uniform $(0,1)$ random variables, independent of $\{X_t, t\ge 1\}$; then $Y_t=F_{X_t}^{-1}(\xi_t)$. Since $\{(X_t, \xi_t), t\ge 1\}$ is a stationary sequence when $X_1\sim \pi$, $\{Y_t, t\ge 1\}$ must also be a stationary process. \subsection*{B.2. Proof of Proposition \ref{discreteergodicity}} Define a sequence of functions $\{g_k, k\ge 1\}$ in a way such that $g_1=g$, and for $k\ge 2$, $g_k(x, y_1,\ldots, y_k)=g_{k-1}(g(x, y_k), y_1,\ldots, y_{k-1})$. Then it follows from (\ref{eq:expmodel}) that for all $t\in\mathbb{Z}$, \begin{eqnarray*} X_t=g_k(X_{t-k}, Y_{t-1}, \ldots, Y_{t-k}). \end{eqnarray*} By virtue of the contraction condition (\ref{ContractionFunction}), we have $\mbox{E}\bigr|X_t-g_1(0, Y_{t-1})\bigr|=\mbox{E}\bigr|g_1(X_{t-1}, Y_{t-1})-g_1(0, Y_{t-1})\bigr|\le a\mbox{E} X_{t-1}$. By induction, it follows that for any $k\ge 1$, \begin{eqnarray*} \mbox{E} \bigr|X_t-g_k(0, Y_{t-1},\ldots, Y_{t-k})\bigr|\le a^k~\mbox{E} X_{t-k}. \end{eqnarray*} Since $\mbox{E}_{\pi}X_1<\infty$, it follows that $g_k(0, Y_{t-1},\ldots, Y_{t-k})\stackrel{L^1}{\longrightarrow}X_t$, as $k\rightarrow \infty$. 
Hence there exists a measurable function $g_{\infty}:\mathbb{N}_0^{\infty}=\{(n_1, n_2, \ldots), n_i\in \mathbb{N}_0\}\longrightarrow [0,\infty)$ such that $X_t=g_{\infty}(Y_{t-1}, Y_{t-2},\ldots)$ almost surely, which proves (a). To prove (b), denote $\mathcal{F}^{Y}_{k,l}=\sigma\{Y_k, \ldots, Y_l\}$ for $-\infty\le k\le l\le \infty$. Then the coefficients of absolute regularity of the stationary count process $\{Y_t, t\in \mathbb{Z}\}$ are defined as \begin{eqnarray*} \beta(n)=\mbox{E}\bigr\{\sup_{A\in \mathcal{F}^Y_{n,\infty}}\bigr|P(A|\mathcal{F}^Y_{-\infty, 0})-P(A)\bigr|\bigr\}, \end{eqnarray*} where $\mathcal{F}^{Y}_{-\infty,0}=\sigma\{X_1, Y_0,Y_{-1}, \ldots\}$ according to $(a)$. Because the distribution of $(Y_n, Y_{n+1}, \ldots)$ given $\sigma\{X_1, Y_0, Y_{-1}, \ldots\}$ is the same as that of $(Y_n, Y_{n+1}, \ldots)$ given $X_1$ for $n\ge 1$, the coefficients of absolute regularity become \begin{eqnarray} \beta(n)&=&\mbox{E}\bigr\{\sup_{A\in \mathcal{F}^{Y}_{n,\infty}}\bigr|P(A|\sigma\{X_1, Y_0,Y_{-1},\ldots\})-P(A)\bigr|\bigr\}\nonumber \\ &=&\mbox{E}\bigr\{\sup_{A\in \mathcal{F}^{Y}_{n,\infty}}\bigr|P(A|X_1)-P(A)\bigr|\bigr\}. \label{betacoef} \end{eqnarray} Let $\mathcal{B}^{\infty}$ be the $\sigma$-field in $\mathbb{R}^{\infty}$ generated by the cylinder sets, then we can rewrite the coefficients of absolute regularity as \begin{eqnarray} \beta(n)=\mbox{E}\Bigr\{\sup_{A\in\mathcal{B}^{\infty}}\bigr|P\bigr((Y_n, Y_{n+1},\ldots)\in A|X_1\bigr)-P\bigr((Y_n, Y_{n+1},\ldots)\in A\bigr)\bigr|\Bigr\}. \label{beta_n} \end{eqnarray} We will provide an upper bound for (\ref{beta_n}) by coupling two chains $\{(X_n', Y_n'), n\in \mathbb{Z}\}$ and $\{(X_n'', Y_n''), n\in \mathbb{Z}\}$ defined on a common probability space. Assume that both chains start from the stationary distribution, that is, $X_1'\sim \pi$, $X_1''\sim \pi$ and that $X_1'$ is independent of $X_1''$. 
Let $\{U_k, k\in \mathbb{Z}\}$ be an iid sequence of uniform $(0,1)$ random variables, and construct the chains as follows: \begin{eqnarray*} &&X_n'=g\bigr(X_{n-1}', F^{-1}_{X_{n-1}'}(U_{n-1})\bigr),~~~ Y_n'=F_{X_n'}^{-1}(U_n),\\ &&X_n''=g\bigr(X_{n-1}'', F^{-1}_{X_{n-1}''}(U_{n-1})\bigr),~~~ Y_n''=F_{X_n''}^{-1}(U_n). \end{eqnarray*} Since $X_1'$ and $X_1''$ are independent, for any $A\in \mathcal{B}^{\infty}$, \begin{eqnarray*} P((Y_n'', Y_{n+1}'',\ldots)\in A|X_1')=P((Y_n, Y_{n+1},\ldots)\in A). \end{eqnarray*} Hence we have \begin{eqnarray} &&\bigr|P\bigr((Y_n, Y_{n+1},\ldots)\in A|X_1=x\bigr)-P\bigr((Y_n, Y_{n+1},\ldots)\in A\bigr)\bigr| \nonumber \\ &=&\bigr|P\bigr((Y_n', Y_{n+1}',\ldots)\in A|X_1'=x\bigr)-P\bigr((Y_n'', Y_{n+1}'',\ldots)\in A|X_1'=x\bigr)\bigr| \nonumber \\ &\le& P\bigr((Y_n',Y_{n+1}',\ldots)\neq (Y_n'', Y_{n+1}'',\ldots)|X_1'=x\bigr). \label{eq:differenceP} \end{eqnarray} Therefore the coefficients of absolute regularity are bounded by \begin{eqnarray} \beta(n)\le P\bigr((Y_n',Y_{n+1}',\ldots)\neq (Y_n'', Y_{n+1}'',\ldots)\bigr)\le \displaystyle\sum_{k=0}^{\infty}P(Y_{n+k}'\neq Y_{n+k}''). \label{eq:finalbeta} \end{eqnarray} Observe that the construction of the two chains agrees with the definition of geometric moment contraction (Definition 1 in \cite{Weibiao04}), so it follows from Proposition \ref{modelgmc} that $\mbox{E}|X_n'-X_n''|\le (a+b)^n$ for all $n$. Then \begin{eqnarray*} P(Y_n'\neq Y_n'')&=&\mbox{E}\{P(Y_n'\neq Y_n''|X_n',X_n'')\}= \mbox{E}\{P(|Y_n'-Y_n''|\ge 1|X_n', X_n'')\} \\ &\le&\mbox{E}\{\mbox{E}(|Y_n'-Y_n''|\bigr|X_n',X_n'')\}=\mbox{E}|X_n'-X_n''|\le (a+b)^n. \end{eqnarray*} Hence according to (\ref{eq:finalbeta}), the coefficients of absolute regularity satisfy $\beta(n)\le \sum_{k=0}^{\infty}(a+b)^{n+k}=(a+b)^n/(1-(a+b))$. Recall the well-known fact that $\beta$-mixing implies strong mixing (e.g., \cite{Doukhan94}), so $\{Y_t, t\ge 1\}$ is stationary and strongly mixing at a geometric rate; in fact, it is ergodic. 
In particular, $\{Y_t, t\ge 1\}$ is an ergodic stationary process. It follows from $X_t=g_{\infty}(Y_{t-1}, Y_{t-2},\ldots)$ that $\{X_t, t\ge 1\}$ is also ergodic. \subsection*{B.3. Proof of Proposition \ref{ContinuousErgodicity}} The proof utilizes classical Markov chain theory; see for example \cite{MeynTweedie}. (a) follows from the same argument as in the proof of Proposition \ref{discreteergodicity}. As for (b), for any fixed $\epsilon>0$, define $\phi$ as Lebesgue measure on $[x^{\ast}, \infty)$, where $x^{\ast}=(g(0,0)+b\epsilon)/(1-a)$, and let $A$ be a set with $\phi(A)>0$. To prove $\phi$-irreducibility, we need to show that for any $x_1\in S$, there exists $n\ge 1$, such that $P^n(x_1, A)>0$. If $x_1<x^{\ast}$, then $g(x_1, \epsilon)<g(0,0)+ax_1+b\epsilon\le x^{\ast}$, which implies that $\phi\bigr(A\cap[g(x_1, \epsilon),\infty)\bigr)>0$. Because of the assumptions on the function $g$, and the fact that the distribution of $Y_1$ given $X_1=x_1$ has positive probability everywhere, we have $P(x_1, A)>0$. On the other hand, if $x_1\ge x^{\ast}$, it is easy to see that $g(x_1, \epsilon/2)\le g(x_1, \epsilon)\le x_1$. If $g(x_1, \epsilon/2)<x^{\ast}$, then by the same argument above, we have $P(x_1, A)>0$. However, if $g(x_1, \epsilon/2)\ge x^{\ast}$, then $ag(x_1,\epsilon/2)+b\epsilon\le g(x_1,\epsilon/2)-g(0,0)\le ax_1+b\epsilon/2$. Hence we have $x^{\ast}\le g(x_1, \epsilon/2)\le x_1-(b\epsilon)/(2a)$. By induction, there exists $n\ge 1$ such that $g(x_n, \epsilon/2)\le x_1-n(b\epsilon)/(2a)<x^{\ast}$, where $x_t=g(x_{t-1},\epsilon/2)$ for $t=1,\ldots,n$. Since $\epsilon>0$, and the function $g$ is increasing in both coordinates, we conclude that $P^{n+1}(x_1, A)>0$. Hence $\{X_t, t\ge 1\}$ is $\phi$-irreducible. We now show that $\{X_t, t\ge 1\}$ is aperiodic; recall that a $\phi$-irreducible Markov chain is aperiodic if there exists a small set $A$ with $\phi(A)>0$ such that for any $x\in A$, $P(x, A)>0$ and $P^2(x, A)>0$. 
Note that in the setting of the proposition, any compact set is a small set. So we take $A=[x^{\ast}, K]$ for some positive $K$ large enough. For any $x_1\in A$, from the proof of $\phi$-irreducibility, it is easy to see that $P(x_1, A)>0$. Similarly we have $P^2(x, A)=P(X_2\in A|X_0=x)\ge P(X_2\in A|X_1\in A)P(X_1\in A|X_0=x)>0$. To check the drift condition, let $V(x)=1+x$. There exists $\delta>0$ such that $a+b<1-\delta$. For $x\ge (g(0,0)+\delta)/(1-a-b-\delta)$, we have \begin{eqnarray*} \mbox{E} \{V(X_1)|X_0=x\}&=&\mbox{E} (1+X_1|X_0=x)=1+\mbox{E}\{g(x, Y_0)|X_0=x\}\\ &\le& 1+g(0,0)+(a+b)x\le(1-\delta)(1+x)=(1-\delta)V(x). \end{eqnarray*} Hence the drift condition holds by taking the small set $A=[x^{\ast}, \{g(0,0)+\delta\}/(1-a-b-\delta)]$, which establishes the geometric ergodicity of $\{X_t\}$. It is well known that a geometrically ergodic Markov chain starting from its stationary distribution is strongly mixing with geometrically decaying rate, hence is an ergodic stationary time series (e.g., \cite{MeynTweedie}). Let $\{\xi_t, t\ge 1\}$ be a sequence of iid uniform $(0,1)$ random variables; then it follows from $Y_t=F_{X_t}^{-1}(\xi_t)$ that $\{Y_t, t\ge 1\}$ is stationary and ergodic. \subsection*{B.4. Proof of Theorem \ref{Consistency}} We first show the identifiability and then establish the consistency result using Lemma \ref{WaldConsistency}. Throughout the proof, we assume that the process $\{(Y_t, X_t), t\in \mathbb{Z}\}$ is in its stationary regime. Note that by assumption (A1), $X_t(\theta)\ge x_{\theta}^{\ast}\in\mathcal{R}(B)$, which implies $\eta_t(\theta)\ge B^{-1}(x_{\theta}^{\ast})$. So it follows from assumptions (A2) and (A4) that for any $\theta\in \Theta$, \begin{eqnarray*} \mbox{E} l_t(\theta)&=&\mbox{E}\bigr\{Y_t B^{-1}(X_t(\theta))-A\bigr(B^{-1}(X_t(\theta))\bigr)\bigr\}\\ &\le&\mbox{E}\bigr\{Y_t\displaystyle\sup_{\theta\in \Theta}B^{-1}(X_t(\theta))\bigr\}-A(B^{-1}(x^{\ast}_{\theta}))<\infty. 
\end{eqnarray*} This implies $\mbox{E} l_t^{+}(\theta)<\infty$. Denote $M_n(\theta)=\sum_{t=1}^n l_t(\theta)/n$, then $M_n(\theta)\stackrel{a.s.}{\longrightarrow}M(\theta)=\mbox{E}\bigr\{Y_1\eta_1(\theta)-A(\eta_1(\theta))\bigr\}$ according to the extended mean ergodic theorem (see \cite{billingsley95} pp. 284 and 495). In order to prove the identifiability, we need to show that $\theta_0$ is the unique maximizer of $M(\theta)$, that is, for any $\theta\in \Theta\setminus\{\theta_0\}$, $M(\theta)-M(\theta_0)<0$. First it follows from assumption (A5) that for any $\theta\neq \theta_0$ and all $t$, $P_{\theta_0}(G_t(\theta,\theta_0))>0$, where $G_t(\theta,\theta_0)=\{X_t(\theta)\neq X_t(\theta_0)\}$. Let $G=G_t(\theta,\theta_0)$, then we have \begin{eqnarray*} M(\theta)-M(\theta_0)&=&\mbox{E}\bigr[Y_t\bigr\{B^{-1}(X_t(\theta))-B^{-1}\bigr(X_t(\theta_0)\bigr)\bigr\}\\ &&-\bigr\{A(B^{-1}(X_t(\theta)))-A(B^{-1}(X_t(\theta_0)))\bigr\}\bigr]\\ &=&\mbox{E}\bigr[X_t(\theta_0)\bigr\{B^{-1}(X_t(\theta))-B^{-1}\bigr(X_t(\theta_0)\bigr)\bigr\}\\ &&-\bigr\{A(B^{-1}(X_t(\theta)))-A(B^{-1}(X_t(\theta_0)))\bigr\}\bigr]\\ &=&\int_GX_t(\theta_0)\bigr\{B^{-1}(X_t(\theta))-B^{-1}\bigr(X_t(\theta_0)\bigr)\bigr\}\\ &&-\bigr\{A(B^{-1}(X_t(\theta)))-A(B^{-1}(X_t(\theta_0)))\bigr\}dP_{\theta_0}. \end{eqnarray*} On the set $G$, there exists $c\in \mathbb{R}$ between $B^{-1}\bigr(X_t(\theta)\bigr)$ and $B^{-1}\bigr(X_t(\theta_0)\bigr)$ such that $A(B^{-1}(X_t(\theta)))-A(B^{-1}(X_t(\theta_0)))=B(c)\{B^{-1}(X_t(\theta))-B^{-1}(X_t(\theta_0))\}$ by the mean value theorem. It follows from $A''(\eta)>0$ that $A(\eta)$ is strictly convex and $c$ must be strictly between $B^{-1}(X_t(\theta))$ and $B^{-1}(X_t(\theta_0))$. So there exists $\xi\in\mathbb{R}$ lying strictly between $X_t(\theta)$ and $X_t(\theta_0)$ such that $\xi=B(c)$. Therefore \begin{eqnarray*} M(\theta)-M(\theta_0)=\int_G (X_t(\theta_0)-\xi)\{B^{-1}(X_t(\theta))-B^{-1}(X_t(\theta_0))\}dP_{\theta_0}. 
\end{eqnarray*} Since $B(\eta)$ is strictly increasing, $(X_t(\theta_0)-\xi)\{B^{-1}(X_t(\theta))-B^{-1}(X_t(\theta_0))\}<0$ in either of the two cases: $X_t(\theta)<X_t(\theta_0)$ and $X_t(\theta)>X_t(\theta_0)$. Hence $M(\theta)-M(\theta_0)<0$ for any $\theta\neq \theta_0$, which establishes the identifiability. To show the consistency, first note that by assumption (A4), we have \begin{eqnarray*} \mbox{E}\displaystyle\sup_{\theta\in \Theta}l_t(\theta)&=&\mbox{E}\{Y_t \sup_{\theta\in \Theta}B^{-1}(X_t(\theta))-\inf_{\theta\in\Theta}A(B^{-1}(X_t(\theta)))\}\\ &\le&\mbox{E}\{Y_t\displaystyle\sup_{\theta\in \Theta}B^{-1}(X_t(\theta))\}-A(B^{-1}(x^{\ast}))<\infty. \end{eqnarray*} The function $f_{\theta}$ in Lemma \ref{WaldConsistency} can be defined as \begin{eqnarray*} f_{\theta}(\mathbf{y})=y_1B^{-1}(g_{\infty}^{\theta}(y_0,y_{-1},\ldots))-A(B^{-1}(g_{\infty}^{\theta}(y_0, y_{-1},\ldots))), \end{eqnarray*} where $\mathbf{y}=(y_1, y_0, y_{-1},\ldots)$. Hence it follows from assumption (A2) and Lemma \ref{WaldConsistency} that $M(\theta)$ is upper-semicontinuous and for any compact subset $K\subset \Theta$, $\limsup_{n\rightarrow\infty}\sup_{\theta\in K}M_n(\theta)\le \sup_{\theta\in K}M(\theta)$. Take $\mathcal{U}_0$ to be a local base at $\theta_0$ and let $U\in\mathcal{U}_0$ be a neighborhood of $\theta_0$; then Lemma \ref{WaldConsistency} can be applied to $\Theta\setminus U$. Because an upper-semicontinuous function attains its maximum on a compact set and $M(\theta)<M(\theta_0)$ for any $\theta\neq \theta_0$, we have \begin{eqnarray} \displaystyle\limsup_{n\rightarrow\infty}\sup_{\theta\in \Theta\setminus U}M_n(\theta)\le \sup_{\theta\in\Theta\setminus U}M(\theta)<M(\theta_0),~~~P_{\theta_0}\mbox{-a.s.} \label{usc1} \end{eqnarray} Notice that for any $\tilde{\theta}\notin U$, $M_n(\tilde{\theta})\le \sup_{\theta\in\Theta\setminus U}M_n(\theta)$. Let $\omega\in\Omega$ be such that (\ref{usc1}) holds and $M(\theta_0)=\lim_{n\rightarrow\infty}M_n(\theta_0)$. 
For such $\omega$, suppose $\hat{\theta}_n\notin U$ infinitely often, say, along a sequence denoted by $\widetilde{\mathbb{N}}$, then \begin{eqnarray} \displaystyle\liminf_{n\rightarrow\infty} M_n(\hat{\theta}_n)&\le& \liminf_{n\rightarrow\infty, n\in\widetilde{\mathbb{N}}}M_n(\hat{\theta}_n)\le \limsup_{n\rightarrow\infty, n\in\widetilde{\mathbb{N}}}M_n(\hat{\theta}_n) \nonumber\\ &\le& \limsup_{n\rightarrow\infty, n\in\widetilde{\mathbb{N}}}\sup_{\theta\notin U}M_n(\theta)\le \limsup_{n\rightarrow\infty}\sup_{\theta\notin U}M_n(\theta). \label{usc2} \end{eqnarray} However, according to (\ref{usc1}), we have \begin{eqnarray*} \displaystyle\limsup_{n\rightarrow\infty}\sup_{\theta\in\Theta\setminus U}M_n(\theta)\le \sup_{\theta\in\Theta\setminus U}M(\theta)<M(\theta_0)=\lim_{n\rightarrow\infty} M_n(\theta_0)\le \liminf_{n\rightarrow\infty}M_n(\hat{\theta}_n), \end{eqnarray*} which contradicts (\ref{usc2}). Hence there exists a null-set $N_U$ such that for all $\omega\notin N_U$, $\hat{\theta}_n\in U$ for all $n$ large enough. It follows by taking any set $U\in \mathcal{U}_0$ that $\hat{\theta}_n$ converges to $\theta_0$ almost surely. \subsection*{B.5. Proof of Theorem \ref{AsympNormal}} We define a linearized form of $\eta_t(\theta)$ as $\eta_t^\dagger(\theta):=\eta_t(\theta_0)+(\theta-\theta_0)^T\dot{\eta}_t$, and the corresponding linearized log-likelihood function of $l(\theta)$ as \begin{eqnarray*} l^{\dagger}(\theta):=\displaystyle\sum_{t=1}^n \eta_t^{\dagger}(\theta)Y_t-\sum_{t=1}^n A(\eta_t^{\dagger}(\theta)). 
\end{eqnarray*} Let $u=\sqrt{n}(\theta-\theta_0)$ and define \small \begin{eqnarray} R_n^{\dagger}(u)&=& l^{\dagger}(\theta_0)-l^{\dagger}(\theta_0+u n^{-1/2}) \nonumber \\ &=&\displaystyle\sum_{t=1}^n Y_t \eta_t-\sum_{t=1}^n A(\eta_t)-\sum_{t=1}^n (\eta_t+u^T n^{-1/2}\dot{\eta_t})Y_t+\sum_{t=1}^n A(\eta_t+u^Tn^{-1/2}\dot\eta_t) \nonumber \\ &=&-u^T n^{-1/2} \sum_{t=1}^n Y_t \dot{\eta_t}+\sum_{t=1}^n \{A(\eta_t+u^Tn^{-1/2}\dot{\eta_t})-A(\eta_t)\} \nonumber \\ &=&-u^T n^{-1/2} \sum_{t=1}^n \{Y_t-B(\eta_t)\}\dot{\eta_t} \nonumber \\ &&+\sum_{t=1}^n \{A(\eta_t+u^Tn^{-1/2}\dot{\eta_t})-A(\eta_t)-u^Tn^{-1/2}B(\eta_t)\dot{\eta_t}\}. \label{eq:LinearR} \end{eqnarray} \normalsize Let $s_t=n^{-1/2}\{Y_t-B(\eta_t)\}\dot{\eta_t}$, then $\mbox{E}(s_t|\mathcal{F}_{t-1})=n^{-1/2}\mbox{E}[\{Y_t-B(\eta_t)\}\dot{\eta_t}|\mathcal{F}_{t-1}]=0$, so $\{s_t,t \ge 1\}$ is a martingale difference sequence. Note that \begin{eqnarray*} \displaystyle\sum_{t=1}^n \mbox{E}(s_ts_t^T|\mathcal{F}_{t-1})&=&\frac{1}{n}\sum_{t=1}^n \mbox{E}[\{Y_t-B(\eta_t)\}^2 \dot{\eta_t}\dot{\eta_t}^T|\mathcal{F}_{t-1}]\\ &=&\frac{1}{n}\sum_{t=1}^n B'(\eta_t)\dot{\eta_t}\dot{\eta_t}^T, \end{eqnarray*} which converges almost surely to $\Omega$ by the mean ergodic theorem and assumption (A7). Moreover, for any $\epsilon>0$ and fixed $M>0$ (so that $\epsilon\sqrt{n}\ge M$ for $n$ large), \begin{eqnarray*} &&\displaystyle\sum_{t=1}^n \mbox{E}\{s_t s_t^T \mathbf{1}_{[|s_t|\ge \epsilon]}|\mathcal{F}_{t-1}\}\\ &=&1/n\displaystyle\sum_{t=1}^n \dot{\eta_t} \dot{\eta_t}^T \mbox{E}[\{Y_t-B(\eta_t)\}^2\mathbf{1}_{[|\{Y_t-B(\eta_t)\}\dot{\eta_t}|\ge \epsilon\sqrt{n}]}|\mathcal{F}_{t-1}]\\ &\le&1/n\displaystyle\sum_{t=1}^n \dot{\eta_t} \dot{\eta_t}^T \mbox{E}[\{Y_t-B(\eta_t)\}^2\mathbf{1}_{[|\{Y_t-B(\eta_t)\}\dot{\eta_t}|\ge M]}|\mathcal{F}_{t-1}]\\ &\longrightarrow& \mbox{E}[\{Y_1-B(\eta_1)\}^2\dot{\eta_1}\dot{\eta_1}^T\mathbf{1}_{[|\{Y_1-B(\eta_1)\}\dot{\eta_1}|\ge M]}]~~\mbox{as}~n\rightarrow \infty\\ &\longrightarrow&0~~ \mbox{as}~M\rightarrow \infty. 
\end{eqnarray*} \normalsize Then it follows from the central limit theorem for martingale difference sequences that \begin{eqnarray*} \displaystyle\sum_{t=1}^n s_t\stackrel{\mathcal{L}}{\longrightarrow} V\sim N(0, \Omega),~~~\mbox{as}~~~n\rightarrow\infty, \end{eqnarray*} where $\Omega$ is evaluated at $\theta_0$. The other term in (\ref{eq:LinearR}) by Taylor expansion is \begin{eqnarray*} \frac{1}{2n}\displaystyle\sum_{t=1}^n u^T \{B'(\eta_t)\dot{\eta_t}\dot{\eta_t}^T\} u+\mathcal{O}_p(n^{-3/2}\displaystyle\sum_{t=1}^n B''(\eta_t)(u^T\dot{\eta_t})^3), \end{eqnarray*} which equals $u^T\Omega u/2+o_P(1)$. Hence $R_n^{\dagger}(u)\stackrel{\mathcal{L}}{\longrightarrow} -u^T V+\frac{1}{2}u^T \Omega u$, where $V\sim N(0, \Omega)$. It then follows that $\mbox{argmin}_u R_n^{\dagger}(u) \stackrel{\mathcal{L}}{\longrightarrow} \mbox{argmin}_u\{-u^T V+\frac{1}{2}u^T \Omega u\}=\Omega^{-1}V\sim N(0, \Omega^{-1})$. For the rest of the proof, we show that the difference between $R_n(u):=l(\theta_0)-l(\theta_0+un^{-1/2})$ and $R_n^{\dagger}(u)$ is negligible as $n$ grows large. By writing $\theta=\theta_0+un^{-1/2}$, the difference becomes \begin{eqnarray} R_n^{\dagger}(u)-R_n(u)&=& \displaystyle\sum_{t=1}^n \{Y_t-B(\eta_t)\}\{\eta_t(\theta)-\eta_t-u^Tn^{-1/2}\dot{\eta_t}\} \nonumber \\ &&-\displaystyle\sum_{t=1}^n [A(\eta_t(\theta))-A(\eta_t+u^Tn^{-1/2}\dot{\eta_t}) \nonumber\\ &&-B(\eta_t)\{\eta_t(\theta)-\eta_t-u^Tn^{-1/2}\dot{\eta_t}\}]. \label{eq:DifferenceR} \end{eqnarray} By Taylor expansion, the first term in (\ref{eq:DifferenceR}) is $1/(2n)\sum_{t=1}^n \{Y_t-B(\eta_t)\}u^T \ddot{\eta_t}(\theta_t^{\ast})u=1/(2n)u^T[\sum_{t=1}^n \{Y_t-B(\eta_t)\}\ddot{\eta_t}+\sum_{t=1}^n \{Y_t-B(\eta_t)\}\{\ddot{\eta_t}(\theta_t^{\ast})-\ddot{\eta_t}\}]u$, where $\theta_t^{\ast}$ lies between $\theta$ and $\theta_0$, and $\ddot{\eta_t}=\partial^2\eta_t/\partial \theta\partial\theta^T$. 
Since \begin{eqnarray*} \frac{1}{n}\sum_{t=1}^n \{Y_t-B(\eta_t)\}\ddot{\eta_t} &\stackrel{a.s.}{\longrightarrow}& \mbox{E}[\{Y_t-B(\eta_t)\}\ddot{\eta_t}]\\ &=&\mbox{E}[\ddot{\eta_t}\mbox{E}\{Y_t-B(\eta_t)|\mathcal{F}_{t-1}\}]=0, \end{eqnarray*} and $1/n\sum_{t=1}^n \{Y_t-B(\eta_t)\}\{\ddot{\eta_t}(\theta_t^{\ast})-\ddot{\eta_t}\} \stackrel{a.s.}{\longrightarrow} 0$ under the smoothness assumption, the first term in (\ref{eq:DifferenceR}) converges to 0 uniformly on $[-K, K]$ for any $K>0$. We now apply Taylor expansion to each component in the second term of (\ref{eq:DifferenceR}), \begin{eqnarray*} &&A(\eta_t(\theta))=A(\eta_t)+u^Tn^{-1/2}B(\eta_t)\dot{\eta}_t\\ &&~~~~~~~~~~~~~~~+\frac{1}{2n}u^T\{B(\eta_t(\theta_1^{\ast}))\ddot{\eta}_t(\theta_1^{\ast})+B'(\eta_t(\theta_1^{\ast}))\dot{\eta}_t(\theta_1^{\ast})\dot{\eta}_t(\theta_1^{\ast})^T\}u,\\ &&A(\eta_t+u^Tn^{-1/2}\dot{\eta_t})=A(\eta_t)+B(\eta_t)u^Tn^{-1/2}\dot{\eta_t}+\frac{1}{2n}u^TB'(c)\dot{\eta_t}\dot{\eta_t}^Tu,\\ &&\eta_t(\theta)=\eta_t(\theta_0+un^{-1/2})=\eta_t+u^Tn^{-1/2}\dot{\eta_t}+\frac{1}{2n}u^T\ddot{\eta_t}(\theta_2^{\ast})u, \end{eqnarray*} where $c$ lies between $\eta_t$ and $\eta_t+u^Tn^{-1/2}\dot{\eta_t}$, and $\theta_1^{\ast}$ and $\theta_2^{\ast}$ both lie between $\theta_0$ and $\theta$. 
Therefore the second term in (\ref{eq:DifferenceR}) becomes \small \begin{eqnarray*} &&\displaystyle\sum_{t=1}^n [A(\eta_t(\theta))-A(\eta_t+u^Tn^{-1/2}\dot{\eta_t})-B(\eta_t)\{\eta_t(\theta)-\eta_t-u^Tn^{-1/2}\dot{\eta_t}\}]\\ &=&\displaystyle\sum_{t=1}^n [A(\eta_t)+u^Tn^{-1/2}B(\eta_t)\dot{\eta_t}+\frac{1}{2n}u^T\{B(\eta_t(\theta_1^{\ast}))\ddot{\eta}_t(\theta_1^{\ast})+B'(\eta_t(\theta_1^{\ast}))\dot{\eta}_t(\theta_1^{\ast})\dot{\eta}_t(\theta_1^{\ast})^T\}u\\ &&-A(\eta_t)-B(\eta_t)u^Tn^{-1/2}\dot{\eta_t}-\frac{1}{2n}u^TB'(c)\dot{\eta_t}\dot{\eta_t}^Tu-B(\eta_t)\frac{1}{2n}u^T\ddot{\eta_t}(\theta_2^{\ast})u]\\ &=&\frac{1}{2n}u^T\displaystyle\sum_{t=1}^n[\{B(\eta_t(\theta_1^{\ast}))\ddot{\eta_t}(\theta_1^{\ast})-B(\eta_t)\ddot{\eta_t}(\theta_2^{\ast})\}+\{B'(\eta_t(\theta_1^{\ast}))\dot{\eta_t}(\theta_1^{\ast})\dot{\eta_t}(\theta_1^{\ast})^T\\ &&-B'(c)\dot{\eta_t}\dot{\eta_t}^T\}]u, \end{eqnarray*} \normalsize which converges to 0 uniformly over compact sets of $u$ under the smoothness assumptions. So (\ref{eq:DifferenceR}) converges to 0 as $n\rightarrow\infty$, which implies that $\mbox{argmin}_u R_n(u)$ and $\mbox{argmin}_u R_n^{\dagger}(u)$ have the same asymptotic distribution, i.e., \begin{eqnarray*} \displaystyle\mbox{argmin}_u R_n(u)\stackrel{\mathcal{L}}{\longrightarrow}\Omega^{-1}V\sim N(0, \Omega^{-1}). \end{eqnarray*} Note that $\mbox{argmin}_u R_n(u)=\mbox{argmax}_u~l(\theta_0+un^{-1/2})=\sqrt{n}(\hat{\theta}_n-\theta_0)$, where $\hat{\theta}_n$ is the conditional maximum likelihood estimator. Hence \begin{eqnarray*} \sqrt{n}(\hat{\theta}_n-\theta_0) \stackrel{\mathcal{L}}{\longrightarrow} N(0, \Omega^{-1}),~~~\mbox{as}~~n\rightarrow\infty. \end{eqnarray*} \subsection*{B.6. Proof of Theorem \ref{LinearAsymp}} According to Theorems \ref{Consistency} and \ref{AsympNormal}, it is sufficient to establish the identifiability of the model, that is, we need to verify assumption (A5). 
Suppose for some $t\in\mathbb{Z}$, $X_t(\theta)=X_t(\theta_0)$, $P_{\theta_0}$-a.s.; then $\delta+\alpha X_{t-1}(\theta)+\beta Y_{t-1}=\delta_0+\alpha_0 X_{t-1}(\theta_0)+\beta_0 Y_{t-1}$. It follows from (\ref{eq:InfinitePastRep}) that \small \begin{eqnarray*} (\beta-\beta_0)Y_{t-1}=\delta_0-\delta+\alpha_0\bigr(\frac{\delta_0}{1-\alpha_0}+\beta_0\displaystyle\sum_{k=0}^{\infty}\alpha_0^k Y_{t-k-2}\bigr)-\alpha\bigr(\frac{\delta}{1-\alpha}+\beta\displaystyle\sum_{k=0}^{\infty}\alpha^k Y_{t-k-2}\bigr). \end{eqnarray*} \normalsize If $\beta\neq \beta_0$, then $Y_{t-1}$ is measurable with respect to $\sigma\{Y_{t-2}, Y_{t-3},\ldots\}$, which contradicts the fact that $\mbox{Var}(Y_{t-1}|\mathcal{F}_{t-2})>0$. So $\beta$ must equal $\beta_0$. Similarly one can show that $\alpha=\alpha_0$ and $\delta=\delta_0$, which implies $\theta=\theta_0$. Hence the model is identifiable. \subsection*{B.7. Proof of Remark \ref{L2Remark}} The most difficult case is the derivative with respect to $\theta_2=\alpha$, so we give only its proof; the arguments for $\delta$ and $\beta$ are similar. First note that \begin{eqnarray*} \mbox{E}\{B'(\eta_1(\theta_0))\bigr(\frac{\partial \eta_1(\theta_0)}{\partial \alpha}\bigr)^2\}=\mbox{E}\{\frac{1}{B'(\eta_1)}\bigr(\frac{\partial B(\eta_1)}{\partial \alpha}\bigr)^2\}\le\frac{1}{\underline{c}}\mbox{E}\{\frac{\partial B(\eta_1)}{\partial\alpha}\}^2, \end{eqnarray*} where $\partial B(\eta_1)/\partial \alpha=\delta/(1-\alpha)^2+\beta\sum_{k=1}^{\infty}k \alpha^{k-1}Y_{-k}$. Then on account of stationarity, one can show that \begin{eqnarray*} \mbox{E}\bigr(\sum_{k=1}^{\infty}k \alpha^{k-1}Y_{-k}\bigr)^2&\le& \{\gamma_Y(0)+\frac{2\gamma_Y(1)}{1-\alpha(\alpha+\beta)}\}\displaystyle\sum_{k=1}^{\infty}k^2 \alpha^{2k-2}\\ &&+\frac{2\alpha\gamma_Y(1)}{1-\alpha^2(\alpha+\beta)^2}\sum_{k=1}^{\infty}k \alpha^{2k-2}+\mu^2\bigr(\sum_{k=1}^{\infty}k \alpha^{k-1}\bigr)^2<\infty, \end{eqnarray*} where $\mu=\mbox{E} Y_t<\infty$. 
Hence $\mbox{E}[B'(\eta_1(\theta_0))\{\partial \eta_1(\theta_0)/\partial \alpha\}^2]<\infty$ if $\gamma_Y(0)<\infty$. \subsection*{B.8. Proof of Proposition \ref{poissonpq}} The proof considers two separate cases: $q=1$ and $q>1$, since they require different methods to construct the state space. \begin{enumerate} \item $q=1$: without loss of generality we consider $p=2$. Denote $\mathbf{X}_t=(\lambda_t,\lambda_{t+1})$; then $\mathbf{X}_t$ is a Markov chain. Note that $\lambda_t\ge \lambda^{\ast}=\delta/(1-\alpha_1-\alpha_2)$. $\mathbf{X}_t$ can be constructed by iteratively applying the random functions $f_u$, $u\in (0,1)$, \begin{eqnarray*} f_u: [\lambda^{\ast},\infty)\times[\lambda^{\ast},\infty) &\longrightarrow& [\lambda^{\ast},\infty)\times[\lambda^{\ast},\infty)\\ \mathbf{x}=(\lambda_1,\lambda_2) &\longmapsto& (\lambda_2,\delta+\alpha_1\lambda_2+\alpha_2\lambda_1+\beta F_{\lambda_2}^{-1}(u)). \end{eqnarray*} For any $\mathbf{x}=(x_1,x_2), \mathbf{y}=(y_1,y_2)$ in the state space $S=[\lambda^{\ast},\infty)\times[\lambda^{\ast},\infty)$, define metric $\rho$ as $\rho(\mathbf{x}, \mathbf{y})=w_1|x_1-y_1|+w_2|x_2-y_2|$, where $w_1, w_2>0$ are to be determined. Let $\mathbf{x}_1=(\lambda_1^0,\lambda_2^0):=(\lambda^{\ast},\lambda^{\ast})$, then for any $\mathbf{x}=(\lambda_1,\lambda_2)$ we have \begin{eqnarray} \mbox{E}\rho(\mathbf{X}_1(\mathbf{x}),\mathbf{X}_1(\mathbf{x}_1))&=&\int_0^1 \rho(f_u(\mathbf{x}),f_u(\mathbf{x}_1))du \nonumber\\ &=&\alpha_2w_2|\lambda_1-\lambda_1^0|+\{w_1+w_2(\alpha_1+\beta)\}|\lambda_2-\lambda_2^0|, \nonumber \end{eqnarray} where the last equality holds because $\lambda_1, \lambda_2\ge \lambda^{\ast}$. Therefore it is sufficient to find an $r\in (0,1)$ and strictly positive $(w_1, w_2)$ such that \begin{eqnarray*} \mbox{E}\rho(\mathbf{X}_1(\mathbf{x}),\mathbf{X}_1(\mathbf{x}_1))\le r\rho(\mathbf{x},\mathbf{x}_1)=r\{w_1|\lambda_1-\lambda_1^0|+w_2|\lambda_2-\lambda_2^0|\}. 
\end{eqnarray*} This can be obtained if the equation $r^2-(\alpha_1+\beta)r-\alpha_2=0$ yields a root $r_{+}=\frac{\alpha_1+\beta+\sqrt{(\alpha_1+\beta)^2+4\alpha_2}}{2}<1$. It can be shown that under $\alpha_1+\alpha_2+\beta<1$ the root $r_{+}\in (0, 1)$. Note that the choice of $(w_1, w_2)$ is not unique.\\ \item $q>1$: without loss of generality we consider the INGARCH(2,2) model. Define a Markov chain $\mathbf{X}_t=(Y_t,\lambda_t,\lambda_{t+1})$; then the chain can be obtained by defining the iterated random functions $f_u: \mathbb{Z}_0\times[\lambda^{\ast},\infty)\times[\lambda^{\ast},\infty)\rightarrow \mathbb{Z}_0\times[\lambda^{\ast},\infty)\times[\lambda^{\ast},\infty)$ as $f_u(\mathbf{x})=f_u(n,\lambda_1,\lambda_2)=(F_{\lambda_2}^{-1}(u),\lambda_2, \delta+\alpha_1\lambda_2+\alpha_2\lambda_1+\beta_1F_{\lambda_2}^{-1}(u)+\beta_2n)$, where $\lambda^{\ast}=\delta/(1-\alpha_1-\alpha_2)$ and $u\in (0,1)$. Note that we cannot define $\mathbf{X}_t$ in the same way as in the first case, since otherwise it would contradict the independence assumption on the sequence $\{u_t\}$. Define the metric $\rho$ on $S=\mathbb{Z}_0\times[\lambda^{\ast},\infty)\times[\lambda^{\ast},\infty)$ as $\rho(\mathbf{x},\mathbf{y})=\sum_{i=1}^3 w_i|x_i-y_i|$, where $\mathbf{x}=(x_i)_{i=1}^3, \mathbf{y}=(y_i)_{i=1}^3$ and $w_i>0, i=1,2,3$. Take $\mathbf{x}_1=(n_0,\lambda_1^0,\lambda_2^0):=(0,\lambda^{\ast},\lambda^{\ast})$; then for any $\mathbf{x}=(n,\lambda_1,\lambda_2)$, we have \begin{eqnarray*} \mbox{E}\rho(\mathbf{X}_1(\mathbf{x}),\mathbf{X}_1(\mathbf{x}_1))&=&\int_0^1\rho(f_u(\mathbf{x}),f_u(\mathbf{x}_1))du\\ &=&\beta_2 w_3|n-n_0|+w_3\alpha_2|\lambda_1-\lambda_1^0|\\ &&+\{w_1+w_2+(\alpha_1+\beta_1)w_3\}|\lambda_2-\lambda_2^0|. \end{eqnarray*} As in the first case, one needs to solve the inequality \begin{eqnarray*} (\alpha_2+\beta_2)(w_1+w_2)&\le& [r-(\alpha_1+\beta_1)](\alpha_2+\beta_2)w_3\\ &\le& r(w_1+w_2)[r-(\alpha_1+\beta_1)] \end{eqnarray*} for an $r\in (0,1)$ and a strictly positive triple $(w_1, w_2, w_3)$.
This can be achieved if $\alpha_1+\alpha_2+\beta_1+\beta_2<1$, which implies that the quadratic equation $r^2-(\alpha_1+\beta_1)r-(\alpha_2+\beta_2)=0$ has a root $r_+\in (0,1)$. The result then follows by a simple induction. \end{enumerate} \subsection*{B.9. Proof of Theorem \ref{NonLinearAsympNormal}} According to Theorem \ref{AsympNormal}, we only need to establish the identifiability of the model. Similarly to the proof of Theorem \ref{LinearAsymp}, one can show that if $X_t(\theta)=X_t(\theta_0), P_{\theta_0}$-a.s. for some $t$, where $\theta_0=(\delta_0, \alpha_0,\beta_0,\beta_{1,0}, \ldots, \beta_{K,0})$, then \begin{eqnarray*} &&(\beta-\beta_0)Y_{t-1} + \displaystyle\sum_{k=1}^K (\beta_k-\beta_{k,0})(Y_{t-1}-\xi_k)^+\\ &=&\delta_0-\delta+\alpha_0 X_{t-1}(\theta_0)-\alpha X_{t-1}(\theta)\in \sigma\{Y_{t-2}, Y_{t-3},\ldots\}. \end{eqnarray*} It follows that $\beta=\beta_0$ and $\beta_k=\beta_{k,0}$, $k=1,\ldots, K$. Similarly one can show that $\delta=\delta_0$ and $\alpha=\alpha_0$; hence $\theta=\theta_0$, which verifies the identifiability of the model. \bibliographystyle{rss}
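As a quick numerical illustration of the contraction argument in the proof of Proposition B.8 above (not part of the proof itself), one can simulate two copies of the chain from the first case, coupled through the same uniforms $u_t$, and watch their distance shrink at roughly the geometric rate $r_+$. The parameter values, function names, and the plain $L^1$ distance in the sketch below are our own illustrative choices.

```python
import math, random

# Illustrative check of the contraction argument for the Poisson
# INGARCH case with p = 2, q = 1: iterate the random functions
#   f_u(l1, l2) = (l2, delta + a1*l2 + a2*l1 + b * F_{l2}^{-1}(u))
# from two initial states, sharing the same uniforms u_t.
# The parameters below are arbitrary and satisfy a1 + a2 + b < 1.

def poisson_inv(u, lam):
    """Inverse CDF F_lam^{-1}(u) of the Poisson(lam) distribution."""
    p = math.exp(-lam)
    cdf, k = p, 0
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    return k

delta, a1, a2, b = 1.0, 0.3, 0.2, 0.3
lam_star = delta / (1.0 - a1 - a2)

# Root r_+ of r^2 - (a1 + b) r - a2 = 0; it lies in (0, 1) when a1+a2+b < 1.
r_plus = (a1 + b + math.sqrt((a1 + b) ** 2 + 4.0 * a2)) / 2.0

def f(state, u):
    l1, l2 = state
    return (l2, delta + a1 * l2 + a2 * l1 + b * poisson_inv(u, l2))

random.seed(0)
x = (lam_star, lam_star)              # the reference state x_1 of the proof
y = (lam_star + 5.0, lam_star + 5.0)  # a second, distant initial state
d0 = abs(x[0] - y[0]) + abs(x[1] - y[1])
for _ in range(50):                   # common-u coupling, as in the proof
    u = random.random()
    x, y = f(x, u), f(y, u)
d50 = abs(x[0] - y[0]) + abs(x[1] - y[1])
print(r_plus, d0, d50)
```

With these parameters $r_+\approx 0.84$, so after 50 coupled steps the distance between the two trajectories is essentially negligible.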
\section{Introduction} A variety of experimental probes have revealed the existence of inhomogeneity in the electronic spectra of several families of the cuprate high-$T_c$ superconductors, which manifests itself as spatial modulation of the charge or spin density. These manifestations of inhomogeneity include the one-dimensional stripe ordered phase observed in neutron scattering experiments \cite{tranquada} and the two-dimensional checkerboard pattern observed in optical spectroscopy measurements\cite{dordevic} and STM studies on underdoped Bi-2212 \cite{davis,vershenin,lawler} and CaNaCuOCl \cite{lupien}. Such observations have motivated a number of theoretical scenarios involving inhomogeneity as the key ingredient in high-$T_c$ superconductivity \cite{spingap, arrigoni, caprara}. In addition, there exist a number of theoretical results on the Hubbard model reporting an enhancement in the strength of superconductivity in the presence of inhomogeneity \cite{kivelson1, kivelson2, kivelson3, contractor, maska, okamoto}, while other studies \cite{jarrell} find a suppression of superconductivity with inhomogeneity. Whether inhomogeneity is a friend or foe of superconductivity in the cuprates is therefore still an open issue. Here we employ the Cellular Dynamical Mean-Field Theory (CDMFT) approach at zero temperature with an exact diagonalization solver to study $d$-wave superconductivity on what is commonly called (somewhat abusively) the checkerboard Hubbard model. In contrast to previous quantum cluster based studies\cite{jarrell} using Dynamical Cluster Approximation (DCA) at finite temperature that focused entirely on the underdoped regime, we consider the entire doping range of interest from half-filling up to the extreme overdoped regime and over a wider range of inhomogeneity. In addition, we study both the weak and strong coupling regimes.
In Section \textbf{II} we introduce the checkerboard Hubbard model and describe the details of the method employed to study superconductivity. In Section \textbf{III} we present comparisons with previous accurate results \cite{kivelson4} that serve as benchmark for our approach. We present our results in Section \textbf{IV}. This is followed up in Section \textbf{V} by discussions and further comparisons with previous literature. Finally, a summary of main results and final conclusions appear in Section \textbf{VI}. \section{Method} Our starting point is the one-band Hubbard model on a two-dimensional square lattice \begin{eqnarray}\label{hubb} H_{\rm Hubb} = -\sum_{i,j,\sigma} t_{ij} d^\dagger_{i\sigma}d_{j\sigma} + U\sum_{i} d^\dagger_{i\uparrow}d^\dagger_{i\downarrow}d_{i\downarrow}d_{i\uparrow}, \end{eqnarray} in which electrons hop among a set of lattice sites, but pay an energy cost $U$ whenever they doubly occupy the same site. Here $i,j$ label lattice sites, the hopping matrix elements $t_{ij}$ vanish unless $i,j$ are nearest neighbors, and $d_{i\sigma}$ annihilates an electron with spin $\sigma$ on site $i$. As a simple toy model for inhomogeneity based on this Hamiltonian, we consider a checkerboard modulation of the nearest-neighbor hopping amplitude $t_{ij}$, which alternates between the values $t$ and $t'$ on successive bonds along either direction (Figure ~\ref{check}), with $t = t'$ being the homogeneous case. \begin{figure} \includegraphics[width = 4cm]{Fig1.eps} \caption{Checkerboard lattice with hopping amplitudes $t$ (thick lines) and $t'$ on alternate bonds. A four-site plaquette can be chosen in three distinct ways, namely: (1) all $t$ bonds (plaquette $A$), (2) $t$ along $y$($x$) and $t'$ along $x$($y$) (plaquette $B$) and (3) all $t'$ bonds (plaquette $C$).} \label{check} \end{figure} We employ CDMFT, a cluster generalization of Dynamical Mean Field Theory (DMFT)\cite{kotliar1} that allows one to reliably study $d$-wave superconductivity.
In CDMFT, the lattice problem is mapped to one involving a finite cluster coupled to a bath of non-interacting electrons \cite{kotliar2,maier_rmp,hettler}. The local quantum correlations within the cluster are included exactly while longer-range correlations are treated using a mean-field approximation by writing down an effective action \begin{eqnarray} \begin{split} S_{\rm eff} &= \displaystyle\int_0^\beta \mathrm{d}\tau\, \mathrm{d}\tau'\,\Psi^\dagger_d(\tau)\left[\Gcv_0^{-1}\right]\Psi_d(\tau')\nonumber \\ &\qquad +U\sum_\mu\displaystyle\int_0^\beta \mathrm{d}\tau\, n_{\mu\uparrow}n_{\mu\downarrow}, \end{split} \end{eqnarray} where $\Gcv_0$ is a dynamical (time dependent) Weiss field that describes the coupling of the cluster to the bath. The cluster is a four-site ($2\times2$) plaquette, which has been used extensively to study superconductivity in the Hubbard and $t-J$ models \cite{kotliar, maier, kyung}. $\Gcv_0$ contains both normal (particle-hole) as well as anomalous (particle-particle) components in order to include superconducting pairing correlations. The Nambu spinor is defined by $\Psi^\dagger_d\equiv (d^\dagger_{1\uparrow},\cdots , d^\dagger_{4\uparrow}, d_{1\downarrow}, \cdots ,d_{4\downarrow})$ and $\mu,\nu$ label the degrees of freedom within the cluster. 
Using a starting guess for the Weiss field $\Gcv_0$, the cluster Green function $\bm{G}'$ is computed by solving a cluster impurity problem using a Lanczos exact diagonalization scheme, the details of which are discussed in Refs.~\onlinecite{kyung} and \onlinecite{senechal1}: \begin{eqnarray} \bm{G}'(\tau,\tau')= \begin{pmatrix} \bm{G}'_\uparrow(\tau,\tau')& \bm{F}'(\tau,\tau') \\ \bm{F}^{\prime\dagger}(\tau,\tau')& -\bm{G}'_\downarrow(\tau,\tau') \end{pmatrix}, \end{eqnarray} with $G'_{\mu\nu,\sigma}\equiv-\left\langle T d_{\mu\sigma}(\tau)d^\dagger_{\nu\sigma}(\tau')\right\rangle$ and $F'_{\mu\nu}\equiv-\left\langle T d_{\mu\uparrow}(\tau)d_{\nu\downarrow}(\tau')\right\rangle$, the normal and anomalous time-ordered Green functions, respectively. The cluster self-energy $\bm{\Sigma}'$ is obtained from \begin{eqnarray} \bm{\Sigma}' = \Gcv_0^{-1}-\bm{G}'^{-1}. \end{eqnarray} Finally, the following self-consistency condition is employed to recalculate $\Gcv_0^{-1}$ iteratively until convergence is achieved, \begin{eqnarray} \Gcv_0^{-1}(i\omega_n)= \left[\frac{N_c}{(2\pi)^2}\int \mathrm{d}\bm{\tilde k}\,\bm{G}(\bm{\tilde k},i\omega_n)\right]^{-1} + \bm{\Sigma}'(i\omega_n), \end{eqnarray} with $N_c$=$4$ the cluster size and with the following definition for the superlattice Green's function \begin{eqnarray}\label{eq:green} \bm{G}(\bm{\tilde k},i\omega_n)=\left[i\omega_n + \mu -\bm{t}(\bm{\tilde k})-\bm{\Sigma}'(i\omega_n)\right]^{-1}, \end{eqnarray} where $\bm{t}(\bm{\tilde k})$ is the Fourier transform of the superlattice hopping matrix and the momentum integral is performed over the reduced Brillouin zone of the superlattice. The $d$-wave superconducting order parameter is defined as the expectation value of a particular pairing operator.
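The iteration just described can be summarized in a schematic sketch. The toy below is a deliberately simplified scalar caricature: the $4\times4$ Nambu matrices are replaced by scalars (single band, normal state), and the Lanczos impurity solver by a trivial $U=0$ stub, so that the converged Weiss field must coincide with the momentum-averaged lattice Green function. All function names, grids, and parameter values are our own; this is not the actual code used for the calculations.

```python
import math

# Schematic scalar caricature of the self-consistency loop above.
# In the real calculation G0, G', Sigma' are 4x4 Nambu matrices and the
# cluster problem is solved by Lanczos exact diagonalization; here the
# "solver" is a trivial U = 0 stub, so at the fixed point G0 equals the
# momentum-averaged lattice Green function exactly.

T, mu, t0 = 0.5, 0.0, 1.0                             # illustrative values
wn = [(2 * n + 1) * math.pi * T for n in range(32)]   # Matsubara frequencies

def eps(kx, ky):
    """Square-lattice dispersion."""
    return -2.0 * t0 * (math.cos(kx) + math.cos(ky))

def g_loc(sigma):
    """Momentum average of the lattice Green function for a given
    self-energy (the analogue of the bracketed integral above)."""
    L = 16
    ks = [2.0 * math.pi * i / L for i in range(L)]
    out = []
    for n, w in enumerate(wn):
        s = sum(1.0 / (1j * w + mu - eps(kx, ky) - sigma[n])
                for kx in ks for ky in ks)
        out.append(s / L ** 2)
    return out

def solve_impurity(g0):
    """Stub impurity solver: at U = 0 the self-energy vanishes."""
    return [0j] * len(g0)

g0 = [1.0 / (1j * w + mu) for w in wn]                # initial Weiss field
for _ in range(20):
    sigma = solve_impurity(g0)                        # 1. cluster problem
    gl = g_loc(sigma)                                 # 2. lattice average
    g0_new = [1.0 / (1.0 / g + s) for g, s in zip(gl, sigma)]  # 3. update
    err = max(abs(a - b) for a, b in zip(g0_new, g0))
    g0 = g0_new
    if err < 1e-12:                                   # 4. converged
        break
```

With the $U=0$ stub the loop converges after one update, since the self-consistency condition then reduces to $\Gcv_0 = G_{\rm loc}$; the interacting calculation iterates the same four steps with a nontrivial solver.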
Since the sites on the plaquette are connected via two distinct types of links (with hopping amplitudes $t$ and $t'$), we define correspondingly two singlet pairing operators $\hat D$ and $\hat D'$ as \begin{eqnarray} \hat D = \sum_{i,j} D_{ij} d_{i\uparrow}d_{j\downarrow} \qquad \hat D' = \sum_{i,j} D'_{ij} d_{i\uparrow}d_{j\downarrow} \end{eqnarray} where the matrices $\bm{D}$ and $\bm{D}'$ are defined as follows: $D_{ij}=\pm 1$ on $t$-links in the $x$ and $y$ directions respectively, and likewise for $D'_{ij}$ on $t'$ links. The corresponding order parameters are calculated using the anomalous part $\bm{F}$ of the superlattice Green function (\ref{eq:green}) as follows (for details, please consult Ref.~\onlinecite{senechal1}): \begin{eqnarray}\label{eq:D} D = \frac1{(2\pi)^2}T\sum_{i\omega_n}\int\mathrm{d}\bm{\tilde k}\,{\rm tr}\left[\bm{F}(\bm{\tilde k},i\omega_n)\bm{D}(\bm{\tilde k})\right] \end{eqnarray} and likewise for $D'$. The effective order parameter $\Psi$ is just the average $(D+D')/2$. The hopping strengths $t$ and $t'$ are defined, keeping the average hopping amplitude $t_0$ fixed at unity, as \begin{eqnarray} t = t_0 -\Delta t \qquad t' = t_0 +\Delta t \end{eqnarray} with $\Delta t \geq 0$. $\Delta t$ measures the degree of inhomogeneity in the system and may be varied independently of the average hopping amplitude $t_0$. We insist on the importance of keeping the latter constant when varying $\Delta t$, since varying $t_0/U$ may cause effects that are likely more important than the inhomogeneity itself. \begin{figure*} \includegraphics[width=5.9cm]{Fig2A.eps} \includegraphics[width=5.9cm]{Fig2B.eps} \includegraphics[width=5.9cm]{Fig2C.eps} \caption{(Color online) $d-$wave SC order parameters $D$,$D'$ for the homogeneous case ($t=t'$) for $U =8t_0$, as a function of electron filling for plaquette $A$, $B$ and $C$.
The order parameter $\Psi=(D+D')/2$ that is averaged over plaquettes $A$ and $C$ has been plotted alongside $D$,$D'$ for plaquette $B$ (middle plot) for comparison. In the latter case the curves overlap.} \label{abc} \end{figure*} An immediate question arises as to the appropriate choice for the cluster. As seen in Figure ~\ref{check}, there are three distinct ways in which a $2\times2$ plaquette might be selected, namely a plaquette with all $t$ links; one with $t$ links along the $x$($y$) axis and $t'$ links along the $y$($x$) axis; or one with all $t'$ links. We label them as plaquettes $A$, $B$ and $C$ respectively. In order to select the appropriate cluster for the CDMFT calculation, one needs to determine which of the plaquettes best captures the physics of inhomogeneity. Plaquette $B$ seems to be the natural choice under such considerations, since it includes both $t$ and $t'$ links within the cluster, allowing both to be treated on the same footing within CDMFT. In contrast, plaquette $A$($C$) treats only $t$($t'$) hoppings exactly while $t'$($t$) hoppings are treated in a mean-field approximation via the bath degrees of freedom. Since the link inhomogeneity expands the unit cell by a factor of two in each direction, the four-site plaquette really constitutes a single unit cell of the model, i.e., it is not really a cluster. As such, we are literally using single-site dynamical mean-field theory instead of CDMFT, albeit with a four-band model. Our treatment naturally collapses into a single-band model (and CDMFT) in the homogeneous limit $t'=t$. In order to further justify our choice of cluster, we compute the superconducting order parameters ($D$,$D'$) for the three clusters in the homogeneous limit ($t=t'$), where $D$ and $D'$ are expected to be identical. We see in Figure ~\ref{abc} that, whereas the order parameters $D$ and $D'$ are identical for plaquette $B$, they are significantly different for plaquettes $A$ and $C$.
To understand this result, one has to remember that even in the homogeneous case, where $t=t'$, the CDMFT lattice Green's function breaks translational symmetry, unless it is ``periodized''\cite{senechal1}. As a consequence, the value of the order parameter on $t$ and $t'$ links will be different depending on whether they are inside or outside the plaquette. Mathematically, the self-energy matrix entering the lattice Green's function Eq.(\ref{eq:green}) is the same for all three plaquette choices when $t=t'$. However the $\bm{D}(\bm{\tilde k})$ operators in Eq.(\ref{eq:D}) have indices that are shifted with respect to those of the hopping matrix in Eq.(\ref{eq:green}) depending on which plaquette is chosen. This also explains why, for $t=t'$, the value of $D$ on plaquette $A$ is the same as the value of $D'$ on plaquette $C$, while for plaquette $B$, $D=D'$. It should be noted, however, that, as illustrated by Fig.~\ref{abc}, the mean value of the order parameter averaged over the dissimilar links is virtually identical for any choice of plaquette in the homogeneous limit. \begin{figure} \includegraphics[width = 8.0cm]{Fig3A.eps} \includegraphics[width = 8.0cm]{Fig3B.eps} \caption{(Color online) Average $d$-wave superconducting order parameter $\Psi$, as a function of electron filling at $U=8t_0$, $\Delta t=0.05$ (top) and $\Delta t=0.10$ (bottom), for plaquette $B$, the average of results for plaquettes $A$ and $C$, and the homogeneous case.} \label{abc2} \end{figure} Away from the homogeneous limit, we observe quantitative differences for different choices of plaquette. In order to check whether or not the choice of cluster leads to a qualitative difference in our results, we compare the $d$-wave order parameter for plaquette $B$ to that of the average of plaquettes $A$ and $C$ for moderate levels of inhomogeneity ($\Delta t = 0.05t_0$ and $0.1t_0$), as shown in Figure ~\ref{abc2}.
We find that even though the average of the superconducting order parameters for plaquettes $A$ and $C$ is larger than that for plaquette $B$, it never exceeds the corresponding values for the homogeneous case. The results discussed in subsequent parts of this paper have been obtained using plaquette $B$ as the cluster of choice. Note that since we do not compute direction dependent quantities, the results are identical for plaquette $B$, whichever of the two possible $\pi/2$ related orientations we choose. \section{Benchmark with Checkerboard Hubbard ladder} \begin{figure} \includegraphics[width = 5.0cm]{Fig4A.eps} \includegraphics[width = 8.0cm]{Fig4B.eps} \caption{(Color online) (Top) The checkerboard Hubbard ladder. (Bottom) $d$-wave order parameter vs $t'/t$ for two different values of electron density $n$. Note that contrary to the rest of this paper, in this section we work directly with the ratio $t'/t$ instead of $\Delta t$ to have a fair comparison with published results.} \label{ladder} \end{figure} \begin{figure} \includegraphics[width = 8.0cm]{Fig5A.eps} \includegraphics[width = 8.0cm]{Fig5B.eps} \caption{(Color online) Single-particle density of states in the superconducting state of the checkerboard Hubbard ladder for different $t'/t$ at two values of the electron density $n$.} \label{ladder2} \end{figure} To test the reliability of our CDMFT approach for inhomogeneous systems, we present results for the checkerboard Hubbard ladder, in which the Hubbard model is defined on a one-dimensional, period-two array of square plaquettes consisting of $t$ links, connected by $t'$ links as illustrated in the top panel of Figure ~\ref{ladder}. This is motivated by the availability of numerically exact results for this problem that were obtained using the Density Matrix Renormalization Group (DMRG) technique \cite{kivelson4}.
An enhancement in superconductivity, as measured by the pair binding energy, was found with increasing inhomogeneity up to a moderately large value ($t'/t \approx 0.6$ for $U$=$8t$ and $n$=$0.875$), where the pair binding energy attains its maximum value. While DMRG does not yield true long-range order on the ladder (the superconducting correlations instead decay algebraically), this quasi-long-range order is mimicked by true long-range order in CDMFT, which treats long-range correlations in a mean-field way. Our studies on the checkerboard ladder find that the dependence of superconductivity on inhomogeneity is qualitatively similar to the DMRG results. As seen in the bottom panel of Figure ~\ref{ladder}, the $d$-wave order parameter increases with inhomogeneity (decreasing $t'/t$), and attains a maximum value at a relatively larger value of inhomogeneity ($t'/t \approx 0.35$). Unlike DMRG, we do not have access to the pair binding energy; we can, however, examine the density of states. The heights of the coherence peaks on each side of the energy gap in the one-particle density of states can be adopted as a measure of the strength of the superconducting correlations. These heights are indeed correlated with the magnitude of the order parameter on varying $t'/t$, as seen for two different dopings in Figure ~\ref{ladder2}. Such behavior is in stark contrast to that of the two-dimensional checkerboard Hubbard model, where, as we shall discuss below, there is no non-zero optimal value of inhomogeneity that favors superconductivity. Regardless of the contrasting observations in the two systems, this exercise serves to strengthen the case for the validity of our CDMFT results for the two-dimensional system.
} \label{ddp} \end{figure} \section{Results} This section is divided into three parts. We first describe the results of computation of the $d$-wave superconducting order parameter for strong and weak coupling, without allowing for antiferromagnetic long-range order. In addition, we show that in the superconducting state at a given doping, the heights of the peaks of the density of states lying on either side of the Fermi energy (across the energy gap) correlate with the magnitude of the order parameter. We then show that the correlation found previously\cite{senechal} between the low energy peak in the imaginary part of the spin susceptibility and the d-wave order parameter is still preserved in the inhomogeneous case. \subsection{Superconducting order parameter} Figure ~\ref{ddp} shows $D$, $D'$ and $\Psi$ plotted as a function of the electron density $n$ for $U = 8t_0$ and $\Delta t = 0.05$. The strength of $d-$wave superconductivity over the entire doping range is larger across a link with larger hopping amplitude ($t'$). This is consistent with other studies \cite{kyung} which find that in the strong-coupling limit $\Psi$ scales roughly with $J = 4t^2/U$, the nearest-neighbor spin super-exchange coupling (at $U = 8t_0$ we are indeed entering the strong-coupling regime). Quantitative differences aside, the plots of $D$ and $D'$ otherwise look qualitatively very similar, arising from the fact that $t$ and $t'$ are both treated on the same footing within the plaquette thereby allowing us to systematically isolate the physics of inhomogeneity from that of varying the effective bandwidth. The results for the superconducting order parameter for several values of inhomogeneity $\Delta t$, displayed in the top panel of Figure~\ref{inhomo}, exhibit two interesting features. We find that the strength of $d$-wave superconductivity decreases monotonically as a function of $\Delta t$ over the entire doping range over which superconductivity exists. 
This stands in contrast to results obtained by studies on finite clusters \cite{kivelson1, kivelson2, kivelson3, contractor}, where the pair binding energy was found to be maximized for moderately high levels of inhomogeneity at low hole dopings. However, our findings are in qualitative agreement with those of Doluweera \textit{et al.} \cite{jarrell}, in which DCA was used to study the checkerboard Hubbard model and the superconducting transition temperature $T_c$ was found to fall monotonically as a function of inhomogeneity in the underdoped regime. In addition, we observe a first-order superconducting to normal transition in the underdoped regime beyond a moderately large level of inhomogeneity ($\Delta t \geq 0.15t_0$). The existence of the first-order transition is confirmed by the observation of hysteretic behavior in the order parameter depending on the initial state being normal (small doping) or superconducting (larger doping) (Figure ~\ref{hysterisis}), as the system is tuned across a superconducting to normal transition. Finally, as inhomogeneity is increased further to $\Delta t \geq 0.16t_0$ (corresponding to $t/t'\leq 0.72$), we find that superconductivity is completely destroyed for all dopings. \begin{figure} \includegraphics[width = 8.3cm]{Fig7A.eps} \includegraphics[width=8.3cm]{Fig7B.eps} \caption{(Color online) (Top) $d$-wave superconducting order parameter $\Psi=(D+D')/2$ as a function of chemical potential for various values of inhomogeneity $\Delta t$ for plaquette $B$. (Bottom) Dependence of the lattice electron density on the chemical potential for the values of $\Delta t$ that appear in the above plot.} \label{inhomo} \end{figure} \begin{figure} \includegraphics[width = 8.0cm]{Fig8.eps} \caption{(Color online) Average $d$-wave superconducting order parameter as a function of electron filling $n$ for $\Delta t$= 0.15, showing hysteretic behavior as the system is tuned between the normal and the superconducting states in the underdoped regime.
The blue (dashed) curve corresponds to increasing $n$, starting from a superconducting initial state close to optimal doping, while the red (bold) curve corresponds to reducing $n$, starting from a normal initial state close to half-filling. The value of the order parameter in the filling range of (0.89-0.94) is dependent on the initial state being normal or superconducting.} \label{hysterisis} \end{figure} So far we have focused on strong coupling ($U=8t_0$), where the system is a Mott insulator at half-filling. It has been shown \cite{kyung} that in the strong-coupling regime, proximity to the Mott insulating state leads to the suppression of the superconducting order parameter close to half-filling. In contrast, the behavior is very different for weak coupling, where the Mott transition is absent, and no suppression is observed in the $d$-wave order parameter in the underdoped regime unless antiferromagnetic long-range order is allowed\cite{kyung}. Results for $U=4t_0$ in the presence of inhomogeneity are shown in Figure ~\ref{u4}. We find that the superconducting order parameter in the inhomogeneous case ($\Delta t = 0.10t_0$) is suppressed compared to the homogeneous case except for large dopings, where the superconducting order parameter is larger in the inhomogeneous case. The maximum value of the order parameter, which occurs at half-filling in both cases, is however larger in the homogeneous case. We must emphasize here that for weak coupling, the CDMFT method is not completely reliable as longer-range antiferromagnetic correlations, which are important at weak coupling close to half-filling, are not adequately captured by this technique. Nevertheless, these results may serve to demonstrate the qualitative difference between the weak coupling and strong coupling results.
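As a side note on the parametrization, the inhomogeneity levels quoted above translate into the hopping ratio used in the ladder comparison through $t/t'=(t_0-\Delta t)/(t_0+\Delta t)$; for instance, $\Delta t = 0.16t_0$ corresponds to $t/t'\approx 0.72$. A trivial check (our own snippet, with $t_0=1$):

```python
# Convert between the inhomogeneity parameter and the hopping ratio:
# t = t0 - dt, t' = t0 + dt, hence t/t' = (t0 - dt) / (t0 + dt).
t0 = 1.0

def hopping_ratio(dt):
    return (t0 - dt) / (t0 + dt)

for dt in (0.05, 0.10, 0.15, 0.16):
    print(f"dt = {dt:.2f}  ->  t/t' = {hopping_ratio(dt):.2f}")
    # dt = 0.16 prints t/t' = 0.72, matching the value quoted in the text
```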
\begin{figure} \includegraphics[width = 8.0cm]{Fig9.eps} \caption{(Color online) Average $d$-wave superconducting order parameter $\Psi=(D+D')/2$, as a function of electron filling at $U=4t_0$, for the homogeneous limit ($\Delta t=0$) and $\Delta t =0.1t_0$ for plaquette $B$.} \label{u4} \end{figure} \subsection{Density of states} The principal disadvantage of using the $d$-wave order parameter as a measure of superconductivity is that the order parameter, though easily computed within CDMFT, is not an experimentally measurable quantity, in contrast to the superconducting energy gap or $T_c$. Within the BCS theory, the gap is given by the order parameter times the effective interaction\cite{tinkham}. Regardless of the validity of BCS theory for superconductivity in the Hubbard model, if one assumes that such a relation holds approximately true, a knowledge of the effective interaction would be required in addition to the order parameter to estimate the gap. In particular, in order to accurately estimate the gap as a function of $\Delta t$, one would have to determine the dependence of the effective interaction on inhomogeneity. A more straightforward and physically meaningful way to estimate the strength of superconducting correlations is to compute the single-particle density of states, which is directly measured in tunnelling experiments\cite{tinkham}, for varying $\Delta t$ (Figure ~\ref{DOS1}). The density of states features a gap around the Fermi energy, as seen in earlier studies\cite{kyung}. However, the magnitude of the energy gap does not change appreciably with $\Delta t$ on the scale of the Lorentzian broadening $\eta = 0.1t_0$ used to compute the density of states, which makes it unsuitable to reliably probe the variation of superconductivity with inhomogeneity. Alternatively, one might consider the quasiparticle peak heights in the density of states.
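To make the role of the broadening explicit: with an exact-diagonalization solver the single-particle spectrum is a finite set of poles, and the plotted density of states is obtained by giving each pole $(\omega_i, w_i)$ a Lorentzian of width $\eta$, $N(\omega)=\sum_i w_i\,(\eta/\pi)/[(\omega-\omega_i)^2+\eta^2]$. The sketch below uses made-up poles and weights (a real run would take them from the cluster Green function) to illustrate why features finer than $\eta = 0.1t_0$ are washed out while peak heights remain meaningful.

```python
import math

# Lorentzian broadening of a pole spectrum, as used for the plotted DOS.
# The poles below are invented for illustration; a Lanczos run would
# supply (omega_i, weight_i) pairs from the cluster Green function.
eta = 0.1                                    # broadening, in units of t0
poles = [(-2.0, 0.2), (-1.0, 0.3), (1.0, 0.3), (2.0, 0.2)]

def dos(w):
    """Broadened density of states N(omega)."""
    return sum(wt * (eta / math.pi) / ((w - wi) ** 2 + eta ** 2)
               for wi, wt in poles)

# the total spectral weight is (approximately) recovered on a wide grid
grid = [-50.0 + 0.01 * i for i in range(10001)]
vals = [dos(w) for w in grid]
total = sum(0.01 * 0.5 * (a + b) for a, b in zip(vals, vals[1:]))  # trapezoid
```

Any structure narrower than $\eta$ (for example a small shift of the gap edge) is smeared over a window of width $\sim\eta$, whereas the peak height at a pole, $\approx w_i/(\pi\eta)$, remains a faithful measure of its weight.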
The quasiparticle spectrum of BCS superconductors is characterized by peaks on either side of the energy gap; the height of the peaks is a measure of the coherence of the quasiparticle excitations and may therefore be used to gauge the strength of coherence in the superconducting state for varying $\Delta t$ at a fixed doping. As seen in Figure ~\ref{DOS1}, for various $\Delta t$ the heights of the peaks vary concomitantly with the magnitude of the $d$-wave order parameter (Figure~\ref{inhomo}) in both underdoped and overdoped regimes, thereby providing an experimentally measurable probe of superconductivity whose behavior is consistent with that of the order parameter. \begin{figure} \includegraphics[width = 8.0cm]{Fig10A.eps} \includegraphics[width = 8.0cm]{Fig10B.eps} \caption{(Color online) Single-particle density of states for different values of $\Delta t$ at two values of the electron density $n$ for plaquette $B$.} \label{DOS1} \end{figure} \subsection{Spin susceptibility} The top panel in Figure ~\ref{ss} displays the imaginary part of the cluster spin susceptibility $\chi''(\omega)$ in the overdoped regime ($n$= $0.84$), for $U$= $8t_0$ at $\textbf{Q}$ = $(\pi,\pi)$ for three different values of $\Delta t$. The strength of the low energy peak, whose connection with superconductivity was confirmed by theoretical studies \cite{senechal,maier2} on the overdoped homogeneous Hubbard model and neutron scattering experiments \cite{wakamoto1,wakamoto2,dahm} on LSCO samples, falls with increasing inhomogeneity, concomitant with the behavior of the superconducting order parameter. In contrast with the homogeneous case, the fall in the susceptibility peak with inhomogeneity is slower than that of $d$-wave superconductivity. Nevertheless, we find that the association between superconductivity and the low energy peak in the spin susceptibility at $\textbf{Q}$ =$(\pi,\pi)$ in the overdoped regime is preserved in the presence of inhomogeneity as well.
As seen in Figure ~\ref{ss2}, the low energy peak in the spin susceptibility is present in the superconducting state, while it disappears in the normal state ($\Psi = 0$), which further corroborates the connection between antiferromagnetic fluctuations and $d$-wave superconductivity found previously in the homogeneous Hubbard model. \begin{figure} \includegraphics[width = 8.0cm]{Fig11A.eps} \includegraphics[width = 8.0cm]{Fig11B.eps} \caption{(Color online) Imaginary part of the cluster spin susceptibility at $\textbf{Q}$=$(\pi, \pi)$ (top) and $\textbf{Q}$=$(0,\pi)$ (bottom), for three different $\Delta t$ values, with $U$=$8t_0$ and $n$=0.84.} \label{ss} \end{figure} Another interesting feature in the spin susceptibility is the enhancement of the $\textbf{Q}$ =$(0,\pi)$ component of $\chi''(\omega)$ with inhomogeneity, as seen in the bottom panel in Figure ~\ref{ss}, which may be an indication of the development of spin fluctuations competing with the predominant antiferromagnetic fluctuations in the system. One must however be careful in interpreting the above results, since the cluster, which is inherently anisotropic, favors such spin fluctuations, and does not necessarily reflect the nature of the long-range spin correlations. The strength of the low energy peak (around $\omega$=$0.3$) however remains significantly smaller than that at $\textbf{Q}$ = $(\pi,\pi)$, indicating that antiferromagnetic correlations, although weakened by inhomogeneity, continue to dominate the physics of the two-dimensional Hubbard model in the presence of moderate checkerboard-type inhomogeneity.
\begin{figure} \includegraphics[width = 8.0cm]{Fig12.eps} \caption{(Color online) Imaginary part of the cluster spin susceptibility at $\textbf{Q}$=$(\pi, \pi)$ in the superconducting ($\Delta t$=0.12, $n$=0.84), and normal state ($\Delta t$=0.12, $n$=0.76 and $\Delta t$=0.20, $n$=0.77) for $U$=$8t_0$.} \label{ss2} \end{figure} \section{Discussion} Our studies indicate that in the strong coupling regime, which is considered relevant for the cuprates, checkerboard-type inhomogeneity on the square lattice Hubbard model is detrimental to $d$-wave superconductivity over the entire doping range of interest. Superconductivity is completely destroyed beyond a moderately large inhomogeneity level. This is a strikingly different conclusion from what was reported in some recent works that studied the highly underdoped regime of the checkerboard Hubbard model on finite clusters using exact diagonalization\cite{kivelson1,kivelson2,kivelson3} and contractor renormalization \cite{contractor} methods, where the pair binding energy was found to be maximum at moderate levels of inhomogeneity at low hole concentrations. The difference between these results and those of the present study may be attributed to several reasons. Firstly, CDMFT is substantially better equipped to capture the physics of the extended lattice, compared to finite clusters, which are expected to have significant finite-size effects. Therefore, interpreting the results obtained on finite systems and using them to predict the nature of superconductivity in the extended system must be undertaken cautiously. At least CDMFT captures some of the physics of the infinite lattice in a mean-field way, even though correlations are taken into account exactly only on short length scales. Secondly, previous studies computed the pair binding energy to quantify superconductivity, which, in contrast to the $d$-wave order parameter considered in our study, is not a measure of superconducting phase coherence in the system.
Furthermore, our results are in qualitative agreement with those of Doluweera \textit{et al.}\cite{jarrell}, who used DCA on a four-site plaquette to study superconductivity on the checkerboard Hubbard model at finite temperature. Their study reported a monotonic suppression of $T_c$ as a function of inhomogeneity. In contrast, our work focused on the $d$-wave order parameter at $T=0$ as a measure of superconducting strength, providing an alternative approach to this problem. It is noteworthy that, in contrast to the aforementioned work where a single plaquette configuration was studied with uniform hopping within the cluster (corresponding to plaquette $A$($C$) in our work depending on whether $t'$ is larger (smaller) than $t$), we have verified the dependence of the results on the choice of cluster. It is not surprising, however, that our results are qualitatively similar, since DCA and CDMFT are both self-consistent cluster methods that effectively capture the short-range correlations of the system, and whose results become exact in the limit of infinite cluster size. There is a qualitative difference between the strong and weak coupling results in the extreme overdoped regime, where there is an enhancement in superconductivity at weak coupling that is not observed at strong coupling. It is also worth mentioning here that our investigations include the overdoped regime of the superconducting phase of the checkerboard Hubbard model, which has not been considered in any of the previously mentioned studies. In the strong coupling case, the gradual suppression of the order parameter with inhomogeneity, followed by a first-order transition to the normal state at moderately large inhomogeneity in the underdoped regime, has not been noticed before. 
This result should be interpreted in light of a recent study that demonstrates that a first-order metal-metal transition lies beneath the superconducting dome \cite{Sordi} and that this transition is directly linked to the Mott transition. The superconducting phase of the unusual metal found close to half-filling is more sensitive to inhomogeneity. This can be verified by looking at Fig.~\ref{inhomo} at fixed filling as a function of inhomogeneity: the figure clearly suggests that a first-order transition to the normal state occurs. Finally, our results support the connection between antiferromagnetic fluctuations and superconductivity found previously for the homogeneous case. \cite{Maier3,senechal} Indeed, the correlation between the magnitude of the superconducting order parameter and the height of the first peak in $\chi''$ in the overdoped regime \cite{senechal} remains valid for the inhomogeneous case studied here, although the fall of the susceptibility peak is slower than that of the order parameter. It is therefore entirely plausible that a suppression of the superconductivity is tied to the weakening of antiferromagnetic correlations in the presence of a checkerboard-type inhomogeneity in the nearest-neighbor hopping. \section{Conclusion} We benchmarked our approach with numerically exact DMRG results on the checkerboard Hubbard ladder \cite{kivelson4}. The quantities that can be obtained in CDMFT and in DMRG are different, but qualitatively both approaches show similar results, namely that there exists an optimal inhomogeneity for superconductivity on a ladder. 
Despite suggestions that this result is general, namely that there is always an optimal inhomogeneity for superconductivity \cite{kivelson1, kivelson2, kivelson3, contractor, maska, okamoto, Chakravarty, Martin, Aryanpour, Loh, Arrigoni}, our study demonstrates instead that this statement may not be valid for $d$-wave superconductivity in the two-dimensional Hubbard model in the presence of arbitrary types of inhomogeneity. Indeed, previous DCA results at finite temperature \cite{jarrell} and our CDMFT study at zero temperature both find that when the inhomogeneity on the checkerboard Hubbard model is in the nearest-neighbor hopping, neither the maximum $T_c$ \cite{jarrell} nor the maximum value of the superconducting order parameter at zero temperature can exceed that of the homogeneous system. The size of the order parameter at a given doping can be taken as a measure of the strength of superconductivity, since we have shown that it manifests itself directly in the height of the coherence peaks in the density of states. Note that since one finds, with CDMFT, that \textit{site} inhomogeneity on the checkerboard lattice does lead to an optimal inhomogeneity for superconductivity,\cite{okamoto} quantum cluster methods do not have intrinsic limitations that prohibit finding enhanced superconductivity in the presence of inhomogeneity. For the model of interest, we have explored a larger doping range than previous studies as well as both weak and strong coupling regimes. In the weak coupling case, as a result of inhomogeneity, superconductivity is suppressed in the underdoped regime and enhanced in the overdoped regime, although it does not surpass the maximum possible value of the order parameter in the homogeneous case. 
In the strong coupling case, our results can be summarized by the following observations: a) a monotonic suppression of the superconducting order parameter for all dopings, and b) at a given inhomogeneity, a first-order transition between the normal and superconducting states at finite doping on the underdoped side, with the superconducting dome disappearing suddenly on further increasing the inhomogeneity. Further research could look into the effect of next-nearest-neighbor hopping or of mixed types of inhomogeneity, including both hopping and site energies. This would again help verify the generality of the connection confirmed here between antiferromagnetic fluctuations and superconductivity. \cite{Maier3,senechal} One should also explore more closely the relationship between the domain where a first-order transition is induced by inhomogeneity at strong coupling in the underdoped regime and the domain where a first-order transition between two metals was found recently for the homogeneous case. \cite{Sordi} The latter phenomenon was clearly linked to the Mott transition. \begin{acknowledgments} The authors would like to acknowledge Steven Kivelson for discussions. This work was partially supported by NSERC and by the Tier I Canada Research Chair Program (A.-M. S. T.). Computational resources were provided by CFI, MELS, the RQCHP, and Compute Canada. \end{acknowledgments}
\section{Data}\label{sec:data} \subsection{The eBOSS Survey}\label{sec:ebosssurvey} The eBOSS survey is the cosmology survey within SDSS-IV, an extension of the BOSS survey \citep[][]{dawson13a} in SDSS-III \citep[][]{eisenstein11a}. Over a six-year period beginning in Fall 2014, eBOSS will observe four independent tracers of the underlying density field: luminous red galaxies (LRGs), quasars, the Ly$\alpha$ forest, and ELGs, measuring cosmological distances as a function of redshift with the BAO standard ruler and thereby recording the expansion history of the Universe at $0.6\lesssim z \lesssim 2.3$. The eBOSS survey uses the same two identical, multi-object spectrographs as BOSS \citep[][]{smee13a} on the $2.5$-meter SDSS Telescope \citep[][]{gunn06a} at the Apache Point Observatory in New Mexico. Each spectrograph is equipped with two cameras, one blue and one red, with a dichroic splitting the light at roughly $6000\,$\AA\ and a combined coverage spanning between $3600\,$\AA\ and $10400\,$\AA. The spectral resolution $\mathcal{R}$ is $1560-2270$ in the blue and $1850-2650$ in the red channel, with a mean $\bar{\mathcal{R}}\approx2000$. The spectrographs are fed by $1000$ optical fibers ($500$ for each), covering a field-of-view (FoV) of about $7.5$ square degrees. The aperture diameter of the fibers is $2\,$\arcsec, smaller than the $3\,$\arcsec\ fibers used in SDSS-I/II. The typical total exposure time for each pointing is about $75$ minutes. eBOSS selects targets primarily based on the SDSS imaging, obtained through a set of $ugriz$ filters \citep[][]{fukugita96a} with a wide-field camera \citep[][]{gunn98a} in a drift-scan mode. 
As the desired targets are fainter than those in SDSS I-III, in order to achieve high targeting efficiency, eBOSS includes supplementary imaging data from other instruments, including the infrared photometry \citep[][]{lang14a} from the \textit{Wide-field Infrared Survey Explorer} \citep[\textit{WISE},][]{wright10a}, the $U$-band imaging from the South Galactic Cap $U$-band Sky Survey (SCUSS)\footnote{\texttt{http://batc.bao.ac.cn/Uband/}} conducted at the 2.3-meter Bok telescope at the Kitt Peak National Observatory, and the deep $grz$ imaging with the Dark Energy Camera \citep[DECam,][]{flaugher12a} on the 4-meter Blanco telescope at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. The spectral-reduction and redshift-fitting pipeline in eBOSS is a continuation of the BOSS pipeline \citep[][]{bolton12a}, improved to yield the required performance on spectra with lower signal-to-noise ratio (S/N) than in BOSS. As in BOSS, eBOSS classifies objects and derives redshifts based upon principal-component-analysis (PCA) templates for quasars and galaxies, and archetypal templates for stars, though the team is currently exploring an alternative approach fully based on archetypal templates. For the analysis presented here, we use the results derived from the pipeline version \texttt{v5\_7\_8}, which still uses the PCA templates for galaxies and quasars. The pipeline flags a redshift with a warning (\texttt{ZWARNING}) based on various quantitative criteria, such as a small $\chi^2$ difference between the best and second-best fits. Tests have shown that the \texttt{ZWARNING} flag is conservatively defined and that the redshift success rate for objects with \texttt{ZWARNING==0} is better than $99\%$. We only use objects classified as a \texttt{GALAXY} with \texttt{ZWARNING==0} in this analysis. For more details regarding the eBOSS survey, we refer the reader to \citet[][]{dawson15a}. 
\subsection{Emission-line Galaxies}\label{sec:elgs} Emission-line galaxies are one of the four tracers targeted in eBOSS. The primary tracer of the large-scale structures in BAO observations (at $z\lesssim1$) has been LRGs \citep[][]{eisenstein01a}, because of their broadband brightness and well-understood SEDs. At higher redshift, however, most of the distinctive features of LRGs, such as the $4000\,$\AA\ break, G4300-band and MgH/Mg b absorption bands, are redshifted into the Meinel hydroxyl forest. Blueward of the $4000\,$\AA\ break, the cool giant stars dominating the continuum emit little flux, which makes spectral classification and redshift determination difficult. Moreover, the number density of red galaxies is lower at higher redshift due to galaxy evolution \citep[e.g., ][]{bell04a, faber07a, moustakas13a}, rendering them less useful for cosmological purposes. On the other hand, the cosmic star-formation rate (SFR) density increases precipitously with redshift, and at $z\sim1$, is about an order-of-magnitude higher than at present \citep[e.g., ][]{madau96a, hopkins04a, madau14a}. Star-forming galaxies (SFGs) exhibit the strong [\ion{O}{2}]\,$\lambda\lambda3727,3730$\ emission feature, which is a doublet distinguishable at a medium spectral resolution $\mathcal{R}\sim4000$. The number density of [\ion{O}{2}]\,$\lambda\lambda3727,3730$\ emitters increases steadily with redshift up to $z\sim2$ \citep[e.g., ][]{zhu09a, comparat15a, sobral15a}. The next-generation BAO surveys, e.g., DESI \citep[][]{schlegel11a, levi13a} and PFS \citep[][]{takada14a}, will primarily target these [\ion{O}{2}]\ emitters at redshift $z>1$. At redshift $z\lesssim1$, the strong emission lines of SFGs enhance their brightness in the optical (in the observer frame), facilitating their detection even in the relatively shallow SDSS imaging. 
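As a rough check of the claim that the [\ion{O}{2}]\,$\lambda\lambda3727,3730$\ doublet is distinguishable at a medium spectral resolution $\mathcal{R}\sim4000$, one can compare the velocity separation of the doublet components with the instrumental resolution element. The sketch below uses the rounded wavelengths quoted above (the precise vacuum wavelengths differ slightly):

```python
# Back-of-the-envelope check that the [O II] 3727,3730 doublet is resolvable
# at a spectral resolution of R ~ 4000. Wavelengths are the rounded values
# quoted in the text, used here purely for illustration.
C_KMS = 299_792.458          # speed of light, km/s

lam1, lam2 = 3727.0, 3730.0  # Angstrom
lam_mean = 0.5 * (lam1 + lam2)

# Velocity separation of the two doublet components.
dv_doublet = C_KMS * (lam2 - lam1) / lam_mean

# Instrumental resolution element (FWHM) at R = 4000.
dv_instrument = C_KMS / 4000.0

print(round(dv_doublet), round(dv_instrument))  # ~241 vs ~75 km/s
```

The doublet separation is roughly three resolution elements, consistent with the doublet being cleanly split at this resolution.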
\citet{comparat13a} demonstrated that it is feasible to conduct an ELG survey with the BOSS spectrographs that can achieve the number density and the volume coverage required for BAO measurements at $0.6\lesssim z \lesssim 1.0$, with targets selected from the SDSS imaging. As the continuum of SFGs is dominated by the emission of hot O/B stars in the rest-frame NUV and that of dust and polycyclic aromatic hydrocarbons (PAHs) in the IR, $u$/$U$-band and/or IR data can help improve the targeting efficiency for these objects. We have been working on optimizing the ELG selection strategies with additional data, including the $U$-band photometry from SCUSS, the IR photometry from \textit{WISE}, and the deeper $grz$ imaging with DECam. Favorable weather during SDSS-III led to an early completion of the BOSS survey. A fraction of the remaining time was allocated to an eBOSS pilot program known as the Sloan Extended Quasar, ELG, and LRG Survey \citep[SEQUELS,][]{alam15a}. In Fall 2014, within eBOSS, we also conducted a series of pilot observations to test possible techniques for the ELG target selection. With data from these pilot observations, the team is currently investigating different selection algorithms to maximize the targeting efficiency \citep[][]{raichoor15a, comparat15b, delubac15a, jouvel15a}. The ELG cosmology survey will begin in Fall 2016, the third year of eBOSS, with the optimal selection strategy to be defined from these pilot data and investigations. The survey aims at obtaining secure redshifts for about $200,000$ ELGs at $0.6\lesssim z \lesssim 1.0$ and measuring the BAO scale with an accuracy of about $2\%$ at an effective redshift $\left<z\right>\sim0.8$. For more details regarding the ELG target selections and the cosmological applications, we refer the reader to the references above and to \citet[][]{dawson15a} and \citet[][]{zhao15a}. 
\subsection{The ELGs from the eBOSS Pilot Observations}\label{sec:ebosselg} In total, the BOSS/SEQUELS ancillary program and the early eBOSS pilot observations provided about $12,000$ ELGs spanning $0<z\lesssim1.5$, peaked at $z\sim0.8$. We show the redshift distribution in Figure~\ref{fig:redshift}. In Figure~\ref{fig:fullcomposite}, we present the median composite spectrum of all the ELGs, in the wavelength range $2000<\lambda<7500\,$\AA. As we included \textit{all} the ELGs in this composite spectrum, the objects contributing at each wavelength are different, with high-redshift objects dominating in the NUV and low-redshift ones in the optical. The composite spectrum appears as a typical active SFG spectrum \citep[e.g., ][]{kennicutt92a}. We have labeled the prominent features in the figure. In the optical, it features strong hydrogen Balmer recombination emission on top of relatively weak stellar Balmer absorption, strong nebular forbidden lines due to collisionally-excited metal atoms, e.g., N, O, $\mathrm{N}^+$, $\mathrm{O}^+$, $\mathrm{S}^+$, $\mathrm{O}^{2+}$, $\mathrm{Ne}^{2+}$, and $\mathrm{Ar}^{2+}$. As the BOSS spectrographs cover the [\ion{O}{3}]$\,\lambda5008$ up to redshift $z\sim1.0$, we expect that eBOSS will provide an important opportunity for studies of the ISM properties, such as the gas-phase metallicity, in strong SFGs at moderate redshift. The full composite spectrum also displays some relatively weak stellar metal absorption lines, e.g., the Fraunhofer \ion{Ca}{2}\ H \& K, G4300-band, \ion{Mg}{1}\ b, and \ion{Na}{1}\ D lines. We summarize the identified lines in the list given in Appendix~\ref{app:atomic}. Comparat et al. (2015b) describes the full sample in detail and we refer the reader to that paper for more information. The mean precision of the redshift at $z\sim0.8$ is about $20\,\ensuremath{{\rm km\,s}^{-1}}$ thanks to the strength of the emission lines. 
From the SED fitting of the composite spectra with different line strengths, the average stellar mass $\left<M_*\right>$ at $z\sim0.8$ is about $10^{10}\,\ensuremath{\rm M_\odot}$. The investigations of the physical properties of individual galaxies, such as stellar mass ($M_*$), SFR, and metallicity, are ongoing and will be presented in future papers. The redshift coverage of the ELG observations at $z>0.6$ allows us to probe the NUV part of their SEDs, as shown in Figure~\ref{fig:fullcomposite}. The composite spectrum between $2900\,{\rm \AA}$ and the Balmer break at $3646\,{\rm \AA}$ is essentially featureless except for the weak \ion{He}{1}$\,\lambda3189$ emission line. Blueward of $2900\,{\rm\AA}$, we see strong absorption lines similar to those in intervening quasar absorption-line systems, \ion{Mg}{1}, \ion{Mg}{2}\ and \ion{Fe}{2}, as well as non-resonant \ion{Fe}{2}$^*$ emission and \ion{C}{2}]\ emission. The underlying stellar continuum rises towards higher energy, as is typical of hot O/B stellar spectra. We fit a power law, $f(\lambda)\propto\lambda^\beta$, to the continuum between $2000\,{\rm \AA}$ and $2200\,{\rm \AA}$ and obtain a slope $\beta\sim-2.1$, as expected for an SED dominated by O/B stellar emission in the UV. In the rest of the paper, we will focus on the absorption and emission features in the NUV. \begin{figure} \epsscale{1.28} \plotone{eBOSS_redshift_distribution.eps} \caption{The redshift distribution of all emission-line galaxies (ELGs) from the eBOSS pilot observations. 
The solid area shows $8620$ galaxies at $0.6<z<1.2$.} \vspace{0.2cm} \label{fig:redshift} \end{figure} \section{Near-ultraviolet Spectroscopy}\label{sec:results} \subsection{The NUV ELG Sample}\label{sec:nuvsample} \begin{sidewaysfigure*} \hspace{0.0in} \includegraphics[width=1.00\textheight]{eBOSS_composite_full_NIRadded.eps} \caption{The median composite spectrum of all ELGs (at $0<z\lesssim1.5$) at $2000\,\mathrm{\AA}<\lambda<7500\,\mathrm{\AA}$ from the eBOSS pilot observations. We label emission features in green, stellar absorption features in brown, and ISM/CGM absorption lines in black. The Greek symbols indicate hydrogen Balmer lines, which appear as nebular recombination emission on top of stellar absorption.} \vspace{-3.5in} \label{fig:fullcomposite} \end{sidewaysfigure*} The eBOSS pilot observations have obtained spectra for a large sample of ELGs at redshift $z>0.6$ with rest-frame NUV coverage, providing us with a good opportunity to investigate the gas processes associated with these objects. We here focus on the wavelength range between $2200\,{\rm \AA}$ and $4000\,{\rm \AA}$. We choose the shorter wavelength limit so as to cover \ion{Fe}{2}$\,\lambda2250$ and \ion{Fe}{2}$\,\lambda2261$, and the longer limit to cover [\ion{O}{2}]\,$\lambda\lambda3727,3730$. To ensure the same wavelength coverage for all the objects and thus the same contributing objects at all the wavelengths, we select ELGs between redshift $0.6$ and $1.2$, which include $8620$ objects. As discussed in Section~\ref{sec:ebosssurvey}, we use the reduction outputs based on the spectroscopic pipeline version \texttt{v5\_7\_8}, and consider only objects classified as a \texttt{GALAXY} with no redshift warning, i.e., \texttt{ZWARNING==0}. The classification selection automatically rejects broad-line active galactic nuclei (AGNs). We do not make further cuts in our sample selection. 
At the redshifts we are interested in, some of the lines required in the narrow-line AGN classification schemes \citep[e.g., ][]{baldwin81a}, such as H$\alpha$\ and [\ion{N}{2}]\,$\lambda6584$, are not covered by eBOSS. For those with [\ion{O}{3}]\,$\lambda5008$\ and H$\beta$\ measurements (at $z\lesssim1$), based on the blue optical color and line ratios \citep[e.g., ][]{yan11a, trouille11a}, we expect the fraction of narrow-line AGNs to be at most a few percent. We also do not expect many low-ionization nuclear emission-line regions \citep[LINERs,][]{heckman80a} in our sample. The ELGs are selected to be blue galaxies, while the majority of LINERs are found in red galaxies \citep[e.g., ][]{ho97a}. We therefore expect the line emission in the integrated spectra of the eBOSS ELGs to be dominated by contributions from SF activities and will use the terms ELGs and SFGs interchangeably. \subsection{The Method}\label{sec:method} The average S/N per pixel in the continuum region of the individual ELG spectra is low ($\lesssim1$) and does not allow precise measurements of the absorption features for single objects. To study the gas associated with the ELGs, we construct high-S/N composite continuum-normalized spectra. We use a median estimator, which is less prone to extreme outliers. However, we also tested our analysis with the arithmetic mean estimator and found consistent results, with differences in relative dependences smaller than $1\sigma$. For each observed spectrum, $F(\lambda)$, we first blueshift it back to its rest frame on a common wavelength grid. We choose the common wavelength grid to have the same logarithmic (or equivalently, velocity) spacing as in the observer frame, i.e., with $\ensuremath{{\rm d}} \log_{10} \lambda=10^{-4}$ or $\ensuremath{{\rm d}} v=69\,\ensuremath{{\rm km\,s}^{-1}}$. 
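The quoted velocity pixel follows directly from the logarithmic wavelength spacing, since ${\rm d}v = c\,\ln(10)\,{\rm d}\log_{10}\lambda$; a minimal numerical check:

```python
import math

# Speed of light in km/s.
C_KMS = 299_792.458

# Logarithmic wavelength spacing of the common grid, as in the text.
dlog10_lambda = 1e-4

# dv = c * ln(10) * d(log10 lambda), since dlambda/lambda = ln(10) * d(log10 lambda).
dv_kms = C_KMS * math.log(10.0) * dlog10_lambda
print(round(dv_kms, 1))  # ~69.0 km/s, matching the quoted pixel size
```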
In the blueshifting process, we interpolate the spectrum with the cubic-B spline method\footnote{As the spacing is identical before and after interpolation, linear interpolation yields almost the same results.}, as in the standard SDSS pipeline. We then mask out absorption and emission features and fit a cubic polynomial function through the rest of the spectrum. Using the best-fit polynomial function as an estimate of the underlying continuum, $\hat F_{\rm cont}(\lambda)$, we normalize the observed spectrum to obtain the continuum-normalized spectrum suited for absorption studies: \begin{equation} R(\lambda) \equiv \frac{F(\lambda)}{\hat F_{\rm cont}(\lambda)} \, \mathrm{.} \label{eq:residual} \end{equation} For a given sample, we construct a median composite spectrum of all the continuum-normalized spectra (in the rest frame). Finally, we fit a quadratic polynomial function to the composite spectrum, again with absorption and emission features masked out, to remove any large-scale residuals, though skipping this final step has a negligible effect on the results. We designate the final composite as $\left<R(\lambda)\right>$: \begin{equation} \left<R(\lambda)\right> \equiv \left<\frac{F(\lambda)}{\hat F_{\rm cont}(\lambda)}\right> \, \mathrm{.} \label{eq:composite} \end{equation} Since we mostly work with composite spectra, we will drop the ensemble symbol $\left<\,\right>$ in the text for simplicity. We quantify the absorption and emission strength in the continuum-normalized spectra with the rest equivalent width $W_0$. We define the rest equivalent widths for absorption and emission in such a way that they are both positive. 
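As an illustration only (not the actual pipeline; the toy spectrum, feature depth, and mask width are invented for the example), the blueshift, masked polynomial continuum fit, and normalization of Eq.~(\ref{eq:residual}) can be sketched as follows, together with a direct numerical rest equivalent width computed from $1-R(\lambda)$:

```python
import numpy as np

def continuum_normalize(wave_obs, flux_obs, z, mask, deg=3):
    """Blueshift a spectrum to its rest frame and divide out a masked
    cubic-polynomial continuum fit, mimicking the steps described above.
    `mask` is True on pixels free of absorption/emission features."""
    wave_rest = wave_obs / (1.0 + z)
    x = wave_rest - wave_rest.mean()          # center for numerical stability
    coeffs = np.polyfit(x[mask], flux_obs[mask], deg)
    continuum = np.polyval(coeffs, x)
    return wave_rest, flux_obs / continuum    # R(lambda) = F / F_cont

# Toy spectrum: flat continuum with one Gaussian absorption feature
# (depth 0.4, sigma 2 A at 2600 A rest frame; illustrative numbers only).
z = 0.8
wave_rest_true = np.linspace(2500.0, 2700.0, 400)
flux = 1.0 - 0.4 * np.exp(-0.5 * ((wave_rest_true - 2600.0) / 2.0) ** 2)
mask = np.abs(wave_rest_true - 2600.0) > 10.0

wave_rest, R = continuum_normalize(wave_rest_true * (1.0 + z), flux, z, mask)

# Rest equivalent width of the absorption feature: integral of 1 - R(lambda)
# over the line window, evaluated as a simple Riemann sum.
window = ~mask
dlam = wave_rest[1] - wave_rest[0]
W0 = np.sum(1.0 - R[window]) * dlam
print(round(W0, 1))  # ~2.0 A, close to the analytic 0.4 * 2 * sqrt(2 pi)
```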
For absorption, the rest equivalent width is given by \begin{equation} W^{\rm absorption}_0 \equiv \int_{\lambda_{\rm min}}^{\lambda_{\rm max}}\left[1-R(\lambda)\right]\,\ensuremath{{\rm d}} \lambda \,\mathrm{,} \label{eq:rewabs} \end{equation} and for emission, it is defined as \begin{equation} W^{\rm emission}_0 \equiv \int_{\lambda_{\rm min}}^{\lambda_{\rm max}}\left[R(\lambda)-1\right]\,\ensuremath{{\rm d}} \lambda \,\mathrm{,} \label{eq:rewemi} \end{equation} where the integration range ($\lambda_{\rm min}<\lambda<\lambda_{\rm max}$) encloses the absorption/emission profile. Throughout the paper, unless otherwise specified, we estimate the measurement uncertainties for a given sample by bootstrapping (i.e., with replacement) $100$ times. \begin{figure*}[t] \epsscale{1.2} \plotone{eBOSS_composite_withqsoabsorbers.eps} \caption{The composite continuum-normalized spectrum of $8620$ ELGs at $0.6<z<1.2$ (\textit{blue}). We show the rest-frame positions of emission features with vertical green dashed lines and those of ISM/CGM absorption lines with black dashed lines. The brown solid line marks the stellar photospheric absorption feature \ion{C}{3}$\,\lambda2298$. The red line shows the composite spectrum of $2310$ quasar intervening absorption-line systems with $\ensuremath{W_0^{\lambda2796}}>2\,\mathrm{\AA}$ in the same redshift range. 
} \label{fig:nuvcomposite} \vspace{0.05in} \end{figure*} \subsection{The Composite Continuum-normalized Spectrum}\label{sec:nuvcomposite} \begin{figure*} \epsscale{1.04} \plotone{eBOSS_composite_withqsoabsorbers_2350.eps} \vspace{-0.05in} \epsscale{1.04} \plotone{eBOSS_composite_withqsoabsorbers_2600.eps} \vspace{-0.05in} \epsscale{1.04} \plotone{eBOSS_composite_withqsoabsorbers_2800.eps} \vspace{-0.05in} \epsscale{1.04} \plotone{eBOSS_composite_withqsoabsorbers2.eps} \caption{The composite continuum-normalized spectrum of ELGs (\textit{blue}), zoomed in on prominent features and compared with the composite spectrum of quasar absorbers (\textit{red}) as in Figure~\ref{fig:nuvcomposite}. In addition, the bottom panel shows the spectra at $3000\,\mathrm{\AA}<\lambda<4000\,\mathrm{\AA}$, where we have shifted the quasar absorber composite upward by $0.2$ for clarity. } \label{fig:nuvfine} \end{figure*} Figure~\ref{fig:nuvcomposite} presents the median composite spectrum of the $8620$ ELGs at $0.6<z<1.2$ in the wavelength range $2200\,{\rm \AA}<\lambda<2900\,{\rm \AA}$. In Figure~\ref{fig:nuvfine}, we zoom in on the most prominent absorption and emission features for a more detailed illustration, and also include the part in the wavelength range $3000\,{\rm \AA}<\lambda<4000\,{\rm \AA}$. We have omitted the part between $2900\,{\rm \AA}$ and $3000\,{\rm \AA}$ since it is featureless (see Figure~\ref{fig:fullcomposite}). To guide the eye, we mark the rest-frame positions of the identified features. The line features in the NUV can be categorized into three primary groups\footnote{In the FUV, stellar wind features comprise another major class of spectral features.} based upon their origins: stellar photospheric absorption lines, nebular emission lines, and absorption/emission lines due to gas in the ISM and CGM. The last category includes combined effects caused by both the ISM and CGM and is the focus of this work. 
We briefly discuss the observed features in these groups before investigating the ISM/CGM features in detail. \vspace{0.15in} \noindent $\bullet$ \textbf{Stellar photospheric absorption features} \vspace{0.05in} At $2200\,{\rm \AA}<\lambda<2900\,{\rm \AA}$, we are able to identify only one photospheric absorption line, \ion{C}{3}\ at $2297.58\,$\AA. However, in subsampling exercises with bootstrapping and jackknife, we notice that there are some weak but persistent features that are not due to noise (also see Section \ref{sec:localsf} below). We do not identify these weak stellar features because the NUV part of the O/B star SEDs has not been sufficiently explored in either theory or observation. The most recent work was by \citet[][the UVBLUE library]{rodriguez05a}\footnote{\texttt{http://www.inaoep.mx/$\sim$modelos/uvblue/uvblue.html}}, who built a suite of stellar spectral templates in the NUV based on the atmospheric model code ATLAS9 \citep[][]{kurucz92a, castelli97a} and, for O/B stars, compared the model spectra with a few low-resolution spectra taken by \textit{IUE} in the 1980s. Although the shape of the underlying stellar continuum is well understood ($\beta\sim-2.0$, e.g., \citealt{kinney93a}, and Section~\ref{sec:ebosselg}), the absorption features in theoretical calculations and observations do not match each other. For example, we do not detect in our composite spectrum the \ion{O}{3}\ line at $2496\,$\AA\ predicted by the models, nor was it detected in the \textit{IUE} observations of O/B stars \citep[][]{fanelli92a, rodriguez05a}. At $3000<\lambda<4000\,$\AA, the prominent stellar absorption features are the well-known Balmer series at $\lambda>3646\,$\AA, and the \ion{Ca}{2}\ H ($3969.59\,$\AA) \& K ($3934.77\,$\AA) lines. The \ion{Ca}{2}\ doublet, however, likely has a large contribution from gaseous ${\rm Ca}^+$ in the ISM/CGM \citep[see, e.g., ][]{zhu13a, murga15a}. 
\vspace{0.15in} \noindent $\bullet$ \textbf{Nebular emission features} \vspace{0.05in} Between $2200$ and $2900\,$\AA, we identify two nebular emission features with high confidence: semi-forbidden \ion{C}{2}]\ at about $2326\,{\rm \AA}$ and forbidden [\ion{O}{2}]\ at about $2471\,{\rm \AA}$. The \ion{C}{2}]\ feature is a blend including five transitions from the doublet and the triplet between the ground state and the first excited state: \ion{C}{2}]$\,\lambda\lambda2324,2325$ and \ion{C}{2}]$\,\lambda\lambda\lambda2326, 2327, 2329$. The [\ion{O}{2}]\ feature is a doublet at $2470.90\,{\rm \AA}$ and $2471.10\,{\rm \AA}$, due to the transitions between the ground state and the second excited state of $\mathrm{O^+}$. For comparison, the [\ion{O}{2}]\,$\lambda\lambda3727,3730$\ doublet is due to the transitions between the ground state and the first excited state of $\mathrm{O^+}$. Both the \ion{C}{2}]\ and [\ion{O}{2}]\ emission lines were observed in \ion{H}{2}\ regions by \textit{IUE} \citep[][]{dufour87a}. Like the low-ionization nebular emission lines in the optical spectra of SFGs, e.g., [\ion{O}{2}]\,$\lambda\lambda3727,3730$, [\ion{N}{2}]\,$\lambda\lambda6548,6583$\ and [\ion{S}{2}]\,$\lambda\lambda6718,6733$, they must be dominated by the emission from \ion{H}{2}\ regions, where UV photons from O/B stars ionize the elements in the surrounding gas to low-ionization states \citep[e.g., ][]{stromgren39a}, though diffuse ionized gas in the ISM, e.g., the warm ionized medium \citep[WIM,][]{mckee77a}, also contributes to the integrated emission \citep[e.g., ][]{reynolds84a}. $\mathrm{C^+}$ can also be abundant in a cooler medium (e.g., photodissociation regions, or PDRs, \citealt{tielens85a}) because the ionization potential of neutral carbon ($11.26\,{\rm eV}$) is lower than that of hydrogen or oxygen (both $\sim13.6\,{\rm eV}$, see Appendix~\ref{app:atomic}). 
It is interesting to note that, though \ion{C}{2}]\ in the NUV has been little studied, the fine-structure emission of the $\mathrm{C^+}$ ground term at $157.7\,\ensuremath{\mu{\rm m}}$ in the IR is known to be a major coolant in the ISM \citep[][]{dalgarno72a}. [\ion{C}{2}]$\,\lambda157.7\,\ensuremath{\mu{\rm m}}$ can be observed in the submillimeter for objects at $z\sim1$ \citep[e.g., ][]{stacey10a}, and thus for the ELG targets in eBOSS and future BAO surveys. We tentatively identify [\ion{Ne}{4}]\ (not labeled) at about $2424\,{\rm \AA}$, a doublet at $2422.56\,{\rm \AA}$ and $2425.14\,{\rm \AA}$, due to the transitions between the ground state and the first excited state of the triply-ionized $\mathrm{Ne^{3+}}$. In SFGs, this doublet is observed mostly in planetary nebulae \citep[e.g., ][]{koeppen87a}, supernova remnants \citep[e.g., ][]{blair87a}, and other environments hotter than \ion{H}{2}\ regions, as the ionization potential of $\mathrm{Ne^{2+}}$ ($63.4\,{\rm eV}$) is larger than that of $\mathrm{He^{+}}$ ($54.4\,{\rm eV}$). At $3000<\lambda<4000\,$\AA, besides the Balmer recombination lines and [\ion{O}{2}]\,$\lambda\lambda3727,3730$, we also observe [\ion{Ne}{3}]\ at $\lambda=3869.86\,{\rm \AA}$. Highly sensitive to the ionization parameter, the [\ion{Ne}{3}]\ emission can be combined with [\ion{O}{2}]\ and used to probe the metallicity and other properties of the ISM \citep[][]{nagao06a, levesque14a}. There is also a \ion{He}{1}\ line at $3889.74\,$\AA\ (not labeled), which is blended with Balmer H$\zeta$. Between $2900\,{\rm \AA}$ and the Balmer break at $3646\,{\rm \AA}$, we only detect the weak \ion{He}{1}\ emission at $3188.67\,$\AA, which likely sits on top of the \ion{He}{1}\ absorption commonly associated with O/B stars \citep[e.g., ][]{morrison75a}. 
\begin{figure} \vspace{0.2in} \epsscale{1.15} \plotone{FeII_UV1.eps} \caption{The energy diagram for the transitions in the first UV multiplet group (UV1) of \ion{Fe}{2}, between the ground state and the first excited state. See Appendix A for a full description of the symbols and terms. } \vspace{0.03in} \label{fig:feiiuv1} \end{figure} \vspace{0.15in} \noindent $\bullet$ \textbf{ISM/CGM absorption and emission features} \vspace{0.05in} Most of the absorption lines between $2200$ and $2900\,$\AA\ are induced by gas in the ISM and/or the CGM, providing us with a unique tool to probe the diffuse gas otherwise elusive. From low to high energy, we identify the following lines due to different species: \vspace{-0.05in} \begin{itemize} \item \ion{Mg}{1}$\,\lambda2853$ (UV1); \item the \ion{Mg}{2}\,$\lambda\lambda2796,2804$\ doublet (UV1)\footnote{Compared to the absorption induced by gas in the ISM/CGM, we expect the intrinsic \ion{Mg}{1}$\,\lambda2853$ and \ion{Mg}{2}\,$\lambda\lambda2796,2804$\ absorption in the spectra of O/B stars to be very weak, even for metal-rich stars \citep[e.g., ][]{rodriguez05a}.}; \item the \ion{Mn}{2}\,$\lambda\lambda\lambda2577,2594,2606$\ triplet (UV1); \item \ion{Fe}{2}$\,\lambda2600$ and $\lambda2587$ (UV1), \ion{Fe}{2}$\,\lambda2383$ and $\lambda2374$ (UV2), \ion{Fe}{2}\,$\lambda2344$ (UV3), \ion{Fe}{2}\,$\lambda2261$ (UV4), and \ion{Fe}{2}\,$\lambda2250$ (UV5). \end{itemize} \vspace{-0.05in} Between $3000$ and $4000\,$\AA, the \ion{Ca}{2}\ H \& K lines also trace a significant fraction of gas in the ISM/CGM. All these absorption lines are also commonly seen in intervening quasar absorption-line systems (see next subsection). We identify four non-resonant \ion{Fe}{2}$^*$ emission lines with high confidence: \vspace{-0.05in} \begin{itemize} \item \ion{Fe}{2}$^*\,\lambda2626$ and $\lambda2613$ (UV1), \ion{Fe}{2}$^*\,\lambda2396$ (UV2), and \ion{Fe}{2}$^*\,\lambda2366$ (UV3). 
\end{itemize} \vspace{-0.05in} To illustrate the relationships between the resonant absorption and these non-resonant emission lines, we use the \ion{Fe}{2}\ UV1 multiplet group as an example. Figure~\ref{fig:feiiuv1} presents the energy-level diagram of \ion{Fe}{2}\ UV1, showing the transitions between the ground state and the first excited state of \ion{Fe}{2}. We refer the reader to Appendix~\ref{app:atomic} for a detailed description of the symbols and terms. \ion{Fe}{2}$\,\lambda2600$ is the transition between the lowest (ground) energy level (with $J=9/2$) of the ground state and the lowest level (with $J=9/2$) of the excited state. Because the second lowest energy level of the ground state has the total angular momentum number $J=7/2$, the excited electron after the resonant absorption has a high probability, i.e., a high Einstein $A$ coefficient, to spontaneously decay to this level, releasing a fluorescent photon at a slightly longer wavelength $\lambda=2626.45\,$\AA. The pair \ion{Fe}{2}$\,\lambda2586$ and \ion{Fe}{2}$^*\,\lambda2613$ also belongs to UV1, though with the second energy level of the excited state as the higher anchor level. The presence of these non-resonant emission lines implies that many of the resonant absorption lines above must be blended with emission filling in. For example, the Einstein $A$ coefficient for \ion{Fe}{2}$\,\lambda2600$ ($2.35\times10^8\,\mathrm{s}^{-1}$) is over six times higher than that for \ion{Fe}{2}$^*\,\lambda2626$ ($3.53\times10^7\,\mathrm{s}^{-1}$), so an excited electron has a higher probability to decay to the lowest level and release a \ion{Fe}{2}$\,\lambda2600$ photon, though if the \ion{Fe}{2}\ optical depth is high, the emitted photon will be absorbed again. The exact amount of emission infill depends on the optical depth of the relevant transitions and the balance between absorption and emission in the multiple-scattering process. 
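As a quick numerical check, the single-scattering decay probabilities implied by the two Einstein $A$ coefficients quoted above can be computed directly. This is a minimal sketch in Python, using only the two decay channels of the UV1 upper level mentioned in the text:

```python
# Single-scattering decay probabilities for the Fe II UV1 upper level
# (J = 9/2), p = A_ul / sum_i A_ui, using the two Einstein A coefficients
# quoted in the text.
A_RESONANT = 2.35e8      # s^-1, Fe II 2600 (decay back to the ground level)
A_FLUORESCENT = 3.53e7   # s^-1, Fe II* 2626 (decay to the J = 7/2 level)

total = A_RESONANT + A_FLUORESCENT
p_resonant = A_RESONANT / total
p_fluorescent = A_FLUORESCENT / total

print(f"p(Fe II 2600)  = {p_resonant:.3f}")    # ~0.87
print(f"p(Fe II* 2626) = {p_fluorescent:.3f}") # ~0.13
```

In the optically thin limit, then, roughly one in eight absorbed $\lambda2600$ photons is re-emitted as fluorescent $\lambda2626$; at high optical depth, repeated scatterings route a progressively larger fraction through the non-resonant channel.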
We find the velocity profiles of both absorption and emission lines to be asymmetric, skewed towards negative values, i.e., in the blueshift direction, indicating that more of the gas responsible for these features is flowing outwards than inwards. The profiles of the emission lines appear to be similar, while the degree of asymmetry of the absorption profiles varies from line to line. For example, compared to \ion{Fe}{2}$\,\lambda2586$ or \ion{Fe}{2}$\,\lambda2600$ as shown in the second panel in Figure~\ref{fig:nuvfine}, the \ion{Mg}{2}\,$\lambda\lambda2796,2804$\ lines are blueshifted from their rest-frame positions by a larger amount. Considering these species have similar ionization potentials ($7.90\,{\rm eV}$ for neutral Fe and $7.64\,{\rm eV}$ for Mg) and likely co-exist spatially, the difference must be due to a larger amount of emission infill on top of the \ion{Mg}{2}\ absorption. For a better understanding of the absorption and emission in the composite spectrum of ELGs, we present a comparison with a composite spectrum of intervening quasar absorption-line systems in Section~\ref{sec:qsoabsorber}, and with a composite spectrum of local SF regions in Section~\ref{sec:localsf}. We will then investigate the physical processes with a gas flow model in Section~\ref{sec:interpretation}. \subsection{Comparison with Intervening Absorbers}\label{sec:qsoabsorber} \begin{figure*} \epsscale{1.20} \plotone{eBOSS_composite_withlocalsfsb.eps} \caption{The composite continuum-normalized spectrum of ELGs (\textit{blue}), the same as in Figure~\ref{fig:nuvcomposite} but compared with the composite spectrum of local star-forming regions (\textit{red}) from \citet{leitherer11a}. } \label{fig:localsf} \vspace{0.05in} \end{figure*} The composite spectrum of ELGs exhibits the absorption lines commonly seen in intervening quasar absorption-line systems. 
It is interesting to compare the so-called ``down-the-barrel'' spectra of ELGs with the intervening quasar absorber spectra. We select the absorbers from the JHU-SDSS metal absorber catalog \citep[][]{zhu13b}, updated to the 12th Data Release \citep[DR12,][]{alam15a}\footnote{\texttt{http://www.pha.jhu.edu/$\sim$gz323/jhusdss}}. We choose strong absorbers with \ion{Mg}{2}$\,\lambda2796$ rest equivalent width $\ensuremath{W_0^{\lambda2796}}>2\,$\AA, because there has been evidence showing that a large fraction of these strong absorbers are physically associated with the CGM of strong SF galaxies \citep[e.g., ][]{bergeron91a, norman96a, bouche07a, nestor11a, lan14a}. We select all $2310$ such strong absorbers at $0.6<z<1.2$, the same redshift range as the ELGs, and construct a median composite continuum-normalized spectrum. For more details regarding the quasar absorbers and how we estimate the continua of background quasars, we refer the reader to \citet{zhu13b}. We overplot the composite spectrum of quasar absorbers in red in Figures~\ref{fig:nuvcomposite} and \ref{fig:nuvfine}. Note the absorber spectra are based on luminous quasar spectra from SDSS I-III, so the S/N of their composite is orders of magnitude higher than that of the ELG composite. When constructing the composite spectrum, we shifted the absorber spectra to the rest frame of the absorbers, so the absorption profiles are centered on their rest-frame positions. We find two major differences between the ELG composite and the quasar absorber composite: (1) There is no detectable non-resonant emission in the quasar absorber composite, even though its S/N is orders of magnitude higher. (2) The ratios of the absorption lines are different. For example, the strength of \ion{Fe}{2}$\,\lambda2344$ is similar in both composites, while \ion{Fe}{2}$\,\lambda2383$ and \ion{Mg}{2}\,$\lambda\lambda2796,2804$\ are much stronger in the quasar absorber one. 
We note that the line ratios of quasar absorbers depend both on the strength ($\ensuremath{W_0^{\lambda2796}}$) and redshift (e.g., Figure~{3} in \citealt{dey15a}), but at $\ensuremath{W_0^{\lambda2796}}>2\,$\AA\ and a given redshift, the strength dependence is weak and has no effect on any of our conclusions. In addition to the two main differences, we do not detect \ion{Ti}{2}\ absorption lines in the ELG composite, which we suspect is due to the limited sample size and the low S/N. In Section~\ref{sec:interpretation}, we show that the line ratio difference is caused by the emission infill present in the ELG composite but absent in the quasar absorber composite. \subsection{Comparison with Local Star-forming Regions}\label{sec:localsf} Spectroscopic observations of SFGs, or astronomical sources in general, have been scarce in the NUV. \citet{leitherer11a} compiled a UV spectroscopic atlas\footnote{\texttt{http://www.stsci.edu/science/starburst/templ.html}} of \textit{local} SF galaxies observed with the Faint Object Spectrograph (FOS) and the Goddard High Resolution Spectrograph (GHRS) on \textit{HST}. The atlas includes small-(physical) aperture spectra of 15 regions of nine SFGs with coverage between $2200\,{\rm \AA}$ and $3200\,{\rm \AA}$, providing a rare opportunity for a direct comparison of the integrated NUV SEDs at different physical scales. From this compilation, we select nine spectra of six galaxies with relatively high S/N and strong absorption: ${\rm NGC}\,1569$, ${\rm NGC}\,2403$ (all $3$ spectra), ${\rm NGC}\,4569$, ${\rm NGC}\,5055$, ${\rm NGC}\,5253$ ($1$ and $3$), and ${\rm NGC}\,5457$ (${\rm NGC}\,5455$). Note the different spectra of one galaxy are independent from each other, originating from different SF regions in that galaxy. For details regarding their atlas, we refer the reader to \citet{leitherer11a}. 
With the selected nine spectra, we then construct the composite continuum-normalized spectrum following the same procedure as for the eBOSS ELGs. Before a careful comparison, we here emphasize several characteristics of the observations: \vspace{-0.05in} \begin{itemize} \item The aperture sizes of \textit{HST} FOS/GHRS are one to a few arcseconds, and for all the nine spectra, correspond to physical sizes smaller than $40\,{\rm pc}$ ($2-37\,{\rm pc}$, see Table~$6$ in \citealt{leitherer11a}). They are nearly three orders of magnitude smaller than the aperture size for the eBOSS ELGs at $0.6<z<1.2$, which is about $15\,{\rm\,kpc}$ (for $2\,\arcsec$)\footnote{We note that the average seeing at the SDSS telescope is about $1.5\,\arcsec$.}. \item The wavelength calibration of the FOS/GHRS spectra is largely based on the absorption lines induced by the gas in the ISM in the Milky Way, which are often blended with the lines of the low-redshift extragalactic sources. We find that, based on the position of \ion{C}{3}$\,\lambda2298$, the composite spectrum is offset to longer wavelength by about $0.7\,$\AA\ (about $91\,\ensuremath{{\rm km\,s}^{-1}}$); we therefore correct the wavelength by $-0.7\,$\AA. \item The spectral resolutions of FOS/GHRS are one to a few hundred $\ensuremath{{\rm km\,s}^{-1}}$, a factor of $2-4$ lower than that of the BOSS spectrographs. \end{itemize} \vspace{-0.05in} The last two characteristics prevent us from making a quantitative comparison of the observed velocity profiles. In Figure~\ref{fig:localsf}, we compare the ELG composite (in blue) with the composite spectrum of the nine local SF regions (in red) at $2200\,{\rm \AA}<\lambda<2900\,{\rm \AA}$. We present a zoomed-in version in Appendix B. The two spectra share common absorption features across the wavelength range, including both the stellar photospheric absorption line \ion{C}{3}$\,\lambda2298$ and the absorption lines due to the ISM/CGM. 
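The velocity offset quoted in the list above follows from the non-relativistic Doppler relation $\Delta v \approx c\,\Delta\lambda/\lambda$; a one-line sanity check in Python:

```python
# Wavelength-offset correction: 0.7 A at the C III 2298 line, in km/s.
C_KMS = 299792.458                # speed of light, km/s
dv = C_KMS * 0.7 / 2298.0         # dv = c * dlambda / lambda
print(f"offset = {dv:.1f} km/s")  # -> 91.3 km/s, i.e., "about 91 km/s"
```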
We also observe some common weak absorption lines that must be intrinsic to the underlying stellar continuum. As discussed in Section~\ref{sec:nuvcomposite}, we do not identify these weak lines yet since they are still poorly-understood. We find the following main differences: (1) There is no detectable line emission, either the nebular lines (\ion{C}{2}], [\ion{O}{2}]) or the non-resonant lines (\ion{Fe}{2}$^*$), in the composite of local SF regions. (2) The absorption line ratios are different. For instance, the strength of \ion{Mg}{2}\,$\lambda\lambda2796,2804$\ is similar in both composites, while \ion{Fe}{2}$\,\lambda2586$ and \ion{Fe}{2}$\,\lambda2600$ are about $50\%$ weaker in the composite of local SF regions. We note that in the composite of local SF regions the ratios of the lines are similar to those in the quasar absorber composite, though the absolute absorption strength is about $50\%$ weaker. \begin{figure*}[t] \epsscale{1.10} \plotone{Outflow_Model_small.eps} \vspace{-0.10in} \caption{The spherical outflow model. The color scale indicates the mean line-of-sight velocity of emission/absorption at a given position, assuming velocity $v(r) \propto r^{\alpha}$, where $\alpha=2$ is arbitrarily chosen. Resonant absorption takes place in front of the background light source (stars). Fluorescent photons, resonant or non-resonant, are scattered isotropically and only those scattered into the line of sight can be captured. The aperture size of the eBOSS fibers is $2\,\arcsec$, corresponding to about 15 {\rm\,kpc}\ at $0.6<z<1.2$. The aperture sizes of \textit{HST} FOS/GHRS are one to a few arcseconds, corresponding to less than $40\,$pc for the local star-forming regions observed. See \citet{scarlata15a} for a similar model. } \vspace{0.05in} \label{fig:model} \end{figure*} The reasons for the non-detection of different emission lines are likely different. 
The [\ion{O}{2}]\ doublet at $2471\,{\rm \AA}$, mostly associated with \ion{H}{2}\ regions, is very weak compared to its lower-energy counterpart [\ion{O}{2}]\,$\lambda\lambda3727,3730$\ and may be buried in the noise. The \ion{C}{2}]\ emission, which is strong in the ELG spectra, is more extended than \ion{H}{2}\ regions (as observed through the fine-structure emission [\ion{C}{2}]$\,\lambda157.7\,\ensuremath{\mu{\rm m}}$, e.g., \citealt{pineda13a}) because of the lower ionization potential of neutral carbon, and the small (physical) apertures of FOS/GHRS did not capture enough \ion{C}{2}]\ photons. The non-resonant \ion{Fe}{2}$^*$ lines are likewise not detected because the emission is extended, though the emission mechanism is different from \ion{C}{2}]. As in the comparison with quasar absorbers, the difference in absorption line ratios must be due to emission infill in the ELG spectra. We present a more detailed discussion below in the context of a gas outflow model. \section{Interpretation}\label{sec:interpretation} We have shown that the NUV composite spectrum of eBOSS ELGs displays preferentially blueshifted absorption, induced by neutral and singly-ionized species, \ion{Mg}{1}, \ion{Mg}{2}, and \ion{Fe}{2}. In addition, we detected non-resonant \ion{Fe}{2}$^*$ emission lines, which also exhibit a preferentially blueshifted profile and are not detected in either quasar absorber spectra or small-aperture spectra of local SF regions. These observed properties indicate that the gas causing the absorption and emission is predominantly flowing outwards, and the outflows must be driven by the strong SF activity in the ELGs and extend to large galactic scales. Galactic-scale outflows driven by star formation have been observed for over half a century, such as from the starburst galaxy M82 \citep[e.g., ][]{lynds63a, bland88a}. 
The physics of galactic winds has been extensively studied \citep[e.g., ][]{heckman90a, heckman00a}, though our understanding is not yet conclusive due to the complexity of the baryonic processes involved. For our purposes, we circumvent some of the complex processes, such as the origins and properties of the wind and gas; instead, we introduce a phenomenological gas outflow model and interpret our data with an observation-driven approach in the context of the model. \subsection{A Spherical Gas Outflow Model}\label{sec:model} We describe the model in three steps. First, we present the key geometrical and physical characteristics of the model. Second, we consider the properties of the observations of the integrated spectra along the line of sight. Finally, we discuss quantitatively the radiative transfer processes and the model predictions for the SFG observations. \vspace{0.15in} \noindent $\bullet$ \textbf{Basics of the gas outflow model} \vspace{0.15in} Because of the statistical nature of our composite analysis, we construct our model for the average of an ensemble of galaxies, not for a single source. We illustrate the model in Figure~\ref{fig:model}\footnote{The image in the center is a composite image of the central region of M82 with different orientations. The original image is from \texttt{http://hubblesite.org/gallery/album/galaxy/pr2006014a/}, courtesy of NASA, ESA, and The Hubble Heritage Team (STScI/AURA).} and first emphasize the following two key characteristics. \vspace{0.05in} $\bullet$ Spherical symmetry -- A basic outflow model for an individual galaxy includes a density profile $n(\vec{r})$ (in number) and a velocity profile $v(\vec{r})$, both as a function of the vector position $\vec{r}$. Observations have shown that the large-scale outflows driven by star formation are often bipolar, in the form of an expanding envelope \citep[e.g., ][]{heckman90a}. 
In our composite analysis, we are averaging over randomly distributed orientations, and it is reasonable to assume spherically symmetric profiles $n(r)$ and $v(r)$. $\bullet$ Velocity distribution -- Also due to the statistical nature of our approach, at a given galactocentric distance, there is a distribution of velocities. If gas accretion takes place around the ELGs and if the infalling gas is enriched with the species we are interested in, it will also induce absorption and contribute to the statistical signatures. At small scales, the gas in the ISM also contributes to the absorption, and its motions, ordered or disordered, also affect the observed velocity distribution. \vspace{0.05in} A statistical gas flow model therefore includes an average density profile, $n(r)$, an average velocity profile, $v(r)$, and a velocity dispersion profile, $\sigma(r)$. We expect the direction of the average velocity $v(r)$ to be outwards as we expect there is more gas flowing outwards than inwards. The velocity dispersion then accounts for the contributions at different velocities from outflows, inflows, and motions of the ISM at small scales. In addition, observationally, we need to consider the finite instrumental resolution and redshift precision, which can be effectively included in the velocity dispersion through convolution. For the eBOSS ELGs, the mean spectral resolution is about $\left<\mathcal{R}\right>\sim2000$ ($60$ - $70\,\ensuremath{{\rm km\,s}^{-1}}$) and the mean redshift precision is about $20\,\ensuremath{{\rm km\,s}^{-1}}$ at redshift $z\sim0.8$. Our model is in principle similar to the one introduced by \citet[][see also \citealt{rubin11a, prochaska11a}]{scarlata15a}, which the authors used as the basis of their radiative transfer simulations to interpret space-based observations in the FUV, though here we emphasize the statistical aspect of our model. 
Considering the sample size and S/N of our current data, we cannot yet place strong constraints on the model details, e.g., the functional form of the profiles or the parameters. In Figure~\ref{fig:model}, for illustration purposes, we assume a power law for the average velocity profile $v(r) \propto r^{\alpha}$ with $\alpha=2$, which is arbitrarily chosen. We leave the full modeling to future work, and focus on the general properties of the model predictions below. \vspace{0.15in} \noindent $\bullet$ \textbf{General model predictions} \vspace{0.15in} Based on the model, we can predict the following general properties of the absorption and emission features in the integrated spectra along the line of sight. \vspace{-0.05in} \begin{itemize} \item[i.] Origins and aperture dependence -- In the model, the observed absorption and emission have different origins. The absorption is induced by gas in front of the background light source, e.g., the stellar populations inside the galaxy. The re-emitted (fluorescent) photons, resonant or non-resonant, are scattered isotropically, so the observed emission originates from gas located everywhere within the aperture, except from behind the galaxy due to occultation (see below). The strength of the emission therefore increases with the aperture size until the aperture encloses all the absorbing gas, while the strength of absorption depends little on the aperture size unless the column density varies significantly across the galaxy. \item[ii.] Net effect -- The sum of absorption and emission in a given set of transitions, including all the resonant and non-resonant channels, would be zero if and only if the observer could collect all the re-emitted photons scattered into the line of sight with a very large aperture. However, because of the finite size of a galaxy, the photons behind the galaxy cannot penetrate the high-density regions of the galaxy to be captured. Following \citet{scarlata15a}, we refer to this effect as occultation. 
The finite size of galaxies makes the outflow model more complicated than that for stellar winds in which the star can be considered as a point source. The net effect of absorption and emission is therefore \textit{always} absorption. At redshift $z\sim0.8$, the typical effective \textit{radius} of SFGs with $M_*\sim10^{10}\,\ensuremath{\rm M_\odot}$ is about $2-4\,{\rm\,kpc}$ \citep[e.g., ][]{williams10a, wuyts12a}. \item[iii.] Velocity profiles -- If outflows are dominant (compared to inflows), the observed emission profile is asymmetric and preferentially blueshifted due to occultation of the redshifted emission behind the galaxy. The profile of the absorption (even without emission infill) is also blueshifted, since it is only induced by the gas in front of the galaxy's stellar populations (the light source). We expect the degree of blueshift to be smaller for the emission profiles than for the absorption, because the re-emitted photons are scattered isotropically and only a negligible fraction of photons originating from the observed absorption are scattered into the line of sight, while other re-emitted photons within the aperture are less blueshifted. \item[iv-1.] Emission infill -- Like emission via the non-resonant channels, the model also predicts re-emitted photons via the resonant channels, producing emission filling in on top of the absorption. The emission infill is not sufficient to compensate for all the absorption, so an observer always sees absorption (see point ii). However, because the emission profile is less blueshifted than absorption (point iii), the \textit{observed} absorption profile is more blueshifted than the ``\textit{true}'' absorption profile before emission infill. If the emission and absorption profiles are significantly different, e.g., due to large outflow velocities or large aperture (relative to the occultation), we also expect to see P-Cygni-like profiles. 
The amount of emission infill depends on the aperture size, the galaxy (occultation) size, the permitted channels and their transition probabilities, and the optical depth, which determines whether the observed emission originates from a single or multiple scattering process and the relative strength of different channels. The degree of emission infill (relative to the absorption) is therefore different from line to line. Below we discuss this quantitatively for the lines in the NUV. \end{itemize} \vspace{-0.05in} \vspace{0.15in} \noindent $\bullet$ \textbf{General radiative transfer processes} \vspace{0.15in} In an expanding envelope, if the velocity gradient is large, the radiative transfer processes can be treated under the Sobolev approximation \citep[][]{sobolev60a, castor70a, rybicki78a}. \citet[][see also \citealt{prochaska11a}]{scarlata15a} included an excellent discussion of the general radiative transfer processes involved in a galactic outflow, and \citet[][]{prochaska11a} also pioneered theoretical considerations of the NUV transitions. We refer the reader to those papers for more details. We here present a summary of the formulae most relevant to our analysis. \vspace{0.05in} \noindent $\bullet$ Absorption: In the Sobolev approximation, at a given position, the optical depth is given by \begin{equation} \tau(r) = \frac{\pi e^2}{m_e c} f_{lu} \lambda_{lu} n_l(r)\left|\frac{\ensuremath{{\rm d}} v(r)}{\ensuremath{{\rm d}} r}\right|^{-1} \,\mathrm{,} \end{equation} which is proportional to the density $n_l(r)$ at the lower level, the rest-frame wavelength of the transition $\lambda_{lu}=\lambda_{ul}$, oscillator strength $f_{lu}$, and the inverse of the velocity gradient (i.e., the thickness of the thin shell with the same velocity). We have ignored stimulated emission and angular dependence. 
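The Sobolev optical depth above can be evaluated numerically in CGS units. In the sketch below, the wind parameters (the velocity gradient and the ground-level $\mathrm{Fe^+}$ density) are arbitrary assumptions chosen purely for illustration, and $f_{lu}\approx0.24$ is a representative oscillator strength for \ion{Fe}{2}$\,\lambda2600$, not a fit to the data:

```python
# Sobolev optical depth: tau = (pi e^2 / m_e c) * f_lu * lambda_lu * n_l / |dv/dr|,
# evaluated in CGS units with assumed, illustrative wind parameters.
import math

E_CHARGE = 4.8032e-10   # electron charge, esu
M_E      = 9.1094e-28   # electron mass, g
C_LIGHT  = 2.9979e10    # speed of light, cm/s

def sobolev_tau(f_lu, wavelength_cm, n_l, dv_dr):
    """Sobolev optical depth for a transition of oscillator strength f_lu."""
    sigma0 = math.pi * E_CHARGE**2 / (M_E * C_LIGHT)  # ~0.02654 cm^2 Hz
    return sigma0 * f_lu * wavelength_cm * n_l / abs(dv_dr)

# Assumed numbers: velocity gradient of 200 km/s over 5 kpc, and a
# ground-level Fe+ density of 1e-8 cm^-3.
KPC = 3.086e21                   # cm
dv_dr = 2.0e7 / (5.0 * KPC)      # s^-1
tau = sobolev_tau(0.24, 2600e-8, 1e-8, dv_dr)
print(f"tau ~ {tau:.2f}")        # ~1.3 for these assumed numbers
```

The point of the exercise is the scaling: for fixed column, a steeper velocity gradient thins each resonant shell and lowers the optical depth at every velocity.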
Along the line of sight, the optical depth above applies to the velocity $v(r)$, or equivalently, the wavelength $\lambda_r=\lambda_{lu}[1-v(r)/c]$, at which the absorption is given by $R_r=e^{-\tau(r)}$. \vspace{0.05in} \noindent $\bullet$ Emission: In a single-scattering event, after the absorption of a photon, the probability that the excited electron can decay from the upper level ($u$) to a given lower level ($l$) is given by \begin{equation} p_{ul} = \frac{A_{ul}}{\sum\nolimits_{i} A_{ui}}\,\mathrm{,} \label{eq:single} \end{equation} where $A_{ui}$ is the spontaneous emission coefficient from the upper level $u$ to the lower level $i$, and the summation is over all the possible channels in the lower state. The above equation ignores stimulated emission, collisional (de-)excitation, and also fine-structure emission within the same state. If the electron decays to the original level, in our case, the lowest level in the lower state, the re-emitted photon can be absorbed again, resulting in a multiple-scattering process. The escape probability of a resonant photon from a shell of optical depth $\tau(v)$ is given by \citep[][]{mathis72a} \begin{equation} \beta_{esc} = \frac{1-e^{-\tau}}{\tau}\,\mathrm{,} \end{equation} and the fraction of the absorbed photons that are eventually re-emitted via a non-resonant ($nr$) channel to a lower level $l$ is \begin{eqnarray} f_{nr,l}(\tau) & = & p_{nr,l}\sum\limits_{n=0}^{\infty}\left[p_r(1-\beta_{esc})\right]^{n} \nonumber \\ & = & \frac{p_{nr,l}}{1-p_r(1-\beta_{esc})}\,\mathrm{,} \end{eqnarray} where $p_{nr, l}$ and $p_{r}$ are the probabilities of decaying to the non-resonant lower level $l$ and the resonant lowest level, respectively (Eq.~\ref{eq:single}), and we have omitted the upper level symbol $u$ for simplicity. 
The fraction of the absorbed photons that are eventually re-emitted via the resonant channel and \textit{escape} from the shell is \begin{eqnarray} f_{r}(\tau) & = & p_{r}\beta_{esc}\sum\limits_{n=0}^{\infty}\left[p_r(1-\beta_{esc})\right]^{n} \nonumber \\ & = & \frac{p_{r}\beta_{esc}}{1-p_r(1-\beta_{esc})}\,\mathrm{.} \label{eq:multiple} \end{eqnarray} When the optical depth of a given shell $\tau(v)$ is shallow and the escape probability $\beta_{esc}\approx1$, we reach the single-scattering approximation, i.e., Eq.~\ref{eq:single}: \begin{eqnarray} f_{nr,l} & \approx & p_{nr,l} \,\mathrm{, and }\nonumber \\ f_{r} & \approx & p_{r} \,\mathrm{.} \label{eq:shallow} \end{eqnarray} When the optical depth $\tau(v)$ is deep and the escape probability $\beta_{esc}\approx0$, all the re-emitted photons will be through the non-resonant channels, if there are any, with: \begin{eqnarray} f_{nr,l} & \approx & \frac{p_{nr,l}}{1-p_{r}} \,\mathrm{, and }\nonumber \\ f_{r} & \approx & 0 \,\mathrm{.} \label{eq:deep} \end{eqnarray} Note in both cases, summing over all channels gives \begin{equation} \sum\nolimits_{nr, i} f_{nr, i} + f_{r} = 1 \,\mathrm{.} \end{equation} \vspace{0.15in} \noindent $\bullet$ \textbf{Radiative transfer processes in the NUV} \vspace{0.15in} With the formalism above, we now investigate quantitatively the absorption and emission lines in the NUV and their correlations in the context of the outflow model. 
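The multiple-scattering fractions and their shallow- and deep-limit behavior can be verified numerically. This is a minimal sketch assuming a two-channel system with branching probabilities $p_r=0.87$ and $p_{nr}=0.13$, roughly the \ion{Fe}{2}$\,\lambda2600$/\ion{Fe}{2}$^*\,\lambda2626$ pair given the Einstein $A$ coefficients quoted earlier:

```python
# Escape probability and re-emission fractions after summing the
# multiple-scattering geometric series (Sobolev shell of optical depth tau).
import math

def beta_esc(tau):
    """Escape probability of a resonant photon from a shell of optical depth tau."""
    return (1.0 - math.exp(-tau)) / tau

def fractions(p_r, p_nr, tau):
    """f_r and f_nr: fractions re-emitted via the resonant / non-resonant channel."""
    denom = 1.0 - p_r * (1.0 - beta_esc(tau))
    return p_r * beta_esc(tau) / denom, p_nr / denom

p_r, p_nr = 0.87, 0.13   # assumed two-channel branching probabilities
for tau in (0.01, 1.0, 100.0):
    f_r, f_nr = fractions(p_r, p_nr, tau)
    # Shallow limit: f_r -> p_r, f_nr -> p_nr.
    # Deep limit:    f_r -> 0,   f_nr -> p_nr / (1 - p_r).
    print(f"tau = {tau:6.2f}: f_r = {f_r:.3f}, f_nr = {f_nr:.3f}, "
          f"sum = {f_r + f_nr:.3f}")
```

The sum equals unity at every optical depth because each absorbed photon is eventually re-emitted through some channel; only its routing between the resonant and non-resonant channels shifts with the optical depth.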
We focus on eight resonant absorption lines, among which four have non-resonant channels: \vspace{-0.05in} \begin{itemize} \item \ion{Fe}{2}$\,\lambda2600$ with \ion{Fe}{2}$^*\,\lambda2626$ (UV1), \item \ion{Fe}{2}$\,\lambda2587$ with \ion{Fe}{2}$^*\,\lambda2613$ and $\lambda2632$ (UV1), \item \ion{Fe}{2}$\,\lambda2374$ with \ion{Fe}{2}$^*\,\lambda2396$ (UV2), \item \ion{Fe}{2}$\,\lambda2344$ with \ion{Fe}{2}$^*\,\lambda2366$ and $\lambda2381$ (UV3), \end{itemize} \vspace{-0.05in} and the other four do not: \vspace{-0.05in} \begin{itemize} \item \ion{Mg}{1}$\,\lambda2853$, \ion{Mg}{2}$\,\lambda2804$, \ion{Mg}{2}$\,\lambda2796$, and \ion{Fe}{2}$\,\lambda2383$ (UV2). \end{itemize} \vspace{-0.05in} We do not consider \ion{Fe}{2}$\,\lambda2261$ (UV4), \ion{Fe}{2}$\,\lambda2250$ (UV5), and \ion{Mn}{2}\,$\lambda\lambda\lambda2577,2594,2606$\ here because of their lower S/N in the data. We present the relevant atomic data in Appendix~\ref{app:atomic}. We note that we do not detect \ion{Fe}{2}$^*\,\lambda2632$ from UV1, whose Einstein $A$ coefficient is about half that of \ion{Fe}{2}$^*\lambda2613$, and \ion{Fe}{2}$^*\,\lambda2381$ from UV3 has Einstein $A$ coefficient about half that of \ion{Fe}{2}$^*\,\lambda2366$ and is blended with \ion{Fe}{2}$\,\lambda2383$ from UV2. Within the context of the outflow model, we can now estimate the degree of the emission infill effect, i.e., the ratio of emission to absorption in the observed spectra, for the eight absorption lines in the NUV. \vspace{-0.05in} \begin{itemize} \item[iv-2.] Degree of emission infill -- The effect of emission infill depends on two main factors. For those with non-resonant channels, it depends on the fraction of resonant emission ($f_r$). 
In the single-scattering approximation, we have $f_r \approx p_r$ (Eqs.~\ref{eq:shallow} and \ref{eq:single}) and with the atomic data from Appendix~\ref{app:atomic}, we have \begin{eqnarray} & p^{\lambda2374}_{\rm Fe\,II}<p^{\lambda2587}_{\rm Fe\,II}<p^{\lambda2344}_{\rm Fe\,II}<p^{\lambda2600}_{\rm Fe\,II}<p_{\rm no\,nr}=1 \,\mathrm{.} \nonumber \\ & \label{eq:ordernonresonant} \end{eqnarray} When multiple-scattering events are considered (Eq.~\ref{eq:multiple}), this order does not change though the relative difference is smaller. We expect the effect of the emission infill to follow the same order. When there is no non-resonant channel, it mainly depends on the degree of saturation -- the more saturated the absorption line is, the larger an effect the emission infill has on the observation. Note the observed emission and absorption have different origins (point i). Based upon the elemental abundance \citep[][]{asplund09a} and oscillator strength (Appendix~\ref{app:atomic}), the degree of saturation should be in the order of absorption strength as \begin{equation} W^{\lambda2853}_{\rm Mg\,I}<W^{\lambda2383}_{\rm Fe\,II}<W^{\lambda2804}_{\rm Mg\,II}<W^{\lambda2796}_{\rm Mg\,II}\,\mathrm{.} \label{eq:orderresonant} \end{equation} Finally, we expect the degree of emission infill, manifested by the degree of blueshift (due to the difference in the profiles of emission and absorption (point iii)) and the change in the observed absorption strength, to follow the same order given by Eqs.~\ref{eq:ordernonresonant} and \ref{eq:orderresonant}. \end{itemize} \vspace{-0.05in} For a direct comparison of observations with the model predictions, it is necessary to understand what the true absorption profiles (before the emission infill) are. In the next section (\ref{sec:trueabsorption}), we introduce an observation-driven method to reveal the true absorption profiles. 
\subsection{Revealing the True Absorption Profiles}\label{sec:trueabsorption} To reveal the true absorption profiles, we make two assumptions: \vspace{-0.03in} \begin{enumerate} \item All the emission lines share the same normalized velocity profile. \item All the absorption lines share the same true normalized velocity profile, i.e., prior to the emission infill. \end{enumerate} \vspace{-0.03in} In both cases, the lines are normalized to have the same amplitude. We make these assumptions according to our understanding of quasar absorption-line systems from high-S/N, high-resolution spectroscopic observations, which show that \ion{Fe}{2}\ and \ion{Mg}{2}\ (as well as \ion{Mg}{1}\ when detected) usually trace each other \citep[e.g., ][]{churchill00a}. We consider it reasonable to extrapolate these results to galaxy absorption lines. Since we work with the observed profile, i.e., $R(\lambda)=e^{-\tau(\lambda)}$, rather than the optical depth $\tau(\lambda)$, saturation plays an important role, and we will treat the \ion{Mg}{2}\ lines separately in our method. Under the assumptions, our observation-driven method consists of two steps. \vspace{-0.03in} \begin{enumerate} \item We determine the common emission profile from the four non-resonant emission lines, which we call the unified profile and present in Section~\ref{sec:emission}. \item With the unified emission profile, we determine the unified true absorption profile with an iterative approach. We describe this process in detail in Section~\ref{sec:absorption}. \end{enumerate} \vspace{-0.03in} \subsubsection{The Unified Emission Line Profile}\label{sec:emission} \begin{figure} \vspace{0.12in} \epsscale{1.25} \plotone{Unified_Emission_Profile.eps} \caption{The velocity profiles of the non-resonant emission lines, shown in the rest frame of the galaxies. \textit{Top panel}: The observed profiles. \textit{Middle panel}: The observed profiles normalized to the same amplitude. 
\textit{Bottom panel}: The mean unified emission profile. The shaded region indicates the $1\sigma$ uncertainties determined by bootstrapping. } \label{fig:emission} \vspace{0.05in} \end{figure} \begin{figure} \epsscale{1.25} \plotone{Unified_Emission_Profile_BlueExcess.eps} \caption{The asymmetry of the unified emission profile. \textit{Upper panel}: The blue line and the shaded region show the mean unified emission profile and uncertainties as in the bottom panel of Figure~\ref{fig:emission}. The red dashed line shows a symmetric profile assuming the blueshifted side mirrors the observed redshifted side. \textit{Lower panel}: The emission excess on the blueshifted side. The errorbar at the top right indicates the mean uncertainty at a given pixel (velocity). } \label{fig:emissionexcess} \vspace{0.05in} \end{figure} We combine the four non-resonant emission lines, \ion{Fe}{2}$^*\,\lambda2626$, $\lambda2613$, $\lambda2396$, and $\lambda2366$, to determine the unified profile, as shown in Figure~\ref{fig:emission}. The top panel presents the observed velocity profiles, shown in the rest frame of the galaxies. In the middle panel, we normalize all the lines to have the same amplitude, and in the bottom panel, we present the mean normalized profile as the estimate of the unified emission profile. We also plot the $1\sigma$ uncertainties determined from bootstrapping. We note that, because the far blue side of \ion{Fe}{2}$^*\,\lambda2613$ at $v\lesssim-600\,\ensuremath{{\rm km\,s}^{-1}}$ overlaps with the red side of \ion{Mn}{2}$\,\lambda2606$, to avoid the contamination, we do not include the blue side of \ion{Fe}{2}$^*\,\lambda2613$ at $v<0\,\ensuremath{{\rm km\,s}^{-1}}$ while calculating the mean profile. The unified emission profile appears to be asymmetric. We investigate its asymmetry further in Figure~\ref{fig:emissionexcess}. In the upper panel, on top of the unified profile, we overlay a symmetric profile assuming the blue side mirrors the red side. 
We subtract the symmetric profile from the original one and show the result in the lower panel. The unified non-resonant \ion{Fe}{2}$^*$ emission exhibits an excess on the blue side, with a confidence level higher than $2\sigma$ when integrated over $-500\,\ensuremath{{\rm km\,s}^{-1}}<v<0\,\ensuremath{{\rm km\,s}^{-1}}$, indicating that a larger fraction of emission is blueshifted than redshifted. \subsubsection{The Unified Absorption Line Profile}\label{sec:absorption} \begin{figure*} \epsscale{1.20} \plotone{Observed_Absorption_Profile_Normalized.eps} \caption{The observed velocity profiles of the absorption lines, presented in the rest frame of the galaxies and normalized to the same amplitude. The color scales indicate the order given by Eqs.~\ref{eq:ordernonresonant} and \ref{eq:orderresonant}. } \label{fig:observedabsorption} \end{figure*} \begin{figure*} \epsscale{0.565} \plotone{Emission_Correction_Example.eps} \epsscale{0.565} \plotone{Absorption_Profile_Emission_Corrected.eps} \caption{\textit{Left panels}: Examples of emission-infill correction. The blue lines show the observed absorption velocity profiles, red the emission-corrected, and green the subtracted emission. The vertical dotted lines mark the rest-frame positions of the lines and the zero velocity corresponds to the wavelength of the one with lower energy in each panel. \textit{Right panels:} \textit{Top} -- The emission-corrected absorption velocity profiles. \textit{Middle} -- The emission-corrected profiles normalized to the same amplitude. \textit{Bottom} -- The unified emission-corrected absorption profile. The shaded areas indicate the $1\sigma$ bootstrapping uncertainties. The color scales in the top two panels are the same as in Figure~\ref{fig:observedabsorption}. } \label{fig:trueabsorption} \vspace{0.05in} \end{figure*} With the unified emission line profile, we determine the emission infill and the unified true absorption line profile simultaneously with an iterative approach. 
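The construction of the unified emission profile described above (amplitude normalization, averaging, and bootstrap uncertainties) can be sketched numerically. This is a minimal illustration only, assuming the line profiles have been resampled onto a common velocity grid; the function and array names are hypothetical, and the paper's exact resampling scheme may differ:

```python
import numpy as np

def unified_profile(profiles, n_boot=1000, seed=0):
    """Normalize emission-line profiles to the same amplitude, average
    them into a unified profile, and bootstrap over the lines for a
    rough 1-sigma uncertainty per velocity pixel."""
    rng = np.random.default_rng(seed)
    norm = np.array([p / p.max() for p in profiles])   # unit amplitude
    unified = norm.mean(axis=0)                        # mean profile
    boots = np.array([norm[rng.integers(0, len(norm), len(norm))].mean(axis=0)
                      for _ in range(n_boot)])
    return unified, boots.std(axis=0)
```

In the paper, masked regions (e.g., the blue side of \ion{Fe}{2}$^*\,\lambda2613$) are excluded from the mean; a per-line mask array could be added to this sketch for that purpose.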
To proceed, we first investigate the observed velocity profiles in more detail. In Figure~\ref{fig:observedabsorption}, we present the observed profiles of the eight absorption lines in the rest frame of the galaxies, normalized to have the same integrated amplitude over $-700\,\ensuremath{{\rm km\,s}^{-1}}<v<300\,\ensuremath{{\rm km\,s}^{-1}}$. We have ordered the lines according to Eqs.~\ref{eq:ordernonresonant} and \ref{eq:orderresonant}, with bluer color indicating a larger predicted effect from emission infill. Figure~\ref{fig:observedabsorption} shows that: (1) the observed absorption profiles differ from one another; (2) the degree of blueshift follows the order predicted by the model; and (3) the most blueshifted line, \ion{Mg}{2}$\,\lambda2796$, is about $200\,\ensuremath{{\rm km\,s}^{-1}}$ more blueshifted than the least blueshifted \ion{Fe}{2}$\,\lambda2374$. \begin{figure*} \epsscale{0.57} \plotone{Line_Ratio_QSO_Comparison.eps} \epsscale{0.57} \plotone{Line_Ratio_LSF_Comparison.eps} \caption{The comparison of line ratios of ELGs with strong quasar absorbers (QSOABSs, \textit{left}) and local SF regions (\textit{right}). The blue circles show the observed line ratios, and the green squares the emission-corrected ones. We use \ion{Fe}{2}$\,\lambda2374$ as the anchor and order the lines according to Eqs.~\ref{eq:ordernonresonant} and \ref{eq:orderresonant}. The errorbars indicate uncertainties in ELG line ratio measurements determined by bootstrapping, not including the uncertainties in the measurements of QSOABSs or local SF regions. } \vspace{0.1in} \label{fig:absorptionlineratios} \end{figure*} Our iterative approach to determining the true absorption profile consists of the following steps. 
\vspace{-0.03in} \begin{enumerate} \item We first use the normalized profile of \ion{Fe}{2}$\,\lambda2374$ as the initial guess of the unified true absorption profile, because the fluorescent emission after the \ion{Fe}{2}$\,\lambda2374$ absorption is dominated by the non-resonant channel \ion{Fe}{2}$^*\lambda 2396$ (Eq.~\ref{eq:ordernonresonant}). \item With the unified absorption profile estimated from the previous step, we fit for the amount of emission that needs to be subtracted from the observed profile. More specifically, we express the observed profile $R^{\rm obs}_{\rm abs}(\lambda)$ by \begin{eqnarray} R^{\rm obs}_{\rm abs}(\lambda) & = & R^{\rm true}_{\rm abs}(\lambda)+\left[R_{\rm emi}(\lambda)-1\right] \, \nonumber \\ & = & \left\{1-a\left[1-R^{\rm uni}_{\rm abs}(\lambda)\right]\right\}+b\left[R^{\rm uni}_{\rm emi}(\lambda)-1\right] \, \mathrm{,} \, \nonumber \\ & & \label{eq:decomp} \end{eqnarray} where $R^{\rm true}_{\rm abs}(\lambda)$ and $R_{\rm emi}(\lambda)$ are the unnormalized true absorption and emission profiles, respectively, and $R^{\rm uni}_{\rm abs}(\lambda)$ and $R^{\rm uni}_{\rm emi}(\lambda)$ are the unified normalized absorption profile from the previous step and the unified emission profile from Section~\ref{sec:emission}, respectively. We perform a least-squares fit for the coefficients $a$ and $b$. \item We normalize the new absorption profiles $R^{\rm true}_{\rm abs}(\lambda)$ from the fitting in Step $2$, calculate the mean as the new estimate of the unified absorption profile, and then repeat Step $2$. \end{enumerate} \vspace{-0.03in} As discussed above, saturation requires special attention when it is severe. We set $R^{\rm true}_{\rm abs}$ at saturated pixels to be zero and do not include \ion{Mg}{2}\ while estimating the unified profile in Step $3$. We iterate the steps until the unified absorption profile and the coefficients $a$ and $b$ converge. 
In practice, we find that three iterations are sufficient to reach convergence. We show the results in Figure~\ref{fig:trueabsorption}. On the left, we show examples of the decomposition (Eq.~\ref{eq:decomp}), with the emission infill indicated by the green dashed lines. The emission-corrected absorption profiles, shown with the red lines, are deeper but less blueshifted than the observed ones (blue). On the right, in the top panel, we show the emission-corrected profiles of all eight absorption lines, with color scales the same as in Figure~\ref{fig:observedabsorption}. The middle panel shows the emission-corrected profiles normalized to the same amplitude, which we use to estimate the unified true absorption profile. We present the unified profile in the bottom panel, together with the uncertainties estimated via bootstrapping. Note that after emission-infill correction, the \ion{Mg}{2}\ lines are heavily saturated at the line centers and are not used in the calculation of the mean profile. The unified absorption profile is clearly asymmetric and preferentially blueshifted. With the emission-infill correction and the true absorption profiles estimated, we now investigate the details of the observed and emission-corrected profiles. \subsubsection{Non-parametric characterization of the true absorption profiles}\label{sec:nonpar} To characterize the absorption profiles, we choose to use non-parametric variables: line ratios based on rest equivalent width (Eq.~\ref{eq:rewabs}) and velocity offsets. \vspace{0.05in} \noindent $\bullet$ \textbf{Line ratios} \vspace{0.05in} Because of the different degrees of emission infill, the changes in the absorption strength vary from line to line. We compare line ratios before and after the emission-infill correction to study whether the effect of emission infill follows the order of Eqs.~\ref{eq:ordernonresonant} and \ref{eq:orderresonant}. 
We measure the rest equivalent width of the emission-corrected profiles by integrating over $-700\,\ensuremath{{\rm km\,s}^{-1}}<v<300\,\ensuremath{{\rm km\,s}^{-1}}$. For the observed profiles of lines other than the \ion{Mg}{2}\ doublet, we integrate over the same velocity range. We select the integration velocity range $-700\,\ensuremath{{\rm km\,s}^{-1}}<v<200\,\ensuremath{{\rm km\,s}^{-1}}$ for the observed \ion{Mg}{2}$\,\lambda2796$ and $-600\,\ensuremath{{\rm km\,s}^{-1}}<v<300\,\ensuremath{{\rm km\,s}^{-1}}$ for \ion{Mg}{2}$\,\lambda2804$ to avoid mutual contamination (see Figure~\ref{fig:observedabsorption}). To calculate line ratios, we select \ion{Fe}{2}$\,\lambda2374$, which has negligible emission infill, as the anchor, i.e., the common denominator. In Figure~\ref{fig:absorptionlineratios}, we compare the line ratios before and after the emission-infill correction with the line ratios in the composite spectrum of strong quasar absorbers (QSOABSs, in the same redshift range) on the left and with those of local SF regions on the right. The variables plotted are the ratios of line ratios: \begin{eqnarray} \left(\frac{W^{\lambda_i}}{W^{\lambda2374}}\right)^{\rm ELG}{\bigg/}\left(\frac{W^{\lambda_i}}{W^{\lambda2374}}\right)^{\rm QSOABS} & \,\mathrm{and} \nonumber \\ \left(\frac{W^{\lambda_i}}{W^{\lambda2374}}\right)^{\rm ELG}{\bigg/}\left(\frac{W^{\lambda_i}}{W^{\lambda2374}}\right)^{\rm Local\,SF} \,\mathrm{,} & \end{eqnarray} where $\lambda_i$ denotes the respective line. In the figure, we have ordered the lines according to the predictions of Eqs.~\ref{eq:ordernonresonant} and \ref{eq:orderresonant}. Figure~\ref{fig:absorptionlineratios} shows that the observed line ratios in the composite spectrum of ELGs differ from those of strong quasar absorbers and also local SF regions, and the degree of the difference largely follows the predicted order. 
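The rest-equivalent-width integration over a fixed velocity window can be sketched as follows; this is a simplified numerical version with illustrative names and defaults, converting the velocity integral of $1-R$ to wavelength units via $\mathrm{d}\lambda = \lambda_0\,\mathrm{d}v/c$:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def rest_ew(v, R, lam0, vmin=-700.0, vmax=300.0):
    """Rest equivalent width (same units as lam0) of a continuum-
    normalized absorption profile R(v), integrating 1 - R over
    [vmin, vmax] with the trapezoidal rule."""
    m = (v >= vmin) & (v <= vmax)
    d = 1.0 - R[m]
    integral = np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(v[m]))  # km/s
    return integral * lam0 / C_KMS
```

Line ratios then follow by dividing each $W$ by that of the anchor line, \ion{Fe}{2}$\,\lambda2374$.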
We find that the effect of emission infill (on line strength and ratio) can be larger than a factor of two, e.g., for \ion{Fe}{2}$\,\lambda2383$ and \ion{Mg}{2}\,$\lambda\lambda2796,2804$. After the emission-infill correction, we find that the line ratios are consistent in the spectra of all the sources. We stress that our method of estimating the emission infill is observation-driven, completely independent of any model. We made only the two assumptions about the unified emission and absorption profiles elaborated at the beginning of this section, and we did not use any information about the strong quasar absorbers or the local SF regions. The agreement of the final line ratios is therefore not by construction, but rather the result of the method. Since we expect the gas producing the absorption lines in different sources to have similar origins (supernova yields, stellar mass loss, or other channels), and hence the line ratios in different source spectra to agree, we consider our iterative fitting method successful in estimating the emission infill and determining the true absorption profiles. \begin{figure} \vspace{0.05in} \epsscale{1.2} \plotone{Velocity_Absorption_Emission.eps} \caption{The non-parametric characterization of the absorption velocity profiles. We show $v_{80}$ (\textit{blue}), $v_{50}$ (\textit{green}) and $v_{20}$ (\textit{red}), before (\textit{circles}) and after (\textit{horizontal bands}) the emission-infill correction. The order of the lines is the same as in Figure~\ref{fig:absorptionlineratios}. The widths of the horizontal bands indicate the uncertainties. All the uncertainties are determined by bootstrapping. 
} \label{fig:absorptionvelocity} \vspace{0.05in} \end{figure} \vspace{0.05in} \noindent $\bullet$ \textbf{Velocity offsets} \vspace{0.05in} To characterize the velocity profile, we define a velocity offset variable $v_{\rm xx}$ to be the velocity where a fraction of ${\rm xx}\,$per cent of the absorption is at velocity $v>v_{\rm xx}$\footnote{We note that this is a characterization of the \textit{total} absorption profile, including contributions from both outflows and ISM.}. In Figure~\ref{fig:absorptionvelocity}, we show $v_{80}$, $v_{50}$ and $v_{20}$ of the observed absorption profiles and of the unified profile. The order of lines is the same as in Figure~\ref{fig:absorptionlineratios}, given by Eqs.~\ref{eq:ordernonresonant} and \ref{eq:orderresonant}. Figure~\ref{fig:absorptionvelocity} (see also Figure~\ref{fig:observedabsorption}) shows that, for the observed profiles, the velocity offsets of different lines are different, and the order follows the one predicted by the radiative transfer considerations in Eqs.~\ref{eq:ordernonresonant} and \ref{eq:orderresonant} and the difference in the emission and absorption profiles (point iii in Section~\ref{sec:model}). The unified emission-corrected profile is still asymmetric, but with a smaller degree of blueshift: the $50\%$ velocity offset $v_{50}$ is about $-10\,\ensuremath{{\rm km\,s}^{-1}}$, and $|v_{80}|$ is about $160\,\ensuremath{{\rm km\,s}^{-1}}$, larger than $|v_{20}|\sim110\,\ensuremath{{\rm km\,s}^{-1}}$. Our fitting result shows that without correcting for the emission infill, $v_{80}$, the $80\%$ velocity offset, can be overestimated by over a factor of two for the lines most severely affected, e.g., \ion{Fe}{2}$\,\lambda2383$ and \ion{Mg}{2}\,$\lambda\lambda2796,2804$. 
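The velocity offsets $v_{\rm xx}$ defined above can be computed directly from a profile. A minimal sketch, assuming an ascending velocity grid and a continuum-normalized profile (names are illustrative):

```python
import numpy as np

def v_offset(v, R, frac):
    """Velocity v_xx such that a fraction `frac` of the total absorption
    lies at v > v_xx (frac=0.8 gives v80, 0.5 gives v50, 0.2 gives v20)."""
    depth = np.clip(1.0 - R, 0.0, None)
    # fraction of the total absorption at velocities >= each grid point
    cum = depth[::-1].cumsum()[::-1] / depth.sum()
    # cum decreases with v, so interpolate on the reversed arrays,
    # where cum[::-1] is ascending as np.interp requires
    return float(np.interp(frac, cum[::-1], v[::-1]))
```

For a blueshifted profile this yields $v_{80} < v_{50} < v_{20}$, consistent with the ordering $|v_{80}| > |v_{20}|$ discussed above.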
\subsection{Discussion}\label{sec:inflow} \begin{figure} \vspace{0.02in} \epsscale{1.20} \plotone{OIII_Emission_Gaussian.eps} \caption{\textit{Upper panel}: The comparison of the unified, emission-corrected absorption velocity profile (\textit{red}), the unified emission profile (\textit{blue}), and a Gaussian profile with a width of $108\,\ensuremath{{\rm km\,s}^{-1}}$ (\textit{green}). For display purposes, we have flipped the absorption profile and also adjusted the profiles so that they have roughly the same peak value. The shaded regions show the $1\sigma$ bootstrapping uncertainties. \textit{Lower panel}: The comparison of the normalized [\ion{O}{3}]$\,\lambda5008$ emission profile for ELGs (at $0.6<z\lesssim1.2$, \textit{magenta}) with the same Gaussian profile (\textit{green}) as in the upper panel. The uncertainties of the [\ion{O}{3}]\ profile are about the same size as the line width. } \label{fig:oiiiprofile} \vspace{0.05in} \end{figure} We now compare the observations, including the results of our emission-infill correction method, with the predictions of the spherical outflow model presented in Section~\ref{sec:model}. We go over the predictions point by point. \vspace{0.05in} \noindent [i.] Aperture dependence -- The physical aperture size of the eBOSS ELG spectra is about $15\,{\rm\,kpc}$, while that of the \textit{HST} FOS/GHRS spectra of the local SF regions is smaller than $40\,{\rm\,pc}$. In the spectra of the local SF regions, we do not detect the non-resonant emission that otherwise persists in the ELG ones. This agrees with the model: the \textit{HST} FOS/GHRS aperture is simply too small to capture the extended fluorescent emission. \vspace{0.05in} \noindent [ii.] Net effect -- The net effect is always absorption due to occultation in the model, even if the aperture encloses all the emission scattered into the line of sight. 
In the data, when we sum up all the absorption and emission in a given set of channels, the net result is absorption. \vspace{0.05in} \noindent [iii.] Velocity profiles -- The outflow model predicts that both the emission and (emission-infill corrected) absorption profiles are blueshifted, with the absorption more so than the emission. Figures~\ref{fig:emissionexcess} and \ref{fig:absorptionvelocity} quantify the asymmetry and the blueshift of the profiles individually, though in different ways. To compare the two profiles directly, we show them together in Figure~\ref{fig:oiiiprofile}. For display purposes, we have flipped the absorption profile and also normalized them so that they roughly have the same peak value. The absorption profile is more blueshifted than the emission, as in the model. As discussed in the basics of the model, the composite spectra include effects from not only the outflows, but also the inflows, the motions of the ISM in the galaxy, the instrumental resolution, and the redshift precision. If the aperture is large and collects all the re-emitted photons along the line of sight, and if the outflowing gas extends to much larger scales than the galaxy, we expect the red side of the emission profiles to be broader than nebular lines, extending further to the red. On the other hand, if a substantial fraction of the gas is falling onto the galaxy at high velocities, we also expect the red side of both the emission and absorption profiles to be broader. In the lower panel of Figure~\ref{fig:oiiiprofile}, we show the [\ion{O}{3}]$\,\lambda5008$ profile of the NUV sample. We note that ELGs at $z\gtrsim1$ do not have [\ion{O}{3}]\ coverage, but they account for a small fraction ($10\%$) of the sample. We find the [\ion{O}{3}]$\,\lambda5008$ profile is well represented by a Gaussian profile and the best-fit width ($\sigma$) is about $108\,\ensuremath{{\rm km\,s}^{-1}}$. We overplot this Gaussian profile in both panels for comparison. 
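A Gaussian width like the one quoted for [\ion{O}{3}]$\,\lambda5008$ can be estimated from the profile's second moment, which coincides with the best-fit $\sigma$ for a well-sampled Gaussian line. This is a minimal sketch with illustrative names, not the fitting code used in the paper:

```python
import numpy as np

def moment_width(v, profile):
    """Intensity-weighted width (sigma) of an emission profile;
    equal to the Gaussian sigma when the profile is Gaussian."""
    w = np.clip(profile, 0.0, None)        # guard against noise dips
    v0 = np.sum(w * v) / np.sum(w)         # flux-weighted centroid
    return float(np.sqrt(np.sum(w * (v - v0) ** 2) / np.sum(w)))
```

For noisy data a direct least-squares Gaussian fit is more robust than raw moments, since the second moment is sensitive to the wings.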
The mean spectral resolution of the BOSS spectrographs is about $60$--$70\,\ensuremath{{\rm km\,s}^{-1}}$ and the average redshift precision of the eBOSS ELGs at redshift $0.6<z<1.2$ is about $20\,\ensuremath{{\rm km\,s}^{-1}}$. The width of the nebular emission line profile ($108\,\ensuremath{{\rm km\,s}^{-1}}$) must therefore be dominated by the intrinsic rotation and disordered motion of the ISM along the line of sight, which account for about $85\,\ensuremath{{\rm km\,s}^{-1}}$ when we subtract the spectral resolution and redshift precision in quadrature. This is consistent with measurements of kinematic properties and the Tully-Fisher relation for bright galaxies at these redshifts \citep[e.g., ][]{vogt96a, weiner06a, miller11a}. We find that, on the red side, both the emission and absorption profiles are consistent with [\ion{O}{3}]$\,\lambda5008$ within the uncertainties. This suggests that, within the uncertainties of our data, we have not observed evidence for outflows on scales larger than the galaxy, nor evidence for inflows. Larger-scale outflows would extend the emission profile to higher velocities (farther on the red side), and inflows would extend both the emission and absorption profiles. However, the former (that we did not see evidence for larger-scale outflows) could be because the aperture size ($15\,{\rm\,kpc}$) is not sufficiently large, while the latter could be because the inflow velocities (e.g., $<200\,\ensuremath{{\rm km\,s}^{-1}}$, \citealt{rubin12a}) are comparable to the ISM motions and the emission/absorption effects of the inflows and the ISM are blended together. \begin{figure*}[t] \vspace{0.1in} \epsscale{0.55} \plotone{Emission_Strength_EW.eps} \epsscale{0.55} \plotone{Emission_Strength_Lum.eps} \caption{The dependences of the rest equivalent width of the non-resonant emission lines on the [\ion{O}{2}]\,$\lambda\lambda3727,3730$\ rest equivalent width (\textit{left}) and luminosity (\textit{right}). Note that both x axes are in logarithmic scale. } \label{fig:emissionrewoii} \vspace{0.05in} \end{figure*} If we assume the emission-corrected absorption on the red side solely originates from the ISM, we can decompose the total absorption into a part due to the ISM with a symmetric profile and the other due to the outflows \citep[e.g., ][]{weiner09a}\footnote{We note \citet{weiner09a} applied this method to the \textit{observed} \ion{Mg}{2}\ absorption as they did not consider the emission infill.}. Applying this decomposition method to the unified emission-corrected absorption profile, we obtain a blueshifted excess profile, more extended than the emission excess at $v<0\,\ensuremath{{\rm km\,s}^{-1}}$ as shown in Figure~\ref{fig:emissionexcess}, with a maximum velocity of about $-600\,\ensuremath{{\rm km\,s}^{-1}}$. The ratio of the excess to the subtracted symmetric profile, which represents the ratio of the amount of outflowing gas to that of the ISM assuming the decomposition is ideal, is about 1:3. However, because there could be a broad distribution of outflow velocities, i.e., a large $\sigma(r)$ in the outflow model even without the ISM contribution, it is likely that the symmetric component also includes a large contribution from the outflowing and/or inflowing gas, in which case the contribution from the ISM is much smaller than 3/4. \vspace{0.05in} \noindent [iv.] (Degree of) Emission infill -- The model predicts that, in large-aperture spectra, there is emission filling in on top of the resonant absorption, and because the emission profile is less blueshifted, the infill results in an observed absorption profile that is more blueshifted. The degree of the emission infill depends on the transition probabilities of the permitted channels and the degree of saturation for those without non-resonant channels. We have demonstrated that the data agree with these predictions in Figures~\ref{fig:observedabsorption}--\ref{fig:absorptionvelocity}. 
In particular, we show that the blueshifts, line ratios and velocity offsets of the observed profiles follow the same order as predicted (Eqs.~\ref{eq:ordernonresonant} and \ref{eq:orderresonant}). The observed $v_{80}$, the $80\%$ velocity offset, can be overestimated by a factor of two compared to that in the emission-corrected profile. After the emission correction, the line ratios in the spectra of ELGs are consistent with those of strong quasar absorbers and local SF regions. \begin{figure} \epsscale{1.20} \plotone{Observed_Absorption_MgIIEWLum.eps} \caption{The dependences of the observed absorption velocity profiles of the \ion{Mg}{2}\ lines on the [\ion{O}{2}]\,$\lambda\lambda3727,3730$\ rest equivalent width (\textit{upper panel}) and luminosity (\textit{lower panel}). } \label{fig:absorptionoiimgii} \vspace{0.05in} \end{figure} The different degrees of blueshift for different lines were also suggested by \citet{prochaska11a} and were observed in the Keck spectra of SFGs by \citet[][see their Figure $7$]{erb12a}. \citet[][see also \citealt{kornei13a}]{erb12a} also found a variety of \ion{Mg}{2}\ profiles in their individual spectra, with some showing emission that might originate from \ion{H}{2}\ regions. In our model, we have ignored such a contribution, and with our data, we cannot yet quantify the effect of the emission from \ion{H}{2}\ regions on the line profiles in the composite analysis. We do not observe P-cygni-like profiles in the composite spectrum of the full sample. This is likely because the difference between the emission and absorption profiles on the red side is not large (Figure~\ref{fig:oiiiprofile}) and the amount of emission infill is not sufficient, since P-cygni-like profiles require a large amount of emission infill that is more extended on the red side. 
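The ISM/outflow decomposition used under point [iii] (a symmetric component mirroring the red side of the profile, attributed to the ISM, plus a blueshifted excess attributed to outflows) can be sketched as follows. A minimal version with illustrative names, assuming an ascending velocity grid centered on $v=0$:

```python
import numpy as np

def decompose_ism_outflow(v, depth):
    """Split an absorption-depth profile (1 - R) into a symmetric
    component mirroring the red (v > 0) side plus a blueshifted
    excess, following the style of Weiner et al. (2009)."""
    # symmetric component: at each -|v| take the depth measured at +|v|
    sym = np.interp(np.abs(v), v, depth, left=0.0, right=0.0)
    sym = np.minimum(sym, depth)   # the excess cannot be negative
    excess = depth - sym           # blueshifted (outflow) excess
    return sym, excess
```

Under the assumption that the symmetric part is entirely ISM, the ratio `excess.sum() / sym.sum()` corresponds to the outflow-to-ISM ratio quoted above (about 1:3).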
In the next section, when studying the [\ion{O}{2}]\,$\lambda\lambda3727,3730$\ dependence, we show that the \ion{Mg}{2}\ absorption features P-cygni-like profiles for the subsample with the higher [\ion{O}{2}]\,$\lambda\lambda3727,3730$\ rest equivalent width. \vspace{0.05in} In summary, we conclude that our statistical, spherical outflow model can simultaneously explain the multiple observed properties of emission and absorption features in the NUV. \section{Correlations with [\ion{O}{2}]\,$\lambda\lambda3727,3730$}\label{sec:oii} Observations have shown that outflow properties, such as the velocity, depend on galaxy properties \citep[e.g., ][]{rupke05a, tremonti07a}. From the eBOSS pilot observations, we can measure the [\ion{O}{2}]\,$\lambda\lambda3727,3730$\ properties of the ELGs. We here study the dependences of the emission and absorption lines in the NUV on the total rest equivalent width ($W^{\lambda3728}_{\rm [O\,II]}$) and luminosity ($L^{\lambda3728}_{\rm [O\,II]}$) of the [\ion{O}{2}]\ doublet. For each variable, we divide the NUV sample into two subsamples, split at the median values ($\left<W^{\lambda3728}_{\rm [O\,II]}\right>=51.4\,{\rm \AA}$ and $\left<\log_{10} L^{\lambda3728}_{\rm [O\,II]}\right>=41.6\,{\rm dex}$). We then perform the same analysis as for the full sample, including constructing the composite spectra, calculating the unified emission profiles, estimating the emission-infill and determining the true absorption profiles. We present some of the details in Appendix~\ref{app:oii}, including the distributions of $W^{\lambda3728}_{\rm [O\,II]}$ and $\log_{10} L^{\lambda3728}_{\rm [O\,II]}$ (Figure~\ref{fig:oiiewlumdist}), the observed emission/absorption profiles (Figures~\ref{fig:emissionprooii} and \ref{fig:absorptionprooii}), and the (unnormalized) emission-corrected absorption profiles (Figure~\ref{fig:unifiedabsorptionprooii}). 
We here discuss in detail the emission strength, the observed absorption profiles of \ion{Mg}{2}, the emission-corrected absorption strength, and the unified velocity profiles. \begin{figure*}[b] \epsscale{0.53} \plotone{Absorption_Strength_EW.eps} \epsscale{0.53} \plotone{Absorption_Strength_Lum.eps} \caption{The dependences of the rest equivalent width of the emission-corrected absorption lines on the [\ion{O}{2}]\,$\lambda\lambda3727,3730$\ rest equivalent width (\textit{left}) and luminosity (\textit{right}). The color scales are the same as in Figure~\ref{fig:observedabsorption}, based on the orders given by Eqs.~\ref{eq:ordernonresonant} and \ref{eq:orderresonant}. } \label{fig:absorptionoii} \vspace{0.05in} \end{figure*} \begin{figure*}[t] \epsscale{0.53} \plotone{Unified_Emission_Profile_EW.eps} \epsscale{0.53} \plotone{Unified_Absorption_Profile_EW.eps} \caption{The dependences of the unified emission profile (\textit{left}) and the unified emission-corrected absorption profile (\textit{right}) on the [\ion{O}{2}]\,$\lambda\lambda3727,3730$\ rest equivalent width (\textit{upper panels}) and luminosity (\textit{lower panels}). } \label{fig:profileoii} \vspace{0.05in} \end{figure*} \vspace{0.05in} \noindent $\bullet$ Emission strength -- Figure~\ref{fig:emissionrewoii} shows the dependences of the non-resonant emission strength on the [\ion{O}{2}]\,$\lambda\lambda3727,3730$\ rest equivalent width and luminosity. Note for comparison, we have added the data points measured from the full sample in solid symbols, which are correlated with the measurements based on the two subsamples. Within the range probed, we find that the emission strength (in rest equivalent width) scales almost linearly with $W^{\lambda3728}_{\rm [O\,II]}$. In the right panel, we show that the emission equivalent width also positively depends on the [\ion{O}{2}]\ luminosity, but to a lesser degree. 
\vspace{0.05in} \noindent $\bullet$ Observed absorption profiles -- In Figure~\ref{fig:absorptionoiimgii}, we present the observed \ion{Mg}{2}\ profiles, which show the strongest dependence on [\ion{O}{2}]\ among the absorption lines (Figure~\ref{fig:absorptionprooii}). The correlation appears to be stronger with the rest equivalent width than with the luminosity. This is due to the stronger dependence of the emission infill on the rest equivalent width, as suggested by Figure~\ref{fig:emissionrewoii}. The observed \ion{Mg}{2}\ profile for the subsample with higher $W^{\lambda3728}_{\rm [O\,II]}$ has a P-cygni-like shape, indicating a large amount of emission infill. \vspace{0.05in} \noindent $\bullet$ Emission-corrected absorption strength -- Figure~\ref{fig:absorptionoii} shows the dependences of the emission-corrected absorption rest equivalent width on the [\ion{O}{2}]\ properties. Except for the saturated \ion{Mg}{2}\ lines, other lines are positively correlated with both $W^{\lambda3728}_{\rm [O\,II]}$ and $L^{\lambda3728}_{\rm [O\,II]}$, with the dependence tentatively stronger for the former. \vspace{0.05in} \noindent $\bullet$ Unified velocity profiles -- Figure~\ref{fig:profileoii} presents the unified emission and absorption profiles as a function of [\ion{O}{2}]\ rest equivalent width and luminosity. Within the uncertainties, for both profiles, we do not find a dependence on either $W^{\lambda3728}_{\rm [O\,II]}$ or $L^{\lambda3728}_{\rm [O\,II]}$, although the P-cygni-like shape of the observed \ion{Mg}{2}\ absorption for the subsample with higher $W^{\lambda3728}_{\rm [O\,II]}$ requires the emission profile to be more extended on the red side than the absorption, unlike for the main sample (Figure~\ref{fig:oiiiprofile}). Larger samples in the future will help pin down these dependences with high S/N. 
\vspace{0.05in} Among all the correlations, we find that the strongest one is between the rest equivalent widths of non-resonant emission and [\ion{O}{2}], which also results in the strong dependence of the observed absorption profiles, especially of the \ion{Mg}{2}\ lines, on the [\ion{O}{2}]\ rest equivalent width. To first order, the [\ion{O}{2}]\ luminosity is an indicator of SFR, while the [\ion{O}{2}]\ rest equivalent width is an indicator of specific SFR. Our results suggest that the properties of the emission and, to a lesser degree, the absorption are stronger functions of specific SFR than of SFR. However, considering the uncertainties due to our sample size, the exact correlations between the properties of the spectral features in the NUV and those of galaxies, and their implications for galaxy evolution, remain to be determined. \section{Summary}\label{sec:summary} The pilot observations of the emission-line galaxy (ELG) program in the Extended Baryon Oscillation Spectroscopic Survey (eBOSS) in SDSS-IV have obtained a sample of $8620$ ELGs at $0.6<z<1.2$, providing a good opportunity for investigations of the near-ultraviolet (NUV) part of the spectral energy distributions (SEDs) of star-forming galaxies (SFGs). We constructed median composite continuum-normalized spectra to study the emission and absorption features in the NUV. Our main results are: \begin{itemize} \item The median composite spectra of the ELGs feature non-resonant \ion{Fe}{2}$^*$ emission and resonant absorption due to \ion{Mg}{1}, \ion{Mg}{2}\ and \ion{Fe}{2}. Both the emission and absorption profiles are asymmetric and preferentially blueshifted, indicating ubiquitous outflows driven by star formation at $0.6<z<1.2$. \item We found a variety of velocity profiles for the observed absorption lines with different degrees of blueshift. 
\item Comparing the ELG spectra with those of intervening quasar absorption-line systems in the same redshift range, we found that they feature the same absorption lines but with different line ratios. \item We compared the eBOSS ELG spectra with the NUV spectra of the local star-forming regions taken with the Faint Object Spectrograph (FOS) and Goddard High-resolution Spectrograph (GHRS) on \textit{HST}. The physical aperture size of the eBOSS ELG spectra at $0.6<z<1.2$ is about $15\,{\rm\,kpc}$, while the aperture size of the FOS/GHRS spectra of the local SF regions is less than $40\,{\rm\,pc}$. We found that the FOS/GHRS spectra also display the same (though weaker) absorption lines, but do not exhibit the non-resonant \ion{Fe}{2}$^*$ emission. We also found different ratios for the resonant absorption lines. \item We introduced a statistical, spherical outflow model, in which the observed non-resonant emission consists of the fluorescent (re-emitted) photons, produced following absorption, that are scattered into the line of sight. The model predicts that there is scattered resonant emission filling in on top of absorption, and that the amount of emission infill depends on the transition probabilities of the allowed channels, resulting in the variety of the observed absorption profiles. \item We developed an observation-driven, model-independent method to estimate the emission infill and reveal the true absorption profile. We showed that after the emission correction, the absorption line ratios in the ELG spectra are consistent with those in the spectra of strong quasar absorbers and local star-forming regions. 
\item We demonstrated that the outflow model can explain simultaneously the multiple observed properties of the emission and absorption features in the NUV, including i) the aperture dependence, ii) the net effect, iii) the emission velocity profiles and the emission-infill corrected absorption profiles, and iv) the variety of the observed absorption profiles and the degree of emission infill. \item Finally, we investigated the dependence of NUV features on the [\ion{O}{2}]\,$\lambda\lambda3727,3730$\ rest equivalent width and luminosity and found that the strongest correlation is between the non-resonant emission strength (in rest equivalent width) and the [\ion{O}{2}]\ rest equivalent width. \end{itemize} Our observations provided strong evidence for ubiquitous galactic-scale outflows driven by star formation. Our analysis also demonstrated that the NUV window is an informative region in the spectrum. The series of emission and absorption lines provides a new means to probe the gas physics. The model we introduced \citep[see also][]{rubin11a, prochaska11a, scarlata15a} has many important implications, such as the dependences of non-resonant emission on the aperture, outflow, and galaxy (occultation) sizes, and points future investigations of outflow physics in new directions. For instance, it is of great interest to explore the surface brightness profile of the non-resonant emission, e.g., through narrow-band imaging or spatially-resolved spectroscopy \citep[e.g., ][]{rubin11a, martin13a}, to further study the scale dependence of outflows. Besides using the ``down-the-barrel'' spectra to probe the gas physics directly associated with the galaxies, we can also employ the cross-correlation techniques developed recently \citep[e.g., ][]{steidel10a, zhu14a}, using absorption information induced in background source spectra, to probe the circumgalactic medium of foreground sources. 
Combining the two different types of observation will produce a more complete picture of the baryon processes in galaxy formation and evolution. The sample size of the NUV spectroscopic datasets will grow by orders of magnitude in the next decade. At the conclusion of the ELG program, eBOSS will obtain spectra for about $200,000$ ELGs at $z\gtrsim0.6$, a sample over $20$ times larger than the one used in this paper. DESI \citep[][]{schlegel11a, levi13a} and PFS \citep[][]{takada14a} will obtain higher-resolution spectra with larger telescopes for about $20$ million ELGs at higher redshift ($z>1$), where more lines are redshifted into the optical. Based upon the details revealed in the composite spectra of fewer than $10,000$ galaxies with the $2.5$-meter SDSS telescope, we expect that NUV spectroscopy will play an important role in future investigations of the properties and evolution of galaxies. \acknowledgments This work started when G.B.Z. was visiting Princeton University in December 2014 and he would like to thank Michael Strauss and Jim Gunn for their hospitality. He also thanks Bruce Draine, Tim Heckman, Claus Leitherer, and Jason X. Prochaska for useful discussions. G.B.Z. acknowledges support provided by NASA through Hubble Fellowship grant \#HST-HF2-51351 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract NAS 5-26555. J.C. acknowledges financial support from MINECO (Spain) under project number AYA2012-31101. J-P.K. and T.D. acknowledge support from the LIDA ERC advanced grant. A.R. acknowledges funding from the P2IO LabEx (ANR-10-LABX-0038) in the framework ``Investissements d'Avenir'' (ANR-11-IDEX-0003-01) managed by the French National Research Agency (ANR). This paper represents an effort by both the SDSS-III and SDSS-IV collaborations. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S.
Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU)/University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatory of China, New Mexico State University, New York University, University of Notre Dame, Observat\'ario Nacional/MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. Some of the data \citep{leitherer11a} are based on observations made with the NASA/ESA \textit{Hubble Space Telescope}, obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. \bibliographystyle{apj} \input{ms_NUV.bbl}
\section{Introduction}\label{sec:intro} We study the problem of estimating the edit distance between two $n$-character strings. There is a classic $O(n^2)$ dynamic programming algorithm, and fine-grained complexity results from recent years suggest that it is nearly optimal~\cite{BK15-LCS,BI18,AHWW16-polylog_shaved,AB18}. There have been long lines of work on beating the quadratic time barrier with approximations~\cite{BEKMRRS03-edit-testing,BJKK04-edit,BES06-edit,AO12-edit,AKO10-edit,BEGHS18-edit-quantum,CDGKS18,CGKK18-ED,HSSS19,RSSS19,BR20, KS20,GRS20,RS20,AN20}, or with beyond-worst-case analysis~\cite{Ukkonen85,Apostolico86,Meyers86,LMS98,AK12,ABBK17,BK18,Kuszmaul19,HRS19,BSS20}. Motivated by applications where the strings may be extremely long (e.g.~bioinformatics), we are interested in algorithms that run even faster, namely in sub-{\em linear} time. For exact computation in the worst case, this is unconditionally impossible --- even distinguishing between a pair of identical strings and a pair that differs in a single character requires reading the entire input. But in many regimes sublinear algorithms are still possible~\cite{BEKMRRS03-edit-testing,BJKK04-edit,CK06,AN10, AO12-edit,SS17, GKS19,NRRS19,BCLW19,RSSS19}. \subsubsection*{Gap Edit Distance: $k$ vs $k^2$} We give new approximation algorithms for edit distance that run in sublinear time when the input strings are close. To best understand our contribution and how it relates to previous work, we focus on the benchmark advocated by~\cite{GKS19} of distinguishing input strings whose edit distance is $\le k$ from $\gtrsim k^2$; we discuss more general parameters later in this section. Notice that we can assume w.l.o.g.\ that $k < \sqrt{n}$ (otherwise the algorithm can always accept). Furthermore, for tiny $k$ there is an easy unconditional lower bound of $\Omega(n/k^2)$ for distinguishing even identical strings from ones with $k^2$ substitutions.
So our goal is to design an algorithm that runs in truly sublinear time for $1 \ll k < \sqrt{n}$. The two most relevant algorithms in the literature for this setting are: \begin{itemize} \item \cite{AO12-edit} (building on~\cite{OR07-edit}) gave an algorithm that runs in time $n^{2+o(1)}/k^3$; in particular, it is sublinear for $k \gg n^{1/3}$. \item \cite{GKS19} gave an algorithm that runs in time ${\tilde{O}}(n/k + k^3)$; in particular, it is truly sublinear for $k \ll n^{1/3}$. \end{itemize} In particular, \cite{GKS19} left as an open problem obtaining a sublinear algorithm for $k\approx n^{1/3}$. Our main result is a very simple algorithm that runs in time ${\tilde{O}}(n/\sqrt{k})$ and hence is simultaneously sublinear for all relevant values of $k$. \begin{theorem*}[Main result (informal); see Theorem~\ref{thm:no-preprocessing}] We can distinguish between $\operatorname{ED}(A, B) \le k$ and $\operatorname{ED}(A, B) = \Omega(k^2)$ in $\tilde{O}(n/\sqrt{k})$ time with high probability. \end{theorem*} Our algorithm is better than~\cite{AO12-edit,GKS19} for $n^{2/7} \ll k \ll n^{2/5}$ (and is also arguably simpler than both). \paragraph{Independent work of Kociumaka and Saha} The open problem of Goldenberg, Krauthgamer, and Saha~\cite{GKS19} was also independently resolved by Kociumaka and Saha~\cite{KS20sublinear}. They use essentially the same main algorithm (Algorithm~\ref{alg:jaggedmatch} below), but use substantially different techniques to implement approximate queries to the subroutine we call $\operatorname{MaxAlign}_k$. Their running time (${\tilde{O}}(n/k+k^2)$) is faster than ours in the regime where our algorithm is faster than~\cite{AO12-edit}.
\subsubsection*{Edit distance with preprocessing: results and technical insights} Our starting point for this paper is the recent work of~\cite{GRS20} that designed algorithms for {\em edit distance with preprocessing}, namely the algorithm consists of two phases: \begin{description} \item[Preprocessing] where each string is preprocessed separately; and \item[Query] where the algorithm has access to both strings and the outputs of the preprocessing phase. \end{description} A simple and efficient preprocessing procedure proposed by~\cite{GRS20} is to compute a hash table for every contiguous substring. In the query phase, this enables an $O(\log(n))$-time implementation of a subroutine that, given indices $i_A,i_B$, returns the longest common (contiguous) substring of $A,B$ starting at indices $i_A,i_B$ (respectively). We use a simple modification of this subroutine, which we call $\operatorname{MaxAlign}_k$: given only an index $i_B$ for string $B$, it returns the longest common (contiguous) substring of $A,B$ starting at indices $i_A,i_B$ (respectively) for any $i_A \in [i_B-k,i_B+k]$. (It is not hard to see that for $k$-close strings, we never need to consider other choices of $i_A$~\cite{Ukkonen85}.) Given access to a $\operatorname{MaxAlign}_k$ oracle, we obtain the following simple greedy algorithm for $k$-vs-$k^2$ edit distance: Starting from pointer $i_B = 1$, at each iteration it advances $i_B$ to the end of the next longest common substring returned by $\operatorname{MaxAlign}_k$.
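As a concrete, hedged illustration of this greedy loop, the following Python sketch uses a brute-force stand-in for the $\operatorname{MaxAlign}_k$ oracle (the function names and 0-indexed conventions are ours; the efficient hash-based oracles are the subject of the later sections):

```python
def max_align(A, B, iB, k):
    # Brute-force stand-in for the MaxAlign_k oracle: the largest d such
    # that B[iB:iB+d] == A[iA:iA+d] for some shift iA in [iB-k, iB+k]
    # (0 if not even a single character aligns).  0-indexed throughout.
    best = 0
    for iA in range(max(0, iB - k), min(len(A) - 1, iB + k) + 1):
        d = 0
        while iB + d < len(B) and iA + d < len(A) and A[iA + d] == B[iB + d]:
            d += 1
        best = max(best, d)
    return best


def greedy_match(A, B, k):
    # Answer SMALL iff the pointer crosses the end of B within 2k+1 jumps;
    # each jump advances by the longest k-aligned match (or by 1).
    iB = 0
    for _ in range(2 * k + 1):
        iB += max(max_align(A, B, iB, k), 1)
        if iB >= len(B):
            return "SMALL"
    return "LARGE"
```

With this oracle each query costs $O(nk)$ time, so the sketch only illustrates the accept/reject logic, not the running time.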
\begin{algorithm*}[h] \caption*{{\bf Algorithm~\ref{alg:jaggedmatch}} $\operatorname{GreedyMatch}(A, B, k)$} \begin{algorithmic} \STATE $i_B \leftarrow 1$ \STATE \textbf{for} $e$ \textbf{from} $1$ \textbf{to} $2k+1$ \STATE \ \ \ $i_B \leftarrow i_B + \max(\method{MaxAlign}_k(A, B, i_B), 1)$ \STATE \ \ \ \textbf{if} $i_B > n$ \textbf{then} \textbf{return}\ \ SMALL \RETURN LARGE \end{algorithmic} \end{algorithm*} Each increase of the pointer $i_B$ costs at most $2k$ in edit distance (corresponding to the freedom to choose $i_A \in [i_B-k,i_B+k]$). Hence if $i_B$ reaches the end of $B$ in $O(k)$ steps, then $\operatorname{ED}(A,B) \le O(k^2)$ and we can accept; otherwise the edit distance is $>k$ and we can reject. The above ideas suffice to solve $k$-vs-$k^2$ gap edit-distance in ${\tilde{O}}(k)$ query time after polynomial preprocessing%
\footnote{The preprocessing can be made near-linear, but in this setting our algorithm is still dominated by that of~\cite{CGK16}.}. Without preprocessing, we can't afford to hash the entire input strings. Instead, we subsample a $\approx 1/k$-fraction of the indices from each string and compute hashes for the sampled subsequences. If the sampled indices perfectly align (with a suitable shift in $[\pm k]$), the hashes of identical contiguous substrings will be identical, whereas the hashes of substrings that are $>k$-far (even in Hamming distance) will be different (w.h.p.). This error is acceptable since we already incur a $\Theta(k)$-error for each call of $\operatorname{MaxAlign}_k$. This algorithm would run in ${\tilde{O}}(n/k)$ time%
\footnote{There is also an additive ${\tilde{O}}(k)$ term like in the preprocessing case, but it is dominated by ${\tilde{O}}(n/k)$ for $k < \sqrt{n}$.}, but there is a caveat: when we construct the hash table, it is not yet possible to pick the indices so that they perfectly align (we don't know the suitable shift).
Instead, we try $O(\sqrt{k})$ different shifts for each of $A,B$; by the birthday paradox, there exists a pair of shifts that exactly adds up to the right shift in $[\pm k]$. The total run time is given by ${\tilde{O}}(n/k \cdot \sqrt{k}) = {\tilde{O}}(n/\sqrt{k})$. \cite{GRS20} also considered the case where we can only preprocess one of the strings. In this case, we can mimic the strategy from the previous paragraph, but take all $O(k)$ shifts on the preprocessed string, saving the $O(\sqrt{k})$-factor at query time. This gives the following result: \begin{theorem*}[Informal statement of Theorem~\ref{thm:1-preprocessing}] We can distinguish between $\operatorname{ED}(A, B) \le k$ and $\operatorname{ED}(A, B) = \tilde{\Omega}(k^2)$ with high probability in ${\tilde{O}}(n)$ preprocessing time of $A$ and $\tilde{O}(n/k)$ query time. \end{theorem*} Our query time improves over an ${\tilde{O}}(n/k+k^2)$-time algorithm in~\cite{GRS20} that used similar ideas. (A similar algorithm with low asymmetric query complexity was also introduced in~\cite{GKS19}.) \subsubsection*{Trading off running time for better approximation} By combining our algorithm with the $h$-wave algorithm of~\cite{LMS98}, we can trade off approximation guarantee and running time in our algorithms. The running times we obtain for $k$ vs $k \ell$ edit distance are: \begin{description} \item[No preprocessing] ${\tilde{O}}(\frac{n\sqrt{k}+k^{2.5}}{\ell})$ running time for $\ell \in [\sqrt{k}, k]$. (Theorem~\ref{thm:no-preprocessing2}) \item[One-sided preprocessing] ${\tilde{O}}(\frac{nk}{\ell})$ preprocessing time and ${\tilde{O}}(\frac{n+k^2}{\ell})$ query time. (Theorem~\ref{thm:1-preprocessing2}) \item[Two-sided preprocessing] ${\tilde{O}}(\frac{nk}{\ell})$ preprocessing time and ${\tilde{O}}(\frac{k^2}{\ell})$ query time.
(Corollary~\ref{thm:2-preprocessing2}) \end{description} \subsection*{Organization} Section~\ref{sec:prelim} gives an overview of the randomized hashing technique we use, as well as a structural decomposition lemma for close strings. Section~\ref{sec:k-vs-k2} gives a meta-algorithm for distinguishing $k$ versus $k^2$ edit distance. Sections~\ref{subsec:two-sided}, \ref{sec:zero-sided}, and~\ref{sec:one-sided} respectively implement this meta-algorithm for two-, zero-, and one-sided preprocessing.
Appendix~\ref{app:k-vs-k1+eps} explains how to trade off running time for improved gap of $k$ versus $k\ell$ edit distance. Appendix~\ref{app:omit} includes the proof of our structural decomposition lemma. \section{Preliminaries}\label{sec:prelim} \subsection{Rabin-Karp Hashing} A standard preprocessing ingredient is Rabin-Karp-style rolling hashes (e.g., \cite{cormen2009introduction}). We identify the alphabet $\Sigma$ with $1, 2, \hdots, |\Sigma|$. Assume there is also $\$ \not\in \Sigma$, which we index by $|\Sigma|+1$.\footnote{We assume that all indices out of range of $A[1,n]$ are equal to $\$$.} Assume before any preprocessing that we have picked a prime $p$ with $\Theta(\log n + \log |\Sigma|)$ digits as well as a uniformly random value $x \in \{0, 1, \hdots, p-1\}$. We also have $S \subset [n]$, a \emph{subsample} of the indices which allows for sublinear preprocessing of the rolling hashes while still successfully testing string matching (up to a $\tilde{O}(n/|S|)$ Hamming error). \begin{algorithm}[h] \caption{$\operatorname{InitRollingHash}(A, S)$} \begin{algorithmic} \item[{\bf Input:}] $A \in \Sigma^n$; $S$ array of indices to be hashed \\\hspace{-0.15in}\textbf{Output:} $H$, a list of $|S|+1$ hashes \STATE $H \leftarrow [0]$ \STATE $c \leftarrow 0$ \STATE {\bf for} $i \in S$ {\bf do}\\ \STATE \ \ \ $c \leftarrow cx + A[i]\mod p$ \STATE \ \ \ append $c$ to $H$.
\RETURN $H$ \end{algorithmic} \label{alg:inithash} \end{algorithm} \begin{algorithm}[h] \caption{$\operatorname{RetrieveRollingHash}(A, S, H, i, j)$} \begin{algorithmic} \item[{\bf Input:}] $A \in \Sigma^n$; $S$ array of hashed indices; $H$ list of hashes; $i \le j$ indices from $1$ to $n$. \\\hspace{-0.15in}\textbf{Output:} $h$, hash of string \STATE $i' \leftarrow$ least index such that $S[i'] \ge i$. \STATE $j' \leftarrow$ greatest index such that $S[j'] \le j$. \RETURN $h \leftarrow H[j'] - H[i'-1] x^{j'-i'+1} \mod p$ \end{algorithmic} \label{alg:retrievehash} \end{algorithm} Observe that $\operatorname{InitRollingHash}$ runs in $\tilde{O}(|S|)$ time and $\operatorname{RetrieveRollingHash}$ runs in $\tilde{O}(1)$ time.
The correctness guarantees follow from the following standard proposition. \begin{proposition}\label{prop:hash-standard} Let $A, B \in \Sigma^n$ and $S := \{1, 2, \hdots, n\}$. Let $H_A = \operatorname{InitRollingHash}(A,S)$ and $H_B = \operatorname{InitRollingHash}(B,S)$. The following holds with probability at least $1-\frac{1}{n^4}$ over the choice of $x$. For all $i_A\le j_A$ and $i_B \le j_B$, we have that \[\operatorname{RetrieveRollingHash}(A, S, H_A, i_A, j_A) = \operatorname{RetrieveRollingHash}(B, S, H_B, i_B, j_B)\] if and only if $A[i_A, j_A] = B[i_B, j_B]$. \end{proposition} This proposition is sufficient for our warm-up two-sided preprocessing algorithm. However, for the other algorithms, we need to have $|S| = o(n)$ for our hashing to be sublinear. This is captured by another claim. \begin{claim}\label{claim:hash-random} Let $A, B \in \Sigma^n$ and $S \subseteq \{1, 2, \hdots, n\}$ be a random subset with each element included independently with probability at least $\alpha := \min(\tfrac{4\ln n}{k}, 1)$. Let $H_A = \operatorname{InitRollingHash}(A,S)$ and $H_B = \operatorname{InitRollingHash}(B,S)$.
For any $i \le j$ in $\{1, \hdots, n\}$ we have \begin{itemize} \item[(1)] If $A[i, j] = B[i, j]$ then $\operatorname{RetrieveRollingHash}(A, S, H_A, i, j) = \operatorname{RetrieveRollingHash}(B, S, H_B, i, j)$. \item[(2)] If $\operatorname{Ham}(A[i, j], B[i, j]) \ge k$ then with probability at least $1-\frac{1}{n^3}$ over the choice of $x$ and $S$, $\operatorname{RetrieveRollingHash}(A, S, H_A, i, j) \neq \operatorname{RetrieveRollingHash}(B, S, H_B, i, j)$. \end{itemize} \end{claim} \begin{proof} Let $A_S$ and $B_S$ be the subsequences of $A$ and $B$ corresponding to the indices $S$. Note that if $A[i, j] = B[i, j]$ then $A_S[i', j'] = B_S[i', j']$, where $i'$ and $j'$ are chosen as in $\operatorname{RetrieveRollingHash}$. Property (1) then follows by Proposition~\ref{prop:hash-standard}. If $\operatorname{Ham}(A[i, j], B[i, j]) \ge k$, the probability that there exists $i_0 \in S \cap [i, j]$ such that $A[i_0] \neq B[i_0]$ and thus $A_S[i', j'] \neq B_S[i', j']$ is $1$ if $\alpha = 1$ and otherwise at least \[1 - (1 - (4\ln n)/k)^{k} \ge 1 - 1/e^{4\ln n} = 1-1/n^4.\]
If $A_S[i', j'] \neq B_S[i', j']$ then by Proposition~\ref{prop:hash-standard}, \[\operatorname{RetrieveRollingHash}(A, S, H_A, i, j) \neq \operatorname{RetrieveRollingHash}(B, S, H_B, i, j)\] with probability at least $1 - 1/n^4$. Therefore, for a random $S$, the probability that $\operatorname{RetrieveRollingHash}(A, S, H_A, i, j) \neq \operatorname{RetrieveRollingHash}(B, S, H_B, i, j)$ is at least $1 - 1/n^4 - 1/n^4 > 1 - 1/n^3$. Thus, property (2) follows. \end{proof} \subsection{Structural Decomposition Lemma} \begin{definition}[$k$-alignment and approximate $k$-alignment] \hfill Given strings $A,B$, we say that a substring $B[i_B,i_B+d-1]$ with $1 \le i_B, i_B + d-1 \le n$ is in {\em $k$-alignment} with $A[i_A, i_A+d-1]$ if $|i_A-i_B|\le k$ and $A[i_A, i_A+d-1] = B[i_B,i_B+d-1]$.
If instead we have $|i_A - i_B| \le 3k$ and $\operatorname{ED}(A[i_A, i_A+d-1], B[i_B, i_B+d-1]) \le 3k$, we say that $B[i_B, i_B+d-1]$ is in {\em approximate $k$-alignment} with $A[i_A, i_A+d-1]$. We say that $B[i_B,i_B+d-1]$ has an (approximate) $k$-alignment in $A$ if there is an $i_A$ with $|i_A-i_B|\le k$ such that $B[i_B,i_B+d-1]$ is in (approximate) $k$-alignment with $A[i_A,i_A+d-1]$. \end{definition} For all our algorithms we need the following decomposition lemma. The proof is deferred to Appendix~\ref{app:omit}. \begin{lemma}\label{lem:decomp} Let $A, B \in \Sigma^{*}$ be strings such that $\operatorname{ED}(A,B) \le k$. Then, $A$ and $B$ can be partitioned into $2k+1$ intervals $I_1^A, \hdots, I_{2k+1}^A$; $I_1^B, \hdots, I_{2k+1}^B$, respectively, and a partial monotone matching $\pi : [2k+1] \to [2k+1] \cup \{\perp\}$ such that \begin{itemize} \item Unmatched intervals are of length at most $1$, and \item For all $i$ in the matching, $B[I_{\pi(i)}^B]$ is in $k$-alignment with $A[I_i^A]$.%
\end{itemize} \end{lemma} \section{A meta-algorithm for distinguishing $k$ vs.\ $k^2$}\label{sec:k-vs-k2} In this section, we present $\operatorname{GreedyMatch}$ (Algorithm~\ref{alg:jaggedmatch}), a simple algorithm for distinguishing $\operatorname{ED}(A, B) \le O(k)$ from $\operatorname{ED}(A, B) \ge \Omega(k^2)$. The algorithm assumes access to a data structure $\method{MaxAlign}_k$ as defined below. In the following sections, we will present different implementations of this data structure for the case of two-sided, one-sided, and no preprocessing. Define $\method{MaxAlign}_k(A, B, i_B)$ to be a function which returns $d \in [0, n]$. We say that an implementation of $\method{MaxAlign}_k(A, B, i_B)$ is \emph{correct} if with probability $1$ it outputs the maximum $d$ such that $B[i_B, i_B+d-1]$ has a $k$-alignment in $A$, and if no $k$-alignment exists, it outputs $d = 0$. We say that an implementation is \emph{approximately correct} if the following are true.
\begin{enumerate} \item Let $d$ be maximal such that $B[i_B, i_B+d-1]$ has a $k$-alignment in $A$. With probability $1$, $\method{MaxAlign}_k(A, B, i_B) \ge d$. \item With probability at least $1-1/n^2$, $B[i_B, i_B+ \method{MaxAlign}_k(A, B, i_B)-1]$ has an approximate $k$-alignment in $A$. \end{enumerate} We say that an implementation is \emph{half approximately correct} if the following are true. \begin{enumerate} \item Let $d$ be maximal such that $B[i_B, i_B+d-1]$ has a $k$-alignment. With probability $1$, $\method{MaxAlign}_k(A, B, i_B) > d/2$ (unless $d=0$). \item With probability at least $1-1/n^2$, $B[i_B, i_B+ \method{MaxAlign}_k(A, B, i_B)-1]$ has an approximate $k$-alignment in $A$. \end{enumerate} \begin{algorithm}[h] \caption{$\operatorname{GreedyMatch}(A, B, k)$} \begin{algorithmic} \item[{\bf Input:}] $A, B \in \Sigma^n$, $k \le n$ \\\hspace{-0.15in}\textbf{Output:} SMALL if $\operatorname{ED}(A, B) \le k$ or LARGE if $\operatorname{ED}(A, B) > 40k^2$ \STATE $i_B \leftarrow 1$ \STATE \textbf{for} $e$ \textbf{from} $1$ \textbf{to} $2k+1$ \STATE \ \ \ $i_B \leftarrow i_B + \max(\method{MaxAlign}_k(A, B, i_B), 1)$
\STATE \ \ \ \textbf{if} $i_B > n$ \STATE \ \ \ \ \ \ \textbf{return}\ \ SMALL \RETURN LARGE \end{algorithmic} \label{alg:jaggedmatch} \end{algorithm} We now give the following correctness guarantee. \begin{lemma}\label{lem:jagged-match} If $\method{MaxAlign}_k$ is approximately correct and $\operatorname{ED}(A, B) \le k$, then with probability $1$, $\operatorname{GreedyMatch}(A, B, k)$ returns SMALL. If $\method{MaxAlign}_k$ is half approximately correct and $\operatorname{ED}(A, B) \le k/(2\log n)$, then with probability $1$, $\operatorname{GreedyMatch}(A, B, k)$ returns SMALL. If $\method{MaxAlign}_k$ is (half) approximately correct and $\operatorname{ED}(A, B) > 40k^2$, then with probability $1 - \frac{1}{n}$, $\operatorname{GreedyMatch}(A, B, k)$ returns LARGE.
Further, $\operatorname{GreedyMatch}(A, B, k)$ makes $O(k)$ calls to $\method{MaxAlign}_k$ and otherwise runs in $O(k\log n)$ time. \end{lemma} \begin{proof} If $\method{MaxAlign}_k$ is approximately correct and if $\operatorname{ED}(A, B) \le k$ then by Lemma~\ref{lem:decomp}, $B$ can be decomposed into $2k+1$ intervals such that they are each of length at most $1$ or they exactly match the corresponding interval of $A$, up to a shift of $k$. In the algorithm, if $i_B$ is in one of these intervals, then $\method{MaxAlign}_k$ finds the rest of the interval (and perhaps more). Then, the algorithm will reach the end of $B$ in $2k+1$ steps and output SMALL. Let $k' = k/(2\log n)$. If $\method{MaxAlign}_k$ is half approximately correct and $\operatorname{ED}(A, B) \le k'$ then by Lemma~\ref{lem:decomp}, $B$ can be decomposed into $2k'+1$ intervals such that they are each of length at most $1$ or they exactly match the corresponding interval of $A$, up to a shift of $k$. In the algorithm, if $i_B$ is in one of these intervals, then $\method{MaxAlign}_k$ finds more than half of the interval. Thus, it takes at most $\log n$ steps for the algorithm to get past each of the $2k'+1$ intervals. Thus, the algorithm will reach the end of $B$ in $(2k'+1)(\log n) < 2k+1$ steps and output SMALL. For the other direction, it suffices to prove that if the algorithm outputs SMALL then $\operatorname{ED}(A, B) \le 40k^2$. If $\method{MaxAlign}_k$ is (half) approximately correct, and the algorithm outputs SMALL, with probability at least $1-1/n$ over all calls to $\method{MaxAlign}_k$, there exists a decomposition of $B$ into $2k+1$ intervals such that each is either of length $1$ or has an approximate $k$-alignment in $A$.
Thus, there exists a sequence of edit operations from $B$ to $A$ by \begin{enumerate} \item deleting the at most $2k+1$ characters of $B$ which do not match, \item modifying at most $3k$ characters within each interval of $B$, and \item adding/deleting $6k$ characters between each consecutive pair of exactly-matching intervals (and before the first and after the last interval), since each match had a shift of up to $3k$. \end{enumerate} This is a total of $2k+1 + 3k(2k+1) + 6k(2k+2) \le 40k^2$ operations. Thus, if $\operatorname{ED}(A, B) > 40k^2$, $\operatorname{GreedyMatch}(A, B, k)$ returns LARGE with probability at least $1 - \frac{1}{n}$. The runtime analysis follows by inspection. \end{proof} By Lemma~\ref{lem:jagged-match}, it suffices to implement $\method{MaxAlign}_k$ efficiently and with $1/\operatorname{poly}(n)$ error probability in various models.%
\section{Warm-up: Two-sided Preprocessing} \label{subsec:two-sided} As a warm-up, we give an implementation of $\method{MaxAlign}_k$ that first preprocesses $A$ and $B$ (separately) for $\operatorname{poly}(n)$ time%
\footnote{It is not hard to improve the preprocessing time to ${\tilde{O}}(n)$. We omit the details since this algorithm would still not be optimal for the two-sided preprocessing setting.}, and then implements $\method{MaxAlign}_k$ queries in $O(\log(n))$ time.
Algorithm~\ref{alg:twosidedpreproc} takes as input a string $A$ and produces $(H_A, T_A)$, the rolling hashes of $A$ and a collection of hash tables. We let $H_B, T_B$ denote the corresponding preprocessing output for $B$. Algorithm~\ref{alg:align-twosidedpreproc} gives a correct implementation of $\method{MaxAlign}_k$ with the assistance of this preprocessing.
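To make the two-phase pattern concrete, here is a hedged Python sketch of the warm-up idea: store, for every interval of $A$, the hashes of all its $\pm k$-shifted copies, and answer a query by binary search, which is valid because any prefix of a $k$-aligned substring is itself $k$-aligned. Python's built-in string hash stands in for the Rabin-Karp rolling hash, and the names and 0-indexing are ours, not the paper's.

```python
def preprocess(A, k):
    # For every interval [i, j] of A, store the hashes of all k-shifted
    # copies A[i+a : j+a+1], a in [-k, k].  Python's built-in string hash
    # stands in for the Rabin-Karp rolling hash; this mirrors the poly(n)
    # warm-up preprocessing, not an optimized version.
    n = len(A)
    T = {}
    for i in range(n):
        for j in range(i, n):
            T[(i, j)] = {hash(A[i + a:j + a + 1])
                         for a in range(-k, k + 1)
                         if i + a >= 0 and j + a < n}
    return T


def max_align(T, B, iB):
    # Binary search for the largest d such that B[iB:iB+d] equals some
    # k-shifted substring of A.  The search is valid because any prefix
    # of a k-aligned substring is k-aligned too.  Assumes |A| = |B|.
    lo, hi = 0, len(B) - iB
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if hash(B[iB:iB + mid]) in T[(iB, iB + mid - 1)]:
            lo = mid
        else:
            hi = mid - 1
    return lo
```

Each query then costs $O(\log n)$ hash lookups, matching the query bound claimed for the warm-up (the preprocessing here is cubic in $n$, as in the unoptimized version).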
\begin{algorithm}[h] \caption{$\operatorname{TwoSidedPreprocessing}_k(A)$} \begin{algorithmic} \item[{\bf Input:}] $A \in \Sigma^n$, $k \le n$ \\\hspace{-0.15in}\textbf{Output:} $(H_A, T_A)$, a collection of hashes \STATE $H_A \leftarrow \operatorname{InitRollingHash}(A, [1,n])$ \STATE $T_A \leftarrow n \times n$ matrix of hash tables \STATE \textbf{for} $i$ \textbf{from} $1$ \textbf{to} $n$ \STATE \ \ \ \textbf{for} $j$ \textbf{from} $i$ \textbf{to} $n$ \STATE \ \ \ \ \ \ \textbf{for} $a$ \textbf{from} $-k$ \textbf{to} $k$ \STATE \ \ \ \ \ \ \ \ \ \textbf{if} $[i+a, j+a] \subset [1, n]$, add $\operatorname{RetrieveRollingHash}(A, [1,n], H_A, i+a, j+a)$ to $T_A[i,j]$ \RETURN $(H_A, T_A)$ \end{algorithmic} \label{alg:twosidedpreproc} \end{algorithm} \begin{algorithm}[h] \caption{$\operatorname{TwoSidedMaxAlign}_k(A, B, i_B)$} \begin{algorithmic} \item[{\bf Input:}] $A \in \Sigma^n, B \in \Sigma^n$, $k \le n$, $i_B \in [1, n]$ \\\hspace{-0.15in}\textbf{Output:} $d \in [0, n]$. \STATE Binary search to find maximal $d \in [0, n-i_B+1]$ such that\\ \ \ \ \ \ \ $\operatorname{RetrieveRollingHash}(B, [1,n], H_B, i_B, i_B+d-1) \in T_A[i_B, i_B+d-1]$ \RETURN $d$ \end{algorithmic} \label{alg:align-twosidedpreproc} \end{algorithm} \begin{lemma}\label{lem:two-sided-align} $\operatorname{TwoSidedMaxAlign}_k$ is a correct implementation of $\method{MaxAlign}_k$. \end{lemma} \begin{proof} Observe that $\operatorname{TwoSidedMaxAlign}$ is correct if for all $a \in [-k, k]$, $\operatorname{RetrieveRollingHash}(A, [1,n], H_A, i_B+a, i_B+d+a) = \operatorname{RetrieveRollingHash}(B, [1,n], H_B, i_B, i_B+d)$ if and only if $A[i_B+a, i_B+d+a] = B[i_B,i_B+d]$. By Proposition~\ref{prop:hash-standard} and the union bound, this happens with probability at least $1 - \frac{1}{n^3}$.
\end{proof} \begin{theorem}\label{thm:zero-sided} When both $A$ and $B$ are preprocessed for $\operatorname{poly}(n)$ time, we can distinguish between $\operatorname{ED}(A, B) \le k$ and $\operatorname{ED}(A, B) > 40k^2$ in $\tilde{O}(k)$ time with probability $1 - \frac{1}{n}$. \end{theorem} \begin{remark} Note that~\cite{CGK16}'s algorithm obtains similar guarantees while only spending $O(\log(n))$ query time. Further, sketching algorithms for edit distance often achieve much better approximation factors, but the preprocessing is often not near-linear (e.g., \cite{BZ16}).\footnote{Document exchange (e.g., \cite{BZ16,H19}) is similar to the one-sided preprocessing model, but $A$ and $B$ are never brought together (rather a hash of $A$ is sent to $B$).} \end{remark} \begin{proof}[Proof of Theorem~\ref{thm:zero-sided}] By Lemma~\ref{lem:two-sided-align}, $\operatorname{TwoSidedMaxAlign}$ is correct (and thus approximately correct), so by Lemma~\ref{lem:jagged-match}, $\operatorname{GreedyMatch}$ outputs the correct answer with probability at least $1 - \frac{1}{n}$. By inspection, the preprocessing runs in $\operatorname{poly}(n)$ time. Further, as the binary search, hash computation, and table lookup are all $\tilde{O}(1)$ operations, $\operatorname{TwoSidedMaxAlign}$ runs in $\tilde{O}(1)$ time, so the two-sided preprocessing version of $\operatorname{GreedyMatch}$ runs in $\tilde{O}(k)$ time. \end{proof} \section{Main Result: $k$ vs $k^2$ with No Preprocessing} \label{sec:zero-sided} As explained in the introduction, for the no preprocessing case, we take advantage of the fact that any $c \in [-k, k]$ can be written as $a\sqrt{k} + b$, where $a, b \in [-\sqrt{k}, \sqrt{k}]$.\footnote{We have $\sqrt{k}$ as shorthand for $\lceil \sqrt{k}\rceil$.} Thus, for $A$ we compute rolling hash tables according to $S + a\sqrt{k} := \{s + a\sqrt{k} : s \in S\} \cap [1, n]$ for $a \in [-\sqrt{k}, \sqrt{k}]$.
Likewise, for $B$ we compute rolling hash tables according to $S - b := \{s - b : s \in S\} \cap [1, n]$. Then, if we seek to compare $A[i_B+c, i_B+c+d-1]$ and $B[i_B, i_B+d-1]$, it essentially suffices to compare\footnote{We need to ``shave'' $k$ from each end of the substrings as we need to ensure that $[i_B-b+k, i_B+d-1-b-k] \subset [i_B, i_B+d-1],$ etc.} $A[i_B+a\sqrt{k}+k, i_B+a\sqrt{k}+d-1-k]$ and $B[i_B-b+k, i_B+d-1-b-k]$. Before calling $\operatorname{GreedyMatch}$, we call two methods $\operatorname{ProcessA}$ and $\operatorname{ProcessB}$ which compute these hash tables. Note that the procedures are asymmetrical. These take $\tilde{O}(n/\sqrt{k})$ time each. \begin{algorithm}[h] \caption{$\operatorname{ProcessA}_k(A)$} \begin{algorithmic} \item[{\bf Input:}] $A \in \Sigma^n$ \STATE \textbf{for} $a$ \textbf{from} $-\sqrt{k}$ \textbf{to} $\sqrt{k}$ \STATE \ \ \ $H_{A,a\sqrt{k}} \leftarrow \operatorname{InitRollingHash}(A, S+a\sqrt{k})$ \RETURN $\{H_{A,a\sqrt{k}} : a \in [-\sqrt{k}, \sqrt{k}]\}$ \end{algorithmic} \label{alg:processA} \end{algorithm} \begin{algorithm}[h] \caption{$\operatorname{ProcessB}_k(B)$} \begin{algorithmic} \item[{\bf Input:}] $B \in \Sigma^n$ \STATE \textbf{for} $b$ \textbf{from} $-\sqrt{k}$ \textbf{to} $\sqrt{k}$ \STATE \ \ \ $H_{B,b} \leftarrow \operatorname{InitRollingHash}(B, S-b)$ \RETURN $\{H_{B,b} : b \in [-\sqrt{k}, \sqrt{k}]\}$ \end{algorithmic} \end{algorithm} \begin{algorithm}[h!]
\caption{$\operatorname{MaxAlign}_k(A, B, i_B)$} \begin{algorithmic} \item[{\bf Input:}] $A \in \Sigma^n, B \in \Sigma^n$, $k \le n$, $i_B \in [1, n]$ \STATE $d_{0} \leftarrow 2k$, $d_1 \leftarrow n-i_B+1$ \STATE \textbf{while} $d_0 \neq d_1$ \textbf{do} \STATE \ \ \ $d_{\text{mid}} \leftarrow \lceil(d_0 + d_1) / 2\rceil$ \STATE \ \ \ \textbf{if} $d \le 2k$ \textbf{then return} {\textbf{True}}{} \STATE \ \ \ $L_A, L_B \leftarrow 0$ \STATE \ \ \ \textbf{for} $a$ \textbf{from} $-\sqrt{k}$ \textbf{to} $\sqrt{k}$ \STATE \ \ \ \ \ \ $h \leftarrow \operatorname{RetrieveRollingHash}(A, S+a\sqrt{k}, H_{A,a\sqrt{k}}, i_B+k+a\sqrt{k}, i_B+d_{\text{mid}}-k-1+a\sqrt{k})$ \STATE \ \ \ \ \ \ append $h$ to $L_A$ \STATE \ \ \ \textbf{for} $b$ \textbf{from} $-\sqrt{k}$ \textbf{to} $\sqrt{k}$ \STATE \ \ \ \ \ \ $h \leftarrow \operatorname{RetrieveRollingHash}(B, S-b, H_{B,b}, i_B+k-b, i_B+d_{\text{mid}}-k-1-b)$ \STATE \ \ \ \ \ \ append $h$ to $L_B$ \STATE \ \ \ sort $L_A$ and $L_B$ \STATE \ \ \ \textbf{if} $L_A \cap L_B \neq \emptyset$ \STATE \ \ \ \ \ \ \textbf{then} $d_0 \leftarrow d_{\text{mid}}$ \STATE \ \ \ \ \ \ \textbf{else} $d_1 \leftarrow d_{\text{mid}}-1$. \RETURN $d_0$ \end{algorithmic} \label{alg:align-onesidedpreproc} \end{algorithm} \begin{lemma}\label{lem:two-sided-align} $\method{MaxAlign}_k$ is approximately correct. \end{lemma} \begin{proof} First, consider any $d \ge 1$ such that $B[i_B, i_B+d-1]$ has a $k$-alignment in $A$. We seek to show that $\method{MaxAlign}_k(A, B, i_B) \ge d$ with probability $1$. Note that the output of $\method{MaxAlign}_k$ is always at least $2k$, so we may assume that $d > 2k$. By definition of $k$-alignment, there exists $c \in [-k, k]$ such that $A[i_B + c, i_B+d-1+c] = B[i_B, i_B+d-1]$. Note that there exists $a, b \in [-\sqrt{k}, \sqrt{k}]$ such that $a\sqrt{k} + b = c$ and so \[ A[i_B + k + a\sqrt{k}, i_B + d-k-1+a\sqrt{k}] = B[i_B + k-b, i_B + d-k-1-b]. 
\] By applying Claim~\ref{claim:hash-random}, we have with probability $1$ that \begin{align*} \operatorname{RetrieveRollingHash}&(A, S+a\sqrt{k}, H_{A,a\sqrt{k}}, i_B + k + a\sqrt{k}, i_B+d-k-1+a\sqrt{k})\\&= \operatorname{RetrieveRollingHash}(B, S-b, H_{B,b}, i_B+k-b, i_B+d-k-1-b). \end{align*} Therefore, in the implementation of $\method{MaxAlign}_k(A, B, i_B)$, if $d_{\text{mid}} = d$, then $L_A$ and $L_B$ will have nontrivial intersection, so the output of the binary search will be at least $d$, as desired. Thus, $\method{MaxAlign}_k(A, B, i_B)$ will output at least the length of the maximal $k$-alignment. Second, we verify that $\method{MaxAlign}_k$ outputs an approximate $k$-alignment. Let $d$ be the output of $\method{MaxAlign}_k$. Either $d = 2k$, in which case $B[i_B, i_B+d-1]$ is trivially in approximate $k$-alignment with $A[i_B, i_B+d-1]$, or $d > 2k$. In the latter case, the binary search found that $L_A \cap L_B \neq \emptyset$ and so there exist $a, b \in [-\sqrt{k}, \sqrt{k}]$ such that \begin{align*} \operatorname{RetrieveRollingHash}&(A, S+a\sqrt{k}, H_{A,a\sqrt{k}}, i_B + k + a\sqrt{k}, i_B+d-k-1+a\sqrt{k})\\&= \operatorname{RetrieveRollingHash}(B, S-b, H_{B,b}, i_B+k-b, i_B+d-k-1-b). \end{align*} Applying Claim~\ref{claim:hash-random} over all $\tilde{O}(\sqrt{k}^2) = \tilde{O}(k)$ comparisons of hashes made during the algorithm, with probability at least $1 - 1/n^3$, we must have that \[ \operatorname{ED}(A[i_B + k + a\sqrt{k}, i_B+d-k-1+a\sqrt{k}], B[i_B+k-b, i_B+d-k-1-b]) \le k. \] Let $c := a\sqrt{k} + b$; then we have that \[ \operatorname{ED}(A[i_B + k + c-b, i_B+d-k-1+c-b], B[i_B+k-b, i_B+d-k-1-b]) \le k, \] so \[ \operatorname{ED}(A[i_B+c, i_B+d-1+c], B[i_B, i_B+d-1]) \le 3k. \] Since $c = a\sqrt{k} + b \in [-3k, 3k]$, we have that $B[i_B, i_B+d-1]$ has an approximate $k$-alignment, as desired.
\end{proof} \begin{theorem}\label{thm:no-preprocessing} For $k \le O(\sqrt{n})$, with no preprocessing, we can distinguish between $\operatorname{ED}(A, B) \le k$ and $\operatorname{ED}(A, B) > 40k^2$ in $\tilde{O}(n/\sqrt{k})$ time with probability at least $1 - \frac{1}{n}$. \end{theorem} \begin{proof} By Lemma~\ref{lem:two-sided-align}, $\operatorname{MaxAlign}_k$ is approximately correct, so by Lemma~\ref{lem:jagged-match}, $\operatorname{GreedyMatch}$ outputs the correct answer with probability at least $1 - \frac{1}{n}$. By inspection, both $\operatorname{ProcessA}_k$ and $\operatorname{ProcessB}_k$ run in $\tilde{O}(n/\sqrt{k})$ time in expectation. Further, each call to $\operatorname{MaxAlign}_k$ runs in $\tilde{O}(\sqrt{k})$ time, so $\operatorname{GreedyMatch}$ runs in $\tilde{O}(n/\sqrt{k}+k^{3/2}) = \tilde{O}(n/\sqrt{k})$ time. \end{proof} \section{One-sided Preprocessing} \label{sec:one-sided} For the one-sided preprocessing, we aim for near-linear preprocessing time. To achieve this, $\method{MaxAlign}_k$ will be half approximately correct rather than approximately correct. Recall that, as before, we preselect $S \subset [1,n]$ with each element included i.i.d.~with probability $q := \min(\frac{4\ln n}{k}, 1)$. We also assume that every multiple of $k$ is in $S$ and that $n-1$ is in $S$. This only increases the size of $S$ by $n/k$, and does not hurt the success probability of Claim~\ref{claim:hash-random}. To achieve near-linear preprocessing, we only store $ \operatorname{RetrieveRollingHash}(A, S+a, H_{A,a}, i+a, i+2^{i_0}-1+a)$ when $(S+a) \cap [i+a, i+2^{i_0}-1+a]$ changes. This happens when $i \in (S+1) \cup (S - 2^{i_0}+1)$.
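The subsampled rolling hashes used throughout can be sketched in Python. This is a minimal illustration, not the paper's exact construction: we use a fixed polynomial hash over the sampled positions only, so equal hashes certify equality of the sampled characters for identically aligned windows, which is why the algorithms above keep one hash table per shift of $S$.

```python
import bisect

# Modulus and base for the polynomial hash (illustrative choices).
MOD = (1 << 61) - 1
P = 1_000_003

def init_rolling_hash(A, S):
    """Prefix hashes of string A restricted to the sorted sample positions S
    (1-indexed), playing the role of InitRollingHash."""
    S = sorted(s for s in S if 1 <= s <= len(A))
    pref = [0]
    for s in S:
        pref.append((pref[-1] + ord(A[s - 1]) * pow(P, s, MOD)) % MOD)
    return S, pref

def retrieve_rolling_hash(H, l, r):
    """Hash of the sampled characters with positions in [l, r], playing the
    role of RetrieveRollingHash; O(log |S|) via binary search."""
    S, pref = H
    i, j = bisect.bisect_left(S, l), bisect.bisect_right(S, r)
    return (pref[j] - pref[i]) % MOD
```

Because the prefix sums are additive, the hash of a union of disjoint ranges is the sum of their hashes, and two strings that agree on all sampled positions of a range get the same hash with certainty; mismatches are only caught when a sampled position differs.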
\begin{algorithm}[h] \caption{$\operatorname{OneSidedPreprocessA}_k(A)$} \begin{algorithmic} \STATE \textbf{for} $a$ \textbf{from} $-k$ \textbf{to} $k$ \STATE \ \ \ $H_{A,a} \leftarrow \operatorname{InitRollingHash}(A, S+a)$ \STATE $T_A \leftarrow$ $\lfloor \log n\rfloor \times \frac{n}{k}$ matrix of empty hash tables \STATE \textbf{for} $i_0$ \textbf{in} $[\lfloor \log n \rfloor]$ \STATE \ \ \ \textbf{for} $a$ \textbf{from} $-k$ \textbf{to} $k$ \STATE \ \ \ \ \ \ \textbf{for} $i$ \textbf{in} $((S+1) \cup (S - 2^{i_0}+1))$ \textbf{ with } $[i+a, i+2^{i_0}-1+a] \subset [n]$ \STATE \ \ \ \ \ \ \ \ \ $h \leftarrow \operatorname{RetrieveRollingHash}(A, S+a, H_{A,a}, i+a, i+2^{i_0}-1+a)$ \STATE \ \ \ \ \ \ \ \ \ add $h$ to $T_A[i_0, \lfloor i/k\rfloor - 1]$ \STATE \ \ \ \ \ \ \ \ \ add $h$ to $T_A[i_0, \lfloor i/k\rfloor]$ \STATE \ \ \ \ \ \ \ \ \ add $h$ to $T_A[i_0, \lfloor i/k\rfloor + 1]$ \RETURN $T_A$ \end{algorithmic} \label{alg:onesidedprocessA} \end{algorithm} \begin{claim}\label{claim:preproc-fast} $\operatorname{OneSidedPreprocessA}(A)$ runs in $\tilde{O}(n)$ time in expectation. \end{claim} \begin{proof} Computing $\operatorname{InitRollingHash}(A, S+a)$ takes $|S| = \tilde{O}(n/k)$ time in expectation. Thus, computing the $H_{A,a}$'s takes $\tilde{O}(n)$ time. The other loops take (amortized) $\tilde{O}(1) \cdot O(k) \cdot \tilde{O}(n/k) = \tilde{O}(n)$ time. \end{proof} Before we call $\operatorname{GreedyMatch}$, we need to initialize the hash function for $B$ using $\operatorname{OneSidedProcessB}(B)$. This takes $\tilde{O}(n/k)$ time in expectation.
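The table layout of $\operatorname{OneSidedPreprocessA}_k$, hashes of power-of-two windows bucketed by $(i_0, \lfloor i/k\rfloor)$, can be illustrated with a toy Python version. This sketch hashes all characters rather than a sampled set, so its preprocessing is quadratic rather than $\tilde{O}(n)$; it only shows how the bucketed lookup tolerates window starts that differ by up to $k$.

```python
from collections import defaultdict

def preprocess(A, k):
    """For every power-of-two length 2^j, bucket the hash of A[i, i+2^j-1]
    under (j, i // k) and the two neighbouring k-buckets, as in T_A."""
    T = defaultdict(set)
    n = len(A)
    j = 0
    while (1 << j) <= n:
        d = 1 << j
        for i in range(n - d + 1):
            h = hash(A[i:i + d])
            for db in (-1, 0, 1):  # neighbouring buckets absorb small shifts
                T[(j, i // k + db)].add(h)
        j += 1
    return T

def max_align(T, B, i_B, k):
    """Largest power-of-two prefix of B[i_B:] whose hash appears in A's
    table near position i_B (a single hash plus O(1) lookups per length)."""
    j, best = 0, 0
    while i_B + (1 << j) <= len(B):
        d = 1 << j
        if hash(B[i_B:i_B + d]) in T[(j, i_B // k)]:
            best = d
        j += 1
    return best
```

The query side mirrors $\operatorname{OneSidedMaxAlign}_k$ below: one hash of $B$'s window per candidate length, and the three stored buckets guarantee a hit whenever the matching window in $A$ starts within $k$ positions of $i_B$.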
\begin{algorithm}[h] \caption{$\operatorname{OneSidedProcessB}(B)$} \begin{algorithmic} \RETURN $H_{B} \leftarrow \operatorname{InitRollingHash}(B, S)$ \end{algorithmic} \end{algorithm} \begin{algorithm}[h] \caption{$\operatorname{OneSidedMaxAlign}_k(A, B, i_B)$} \begin{algorithmic} \item[{\bf Input:}] $A \in \Sigma^n, B \in \Sigma^n$, $k \le n$, $i_B \in [1, n]$ \STATE \textbf{for} $d \in [2^{\lfloor \log n\rfloor}, 2^{\lfloor \log n\rfloor-1}, \hdots, 1]$ \STATE \ \ \ \textbf{if} $\operatorname{RetrieveRollingHash}(B, S, H_B, i_B, i_B+d-1) \in T_A[\log d, \lfloor i_B/k\rfloor ]$ \textbf{then} \textbf{return} \ $d$ \RETURN 0 \end{algorithmic} \label{alg:pseudoalign-onesidedpreproc} \end{algorithm} \begin{lemma}\label{lem:one-sided-pseudo-align} $\operatorname{OneSidedMaxAlign}_{k}$ is half approximately correct. \end{lemma} \begin{proof} First, consider the maximal power of two $d' \ge 1$ such that $B[i_B, i_B+d'-1]$ has a $k$-alignment in $A$. We seek to show that $\operatorname{OneSidedMaxAlign}_k(A, B, i_B) \ge d'$ with probability $1$. By definition of $k$-alignment, there exists $a\in [-k, k]$ such that $A[i_B + a, i_B+d'-1+a] = B[i_B, i_B+d'-1]$. By applying Claim~\ref{claim:hash-random}, we have with probability $1$ that \begin{align*} \operatorname{RetrieveRollingHash}&(A, S+a, H_{A,a}, i_B + a, i_B+d'-1+a)\\&= \operatorname{RetrieveRollingHash}(B, S, H_B, i_B, i_B+d'-1). \end{align*} Let $i'_B$ be the least integer in $((S+1) \cup (S-d'+1)) \cap [n]$ which is at least $i_B$. Since $S$ contains every multiple of $k$ (and $n-1$), we must have that $|i'_B - i_B| \le k$. Therefore, \begin{align*} \operatorname{RetrieveRollingHash}&(A, S+a, H_{A,a}, i_B + a, i_B+d'-1+a)\\ &= \operatorname{RetrieveRollingHash}(A, S+a, H_{A,a}, i'_B + a, i'_B+d'-1+a)\\ &\in T_A[\log d', \lfloor i'_B/k\rfloor + \{-1, 0, 1\} ]. \end{align*} Moreover, $\lfloor i'_B/k\rfloor - \lfloor i_B/k\rfloor \in \{-1, 0, 1\}$.
It follows that if $d = d'$, then $\operatorname{RetrieveRollingHash}(B, S, H_B, i_B, i_B+d'-1) \in T_A[\log d', \lfloor i_B/k\rfloor ]$. Thus, $\operatorname{OneSidedMaxAlign}_k(A, B, i_B)$ will output more than half the length of the maximal $k$-alignment. Second, we verify that $\operatorname{OneSidedMaxAlign}_k$ outputs an approximate $k$-alignment. Let $d$ be the output of $\operatorname{OneSidedMaxAlign}_k$. Either $d = 0$, in which case $B[i_B, i_B+d-1]$ is trivially in approximate $k$-alignment with $A[i_B, i_B+d-1]$, or $d \ge 1$. In the latter case, the search found that $\operatorname{RetrieveRollingHash}(B, S, H_B, i_B, i_B+d-1) \in T_A[\log d, \lfloor i_B/k\rfloor ]$. Thus, there exists $i'_B$ with $|\lfloor i'_B/k\rfloor - \lfloor i_B/k\rfloor| \le 1$ and $a \in [-k, k]$ such that \begin{align*} \operatorname{RetrieveRollingHash}&(A, S+a, H_{A,a}, i'_B + a, i'_B+d-1+a)\\&= \operatorname{RetrieveRollingHash}(B, S, H_B, i_B, i_B+d-1). \end{align*} Applying Claim~\ref{claim:hash-random} over all $\tilde{O}(k)$ potential comparisons of hashes made during the algorithm, with probability at least $1 - 1/n^3$, we must have that \[ \operatorname{ED}(A[i'_B + a, i'_B+a+d-1], B[i_B, i_B+d-1]) \le k. \] Note that $|i'_B + a - i_B| \le |i'_B - i_B| + |a| \le 3k$. Thus $B[i_B, i_B+d-1]$ has an approximate $k$-alignment, as desired. \end{proof} \begin{theorem}\label{thm:1-preprocessing} For all $A, B \in \Sigma^n$, after preprocessing $A$ in $\tilde{O}(n)$ expected time, we can distinguish between $\operatorname{ED}(A, B) \le k/(2\log n)$ and $\operatorname{ED}(A, B) > 40k^2$ in $\tilde{O}(n/k)$ time with probability at least $1 - \frac{1}{n}$ over the random bits in the preprocessing (oblivious to $B$).
\end{theorem} \begin{proof} By Lemma~\ref{lem:one-sided-pseudo-align}, $\operatorname{OneSidedMaxAlign}_k$ is half approximately correct, so by Lemma~\ref{lem:jagged-match}, $\operatorname{GreedyMatch}$ outputs the correct answer with probability at least $1 - \frac{1}{n}$. By Claim~\ref{claim:preproc-fast}, the preprocessing runs in $\tilde{O}(n)$ time. Also, $\operatorname{OneSidedProcessB}$ runs in $\tilde{O}(n/k)$ time. Further, $\operatorname{OneSidedMaxAlign}_k$ runs in $\tilde{O}(1)$ time, as performing the power-of-two search, computing the hashes, and doing the table lookups are all $\tilde{O}(1)$ operations, so the one-sided preprocessing version of $\operatorname{GreedyMatch}$ runs in $\tilde{O}(n/k+k) = \tilde{O}(n/k)$ time. \end{proof} \bibliographystyle{alpha}
\section{\bf Introduction} In the past, various authors \cite{Dmi-86,Jain-90}, including two of the present authors (Jain and Santra), have analysed theoretically the data on the pp$\rightarrow $n$\Delta ^{++}$ reaction to extract the potential for the pp$\rightarrow $N$\Delta $ transition. Among these, the calculations of Jain et al. \cite{Jain-90} were done in the DWBA and those of Dmitriev \cite{Dmi-86} were done in the PWBA. They concluded that the spin averaged data on the pp$\rightarrow $N$\Delta $ reaction can be reproduced very well by a one pion-exchange potential with the length parameter $\Lambda _\pi $ around 1-1.2 GeV/c in the DWBA and around 650 MeV/c in the PWBA. The difference in the two values of $\Lambda _\pi $ is due to distortion effects. In fact, subsequently, when Jain et al. parametrized their DWBA t-matrix \cite{Jain-92}, they found that the imaginary part of this t-matrix is very weak and the real part resembles to a great extent the one pion-exchange potential, with $\Lambda _\pi $ reduced to around 650 MeV/c. The experimental data which the above studies used were somewhat inclusive \cite{Shim,Bugg}. They were deduced from the pp$\rightarrow $np$\prime \pi^+$ reaction data which did not have the complete exclusive kinematics. The delta was identified in them by a bump in the missing mass spectrum. A kinematically complete data set, however, exists on the pp$\rightarrow $p$\prime \pi ^+$n reaction at 800 MeV beam energy from LAMPF due to Hancock et al. \cite{Han}. These are good coincidence data and thus provide an excellent opportunity to test in detail the correctness of the pp$\rightarrow $n$\Delta ^{++}$ DWBA t-matrix developed by two of us earlier \cite{Jain-92}. In the present paper we analyse the LAMPF data using this t-matrix. This includes the analysis of the various proton and pion energy spectra measured in coincidence and the total integrated cross section for the pp$\rightarrow $p$\prime \pi ^+$n reaction.
We assume that the pp$\rightarrow $p$\prime \pi ^+$n reaction proceeds in two steps. In the first step, one of the protons in the entrance channel gets converted to $\Delta $, and in the second step this delta decays into a pion and a nucleon. The transition matrix for the pp$\rightarrow \Delta $N step is taken to be the DWBA t-matrix mentioned above. The decay of the delta is described by the pseudovector non-relativistic Lagrangian, \begin {equation} L_{\pi N\Delta}=i\frac {f^*_\pi }{m_\pi}({\bf S.\kappa _\pi})({\bf T.\phi }), \end {equation} where $f^*_\pi $ is the coupling constant at the $\pi $N$\Delta $ vertex. ${\bf S}$ and ${\bf T }$ are the spin and isospin transition operators, respectively. This framework for the pp$\rightarrow $p$\pi ^+$n reaction includes in a certain way the final state interaction [FSI] among p$\pi ^+$n in the final state. The FSI consists of the interaction between p and $\pi ^+$ and between the p$\pi ^+$ pair and the recoiling neutron. The dominant effect of the interaction between p and $\pi ^+$ is to produce the $\Delta ^{++}$ resonance. This is explicitly included in our framework. The interaction between p$\pi ^+$ and the neutron in our framework is approximated by that between the $\Delta ^{++}$ and the neutron. A recent work by Jain and Kundu \cite{Jain-96} on the delta decay in the nuclear medium suggests that this approximation is reasonably good. The pp$\rightarrow$np$'\pi^{+}$ process has also been worked out in the literature by Engel et al. \cite{Shyam}. However, these calculations use plane waves for the continuum particles. Thus, unlike our work, this work does not include the effect of distortions in the entrance and the exit channels. The inclusion or omission of rho-exchange in the description of the pp$\rightarrow $n$\Delta ^{++}$ reaction has been the topic of much debate in the literature.
The general conclusion is that the spin averaged data on the pp$\rightarrow \Delta ^{++}$n reaction are well reproduced by a one pion-exchange potential only \cite{Wick,Dmi-86,Jain-90,Jain1}. Any attempt to include the rho-exchange worsens the agreement with the experiments and yields unsatisfactory results. In this context it is also interesting to see the work of Jain et al. \cite{Jain2} which discusses the relative importance of rho-exchange in p(n,p)n and p(p,n)$\Delta ^{++}$ reactions. They conclude that, while it is absolutely essential to include the rho-exchange in the description of the p(n,p)n reaction, the rho-exchange is not required for accounting for the p(p,n)$\Delta ^{++}$ data. This study deals with the spin averaged cross sections. A recent theoretical study on the microscopic structure of the $\rho N\Delta $ vertex by Haider et al. \cite{Haider} supports this conclusion. They find that the microscopically calculated value of the f$_{\rho N \Delta }$ coupling constant is much smaller than what is normally assumed. The measured spin averaged cross sections on nuclei in charge exchange reactions are also reproduced with only a pion exchange \cite{Dmi1}. It is, however, true that the measurements of Prout et al. \cite{Prout} with a polarized proton beam on nuclei, and earlier by Ellegaard et al. \cite{Elle}, do show a large transverse part. But, as shown by V. F. Dmitriev \cite{Dmi1} and Sams et al. \cite{Sam}, a large transverse contribution can also arise from the distortion of the continuum particles. All these discussions thus suggest that, at best, the role of rho-exchange in the charge-exchange reaction in the delta region is controversial. The spin averaged cross sections do not need it, while the spin transfer measurements show some indications for it. Since the present work deals with the spin averaged cross sections, our use of one pion-exchange is consistent with other work in this field.
In section 2 we write the formalism for the pp$\rightarrow$np$'\pi^{+}$ process. Section 3 gives calculated cross sections for the proton and pion energy spectra at 800 MeV beam energy and the total cross section from 500 MeV to 2 GeV. These results are compared with the available experimental cross sections. A good agreement is obtained. \section {\bf Formalism} The cross-section for the pp$\rightarrow$np$'\pi^{+}$ process is given by \begin{equation} d\sigma= <|t_{pp\rightarrow p'\pi^+ n}|^2> [PS], \end{equation} where the angular brackets denote the average and sum over the spins in the initial and final states, respectively. [PS] is the factor associated with the phase-space and the beam current. For the proton and pion detected in coincidence in the final state, in the lab. frame it is given by, \begin{equation} [PS]= \frac{m_p^2 m_n k_p'^2 k_\pi^3} {2 (2\pi)^5 k_p E_p'} \frac{1}{k_\pi^2 (E_i-E_p')-E_\pi \vert{\bf(k_p-k_p')\cdot k_\pi }\vert} d\Omega_p' d\Omega_\pi dk_p'. \end{equation} $t_{pp\rightarrow p'\pi^+ n}$ is the t-matrix for the $pp\rightarrow p'\pi^+ n$ process. It consists of two parts: one corresponding to the excitation of the proton in the initial state to $\Delta^{++}$ and another corresponding to its excitation to $\Delta^{+}$ [Figure 1]. That is \begin{equation} t_{pp\rightarrow p'\pi^+ n}=t^{\Delta^{++}}+t^{\Delta^{+}}. \end{equation} Furthermore, because of the antisymmetrization of the protons, each t-matrix in turn consists of two terms, one corresponding to the excitation of the beam proton and another corresponding to the excitation of the target proton. We call them ``direct'' and ``exchange'' terms, respectively.
Putting everything together, we get \begin{eqnarray} t_{pp \rightarrow NN \pi} & = & \sum _{\Delta} <N \pi| {\bf S.\kappa_\pi} {\bf T.\phi_\pi}|\Delta> \nonumber\\ &\times& G_\Delta <t_{pp\rightarrow N\Delta}>, \label{ttmat} \end{eqnarray} where N represents a proton or a neutron in the final state corresponding to the decay of $\Delta^{++} \rightarrow \pi^{+}$p and $\Delta^{+} \rightarrow \pi^{+}$n, respectively. $\Delta$ stands for a $\Delta^{++}$ or $\Delta^{+}$ excitation in the intermediate state. $\kappa_\pi$ at the $\Delta$-decay vertex is the outgoing pion momentum in the $\pi$N centre-of-mass. It is given by, \begin{eqnarray} \kappa_\pi (\mu^2,m_\pi^2)=[(\mu^2+m^2-m_\pi^2)^2/4\mu^2-m^2]^{1/2}. \end{eqnarray} This relation reflects the restrictions on the available phase space for the decay of a delta of mass $\mu$ into an on-shell pion of mass m$_\pi$ (=140 MeV) and a nucleon of mass m. Since the final outgoing pion is on-shell, the $\Delta$N$\pi$ vertex does not contain the usual form factor F$^{*}$. $G_\Delta$ in equation \ref{ttmat} is the delta propagator. Its form is taken as, \begin{eqnarray} G_\Delta= \frac{2 m_\Delta}{\mu^{2}-m_\Delta^{2}+i\Gamma_\Delta m_\Delta}, \end{eqnarray} where m$_\Delta$(=1232 MeV) and $\Gamma_\Delta$ are the resonance parameters associated with a free $\Delta$. The free width, $\Gamma_{\Delta}$, depends upon the invariant mass and is written as, \begin{eqnarray} \Gamma _\Delta=\Gamma_0 \Bigl[\frac{ k(\mu^2,m_\pi^2)} {k(m_\Delta^2,m_\pi ^2)}\Bigr]^3 \frac{k^2(m_\Delta ^2,m_\pi ^2)+\gamma ^2}{k^2(\mu ^2,m_\pi ^2)+\gamma ^2}, \label{freewidth1} \end{eqnarray} with $\Gamma _0$=120 MeV and $\gamma$=200 MeV; here $k(\mu^2,m_\pi^2)$ is the pion momentum $\kappa_\pi(\mu^2,m_\pi^2)$ defined above. $\mu$ is the invariant mass of the N$\pi^{+}$ system and is given by, \begin{eqnarray} \mu ^2=(E_{N}+E_\pi)^2-({\bf k}_{N}+{\bf k}_\pi)^2 . \label{inv} \end{eqnarray} $t_{pp\rightarrow N\Delta}$ is the DWBA t-matrix for the $pp\rightarrow N\Delta$ transition.
Following Jain and Santra \cite{Jain-90}, it is given by \begin{eqnarray} t_{pp\rightarrow N\Delta}= (\chi^{-}_{\bf k_f}, <n \Delta^{++}|v_\pi| \{pp\}>,\chi_{\bf k_i}^{+}), \label{disttmat} \end{eqnarray} where curly brackets around pp represent the antisymmetrization of the pp wave function. $v_\pi $ is the one pion-exchange potential for the $pp\rightarrow N\Delta$ transition. The $\chi$s are the distorted waves. They describe the elastic scattering of the pp and the n$\Delta$ systems. Jain and Santra \cite{Jain-90} have evaluated equation \ref{disttmat} using the eikonal approximation for the $\chi$s. With $\Lambda _{\pi}$=1 GeV/c at both the $\pi$NN and $\pi$N$\Delta$ vertices, they found that this t-matrix reproduces the available experimental data on this reaction over a large energy range very well. Jain and Santra also found that their DWBA t-matrix can be easily parametrized \cite{Jain-92}. The parametrized t-matrix is complex, but its imaginary part is very weak. The real part resembles very closely the one pion-exchange potential with its length parameter, $\Lambda _\pi $, reduced to around 600-700 MeV/c. For the present calculations, instead of repeating the full calculation of the t-matrix, we have used the parametrized form, i.e. \begin{eqnarray} t_{pp\rightarrow N\Delta } \approx \it v_\pi ^{pp\rightarrow N\Delta } (\Lambda_\pi=650 MeV/c)= -\frac {ff^*} {m_\pi^2} FF^* \frac {\bf S^+ . q \sigma . q}{m_\pi^2 +q^2-\omega^2} {\bf T^+.\tau}, \label{vpot} \end{eqnarray} where f and f$^*$ at the $\pi$NN and $\pi$N$\Delta$ vertices are 1.008 and 2.156, respectively \cite{Bug1}. ${\bf q}$ is the momentum transfer in the pion-nucleon rest frame. Since the exchanged pion is virtual, it is not straightforward to define this momentum unambiguously.
For the $\pi$N$\Delta$ vertex we use the following Galilean invariant form, \begin{eqnarray} {\bf q}= {\bf k_p -k_\Delta [=(k_N+k_\pi)]}- \frac{\omega {\bf k_\Delta}}{E_\Delta}, \label{pinn} \end{eqnarray} where $\omega$ is the energy transfer in exciting the $\Delta$. At the $\pi$NN vertex we replace \begin{eqnarray} {\bf q}^2 \rightarrow {-t}, \label{pindel} \end{eqnarray} where t is the four-momentum transfer squared. \section{Results and Discussion} Using the above formalism we calculate the exclusive proton momentum spectra, the outgoing pion momentum spectra and the integrated total p(p,p$'\pi^{+}$)n cross-section. As the detailed measurements for the p(p,p$'\pi^{+}$)n process exist at 800 MeV beam energy, we first calculate the differential cross-sections at this energy. In figure 2, we plot the calculated as well as the measured \cite{Han} exclusive proton momentum spectra for the proton and pion angles of 14.5$^0$ and -21$^0$, respectively. These angles correspond to the delta going at 0$^0$. The figure contains four calculated curves. The short-dashed and dot-dashed curves correspond to the $\Delta ^{++}$ and $\Delta ^+$ contributions (including both the ``direct'' and ``exchange'' diagrams), respectively. The solid curve is the coherent sum of these two contributions. We find that this curve agrees well with the measured cross sections. We also note that the main contribution to the solid curve comes from the $\Delta ^{++}$ diagram. The $\Delta ^+$ contributes only to the extent of 5-10$\%$. To show the contribution of the ``exchange'' diagram, in fig. 2 we also show (by the long-dashed curve) the cross section for the $\Delta ^{++}$ diagram using only the ``direct'' term. Comparing this with the short-dashed curve, which includes both the direct and exchange diagrams, we find that the contribution of the exchange term is around 15-20$\%$. In figure 3, we show the proton spectrum for another set of proton and pion angles.
This pair of angles also corresponds to the delta going at 0$^0$. The outgoing proton and pion angles are 14.5$^0$ and -42$^0$, respectively. All the curves have the same meaning as those in figure 2. Here too the calculated proton spectrum is in good accord with the measured spectrum. The other observations also remain the same as in fig. 2. In figure 4 we show the double differential cross-section as a function of the outgoing pion momentum. The proton angles are integrated. Experimentally such measurements exist for 800 MeV beam energy and the pion detected at 20$^0$ \cite{Bev}. In this figure we have three curves along with the experimental data. The dashed and dash-dot curves correspond to the $\Delta^{++}$ and $\Delta^{+}$ diagrams, respectively. The solid curve is calculated including both the diagrams. All the curves include the direct as well as exchange diagrams. Excluding the peak in the measured cross sections around 550 MeV, the solid curve is in overall accord with the data. The relative contributions of the $\Delta ^+$ and $\Delta ^{++}$ to the cross sections are at the same level as in the earlier curves. The peak around 550 MeV, as kinematic considerations suggest, may arise from the resonance structure between the neutron and proton in the final state. Finally, in figure 5 we present the calculated total integrated cross section as a function of the beam energy from threshold to 2 GeV. Since, as seen from the results in figures 2 - 4, the contribution of the $\Delta^{+}$ is only at the level of 10$\%$, we give the calculated results for the $\Delta^{++}$ only. The calculated results include both the direct and the exchange contributions. We find an excellent agreement between the calculated and measured cross-sections \cite{Lock}.
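The resonance ingredients entering the calculations above, the decay momentum of equation (6), the propagator of equation (7) and the mass-dependent width of equation (8), can be evaluated with a short numerical sketch (Python; the nucleon mass value 939 MeV is our assumption, the other constants are taken from the text).

```python
import math

# All masses and widths in MeV.
M_N, M_PI, M_DELTA = 939.0, 140.0, 1232.0   # M_N = 939 is our assumption
GAMMA0, GAMMA_PAR = 120.0, 200.0            # Gamma_0 and gamma from the text

def k_cm(mu2, mpi2):
    """Decay momentum kappa_pi of Eq. (6) for invariant mass squared mu2."""
    val = (mu2 + M_N**2 - mpi2)**2 / (4.0 * mu2) - M_N**2
    return math.sqrt(max(val, 0.0))

def width(mu):
    """Mass-dependent free width Gamma_Delta of Eq. (8)."""
    k, k0 = k_cm(mu**2, M_PI**2), k_cm(M_DELTA**2, M_PI**2)
    return GAMMA0 * (k / k0)**3 * (k0**2 + GAMMA_PAR**2) / (k**2 + GAMMA_PAR**2)

def prop_sq(mu):
    """|G_Delta|^2 of Eq. (7), the Breit-Wigner weight of the N pi pair."""
    g = width(mu)
    return (2.0 * M_DELTA)**2 / ((mu**2 - M_DELTA**2)**2 + (g * M_DELTA)**2)
```

By construction the width reduces to $\Gamma_0$ at $\mu = m_\Delta$, and $|G_\Delta|^2$ peaks at the resonance mass, which is the delta bump seen in the proton and pion spectra.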
\newpage \section {Conclusions} In conclusion, the findings of this paper can be summarized as: \begin{enumerate} \item Experimentally measured exclusive proton momentum spectra, the pion momentum spectrum and the total integrated cross sections over a large energy range can be reproduced well with a one-pion exchange potential for the delta excitation in the intermediate state; \item the contribution of the $\Delta^{++}$ dominates. The $\Delta^{+}$ contributes only to the extent of 5-10$\%$, and \item the effect of the exchange process is to bring down the cross-section. Its contribution, however, is only at the level of 10-20$\%$. \end{enumerate} \newpage \section *{References} \begin{enumerate} \bibitem{Han} A. D. Hancock et al., Phys. Rev. C{\bf 27}, 2742 (1983). \bibitem {Shim} F. Shimizu et al., Nucl. Phys. {\bf A386}, 571 (1982); Nucl. Phys. {\bf A389}, 445 (1982). \bibitem{Dmi-86} V. Dmitriev, O. Sushkov and C. Gaarde, Nucl. Phys. {\bf A459}, 503 (1986). \bibitem{Jain-90} B. K. Jain and A. B. Santra, Nucl. Phys. {\bf A519}, 697 (1990). \bibitem{Bugg} D. V. Bugg et al., Phys. Rev. {\bf B133}, 1017 (1964); S. Coletti et al., Nuov. Cim. {\bf 49}, 479 (1967); A. M. Eisner et al., Phys. Rev. {\bf B138}, 670 (1965); G. Alexander et al., Phys. Rev. {\bf 154}, 1284 (1967); T. C. Bacon et al., Phys. Rev. {\bf 162}, 1320 (1967). \bibitem {Jain-92} B. K. Jain and A. B. Santra, Int. Jour. of Mod. Phys. {\bf E1}, 201 (1992). \bibitem {Jain-96} B. K. Jain and Bijoy Kundu, Phys. Rev. C{\bf 53}, 1917 (1996); Bijoy Kundu and B. K. Jain, Phys. Lett. B{\bf 422}, 19 (1998). \bibitem{Shyam} A. Engel et al., Nucl. Phys. {\bf A603}, 387 (1996). \bibitem{Wick} A. B. Wicklund et al., Phys. Rev. D{\bf 34}, 19 (1986); {\it ibid }{\bf 35}, 2670 (1987). \bibitem{Jain1} B. K. Jain and A. B. Santra, Phys. Lett. B{\bf 244}, 5 (1990). \bibitem{Jain2} B. K. Jain and A. B. Santra, Phys. Rev. C{\bf 46}, 1183 (1992). \bibitem{Haider} Q. Haider and L. C. Liu, Phys. Lett. B{\bf 335}, 253 (1994).
\bibitem{Dmi1} V. F. Dmitriev, Nucl. Phys. {\bf A577}, 249c (1994). \bibitem{Prout} D. Prout et al., Nucl. Phys. {\bf A577}, 233c (1994). \bibitem{Elle} C. Ellegaard et al., Phys. Lett. B{\bf 231}, 365 (1989). \bibitem{Sam} T. Sams and V. F. Dmitriev, Phys. Rev. C{\bf 45}, R2555 (1992). \bibitem{Bug1} D. V. Bugg, A. A. Carter and J. R. Carter, Phys. Lett. B {\bf 44}, 278 (1973); O. Dumbrajs et al., Nucl. Phys. {\bf B216}, 277 (1983); E. Oset, H. Toki and W. Weise, Phys. Rep. {\bf 83}, 281 (1982); V. Flaminio, W. G. Moorhead, D. R. O. Morrison and N. Rivoire, CERN Report CERN-HERA {\bf 83-01}, 1983. \bibitem {Bev} P. R. Bevington, Nucleon-Nucleon interactions, Vancouver (AIP, New York), p. 305 (1977). \bibitem {Lock} W. O. Lock and D. F. Measday, Intermediate Energy Nuclear Physics, p. 213 (1970). \end{enumerate} \newpage {\bf Figure Captions} \begin{enumerate} \item The direct and exchange diagrams for the $\Delta $ excitation. \item The outgoing proton momentum spectrum in coincidence with the pion. T$_p$=800 MeV. $\theta_p'$=14.5$^0$ and $\theta_\pi$=-21$^0$. The experimental points are from \cite{Han}. The long-dashed curve is calculated using the direct $\Delta^{++}$ diagram and the short-dashed curve includes both the direct and the exchange $\Delta^{++}$ diagrams. The solid curve is calculated using both the $\Delta^{++}$ and $\Delta^{+}$ diagrams added coherently. The dash-dot curve is the $\Delta^{+}$ contribution multiplied by 5. $\Lambda_\pi$=650 MeV/c. \item Same as figure 2 with $\theta_p'$=14.5$^0$ and $\theta_\pi$=-42$^0$. Experimental points are from \cite{Han}. All the curves have the same meaning as in figure 2. $\Lambda_\pi$=650 MeV/c. \item The outgoing pion momentum spectra for the p(p,p$'\pi^{+}$)n reaction at T$_p$=800 MeV. $\theta_\pi$=20$^0$. The experimental points are from \cite{Bev}. The solid curve is calculated using both the $\Delta^{++}$ and $\Delta^{+}$ diagrams added coherently.
The short-dashed and dot-dashed curves show separately the contributions due to $\Delta^{++}$ and $\Delta^{+}$, respectively. $\Lambda_\pi$=650 MeV/c. \item Total cross-section for the p(p,p$'\pi^{+}$)n reaction. The calculated curve includes both the direct and exchange $\Delta^{++}$ excitation diagrams. $\Lambda_\pi$=650 MeV/c. The experimental points are from \cite{Lock}. \end{enumerate} \end{document}
\section{Numerical method -- OPEM ``total variance''} Let $\{E_{i},F_{i},\ i=1,\ldots,M \}$ be arbitrary pairs of monitoring data $E=E_{i}$ and $F=F_{i}$, introduced in section 2. They are given with experimental errors in both variables, $\sigma(F_{i})$ and $\sigma(E_{i})$. Consider the total uncertainty (total variance) $S^{2}(E,F)$ \cite{9,10,11} associated with $(E,F)$, \begin{equation} S_{i}^{2}=\sigma^{2}(F_{i})+({\partial F_{i}\over \partial E_{i}})^{2}\sigma^{2}(E_{i}), \end{equation} following the ideas of Bevington (1977) \cite{9}, whose proposal is to combine the errors in both variables and assign them to the dependent variable. One defines the errors corridor $C(E,F)$, which is the set of all intervals \begin{equation} [F(E)-S(E,F),F(E)+S(E,F)]. \end{equation} \subsection {orthonormal expansion criteria} The first criterion to be satisfied is that the fitting curve should pass within the errors corridor $C(E,F)$. In the case of errors only in $F$ (i.e. $\sigma(E)=0$, $\sigma(F)\neq 0$), the errors corridor $C(E,F)$ reduces to the known set of intervals \begin{equation} [F-\sigma(F), F+\sigma(F)], \end{equation} for any $F$. The second criterion is that the quantity \begin{equation} \chi^{2}=\sum_{i=1}^{M}w_{i}[F^{appr}(E_{i})-F(E_{i})]^{2}/(M-L), \qquad w_{i}=1/S_{i}^{2}, \end{equation} should be minimal ($L$ is the number of polynomials). The preference is given to the first criterion. When it is satisfied, the search for the minimal chi-squared stops. Some details of the calculation procedure are given in Forsythe's paper \cite{12} and in our works \cite{13,14,15}.\\ Our procedure gives results for the approximating function by two expansions: in orthogonal coefficients $\{a_{i}\}$ and in usual ones $\{c_{i}\}$, with optimal degree $L$: \begin{equation} F^{appr(m)}(E)=\sum _{i=0}^{L}a_{i}P_{i}^{(m)}(E)=\sum _{i=0}^{L}c_{i}E^{i}.
\label{3} \end{equation} The orthogonal coefficients are evaluated from the given values $F_{i}$, the weights and the orthogonal polynomials: \begin{equation} a_{i}=\sum _{k=1}^{M}F_{k}w_{k}P_{i}^{(m)}(E_{k}). \end{equation} Our recurrence relation for generating the orthonormal polynomials and their derivatives ($m=1,2,\ldots$) (or their integrals, with $m=-1,-2,-3,\ldots$) is \begin{equation} P_{i+1}^{(m)}(E)= \gamma _{i+1}[(E- \mu_{i+1})P_{i}^{(m)}(E)- (1-\delta _{i0}) \nu _{i}P_{i-1}^{(m)}(E)+m P_{i}^{(m-1)}(E)], \end{equation} where $\mu_{i}$ and $\nu_{i}$ are recurrence coefficients, and $\gamma_{i}$ is a normalizing coefficient, defined by scalar products of the given data. One can generate $P_{i}^{(m)}(E)$ recursively. The polynomials satisfy the following orthogonality relations: $\sum_{i=1}^{M}w_{i}P_{k}^{(0)}(E_{i})P_{l}^{(0)} (E_{i})=\delta _{k,l}$ over the discrete point set $\{E_{i}, i=1,2,\ldots\}$. All the calculations, for the sake of uniformity, are carried out for $E$ in $[-1,1]$, i.e. after the input interval is transformed to this interval. We note some advantages of OPEM: it uses unchanged the coefficients of the lower-order polynomials; it avoids the inversion of the coefficient matrix to obtain the solution; and the search stops at the minimal chi-squared. All these features shorten the computing time and assure the optimal solution by the criteria (2) and (4). \subsection{ the usual expansion criteria} The inherited errors in the usual coefficients are given by the inherited errors in the orthogonal coefficients: \begin{equation} \Delta{c_{i}}=(\sum^{L}_{k=1}(c_{i}^{(k)})^{2})^{1/2}\Delta{a_{i}}, \end{equation} \begin{equation} \Delta{a_{i}}= [\sum_{k=1}^{M}P_{i}^{2}(E_{k})w_{k}(F_{k}-F_{k}^{appr})^{2}]^{1/2}, \end{equation} where the coefficients $c^{(k)}_{i}$ are defined by the expansion of the orthonormal polynomials \begin{equation} P_{k}=\sum^{k}_{i=0} c^{(k)}_{i}E^{i}, \qquad k=0,\ldots,L, \end{equation} and are constructed explicitly by the recurrence relation in \cite{13}.
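The total-variance weighting of equations (1) and (4) can be sketched as an iterative weighted fit. This is a minimal Python illustration only: NumPy's ordinary polynomial basis stands in for the orthonormal recurrence of OPEM, and the data in the test are synthetic.

```python
import numpy as np

def total_variance_fit(E, F, sigE, sigF, deg, n_iter=5):
    """Iterative effective-variance polynomial fit in the spirit of
    Eqs. (1) and (4): the slope dF/dE from the current fit sets the
    total variance S_i^2, which in turn sets the weights of the next fit."""
    coef = np.polyfit(E, F, deg)                  # start: unweighted fit
    for _ in range(n_iter):
        dF = np.polyval(np.polyder(coef), E)      # slope dF/dE at each point
        S2 = sigF**2 + (dF * sigE)**2             # total variance, Eq. (1)
        coef = np.polyfit(E, F, deg, w=1.0 / np.sqrt(S2))
    # Reduced chi-squared as in Eq. (4), with L = deg + 1 polynomials.
    chi2 = np.sum((np.polyval(coef, E) - F)**2 / S2) / (len(E) - deg - 1)
    return coef, chi2
```

On exact polynomial data the iteration converges immediately; with real monitoring data the weights change between iterations as the slope estimate improves, which is the iterative behaviour described in the next paragraph.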
The procedure is iterative because the derivatives are re-evaluated at every iteration step, and the result of the $k^{it}$-th consecutive iteration is called below the $k^{it}$-th approximation. We note that at every iteration step the algorithm finds the best approximation according to the criteria given above. We can add further criteria for the optimal number of polynomials of the usual expansion. Having $L_{a}$, we continue by finding the optimal $L_{c}$ as the \textbf{minimal value} of \begin{equation} \max_{i=1,\ldots,L} c_{i}(L) \end{equation} over the usual coefficients through all iteration steps $k^{it}=1,2,\ldots,9$, or we require the \textbf{minimal value} of the maximal distance between the functions evaluated by the orthonormal and usual expansions, \begin{equation} \max_{k=1,\ldots,M}\vert F^{appr}_{a,k}-F^{appr}_{c,k}\vert, \end{equation} through all iterations. We investigate both criteria, but we prefer the last one. \section{Approximation results} The most important approximation results for degrees $2\div 10$ and iterations $1\div 9$ are presented in Table 1 for the following characteristics: number of iterations, number of polynomials, $\chi^{2}$, and $\max\vert F_{a}-F_{c}\vert$. We see from Table 1 that at iterations $2 \div 5$ with optimal number $L_{a}=6$ the results are good for both expansions, and that for the usual expansion the 8-th iteration with optimal number $L_{c}=5$ is also good. Note: it is instructive to present in the figures the three curves: the given data (B), the approximation by orthogonal polynomials (C), and the one recovered from it by usual polynomials (D) at different iteration steps.
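Criterion (12) can be illustrated with a short NumPy sketch (again our illustration for $m=0$, not the original package): carrying the recurrence directly in the monomial basis yields the coefficients $c_i^{(k)}$ of equation (10), from which the usual coefficients of equation (5) and the distance $\max_k\vert F^{appr}_{a,k}-F^{appr}_{c,k}\vert$ follow:

```python
import numpy as np
from numpy.polynomial import polynomial as Pm

def stieltjes_monomial(x, w, L):
    """Run the m = 0 three-term recurrence in the monomial basis, so that
    P[k] holds the coefficients c_i^{(k)} of eq. (10) (a sketch, not the
    paper's Fortran 77 implementation)."""
    dot = lambda p, q: (w * Pm.polyval(x, p) * Pm.polyval(x, q)).sum()
    P = [np.array([1.0 / np.sqrt(w.sum())])]
    for i in range(L):
        xPi = Pm.polymulx(P[i])                               # E * P_i
        q = Pm.polysub(xPi, dot(xPi, P[i]) * P[i])            # - mu_{i+1} P_i
        if i > 0:
            q = Pm.polysub(q, dot(xPi, P[i - 1]) * P[i - 1])  # - nu_i P_{i-1}
        P.append(q / np.sqrt(dot(q, q)))                      # gamma_{i+1}
    return P

def criterion_12(x, w, F, L):
    """max_k |F_a - F_c| of eq. (12): orthonormal-expansion values versus the
    values recomputed from the usual (monomial) coefficients c_i of eq. (5)."""
    P = stieltjes_monomial(x, w, L)
    a = [(w * F * Pm.polyval(x, P[k])).sum() for k in range(L + 1)]  # eq. (6)
    F_a = sum(a[k] * Pm.polyval(x, P[k]) for k in range(L + 1))
    c = np.zeros(L + 1)                     # usual coefficients c_i of eq. (5)
    for k in range(L + 1):
        c[: len(P[k])] += a[k] * P[k]
    F_c = Pm.polyval(x, c)
    return np.max(np.abs(F_a - F_c))
```

The residual difference measures the round-off accumulated in the change of basis, which is exactly what the criterion monitors across iterations.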
\begin{table} \caption{\label{tab1}\small \bf OPEM approximation results at every iteration step} \begin{center} \begin{tabular}{llllllllll} \hline $k^{it}$ & 1 & 2& 3 & 4 & 5& 6 &7& 8& 9\\ \hline $ L (2 \div 10)$& 7&6 &6&6 &6 &5&6&5&6 \\ $\chi^{2}*10^{-1}$& 5.61 & 4.23 & 3.99 & 3.79 & 3.77 & 6.81&3.75&6.65&3.63\\ $ max\vert(F _{a}-F_{c})\vert$& 14.96&3.48&6.75&4.8&4.63&7.53&4.91&0.081&9.33\\ \hline \hline \end{tabular} \end{center} \end{table} The figures below present the approximation results at the 2-nd, 3-rd, 4-th and 8-th iterations. \begin{figure} \begin{center} \includegraphics[height=.35\textheight]{fig33.eps} \end{center} \caption{\bf OPEM approximation by 6-th degree orthonormal polynomials (C) (2-nd iteration) of the experimental water data (B)} \label{fig33.eps} \end{figure} \begin{figure} \begin{center} \includegraphics[height=.35\textheight]{fig55.eps} \end{center} \caption{\bf OPEM approximation by 6-th degree (2-nd iteration) orthonormal polynomials (C) and the recovered usual expansion (D) of the experimental water data (B)} \label{fig55.eps} \end{figure} \begin{figure} \begin{center} \includegraphics[height=.35\textheight]{fig66.eps} \end{center} \caption{\bf OPEM approximation by 6-th degree (3-rd iteration) orthonormal polynomials (C) and the recovered usual expansion (D) of the experimental water data (B)} \label{fig66.eps} \end{figure} \begin{figure} \begin{center} \includegraphics[height=.35\textheight]{fig77.eps} \end{center} \caption{\bf OPEM approximation by 6-th degree (4-th iteration) orthonormal polynomials (C) and the recovered usual expansion (D) of the experimental water data (B)} \label{fig77.eps} \end{figure} \begin{figure} \begin{center} \includegraphics[height=.35\textheight]{fig88.eps} \end{center} \caption{\bf OPEM approximation by 5-th degree (8-th iteration) orthonormal polynomials (C) and the recovered usual expansion (D) of the experimental water data (B)} \label{fig88.eps} \end{figure} Table 2 presents the given and
approximating values obtained by OPEM with the usual and orthonormal coefficients at the calculated optimal degree $5$ in the $8$-th iteration for the $M=18$ given values. The columns list the following characteristics: energy $E$, distribution $F$, $\sigma_{E}$ and $\sigma_{F}$; from the $5$-th column on, the approximating values with orthonormal coefficients $F^{appr,5}_{a}$, the approximating values with usual coefficients $F^{appr,5}_{c}$, the differences $\Delta (F_{a},F_{c})= F^{appr,5}_{a}-F^{appr,5}_{c}$, and the total variance $S$ of equation (1). Table 2 shows good agreement between the two descriptions. For comparison, previous OPEM applications can be found in \cite{13,14,15,16,17}. \begin{table} \caption{ \label{tab2}OPEM approximation of contact water energy data} \begin{tabular}{llrrrrrrr} \hline $ No.$ &$ E$ [eV] & $ F(E) $&$\sigma_{E}$& $\sigma_{F}$ &$F^{appr,5}_{a}$& $F^{appr,5}_{c}$& $ \Delta (F_{a},F_{c})$&S\\ \hline 1 &0.1395 & 2.820 & 0.025 & 0.72 &2.421 & 2.503 & 8.169E-02 & 2.2072 \\ 2&0.1392 & 3.627 & 0.025 & 1.43 & 2.721 & 2.799 & 7.796E-02 & 2.9469 \\ 3&0.1388 & 2.822 & 0.025 & 1.43 & 3.192 & 3.266 & 7.420E-02& 2.2173 \\ 4&0.1367 & 3.227 & 0.025 & 1.08 & 4.408 & 4.484 & 7.614E-02& 1.8114 \\ 5&0.1335 & 4.035 & 0.025 & 1.08 & 4.272 & 4.353 & 8.125E-02& 1.3297\\ 6&0.1309 & 4.035 & 0.025 & 1.08 & 3.467 & 3.549 & 8.161E-02 & 1.3126 \\ 7&0.1287 & 3.632 & 0.025 & 1.43 & 2.840 & 2.905 & 6.474E-02& 2.6050 \\ 8&0.1265 & 3.200 & 0.025 & 0.72 & 2.534 & 2.583 & 4.910E-02 & 0.9395 \\ 9&0.1235 &2.422 & 0.025 & 0.72 & 2.861 & 2.932 & 7.089E-02 & 0.5500 \\ 10&0.1210& 2.017& 0.025 &1.43 & 3.821 & 3.889 & 6.886E-02 & 3.4402 \\ 11&0.1188& 4.840& 0.025 & 1.08 & 5.091 & 5.137 & 4.575E-02& 5.1487\\ 12&0.1157 & 8.470 &0.025 & 1.43 & 7.259 & 7.291 & 3.272E-02 & 8.2753 \\ 13&0.1127& 10.887 &0.025 & 1.43 & 9.290 & 9.334 & 4.365E-02& 5.3774 \\ 14&0.1097& 12.095 &0.025 & 2.15 &10.647 & 10.700 & 5.320E-02 & 4.6238 \\ 15&0.1069& 9.677 &0.025 & 1.08 &10.750 & 10.793 & 4.292E-02 & 6.4789\\ 16&0.1041&
6.452& 0.025 & 1.08 & 9.243 & 9.276 & 3.293E-02 & 15.8508 \\ 17&0.1012& 5.242& 0.025 & 0.72 & 5.569 & 5.601 & 3.178E-02 & 6.0766 \\ 18&0.0975 & 4.030 & 0.025 & 1.08& -2.384 & -2.347 & 3.714E-02 & 86.5354 \\ \hline \end{tabular} \end{table} \section{Conclusions} \begin{itemize} \item We have developed a new version of the OPEM algorithm and of the Fortran 77 package which includes errors in both variables according to (2) and (4), defines the new ``total variance'' and takes into account the respective inherited errors (8) and (9) in the coefficients. \item The approximating curves are chosen at the $2$-nd, $3$-rd and $4$-th iteration steps with optimal degree $L_{a}=6$, and at the $8$-th iteration step with optimal degree $L_{c}=5$, so as to satisfy the proposed criteria (2), (4), (11) and (12). The results show that the orthonormal and usual expansion values are close to the given ones over the whole interval. \item Our approximating results for the contact (wetting) angle energy data, with optimal degrees of orthonormal polynomials and with both the orthogonal and the usual coefficients, show good \textbf{accuracy and stability}, as demonstrated by the Figures and by Tables 1 and 2. We obtained suitable descriptions of the energy variations, useful for further investigations. \item The presented extended algorithm and package OPEM ``total variance'', with its accuracy, stability and speed, can be used in other cases of data analysis, as shown in our previous papers with earlier versions, e.g. for calibration problems in high energy physics \cite{18}. \end{itemize} \noindent \begin {thebibliography}{99} \bibitem{1} A. Antonov, L. Todorova, Effect of $\gamma$-ray treatment on water spectrum, Comptes rendus Acad. bulg. Sci. \textbf{48} (1995) 21-24. \bibitem{2} L. Todorova, A. Antonov, Note on Drop Evaporation method. An Application to filtration, Comptes Rendus Acad. bulg. Sci. \textbf{53} (2000) 43-45. \bibitem{3} A.
Antonova, T. Galabova, L. Todorova, A. Tomov, Spectr energetic non-equilibre d'eau de neige prelieve de pic de Moussalaa, in Commun. Franco-Bulgare OM, \textbf{1} (1993). \bibitem{4} A. Antonov, L. Yuscesselieva, Acta Hydropyhisica, Berlin, \textbf{29} (1985) 5. \bibitem{5} D. Bonn, D. Ross, Wetting transitions, Rep. Progr. Phys. \textbf{64} (2001) 1085. \bibitem{6} N. A. Fuchs, Evaporation and droplet growth in gaseous media, Pergamon, London, 1959. \bibitem{7} R. G. Picknet, R. Bexon, Journ. of Colloid and Interface Sci. {\bf 61} (1977) 336. \bibitem{8} S. Todorov, Comptes Rend. de l'Acad. Bulgare Sci. {\bf 55} (2000) 44-49. \bibitem{9} P. R. Bevington, Data Reduction and Error Analysis for the Physical Sciences, McGraw-Hill, New York, 1969. \bibitem{10} G. Jones, Preprint TRI-PP-92-31, A 1992. \bibitem{11} J. Orear, Am. J. of Physics {\bf 50} (1982) 912; M. Lybanon, Am. J. Physics {\bf 52} (1984) 276. \bibitem{12} G. Forsythe, J. Soc. Ind. Appl. Math. {\bf 5} (1957) 74-87. \bibitem{13} N. Bogdanova, Commun. JINR Dubna, E11-98-3, 1998. \bibitem{14} N. Bogdanova, St. Todorov, IJMPC {\bf 12} (2001) 117-127. \bibitem{15} N. Bogdanova, reported at the BPU6 Conference, Istanbul, August 2006, in 2007 AIP proceedings, edited by S. A. Cetin, I. Hikmet, 978-0-735400404-5/07. \bibitem{16} N. Bogdanova, St. Todorov, reported at the BPU7 Conference, Alexandroupolis, Greece, September 2009, in 2010 AIP proceedings, edited by A. Angelopoulos, T. Fildisis, ISBN: 978-0-7354-0740-4; ISSN (print): 0094-243X; ISSN (online): 1551-7616. \bibitem{17} N. Bogdanova, St. Todorov, reported at MMCP 2009, Dubna, LIT, in Bulletin of PFUR, Series Mathem. Information Sciences. Physics. No \textbf{3(2)} (2011) 63-67. \bibitem{18} N. Bogdanova, V. Gadjokov, G. Ososkov, Mathematical problems of automated readout systems from optical track detectors in high energy physics, review in J. of Elem. Part. and Atom. Nucl. {\bf 17} (1986) 982-1020.
\end{thebibliography} \end{document}
\section{Introduction} Globally generated vector bundles on projective varieties play an important role in algebraic geometry. If they are non-trivial, they must have strictly positive first Chern class. Globally generated vector bundles on projective spaces with low first Chern class have been investigated in several papers. If $c_1(\enm{\cal{E}})=1$, then it is easy to see that, modulo trivial summands, we have only $\enm{\cal{O}}_{\enm{\mathbb{P}}^n}(1)$ and $T\enm{\mathbb{P}}^n(-1)$. The classification of rank $r$ globally generated vector bundles with $c_1=2$ is settled in \cite{SU}. In \cite{huh} the second author carried out the case of rank two with $c_1=3$ on $\mathbb P^3$, and in \cite{ce} the authors continued the study up to $c_1\leq 5$. This classification was extended to any rank in \cite{m} and to any $\mathbb P^n$ ($n\geq 3$) in \cite{am} and \cite{SU2}. The possible Chern classes of rank two globally generated vector bundles on $\mathbb P^2$ are determined in \cite{e}. Let $Q$ be a smooth quadric threefold over an algebraically closed field of characteristic zero. The aim of this paper is to investigate the existence of globally generated vector bundles of rank $2$ on $Q$ with $c_1\leq 3$. We use an old method of associating to a rank 2 vector bundle on $Q$ a curve in $Q$, and relate properties of the bundle to properties of the curve. If $\enm{\cal{E}}$ is globally generated, it admits an exact sequence $$0\to \enm{\cal{O}}_Q \to \enm{\cal{E}} \to \enm{\cal{I}}_C(c_1) \to 0,$$ where $C$ is a smooth curve of degree $c_2(\enm{\cal{E}})$ and genus $g$. We prove the following theorem: \begin{theorem}\label{mt} There exists an indecomposable and globally generated vector bundle $\enm{\cal{E}}$ of rank $2$ on $Q$ with the Chern classes $(c_1, c_2)$, $c_1\leq 3$, in the following cases: \begin{enumerate} \item $(c_1=1, c_2=1)$, $\enm{\cal{E}}$ is the spinor bundle $\Sigma$ and $C$ is a line.
\item $(c_1=2, c_2=4)$, $\enm{\cal{E}}$ is a pull-back of a null-correlation bundle on $\mathbb {P}^3$ twisted by $1$ and $C$ is the disjoint union of two conics. \item $(c_1=3, c_2=5)$ and $\enm{\cal{E}}\cong\Sigma(1)$. \item $(c_1=3, 6\leq c_2\leq 9)$, $\enm{\cal{E}}$ is the homology of a monad $$0\rightarrow \enm{\cal{O}}_Q(1)^{\oplus (c_2-5)} \rightarrow \Sigma(1)^{\oplus (c_2-4)} \rightarrow \enm{\cal{O}}_Q(2)^{\oplus (c_2-5)} \rightarrow 0,$$ and $C$ is a smooth elliptic curve of degree $c_2$. \end{enumerate} \end{theorem} A typical way to construct a vector bundle on $Q$ is by restriction of a vector bundle on $\enm{\mathbb{P}} ^4$ or by a pull-back of a vector bundle on $\enm{\mathbb{P}}^3$ along a linear projection from $Q$ to $\enm{\mathbb{P}}^3$. The spinor bundle $\Sigma$ is not obtained in either of these ways, and it plays an important role in describing the globally generated vector bundles on $Q$. In fact, from the classification we observe that every rank two globally generated vector bundle on $Q$ with $c_1=3$ is, up to twist, an odd instanton, that is, the cohomology of a monad involving $\Sigma$ (see \cite{faenzi}). In Sect.2 we set up basic computations and deal with the case $c_1=1$ as a preliminary case. In Sect.3 we prove that every indecomposable and globally generated vector bundle of rank 2 on $Q$ with $c_1=2$ is a pull-back of a null-correlation bundle on $\enm{\mathbb{P}}^3$ twisted by $1$. In Sect.4, 5 and 6 we deal with the case $c_1=3$. First we determine the possible second Chern classes $c_2$ using liaison theory, obtaining $5\leq c_2 \leq 9$. In each case we prove the existence of a globally generated vector bundle of rank $2$. In Sect.4 we explain the cases $c_2=5,6,7$ based on the results of \cite{OS} about the moduli spaces of rank $2$ vector bundles on $Q$. The critical part of this paper concerns the existence of globally generated vector bundles of rank $2$ with $c_2=8$ and $9$.
It is equivalent to the existence of a smooth elliptic curve $C$ of degree $c_2$ whose ideal sheaf twisted by $3$ is globally generated. The main ingredient is the result of \cite{hh}, with $\enm{\mathbb{P}}^3$ replaced by $Q$, which allows us to deform a nodal reducible curve of degree $c_2$, constructed in a suitable way, to the smooth elliptic curve that we need. In order to have a complete classification we need an answer to the following question:\\ {\bf Question:} Are the moduli spaces $\mathfrak{M}(3,8)$ and $\mathfrak{M}(3,9)$ irreducible?\\ In the last section we classify rank two globally generated bundles on higher dimensional quadrics.\\ The second author would like to thank Politecnico di Torino, and especially the third author, for the warm hospitality. \section{Preliminaries} Let $Q$ be a smooth quadric hypersurface in $\enm{\mathbb{P}}^4$. Then we have $$\op{Pic} (Q)=H^2(Q, \enm{\mathbb{Z}})=\enm{\mathbb{Z}} h,$$ where $h$ is the class of a hyperplane section. Moreover, the cohomology ring $H^*(Q, \enm{\mathbb{Z}})$ is generated by $h$, a line $l\in H^4(Q, \enm{\mathbb{Z}})$ and a point $p\in H^6(Q, \enm{\mathbb{Z}})$ with the relations $h^2=2l$, $h\cdot l=p$, $h^3=2p$. Let $\enm{\cal{E}}$ be a coherent sheaf of rank $r$ on $Q$. Then we have \cite{OS}: \begin{align*} c_1(\enm{\cal{E}}(k))&=c_1+kr\\ c_2(\enm{\cal{E}}(k))&=c_2+2k(r-1)c_1+2k^2{r\choose 2}\\ c_3(\enm{\cal{E}}(k))&=c_3+k(r-2)c_2+2k^2{r-1\choose 2}c_1+2k^3{r\choose 3} \\ \chi(\enm{\cal{E}})~~&=(2c_1^3-3c_1c_2+3c_3)/6+3(c_1^2-c_2)/2+13c_1/6+r, \end{align*} where $(c_1, c_2, c_3)$ are the Chern classes of $\enm{\cal{E}}$. In particular, when $\enm{\cal{E}}$ is a vector bundle of rank 2 with $c_1=-1$, we have $$\chi(\enm{\cal{E}})=1-c_2~~,~~ \chi(\enm{\cal{E}}(1))=6-2c_2~~,~~ \chi(\enm{\cal{E}}(-1))=0,$$ $$\chi(\mathcal{E}nd(\enm{\cal{E}}))=7-6c_2.$$ Let $\mathfrak{M}(c_1, c_2)$ be the moduli space of stable vector bundles of rank 2 on $Q$ with the Chern classes $(c_1, c_2)$.
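As a consistency check (our computation), the value $\chi(\enm{\cal{E}})=1-c_2$ follows from the general formula with $r=2$, $c_1=-1$ and $c_3=0$:
\begin{align*}
\chi(\enm{\cal{E}})&=\frac{2(-1)^3-3(-1)c_2+0}{6}+\frac{3(1-c_2)}{2}+\frac{13(-1)}{6}+2\\
&=\frac{3c_2-15}{6}+\frac{3-3c_2}{2}+2=\frac{c_2-5}{2}+\frac{3-3c_2}{2}+2=1-c_2.
\end{align*}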
\begin{proposition}\cite{sierra}\label{prop1} Let $\enm{\cal{E}}$ be a globally generated vector bundle of rank $r$ on $Q$ such that $H^0(\enm{\cal{E}}(-c_1))\not= 0$, where $c_1$ is the first Chern class of $\enm{\cal{E}}$. Then we have $$\enm{\cal{E}}\simeq \enm{\cal{O}}_Q^{\oplus (r-1)}\oplus \enm{\cal{O}}_Q(c_1).$$ \end{proposition} Now assume that $\enm{\cal{E}}$ is a globally generated vector bundle of rank 2 on $Q$. Then $\enm{\cal{E}}$ admits an exact sequence \begin{equation}\label{eqa1} 0\rightarrow \enm{\cal{O}}_Q \rightarrow \enm{\cal{E}} \rightarrow \enm{\cal{I}}_C (c_1) \rightarrow 0, \end{equation} where $C$ is a smooth curve on $Q$. Notice that $\omega_C \simeq \enm{\cal{O}}_C(-3+c_1)$ and $c_2(\enm{\cal{E}})=\deg (C)$. If $l$ is a line on $Q$, then $\enm{\cal{E}}|_l$ is also globally generated, which implies in particular that $c_1(\enm{\cal{E}})\geq 0$. \begin{remark}\label{u0} $\enm{\cal{E}} \cong \enm{\cal{O}} _Q(c_1) \oplus \enm{\cal{O}} _Q$ if and only if $C =\emptyset$. \end{remark} We briefly recall Ottaviani's construction of the spinor bundle (\cite{O1}). Let $G(2,4)$ denote the Grassmannian of all $2$-dimensional linear subspaces of $\mathbb {K}^4$. Using the geometry of the variety of all $1$-dimensional linear subspaces of $Q$, it is possible to construct a morphism $s: Q \to G(2,4)$. Set $\Sigma (-1):= s^\ast U$, where $U$ is the universal quotient bundle of $G(2,4)$. The bundle $\Sigma$ is called the spinor bundle on $Q$; it is a globally generated vector bundle of rank $2$ with the Chern classes $(c_1, c_2)=(1,1)$. Moreover, it admits the following canonical exact sequence: $$0\to \Sigma^\vee \to \enm{\cal{O}}_Q^{\oplus 4} \to \Sigma \to 0.$$ For a general section $s\in H^0(\Sigma)$, we have an exact sequence $$0\to \enm{\cal{O}}_Q \stackrel{s}{\to} \Sigma \to \enm{\cal{I}}_l (1) \to 0,$$ where $l$ is a line on $Q$. This gives an identification of $\enm{\mathbb{P}} H^0(\Sigma)\cong \enm{\mathbb{P}}^3$ with a family of lines on $Q$.
\begin{proposition}\label{u1} We have $c_1(\enm{\cal{E}}) =1$ if and only if either $\enm{\cal{E}} \cong \enm{\cal{O}}_Q(1)\oplus \enm{\cal{O}} _Q$ or $\enm{\cal{E}}$ is the spinor bundle. \end{proposition} \begin{proof} Both $\enm{\cal{O}}_Q(1)\oplus \enm{\cal{O}} _Q$ and the spinor bundle are globally generated and have the prescribed Chern classes. Hence it is sufficient to consider the case $C \ne \emptyset$, by Remark \ref{u0}. Since $C$ is smooth and $\mathcal {I}_C(1)$ is globally generated, $C$ is contained in a codimension $2$ linear section of $Q$. Hence $C$ is either a smooth conic or a line. In both cases $C$ is ACM. From the sequence (\ref{eqa1}) we get that $\enm{\cal{E}}$ is an ACM vector bundle. Hence $\enm{\cal{E}}$ is either decomposable or a twist of the spinor bundle by Theorem 3.5 in \cite{O}. Since $\enm{\cal{E}}$ is globally generated and $c_1(\enm{\cal{E}})=1$, we get that either $\enm{\cal{E}} \cong \enm{\cal{O}}_Q(1)\oplus \enm{\cal{O}} _Q$ or $\enm{\cal{E}}$ is the spinor bundle. \end{proof} \section{Rank two globally generated bundles with $c_1=2$} In this section we prove the following result. \begin{proposition}\label{u2} Let $\enm{\cal{E}}$ be a rank $2$ vector bundle with $c_1(\enm{\cal{E}}) =2$. Then $\enm{\cal{E}}$ is globally generated if and only if $\enm{\cal{E}}$ is isomorphic to either \begin{enumerate} \item $\enm{\cal{O}}_Q(a)\oplus \enm{\cal{O}}_Q(2-a)$ with $a=0,1$, or \item a pull-back of a null-correlation bundle on $\mathbb {P}^3$ twisted by $1$. \end{enumerate} \end{proposition} \begin{lemma}\label{u3} Let $C = E_1\sqcup E_2\sqcup J \subset Q$ be a curve with $E_1$ and $E_2$ smooth conics and $J\ne \emptyset$. Then $\enm{\cal{I}}_C(2)$ is not globally generated. \end{lemma} \begin{proof} Assume that $\enm{\cal{I}}_C(2)$ is globally generated. Let $H \subset \enm{\mathbb{P}}^4$ denote a general hyperplane containing $E_1$. Set $Q':= H\cap Q$.
Then $Q'$ is a smooth quadric surface, the scheme $Z:= H\cap (E_2\cup J)$ is a zero-dimensional scheme of degree at least 3, and $Z\cap E_1=\emptyset$. Since $\enm{\cal{I}}_C(2)$ is globally generated, so is $\enm{\cal{I}}_{E_1\cup Z,Q'}(2,2)$; since $E_1$ is a curve of type $(1,1)$ on $Q'$ and $Z\cap E_1=\emptyset$, the sheaf $\enm{\cal{I}}_{Z,Q'}(1,1)$ is globally generated. But this is absurd, since $\deg (Z)\ge 3$. \end{proof} \qquad {\emph {Proof of Proposition \ref{u2}.}} Let us assume that the vector bundle $\enm{\cal{E}}$ on $Q$ is globally generated with $c_1(\enm{\cal{E}})=2$; then it fits into the exact sequence (\ref{eqa1}) with $c_1=2$ and $C$ a smooth curve. Since $\omega_C\simeq \enm{\cal{O}}_C(-1)$, $C$ is a disjoint union of conics, i.e. $C=C_1\sqcup \cdots \sqcup C_r$, where each $C_i$ is a conic. By Remark \ref{u0} we have $\enm{\cal{E}} \cong \enm{\cal{O}} _Q(2)\oplus \enm{\cal{O}} _Q$ if and only if $r=0$. Now assume $r>0$. Lemma \ref{u3} gives $r\in \{1,2\}$. As the first case let us assume that $r=1$. Since a smooth conic is ACM, the sequence (\ref{eqa1}) gives that $\enm{\cal{E}}$ is ACM. Hence we have $\enm{\cal{E}} \cong \enm{\cal{O}}_Q(1)\oplus \enm{\cal{O}}_Q(1)$ by Theorem 3.5 in \cite{O}. Now let us assume that $r=2$. Let $M_i\subset \mathbb {P}^4$ be the plane spanned by $C_i$. Since $M_i\cap Q = C_i$ and $C_1\cap C_2 =\emptyset$, the set $M_1\cap M_2$ cannot be a line. Hence $M_1\cap M_2$ is a point, say $P$. Since $M_i\cap Q = C_i$ and $C_1\cap C_2 =\emptyset$, we have $P\notin Q$. The linear projection $\ell _P: \mathbb {P}^4\setminus \{P\} \to \mathbb {P}^3$ sends each $C_i$ onto a line $L_i$. In Prop. 3.2 of \cite{SW} it was shown that every bundle arising as an extension (\ref{eqa1}) is the pull-back of a bundle arising from an extension \begin{equation}\label{a2} 0 \to \mathcal {O}_{\mathbb {P}^3} \to \enm{\cal{F}} \to \mathcal {I}_{L_1\sqcup L_2}(2)\to 0 \end{equation} on $\mathbb {P}^3$, in which $\enm{\cal{F}}$ is a null-correlation bundle twisted by $1$. \qed
\section{Rank two globally generated bundles with $c_1=3$} Let $\enm{\cal{E}}$ be a globally generated vector bundle of rank 2 on $Q$ with the Chern classes $(c_1, c_2)$, $c_1\geq 3$. It fits into the exact sequence (\ref{eqa1}). \begin{lemma} The Chern classes $(c_1, c_2)$ of $\enm{\cal{E}}$ satisfy the following inequality for $c_1\geq 3$: $$c_2\leq \frac{2}{3}(2c_1+1)(c_1-1).$$ In particular, if $c_1=3$, we have $c_2\leq 9$. \end{lemma} \begin{proof} Since $\enm{\cal{I}}_C(c_1)$ is globally generated, there are two hypersurfaces of degree $c_1$ in $Q$ whose intersection is a curve $X$ containing $C$. Let $Y$ be the residual curve, so that $X=C+Y$. If $\enm{\cal{E}}$ does not split, then $Y$ is not empty and we have the exact sequence of liaison: $$0\rightarrow \enm{\cal{I}}_X (c_1) \rightarrow \enm{\cal{I}}_C(c_1) \rightarrow \omega_Y(3-c_1) \rightarrow 0.$$ Since $\enm{\cal{I}}_C(c_1)$ is globally generated, so is $\omega_Y(3-c_1)$. This implies that $$\deg(\omega_Y(3-c_1))=2g'-2+d'(3-c_1)\geq0,$$ where $d'=\deg (Y)$ and $g'=g(Y)$, and so $g' \geq 0$. On the other hand, by liaison we have $$g'-g=\frac 12 (d'-d)(2c_1-3),$$ and $d'=2c_1^2-d$, $2g-2=d(c_1-3)$, since $\omega_C(3-c_1)=\enm{\cal{O}}_C$. Here $d=\deg (C)$ and $g=g(C)$. Thus we have $g'=1+\frac{d(c_1-3)}{2}+(c_1^2-d)(2c_1-3)\geq0$, and since $2c_1^3-3c_1^2+1=(c_1-1)^2(2c_1+1)$, this yields $$c_2 \leq \frac{2(2c_1^3-3c_1^2+1)}{3c_1-3}=\frac{2}{3} (2c_1+1)(c_1-1).$$ \end{proof} Assume that $\enm{\cal{E}}$ is a globally generated vector bundle of rank 2 on $Q$ with $c_1=3$ that fits into the sequence $$0\rightarrow \enm{\cal{O}}_Q \rightarrow \enm{\cal{E}} \rightarrow \enm{\cal{I}}_C(3) \rightarrow 0,$$ where $C$ is a smooth curve with $\deg (C)=c_2(\enm{\cal{E}})$. By Proposition \ref{prop1}, if $H^0(\enm{\cal{E}}(-3))\not=0$, then $\enm{\cal{E}}$ is isomorphic to $\enm{\cal{O}}_Q\oplus \enm{\cal{O}}_Q(3)$, which is globally generated. So let us assume that $H^0(\enm{\cal{E}}(-3))=0$. As the first case, let us assume that $H^0(\enm{\cal{E}}(-2))\not= 0$, i.e.
$\enm{\cal{E}}$ is unstable. \begin{proposition} Let $\enm{\cal{E}}$ be a globally generated unstable vector bundle of rank 2 on $Q$ with $c_1(\enm{\cal{E}})=3$. Then $\enm{\cal{E}}$ is a direct sum of line bundles. In other words, we have $$\enm{\cal{E}} \simeq \enm{\cal{O}}_Q(a)\oplus \enm{\cal{O}}_Q(3-a)~~~~,~~~~a=0,1.$$ \end{proposition} \begin{proof} Note that $h^0(\enm{\cal{I}}_C(1))>0$, so $C$ is contained in the complete intersection of $Q$ with two hypersurfaces of degree $1$ and $3$ inside the ambient space $\enm{\mathbb{P}}^4$. In particular, we have $\deg (C)\leq 6$. On the other hand, from a section in $H^0(\enm{\cal{E}}(-2))$ we get a sequence $$0\rightarrow \enm{\cal{O}}_Q (2) \rightarrow \enm{\cal{E}} \rightarrow \enm{\cal{I}}_{C'}(1) \rightarrow 0.$$ If $C'$ is empty, then $\enm{\cal{E}}$ is isomorphic to $\enm{\cal{O}}_Q(1)\oplus \enm{\cal{O}}_Q(2)$. If not, the degree of $C'$, which is $c_2(\enm{\cal{E}})-4$, is at least $1$. Thus $\deg (C)=c_2(\enm{\cal{E}})$ is either 5 or 6. Note that $\omega_C\simeq \enm{\cal{O}}_C$, so $C$ is a disjoint union of smooth elliptic curves. If $\deg (C)=5$, then $C$ is a quintic elliptic curve contained in a hyperplane section of $Q$, which is a quadric surface $Q_2$. In the case when $Q_2$ is smooth, let $(a,b)$ be the bidegree of $C$ as a divisor on the quadric surface; then $\deg (C)=5=a+b$ and $g(C)=1=ab-a-b+1$, i.e. $a+b=5$ and $ab=5$, which has no solution in integers. Similarly we can show that $\deg(C)= 6$ is not possible. If $Q_2$ is a quadric cone, we can show that this case is impossible by Exercise V.2.9 of \cite{Hartshorne}. \end{proof} Assume now that $H^0(\enm{\cal{E}}(-2))=0$, i.e. $\enm{\cal{E}}$ is stable. By the Bogomolov inequality, we have $c_2(\enm{\cal{E}})\geq 5$. Recall that $c_2(\enm{\cal{E}})\leq 9$. \begin{proposition}\cite{OS} Every vector bundle in $\mathfrak{M}(3, c_2)$ with $c_2=5,6$ is globally generated. In the case of $c_2=7$, the general vector bundle is globally generated.
\end{proposition} \begin{proof} Note that $\enm{\cal{E}}$ fits into the sequence \begin{equation}\label{seq2} 0\rightarrow \enm{\cal{O}}_Q(1) \rightarrow \enm{\cal{E}} \rightarrow \enm{\cal{I}}_Z(2) \rightarrow 0, \end{equation} with $Z$ a locally complete intersection, $\deg (Z) =c_2-4$ and $\omega_Z\simeq \enm{\cal{O}}_Z(-2)$. If $c_2(\enm{\cal{E}})=5$, then $Z$ is a line. From the sequence (\ref{seq2}) we get that $\enm{\cal{E}}$ is ACM. Since $\enm{\cal{E}}$ is stable, we get $\enm{\cal{E}} \simeq \Sigma(1)$ \cite{O}. Obviously $\Sigma (1)$ is globally generated. If $c_2(\enm{\cal{E}})=6$, then $\enm{\cal{E}}$ is the cohomology of the following monad (Remark 4.8 in \cite{OS}): $$0\rightarrow \enm{\cal{O}}_Q(1) \rightarrow \Sigma(1)^{\oplus 2} \rightarrow \enm{\cal{O}}_Q(2) \rightarrow 0.$$ In this case, $Z$ is either two disjoint lines on $Q$ or a line with multiplicity 2. It is also known to be globally generated. If $c_2(\enm{\cal{E}})=7$, then a general vector bundle $\enm{\cal{E}}$ in $\mathfrak{M}(3,7)$ can be shown to be globally generated using the `Castelnuovo-Mumford criterion' (Theorem 5.2 in \cite{OS}). The same result also follows from Lemma \ref{b1} with $C=A$. Since $h^0(\enm{\cal{I}}_C(2)) =0$ and $h^0(C,\enm{\cal{O}}_C(2)) = 14 = h^0(\enm{\cal{O}} _Q(2))$, (\ref{eqa1}) gives $h^1(\enm{\cal{E}}(-1)) =0$. Since $C$ is an elliptic curve, we have $h^1(C,\enm{\cal{O}} _C(1)) =0$. Hence (\ref{eqa1}) gives $h^2(\enm{\cal{E}}(-2)) = h^1(C,\enm{\cal{O}} _C(1)) =0$. We have $h^3(\enm{\cal{E}}(-3)) = h^0(\enm{\cal{E}}^\vee ) = h^0(\enm{\cal{E}}(-3)) =0$. Hence the `Castelnuovo-Mumford criterion' gives that $\enm{\cal{E}}$ is globally generated. \end{proof} \section{Case $(c_1, c_2)=(3,8)$} Let $\enm{\cal{F}}=\enm{\cal{E}}(-2)$.
We can compute, using $c_1(\enm{\cal{F}})=-1$, $c_2(\enm{\cal{F}})=4$ and the formulas of Section 2: $$ \left\{ \begin{array}{ll} H^1(\enm{\cal{F}}(t))=0 \text{ for } t\leq -1\\ \chi (\enm{\cal{F}} \otimes \enm{\cal{F}}^\vee)=-17 \\ \chi (\enm{\cal{F}})=-\chi (\enm{\cal{F}}(-2))=-3\\ \chi (\enm{\cal{F}}(1))=-\chi (\enm{\cal{F}}(-3))=-2\\ \chi (\enm{\cal{F}}(-1))=0\\ \chi (\enm{\cal{F}}(2))=7 \end{array} \right. $$ Then the cohomology table for $\enm{\cal{F}}=\enm{\cal{E}}(-2)$ is as follows:\\ \begin{equation}\label{t1} \begin{tabular}{|c|c|c|c|c|c|} \hline 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 3 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 3 & 2 & $b$ \\ 0 & 0 & 0 & 0 & 0 & $a$ \\ \hline \hline -3& -2& -1 & 0 & 1 & 2 \\ \hline \end{tabular}\end{equation} with $a-b=7$. \begin{proposition}\label{monad} Every vector bundle $\enm{\cal{E}}$ in $\mathfrak{M}(3,8)$ with $H^0(\enm{\cal{E}} (-1))=0$ is the cohomology of the following monad: $$ 0\to \enm{\cal{O}}_Q(1)^{\oplus 3}\to \Sigma(1)^{\oplus 4}\to \enm{\cal{O}}_Q(2)^{\oplus 3} \to 0.$$ \end{proposition} \begin{proof} Let us consider the sequence killing $H^1(\enm{\cal{F}})$: \begin{equation} 0\to \enm{\cal{F}}\to \enm{\cal{B}}\to \enm{\cal{O}}_Q^{\oplus 3}\to 0. \end{equation} Then $H^2_*(\enm{\cal{B}})\cong H^2_*(\enm{\cal{F}})$, and $H^1(\enm{\cal{B}})=H^0(\enm{\cal{B}})=0$ since the connecting map $H^0(\enm{\cal{O}}_Q^{\oplus 3})\to H^1(\enm{\cal{F}}) $ is an isomorphism and $H^0(\enm{\cal{F}})=0$. Moreover $h^3(\enm{\cal{B}}(-3))=h^3(\enm{\cal{O}}_Q(-3)^{\oplus 3})=3$.
So the cohomology tables for $\enm{\cal{B}}$ and $\enm{\cal{B}}^{\vee}$ are as follows, respectively:\\ \begin{center} \begin{tabular}{|c|c|c|c|} \hline 3 & 0 & 0 & 0 \\ 2 & 3 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \hline \hline -3& -2& -1 & 0 \\ \hline \end{tabular}\quad\quad \begin{tabular}{|c|c|c|c|} \hline 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 3 & 2 \\ 0 & 0 & 0 & 3 \\ \hline \hline -3& -2& -1 & 0 \\ \hline \end{tabular} \end{center} Now let us consider the sequence killing $H^1(\enm{\cal{B}}^\vee)$ and $H^1(\enm{\cal{B}}^\vee(-1))$: \begin{equation}\label{k2} 0\to \enm{\cal{B}}^\vee\to \enm{\cal{P}} \to \enm{\cal{O}}_Q(1)^{\oplus 3} \oplus \enm{\cal{O}}_Q^{\oplus 2} \to 0. \end{equation} Then $H^2_*(\enm{\cal{P}})\cong H^2_*(\enm{\cal{B}}^\vee)$ and $H^1(\enm{\cal{P}})=H^1(\enm{\cal{P}}(-1))=0$. Since $h^0( \enm{\cal{B}}^\vee)-h^0 (\enm{\cal{P}})+ h^0(\enm{\cal{O}}_Q(1)^{\oplus 3} \oplus \enm{\cal{O}}_Q^{\oplus 2})-h^1(\enm{\cal{B}}^\vee)=0$, we get $h^0(\enm{\cal{P}})=18$. Moreover $h^3(\enm{\cal{P}}(-3))=h^3(\enm{\cal{O}}_Q(-3)^{\oplus 2})=2$. So the cohomology tables for $\enm{\cal{P}}$ and $\enm{\cal{P}}^{\vee}$ are as follows:\\ \begin{center} \begin{tabular}{|c|c|c|c|} \hline 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 18 \\ \hline \hline -3& -2& -1 & 0 \\ \hline \end{tabular}\quad\quad \begin{tabular}{|c|c|c|c|} \hline 18 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2 \\ \hline \hline -3& -2& -1 & 0 \\ \hline \end{tabular} \end{center} Since $\enm{\cal{P}}(1)$ and $\enm{\cal{P}}^\vee(1) $ are Castelnuovo-Mumford regular, $\enm{\cal{P}}$ is an ACM vector bundle. Note that the rank of $\enm{\cal{P}} $ is $10$, $c_1(\enm{\cal{P}})=4$, $h^0(\enm{\cal{P}})=18$ and $h^0(\enm{\cal{P}}^\vee)=2$. So we have $\enm{\cal{P}}=\enm{\cal{O}}_Q^{\oplus 2}\oplus \Sigma^{\oplus 4}$; the sequence (\ref {k2}) reduces to $$0\to \enm{\cal{B}}^\vee\to \Sigma^{\oplus 4} \to \enm{\cal{O}}_Q(1)^{\oplus 3} \to 0,$$ and the monad is as claimed.
\end{proof} \begin{remark} In \cite{faenzi} it is shown that every vector bundle $\enm{\cal{F}}$ of rank 2 on $Q$ with the Chern classes $(c_1, c_2)=(-1,k)$ and $H^1(\enm{\cal{F}}(-1))=0$ is the cohomology of a monad $$0\rightarrow \enm{\cal{O}}_Q^{\oplus k-1} \rightarrow \Sigma^{\oplus k} \rightarrow \enm{\cal{O}}_Q(1)^{\oplus k-1} \rightarrow 0,$$ using the Kapranov spectral sequences on $Q$. \end{remark} Note that $H^1(\enm{\cal{E}}(-3))=H^1(\enm{\cal{I}}_C)=0$ implies that $H^0(\enm{\cal{O}}_C)=1$, i.e. $C$ is a smooth irreducible elliptic curve of degree 8. \begin{proposition}\label{a00} There is a globally generated and stable vector bundle $\enm{\cal{E}}$ of rank 2 on $Q$ with $(c_1, c_2)=(3,8)$ such that \begin{itemize} \item $h^0(\enm{\cal{E}}(-1))=0$, $h^0(\enm{\cal{E}})=7$, \item $h^1(\enm{\cal{E}}(-1))=2$, $h^1(\enm{\cal{E}}(-2))=3$ and $h^1(\enm{\cal{E}}(t))=0$ for all $t\ge 0$ and all $t\le -3$, \end{itemize} with a smooth, non-degenerate and irreducible elliptic curve of degree $8$ as its associated curve on $Q$. \end{proposition} Let us fix $5$ distinct points $P_1,\dots ,P_5\in \mathbb {P}^2$ such that no $3$ of them are collinear. Let $\pi : W\to \mathbb {P}^2$ be the blowing-up of these 5 points. The anticanonical line bundle $\omega _W^\vee$ of $W$ is very ample and the image $U \subset \mathbb {P}^4$ of $W$ under the complete linear system $\vert \omega _W^\vee \vert$ is a smooth Del Pezzo surface of degree $4$, which is the complete intersection of two quadric hypersurfaces (\cite{d}) (in the standard notation for Del Pezzo surfaces $\omega _W^\vee = (3;1,1,1,1,1)$, i.e. it is given by the strict transforms of all cubic plane curves containing all points $P_1,\dots ,P_5$). Conversely, any smooth Del Pezzo surface $U'\subset \mathbb {P}^4$ of degree $4$ is the complete intersection of two quadric hypersurfaces. Hence for general $P_1,\dots ,P_5$ we may assume that $U$ is contained in a smooth quadric, so that $U\subset Q$.
Every curve $C\subset U$ is a curve contained in $Q$. We have $h^0(U,\mathcal {O}_U(1)) = 5$, $h^0(U,\mathcal {O}_U(2)) =13$ and $h^0(U,\mathcal {O}_U(3)) = \chi (\mathcal {O}_U) +\mathcal {O}_U(3)\cdot \mathcal {O}_U(4)/2 =1+24=25$ by the Riemann-Roch theorem. \begin{remark}\label{a1} Let $E\subset \mathbb {P}^4$ be a smooth and non-degenerate elliptic curve such that $\deg (E) =6$. Since $h^1(\mathcal {I}_{E,\mathbb {P}^4}(1)) =1$, it is easy to check that $h^1(\mathcal {I}_{E,\mathbb {P}^4}(2)) =0$. Castelnuovo-Mumford's lemma implies that $\mathcal {I}_{E,\mathbb {P}^4}(3)$ is spanned. Hence for every smooth elliptic curve $E\subset Q$ with $\deg (E)=6$ and $E$ not contained in a hyperplane, the sheaf $\mathcal {I}_E(3)$ on $Q$ is spanned. \end{remark} Fix any such smooth elliptic curve $E\subset Q$ such that $\deg (E)=6$ and $E$ is not contained in a hyperplane of $\mathbb {P}^4$. Since $h^0(E,\mathcal {O}_E(2)) =12 = h^0(\mathcal {O}_{Q}(2))-2$, the curve $E$ is contained in some quadric hypersurface section of $Q$. Let $U\subset Q$ be the smooth Del Pezzo surface of degree $4$ just introduced. We find smooth elliptic curves of degree $6$ inside $U$ by taking the smooth elements of type $(3;1,1,1,0,0)$. Fix one such curve $E$. Since $h^0(U,\mathcal {O}_U(2))=13$ and $h^0(E,\mathcal {O}_E(2))=12$, $E$ is contained in at least one quadric hypersurface section, $T$, of $U$. See $E$ as a curve of type $(3;1,1,1,0,0)$. $T$ has type $(6;2,2,2,2,2)$. Hence $T-E$ is a curve of type $(3;1,1,1,2,2)$. No plane cubic curve with at least two singular points is integral. We see that $h^0(U,\mathcal {O}_U(T-E)) =1$ and that the unique curve in $\vert T-E\vert$ is the disjoint union of two lines $R_1$ and $R_2$, with $R_1$ the image of the strict transform in $W$ of the only conic containing the five points $P_1,\dots ,P_5$ (i.e. the only curve of type $(2;1,1,1,1,1)$), while $R_2$ is the image in $U$ of the strict transform of the line of $\mathbb {P}^2$ spanned by $P_4$ and $P_5$ (i.e. 
the only curve of type $(1;0,0,0,1,1)$). \begin{remark}\label{a1.1} Since $\mathcal {I}_E(3)$ is spanned, the line bundle $\mathcal {L}:= \mathcal {O}_U(3)(-E)$ is spanned. Let $\alpha : U \to \mathbb {P}^r$, $r:= h^0(U,\mathcal {L})-1$, denote the morphism induced by $\vert \mathcal {L}\vert$. Since $\mathcal {L}$ has type $(6;2,2,2,3,3)$, we have $\mathcal {L}\cdot \mathcal {L} = 36 -4-4-4-9-9=6$ and $\mathcal {L}\cdot \omega _U^\vee = \mathcal {L}\cdot \mathcal {O}_U(1) = 18 -2-2-2-3-3 =6$. Riemann-Roch gives $\chi (\mathcal {L}) =1 +(6+6)/2 =7$. Since $\mathcal {L}$ is spanned, Serre duality gives $h^2(\mathcal {L})=0$. Hence $r\ge 6$. Since $\alpha (U)$ spans $\mathbb {P}^r$, we have $\deg (\alpha (U)) \ge r-1\ge 5$. Since $\mathcal {L}\cdot \mathcal {L} >0$, $\alpha (U)$ is a surface. Since $\deg (\alpha )\cdot \deg (\alpha (U)) = \mathcal {L}\cdot \mathcal {L} = 6$, we get $\deg (\alpha (U)) =6$ and $\deg (\alpha ) =1$, i.e. $\alpha$ is birational onto its image. \end{remark} \begin{lemma}\label{a2} If the pair $(O_1,O_2)$ is general in $U\times U$, then $\mathcal {I}_{E\cup \{O_1,O_2\},U}(3)$ is spanned. \end{lemma} \begin{proof} Take $\mathcal {L}$, $\alpha$ and $\mathbb {P}^r$ as in Remark \ref{a1.1}. Lemma \ref{a2} just says that $\{O_1,O_2\}$ is the scheme-theoretic base locus, $\beta$, of the linear system $\vert \mathcal {I}_{\{O_1,O_2\}}\otimes \mathcal {L}\vert$ on $U$. Since $\alpha$ is birational onto its image and $O_1,O_2$ are general, we have $\alpha (O_1) \ne \alpha (O_2)$ and $\alpha ^{-1}(\alpha (O_i)) =O_i$ as schemes. In characteristic zero a general codimension $2$ section of the non-degenerate surface $\alpha (U)$ is in linearly general position (\cite{acgh}, pages 112--113). Since the pair $(\alpha (O_1),\alpha (O_2))$ is general in $\alpha (U)\times \alpha (U)$, we get that $\{\alpha (O_1),\alpha (O_2)\}$ is the scheme-theoretic intersection of $\alpha (U)$ with the line of $\mathbb {P}^r$ spanned by $\alpha (O_1)$ and $\alpha (O_2)$. Hence $\beta = \{O_1,O_2\}$. 
\end{proof} Fix $E\subset U \subset Q$ as above and a general $(O_1,O_2)\in U\times U$. The union of all lines containing $O_i$ and contained in $Q$ is the quadric cone $(T_{O_i}Q)\cap Q$. Hence there is a line $D_i\subset Q$ containing $O_i$ and intersecting $E$. For general $(O_1,O_2)$ we may assume $D_1\cap D_2 =\emptyset$ and that $D_i$ intersects $E$ quasi-transversally and only at the point $A_i:= D_i\cap E$. Hence $Y:= E\cup D_1\cup D_2$ is a nodal and connected curve of degree $8$ inside $Q$ with $p_a(Y)=1$. The reduced curve $Y$ is locally a complete intersection and hence its normal sheaf $\enm{\cal{N}}_{Y}$ is a rank 2 vector bundle of degree $\deg (TQ\vert_Y) = 24$. Since $Y$ has nodes only at $A_1$ and $A_2$, the vector bundle $\enm{\cal{N}}_Y\vert_E$ is obtained from $\enm{\cal{N}}_E$ by making two positive elementary transformations. Since $h^1(E,\enm{\cal{N}}_E)=0$, we have $h^1(E,\enm{\cal{N}}_Y\vert_E)=0$. Each normal bundle $\enm{\cal{N}}_{D_i}$ is a direct sum of a degree $0$ line bundle and a degree $1$ line bundle. Hence every vector bundle $\enm{\cal{F}}$ on $D_i$ obtained from $\enm{\cal{N}}_{D_i}$ by making one negative elementary transformation has no factor of degree $\le -2$. Hence $h^1(D_i,\enm{\cal{F}})=0$. Hence $h^1(D_1\cup D_2,\enm{\cal{R}})=0$, where $\enm{\cal{R}}$ is the vector bundle obtained from $\enm{\cal{N}}_{D_1\cup D_2}$ by making the two negative elementary transformations at $A_1$ and $A_2$ associated to the tangent lines of $E$ at these points. Hence $h^1(Y,\enm{\cal{N}}_Y)=0$ and $Y$ is smoothable inside $Q$ (use \cite{hh}, Theorem 4.1, for $Q$ instead of the smooth 3-fold $\mathbb {P}^3$). We get that the nearby smooth curves, $C$, have $h^1(\enm{\cal{N}}_C)=0$ and that they form a $24$-dimensional family which is smooth at $C$. By semicontinuity the general such $C$ also has $h^1(\mathcal {I}_C(3))=0$. 
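The dimension of this family can be made explicit: since $C$ is a smooth elliptic curve, the rank $2$ bundle $\enm{\cal{N}}_C$ has degree $c_1(TQ)\cdot C = 3\deg (C) = 24$, and the Riemann-Roch theorem on $C$ gives
$$h^0(C,\enm{\cal{N}}_C) = \chi (\enm{\cal{N}}_C) = \deg (\enm{\cal{N}}_C) = 24,$$
since $h^1(C,\enm{\cal{N}}_C)=0$ and the term $\mathrm{rk}(\enm{\cal{N}}_C)(1-g)$ vanishes for $g=1$.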
Since the condition ``$\mathcal {I}_{C_\lambda }(3)$ is spanned'' is an open condition in a flat family $\{C_\lambda \}$ with constant $h^1(\mathcal {I}_{C_\lambda }(3))$, to find a degree $8$ smooth elliptic curve $C\subset Q$ with $\mathcal {I}_C(3)$ spanned (and hence to complete the case $c_1=3$, $c_2=8$), it is sufficient to prove that $\mathcal {I}_Y(3)$ is spanned. \begin{lemma}\label{a3} The sheaf $\mathcal {I}_Y(3)$ is spanned. \end{lemma} \begin{proof} Let $\mathcal {B}$ denote the scheme-theoretic base-locus of the linear system $\vert \mathcal {I}_Y(3)\vert$ on $Q$. We need to prove that $\mathcal {B} = Y$ as schemes. Since $Y\cap U = E\cup \{O_1,O_2\}$ as schemes and $h^1(\mathcal {I}_U(3)) =h^1(\mathcal {O}_{Q}(1))=0$, $\mathcal {B} \vert_U$ is the scheme-theoretic base locus of the linear system $\vert \mathcal {I}_{\{O_1,O_2\}}\otimes \mathcal {L}\vert $ on $U$. Lemma \ref{a2} gives $\mathcal {B}\vert_U = E\cup \{O_1,O_2\}$ as schemes. Let $H\subset \mathbb {P}^4$ be the hyperplane spanned by the lines $D_1$ and $D_2$. For general $O_1, O_2$ we may assume that $Q_2:= Q\cap H$ is a smooth quadric surface. Since $U\cup Q_2\in \vert \mathcal {I}_Y(3)\vert$, $h^1(\mathcal {I}_{Q_2}(3)) = h^1(\mathcal {O}_{Q}(2)) =0$ and $\mathcal {B}\vert_U = E\cup \{O_1,O_2\}$, to prove the lemma it is sufficient to prove that $D_1\cup D_2\cup (E\cap Q_2)$ is the scheme-theoretic base locus of the linear system $\vert \mathcal {I}_{D_1\cup D_2\cup (E\cap Q_2),Q_2}(3)\vert$ on $Q_2$. We call $(1,0)$ the system of lines of $Q_2$ containing $D_1$ and $D_2$. Since $Y$ is nodal, $D_1\cup D_2\cup (E\cap Q_2) = D_1\cup D_2\cup Z$ with $\deg (Z) =4$ and $E\cap H = \{A_1,A_2\}\cup Z$. Since $\mathcal {I}_{D_1\cup D_2,Q_2}(3) \cong \mathcal {O}_{Q_2}(1,3)$, it is sufficient to check that $Z$ is not contained in a line. Since $\mathcal {I}_E(3)$ is spanned, no line contains a degree $4$ subscheme of $E$. \end{proof} \begin{lemma}\label{a4} Let $Y = E\cup D_1\cup D_2$ be as above. 
For general $Y$ we have $h^1(\mathcal {I}_Y(3))=0$. \end{lemma} \begin{proof} Let $M\subset \mathbb {P}^4$ be the hyperplane spanned by the lines $D_1$ and $D_2$. Set $Q':= Q\cap M$. Look at the Castelnuovo exact sequence \begin{equation}\label{eqc1} 0 \to \mathcal {I}_E(2) \to \mathcal {I}_Y(3) \to \mathcal {I}_{D_1\cup D_2\cup (E\cap Q'),Q'}(3) \to 0 \end{equation} Since $h^1(\mathcal {I}_E(2))=0$ (Remark \ref{a1}), the exact sequence (\ref{eqc1}) shows that it is sufficient to prove that $h^1(Q',\mathcal {I}_{D_1\cup D_2\cup (E\cap Q'),Q'}(3))=0$. Since the integral quadric surface $Q'$ contains the disjoint lines $D_1$ and $D_2$, $Q'$ is a smooth quadric surface. Call $(1,0)$ the ruling of $Q'$ containing $D_1$ and $D_2$. For general $D_1$ and $D_2$ the hyperplane $M$ is not tangent to $E$ either at $A_1$ or at $A_2$. Hence the scheme $E\cap Q'$ is the disjoint union of $\{A_1,A_2\}$ (with its reduced structure) and a degree $4$ scheme $Z$. Since $D_i\cap E = \{A_i\}$, we have $Z\cap (D_1\cup D_2)=\emptyset$. Hence it is sufficient to prove $h^1(Q',\mathcal {I}_{Z,Q'}(1,3)) =0$. Since $\deg (Z) = 4$, it is easy to check that $h^1(Q',\mathcal {I}_{Z,Q'}(1,3))>0$ if and only if there is a scheme $B\subset Z$ with $\deg (B)=3$ and $B$ contained in a line of type $(0,1)$ on $Q'$. To exclude the existence of such a scheme $B$ it is sufficient to find $E$ without a two-dimensional family of trisecant lines (move $D_1$ and $D_2$). \end{proof} \vspace{0.3cm} \quad {\emph {Proof of Proposition \ref{a00}.}} By the Serre correspondence it is sufficient to find a smooth elliptic curve $C\subset Q$ such that $\mathcal {I}_C(3)$ is spanned, $h^1(\mathcal {I}_C(t))=0$ for all $t\ge 3$ and $h^0(\mathcal {I}_C(2)) =0$ (indeed, the last condition implies $h^0(\mathcal {I}_C(t))=0$ for all $t \le 1$ and hence $h^1(\mathcal {I}_C(2)) =2$, $h^1(\mathcal {I}_C(1))=3$ and $h^1(\mathcal {I}_C(t))=0$ for all $t\le 0$ by the Riemann-Roch theorem). 
By semicontinuity it is sufficient to find $Y$ with the same properties. There is $Y$ with $\mathcal {I}_Y(3)$ spanned (Lemma \ref{a3}) and with $h^1(\mathcal {I}_Y(3))=0$ (Lemma \ref{a4}). Castelnuovo-Mumford's lemma implies $h^1(\mathcal {I}_Y(t))=0$ for all $t\ge 4$. Since $Y =E\cup D_1\cup D_2$ with $h^0(\mathcal {I}_E(2))=0$ (Remark \ref{a1}), we have $h^0(\mathcal {I}_Y(2))=0$.\qed \begin{remark} For a vector bundle $\enm{\cal{E}}$ in $\mathfrak{M}(3,8)$ that fits into the sequence: $$0\rightarrow \enm{\cal{O}}_Q(1) \rightarrow \enm{\cal{E}} \rightarrow \enm{\cal{I}}_Z(2)\rightarrow 0,$$ where $Z$ is the disjoint union of 4 lines, we can easily compute that $h^2(\mathcal{E}nd(\enm{\cal{E}}))=0$. This implies that $\enm{\cal{E}}$ is a smooth point of $\mathfrak{M}(3,8)$, and the dimension of $\mathfrak{M}(3,8)$ is 18 since such bundles form a 15-dimensional subvariety of $\mathfrak{M}(3,8)$ and $\chi(\mathcal{E}nd (\enm{\cal{E}}))=-17$. \end{remark} \section{Case $(c_1, c_2)=(3,9)$} Assume that $c_2(\enm{\cal{E}})=9$. Again let $\enm{\cal{F}}=\enm{\cal{E}}(-2)$. We can compute:\\ $$ \left\{ \begin{array}{ll} H^1(\enm{\cal{F}}(t))=0 \text{ for }t\leq -2\\ \chi(\enm{\cal{F}}\otimes \enm{\cal{F}}^{\vee})=-23\\ \chi(\enm{\cal{F}})=-\chi (\enm{\cal{F}}(-2))=-4\\ \chi(\enm{\cal{F}}(1))=-\chi(\enm{\cal{F}}(-3))=-4\\ \chi(\enm{\cal{F}}(-1))=0\\ \chi(\enm{\cal{F}}(2))=4 \end{array} \right. $$ Then the cohomology table for $\enm{\cal{F}}=\enm{\cal{E}}(-2)$ is as follows: \begin{equation}\label{t2}\begin{tabular}{|c|c|c|c|c|c|} \hline 0 & 0 & 0 & 0 & 0 & 0 \\ 4 & 4 & c & 0 & 0 & 0 \\ 0 & 0 & c & 4 & 4 & $b$ \\ 0 & 0 & 0 & 0 & 0 & $a$ \\ \hline \hline -3& -2& -1 & 0 & 1 & 2 \\ \hline \end{tabular}\end{equation} with $a-b=4$. If $\{k_1\leq \cdots \leq k_4\}$ is the spectrum of $\enm{\cal{F}}$, then it is either $\{-2,-1,-1,0\}$ or $\{ -1,-1,-1,-1\}$. This implies that $h^1(\enm{\cal{E}}(-3))=0$ or $1$, i.e. $c=0$ or $1$ in the table. 
Thus $h^0(\enm{\cal{O}}_C)$ is either 1 or 2. Since $\omega_C\simeq \enm{\cal{O}}_C$, $C$ is either an irreducible elliptic curve of degree 9 or the disjoint union of two irreducible elliptic curves. Note that $C$ cannot have a plane cubic curve as its component. \begin{lemma} $C$ is an irreducible smooth elliptic curve of degree 9. \end{lemma} \begin{proof} Assume that $C=C_1\sqcup C_2$ with $C_i$ smooth elliptic, $\deg (C_1) =4$ and $\deg (C_2) = 5$. The curve $C_1$ is contained in a hyperplane section $J$ of $Q$, which may be a cone. Even if $J$ is not smooth, $C_1$ is a complete intersection of $J$ with a quadric surface in the linear span $\langle J\rangle \cong \mathbb {P}^3$. So in $J\cap (C_1\cup C_2)$ we have $C_1$ and a degree $5$ scheme $J\cap C_2$; the only possibility to have $\mathcal {I}_{C_1\cup C_2}(3)$ spanned is that $\mathcal {I}_{J\cap C_2,J}(1)$ is spanned, which is absurd. \end{proof} In particular, we have $c=0$ in the cohomology table of $\enm{\cal{F}}$. As in Proposition \ref{monad}, we can show that $\enm{\cal{E}}$ is the cohomology of a monad: $$0\rightarrow \enm{\cal{O}}_Q(1)^{\oplus 4} \rightarrow \Sigma(1)^{\oplus 5} \rightarrow \enm{\cal{O}}_Q(2)^{\oplus 4} \rightarrow 0.$$ Now we will prove the existence of a smooth elliptic curve $C\subset Q$ such that $\deg (C) = 9$ and $h^1(\mathcal {I}_C(3))=0$. Since $3\cdot 9 = 27 = h^0(\mathcal {O}_{Q}(3))-3$, we have $h^1(\mathcal {I}_C(3))=0$ if and only if $h^0(\mathcal {I}_C(3))=3$. The latter condition obviously implies $h^0(\mathcal {I}_C(2)) =0$. Hence proving the existence of $C$ also proves the existence of a rank two vector bundle $\enm{\cal{E}}$ on $Q$ with $(c_1, c_2)=(3,9)$, $h^0(\enm{\cal{E}})=4$, $h^1(\enm{\cal{E}})=0$, $h^0(\enm{\cal{E}}(-1))=0$ (and hence $\enm{\cal{E}}$ is stable). \begin{lemma}\label{b1} There is a smooth elliptic curve $A\subset Q$ such that $\deg (A) =7$ and $h^0(\mathcal {I}_A(2))=0$. 
\end{lemma} \begin{proof} We start with a smooth hyperplane section $Q_2$ of $Q$ and a smooth elliptic curve $B \subset Q_2$ of type $(2,2)$. Fix $2$ general points $P_1,P_2\in B$. The union $U(P_i)$ of all lines in $Q$ passing through $P_i$ is the quadric cone $T_{P_i}(Q)\cap Q$ of the tangent space $T_{P_i}(Q) \cong \mathbb {P}^3$ of $Q$ at $P_i$. Hence we may find a line $D_1\subset Q$ such that $P_1\in D_1$ and a smooth conic $D_2 \subset Q$ such that $D_2\cap Q_2$ is $P_2$ and a point not in $B$. For general $D_i$ we may also assume $D_1\cap D_2=\emptyset$ and $D_i\nsubseteq Q_2$. For general $D_1, D_2$ we may also assume that no hyperplane section of $Q$ contains $D_1\cup D_2$. Set $Y:= B\cup D_1\cup D_2$. Since $Q_2$ is a hyperplane section of $Q$, $D_i\nsubseteq Q_2$ and $B\subset Q_2$, we have $D_i\cap B = \{P_i\}$, and for general choices $D_i$ is not tangent to $B$ at $P_i$. Hence $Y$ is a nodal connected curve of degree $7$ and arithmetic genus $1$. The reduced curve $Y$ is locally a complete intersection and hence its normal sheaf $\enm{\cal{N}}_{Y}$ is a rank 2 vector bundle of degree $\deg (TQ\vert_Y) = 21$. Since $Y = B\cup (D_1\cup D_2)$ has nodes only at $P_1,P_2$, the vector bundle $\enm{\cal{N}}_Y\vert_B$ is obtained from $\enm{\cal{N}}_B$ by making $2$ positive elementary transformations, each of them at one of the points $P_i$. Since $\enm{\cal{N}}_B$ is a direct sum of a line bundle of degree $4$ and a line bundle of degree $8$, we have $h^1(B,\enm{\cal{N}}_B)=0$. Hence $h^1(B,\enm{\cal{N}}_Y\vert_B)=0$. The normal bundle $\enm{\cal{N}}_{D_1}$ is a direct sum of a degree $0$ line bundle and a degree $1$ line bundle. The normal bundle $\enm{\cal{N}}_{D_2}$ is a direct sum of two line bundles of degree $2$. Hence every vector bundle $\enm{\cal{F}}$ on $D_i$ obtained from $\enm{\cal{N}}_{D_i}$ by making one negative elementary transformation at $P_i$ has no factor of degree $\le -2$. 
Hence $h^1(D_i,\enm{\cal{F}})=0$. Hence $h^1(D_1\cup D_2,\enm{\cal{G}})=0$, where $\enm{\cal{G}}$ is the vector bundle obtained from $\enm{\cal{N}}_{D_1\cup D_2}$ by making the $2$ negative elementary transformations at $P_1$ and $P_2$ associated to the tangent lines of $B$ at these points. Hence $h^1(Y,\enm{\cal{N}}_Y)=0$ and $Y$ is smoothable inside $Q$ (use \cite{hh}, Theorem 4.1, for $Q$ instead of the smooth 3-fold $\mathbb {P}^3$). By semicontinuity, to prove Lemma \ref{b1} it is sufficient to prove $h^0(\mathcal {I}_Y(2))=0$. Assume $h^0(\mathcal {I}_Y(2))>0$ and take $\Delta \in \vert \mathcal {I}_Y(2)\vert$. Since $Q_2\cap Y$ contains $B$ and a point of $D_2\cap (Q_2\setminus B)$, we have $Q_2\subset \Delta$, i.e. $\Delta = Q_2\cup Q'$ for some hyperplane section $Q'$ of $Q$. Since neither $D_1$ nor $D_2$ is contained in $Q_2$, we get $D_1\cup D_2\subset Q'$, contradicting our choice of $D_1\cup D_2$. \end{proof} \begin{lemma}\label{b2} There is a smooth elliptic curve $C\subset Q$ such that $\deg (C) = 9$, $h^0(\mathcal {I}_C(3))=3$ and $h^1(\mathcal {I}_C(3))=0$. \end{lemma} \begin{proof} Let $C\subset Q$ be any smooth elliptic curve of degree $9$. Since $h^0(\mathcal {O}_C(3)) = 27 = h^0(\mathcal {O}_{Q}(3))-3$, we have $h^1(\mathcal {I}_C(3)) = h^0(\mathcal {I}_C(3))-3$. Hence it is sufficient to prove the existence of a smooth elliptic curve $C$ with $\deg (C) = 9$ and $h^0(\mathcal {I}_C(3))=3$. Let $A\subset Q$ be a smooth elliptic curve such that $\deg (A) =7$ and $h^0(\mathcal {I}_A(2))=0$ (Lemma \ref{b1}). Fix two general points $P_1, P_2\in A$. The union $U(P_i)$ of all lines in $Q$ passing through $P_i$ is the quadric cone $T_{P_i}(Q)\cap Q$ of the tangent space $T_{P_i}(Q)$ of $Q$ at $P_i$. Hence we may find lines $D_i\subset Q$, $i=1,2$, such that $P_i\in D_i$, $D_i$ is not the tangent line to $A$ at $P_i$, $D_1\cap D_2=\emptyset$ and $D_i\cap A = \{P_i\}$. 
Hence $Y:= A\cup D_1\cup D_2$ is a connected nodal curve of degree $9$ and arithmetic genus $1$. As in the proof of Lemma \ref{b1} we see that $Y$ is smoothable inside $Q$. Hence, by semicontinuity, to prove Lemma \ref{b2} it is sufficient to prove $h^0(\mathcal {I}_Y(3))=3$. Let $H\subset \mathbb {P}^4$ be the hyperplane spanned by $D_1\cup D_2$. Set $Q':= Q\cap H$. Since $Q'$ contains the disjoint lines $D_1, D_2$, $Q'$ is a smooth quadric surface. Call $(1,0)$ the ruling of $Q'$ containing $D_1$ and $D_2$. Fix general $O_1,O_2,O_3\in Q'$. Since $h^0(\mathcal {I}_Y(3)) \ge 3$ by the Riemann-Roch theorem, to prove $h^0(\mathcal {I}_Y(3))=3$ it is sufficient to prove $h^0(\mathcal {I}_{Y\cup \{O_1,O_2,O_3\}}(3)) =0$. Assume $h^0(\mathcal {I}_{Y\cup \{O_1,O_2,O_3\}}(3)) >0$ and take $\Delta \in \vert \mathcal {I}_{Y\cup \{O_1,O_2,O_3\}}(3)\vert$. For general $D_1$ and $D_2$ we may also assume that $H$ is tangent to $A$ neither at $P_1$ nor at $P_2$. Hence the scheme $A\cap Q'$ is the disjoint union of $P_1$, $P_2$, and a degree $5$ scheme, $Z$, such that $Z\cap (D_1\cup D_2)=\emptyset$. First assume $Q'\subset \Delta$. Then $\Delta = Q'\cup T$ with $T$ a quadric hypersurface section of $Q$. Since $Q'\cap A$ is a finite set, we get $A\subset T$. Hence $h^0(\mathcal {I}_A(2)) >0$, a contradiction. Hence $\Delta' := \Delta \cap Q'$ is a divisor of type $(3,3)$ of $Q'$ containing $D_1\cup D_2$. Set $J:= \Delta ' -D_1-D_2$. $J$ is a divisor of type $(1,3)$ on $Q'$ containing $Z$, $O_1$, $O_2$ and $O_3$. Since the points $O_i$ are general in $Q'$, to get a contradiction (and hence to prove the lemma) it is sufficient to prove $h^0(Q',\mathcal {I}_Z(1,3)) =3$, i.e. $h^1(\mathcal {I}_Z(1,3))=0$. Fix a smooth $D\in \vert \mathcal {O}_{Q'}(1,1)\vert$ and take a general hyperplane section $T$ of $Q$ with $T\cap Q' = D$. $T$ is a smooth quadric surface. 
We may specialize $A$ to a curve $Y':= A'\cup L_1\cup L_2\cup L_3$ with $A'$ a smooth curve of type $(2,2)$, each $L_i$ a line intersecting $A'$ transversally and $P_i\in L_i$, $i=1,2$. For general $Y'$ we have $Y'\cap Q' = Z'\cup \{P_1,P_2\}$ with $\sharp (Z'\cap D) =4$. By semicontinuity it is sufficient to prove $h^1(Q',\mathcal {I}_{Z'}(1,3)) =0$. Since $\sharp (Z'\setminus Z'\cap D)=1$, we have $h^1(Q',\mathcal {I}_{Z'\setminus Z'\cap D,Q'}(0,2))=0$. $D$ is a smooth rational curve and $\deg (\mathcal {O}_D(1,3))=4$. Hence $h^1(D,\mathcal {I}_{D\cap Z',D}(1,3))=0$. From the exact sequence on $Q'$: $$0\to \mathcal {I}_{Z'\setminus Z'\cap D,Q'}(0,2) \to \mathcal {I}_{Z',Q'}(1,3) \to \mathcal {I}_{D\cap Z',D}(1,3)\to 0$$ we get $h^1(Q',\mathcal {I}_{Z'}(1,3)) =0$, concluding the proof. \end{proof} Let us start with the smooth elliptic curve $C$ given by Lemma \ref{b2}; hence $h^1(\mathcal {I}_C(3))=0$ and $h^0(\mathcal {I}_C(3))=3$. Since $h^0(\enm{\cal{I}} _C(3)) \ge 2$, there are at least two linearly independent degree $3$ hypersurface sections $M_1, M_2$ of $Q$ containing $C$. Since $h^0(\enm{\cal{I}} _C(2))=0$, the scheme-theoretic intersection $X:= M_1\cap M_2$ has dimension $1$, i.e. it is a complete intersection. Let $Y$ be the curve linked to $C$ by $X$, so that $X = C\cup Y$. By the liaison sequence induced by $X$, we have \begin{equation}\label{eqa2} 0\to \enm{\cal{I}}_X(3) \to \enm{\cal{I}}_C(3) \to \omega _Y \to 0. \end{equation} We need to prove that $\omega _Y$ is spanned. Since $X$ is a complete intersection of $M_1$ and $M_2$, we have $h^0(\enm{\cal{I}} _X(3))=2$ and $h^1(\enm{\cal{I}} _X(3))=0$. Since $h^0(\enm{\cal{I}} _C(3)) =3$, the sequence (\ref{eqa2}) gives $h^0(\omega _Y)=1$. We have $h^2(\enm{\cal{I}} _C(3)) = h^1(\enm{\cal{O}} _C(3))=0$ and $h^2(\enm{\cal{I}} _X(3)) = h^1(\enm{\cal{O}}_X(3)) =1$ since $\omega _X \cong \enm{\cal{O}} _X(3)$ by the adjunction formula. Since $h^1(\enm{\cal{I}} _C(3)) =0$ we get $h^1(\omega _Y)=1$. Hence $h^0(\mathcal {O}_Y)=1$ by the duality of locally Cohen-Macaulay projective schemes. 
Since $p_a(Y)=1$, to get that $\omega _Y$ is trivial, and hence that $\omega _Y$ is spanned, it is sufficient to prove that (at least for certain $C$) it is spanned outside finitely many points. Since $\omega _Y$ is a quotient of $\mathcal {I}_C(3)$, it is spanned at all points at which $\mathcal {I}_C(3)$ is spanned. Hence it is sufficient to find $C$ with the additional condition that $\mathcal {I}_C(3)$ is spanned outside finitely many points. This is Lemma \ref{b3} below. \begin{lemma}\label{b3} There is a smooth elliptic curve $C\subset Q$ such that $\deg (C) = 9$, $h^0(\mathcal {I}_C(3))=3$, $h^1(\mathcal {I}_C(3))=0$ and such that $\mathcal {I}_C(3)$ is spanned outside finitely many points. \end{lemma} \begin{proof} By semicontinuity it is sufficient to find $Y = B\cup D_1\cup D_2$ as in the proof of Lemma \ref{b2} with the additional property that the base locus of $\mathcal {I}_Y(3)$ is finite. Fix $B$ satisfying the conclusion of Lemma \ref{b1}. Let $H\subset \mathbb {P}^4$ be a general hyperplane. By the Uniform Position Principle (\cite{acgh}, pp. 112--113) the scheme $B\cap H$ is formed by $7$ points in uniform position and we call $A_1, A_2$ two of them. Moreover, the monodromy of the generic hyperplane section is the full symmetric group (\cite{acgh}, p. 112). Hence for general $H$ we may assume that no two of the points of $B\cap H$ lie on a line contained in $Q\cap H$. Set $Q':= Q\cap H$. For general $H$ the scheme $Q'$ is a smooth quadric surface. Fix one of the systems of lines of $Q'$, say $(1,0)$, and call $D_i$ the line of type $(1,0)$ of $Q'$ containing $A_i$. Set $S:= B\cap Q'\setminus \{A_1,A_2\}$. Notice that $\sharp (S)=5$ and $S\cap D_i=\emptyset$. Set $Y:= B\cup D_1\cup D_2$. The proof of Lemma \ref{b2} gives $h^1(\mathcal {I}_Y(3))=0$ and $h^0(\mathcal {I}_Y(3))=3$. Let $\mathcal {B}$ denote the base locus of $\mathcal {I}_Y(3)$. It is sufficient to prove that $\mathcal {B}\cap Q'=\emptyset$. 
Since $h^i(\mathcal {I}_B(2))=0$, $i=0,1$, we saw in the proof of Lemma \ref{b2} that the restriction of $\vert \mathcal {I}_Y(3)\vert$ to $Q'$ is given by all divisors $D_1\cup D_2\cup T$ with $T\in \vert \mathcal {I}_S(1,3)\vert$ and that $h^0(Q',\mathcal {I}_S(1,3))=3$. Since $S$ is in uniform position, $T$ is a general element of $\vert \mathcal {I}_S(1,3)\vert$. Fix two general $T_1, T_2\in \vert \mathcal {I}_S(1,3)\vert$. Since $T_1$ is irreducible, the scheme $T_1\cap T_2$ is zero-dimensional. We have $\deg (T_1\cap T_2) = 3+3 =6$. Since $S\subset T_1\cap T_2$ and $h^0(Q',\mathcal {I}_S(1,3))=3$, we get that $\mathcal {I}_S(1,3)$ is a spanned sheaf on $Q'$. Since $S\cap (D_1\cup D_2)=\emptyset$, we also get that the scheme $Y\cap Q'$ is the intersection with $Q'$ of all elements of $\vert \mathcal {I}_Y(3)\vert$. Hence $\mathcal {B}\cap Q'=\emptyset$. Hence $\mathcal {B}$ is supported by finitely many points. \end{proof} \begin{remark} As in the case $c_2=8$, for a vector bundle $\enm{\cal{E}} \in \mathfrak{M} (3, 9)$ that fits into the sequence: $$0\rightarrow \enm{\cal{O}}_Q(1) \rightarrow \enm{\cal{E}} \rightarrow \enm{\cal{I}}_Z(2)\rightarrow 0,$$ where $Z$ is the disjoint union of 5 lines, we can easily compute that $h^2(\mathcal{E}nd(\enm{\cal{E}}))=0$. This implies that $\enm{\cal{E}}$ is a smooth point of $\mathfrak{M}(3,9)$, and the dimension of $\mathfrak{M}(3,9)$ is 24 since such bundles form a 19-dimensional subvariety of $\mathfrak{M}(3,9)$ and $\chi(\mathcal{E}nd(\enm{\cal{E}}))=-23$. \end{remark} As an immediate consequence of the classification, we observe that if $\enm{\cal{E}}$ is a rank two globally generated vector bundle on $Q$ with $c_1=3$, then we have $c_1(\enm{\cal{E}}(-2))=-1$ and $H^1(\enm{\cal{E}}(-3))=0$. Thus we have the following: \begin{corollary} Every rank two globally generated vector bundle on $Q$ with $c_1=3$ is an odd instanton (see \cite{faenzi}) up to twist. 
\end{corollary} \section{On higher dimensional quadrics} We denote by $Q_n$ the smooth quadric hypersurface of dimension $n>3$. \begin{theorem}\label{aBonQn} The only indecomposable and globally generated vector bundles $\enm{\cal{F}}$ of rank $2$ on $Q_n$, $n \ge 4$, with $c_1(\enm{\cal{F}})\le3$ are the following: \begin{enumerate} \item for $n\ge6$, no such bundle exists, \item for $n=5$, $\enm{\cal{F}}(-2)$ is a Cayley bundle, i.e.\ $\enm{\cal{F}}(-2)$ is a bundle with $c_1=-1$, $c_2=2$, \item for $n=4$, either $\enm{\cal{F}}(-2)$ is a spinor bundle or it has $c_1=-1$, $c_2=(1,1)$, i.e.\ $\enm{\cal{F}}(-2)$ is the restriction of a Cayley bundle to $Q_4$. \end{enumerate} \end{theorem} \begin{proof} We have to study which vector bundles of Theorem \ref{mt} extend to globally generated vector bundles on $Q_n$: \quad{(a)} If $\enm{\cal{E}}$ is a spinor bundle on $Q_3$, then $\enm{\cal{E}}$ extends to a spinor bundle $\Sigma_1$ or $\Sigma_2$ on $Q_4$, but does not extend to $Q_n$ for $n\ge5$ (see \cite{O1}, Theorem 2.1). \quad{(b)} If $\enm{\cal{E}}$ is a pullback of a null-correlation bundle on $\mathbb {P}^3$ twisted by $1$, the zero locus of a general section is the disjoint union of two conics. Assume that $\enm{\cal{F}}$ is a globally generated extension of $\enm{\cal{E}}$ to $Q_4$. Then the zero locus of a general section must be the disjoint union of two quadric surfaces. We recall that a quadric surface $S\subset Q_4$ is the complete intersection of two hyperplane sections of $Q_4$. Hence any two quadric surfaces in $Q_4$ meet, so we get a contradiction. \quad{(c)} If $\enm{\cal{E}}$ is stable with $c_1=-1$, $c_2=2$, then $\enm{\cal{E}}$ extends to a vector bundle on $Q_4$ with Chern classes $c_1=-1$, $c_2=(1,1)$, and even to one on $Q_5$ with $c_1=-1$, $c_2=2$ (Cayley bundles), but no further to $Q_n$, $n\ge6$ (see \cite{Ott}, Theorem 3.2). 
\quad{(d)} Let $\enm{\cal{E}}$ be a globally generated vector bundle of rank $2$ on $Q$ with $c_1(\enm{\cal{E}})=3$ and $c_2(\enm{\cal{E}})=7$. Then it does not extend to $Q_4$ by \cite{bmvv}, Theorem 4.3. \quad{(e)} Let $\enm{\cal{F}}$ be a globally generated vector bundle of rank $2$ on $Q_4$ such that the restriction $\enm{\cal{E}}=\enm{\cal{F}}|_H$ to a general hyperplane section $H$ is a globally generated vector bundle of rank $2$ on $Q_3$ with $c_1(\enm{\cal{E}})=3$ and $c_2(\enm{\cal{E}})=8$. Then the zero locus of a general global section of $\enm{\cal{F}}$ is a smooth surface $S$ of degree $8$ and we have the exact sequence $$0 \to \enm{\cal{O}}_{Q_4} \to \enm{\cal{F}} \to \enm{\cal{I}}_S(3) \to 0.$$ Since $\det(\enm{\cal{F}})\cong\enm{\cal{O}}_{Q_4}(3)$ and $\omega_{Q_4}\cong\enm{\cal{O}}_{Q_4}(-4)$, the adjunction formula gives $\omega_S\cong \enm{\cal{O}}_S(-1)$. Thus $S$ is an anticanonically embedded del Pezzo surface. Using Riemann-Roch on the surface $S$ we obtain $h^0(S,\enm{\cal{O}}_S(1))=\chi(\enm{\cal{O}}_S(1)) = (\enm{\cal{O}}_S(1)\cdot\enm{\cal{O}}_S(2))/2 + \chi(\enm{\cal{O}}_S) = 8 + 1 = 9$. On the other hand, using the structure sequence $0 \to \enm{\cal{I}}_S(1) \to \enm{\cal{O}}_{Q_4}(1) \to \enm{\cal{O}}_S(1) \to 0$, we have $h^1(\enm{\cal{I}}_S(1))=h^1(\enm{\cal{F}}(-2)) = 3$. Since $h^1(\enm{\cal{O}}_S(1))=0$ we also get $h^2(\enm{\cal{F}}(-2))=0.$ Now let us consider the sequence $$ 0 \to \enm{\cal{F}}(-1) \to \enm{\cal{F}} \to \enm{\cal{E}} \to 0.$$ From the cohomology table for $\enm{\cal{E}}(-2)$ (see (\ref{t1})) we get $h^1(\enm{\cal{F}}(t))=0$ for any $t\leq -3$, and from the sequence in cohomology $$0= H^1(\enm{\cal{F}}(-3))\to H^1(\enm{\cal{F}}(-2))\to H^1(\enm{\cal{E}}(-2))=\enm{\mathbb{C}}^{\oplus 3}\to H^2(\enm{\cal{F}}(-3))\to 0,$$ we also get $H^2(\enm{\cal{F}}(-3))=0$. For any $t\geq -3$ we have the sequence $$H^2(\enm{\cal{F}}(t))\to H^2(\enm{\cal{F}}(t+1))\to 0$$ which implies $H^2(\enm{\cal{F}}(t+1))=0$. 
For any $t\leq -3$ we have the sequence $$0\to H^2(\enm{\cal{F}}(t-1))\to H^2(\enm{\cal{F}}(t))$$ which implies $H^2(\enm{\cal{F}}(t-1))=0$. Then we can conclude that $H^2_*(\enm{\cal{F}})=0$, which is a contradiction to the classification of rank two vector bundles without $H^2_*$ given in \cite{ma}. \quad{(f)} Let $\enm{\cal{F}}$ be a globally generated vector bundle of rank $2$ on $Q_4$ such that the restriction $\enm{\cal{E}}=\enm{\cal{F}}|_H$ to a general hyperplane section $H$ is a globally generated vector bundle of rank $2$ on $Q_3$ with $c_1(\enm{\cal{E}})=3$ and $c_2(\enm{\cal{E}})=9$. Then the zero locus of a general global section of $\enm{\cal{F}}$ is a smooth surface $S$ of degree $9$ and we have the exact sequence $$0 \to \enm{\cal{O}}_{Q_4} \to \enm{\cal{F}} \to \enm{\cal{I}}_S(3) \to 0.$$ Since $\det(\enm{\cal{F}})\cong\enm{\cal{O}}_{Q_4}(3)$ and $\omega_{Q_4}\cong \enm{\cal{O}}_{Q_4}(-4)$, the adjunction formula gives $\omega_S\cong \enm{\cal{O}}_S(-1)$. Thus $S$ is an anticanonically embedded del Pezzo surface. Using Riemann-Roch on the surface $S$ we obtain $h^0(S,\enm{\cal{O}}_S(1))=\chi(\enm{\cal{O}}_S(1)) = (\enm{\cal{O}}_S(1)\cdot\enm{\cal{O}}_S(2))/2 + \chi(\enm{\cal{O}}_S) = 9 + 1 = 10$. On the other hand, using the structure sequence $0 \to \enm{\cal{I}}_S(1) \to \enm{\cal{O}}_{Q_4}(1) \to \enm{\cal{O}}_S(1) \to 0$, we have $h^1(\enm{\cal{I}}_S(1))=h^1(\enm{\cal{F}}(-2)) = 4$. Since $h^1(\enm{\cal{O}}_S(1))=0$ we also get $h^2(\enm{\cal{F}}(-2))=0.$ Now let us consider the sequence $$ 0 \to \enm{\cal{F}}(-1) \to \enm{\cal{F}} \to \enm{\cal{E}} \to 0.$$ From the cohomology table for $\enm{\cal{E}}(-2)$ (see (\ref{t2})) we get $h^1(\enm{\cal{F}}(t))=0$ for any $t\leq -3$, and from the sequence in cohomology $$0= H^1(\enm{\cal{F}}(-3))\to H^1(\enm{\cal{F}}(-2))\to H^1(\enm{\cal{E}}(-2))=\enm{\mathbb{C}}^{\oplus 4} \to H^2(\enm{\cal{F}}(-3))\to 0,$$ we also get $H^2(\enm{\cal{F}}(-3))=0$. 
For any $t\geq -3$ we have the sequence $$H^2(\enm{\cal{F}}(t))\to H^2(\enm{\cal{F}}(t+1))\to 0$$ which implies $H^2(\enm{\cal{F}}(t+1))=0$. For any $t\leq -3$ we have the sequence $$0\to H^2(\enm{\cal{F}}(t-1))\to H^2(\enm{\cal{F}}(t))$$ which implies $H^2(\enm{\cal{F}}(t-1))=0$. Then we can conclude that $H^2_*(\enm{\cal{F}})=0$, which contradicts the classification of rank two vector bundles without $H^2_*$ given in \cite{ma}. \end{proof}